text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
In this post we will see Clipboard API support in Windows Phone. Clipboard API support came as part of Windows Phone 7.5 and it was not part of Windows Phone 7. Class is defined in System.Windows namespace as below, You can set text as below, In above snippet txtSource is a text box. You can check whether clipboard has some data or not as below, One very important point you need to keep in mind that you cannot use GetText() method in Windows Phone. If you try to do that you will get security exception as below, For security reason access to clipboard data is prohibited in Windows Phone. In this way you can use Windows Phone Clipboard API. I hope this post is useful. Thanks for reading . Follow @debug_modeFollow @debug_mode 3 thoughts on “Clipboard API in Windows Phone”
https://debugmode.net/2012/01/28/clipboard-api-in-windows-phone/
CC-MAIN-2017-13
refinedweb
141
75.61
More Prototyping Tips Niall delves deeper into the benefits of prototyping a user interface on a PC. He shares pointers on graphics, gotchas, and event scheduling. In my last column (“User Interface Prototypes,” October 2002, p. 33) I showed you how to use Borland's C++ Builder (CPB) to develop a virtual user interface on a PC before committing to a hardware implementation. This month, we're going to look at some more situations in which this approach is useful. We'll also examine how to prototype graphical user interfaces. Custom LCDs When you order a custom LCD-the sort you get in a digital watch or calculator, with a few digits and some application specific icons-you have to get it right the first time. With lead times of several months to get tooled up, the product might miss its market window completely if you get the LCD wrong. You need to know if the icons work in practice long before you have the actual part. On one such project, I mocked up the LCD in CPB. Each segment was simply an image, an object that allows you to import a bitmap. For the digits, I made each of the seven segments of the digit into an image object.That way, the visibility of each segment can be controlled independently. In my project, the microcontroller had a built-in LCD controller that dedicated one bit to each segment of the display. We had 64 segments to control. The microcontroller performed some multiplexing, and some demultiplexing took place in the display to ensure that the number of I/O lines required was less than the total number of segments. The multiplexing and demultiplexing was transparent to the software, so, in the code, we mapped one bit to one segment. In the real target, once the correct bits were set, the hardware would look after the appearance of the appropriate segments. In the CPB environment, I defined the block of bits to be regular RAM: #define NUM_SEG_BYTES 8 #ifdef USING_CPBBYTE G_segments[NUM_SEG_BYTES];#else//Set pointer to LCD controller BYTE *G_segments = (BYTE *) 0x40;#endif /* USING_CPB */ I always define USING_CPB when building in the CPB environment, but not when building for the target. The #else of this piece of code causes G_segments to point into the registers that the microcontroller has dedicated to the LCD controller. Location 0x40 on the target is the start of the block of eight bytes. In the CPB code, I set a timer to regularly read the contents of G_segments and turn each image on or off according to the corresponding bit. The CPB program effectively imitates the LCD controller that is built into the final hardware. As long as the user interface software toggles the correct bits within that 8-byte block, the correct icons turn on, regardless of whether the software is running on a PC or on the target. This allowed us to develop most of the user interface software on a PC. When we wanted to try a different icon, we edited the bitmap associated with a particular bit. Often, no code changes were required. At the same time, we were experimenting with the user interactions and the key sequences that would cause certain icons or digits to appear and disappear. When the design of the LCD was finalized, the prototype remained a sufficient development environment to use for a few months while waiting for target hardware. Figure 1 shows this prototype running as a PC executable. Since the bits that control the LCD were so crucial to correct operation, I displayed them on the screen, for debugging purposes. Figure 1: A prototyping example Another debugging advantage of the CPB environment is that you can add controls for arbitrary variables. The slider for adjusting the global variable G_batteryLevel controls a value that would be set by reading an analog to digital converter (ADC) in the target environment. Controls such as these make it easy to exercise the user interface software to test the reaction to a decaying battery. These prototypes often have dozens of controls that simulate real world events. Sometimes you want to simulate those events to test that the user interface software is working correctly and other times you want to simulate them in order to get feedback on the usability of the interface. Event handling The examples I've used so far demonstrate how to simulate the output of the user interface. We also have to consider how to handle user input events, such as key presses. On the target, you might be scanning input lines that are connected to a keypad. This might happen in the mainline code or in an interrupt handler. In either case, you'll eventually get a key value, which requires application-level decisions to determine what actions result from that key. So there will be a function called handleKey(KeyId key) that maps a key to some resulting action. Any hardware issues, such as debouncing, should be resolved before this function is called. This key handler will be the common entry point for the target code and the CPB code. In CPB, all visible objects are associated with events. In the IDE, you can click on an object and then select its list of events. You can then name the function to call when an event occurs, such as, for example, when the mouse is dragged over that object. In the event handler for a mouse-down event, we can call the handleKey() function to activate the appropriate action in the user interface. The object that you use to attach this event handler will most often be a button or an image object, but mouse clicks can be detected on any of the other visible objects if needed. Timing Most user interfaces feature events that happen on a timed basis. For example, a light may flash regularly. You may wish to read a sensor regularly and update the display accordingly. In the simulation, you may not be reading a real sensor, but you might generate random data, read data from a file, or generate data based on some mathematical model. To drive these timed events, we need to create one of CPB's timers. The timer has an associated function that is called regularly. This regular event allows us to update a counter to measure out any time periods we require. This is analogous to a timed interrupt on the target system. Let's assume for the moment that the target system will not have an RTOS, and that we have a timed interrupt that increments a counter every 10 milliseconds. We would possibly have a simple super-loop architecture that might look like this: int main(void){ initialize(); while (1) { if (counter%INTERAL_1 == 0) { doWork_1(); } if (counter%INTERAL_2 == 0) { doWork_2(); } }} where counter is a global variable that will be incremented in our interrupt. We can use any multiple of this single counter to detect the times when we should do other work. Numerous scheduling algorithms could be applied here, but those variations are beyond the scope of this column. The doWork() functions could include scanning the keyboard and reacting to those keys. There is one important difference between the way the target code and CPB manage timed work. The CPB environment does not support continuous loops such as the one shown in the preceding code. After an event is handled, control must return to the Windows operating system to allow the application window to be updated. So the loop in my code is not practical. Instead, we need to call the scheduleTick() routine shown below on a regular timed basis. We do this using the CPB timer objects: void scheduleTick(void){ static int counter = 0; if (counter%INTERAL_1 == 0) { doWork_1(); } if (counter%INTERAL_2 == 0) { doWork_2(); } counter++;} We now have two ways to handle a key press in CPB. The first is an event handler that directly calls the handleKey() function in our user interface code. The second is to drive the polling with a timer object, which will then check the state of each clickable object to see if it has been activated. The latter might be a better simulation of what will happen on the target, but the first method is easier to implement. Most of the challenging scheduling issues are not of great concern in the user interface, since any deadlines on the user interface are almost always soft. If a flashing light is a little early one time and a little late another, it doesn't affect the behavior of the rest of the system. This is just as well, since the timing behavior of the user interface running on a PC will always be different from the target (the Windows environment makes it difficult to get accurate timing measurements at the millisecond level). So, while CPB is a good environment for simulating the user interface, it's not suitable for simulating the hard real-time aspects of your system. High fidelity If the prototype is going to be shown widely, a close resemblance to the real thing will help users understand the final product. One of the best ways to achieve this is to take a picture of the casing and use that as the background of the window being presented as the interface. To do this in CPB, you create an image object and import a bitmap of the interface into that image. The image might come from your CAD package or from a scanned picture of the plans for the device. For each area on the picture that should respond to mouse clicks, place an empty image object over the area, and associate mouse events with that image. Since the clickable image is empty and therefore transparent, the user will see the background image, though events will go to the clickable image. In this way, a whole set of buttons can be implemented. Each button-event handler will then call a function such as: void handleButton(ButtonId key); The code inside this function is common code that runs on the PC or on the target. It doesn't care whether the event came from the CPB environment or from a keypad-scanning routine on the target. Figure 2: The five-button interface simulation Figure 2, which was also used in last month's column, shows a CPB-simulated, five-button user interface. The buttons you can see are the ones in the single large background image, but the clicks are captured by empty images that are placed in front of the background image. Graphics The more complex the user interface, the greater the benefit of doing the initial development on a PC. Since graphical user interfaces tend to be more complex than nongraphical ones, the debugging environment of a PC is a big advantage. Third party graphics toolkits generally provide a library for a PC platform, allowing much of the development to be done there before porting to the target hardware. I usually program the graphics from the ground up, without a third party library. If you're working with a small display and a low-power CPU, you end up doing the same thing. The third party toolkits are generally only ported to 32-bit CPUs and graphics controllers that have a 640×480 resolution or greater. So I find myself wanting to test my graphics code in the CPB environment. Fortunately, the image object allows us to set individual pixels. Each image has a canvas, and we can draw directly to it. We create an image with width and height equal to the dimensions of the real screen. We can then access the canvas as a property of the image, and set individual pixels. For example, we can set the pixel at coordinates (20,30) to black with: FrontPanelWindow->GraphicsDisplay ->Canvas[20][30] = 0; //black where GraphicsDisplay is the name of the image that we created for displaying graphics. We can set other colors by assigning an RGB value, with eight bits for each component, or a total of 24 bits per pixel. For example, for purple, the red component is 0x80, the green component is 0, and the blue component is 0x80, so the 24-bit value to assign to the pixel is 0x800080. The CPB environment is typically more powerful than the physical screen you are imitating, so you may have fewer than 24 bits per pixel. In practice, you'll define constants for the colors commonly used in your application with separate definitions for the CPB environment and the target system. For example: #ifdef USING_CPB// CPB with 24 bits per pixel #define BLACK 0x000000#define WHITE 0xFFFFFF#define PURPLE 0x800080#else// Target with 6 bits per pixel #define BLACK 0#define WHITE 0xff#define PURPLE 0x22#endif If you write a macro or function to perform plotPixel (x, y, color ), the lines and bitmaps and other routines can often be written in a platform independent fashion. The exception is when the graphics hardware supports functions such as line drawing or bitmap copying. To take advantage of these features, your code will have to be target specific. Once you move up a level and you are writing code to render a screen layout, the code will always be platform independent. In most projects, I find that this portion of the code is the largest and changes most frequently. Once your line drawing works, you are unlikely to change it, but the layout of a screen might go through many iterations before you find a balance between fitting all the required information and making it pleasing to the eye. CPB gotchas The CPB environment's default settings do not always suit the uses I've described, so here are a few of the options that I set for any project I create. These properties generally only cause difficulty when using the application on a PC other than the one on which the application was originally developed. I often send an executable file to other parties, and I want them to be able to run that executable without having to install the CPB environment or any of its libraries. In the Project-Options dialog, there is a Linker tab. On this tab I turn off “Use Dynamic RTL.” There is also a Packages tab, and under it I turn off “Build with runtime packages.” These two options enlarge the executable file, but negate dependencies on other libraries at run time. On each form, a property called Scaled is defaulted to true. I always set this to false. This property allows the layout to be changed depending on whether the fonts available are the same size as the fonts used in the initial design. I find that this property, if set, sometimes upsets the layout of the background picture of the interface. Waiting for a download Hopefully these last two columns have given you a feel for implementing prototypes on a PC before building hardware. Even if C++ Builder is not your chosen tool, most of the principles still apply. As I write this, I am waiting for a three hour download of a software update to a target where I no longer have any debugging tools available. This is exactly the sort of environment where I like to try my code changes in a prototype first, to give me the best possible odds that the code changes will work the first time. I am keeping my fingers crossed! .
https://www.embedded.com/more-prototyping-tips/
CC-MAIN-2020-34
refinedweb
2,582
59.23
is it could be that my arduino IDE is wrongly setting? How to add slider in blynk to control temperature setpoint In the IDE under tools set erase flash to “All Flash Contents” and flash a blank sketch. Then change the settings back to erase “Only Sketch” and flash my latest sketch again. i checked all my sensor and they working using DHTtester code, i wonder what would seem the problem You shouldn’t have to change either. I think without Blynk your DHTtester sketch runs OK. At the moment I can’t work out why adding Blynk is a problem. Do any Blynkers have a DHT11 that they can test a sketch with? Sounds to me like there’s some newbie mistake happening here. I’m reading this on an iPhone (sat in a very nice, and hot, beach bar in Thailand by the way ) so following the ins-and-outs of the thread is tricky. Has the OP posted his DHT test code, and does it use the same DHT pins, library etc as is being used in the other code? Also, if this is lashed-up on a breadboard then there’s every chance that there are some dodgy connections in there somewhere. A more forensic approach to the fault-finding process would no doubt reveal the problem, but that’s not really possible with the OP’s obvious lack of experience and language issues. Pete. Yes iam actually very fresh newbie in programming . Iam sure the connection is right. Btw iam not using breadboard just straight jumper wire connection. I already download the library for blynk and the dht sensor. What do you mean that library being used in another code? Still warmer than Blighty though. Currently 31° and Factor 50 here! Pete. @PeteKnight don’t you travel with a WeMos and a DHT in your case? After changing all the jumper wire to new one and i using below code , it is functioning correctly! Unfortunately the serial monitor still showing " Failed to read from DHT sensor". Iam planning to put LED widget as notification when the relay module is on and Button widget for manually operate the relay. #include <SPI.h> #include <ESP8266WiFi.h> #include <BlynkSimpleEsp8266.h> #include <DHT.h> // You should get Auth Token in the Blynk App. // Go to the Project Settings (nut icon). char auth[] = "xxx"; // Your WiFi credentials. // Set password to "" for open networks. char ssid[] = "xxx"; char pass[] = "xxx"; #define DHTPIN 0 // D3 // Uncomment whatever type you're using! #define DHTTYPE DHT11 // DHT 11 DHT dht(DHTPIN, DHTTYPE); BlynkTimer timer; float setpoint = 0; bool relaystatuschanged = false; const int relayPin = 5; // D1; BLYNK_WRITE(V0)// slider widget { setpoint = param.asFloat(); relaystatuschanged = true; }); if((setpoint < t) && (relaystatuschanged == true)) { digitalWrite(relayPin, 0); // assuming relay is active HIGH relaystatuschanged = false; } if((setpoint > t) && (relaystatuschanged == true)) { digitalWrite(relayPin, 1); // assuming relay is active HIGH relaystatuschanged = false; } } void setup() { Serial.begin(9600); Blynk.begin(auth, ssid, pass); dht.begin(); pinMode(relayPin, OUTPUT); // Setup a function to be called every second timer.setInterval(5000L, sendSensor); } void loop() { Blynk.run(); timer.run(); }```
https://community.blynk.cc/t/how-to-add-slider-in-blynk-to-control-temperature-setpoint/32521?page=5
CC-MAIN-2019-04
refinedweb
513
74.9
Abstraction CascadeTue 14 November 2017 by Moshe Zadka (This is an adaptation of part of the talk Kurt Rose and I gave at PyBay 2017) An abstraction cascade is a common anti-pattern in legacy system. It is useful to understand how to recognize it, how it tends to come about, how to fix it -- and most importantly, what kind of things will not fix it. The last one is important, in general, for anti-patterns in legacy systems: if the obvious fix worked, it would have been already dealt with, and would not be a common anti-pattern in legacy systems. Recognition The usual pattern for a abstraction cascade looks like complicated, ad-hoc, if/else sequence to decide which path to take. Here is example for a abstraction cascade for finding out a network address corresponding to a name: History At each step, it seems reasonable to make a specific change. Here is a typical way this kind of code comes about. The initial version is reasonable: since DNS is a way to publish name to address mapping, why not use a standard? def get_address(name): return dns_lookup(name), DEFAULT_PORT Under load, an outage happened. There was no time to investigate how to configure DNS caching or TTL better -- so the "popular" services got added to a static list, with a "fast path" checking. This decision also makes sense: when an outage is ongoing, the top priority is to relieve the symptoms. def get_address(name): if name in services: # Fixes issues #6985 # TODO: Hotfix, clean-up later return service[name].address, DEFAULT_PORT return dns_lookup(name), DEFAULT_PORT However, now the door has opened to add another path in the function. When the need to support multiple services on one host happened, it was easier to just add another path: after all, this was only for new services. def get_address(name): if name in services: # Added for issue #2321 if ':' in services[name].address: return service[name].address.split(':') else: # Fixes issues #6985 # TODO: Hotfix, clean-up later return service[name].address, DEFAULT_PORT return dns_lookup(name), DEFAULT_PORT When the change to IPv6 occured, splitting on : was not a safe operation -- so a separate field was added. Again, the existing "new" services (by now, many -- and not so new!) did not need to be touched: Of course, this is typically just chapter one in the real story: having to adapt to multiple data centers, or multiple providers of services, will lead to more and more of these paths -- with nothing thrown away, because "some legacy service depends on it -- maybe". Non-fixes Fancier dispatch Sometimes the ad-hoc if/else pattern is obscured by more abstract dispatch logic: for example, something that loops through classes and finds out which one is the right one: class AbstractNameFinder(object): def matches(self, name): raise NotImplementedError() def get_address(self, name): raise NotImplementedError() class DNS(AbstractNameFinder): def matches(self, name): return True def get_address(self, name): return dns_lookup(name), DEFAULT_PORT class Local(AbstractNameFinder): def matches(self, name): return hasattr(services.get(name), 'ip') def get_address(self, name): return services[name].ip, services[name].port finders = [Local(), DNS()] def get_address(name): for finder in finders: if finder.match(name): return finder.get_address(name) This is actually worse -- now the problem can be spread over multiple files, with no single place to fix it. While the code can be converted to this form, semi-mechanically, this does not fix the underlying issue -- and will actually make the problem continue on with force. Pareto fix The Pareto rule is that 80% of the problem is solved with 20% of the effort. It is often the case that a big percentage (in the stereotypical Pareto case, 80%) of the problem is not hard to fix. For example, most services are actually listed in some file, and all we need to do is read this file in and look up based on that. The incentive to fix "80% of the problem" and leave the "20%" for later is strong. However, usually the problem is that each of those "Pareto fixes" again makes the problem worse: since it is not a complete replacement, another dispatch layer needs to be built to support the "legacy solution". The new dispatch layer, the new solution, and the legacy solution all become part of the newest iteration of the legacy system, and cause the problem to be even worse. Fixing 80% of the problem is useful for prototyping, since we are not sure we are solving the right problem and nothing better exists. However, in this case, the complete solution is necessary, so neither of these conditions hold. Escape strategy The reason this happens is because no single case can be removed. The way forward is not to add more cases, but to try and remove a single case. The first question to ask is: why was no case removed? Often, the reason is that there is no way to test whether removal is safe. It might take some work to build infrastructure that will properly make removal safe. Unit tests are often not enough. Integration tests, as well, are sometimes not enough. Sometimes canary systems, sometimes feature flag systems, or, if worst comes to worst, a way to test and roll-back quickly if a problem is found. Once it is possible to remove just one case (in our example above, maybe check what it would take to remove the case where we split on a colon, since this is clearly worse than just having separate attributes), thought needs to be given to which case is best. Sometimes, there is more than one case that is really needed: some inherent, deep, trade-off. However, it is rare to need more than two, and almost unheard of to need more than three. Start removing unneeded cases one by one. Conclusion When seeing an abstraction cascade, there is a temptation to "clean it up": but most obvious clean-ups end up making it worse. However, by understanding how it came to be, and finding a way to remove cases, it is possible to do away with it.
https://orbifold.xyz/abstraction-cascade.html
CC-MAIN-2020-10
refinedweb
1,021
59.03
Error Handling in Combine Explained Using code examples to show how to beat those failing cases When getting started with Combine you’ll quickly run into error handling issues. Each Combine stream receives either a value or an error, and unlike frameworks like RxSwift, you need to be specific about the expected error type. To prepare you for these cases, in this piece I’ll go over the options available in Combine to catch, ignore, and handle errors on a stream. We will also cover some important things you need to know when an error occurs on your stream. Just getting started with Combine? You might want to first take a look at Getting started with the Combine framework in Swift or my Combine Playground. Combine Streams and Typed Errors A big difference between a framework like RxSwift and Combine is the requirement of typed error definitions in streams. If we compare the Observable with its Combine equivalent AnyPublisher, we can see the difference in the type declaration. public class Observable<Element> : ObservableType struct AnyPublisher<Output, Failure> where Failure : Error The AnyPublisher requires us to specify the Failure error type while the Observable only takes the generic Element type. Swift requires us to think about error handling which we can take as something good. However, it does not hold us back from defining the expected type as just Swift.Error, which basically comes down to the same behavior as in RxSwift. Once you do require your stream to expect a certain error type, you’ll run into casting errors as each operator needs to return the same error type as the leading stream. Let’s dive into the Combine operators for error handling. Mapping errors using mapError To map an error to the expected error type we can use the mapError operator. In the following example, we have a passthrough subject which expects a URL output and a RequestError error type. Once we start mapping this stream into a URLSessionDataTaskPublisher we immediately get an error pointing out the error type mismatch. In this case, the solution is as simple as using the mapError operator which will wrap the URLError into a RequestError using the session error case we defined earlier. Using the retry operator In the above example, we’ve used a URLSessionDataTaskPublisher. You might want to use the retry operator before actually accepting an error when working with data requests. It takes the number of retries to take before letting the stream actually fail. Catching errors If you want to catch errors early and ignore them after you can use the catch operator. This operator allows you to return a default value for if the request failed. Examples of this could be: - An empty array for search results - A default image placeholder if the image request failed The latter is the one we will use in our example. Using replaceError instead of catch ReplaceError vs Catch: both operators seem very similar. The big difference is that the replaceError(:) operator is completely ignoring the error. As in the above example, we're doing nothing more than returning the placeholder notFoundImage in the case of an error. We could simplify this by using the replace error operator to directly map any errors into our placeholder image: When the assign(to:on:) operator is unavailable A common example in which you’ll need to map errors is when you try to assign an outcoming value to a property of an object. You’ll try to use the autocompletion and you find out that the assign(to:on:) operator is unavailable. The following error will occur if you are forced to write the code either way: Referencing instance method ‘assign(to:on:)’ on ‘Publisher’ requires the types ‘RequestError’ and ‘Never’ be equivalent You can fix this by either catching the error as in explained in the above example or by simply using the assertNoFailure operator. This operator will raise a fatalError and should, therefore, only be used if it's a programming error. If an error is expected you should always use the catch operator instead. Conclusion We’ve covered a lot about error handling in Combine which should be enough to make you beat all those failing cases! Make sure to handle errors accordingly and do not simply ignore them. The unhappy flow is just as important to your users as a happy flow. If you’d like to play around with the things you just have learned, take a look at my Swift Combine Playground which includes a page about error handling in Combine. To read more about Swift Combine, take a look at my other Combine blog posts:
https://medium.com/better-programming/error-handling-in-combine-explained-9f622ba759ce
CC-MAIN-2019-39
refinedweb
778
50.67
PICS/DSig Standard Library PICS/DSig Standard Library in Java -- Version History Current Version = 1.2 -- May 14 1998 Version 1.2.x History For information on the current version, see the Known Bugs / What's New page. DSig Reference Code Version History The DSig Reference Code did not previously have version numbers. It's version history was perserved in a history file that was distributed with it. Version 1.1.4 -- February 18 1998 New method setRating added to Label. IsoDate moved from package w3c.tools.util to w3c.pics.parser. The PICS library now contains classes only in w3c.pics.parser. Completely rewritten LabelFinder , by the creators of the PICS Robot Label Grabber . Version 1.1.3 -- January 29 1998 Fixed a bug in the ProfileParser when reading URLs containing %* Fixed a bug in the ProfileParser that caused the hostnames within URLs to be evaluated improperly. Updated ProfileParser to be in synch with PICSRules spec dated December 29 1997. Fixed a bug that occured when cloning Profiles. Policy was made a public class. Several of its methods were also made public. Removed the quotes that surrounded rating service and rating system URLs inside Service objects. Quotes still appear properly when toString() is called. New method writeSQLQuery() added to Policy. New method getFullServiceName() added to Profile. New method toCommaDelimString() added to Rating. Fixed a bug in Label that caused quotes to appear in generic clauses. Generic clauses should not have quotes around their boolean values. Updated all parsers to use JavaCC 0.7 NOTE: These parsers will no longer function properly under versions of JavaCC prior to 0.7 Version 1.1.2 -- October 31 1997 Fixed a bug in Policy that caused crashes when using the Filter command on a page that did not have any labels. New method clone() added to Profile. Version 1.1.1 -- October 24 1997 Updated ProfileParser to be in synch with PICSRules spec dated October 22 1997. New class ProfileDecoder : This class can be used to decode the hex-encodings that are found within strings in a PICSRule. ParseError was changed to make certain types of error messages more informative. Fixed the ProfileParser so that it correctly handles the case-insensitivity of attribute names. Version 1.1 -- October 16 1997 Updated ProfileParser to be in synch with PICSRules spec dated October 9 1997. Supports evaluation of a profile against multiple labels at a time. This allows labels for multiple services for the same page to be submitted to a Profile all at the same time, as per the PICSRules specification. New commands in PICSParser: Find Labels and Filter. See the documentation page for details. New class LabelFinder : This class is used to locate labels in the headers of HTML files. The LabelParser now attempts to parse PICS1.0 labels as PICS1.1 labels rather than automatically failing. If the PICS1.0 label actually has valid PICS1.1 syntax, it will parse correctly. In debugmode, profile evaluation now prints out the Explanation sub-clause of the Policy that activated. New method isValidWithReason() added to Profile. Version 1.0.x History Version 1.0.8 -- October 14 1997 The bugfix from version 1.0.7 did not work properly. It fixed some cases of the servicename.categoryname bug, but broke other parts of the label/profile evaluator. This version should fix any lingering problems in the evaluator. Version 1.0.7 -- October 6 1997 Fixed a bug in Label.toDsigString() that caused extra spaces to appear. New method getServiceNames() added to Profile. Fixed a bug in label/profile evaluation in which the Profile mistakenly only used the categoryname to determine whether a label's category matched a particular expression, rather than using the full servicename.categoryname . Version 1.0.6 -- September 23 1997 Updated ProfileParser to be in synch with PICSRules spec dated September 22 1997. Version 1.0.5 -- September 5 1997 Fixed a bug in Profile that caused crashes when evaluating profiles that contained no Policy clauses. A Profile without Policy clauses should always return true. Version 1.0.4 -- September 4 1997 Fixed a bug in Label regarding multivalued categories. Parens were accidentally being omitted. Fixed a bug in Rating. Range-style ratings were being incorrectly initialized. The high end of the range was correct, but the low end of the range was being left empty. Fixed a bug in Profile parsing/evaluation with simple expressions that use the '=' operator. They had not been working correctly. Version 1.0.3 -- September 2 1997 All objects in the library are now serializable as per the Java 1.1 Object Serialization Specification . New method getFullName() added to Category. New methods getName() and getValue() added to Enum. Shorter error messages. No more nearly-infinite error messages on parse errors. Version 1.0.2 -- August 28 1997 Updated ProfileParser to be in synch with PICSRules spec dated August 28 1997. Version 1.0.1 -- August 22 1997 Updated ProfileParser to be in synch with PICSRules spec dated August 22 1997. RejectByURL and AcceptByURL Policy clauses now evaluate correctly when the URLs do not match any known internet scheme. Many classes that were public in version 1.0.0 now have package access only. Extensions in profiles now parse correctly. Version 1.0.0 -- August 22 1997 Initial Release [email protected] 15 May 98
http://www.w3.org/PICS/refcode/Parser/history.html
CC-MAIN-2014-52
refinedweb
889
60.92
Using JUnit JUnit is a standardized framework for testing Java units (that is, Java classes). JUnit can be automated to take the some of the work out of testing. Imagine you’ve created an enum type with three values: GREEN, YELLOW, and RED. Listing 1 contains the code: Listing 1 public enum SignalColor { GREEN, YELLOW, RED } A traffic light has a state (which is a fancy name for the traffic light’s color). public class TrafficLight { SignalColor state = SignalColor.RED; If you know the traffic light’s current state, you can decide what the traffic light’s next state will be. public void nextState() { switch (state) { case RED: state = SignalColor.GREEN; break; case YELLOW: state = SignalColor.RED; break; case GREEN: state = SignalColor.YELLOW; break; default: state = SignalColor.RED; break; } } You can also change the traffic light’s state a certain number of times: public void change(int numberOfTimes) { for (int i = 0; i < numberOfTimes; i++) { nextState(); } } Putting it all together, you have the TrafficLight class in Listing 2. Listing 2 public class TrafficLight { SignalColor state = SignalColor.RED; public void nextState() { switch (state) { case RED: state = SignalColor.GREEN; break; case YELLOW: state = SignalColor.RED; break; case GREEN: state = SignalColor.YELLOW; break; default: state = SignalColor.RED; break; } } public void change(int numberOfTimes) { for (int i = 0; i < numberOfTimes; i++) { nextState(); } } } In the olden days you might have continued writing code, creating more classes, calling the nextState and change methods in Listing 2. Then, after several months of coding, you’d pause to test your work. And what a surprise! Your tests would fail miserably! You should never delay testing for more than a day or two. Test early and test often! One philosophy about testing says you should test each chunk of code as soon as you’ve written it. Of course, the phrase “chunk of code” doesn’t sound very scientific. It wouldn’t do to have developers walking around talking about the “chunk-of-code testing” that they did this afternoon. It’s better to call each chunk of code a unit, and get developers to talk about unit testing. The most common unit for testing is a class. So a typical Java developer tests each class as soon as the class’s code is written. So how do you go about testing a class? A novice might test the TrafficLight class by writing an additional class — a class containing a main method. The main method creates an instance of TrafficLight, calls the nextState and change methods, and displays the results. The novice examines the results and compares them with some expected values. After writing main methods for dozens, hundreds, or even thousands classes, the novice (now a full-fledged developer) becomes tired of the testing routine and looks for ways to automate the testing procedure. Tired or not, one developer might try to decipher another developer’s hand-coded tests. Without having any standards or guidelines for constructing tests, reading and understanding another developer’s tests can be difficult and tedious. So JUnit comes to the rescue!. To find out how Eclipse automates the use of JUnit, do the following: Create an Eclipse project containing Listings 1 and 2. In Windows, right-click the Package Explorer’s TrafficLight.java branch. On a Mac, control-click the Package Explorer’s TrafficLight.java branch. A context menu appears. In the context menu, select New→JUnit Test Case. As a result, the New JUnit Test Case dialog box appears. Click Next at the bottom of the New JUnit Test Case dialog box. As a result, you see the second page of the New JUnit Test Case dialog box. The second page lists methods belonging (either directly or indirectly) to the TrafficLight class. Place a checkmark in the checkbox labeled Traffic Light. As a result, Eclipse places checkmarks in the nextState() and change(int) checkboxes. Click Finish at the bottom of the New JUnit Test Case dialog box. JUnit isn’t formally part of Java. Instead comes with its own set of classes and methods to help you create tests for your code. After you click Finish, Eclipse asks you if you want to include the JUnit classes and methods as part of your project. Select Perform the Following Action and Add JUnit 4 Library to the Build Path. Then click OK. Eclipse closes the dialog boxes and your project has a brand new TrafficLightTest.java file. The file’s code is shown in Listing 3. The code in Listing 3 contains two tests, and both tests contain calls to an unpleasant sounding fail method. Eclipse wants you to add code to make these tests pass. Remove the calls to the fail method. In place of the fail method calls, type the code shown in bold in Listing 4. In the Package Explorer, right-click (in Windows) or control-click (on a Mac) the TrafficLightTest.java branch. In the resulting context menu, select Run As→JUnit Test. Eclipse might have more than one kind of JUnit testing framework up its sleeve. If so, you might see a dialog box like the one below. If you do, select the Eclipse JUnit Launcher, and then click OK. As a result, Eclipse runs your TrafficLightTest.java program. Eclipse displays the result of the run in front of its own Package Explorer. The result shows no errors and no failures. Whew! Listing 3 import static org.junit.Assert.*; import org.junit.Test; public class TrafficLightTest { @Test public void testNextState() { fail("Not yet implemented"); } @Test public void testChange() { fail("Not yet implemented"); } } Listing 4 import static org.junit.Assert.*; import org.junit.Test; public class TrafficLightTest { @Test public void testNextState() { TrafficLight light = new TrafficLight(); assertEquals(SignalColor.RED, light.state); light.nextState(); assertEquals(SignalColor.GREEN, light.state); light.nextState(); assertEquals(SignalColor.YELLOW, light.state); light.nextState(); assertEquals(SignalColor.RED, light.state); } @Test public void testChange() { TrafficLight light = new TrafficLight(); light.change(5); assertEquals(SignalColor.YELLOW, light.state); } } When you select Run As→JUnit Test, Eclipse doesn’t look for a main method. Instead, Eclipse looks for methods starting with @Test and other such things. Eclipse executes each of the @Test methods. Things like @Test are Java annotations. Listing 4 contains two @Test methods: testNextState and testChange. The testNextState method puts the TrafficLight nextState method to the test. Similarly, the testChange method flexes the TrafficLight change method’s muscles. Consider the code in the testNextState method. The testNextState method repeatedly calls the TrafficLight class’s nextState method and JUnit’s assertEquals method. The assertEquals method takes two parameters: an expected value and an actual value. In the topmost assertEquals call, the expected value is SignalColor.RED. You expect the traffic light to be RED because, in Listing 2, you initialize the light’s state with the value SignalColor.RED. In the topmost assertEquals call, the actual value is light.state (the color that’s actually stored in the traffic light’s state variable). If the actual value equals the expected value, the call to assertEquals passes and JUnit continues executing the statements in the testNextState method. But if the actual value is different from the expected value, the assertEquals fails and the result of the run displays the failure. For example, consider what happens when you change the expected value in the first assertEquals call in Listing 4: @Test public void testNextState() { TrafficLight light = new TrafficLight(); assertEquals(SignalColor.YELLOW, light.state); Immediately after its construction, a traffic light’s color is RED, not YELLOW. So the testNextState method contains a false assertion and the result of doing Run As→JUnit looks like a child’s bad report card. Having testNextState before testChange in Listing 4 does not guarantee that JUnit will execute testNextState before executing testChange. If you have three @Test methods, JUnit might execute them from topmost to bottommost, from bottommost to topmost, from the middle method to the topmost to the bottommost, or in any order at all. JUnit might even pause in the middle of one test to execute parts of another test. That’s why you should never make assumptions about the outcome of one test when you write another test. You might want certain statements to be executed before any of the tests begin. If you do, put those statements in a method named setUp, and preface that method with a @Before annotation. (See the setUp() checkbox in the figure at Step 3 in Listing 2, above.) Here’s news: Not all assertEquals methods are created equal! Imagine adding a Driver class to your project’s code. “Driver class” doesn’t mean a printer driver or a pile driver. It means a person driving a car — a car that’s approaching your traffic light. For the details, see Listing 5. Listing 5 public class Driver { public double velocity(TrafficLight light) { switch (light.state) { case RED: return 0.0; case YELLOW: return 10.0; case GREEN: return 30.0; default: return 0.0; } } } When the light is red, the driver’s velocity is 0.0. When the light is yellow, the car is slowing to a safe 10.0. When the light is green, the car cruises at a velocity of 30.0. (In this example, the units of velocity don’t matter. They could be miles per hour, kilometers per hour, or whatever. The only way it matters is if the car is in Boston or New York City. In that case, the velocity for YELLOW should be much higher than the velocity for GREEN, and the velocity for RED shouldn’t be 0.0.) To create JUnit tests for the Driver class, follow Steps 1 to 9 listed previously in this article, but be sure to make the following changes: In Step 2, right-click or control-click the Driver.java branch instead of the TrafficLight.java branch. In Step 5, put a check mark in the Driver branch. In Step 8, remove the fail method calls to create the DriverTest class shown in Listing 6. (The code that you type is shown in bold.) Listing 6 import static org.junit.Assert.*; import org.junit.Test; public class DriverTest { @Test public void testVelocity() { TrafficLight light = new TrafficLight(); light.change(7); Driver driver = new Driver(); assertEquals(30.0, driver.velocity(light), 0.1); } } If all goes well, the JUnit test passes with flying colors. (To be more precise, the JUnit passes with the color green!) So the running of DriverTest isn’t new or exciting. What’s exciting is the call to assertEquals in Listing 6. When you compare two double values in a Java program, you have no right to expect on-the-nose equality. That is, one of the double values might be 30.000000000 while the other double value is closer to 30.000000001. A computer has only 64 bits to store each double value, and inaccuracies creep in here and there. So in JUnit, the assertEquals method for comparing double values has a third parameter. The third parameter represents wiggle room. In Listing 6, the statement assertEquals(30.0, driver.velocity(light), 0.1); says the following: “Assert that the actual value of driver.velocity(light) is within 0.1 of the expected value 30.0. If so, the assertion passes. If not, the assertion fails.” When you call assertEquals for double values, selecting a good margin of error can be tricky. These figures illustrate the kinds of things that can go wrong. Here, your margin of error is too small. There, your margin of error is too large. Fortunately, in this DriverTest example, the margin 0.1 is a very safe bet. Here’s why: When the assertEquals test fails, it fails by much more than 0.1. Failure means having a driver.velocity(light) value such as 0.0 or 10.0. In this example, when the assertEquals test passes, it probably represents complete, on-the-nose equality. The value of driver.velocity(light) comes directly from the return 30.0 code in Listing 5. No arithmetic is involved. So the value of driver.velocity(light) and the expected 30.0 value should be exactly the same (or almost exactly the same).
https://www.dummies.com/programming/java/using-junit/
CC-MAIN-2019-30
refinedweb
2,023
67.76
Hey everybody, I'd like to write a python script which renames a file on my desktop to a number which is 1 less than what it was before. If you're wondering, the number tells me how many days left 'till I finish high school, lol, but I've also got other things inside the file. I'm still learning python (reading 'Learning Python') but I've been able to figure some of the commands I'll be using in this script. import os os.rename('C:/Users/Sam/Desktop/oldname.txt', 'C:/Users/Sam/Desktop/rename.txt') But what I want to do is take the file I have on my desktop ('516.txt' as of today) and have a command that does something like 'filename -= 1'. The problem is, I don't know how to attach the number in the filename to a variable. These commands might not exits (since I don't know too much syntax) but I want something like: x = numbers in filename.txt x -= 1 os.rename('current#.txt', 'x.txt') How might I go about doing this? Thanks
https://www.daniweb.com/programming/software-development/threads/184810/renaming-a-file-with-numbers
CC-MAIN-2022-33
refinedweb
186
80.62
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Function field that returns list We're trying to create odoo function field, that renders as list. And problem, that function does not execute. Extended question with code here: UPD Changed code according to Ivan's comment. Not helped. Question is more about, why it's not entering "_orders" function at all. The return should be a dictionary which keys are the ids of the processed object and the values are the returned values. I see that you are setting it as type many2many. x2many has a special way to specify the entries to add. For many2many the returned values need to be in the form of [(6, 0, [XXXX])] where [XXXX] is the list of IDs that you want to link. Also you are mixing v8 sytax with v7 syntax. I'm not quite sure if it is OK. So your method should look something like: def _orders(self, cr, uid, ids, fields, arg, context=None): res = {} statement = self.browse(cr, uid, context.get('active_id', False), context=context) _order_pool = self.pool.get('sale.order') if statement and statement.partner_id: for _obj in self.browse(cr, uid, ids, context=context): _orders = _order_pool.search(cr, uid, [('partner_id', '=', statement.partner_id.id), ], context=context) res[_obj.id] = [(6, 0, _orders)] return res To pre-populate the orders, you need to create the "account.bank.statement.review.wizard", passing the appropriate context, before displaying the view while passing the domain [('id', '=', created_wizard_id)]. New code looks like this: Still not works. Just not entering "_orders" function. UPD Final solution: Inspired from here: About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/function-field-that-returns-list-75051
CC-MAIN-2017-39
refinedweb
312
69.18
Yii2 custom gii generator and template? gii code template yii2 gii model yii2 gui yii2 frame yii2 create search model yii2 class yii2 tutorial I just implementing a new gii generator for my requirement on yii2. i want to know best place to keep those codes? Create app\modules\gii directory with own Generator class, views, templates. namespace app\modules\gii; class MyCustomGenerator extends \yii\gii\generators\crud\Generator { // ... public function generate() { // ... } } Then enable it in gii configuration. [ // ... 'modules' => [ 'gii' => [ 'class' => 'yii\gii\Module', 'generators' => [ 'class' => '\app\modules\gii\MyCustomGenerator', 'model' => ['class' => '\app\modules\gii\model\MyCustomGenerator'], ], ], ], ] Additional topics: Creating your own generators, In order to create your own generator, you need to create or override these classes in any folder. Again as in the previous section customize the configuration: Yii2-gii. It add theming generator and new template for crud.It based on yiisoft/yii2-gii. Installation. The preferred way to install this extension is through composer. Either run. php composer.phar require-dev myzero1/yii2-gii:1.* or add "myzero1/yii2-gii": "~1.0.0" to the require-dev section of your composer.json file. Setting Which module of gii that you need to custom? If it is CRUD module, I think you should put it into app/backend/ because it just use for this app. Here is my customization: custom crud gii templates My folder structure: yii2 - custom gii crud templates - folder structure Yii2 custom gii generator and template?, Create app\modules\gii directory with own Generator class, views, templates. namespace app\modules\gii; class MyCustomGenerator extends Yii 2 (in Chinese means “simple and evolutionary”) is a modern framework designed to be a solid foundation for your PHP application. Yii2 is a great choice if you want to save time on your development process. It has a CRUD generating module called Gii built into it. It helps you generate a lot of common code quickly. Yii2 - Gii generators - how to custom crud templates, 6/27/17. Yii2 - Gii generators - how to custom crud templates. 1. Create your template folders in The Gii CRUD Generator. So far we’ve been doing the code generating in a somewhat piecemeal way. With Gii, we can do an entire php CRUD scenario for tables in our database. In this go around, we’ll create a drums table and let Gii create the entire setup for us. 1. Create The Drums Table Generator, yii\gii\Generator, Source Code, string, The root path of the template files that are currently being used. yii\gii\Generator 'password'], 'required'], // built-in "string" validator customized with "min" and Other example, an advanced example list of commands could also be seen here which are the related commands for generating working REST resources in the app template I linked before. Built With. yiisoft/yii2 - this extension is powered by and for Yii framework 2.0 based applications. yiisoft/yii2-gii - the official Gii Extension for Yii How to create custom crud generator templates in Yii2-gii?, A step by Step guide to create custom crud generator templates in Yii2-gii .Rudra Softech is a Software development company for open source AdminLTE Asset Bundle. Backend UI for Yii2 Framework, based on AdminLTE. This package contains an Asset Bundle for Yii 2.0 Framework which registers the CSS files for the AdminLTE user-interface. Yii Framework: Tailoring Code Generators For Your App, Yii 1.1.2 introduced a module named Gii (rhymes with “Yee”) that took code generation to the web and made it fully customizable. Now you could have templates แม่แบบ(Application Template) ใน Yii2. Gii การใช้ Gii Generator สำหรับ Basic Template. การสร้าง Virtual Host สำหรับ Yii2 Basic Application Template. การติดตั้ง Yii2 Advanced Application Template. Gii Generator สำหรับ Advanced Template - See official guide - Creating your own templates. - i know how to write, what i want is were to place the generator and template. - @VijayArun Place them in app\modules\gii, as I told in my anwer.
http://thetopsites.net/article/54437088.shtml
CC-MAIN-2021-10
refinedweb
664
50.33
view raw I'm trying to use python to copy a tree of files/directories. is it possible to use copytree to copy everything that ends in foo? There is an ignore_patterns patterns function, can I give it a negated regular expression? Are they supported in python? eg. copytree(src, dest, False, ignore_pattern('!*.foo')) Where ! means NOT anything that ends in foo. thanks. shutil.copytree has an ignore keyword. ignore can be set to any callable. Given the directory being visited and a list of its contents, the callable should return a sequence of directory and filenames to be ignored. For example: import shutil def ignored_files(adir,filenames): return [filename for filename in filenames if not filename.endswith('foo')] shutil.copytree(source, destination, ignore=ignored_files)
https://codedump.io/share/AtuErvawdf16/1/python-copytree-with-negated-ignore-pattern
CC-MAIN-2017-22
refinedweb
125
61.02
In this article by Jonathan Peppers, author of the book Xamarin Cross-platform Application Development, we will see how Xamarin’s tools promise to share a good portion of your code between iOS and Android while taking advantage of the native APIs on each platform where possible. Doing so is an exercise in software engineering more than a programming skill or having the knowledge of each platform. To architect a Xamarin application to enable code sharing, it is a must to separate your application into distinct layers. We’ll cover the basics of this in this article as well as specific options to consider in certain situations. In this article, we will cover: - The MVVM design pattern for code sharing - Project and solution organization strategies - Portable Class Libraries (PCLs) - Preprocessor statements for platform-specific code - Dependency injection (DI) simplified - Inversion of Control (IoC) (For more resources related to this topic, see here.) Learning the MVVM design pattern The Model-View-ViewModel (MVVM) design pattern was originally invented for Windows Presentation Foundation (WPF) applications using XAML for separating the UI from business logic and taking full advantage of data binding. Applications architected in this way have a distinct ViewModel layer that has no dependencies on its user interface. This architecture in itself is optimized for unit testing as well as cross-platform development. Since an application’s ViewModel classes have no dependencies on the UI layer, you can easily swap an iOS user interface for an Android one and write tests against the ViewModellayer. The MVVM design pattern is also very similar to the MVC design pattern. The MVVM design pattern includes the following: - Model: The Model layer is the backend business logic that drives the application and any business objects to go along with it. This can be anything from making web requests to a server to using a backend database. - View: This layer is the actual user interface seen on the screen. In the case of cross-platform development, it includes any platform-specific code for driving the user interface of the application. On iOS, this includes controllers used throughout an application, and on Android, an application’s activities. - ViewModel: This layer acts as the glue in MVVM applications. The ViewModel layerscoordinate operations between the View and Model layers. A ViewModel layer will contain properties that the View will get or set, and functions for each operation that can be made by the user on each View. The ViewModel layer will also invoke operations on the Model layer if needed. The following figure shows you the MVVM design pattern: It is important to note that the interaction between the View and ViewModel layers is traditionally created by data binding with WPF. However, iOS and Android do not have built-in data binding mechanisms, so our general approach throughout the article will be to manually call the ViewModel layer from the View layer. There are a few frameworks out there that provide data binding functionality such as MVVMCross and Xamarin.Forms. Implementing MVVM in an example To understand this pattern better, let’s implement a common scenario. Let’s say we have a search box on the screen and a search button. When the user enters some text and clicks on the button, a list of products and prices will be displayed to the user. In our example, we use the async and await keywords that are available in C# 5 to simplify asynchronous programming. To implement this feature, we will start with a simple model class (also called a business object) as follows: public class Product { public int Id { get; set; } //Just a numeric identifier public string Name { get; set; } //Name of the product public float Price { get; set; } //Price of the product } Next, we will implement our Model layer to retrieve products based on the searched term. This is where the business logic is performed, expressing how the search needs to actually work. This is seen in the following lines of code: // An example class, in the real world would talk to a web // server or database. public class ProductRepository { // a sample list of products to simulate a database private Product[] products = new[] { new Product { Id = 1, Name = “Shoes”, Price = 19.99f }, new Product { Id = 2, Name = “Shirt”, Price = 15.99f }, new Product { Id = 3, Name = “Hat”, Price = 9.99f }, }; public async Task SearchProducts( string searchTerm) { // Wait 2 seconds to simulate web request await Task.Delay(2000); // Use Linq-to-objects to search, ignoring case searchTerm = searchTerm.ToLower(); return products.Where(p => p.Name.ToLower().Contains(searchTerm)) .ToArray(); } } It is important to note here that the Product and ProductRepository classes are both considered as a part of the Model layer of a cross-platform application. Some might consider ProductRepository as a service that is generally a self-contained class to retrieve data. It is a good idea to separate this functionality into two classes. The Product class’s job is to hold information about a product, while the ProductRepository class is in charge of retrieving products. This is the basis for the single responsibility principle, which states that each class should only have one job or concern. Next, we will implement a ViewModel class as follows: public class ProductViewModel { private readonly ProductRepository repository = new ProductRepository(); public string SearchTerm { get; set; } public Product[] Products { get; private set; } public async Task Search() { if (string.IsNullOrEmpty(SearchTerm)) Products = null; else Products = await repository.SearchProducts(SearchTerm); } } From here, your platform-specific code starts. Each platform will handle managing an instance of a ViewModel class, setting the SearchTerm property, and calling Search when the button is clicked. When the task completes, the user interface layer will update a list displayed on the screen. If you are familiar with the MVVM design pattern used with WPF, you might notice that we are not implementing INotifyPropertyChanged for data binding. Since iOS and Android don’t have the concept of data binding, we omitted this functionality. If you plan on having a WPF or Windows 8 version of your mobile application or are using a framework that provides data binding, you should implement support for it where needed. Comparing project organization strategies You might be asking yourself at this point, how do I set up my solution in Xamarin Studio to handle shared code and also have platform-specific projects? Xamarin.iOS applications can only reference Xamarin.iOS class libraries, so setting up a solution can be problematic. There are several strategies for setting up a cross-platform solution, each with its own advantages and disadvantages. Options for cross-platform solutions are as follows: - File Linking: For this option, you will start with either a plain .NET 4.0 or .NET 4.5 class library that contains all the shared code. You would then have a new project for each platform you want your app to run on. Each platform-specific project will have a subdirectory with all of the files linked in from the first class library. To set this up, add the existing files to the project and select the Add a link to the file option. Any unit tests can run against the original class library. The advantages and disadvantages of file linking are as follows: - Advantages: This approach is very flexible. You can choose to link or not link certain files and can also use preprocessor directives such as #if IPHONE. You can also reference different libraries on Android versus iOS. - Disadvantages: You have to manage a file’s existence in three projects: core library, iOS, and Android. This can be a hassle if it is a large application or if many people are working on it. This option is also a bit outdated since the arrival of shared projects. - Cloned Project Files: This is very similar to file linking. The main difference being that you have a class library for each platform in addition to the main project. By placing the iOS and Android projects in the same directory as the main project, the files can be added without linking. You can easily add files by right-clicking on the solution and navigating to Display Options | Show All Files. Unit tests can run against the original class library or the platform-specific versions: - Advantages: This approach is just as flexible as file linking, but you don’t have to manually link any files. You can still use preprocessor directives and reference different libraries on each platform. - Disadvantages: You still have to manage a file’s existence in three projects. There is additionally some manual file arranging required to set this up. You also end up with an extra project to manage on each platform. This option is also a bit outdated since the arrival of shared projects. - Shared Projects: Starting with Visual Studio 2013 Update 2, Microsoft created the concept of shared projects to enable code sharing between Windows 8 and Windows Phone apps. Xamarin has also implemented shared projects in Xamarin Studio as another option to enable code sharing. Shared projects are virtually the same as file linking, since adding a reference to a shared project effectively adds its files to your project: - Advantages: This approach is the same as file linking, but a lot cleaner since your shared code is in a single project. Xamarin Studio also provides a dropdown to toggle between each referencing project, so that you can see the effect of preprocessor statements in your code. - Disadvantages: Since all the files in a shared project get added to each platform’s main project, it can get ugly to include platform-specific code in a shared project. Preprocessor statements can quickly get out of hand if you have a large team or have team members that do not have a lot of experience. A shared project also doesn’t compile to a DLL, so there is no way to share this kind of project without the source code. - Portable Class Libraries: This is the most optimal option; you begin the solution by making a Portable Class Library (PCL) project for all your shared code. This is a special project type that allows multiple platforms to reference the same project, allowing you to use the smallest subset of C# and the .NET framework available in each platform. Each platform-specific project will reference this library directly as well as any unit test projects: - Advantages: All your shared code is in one project, and all platforms use the same library. Since preprocessor statements aren’t possible, PCL libraries generally have cleaner code. Platform-specific code is generally abstracted away by interfaces or abstract classes. - Disadvantages: You are limited to a subset of .NET depending on how many platforms you are targeting. Platform-specific code requires use of dependency injection, which can be a more advanced topic for developers not familiar with it. Setting up a cross-platform solution To understand each option completely and what different situations call for, let’s define a solution structure for each cross-platform solution. Let’s use the product search example and set up a solution for each approach. To set up file linking, perform the following steps: - Open Xamarin Studio and start a new solution. - Select a new Library project under the general C# section. - Name the project ProductSearch.Core, and name the solution ProductSearch. - Right-click on the newly created project and select Options. - Navigate to Build | General, and set the Target Framework option to .NET Framework 4.5. -. Then, navigate to iOS | iPhone | Single View Application and name the project ProductSearch.iOS. - Create a new Android project by right-clicking on the solution and navigating to Add | Add New Project. Create a new project by navigating to Android | Android Application and name it ProductSearch.Droid. - Add a new folder named Core to both the iOS and Android projects. - Right-click on the new folder for the iOS project and navigate to Add | Add Files from Folder. Select the root directory for the ProductSearch.Core project. - Check the three C# files in the root of the project. An Add File to Folder dialog will appear. - Select Add a link to the file and make sure that the Use the same action for all selected files checkbox is selected. - Repeat this process for the Android project. - Navigate to Build | Build All from the menu at the top to double-check everything. You have successfully set up a cross-platform solution with file linking. When all is done, you will have a solution tree that looks something like what you can see in the following screenshot: You should consider using this technique when you have to reference different libraries on each platform. You might consider using this option if you are using MonoGame, or other frameworks that require you to reference a different library on iOS versus Android. Setting up a solution with the cloned project files approach is similar to file linking, except that you will have to create an additional class library for each platform. To do this, create an Android library project and an iOS library project in the same ProductSearch.Core directory. You will have to create the projects and move them to the proper folder manually, then re-add them to the solution. Right-click on the solution and navigate to Display Options | Show All Files to add the required C# files to these two projects. Your main iOS and Android projects can reference these projects directly. Your project will look like what is shown in the following screenshot, with ProductSearch.iOS referencing ProductSearch.Core.iOS and ProductSearch.Droid referencing ProductSearch.Core.Droid: Working with Portable Class Libraries A Portable Class Library (PCL) is a C# library project that can be supported on multiple platforms, including iOS, Android, Windows, Windows Store apps, Windows Phone, Silverlight, and Xbox 360. PCLs have been an effort by Microsoft to simplify development across different versions of the .NET framework. Xamarin has also added support for iOS and Android for PCLs. Many popular cross-platform frameworks and open source libraries are starting to develop PCL versions such as Json.NET and MVVMCross. Using PCLs in Xamarin Let’s create our first portable class library: - Open Xamarin Studio and start a new solution. - Select a new Portable Library project under the general C# section. - Name the project ProductSearch.Core and name the solution ProductSearch. -. Create a new project by navigating to iOS | iPhone | Single View Application and name it ProductSearch.iOS. - Create a new Android project by right-clicking on the solution and navigating to Add | Add New Project. Then, navigate to Android | Android Application and name the project ProductSearch.Droid. - Simply add a reference to the portable class library from the iOS and Android projects. - Navigate to Build | Build All from the top menu and you have successfully set up a simple solution with a portable library. Each solution type has its distinct advantages and disadvantages. PCLs are generally better, but there are certain cases where they can’t be used. For example, if you were using a library such as MonoGame, which is a different library for each platform, you would be much better off using a shared project or file linking. Similar issues would arise if you needed to use a preprocessor statement such as #if IPHONE or a native library such as the Facebook SDK on iOS or Android. Setting up a shared project is almost the same as setting up a portable class library. In step 2, just select Shared Project under the general C# section and complete the remaining steps as stated. Using preprocessor statements When using shared projects, file linking, or cloned project files, one of your most powerful tools is the use of preprocessor statements. If you are unfamiliar with them, C# has the ability to define preprocessor variables such as #define IPHONE , allowing you to use #if IPHONE or #if !IPHONE. The following is a simple example of using this technique: #if IPHONE Console.WriteLine(“I am running on iOS”); #elif ANDROID Console.WriteLine(“I am running on Android”); #else Console.WriteLine(“I am running on ???”); #endif In Xamarin Studio, you can define preprocessor variables in your project’s options by navigating to Build | Compiler | Define Symbols, delimited with semicolons. These will be applied to the entire project. Be warned that you must set up these variables for each configuration setting in your solution (Debug and Release); this can be an easy step to miss. You can also define these variables at the top of any C# file by declaring #define IPHONE, but they will only be applied within the C# file. Let’s go over another example, assuming that we want to implement a class to open URLs on each platform: public static class Utility { public static void OpenUrl(string url) { //Open the url in the native browser } } The preceding example is a perfect candidate for using preprocessor statements, since it is very specific to each platform and is a fairly simple function. To implement the method on iOS and Android, we will need to take advantage of some native APIs. Refactor the class to look as follows: #if IPHONE //iOS using statements using MonoTouch.Foundation; using MonoTouch.UIKit; #elif ANDROID //Android using statements using Android.App; using Android.Content; using Android.Net; #else //Standard .Net using statement using System.Diagnostics; #endif public static class Utility { #if ANDROID public static void OpenUrl(Activity activity, string url) #else public static void OpenUrl(string url) #endif { //Open the url in the native browser #if IPHONE UIApplication.SharedApplication.OpenUrl( NSUrl.FromString(url)); #elif ANDROID var intent = new Intent(Intent.ActionView, Uri.Parse(url)); activity.StartActivity(intent); #else Process.Start(url); #endif } } The preceding class supports three different types of projects: Android, iOS, and a standard Mono or .NET framework class library. In the case of iOS, we can perform the functionality with static classes available in Apple’s APIs. Android is a little more problematic and requires an Activity object to launch a browser natively. We get around this by modifying the input parameters on Android. Lastly, we have a plain .NET version that uses Process.Start() to launch a URL. It is important to note that using the third option would not work on iOS or Android natively, which necessitates our use of preprocessor statements. Using preprocessor statements is not normally the cleanest or the best solution for cross-platform development. They are generally best used in a tight spot or for very simple functions. Code can easily get out of hand and can become very difficult to read with many #if statements, so it is always better to use it in moderation. Using inheritance or interfaces is generally a better solution when a class is mostly platform specific. Simplifying dependency injection Dependency injection at first seems like a complex topic, but for the most part it is a simple concept. It is a design pattern aimed at making your code within your applications more flexible so that you can swap out certain functionality when needed. The idea builds around setting up dependencies between classes in an application so that each class only interacts with an interface or base/abstract class. This gives you the freedom to override different methods on each platform when you need to fill in native functionality. The concept originated from the SOLID object-oriented design principles, which is a set of rules you might want to research if you are interested in software architecture. There is a good article about SOLID on Wikipedia, () if you would like to learn more. The D in SOLID, which we are interested in, stands for dependencies. Specifically, the principle declares that a program should depend on abstractions, not concretions (concrete types). To build upon this concept, let’s walk you through the following example: - Let’s assume that we need to store a setting in an application that determines whether the sound is on or off. - Now let’s declare a simple interface for the setting: interface ISettings { bool IsSoundOn { get; set; } }. - On iOS, we’d want to implement this interface using the NSUserDefaults class. - Likewise, on Android, we will implement this using SharedPreferences. - Finally, any class that needs to interact with this setting will only reference ISettings so that the implementation can be replaced on each platform. For reference, the full implementation of this example will look like the following snippet: public interface ISettings { bool IsSoundOn { get; set; } } //On iOS using MonoTouch.UIKit; using MonoTouch.Foundation; public class AppleSettings : ISettings { public bool IsSoundOn { get { return NSUserDefaults.StandardUserDefaults BoolForKey(“IsSoundOn”); } set { var defaults = NSUserDefaults.StandardUserDefaults; defaults.SetBool(value, “IsSoundOn”); defaults.Synchronize(); } } } //On Android using Android.Content; public class DroidSettings : ISettings { private readonly ISharedPreferences preferences; public DroidSettings(Context context) { preferences = context.GetSharedPreferences( context.PackageName, FileCreationMode.Private); } public bool IsSoundOn { get { return preferences.GetBoolean(“IsSoundOn”, true”); } set { using (var editor = preferences.Edit()) { editor.PutBoolean(“IsSoundOn”, value); editor.Commit(); } } } } Now you will potentially have a ViewModel class that will only reference ISettings when following the MVVM pattern. It can be seen in the following snippet: public class SettingsViewModel { private readonly ISettings settings; public SettingsViewModel(ISettings settings) { this.settings = settings; } public bool IsSoundOn { get; set; } public void Save() { settings.IsSoundOn = IsSoundOn; } } Using a ViewModel layer for such a simple example is not necessarily needed, but you can see it would be useful if you needed to perform other tasks such as input validation. A complete application might have a lot more settings and might need to present the user with a loading indicator. Abstracting out your setting’s implementation has other benefits that add flexibility to your application. Let’s say you suddenly need to replace NSUserDefaults on iOS with the iCloud instead; you can easily do so by implementing a new ISettings class and the remainder of your code will remain unchanged. This will also help you target new platforms such as Windows Phone, where you might choose to implement ISettings in a platform-specific way. Implementing Inversion of Control You might be asking yourself at this point in time, how do I switch out different classes such as the ISettings example? Inversion of Control (IoC) is a design pattern meant to complement dependency injection and solve this problem. The basic principle is that many of the objects created throughout your application are managed and created by a single class. Instead of using the standard C# constructors for your ViewModel or Model classes, a service locator or factory class will manage them throughout the application. There are many different implementations and styles of IoC, so let’s implement a simple service locator class as follows: public static class ServiceContainer { static readonly Dictionary > services = new Dictionary >(); public static void Register (Func function) { services[typeof(T)] = new Lazy This class is inspired by the simplicity of XNA/MonoGame’s GameServiceContainer class and follows the service locator pattern. The main differences are the heavy use of generics and the fact that it is a static class. To use our ServiceContainer class, we will declare the version of ISettings or other interfaces that we want to use throughout our application by calling Register, as seen in the following lines of code: //iOS version of ISettings ServiceContainer.Register (() => new AppleSettings()); //Android version of ISettings ServiceContainer.Register (() => new DroidSettings()); //You can even register ViewModels ServiceContainer.Register (() => new SettingsViewModel()); On iOS, you can place this registration code in either your static void Main() method or in the FinishedLaunching method of your AppDelegate class. These methods are always called before the application is started. On Android, it is a little more complicated. You cannot put this code in the OnCreate method of your activity that acts as the main launcher. In some situations, the Android OS can close your application but restart it later in another activity. This situation is likely to cause an exception somewhere. The guaranteed safe place to put this is in a custom Android Application class which has an OnCreate method that is called prior to any activities being created in your application. The following lines of code show you the use of the Application class: [Application]public class Application : Android.App.Application { //This constructor is required public Application(IntPtr javaReference, JniHandleOwnership transfer): base(javaReference, transfer) { } public override void OnCreate() { base.OnCreate(); //IoC Registration here } } To pull a service out of the ServiceContainer class, we can rewrite the constructor of the SettingsViewModel class so that it is similar to the following lines of code: public SettingsViewModel() { this.settings = ServiceContainer.Resolve (); } Likewise, you will use the generic Resolve method to pull out any ViewModel classes you would need to call from within controllers on iOS or activities on Android. This is a great, simple way to manage dependencies within your application. There are, of course, some great open source libraries out there that implement IoC for C# applications. You might consider switching to one of them if you need more advanced features for service location or just want to graduate to a more complicated IoC container. Here are a few libraries that have been used with Xamarin projects: - TinyIoC: - Ninject: - MvvmCross: includes a full MVVM framework as well as IoC - Simple Injector: - OpenNETCF.IoC: Summary In this article, we learned about the MVVM design pattern and how it can be used to better architect cross-platform applications. We compared several project organization strategies for managing a Xamarin Studio solution that contains both iOS and Android projects. We went over portable class libraries as the preferred option for sharing code and how to use preprocessor statements as a quick and dirty way to implement platform-specific code. After completing this article, you should be able to speed up with several techniques for sharing code between iOS and Android applications using Xamarin Studio. Using the MVVM design pattern will help you divide your shared code and code that is platform specific. We also covered several options for setting up cross-platform Xamarin solutions. You should also have a firm understanding of using dependency injection and Inversion of Control to give your shared code access to the native APIs on each platform. Resources for Article: Further resources on this subject: - XamChat – a Cross-platform App [article] - Configuring Your Operating System [article] - Updating data in the background [article]
https://hub.packtpub.com/code-sharing-between-ios-and-android/
CC-MAIN-2020-34
refinedweb
4,371
53.71
Python Hashing (Hash tables and hashlib) When we talk about hash tables, we're actually talking about dictionary. While an array can be used to construct hash tables, array indexes its elements using integers. However, if we want to store data and use keys other than integer, such as 'string', we may want to use dictionary. Dictionaries in Python are implemented using hash tables. It is an array whose indexes are obtained using a hash function on the keys. We declare an empty dictionary like this: >>> D = {} Then, we can add its elements: >>> D['a'] = 1 >>> D['b'] = 2 >>> D['c'] = 3 >>> D {'a': 1, 'c': 3, 'b': 2} It's a structure with (key, value) pair: D[key] = value The string used to "index" the hash table D is called the "key". To access the data stored in the table, we need to know the key: >>> D['b'] 2 How we loop through the hash table? >>> for k in D.keys(): ... print D[k] ... 1 3 2 If we want to print the (key, value) pair: >>> for k,v in D.items(): ... print k,':',v ... a : 1 c : 3 b : 2 Using two Arrays of equal length, create a Hash object where the elements from one array (the keys) are associated with the elements of the other (the values): >>> keys = ['a', 'b', 'c'] >>> values = [1, 2, 3] >>> hash = {k:v for k, v in zip(keys, values)} >>> hash {'a': 1, 'c': 3, 'b': 2} Here are some hashing samples using built-in hash function: >>> map(hash, [0, 1, 2, 3]) [0, 1, 2, 3] >>> map(hash, ['0','1','2','3']) [6144018481, 6272018864, 6400019251, 6528019634] >>> hash('0') 6144018481 As we can see from the example, Python is using different hash() function depending on the type of data. Python provides hashlib for secure hashes and message digests: md5(), sha*(): >>> import hashlib >>> hashlib.md5('a') >>> hashlib.md5('a').digest() '\x0c\xc1u\xb9\xc0\xf1\xb6\xa81\xc3\x99\xe2iw&a;' >>> hashlib.md5('a').hexdigest() '0cc175b9c0f1b6a831c399e269772661' >>> hashlib.sha512('a') >>> hashlib.sha512('a').digest() '\x1f@\xfc\x92\xda$\x16\x94u\ty\xeel\xf5\x82\xf2\xd5\xd7\xd2\x8e\x183]\xe0Z\xbcT\xd0V\x0e\x0fS\x02\x86\x0ce+\xf0\x8dV\x02R\xaa^t!\x05F\xf3i\xfb\xbb\xce\x8c\x12\xcf\xc7\x95{&R;\xfe\x9au' >>> hashlib.sha512('a').hexdig' >>> The following code is a revision from Sets (union/intersection) and itertools - Jaccard coefficient & shingling to check plagiarism. In this section, we used 64 bit integer (hash value from hash()) for the comparison of shingles instead of directly working on the string. from __future__ import division import itertools import re import hashlib # a shingle in this code is a string with K-words K = 4 def jaccard_set(s1, s2): " takes two sets and returns Jaccard coefficient" u = s1.union(s2) i = s1.intersection(s2) return len(i)/len(u) def make_a_set_of_tokens(doc): """makes a set of K-tokens""" # replace non-alphanumeric char with a space, and then split tokens = re.sub("[^\w]", " ", doc).split() sh = set() for i in range(len(tokens)-K): t = tokens[i] for x in tokens[i+1:i+K]: t += ' ' + x sh.add(t) return sh if __name__ == '__main__': documents = [."] shingles = [] # handle documents one by one # makes a list of sets which are compresized of a list of K words string for doc in documents: # makes a set of tokens # sh = set([' ', ..., ' ']) sh = make_a_set_of_tokens(doc) print("sh = %s") %(sh) # hasing bucket = map(hash, sh) # print("bucket = %s") %(bucket) # shingles : list of sets (sh) shingles.append(set(bucket)) #print("shingles=%s") %(shingles) combinations = list( itertools.combinations([x for x in range(len(shingles))], 2) ) print("combinations=%s") %(combinations) # compare each pair in combinations tuple of shingles for c in combinations: i1 = c[0] i2 = c[1] jac = jaccard_set(shingles[i1], shingles[i2]) print("%s : jaccard=%s") %(c,jac) Output is exactly the same as the one we got using string comparison: combinations=[(0, 1), (0, 2), (1, 2)] (0, 1) : jaccard=0.0196078431373 (0, 2) : jaccard=0.0 (1, 2) : jaccard=0.0
http://www.bogotobogo.com/python/python_hash_tables_hashing_dictionary_associated_arrays.php
CC-MAIN-2017-26
refinedweb
673
50.87
Contents - Abstract - Rationale - Guidelines for New Entries - Migration Issues - Modernization Procedures - Python 2.4 or Later - Python 2.3 or Later - Python 2.2 or Later - Python 2.1 or Later - Python 2.0 or Later - Python 1.5 or Later - All Python Versions - References Abstract This PEP is a collection of procedures and ideas for updating Python applications when newer versions of Python are installed. The migration tips highlight possible areas of incompatibility and make suggestions on how to find and resolve those differences. The modernization procedures show how older code can be updated to take advantage of new language features. Rationale This repository of procedures serves as a catalog or checklist of known migration issues and procedures for addressing those issues. Migration issues can arise for several reasons. Some obsolete features are slowly deprecated according to the guidelines in PEP 4 [1]. Also, some code relies on undocumented behaviors which are subject to change between versions. Some code may rely on behavior which was subsequently shown to be a bug and that behavior changes when the bug is fixed. Modernization options arise when new versions of Python add features that allow improved clarity or higher performance than previously available. Guidelines for New Entries Developers with commit access may update this PEP directly. Others can send their ideas to a developer for possible inclusion. While a consistent format makes the repository easier to use, feel free to add or subtract sections to improve clarity. Grep patterns may be supplied as tool to help maintainers locate code for possible updates. However, fully automated search/replace style regular expressions are not recommended. Instead, each code fragment should be evaluated individually. The contra-indications section is the most important part of a new entry. It lists known situations where the update SHOULD NOT be applied. Migration Issues Comparison Operators Not a Shortcut for Producing 0 or 1 Prior to Python 2.3, comparison operations returned 0 or 1 rather than True or False. Some code may have used this as a shortcut for producing zero or one in places where their boolean counterparts are not appropriate. For example: def identity(m=1): """Create and m-by-m identity matrix""" return [[i==j for i in range(m)] for j in range(m)] In Python 2.2, a call to identity(2) would produce: [[1, 0], [0, 1]] In Python 2.3, the same call would produce: [[True, False], [False, True]] Since booleans are a subclass of integers, the matrix would continue to calculate normally, but it will not print as expected. The list comprehension should be changed to read: return [[int(i==j) for i in range(m)] for j in range(m)] There are similiar concerns when storing data to be used by other applications which may expect a number instead of True or False. Modernization Procedures Procedures are grouped by the Python version required to be able to take advantage of the modernization. Python 2.4 or Later Inserting and Popping at the Beginning of Lists Python's lists are implemented to perform best with appends and pops on the right. Use of pop(0) or insert(0, x) triggers O(n) data movement for the entire list. To help address this need, Python 2.4 introduces a new container, collections.deque() which has efficient append and pop operations on the both the left and right (the trade-off is much slower getitem/setitem access). The new container is especially helpful for implementing data queues: Pattern: c = list(data) --> c = collections.deque(data) c.pop(0) --> c.popleft() c.insert(0, x) --> c.appendleft() Locating: grep pop(0 or grep insert(0 Simplifying Custom Sorts In Python 2.4, the sort method for lists and the new sorted built-in function both accept a key function for computing sort keys. Unlike the cmp function which gets applied to every comparison, the key function gets applied only once to each record. It is much faster than cmp and typically more readable while using less code. The key function also maintains the stability of the sort (records with the same key are left in their original order. Original code using a comparison function: names.sort(lambda x,y: cmp(x.lower(), y.lower())) Alternative original code with explicit decoration: tempnames = [(n.lower(), n) for n in names] tempnames.sort() names = [original for decorated, original in tempnames] Revised code using a key function: names.sort(key=str.lower) # case-insensitive sort Locating: grep sort *.py Replacing Common Uses of Lambda In Python 2.4, the operator module gained two new functions, itemgetter() and attrgetter() that can replace common uses of the lambda keyword. The new functions run faster and are considered by some to improve readability. Pattern: lambda r: r[2] --> itemgetter(2) lambda r: r.myattr --> attrgetter('myattr') Typical contexts: sort(studentrecords, key=attrgetter('gpa')) # set a sort field map(attrgetter('lastname'), studentrecords) # extract a field Locating: grep lambda *.py Simplified Reverse Iteration Python 2.4 introduced the reversed builtin function for reverse iteration. The existing approaches to reverse iteration suffered from wordiness, performance issues (speed and memory consumption), and/or lack of clarity. A preferred style is to express the sequence in a forwards direction, apply reversed to the result, and then loop over the resulting fast, memory friendly iterator. Original code expressed with half-open intervals: for i in range(n-1, -1, -1): print seqn[i] Alternative original code reversed in multiple steps: rseqn = list(seqn) rseqn.reverse() for value in rseqn: print value Alternative original code expressed with extending slicing: for value in seqn[::-1]: print value Revised code using the reversed function: for value in reversed(seqn): print value Python 2.3 or Later Testing String Membership In Python 2.3, for string2 in string1, the length restriction on string2 is lifted; it can now be a string of any length. When searching for a substring, where you don't care about the position of the substring in the original string, using the in operator makes the meaning clear. Pattern: string1.find(string2) >= 0 --> string2 in string1 string1.find(string2) != -1 --> string2 in string1 Replace apply() with a Direct Function Call In Python 2.3, apply() was marked for Pending Deprecation because it was made obsolete by Python 1.6's introduction of * and ** in function calls. Using a direct function call was always a little faster than apply() because it saved the lookup for the builtin. Now, apply() is even slower due to its use of the warnings module. Pattern: apply(f, args, kwds) --> f(*args, **kwds) Note: The Pending Deprecation was removed from apply() in Python 2.3.3 since it creates pain for people who need to maintain code that works with Python versions as far back as 1.5.2, where there was no alternative to apply(). The function remains deprecated, however. Python 2.2 or Later Testing Dictionary Membership For testing dictionary membership, use the 'in' keyword instead of the 'has_key()' method. The result is shorter and more readable. The style becomes consistent with tests for membership in lists. The result is slightly faster because has_key requires an attribute search and uses a relatively expensive function call. Pattern: if d.has_key(k): --> if k in d: Contra-indications: Some dictionary-like objects may not define a __contains__() method: if dictlike.has_key(k) Locating: grep has_key Looping Over Dictionaries Use the new iter methods for looping over dictionaries. The iter methods are faster because they do not have to create a new list object with a complete copy of all of the keys, values, or items. Selecting only keys, values, or items (key/value pairs) as needed saves the time for creating throwaway object references and, in the case of items, saves a second hash look-up of the key. Pattern: for key in d.keys(): --> for key in d: for value in d.values(): --> for value in d.itervalues(): for key, value in d.items(): --> for key, value in d.iteritems(): Contra-indications: If you need a list, do not change the return type: def getids(): return d.keys() Some dictionary-like objects may not define iter methods: for k in dictlike.keys(): Iterators do not support slicing, sorting or other operations: k = d.keys(); j = k[:] Dictionary iterators prohibit modifying the dictionary: for k in d.keys(): del[k] stat Methods Replace stat constants or indices with new os.stat attributes and methods. The os.stat attributes and methods are not order-dependent and do not require an import of the stat module. Pattern: os.stat("foo")[stat.ST_MTIME] --> os.stat("foo").st_mtime os.stat("foo")[stat.ST_MTIME] --> os.path.getmtime("foo") Locating: grep os.stat or grep stat.S Reduce Dependency on types Module The types module is likely to be deprecated in the future. Use built-in constructor functions instead. They may be slightly faster. Pattern: isinstance(v, types.IntType) --> isinstance(v, int) isinstance(s, types.StringTypes) --> isinstance(s, basestring) Full use of this technique requires Python 2.3 or later (basestring was introduced in Python 2.3), but Python 2.2 is sufficient for most uses. Locating: grep types *.py | grep import Avoid Variable Names that Clash with the __builtins__ Module In Python 2.2, new built-in types were added for dict and file. Scripts should avoid assigning variable names that mask those types. The same advice also applies to existing builtins like list. Pattern: file = open('myfile.txt') --> f = open('myfile.txt') dict = obj.__dict__ --> d = obj.__dict__ Locating: grep 'file ' *.py Python 2.1 or Later whrandom Module Deprecated All random-related methods have been collected in one place, the random module. Pattern: import whrandom --> import random Locating: grep whrandom Python 2.0 or Later String Methods The string module is likely to be deprecated in the future. Use string methods instead. They're faster too. Pattern: import string ; string.method(s, ...) --> s.method(...) c in string.whitespace --> c.isspace() Locating: grep string *.py | grep import startswith and endswith String Methods Use these string methods instead of slicing. No slice has to be created and there's no risk of miscounting. Pattern: "foobar"[:3] == "foo" --> "foobar".startswith("foo") "foobar"[-3:] == "bar" --> "foobar".endswith("bar") The atexit Module The atexit module supports multiple functions to be executed upon program termination. Also, it supports parameterized functions. Unfortunately, its implementation conflicts with the sys.exitfunc attribute which only supports a single exit function. Code relying on sys.exitfunc may interfere with other modules (including library modules) that elect to use the newer and more versatile atexit module. Pattern: sys.exitfunc = myfunc --> atexit.register(myfunc) Python 1.5 or Later Class-Based Exceptions String exceptions are deprecated, so derive from the Exception base class. Unlike the obsolete string exceptions, class exceptions all derive from another exception or the Exception base class. This allows meaningful groupings of exceptions. It also allows an "except Exception" clause to catch all exceptions. Pattern: NewError = 'NewError' --> class NewError(Exception): pass Locating: Use PyChecker [2]. All Python Versions Testing for None Since there is only one None object, equality can be tested with identity. Identity tests are slightly faster than equality tests. Also, some object types may overload comparison, so equality testing may be much slower. Pattern: if v == None --> if v is None: if v != None --> if v is not None: Locating: grep '== None' or grep '!= None'
http://legacy.python.org/dev/peps/pep-0290/
CC-MAIN-2014-10
refinedweb
1,903
59.5
ot::CallbackNode Class ReferenceThis class implements a simple node that stores a function pointer and calls it every time an event it received. More... [Common Classes] #include <CallbackNode.h> Inheritance diagram for ot::CallbackNode: Detailed DescriptionThis class implements a simple node that stores a function pointer and calls it every time an event it received. The event passed to the can be changed by the function and the changes will propagate to the parent node. Furthermore it passes the event on to its single child. - Author: - Gerhard Reitmayr Definition at line 83 of file CallbackNode.h. Constructor & Destructor Documentation constructor method,sets commend member - Parameters: - Definition at line 103 of file CallbackNode.h. Member Function Documentation This method returns the value set by the name attribute of the CallbackNode. This is a different value then the one returned by getName() which is the value set by the attribute DEF. - Returns: - reference to the name string. Definition at line 140 of file CallbackNode.h. tests for EventGenerator interface being present. Is overriden to return 1 always. - Returns: - always 1 Reimplemented from ot::Node. Definition at line 114 of file CallbackNode.h. This method notifies the object that a new event was generated. - Parameters: - Reimplemented from ot::Node. Definition at line 125 of file CallbackNode.h. Friends And Related Function Documentation Definition at line 143 of file CallbackNode.h. Member Data Documentation data pointer Definition at line 95 of file CallbackNode.h. the event passed to the function and the parent Definition at line 97 of file CallbackNode.h. callback function Definition at line 93 of file CallbackNode.h. name of the CallbackNode for retrieving it from the module. Note that this is not the name returned by getName(), rather the value set by the attribute name. Reimplemented from ot::Node. Definition at line 91 of file CallbackNode.h. Referenced by getCallbackName(). The documentation for this class was generated from the following file:
http://studierstube.icg.tugraz.at/opentracker/html/classot_1_1CallbackNode.php
CC-MAIN-2013-48
refinedweb
321
51.95
Hi everybody, Im pretty new (or u can say that completely new) to python and i try to learn from the very basic. I try to create grid data structure from array. Here i have some code, would you mind to look at it and give me an instruction: import math class Grid(object): def __init__(self, width = 10, height = 10): self.width = width self.height = height self.size = size self.grid = [] cell_value = 0 ..... then i got stucked, i would like to print out this array and then save it into text file! i look some place and it gave me this import numpy as np x = np.arange(20).reshape((4,5)) np.savetxt('test.txt', x) but i dont know how to print it then save it into text file if i have value with coordinate (x,y) later, can i change it in text file? and how? thank you very much, it will be helpful hancook
https://www.daniweb.com/programming/software-development/threads/384787/grid-data-sturcture
CC-MAIN-2017-47
refinedweb
159
83.56
Number plate detection using MNIST using Keras in Python In this article, autonomous number plate detection with the MNIST dataset is done and explained in detail from scratch starting from the training to the development of the User Interface with the help of Python programming using the Keras TensorFlow API. We have trained the numbers using the MNIST dataset which consists of 60,000 images each of 28×28 sized handwritten images commonly used for training various image processing systems. Thus, the trained model will be deployed into a customized graphical user interface, where testing will be done with various widgets and contouring tools. Therefore, these algorithms will help improve security and upgrade surveillance accuracy and ease. The two main constituents of this article are: - Training the MNIST dataset, resulting in 99.26% accuracy. - Creating the Custom User Interface, that’s the widget, contouring, and finding the digits with the trained MNIST model from Part 1. Hope this tutorial will be helpful to the readers. Let’s dive into the code and stay tuned. Happy Reading!!! PART I Here in this part, we will train the model using the MNIST dataset for handwritten digits. The MNIST dataset consists of 60,000 handwritten digits with each image constituting 28×28 pixels. The steps involved in this PART I are: - Load the dataset into the project - Add layers/ built the model - Normalization of the accumulated images - Compiling, and Training of the model. - Evaluation of the trained model is done and studied - Saving the model for easy use in PART II. Thus, this part consists of a total of 10 sections, built for easy understanding and ease of explanation. IMPORTING LIBRARIES Required Python libraries for this section are imported. from keras import layers from keras import models from keras.datasets import mnist from keras.utils import to_categorical Downloading the dataset from Keras and storing it in the images and label folders for ease. Creating the model layers using convolutional 2D layers, max-pooling, and dense layers. (train_images, train_labels), (test_images, test_labels) = mnist.load_data()')) model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10, activation='softmax')) model.summary() Downloading data from 11493376/11490434 [==============================] - 0s 0us/step Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 3, 3, 64) 36928 _________________________________________________________________ flatten (Flatten) (None, 576) 0 _________________________________________________________________ dense (Dense) (None, 64) 36928 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 93,322 Trainable params: 93,322 Non-trainable params: 0 _________________________________________________________________ NORMALIZING IMAGES Normalizing image is an important step to consider when using big datasets especially. It is in simple words, resizing and reshaping images based on a particular scale.) COMPILING AND TRAINING THE MODEL Compiling and training of the model with datasets with model.compile() and model.fit(). model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(train_images, train_labels, epochs=5, batch_size=64) Epoch 1/5 938/938 [==============================] - 52s 56ms/step - loss: 0.1651 - accuracy: 0.9480 Epoch 2/5 938/938 [==============================] - 54s 57ms/step - loss: 0.0467 - accuracy: 0.9859 Epoch 3/5 938/938 [==============================] - 56s 60ms/step - loss: 0.0328 - accuracy: 0.9891 Epoch 4/5 938/938 [==============================] - 52s 55ms/step - loss: 0.0245 - accuracy: 0.9924 Epoch 5/5 938/938 [==============================] - 51s 55ms/step - loss: 0.0194 - accuracy: 0.9942 EVALUATION OF THE MODEL This trained model is 99.26% accurate when evaluated. By increasing the number of iterations, precision can be improved. test_loss, test_acc = model.evaluate(test_images, test_labels) print(test_acc) 313/313 [==============================] - 3s 10ms/step - loss: 0.0258 - accuracy: 0.9927 0.9926999807357788 SAVING THE MODEL save() saves the trained model. model.save('mnist.h5') PART II The prime goal of this PART II is to build a customized graphical user interface. The steps involved in this project are listed below: - Setting up the Graphical user interface. - Creating the canvas. - Widgets making; including the drawing lines, clearing the canvas, predicting, etc. - Then, contouring of the digits via OpenCv and NumPy tools - Connecting the widget buttons to the functions Thus, this part consists of a total of 10 sections, split for the ease and understanding of the reader. IMPORTING THE LIBRARIES Required libraries are imported based on the needs of the project. from tkinter import * import cv2 import numpy as np from PIL import ImageGrab from keras.models import load_model load_model loads the saved model in Part I. model = load_model('C:/Users/Jerrin/Desktop/REC/mnist.h5') image_folder = "C:/Users/Jerrin/Desktop/REC" GUI SETUP Tk() is the python interface for the graphical user interface. Running these commands opens a simple window demonstrating a Tk interface. root = Tk() root.resizable(0, 0) root.title("HDR") lastx, lasty = None, None image_number = 0 DECLARATION OF CANVAS Canvas() sets the layout of the window created by the Tk command. The layout includes space where drawing can be done, place graphics, etc. Grid() formats a table-like structure for the Clear Widget section and the Recognize Digit section. cv = Canvas(root, width=640, height=480, bg='white') cv.grid(row=0, column=0, pady=2, sticky=W, columnspan=2) CLEARING WIDGETS Function to delete the handwritten digits from the grid window. def clear_widget(): global cv cv.delete('all') DRAWING LINES create.line() draws the lines in the grid window, thus helps in the origination of the handwritten digits. def draw_lines(event): global lastx, lasty x, y = event.x, event.y cv.create_line((lastx, lasty, x, y), width=8, fill='black', capstyle=ROUND, smooth=TRUE, splinesteps=12) lastx, lasty = x, y ACTIVATING THE EVENT Activating the event by extracting the point clouds. Point clouds are the coordinates of the event. def activate_event(event): global lastx, lasty cv.bind('<B1-Motion>', draw_lines) lastx, lasty = event.x, event.y cv.bind('<Button-1>', activate_event) IDENTIFYING AND CROPPING DIGITS Isolating the digits from the window and filtering of the isolated image. Then, save() saves the isolated image. def Recognize_Digit(): global image_number filename = '/predict1.jpg' widget = cv x = root.winfo_rootx() + widget.winfo_rootx() y = root.winfo_rooty() + widget.winfo_rooty() x1 = x + widget.winfo_width() y1 = y + widget.winfo_height() print(x, y, x1, y1) # get image and save ImageGrab.grab().crop((x, y, x1, y1)).save(image_folder + filename) image = cv2.imread(image_folder + filename, cv2.IMREAD_COLOR) gray = cv2.cvtColor(image.copy(), cv2.COLOR_BGR2GRAY) ret, th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) contours = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0] CONTOURING OF DIGITS Contouring of images occurs in a couple of distinctive steps. Thus, the four important steps in this contouring section are: - Cropping out the digit from the image corresponding to the current contours in the for loop - Resizing that digit to (18, 18) - Padding the digit with 5 pixels of black color (zeros) on each side to finally produce the image of (28, 28) - Prediction of the handwritten digits for cnt in contours: x, y, w, h = cv2.boundingRect(cnt) # make a rectangle box around each curve cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 1) digit = th[y:y + h, x:x + w] resized_digit = cv2.resize(digit, (18, 18)) padded_digit = np.pad(resized_digit, ((5, 5), (5, 5)), "constant", constant_values=0) digit = padded_digit.reshape(1, 28, 28, 1) digit = digit / 255.0 pred = model.predict([digit])[0] final_pred = np.argmax(pred) data = str(final_pred) + ' ' + str(int(max(pred) * 100)) + '%' font = cv2.FONT_HERSHEY_SIMPLEX fontScale = 0.5 color = (255, 0, 0) thickness = 1 cv2.putText(image, data, (x, y - 5), font, fontScale, color, thickness) cv2.imshow('image', image) cv2.waitKey(0) BUTTONS Button() creates the buttons for the interface. Thus, the function has two parameters: - text – The text to display on the button also termed as the label of the button. - command – Function or the method to be provoked when the button is clicked. btn_save = Button(text='Recognize Digit', command=Recognize_Digit) btn_save.grid(row=2, column=0, pady=1, padx=1) button_clear = Button(text='Clear Widget', command=clear_widget) button_clear.grid(row=2, column=1, pady=1, padx=1) root.mainloop() FINAL THOUGHTS Number plate detection with the help of the MNSIT dataset is thus done and elaborated in this article. Summarizing what we studied here, we trained an MNIST dataset consisting of 60,000 images using customized training methods with convolution, max-pooling techniques. Then, we saved the model. Following that, we created a custom user interface to draw our digits, recognize the digit, and to delete the digits drawn using the grid window. Thus, this technique directly enhances the autonomous detection of the number plate with live feed supporting the surveillance system in day-to-day circumstances. Hope this tutorial, in general, helped the readers in understanding how to train datasets, create a custom user interface, and deploy the trained model in a customized graphical user interface. To download the source code, click here. Thank you!!!
https://valueml.com/number-plate-detection-using-mnist-using-keras-in-python/
CC-MAIN-2021-25
refinedweb
1,489
51.24
Hi all, i am having a strange result, could you please help me explain what is going on? My environment is as follows : tomcat 4.0.1 windows 2000 jdk1.3.1 I am trying to write a jsp file that includes an applet. This applet , using Runtime.getRuntime.exec() method, will start the MSN Messenger. My applet code is as follows : import java.applet.Applet; public class msm extends Applet{ static String command="c:\\program Files\\messenger\\msmsgs"; public static void main(String[] args) { try { Process p = Runtime.getRuntime().exec(command); } catch (Exception e) { System.out.println( "Caught: " +e.toString() ); System.err.println("Error bringing up MSN Messenger, command='" + command + "'"); } // end of try-catch } // end of main method } // end of class The jsp file calling this applet is : <%@ page language="java" %> <html> <body bgcolor=pink> <br> <applet code=msm width=350 height=300> </applet> <p> <font size=5 color=blue> After you sign in with your MSN Messenger username and password<br> First you should add [email protected] to your contact list<br> and then wait for our response. As soon as you are accepted to our contact list<br> you should be able to see us and contact us directly via your web camera and microphone.<br> Hope you enjoy your membership. :)</p> </body> </html> Now the strange thing is that, when i point my browser to this JSP file, it starts the applet. But the MSN Messenger doesn't start. What do you suggest? Isn't it the MSN messenger expected to start. (NOTE !!: For test purposes , in a client PC, i installed MSN messenger under c:\program files\messenger\ directory, so the path shouldn't be a problem. Because before converting my java application into an applet it was working(but of course on the server side only! :) The only log entries about this situation is : 11.20.38.115 - - [10/May/2002:14:41:40 8000] "GET /bty/msm.jsp HTTP/1.1" 200 506 211.20.38.115 - - [10/May/2002:14:41:45 8000] "GET /bty/msmBeanInfo.class HTTP/1.1" 404 223 211.20.38.115 - - [10/May/2002:14:41:45 8000] "GET /bty/msm$COMClassObject.class HTTP/1.1" 404 237 What can be a possible solution? Thanks for taking time to help me :) -- To unsubscribe, e-mail: <mailto:[email protected]> For additional commands, e-mail: <mailto:[email protected]>
http://mail-archives.apache.org/mod_mbox/tomcat-users/200205.mbox/%3C000701c1f7ef$507342e0$7300a8c0@yilmaz%3E
CC-MAIN-2016-26
refinedweb
408
60.41
Many thanks! ziger zigs @ziger zigs Posts made by ziger zigs - - Question about I2C Pins on Daughter Board. - RE: I2C - can't find device address Just a quick word from some of my own frustrated testing with I2Cdetect: The reason -q causes the false positives to disappear is because -q is "quick write". It tells you this if you leave out -y, since -y removes the prompt that tells you what's going on. Conversely, you will get nothing but false positives if you instead use -r, which is "read" the first byte of every address to find out who's where. I think this has to do with how reading from something that isn't there results in 0xff. If you've managed to get I2C working with your Omega2, I'd be real interested in hearing what your setup is. I'm about two seconds away from giving up on the I2C libraries and just bit banging to figure things out. - Python OnionGpio Library Is there any additional control over the GPIO pins than what is offered in the Omega2 Software Libraries documentation? It feels like the OnionGpio library only offers the control of the gpioctrl command, leaving out the fast-gpio command's pwm and pulse abilities. If there's another source of documentation out there, I'd be delighted to know about it. So far all of my learning has been controlling simple IC chips as I dip my toes into the world of micro-controllers for the second time. Last time my experience ended in frustration due to a lack of clear documentation of how to control the micro-controller parts from the coding language given (C in this case). I've been happily avoiding that outcome with all of the -help options and the straight forwardness of linux (and google's help), but I've started growing concerned that when I finally move up to controlling more complex things it's going to fall apart. Let me know if there's any additional documentation out there you folks are using. - PWM flicker between pwm duty changes") - RE: Python Script Causes Crash, Restart [Resolved] For anyone having the same issue, I believe I solved the issue after some tweaking. The issue seems to be that I wasn't setting pin 3 to be input before trying to read from it. Adding the line os.system("gpioctl dirin 3")solved the crashing issue. Alternatively you could replace gpioctl dirin 3 with fast-gpio set-input 3. I'm a little concerned that the error wasn't handled better. I don't know if that fault is on Python or my Omega2, though. Anyways, happy coding everyone! - Python Script Causes Crash, Restart [Resolved] Greetings fellow Onioneers! I've been picking up Python since the ash shell sleep function can't sleep for less than a second and I thought it would be more fun to learn a new language than to relearn one I learned a lifetime ago. That's not to say I'm new to coding. I'm no expert, but I've dabbled for some time. My issue is coming from a (seemingly) simple script I wrote for Python to play with what I thought would be some simple tasks to eventually build up to a larger project I have in mind a proof of concept if you will. The script it meant to work as follows: Script starts Pin 0 is set to out, pins 2 and 3 are set to in Pin 0 is set high to turn on an LED The script loops until the button is pressed, polling pin 2 every .1 second to see if the button has been pressed On press, the pin 0 goes low, the script sleeps for 2 seconds Script loops until pin 3 reads high, where the script simply stops. This script doesn't contain anything I haven't used previously in smaller (5-10 line) scripts, but it causes my Omega2 to reboot when run! Is there something in my script in particular that is causing this? I can't read the dump that it gave before restarting, so any direction would be helpful to me. I am running firmware version 0.1.6 b137 as last I checked, the latest version contained an error that made the Omega2 throw errors when trying to use gpioctl or fast-gpio. If this has changed, let me know! Here's the code as it was run: import os import sys import time import subprocess Read GPIO" + stoppin + ": 0"): Read GPIO" + buttonpin + ": 1" and lampstate == "on" ): os.system(setpin + low + lamppin) lampstate = "off" time.sleep(2) elif(lampstate == "on"): time.sleep(sleeper) else: os.system(setpin + high + lamppin) And here's the dump: root@Omega-9D45:/csm/learning# python button.py [ 2219.519209] CPU 0 Unable to handle kernel paging request at virtual address 00000098, epc == 00000098, ra == 8004f1fc [ 2219.529986] Oops[#1]: [ 2219.532301] CPU: 0 PID: 1773 Comm: python Tainted: G W 4.4.39 #0 [ 2219.539540] task: 839c5e00 ti: 82fce000 task.ti: 82fce000 [ 2219.545013] $ 0 : 00000000 7ffaa930 00000098 803d0000 [ 2219.550332] $ 4 : 803d1890 00b33790 00000018 00b33790 [ 2219.555644] $ 8 : 1100e400 1000001f 6e656720 74617265 [ 2219.560955] $12 : 62206465 771052b0 00000000 72732079 [ 2219.566268] $16 : 803d1880 803d6100 00b33790 fffffffe [ 2219.571581] $20 : 00b33788 00b33770 00000017 00b337d0 [ 2219.576893] $24 : 00000000 770cd9c8 [ 2219.582205] $28 : 82fce000 82fcfed8 76ed6000 8004f1fc [ 2219.587520] Hi : 00000008 [ 2219.590437] Lo : 00000000 [ 2219.593361] epc : 00000098 0x98 [ 2219.596744] ra : 8004f1fc handle_percpu_irq+0x48/0x80 [ 2219.602125] Status: 1100e402 KERNEL EXL [ 2219.606113] Cause : 50808008 (ExcCode 02) [ 2219.610174] BadVA : 00000098 [ 2219.613094] PrId : 00019655 (MIPS 24KEc) [ 2219.617154] Modules linked in: pppoe ppp_async iptable_nat w1_therm w1_gpio uvcvideo snd_usb_audio pppox ppp_generic pl2303 nf_nat_ipv4 nf_conntrack_ipv6 nf_conntrack_ipv4 ipt_REJECT ipt_MASQUERADE ftdi_sio cp210x ch341 xt_time xt_tcpudp xt_state xt_nat xt_multiport xt_mark xt_mac xt_limit xt_conntrack xt_comment xt_TCPMSS xt_REDIRECT xt_LOG wire videobuf2_v4l2 usbserial usblp usbhid ums_usbat ums_sddr55 ums_sddr09 ums_karma ums_jumpshot ums_isd200 ums_freecom ums_datafab ums_cypress ums_alauda uinput [ 2219.762702] Process python (pid: 1773, threadinfo=82fce000, task=839c5e00, tls=77106e50) [ 2219.770900] Stack : 01f00006 800c30f8 00000001 00400000 82fcff08 00000007 00000020 8004b6c8 803cb360 80368188 00000000 770370c8 00001800 80011530 76fe7934 00af46e0 00000001 7ffaa95f 00000000 80004430 00000001 77c67320 00000001 00000000 771052b0 00bca800 7ffaa95f 00000000 00000000 7ffaa930 00b337e0 00b337e0 00b337f0 00b33790 00000018 00b33790 770f013e fefefeff 6e656720 74617265 ... [ 2219.807029] Call Trace: [ 2219.809521] [<800c30f8>] __fdget_pos+0x14/0x60 [ 2219.814066] [<8004b6c8>] generic_handle_irq+0x24/0x3c [ 2219.819209] [<80011530>] do_IRQ+0x1c/0x2c [ 2219.823277] [<80004430>] ret_from_irq+0x0/0x4 [ 2219.827696] [ 2219.829202] Code: (Bad address in epc) [ 2219.833179] [ 2219.834776] ---[ end trace 09977ba32a719a8d ]--- [ 2219.843451] Kernel panic - not syncing: Fatal exception in interrupt [ 2219.852185] Rebooting in 3 seconds..
http://community.onion.io/user/ziger-zigs
CC-MAIN-2019-04
refinedweb
1,114
74.59
Diabetes Prediction using Keras In Python In this, Article we will learn to implement diabetes prediction using deep learning algorithms in Python with the help of Keras deep learning API. For this purpose, we will use an open dataset and we will be creating a deep neural network architecture. You can download the dataset from the link Dataset. You can analyze the dataset after downloading and you will find that the dataset is divided into the categories of 0’s and 1’s. Now let’s move forward to implement our model in python using TensorFlow and Keras. I hope you have pre-installed all the libraries in your local system, If you have not installed it no issues, you can open google Colab and practice this tutorial with me. Diabetes Prediction Using Deep Neural Network Now let’s move forward to import the required Python libraries in our notebook: In the above block of code, we are importing the required library in our notebook. Keras API already comes with the TensorFlow deep learning module of Python that plays an important role in the diabetes prediction task. Moving forward to load our dataset. data=pd.read_csv("/content/drive/My Drive/Internship/prima-indians-diabetes.csv") data.head() In the above block of code, we are loading the dataset and looking to the top 10 data using head(). data.tail() You can also see the dataset from the bottom using tail(). The output looks like this you can see below: columns are not seemed to be meaningful right? Let’s move to rename the column names. data = data.rename(index=str, columns={"6":"preg"}) data = data.rename(index=str, columns={"148":"gluco"}) data = data.rename(index=str, columns={"72":"bp"}) data = data.rename(index=str, columns={"35":"stinmm"}) data = data.rename(index=str, columns={"0":"insulin"}) data = data.rename(index=str, columns={"33.6":"mass"}) data =data.rename(index=str, columns={"0.627":"dpf"}) data = data.rename(index=str, columns={"50":"age"}) data = data.rename(index=str, columns={"1":"target"}) data.head() Output: Now looking cool after renaming. Let’s move to the next step. data.describe() We are using describe () to see our dataset more closely and in a meaningful manner, you can see the output below. Don’t forget to see the mean and standard deviation value given below. Output: mpl.rcParams['figure.figsize']=25,15 plt.matshow(data.corr()) plt.yticks(np.arange(data.shape[1]),data.columns) plt.yticks(np.arange(data.shape[1]),data.columns) plt.colorbar() Output: In the above block of code, we are looking at the correlation matrix of our dataset data.hist() Output: In the above block of code, we can see the histogram plot of each column of our dataset. Hope you are following this tutorial with me, Let’s move forward to do some more analysis of our dataset. mpl.rcParams['figure.figsize']=10,8 plt.bar(data['target'].unique(), data['target'].value_counts(), color = ['blue', 'pink']) plt.xticks([0,1]) plt.xlabel('Target class') plt.ylabel('count') plt.title('count of our each target class') Output: In the above block of code, you can see the output and verify the count of the target class. X = data.iloc[:, :-1] Y = data.iloc[:,8] Y Output: In the above block of code, you can see that we have divided our dataset into train input and the target dataset means the first 8 columns will act as input feature for our model and the last column will work as target class. X_train_full, X_test, y_train_full, y_test = train_test_split(X, Y, random_state=42) X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42) In the above block of code, I have converted my dataset into a train and test dataset and further, I have converted my train and test dataset into train test and validation. from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_valid = scaler.transform(X_valid) X_test = scaler.transform(X_test) X_train Output: In the above block of code, I have converted my dataset into a standard scaler format. Basically standard scalar format is used to remove the mean and used to scale each feature to unit variance. np.random.seed(42) tf.random.set_seed(42) here I am using a random seed to generate a pseudo-random number and setting to our tf graph. X_train.shape Output: (431, 8) you can see the shape of the training sample and on the basis of this shape only we are going to define our deep neural network model. model=Sequential() model.add(Dense(15,input_dim=8, activation='relu')) model.add(Dense(10,activation='relu')) model.add(Dense(8,activation='relu')) model.add(Dropout(0.25)) model.add(Dense(1, activation='sigmoid')) In the above block of code, you can see that I am using a sequential model, and also I am using dropout layers in my model to get rid of overfitting. model.summary() Output: You can see the parameters used in my model, there is a total of 392 trainable parameters Now, let’s move forward to train our model. model.compile(loss="binary_crossentropy", optimizer="SGD", metrics=['accuracy']) model_history = model.fit(X_train, y_train, epochs=200, validation_data=(X_valid, y_valid)) Output: You can see in the above code that I am compiling my model with 200 epoch, with binary-cross entropy loss function and SGD optimizer Now let’s move to predict the values. X_new=X_test[:3] X_new y_pred = model.predict(X_new) print (y_pred) y_test[:3] Output: In the above code block, you can see the actual output and the predicted output. Conclusion: Hope you followed the tutorial with me don’t forget to put your suggestions in the comment box. Further modification can be done to increase the accuracy, like adding another optimizer, increasing the epochs. You can play with the code by doing some modifications, code can be found in the link code. Thanks for your time. This is very helpful 👍 for me bro🙂. Hope you will write more articles like this one.🙂🙂. Great work towards uhh ☺️. Keep going. Keep doing hard work. Such a helpful article 🤗🤗 Thnkuhh for giving this one. 😍😍
https://valueml.com/diabetes-prediction-using-keras-in-python/
CC-MAIN-2021-25
refinedweb
1,019
56.86
CONCEPTS USED: Recursion,Dynamic programming. DIFFICULTY LEVEL: Medium. PROBLEM STATEMENT$($SIMPLIFIED$)$: Arnab is now given N nodes. Now he asked how many binary search trees can be formed using those N nodes. For Example : N=3; There are 5 possible trees: 1 2 3 3 1 \ / \ / / \ 2 1 3 2 1 3 \ / \ / 3 1 2 2 See original problem statement here OBSERVATION: The number of binary search trees can be seen as a recursive solution. i.e., Number of binary search trees = (Number of Left binary search sub-trees) (Number of Right binary search sub-trees) (Ways to choose the root) SOLVING APPROACH: Total number of BSTs for 'n' distinct keys = total number of BSTs with '1' as root + total number of BSTs with '2' as root + ... + total number of BSTs with 'n' as root Now we try to find out the value of each term on the right hand side of above equation - total number of BSTs with number 'i' as root? We have 'n' distinct keys and we have to make number 'i' as the root of these BSTs. In this case there would be (i-1) distinct values which would go in left sub-trees of BSTs with 'i' as their root and there would be (n-i) distinct values which would go in right sub-trees of BSTs with 'i' as their root. Because left sub-trees of these BSTs formed by (i-1) distinct keys are independent of the right sub-trees of these BSTs formed by (n-i) distinct keys, we can say that the total number of BSTs with 'i' as their root = total number of BSTs with (i-1) distinct keys*total number of BSTs with (n-i) distinct keys. ALGORITHM: In a BST, only the relative ordering between the elements matter. So, without any loss on generality, we can assume the distinct elements in the tree are 1, 2, 3, 4, ...., n. Also, let the number of BST be represented by f(n) for n elements. Now we have the multiple cases for choosing the root using data structures in c++. choose 1 as root, no element can be inserted on the left sub-tree. n-1 elements will be inserted on the right sub-tree. Choose 2 as root, 1 element can be inserted on the left sub-tree. n-2 elements can be inserted on the right sub-tree. Choose 3 as root, 2 element can be inserted on the left sub-tree. n-3 elements can be inserted on the right sub-tree. ...... Similarly, for i-th element as the root, i-1 elements can be on the left and n-i on the right. These sub-trees are itself BST, thus, we can summarize the formula as: f(n) = f(0)f(n-1) + f(1)f(n-2) + .......... + f(n-1)f(0) Base cases, f(0) = 1, as there is exactly 1 way to make a BST with 0 nodes. f(1) = 1, as there is exactly 1 way to make a BST with 1 node. f(n)=summation of (f(i-1)*f(n-i)) from i=1 to i=n. Note: f(n) makes 2(n-1) recursive calls, then f(n-1) makes 2(n-2) recursive calls and so on. Note that because we are storing the intermediate results, though each f(i) occurs twice while evaluating f(n), we need to compute f(i) only once which avoids exponential running time for this algorithm. SOLUTIONS: #include <stdio.h> int dp[20]; int main() { //write your code dp[1]=1,dp[0]=1; for(int i=2;i<12;i++) {int s=0; for(int j=1;j<=i;j++) { s+=(dp[j-1]*dp[i-j]); } dp[i]=s; } int t; scanf("%d",&t); while(t--) { int n; scanf("%d",&n); printf("%d\n",dp[n]); } return 0; } #include <bits/stdc++.h> using namespace std; int dp[20]; int main() { dp[1]=1,dp[0]=1; for(int i=2;i<12;i++) {int s=0; for(int j=1;j<=i;j++) { s+=(dp[j-1]*dp[i-j]); } dp[i]=s; } int t; cin>>t; while(t--) { int n; cin>>n; cout<<dp[n]<<"\n"; } return 0; } import java.util.*; import java.io.*; class BinarySearchTree { public static class Node{ int data; Node left; Node right; public Node(int data){ this.data = data; this.left = null; this.right = null; } } public Node root; public BinarySearchTree(){ root = null; } public int factorial(int num) { int fact = 1; if(num == 0) return 1; else { while(num > 1) { fact = fact * num; num--; } return fact; } } public int numOfBST(int key) { int catalanNumber = factorial(2 * key)/(factorial(key + 1) * factorial(key)); return catalanNumber; } public static void main(String[] args) { Scanner sc = new Scanner(System.in); int t= sc.nextInt(); while(t-- >0 ){ int n = sc.nextInt(); BinarySearchTree bt = new BinarySearchTree(); System.out.println(bt.numOfBST(n)); } } } Space complexity :O(n)
https://www.prepbytes.com/blog/dynamic-programming/bst-count/
CC-MAIN-2021-39
refinedweb
824
70.63
: #include using namespace std; struct CPoint2D { float x,y; }; typedef list CStroke; typedef list: #include void NormalizeSize(CGesture& gesture) { float minX = FLT_MAX; float maxX = -FLT_MAX; float minY = FLT_MAX; float maxY = -FLT_MAX; //Calculate extents of the gesture CGesture::iterator i; CStroke::iterator j; for ( i = gesture.begin(); i != gesture.end(); ++i ) { CStroke& stroke = *i; for ( j = stroke.begin(); j != stroke.end(); j++ ) { CPoint2D& pt = *j; if ( minX > pt.x ) minX = pt.x; if ( minY > pt.y ) minY = pt.y; if ( maxX < pt.x ) maxX = pt.x; if ( maxY < pt.y ) maxY = pt.y; } } //Calculate dimensions float width = maxX - minX; float height = maxY - minY; //find out scale float scale = (width > height) ? width:height; if ( scale <= 0.0f ) return; //Empty or a single point stroke! scale = 1.0f/scale; //Do the actual scaling for ( i = gesture.begin(); i != gesture.end(); ++i ) { CStroke& stroke = *i; for (: #include float GetStrokeLength(const CStroke& stroke) { if ( stroke.size() <= 1 ) return 0; float len = 0.0f; CStroke::const_iterator i = stroke.begin(); CPoint2D startPt = *i; ++i; while ( i != stroke.end() ) { CPoint2D endPt = *i; float dx = endPt.x - startPt.x; float dy = endPt.y - startPt.y; //Add the length of the current segment to the total len += sqrtf(dx * dx + dy * dy); startPt = endPt; ++i; } return len; } const int kPointsPerStroke = 32; void NormalizeSpacing(CStroke& newStroke, const CStroke& oldStroke) { //Clear the new stroke newStroke.erase(newStroke.begin(), newStroke.end()); float newSegmentLen = GetStrokeLength(oldStroke); if ( oldStroke.size() <= 1 || newSegmentLen <= 0.0f ) return; newSegmentLen /= (kPointsPerStroke-1); CStroke::const_iterator i = oldStroke.begin(); //Add the first point to the new stroke newStroke.push_back(*i); CPoint2D startPt = *i; //Ends of the current segment CPoint2D endPt = *i; //(begin with the empty segment) ++i; //Distance along old stroke at the end of the current segment float endOldDist = 0.0f; //Distance along the old stroke at the beginning of the current segment float startOldDist = 0.0f; //Distance along new stroke float newDist = 0.0f; //Length of the current segment (on the old stroke) float currSegmentLen = 0.0f; for(;;) { float excess = endOldDist - newDist; //we have accumulated enough length, add a point if ( excess >= newSegmentLen ) { newDist+=newSegmentLen; float { if ( i == oldStroke.end()) break; //No more data //Store the start of the current segment startPt = endPt; endPt = *i; //Get next point ++i; float dx = endPt.x - startPt.x; float dy = endPt.y - startPt.y; //Start accumulated distance (along the old stroke) //at the beginning of the segment startOldDist = endOldDist; //Add the length of the current segment to the //total accumulated length currSegmentLen = sqrtf(dx*dx+dy*dy); endOldDist+=currSegmentLen; } } //Due to floating point errors we may miss the last //point of the stroke if (. void NormalizeCenter(CGesture& gesture) { float centerX = 0.0f; float centerY = 0.0f; int pointCount = 0; //Calculate the centroid of the gesture CGesture::iterator i; CStroke::iterator j; for ( i = gesture.begin(); i != gesture.end(); ++i ) { CStroke& stroke = *i; //size should always be == kPointsPerStroke pointCount += (int)stroke.size(); for ( j = stroke.begin(); j != stroke.end(); j++ ) { CPoint2D& pt = *j; centerX += pt.x; centerY += pt.y; } } //Calculate centroid if ( pointCount <= 0 ) return; //empty gesture centerX /= pointCount; centerY /= pointCount; //To move the gesture into the origin, subtract centroid coordinates //from point coordinates for ( i = gesture.begin(); i != gesture.end(); ++i ) { CStroke& stroke = *i; for (.) float GestureDotProduct(const CGesture& gesture1, const CGesture& gesture2) { if ( gesture1.size() != gesture2.size() ) return -1; CGesture::const_iterator i1; CGesture::const_iterator i2; float dotProduct = 0.0f; for ( i1 = gesture1.begin(), i2 = gesture2.begin(); i1 != gesture1.end() && i2 != gesture2.end(); ++i1, ++i2 ) { const CStroke& stroke1 = *i1; const CStroke& stroke2 = *i2; if ( stroke1.size() != stroke2.size() ) return -1; CStroke::const_iterator j1; CStroke::const_iterator j2; for ( j1 = stroke1.begin(), j2 = stroke2.begin(); j1 != stroke1.end() && j2 != stroke2.end(); ++j1, ++j2 ) { const CPoint2D& pt1 = *j1; const CPoint2D& pt2 = *j2; dotProduct += pt1.x * pt2.x + pt1.y * pt2.y; } } return dotProduct; } float Match(const CGesture& gesture1, const CGesture& gesture2 ) { float score = GestureDotProduct(gesture1,gesture2); if ( score <= 0.0f ) return 0.0f; //No match for sure //at this point our gesture-vectors are not quite normalized yet - their dot product with themselves is not 1. //we normalize the score itself //this is basically a version of a famous formula for a cosine of the //angle between 2 vectors: //cos a = (u.v) / (sqrt(u.u) * sqrt(v.v)) = (u.v) / sqrt((u.u) * (v.v)) score /= sqrtf(GestureDotProduct(gesture1, gesture1) * GestureDotProduct(gesture2, gesture2)); return(TM). (C) Oleg Dopertchouk. Contact me
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/recognition-of-handwritten-gestures-r2039/
CC-MAIN-2018-09
refinedweb
726
60.41
I’ve just finished work on a small command line client for the Heroku Build API written in Haskell. It may be a bit overkill for the task, but it allowed me to play with a library I was very interested in but hadn’t had a chance to use yet: optparse-applicative. In figuring things out, I again noticed something I find common to many Haskell libraries: - It’s extremely easy to use and solves the problem exactly as I need. - It’s woefully under-documented and appears incredibly difficult to use at first glance. Note that when I say under-documented, I mean it in a very specific way. The Haddocks are stellar. Unfortunately, what I find lacking are blogs and example-driven tutorials. Rather than complain about the lack of tutorials, I’ve decided to write one. Applicative Parsers Haskell is known for its great parsing libraries and this is no exception. For some context, here’s an example of what it looks like to build a Parser in Haskell: type CSV = [[String]] csvFile :: Parser CSV csvFile = do lines <- many csvLine eof return lines where csvLine = do cells <- many csvCell `sepBy` comma eol return cells csvCell = quoted (many anyChar) comma = char ',' eol = char '\n' <|> char '\r\n' -- etc... As you can see, Haskell parsers have a fractal nature. You make tiny parsers for simple values and combine them into slightly larger parsers for slightly more complicated values. You continue this process until you reach the top level csvFile which reads like exactly what it is. When combining parsers from a general-purpose library like parsec, we typically do it monadically. This means that each parsing step is sequenced together (that’s what do-notation does) and that sequencing will be respected when the parser is ultimately executed on some input. Sequencing parsing steps in an imperative way like this allows us to make decisions mid-parse about what to do next or to use the results of earlier parses in later ones. This ability is essential in most cases. When using libraries like optparse-applicative and aeson we’re able to do something different. Instead of treating parsers as monadic, we can treat them as applicative. The Applicative type class is a lot like Monad in that it’s a means of describing combination. Crucially, it differs in that it has no ability to define an order – there’s no sequencing. If it helps, you can think of applicative parsers as atomic or parallel while monadic parsers would be incremental or serial. Yet another way to say it is that monadic parsers operate on the result of the previous parser and can only return something to the next; the overall result is then simply the result of the last parser in the chain. Applicative parsers, on the other hand, operate on the whole input and contribute directly to the whole output – when combined and executed, many applicative parsers can run “at once” to produce the final result. Taking values and combining them into a larger value via some constructor is exactly how normal function application works. The Applicative type class lets you construct things from values wrapped in some context (say, a Parser State) using a very similar syntax. By using Applicative to combine smaller parsers into larger ones, you end up with a very convenient situation: the constructed parsers resemble the structure of their output, not their input. When you look at the CSV parser above, it reads like the document it’s parsing, not the value it’s producing. It doesn’t look like an array of arrays, it looks like a walk over the values and down the lines of a file. There’s nothing wrong with this structure per se, but contrast it with this parser for creating a User from a JSON value: data User = User String Int -- Value is a type provided by aeson to represent <abbr title="JavaScript Object Notation">JSON</abbr> values. parseUser :: Value -> Parser User parseUser (Object o) = User <$> o .: "name" <*> o .: "age" It’s hard to believe the two share any qualities at all, but they are in fact the same thing, just constructed via different means of combination. In the CSV case, parsers like csvLine and eof are combined monadically via do-notation: You will parse many lines of CSV, then you will parse an end-of-file. In the JSON case, parsers like o .: "name" and o .: "age" each contribute part of a User and those parts are combined applicatively via (<$>) and (<*>) (pronounced fmap and apply): You will parse a user from the value for the “name” key and the value for the “age” key Just by virtue of how Applicative works, we find ourselves with a Parser User that looks surprisingly like a User. I go through all of this not because you need to know about it to use these libraries (though it does help with understanding their error messages), but because I think it’s a great example of something many developers don’t believe: not only can highly theoretic concepts have tangible value in real world code, but they in fact do in Haskell. Let’s see it in action. Options Parsing My little command line client has the following usage: heroku-build [--app COMPILE-APP] [start|status|release] Where each sub-command has its own set of arguments: heroku-build start SOURCE-URL VERSION heroku-build status BUILD-ID heroku-build release BUILD-ID RELEASE-APP The first step is to define a data type for what you want out of options parsing. I typically call this Options: import Options.Applicative -- Provided by optparse-applicative type App = String type Version = String type Url = String type BuildId = String data Command = Start Url Version | Status BuildId | Release BuildId App data Options = Options App Command If we assume that we can build a Parser Options, using it in main would look like this: main :: IO () main = run =<< execParser (parseOptions `withInfo` "Interact with the Heroku Build API") parseOptions :: Parser Options parseOptions = undefined -- Actual program logic run :: Options -> IO () run opts = undefined Where withInfo is just a convenience function to add --help support given a parser and description: withInfo :: Parser a -> String -> ParserInfo a withInfo opts desc = info (helper <*> opts) $ progDesc desc So what does an Applicative Options Parser look like? Well, if you remember the discussion above, it’s going to be a series of smaller parsers combined in an applicative way. Let’s start by parsing just the --app option using the library-provided strOption helper: parseApp :: Parser App parseApp = strOption $ short 'a' <> long "app" <> metavar "COMPILE-APP" <> help "Heroku app on which to compile" Next we make a parser for each sub-command: parseStart :: Parser Command parseStart = Start <$> argument str (metavar "SOURCE-URL") <*> argument str (metavar "VERSION") parseStatus :: Parser Command parseStatus = Status <$> argument str (metavar "BUILD-ID") parseRelease :: Parser Command parseRelease = Release <$> argument str (metavar "BUILD-ID") <*> argument str (metavar "RELEASE-APP") Looks familiar, right? These parsers are made up of simpler parsers (like argument) combined in much the same way as our parseUser example. We can then combine them further via the subparser function: parseCommand :: Parser Command parseCommand = subparser $ command "start" (parseStart `withInfo` "Start a build on the compilation app") <> command "status" (parseStatus `withInfo` "Check the status of a build") <> command "release" (parseRelease `withInfo` "Release a successful build") By re-using withInfo here, we even get sub-command --help flags: $ heroku-build start --help Usage: heroku-build start SOURCE-URL VERSION Start a build on the compilation app Available options: -h,--help Show this help text Pretty great, right? All of this comes together to make the full Options parser: parseOptions :: Parser Options parseOptions = Options <$> parseApp <*> parseCommand Again, this looks just like parseUser. You might’ve thought that o .: "name" was some kind of magic, but as you can see, it’s just a parser. It was defined in the same way as parseApp, designed to parse something simple, and is easily combined into a more complex parser thanks to its applicative nature. Finally, with option handling thoroughly taken care of, we’re free to implement our program logic in terms of meaningful types: run :: Options -> IO () run (Options app cmd) = do case cmd of Start url version -> -- ... Status build -> -- ... Release build rApp -> -- ... Wrapping Up To recap, optparse-applicative allows us to do a number of things: - Implement our program input as a meaningful type - State how to turn command-line options into a value of that type in a concise and declarative way - Do this even in the presence of something complex like sub-commands - Handle invalid input and get a really great --helpmessage for free Hopefully, this post has piqued some interest in Haskell’s deeper ideas which I believe lead to most of these benefits. If not, at least there’s some real world examples that you can reference the next time you want to parse command-line options in Haskell.
https://robots.thoughtbot.com/applicative-options-parsing-in-haskell
CC-MAIN-2016-07
refinedweb
1,493
52.73
Racket-style Higher Order Contracts in Plain Javascript Racket-style Higher-Order Contracts in Plain JavaScript npm install rho-contracts (scroll down to Tutorial to skip the intro) rho-contracts.js is an implementation of Racket's higher-order contracts library in JavaScript. It is an attempt to bring to JavaScript the reliability benefits we usually get from static types, namely: Types detect bugs early, loudly, and provide clear error messages with precise blame. Types establish powerful invariants that guarantee that certain kinds of bugs do not exists in certain sections of the code. Types act as a checked documentation for the expected input-output type of functions. Types provide a fulcrum against which we can leverage a refactoring. Among the dynamic languages, JavaScript is suffering from the absence of static types quite, because of it propensity for implicitly converting everything into everything else, and its habit turning anything into a null at a moment's notice. When I couldn't stand it anymore, I wrote this contract library. rho-contracts.js is purely a run-time checker. It will never give a compile-time error; it will never refuse to run your code. rho-contracts.js is an assert library where the assertions are written in a style similar to that of a static type system, and whose checking discipline is sufficiently strict to provide similar guarantees as a type system (though not the same.) rho-contracts.js is an higher-order contract library, as opposed to run-of-the-mill assertion library, which means that it provides the ability to check assertions on functions received as an arguments, and on function returned from functions. When implementing derive(fn, deltaX), it is trivial to add an assert that checks that deltaX is a number. It is harder to check that fn is a function that always returns a number. Without higher-order contracts, the only way to implement this check is to pollute the code with if (!isNumber(result_from_fn)) ... everywhere fn is called -- that's not great. Higher-order contracts make it possible to place the specification next to the definition of derive, where it belongs, like this: var c = require('rho-contracts') // derive: returns a function that is the numerically-computed derivative // of the given function. var derive = /* This is the specification: */ c.fun( { fn: c.fun( { x: c.number } ).returns(c.number) }, { deltaX: c.number } ) .wrap( /* And the implementation goes here: */ function(fn, deltaX) { return function(x) { return (fn(x+deltaX/2) - fn(x-deltaX/2))/deltaX } } In this example, we use c.fun to instantiate a contract stating that derive is a function of two arguments. The first argument, which is named fn, must be a function of one argument, called x, which must be a number. The contract also specifies that derive's second argument is named deltaX, which must be a number. The derive function itself is created as an anonymous function using JavaScript's own function keyword. The newly created anonymous function is then immediately wrapped with a contract-checking shell, using rho-contracts.js' .wrap() method on contracts. The result of .wrap() is a function that: fnis a function and that deltaXis a number), .returns()clause, then In addition, at the moment of the call to the original function (Step 2 above), rho-contracts.js will .wrap() the function passed-in for fn. This way fn itself will be protected by a contract shell during the entire duration of the execution of the body of derive, and all its invocations will be checked against the contract. Given the definition for derive above: > function quadratic(x) { return 5*x*x + 3*x + 2 } // When `derive` is called correctly, there is no error: > var linear = derive(quadratic, 1.0) > linear(0) 3 > linear(1) 13 > linear(10) 103 // Error: calling with the arguments flipped: > derive(1.0, quadratic) ContractError: Expected fun, but got 1 For the `fn` argument of the call. // Error: forgetting an argument: > derive(quadratic) ContractError: Wrong number of arguments, expected 2 but got 1 // Error: calling with the wrong kind of function: > var fprime = derive(function(x) { return "**" + x + "**" }, 1.0) // There is now a contract-checking shell installed around `fprime` that // throws an error when `fprime` is called: > fprime(100) ContractError: `fn()` broke its contract Expected number, but got '**100.5**' for the return value of the call. Note how these contract errors are triggered much faster than JavaScript's native error, they provide clearer error messages, and they highlight the exactly line where the error is, rather than some line deep inside the implementation of derive. In the last example, when fn fails to return a number, which code is responsible for the failure? A normal assertion library used as desbribed earlier would raise an exception: assertion failed: expected a number for variable result_from_fn but got a string. This exception would contain a stack trace whose first frame would be pointing the blame on the shoulders of the implementation of derive. But that is incorrect. The error is not that derive assigned a wrong value to the result_from_fn variable. Rather, fn broke its contract -- or more precisely, the module calling derive was contractually required to provide a function that would only return numbers when called, but it failed to abide to its responsibility. The error message should make it clear that the failure comes from fn, not from derive. rho-contracts.js's error messages do indeed makes this clear. The error printed is: `fn()` broke its contract. Expected a number, but got '**100.5**' for the return value of the call. rho-contracts.js is an implementation of the paper Contracts for higher-order functions, by Findler and Felleisen, ICFP 2002. The paper formalizes the notion of blame, describes the blame-tracking algorithm necessary to report blame correctly, and proves the algorithm correct. This implementation follows the paper closely, though without Racket's macro system it was not possible to implement the report of blame in term of the name of the module interacting. rho-contracts.js only reports the function names. rho-contracts.js's higher-order contracts can also be used to check the correctness of functions used as values (aka, stored inside data structures.) This is clearly very useful in JavaScript where functions-in-data are used everywhere. In JavaScript, objects are constructed by putting functions into a hash table, then passing that hash table around. It would be impossible to check these functions against their specification without higher-order contracts. For example: // Define a contract for position objects with two methods, `moveX` and `moveY`: > var posContract = c.object({ x: c.number, y: c.number, moveX: c.fun({dx: c.number}), moveY: c.fun({dx: c.number}) }) // Define a constructor for position objects. Objects returned // will have their methods `.wrap()`-ed with contract-checking shells: > var makePos = c.fun({x: c.number}, { y: c.number }) .returns(posContract) .wrap( function(x, y) { return { x: x, y: y, moveX: function(dx) { return makePos(this.x + dx, this.y) } moveY: function(dy) { return makePos(this.x, this.y + dy) } } }) // Try to misuse the object: > makePos(5, 7).moveX("left") ContractError: on `moveX()` Expected number, but got 'left' for the `dx` argument of the call. (In a delightful instance of self-reference, the contract library is documented and checked using the contract library itself. If reading tutorials is not your thing, you may want to instead look at the contracts placed on rho-contracts.js's functions and methods by reading contract.face.js directly.) The contract library is typically require'd and bound to a variable called c: > var c = require('rho-contracts') Some fields of c are contract objects you can use directly, such as the c.number contract: > c.number.toString() 'c.number' > c.number.check(5) // everything is fine, no error, returns the given value. 5 > c.number.check("five") // boom, because a string is not a number. ContractError: Expected number, but got 'five' The ContractError being thrown is a normal JavaScript Error. It can be caught and rethrown like normal exceptions. Other useful basic contracts are c.string, c.integer, c.bool, and c.regexp. c.string: accepts only strings, according to Underscore.js's _.isString() c.integer: accepts only numbers vthat satisfy Math.floor(v) === v c.bool: accepts only booleans, according to Underscore.js's _.isBoolean() c.regexp: accepts only regular expressions, according to Underscore.js's _.isRegExp() For completeness, there are also c.falsy: accepts only values that selects the elsebranch of a JavaScript conditional c.truthy: accepts only values that select the ifbranch. c.value(): accepts only the given value and nothing else. c.any: the contract that accepts everything c.nothing: the contract that rejects everything Other fields of c are functions that construct interesting contracts, such as c.oneOf() which returns a contract that only accepts the values enumerated: > var anwserContract = c.oneOf("y", "yes", "n", "no") > anwserContract.toString() 'c.oneOf(y, yes, n, no)' > answerContract.check("yes") // good, no error 'yes' > answerContract.check("bunny") // boom ContractError: Expected oneOf(y, yes, n, no), but got 'bunny' On particularly powerful contract is c.or(), which is a contract that takes two or more contracts as argument, and returns a contract that accept a value if it passes any one of the given contracts: > c.or(c.number, c.string).check(10) // good 10 > c.or(c.number, c.string).check("ten") // good 'ten' > c.or(c.number, c.string).check( { x: 10 } ) ContractError: none of the contracts passed: - c.number - c.string The failures were: c.number: Expected number, but got { x: 10 } c.string: Expected string, but got { x: 10 } The c.or() contracts makes it possible to specify types for the kind of heterogeneous functions that are common in idiomatic JavaScript, but that would be refused outright by most static type systems (that is so awesome.) The contract library provides a rich collection of contract function to construct sophisticated contracts from simple one, such as: c.or() : as we just saw, accepts value that passes at least one of the given contracts. c.and() : accepts only values that pass all of the given contracts. c.matches() : accepts only strings that match the given regular expressions. In all likelihood, you will be instantiating a large number of custom contracts. It is customary to create a hash to contain the custom contract created in an application or in a particular module: > var cc = {} // custom contracts > cc.numberAsAString = c.matches(/^[0-9]+(\.[0-9]+)?$/) > cc.numberAsAString.check("42") // ok > cc.numberAsAString.check("10.7") // ok > cc.numberAsAString.check("10.") // boom ContractError: Expected matches(/^[0-9]+(\.[0-9]+)?$/), but got '10.' Another option is to make a clone of the contract library at the top of your NodeJs module and keep the contracts created and used in that module in the clone: > var __ = require('underscore') > var c = __.clone(require('rho-contracts')); > c.numberAsString = c.matches(/^[0-9]+(\.[0-9]+)?$/) > c.or(c.falsy, c.numberAsString).check(null) // ok, null is falsy null To prevent the toString() output of custom contracts from become unwieldy long and render the rho-contracts.js's error messages difficult to read, call .rename() before storing them: > c.numberAsString = c.matches(/^[0-9]+(\.[0-9]+)?$/) .rename('numberAsString') > c.numberAsString.check("o_0.") // boom ContractError: Expected numberAsString, but got 'o_0.' A c.array() contract checks that all items in the array passes the given contract: > c.array(c.integer).check([1, 2, 3, 45.2, 5, 6]) ContractError: Expected integer, but got 45.2 for the 4th element of the array. The full value being checked was: [ 1, 2, 3, 45.2, 5, 6 ] A c.tuple() contract checks that the array has at least the given number of items (having extra items is OK). Then it checks that each item passes its corresponding contract: > c.tuple(c.number, c.string).check([10, "ten"]) // ok [ 10, 'ten' ] > c.tuple(c.number, c.string).check([10, 20]) // boom ContractError: Expected string, but got 20 for the 2nd element of the tuple. The full value being checked was: [ 10, 20 ] > c.tuple(c.number, c.string).check([10]) // boom ContractError: Expected tuple of size 2, but got [ 10 ] A c.hash() contract checks that all right-hand values of a hash table passes the given contract: > c.hash(c.bool).check({ a: true, b: true, c: false, d: null, e: false }) ContractError: Expected bool, but got null for the key `d` of the hash. The full value being checked was: { a: true, b: true, c: false, d: null, e: false } Contract on functions are implemented by wrapping the implementing function with a contract-checking shell. This is achieved with the .wrap() method on contracts: > function square_implementation(x) { return x * x } > var square_contract = c.fun ( { x: c.number } ) > var square = square_contract.wrap(square_implementation) > square(25) 625 The contract-checking shell checks all invocations of the square function. It will raise an error if either the wrong number of arguments is provided, or if any argument fails to check against its contract: > square(10, 11, 12) ContractError: Wrong number of arguments, expected 1 but got 3 > square("cat") ContractError: Expected number, but got 'cat' for the `x` argument of the call. Usually, the implementation, the contract, and the wrapped function are all created at once in one expression, like this: var square = c.fun( { x: c.number } ) .wrap( function (x) { return x * x } ) Each argument's contract is specified in the call to c.fun() using a hash table containing exactly one field. The name of that field is used by rho-contracts.js's error messages when the argument's check fails. Note that the name of the argument in the contract can be different from the name of the argument in the implementation. This is sometime useful -- at time the implementation might want to uses a short name internally, yet still prefer to give users a long-form variable name in the error messages: var normalizeTime = c.fun( { secondSinceEpoc: c.number } ) .wrap( function (s) { return s % 60 } ) > normalizeTime(124526) 26 > normalizeTime(null) ContractError: Expected number, but got null for the `secondSinceEpoc` argument of the call. Contracts for function of more than one arguments are specified by passing additional one-field hashes, separated by commas: var area = c.fun( { x: c.number }, { y: c.number } ) .wrap( function(x, y) { return x * y } Attempting to pass all arguments as a single hash is an error: > var area = c.fun( { x: c.number, y: c.number } ) > .wrap( // ^---- THIS IS WRONG > function(x, y) { return x * y } ContractLibraryError: fun: expected exactly one key to specify the name of the 1st arguments, but got 2 This style of specifying arguments names when calling c.fun() is necessary because JavaScript does maintain the order of fields in hashes. Contracts returned by c.fun() have three additional methods not found on other contracts: c.fun().returns(c.number) : This will check that the function returns only numbers. c.fun().extraArgs(c.array(c.number)) : This will allow a variable number of arguments, so long as they are all numbers. Generally, the contract passed to .extraArgs() will be matched against an array containing the extra arguments beyond those specified explicitly. This opens the possibility of checking overloaded function and other rich combinations of extra arguments by using c.or() contract along with c.tuple() contracts. Like all other methods on contract, these thwo methods, .returns() and .extraArgs() do not modify the original contract. Instead they return a new contract which checks everything the original contract checks, plus their additional check. They are used like this: > var triceWord = c.fun({s:c.any}).returns(c.string) // ^---- This is a bug, should be `c.string` .wrap( function (s) { return s + s + s }) > triceWord("bork") 'borkborkbork' > triceWord(35) ContractError: Expected string, but got 105 for the return value of the call. c.fun().ths( ---- ) : We mention .ths() for completeness. This contract checks that the method was invoked on an object of the right form. (Note, this method name is missing an "i" to avoid clashing with the JavaScript reserved word "this"). However, usages of c.fun().ths are rare. It is more customary to use the .method() method on object contacts (See Contracts on Objects below.) c.fun().ths is useful when using the Apply Invocation Pattern described in Chapter 4 of Douglas' Crockford' JavaScript, The Good Parts. > var makeStatus = function(string) { return { status: string } } > var get_status = c.fun().ths(c.object({status: c.string})).returns(c.string) .wrap( function() { return this.status }) > get_status.apply({ status: 'A-OK' }) // OK 'A-OK' > get_status.apply({ statosstratos: 'I have a typo' }) // not OK ContractError: Field `status` required, got { statosstratos: 'I have a typo' } for this `this` argument of the call. Contracts can be marked optional using c.optional() When used for a function's argument, a contract that has been marked optional makes that argument optional (the contract itself is not affected otherwise). All arguments to the right of an optional argument must be optional as well. > var c = require('rho-contracts') > var util = require('util') > var x = 0 > var incrementIt = c.fun({ i: c.optional(c.number) } ).returns(c.number) .wrap( function(i) { if (i) x+=i; else x++; return x }) > incrementIt(10) 10 > incrementIt() // calling with the argument omitted 11 > incrementIt(10, 20) // too many arguments! ContractError: Too many arguments, expected at most 1 but got 2 Recall, we cannot tell if a function will be miscalled until it is called, and we cannot tell if a function will return a value of the wrong type until it tries to return. Thus function contracts cannot be checked without wrapping the targeted function with a contract-checking shell. Concretely, this means it is an error to call .check() on a function contract: > c.fun({ n: c.integer }).check(function(n) { return n+1 }) ContractLibraryError: check: This contract requires wrapping. Call wrap() instead and retain the wrapped result. The requirement to call .wrap() instead of .check() carries over to contracts over data structures containing functions: > var operations = [function (x) { return x + 1 }, function (x) { return x * 2 }, function (x) { return x * x } ] // Check whether `operations` is indeed an array of functions from number to number: > c.array(c.fun({ x: c.number }).returns(c.number)) .check(operations) ContractLibraryError: check: This contract requires wrapping. Call wrap() instead and retain the wrapped result. By replacing .check() with .wrap(), rho-contracts.js will recur down the array and wrap each function with the function contract: > var operations_wrapped = c.array(c.fun({ x: c.number }).returns(c.number)) .wrap(operations) Here, .wrap() returns a new array containing the wrapped functions. So long as the array's functions are used correctly, the presence of the contract checking-shells is unnoticeable: > operations_wrapped.foreach(function(fn) { util.debug(fn(5)) } DEBUG: 6 DEBUG: 10 DEBUG: 25 But if we misuse one of the functions, the checking shell throws an exception. The error provided clearly identifies the source of the fault: > operations_wrapped.foreach(function(fn) { fn("five") } ContractError: Expected number, but got 'five' for the `x` argument of the call. The full value being checked was: [ [Function], [Function], [Function] ] Meanwhile, the original functions rest unmodified in the original operations array, and continue to fail silently: > operations.foreach(function(fn) { util.debug(fn("five")) } DEBUG: five1 DEBUG: NaN DEBUG: NaN The .wrap() method wraps recursively all JavaScript's data structures, array, hashes, tuples, and objects. Since objects in JavaScript are constructed out of normal hash tables containing normal functions, contracts on objects follow the usage described in the previous three sections Data Structure Contracts, Contracts on Functions and Wrapping vs Checking. > String.prototype.repeat = function( num ) { // A helper function on String, just for fun. return new Array(num + 1).join(this); } > c.animal = c.object({ nLegs: c.number, name: c.string, speak: c.fun({n: c.number}).returns(c.string) }) > var makeCat = c.fun({ name: c.string }).returns(c.animal) .wrap(function (name) { return { nLegs: 4, name: name, speak: function(n) { return this.name + " says " + "meow".repeat(n) } } }) > var makeBird = c.fun({ name: c.string }).returns(c.animal) .wrap(function (name) { return { nLegs: 2, name: name, speak: function(n) { return this.name + " says " + "tweet".repeat(n) } } }) > var tweetie = makeBird("tweetie") > tweetie.speak(3) tweetie says tweettweettweet. In this example, the contract on the .speak() method will correctly verify that the method returns a string. However, it does not verify whether it was correctly invoked on an animal -- an error could go undetected: > var speak = tweetie.speak > speak(2) undefined says tweettweet. // Yikes! The .ths() method on function contracts can be used to add this additional check. In order to distinguish functions intended be used as methods, rho-contracts.js provides c.method(), which is a variant of c.fun() that takes the contract on this as its first argument: > c.animal = c.object({ nLegs: c.number, name: c.string, speak: c.method(c.animal, { n: c.number}).returns(c.string) }) // ^--- Ousp, this doesn't actually work. However, this attempt fails due to the cyclic reference: the line of code defining the contract for animals refers to the contract for animals. When the c.animal is looked up on the third line the first line has not returned yet, so c.animal is not defined and the look up returns of c.animal returns undefined. rho-contracts.js provides a way to establish this cyclic reference in large part to make it possible to fully specify such contract on objects. The function c.cyclic() creates a temporary placeholder until we can close the cycle: > c.animal = c.cyclic() The placeholder returned by c.cyclic() has only one useful method: .closeCycle(), which must be called with the actual contract: > c.animal.closeCycle(c.object({ nLegs: c.number, name: c.string, speak: c.method(c.animal, { n: c.number }).returns(c.string) })) When using this better definition of c.animal, the error is caught as it should: > var speak = tweetie.speak > speak(2) ContractError: on `speak()` Expected object, but got undefined for this `this` argument of the call. rho-contracts.js provides three additional pieces of functionality made specifically for object contracts. c.optional(): Contracts marked "optional" by the c.optional()function (as discussed earlier in the Contracts for Optional Arguments section) are also used to specify optional fields of objects. A field is considered missing if is not set, or if it is set to null. All these are OK. > c.car = c.object({ carModel: c.string, trunkSize: c.optional(c.number) }) // missing to indicate a sport car with no trunk > c.car.check({ carModel: "MINI Cooper Coupe", // OK trunkSize: 9.8 }) > c.car.check({ carModel: "Infiniti IPL G Convertible", // OK trunkSize: null }) Or: > c.car.check({ carModel: "Infiniti IPL G Convertible" }) // Also OK But not: > c.car.check({ trunkSize: 22.1 }) ContractError: Field `carModel` required, got { trunkSize: 9.8 } .strict(): By default, objects are allowed to have additional fields not specified in the contract. Calling .strict()returns a contract that disallows them. > c.car.check({ carModel: "semitruck", towing: true }) // this is fine > c.car.strict().check({ carModel: "semitruck", towing: true }) // but this is not ContractError: Found the extra field `towing` in { carModel: 'semitruck', towing: true } .extend : .strict on tuples Additional functionality that's not documented yet: c.pred forwardRef/setRef Contracts on Whole Modules, publish() The partially documented documentation feature: .doc .theDoc documentType documentTable document category document module And also c.fn() rho-contracts.js is an implementation of the paper Contracts for higher-order functions, by Findler and Felleisen, ICFP 2002. The original and best implementation of the paper's ideas is racket/contract contract.coffee is a dialect of CoffeeScript that like rho-contracts.js also implements Racket's contracts. contract.coffee runs on top of a contract-checking runtime implemented in JavaScript using Proxies, that is currently only implemented in Firefox 4+ and chrome/V8 with the experimental javascript flag enabled. ristretto-js implements a run-time checker for types written in Haskell syntax inside of specification strings. It suffers from the troubles of externally embedded languages, namely that it exists separate from its host language. It support only a limited number of basic type (Int, Num, String, Bool, Object, Array) with no possibility of extensions that's available and its type name space is separate from the JavaScript name space and module machinery. This library was created at Sefaira.com, originally for internal use. We are releasing it to the open source community under the Mozilla open-source license (MPL).
https://www.npmjs.com/package/rho-contracts
CC-MAIN-2015-48
refinedweb
4,094
59.4
Type: Posts; User: salem_c This will at least get you common code on Windows and Linux I suppose it would work on Android as well if you installed a terminal emulator on it. > Posting a question in multiple forums enhances the scope of getting answers from more repliers, Or wastes everyone's time with simple questions that are trivially answered. My reply wasn't... Spammed all over the web... This -> Or this -> Also here -> An IDE is not the C++ language. An IDE is not a particular C++ compiler. An Integrated Development Environment (IDE) is just the whole package of editor + compiler + debugger + help + more into a... Did you even read the post? $ cat foo.c #include <stdio.h> int main ( ) { typedef unsigned int frac; frac one = 0x08000000; // a single 1 followed by 27 zeros You just multiply it with whatever 1.0 is in fixed point notation. typedef unsigned int frac; frac one = 0x08000000; // a single 1 followed by 27 zeros double v = 1.003936; frac result =... Still no nearer to why? If you want to do animations, then this.... > Logger l{ "C:\temp\test.log" }; Beware that \ has a special meaning inside strings. Try Logger l{ "C:\\temp\\test.log" }; or Logger l{ "C:/temp/test.log" }; > I have a requirement for coder / programmer / software engineer coder = you tell them exactly what you want in microscopic detail. software engineer = you tell them what you want to achieve, and... Do NOT use bit-fields to solve this problem. See the notes at the end. "Also, on some platforms, bit fields are packed left-to-right, on... > Because a part of application job is made it online, in a web page. So, are you expecting the user to do something (more than just look at something) when they get to the website? If so, then... Duplicated effort from duplicating poster -> > Im thinking to replace it with enum, Or static const int Well if you're going to be editing each index, why not just go straight to a struct, which gives you automatic names to start with. ... > bArray[1] = *(BYTE*)((&ptrAddress)+1); You have to understand how pointer arithmetic works. > I have an intermediate level at the c++ language so I can come up with some questions.. Do you? ... One of memcpy(aArray,&pAddress,4); // where pAddress is memcpy(aArray,pAddress,4); // what pAddress contains memcpy(aArray,*pAddress,4); // what's in what pAddress points to Cross-posted Just another zero effort codechef wannabe. Same MO as last month. Was it the switch from using = to using := in your assignments? Makefiles can be very picky things when it comes to evaluating variables, and the two forms mean two very different things. $ g++ -Wall -Wextra -O2 foo.cpp foo.cpp: In function ‘int main()’: foo.cpp:18:16: warning: ‘a’ may be used uninitialized in this function [-Wmaybe-uninitialized] ar[l-1]=a; ... TBH, I would create a side project just to investigate cout, cin and a small number of threads. But even then, "it works" would prove nothing useful, though failure would clearly demonstrate that... So is this your magnum opus, or have you just found some source code? Because if it is downloaded code, it would be wise of you to tell us. Pin-hole guesses on your part tell us nothing about wider... Maybe your input stream is in an error state. if ( cin >> selection ) { cout << selection; } else { std::cerr << "cin is borked - do something!" << std::endl; break; // no more... So what do they use on all their other older projects? It would make sense to use what other people (who've been there longer) already know how to.
https://forums.codeguru.com/search.php?s=0bead0296c8c41de6d0858f6676d5ce9&searchid=21584643
CC-MAIN-2021-25
refinedweb
611
76.22
The Google Cloud Vision API allows developers to easily integrate vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content. In this codelab you will focus on using the Vision API with Python. You will learn how to use several of the API's features, namely label annotations, OCR/text extraction, landmark detection, and detecting facial features! What you'll learn - How to use Cloud Shell - How to Enable the Google Cloud Vision API - How to Authenticate API requests - How to install the Vision API client library for Python - How to perform Label detection - How to perform Text detection - How to perform Landmark detection - How to perform Face detection What you'll need - A Google account (G Suite accounts may require administrator approval) - A Google Cloud Platform project with an active GCP billing account - Basic Python skills would be helpful but not required; this tutorial requires Python 2.6+. You can also use any supported language, but this tutorial is only additionally available in C#/.NET and Ruby.]. This codelab requires you to use the Python language (although many languages are supported by the Google APIs client libraries, so feel free to build something equivalent in your favorite development tool and simply use the Python as pseudocode). In particular, this codelab supports Python 2 and 3, but we recommend moving to 3.x as soon as possible. The Cloud Shell is a convenience available for users directly from the Cloud Console and doesn't require a local development environment, so this tutorial can be done completely in the cloud with a web browser. The Cloud Shell is especially useful if you're developing or plan to continue developing with GCP products & APIs. More specifically for this codelab, the Cloud Shell has already pre-installed both versions of Python. The Cloud Shell also has IPython installed... it is a higher-level interactive Python interpreter which we recommend, especially if you are part of the data science or machine learning community. If you are, IPython is the default interpreter for Jupyter Notebooks as well as Colab, Jupyter Notebooks hosted by Google Research. IPython favors a Python 3 interpreter first but falls back to Python 2 if 3.x isn't available. IPython can be accessed from the Cloud Shell but can also be installed in a local development environment. Exit with ^D (Ctrl-d) and accept the offer to exit. Example output of starting ipython will look like this: $ ipython Python 3.7.3 (default, Mar 4 2020, 23:11:43) Type 'copyright', 'credits' or 'license' for more information IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: If IPython isn't your preference, use of a standard Python interactive interpreter (either the Cloud Shell or your local development environment) is perfectly acceptable (also exit with ^D): $ python Python 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> $ python3 Python 3.7.3 (default, Mar 10 2020, 02:33:39) [GCC 6.3.0 20170516] on linux Type "help", "copyright", "credits" or "license" for more information. >>> The codelab also assumes you have the pip installation tool (Python package manager and dependency resolver). It comes bundled with versions 2.7.9+ or 3.4+. If you have an older Python version, see this guide for installation instructions. Depending on your permissions you may need to have sudo or superuser access, but generally this isn't the case. You can also explicitly use pip2 or pip3 to execute pip for specific Python versions. The remainder of the codelab assumes you're using Python 3—specific instructions will be provided for Python 2 if they differ significantly from 3.x. *Create and use virtual environments This section is optional and only really required for those who must use a virtual environment for this codelab (per the warning sidebar above). If you only have Python 3 on your computer, you can simply issue this command to create a virtualenv called my_env (you can choose another name if desired): virtualenv my_env However, if you have both Python 2 & 3 on your computer, we recommend you install a Python 3 virtualenv which you can do with the -p flag like this: virtualenv -p python3 my_env Enter your newly created virtualenv by "activating" it like this: source my_env/bin/activate Confirm you're in the environment by observing your shell prompt is now preceded with your environment name, i.e., (my_env) $ Now you should be able to pip install any required packages, execute code within this eivonment, etc. Another benefit is that if you completely mess it up, get into a situation where your Python installation is corrupted, etc., you can blow away this entire environment without affecting the rest of your system. Before you can begin using Google APIs, you must enable them. The example below shows what you would do to enable the Cloud Vision API. In this codelab, you may be using one or more APIs, and should follow similar steps to enable them before usage. From Cloud Shell Using Cloud Shell, you can enable the API by using the following command: gcloud services enable vision.googleapis.com From the Cloud Console You may also enable the Vision API in the API Manager. From the Cloud Console, go to API Manager and select, "Library." In the search bar, start typing, "vision," then select Vision API when it appears. It may look something like this as you're typing: Select the Cloud Vision API to get the dialog you see below, then click the "Enable" button: Cost While many Google APIs can be used without fees, use of GCP (products & APIs) is not free. When enabling the Vision API (as described above), you may be asked for an active billing account. The Vision API's pricing information should be referenced by the user before enabling. Keep in mind that certain Google Cloud Platform (GCP) products feature an "Always Free" tier for which you have to exceed in order to incur billing. For the purposes of the codelab, each call to the Vision API counts against that free tier, and so long as you stay within its limits in aggregate (within each month), you should not incur any charges. Some Google APIs, i.e., G Suite, has usage covered by a monthly subscription, so there's no direct billing for use of the Gmail, Google Drive, Calendar, Docs, Sheets, and Slides APIs, for example. Different Google products are billed differently, so be sure to reference your API's documentation for that information. Summary In this codelab, you only need to turn on the Cloud Vision API, so proceed forward with this tutorial once you've successfully followed the instructions above and enabled the API. In order to make requests to the APIs, your application needs to have the proper authorization. Authentication, a similar word, describes login credentials—you authenticate yourself when logging into your Google account with a login & password. Once authenticated, the next step is whether you are—or rather, your code, is—authorized to access data, such as blob files on Cloud Storage or a user's personal files on Google Drive. Google APIs support several types of authorization, but the one most common for GCP API users is service account authorization since applications like the one in this codelab runs in the cloud as a "robot user." While the Vision API supports API key authorization as well, it's strongly recommended that users employ a more secure form of authorization. A service account is an account that belong to your project or application (rather than a user) that is used by the client library to make Vision API requests. Like a user account, a service account is represented by an email address. You can create service account credentials from either the command line (via gcloud) or in the Cloud Console. Let's take a look at both below. Using gcloud (in Cloud Shell or your dev environment) In this section, you will use the gcloud tool to create a service account then create the credentials needed to access the API. First you will set an environment variable with your PROJECT_ID which you will use throughout this codelab: export PROJECT_ID=$(gcloud config get-value core/project) Next, you will create a new service account to access the Vision API by using: gcloud iam service-accounts create my-vision-sa \ --display-name "my vision service account" Next, you will create the private key credentials that your Python code will use to log in as your new service account. Create these credentials and save it as JSON file ~/key.json by using the following command: gcloud iam service-accounts keys create ~/key.json \ --iam-account my-vision-sa@${PROJECT_ID}.iam.gserviceaccount.com From the Cloud Console To get OAuth2 credentials for user authorization, go back to the API manager (shortcut link: console.developers.google.com) and select the "Credentials" tab on the left-nav: From the Credentials page, click on the "+ Create Credentials" button at the top, which then gives you a pulldown dialog where you'd choose "Service account:" On the "Create service account" screen (similar to the below), you must enter a Service account name (choose something short but explanatory like "svc acct vision" or the one we used with gcloud above, "my vision sa". A Service account ID is also required, and the form will create a valid ID string similar to the name you chose. The Service account description field is optional, but you can specify something like, "Service account for Vision API demo". Click the "Create" button when complete. The next step is to grant service account access to this project. Having a service account is great, but if it doesn't have permissions to access project resources, it's kind-of useless... it's like creating a new user who doesn't have any access. Here, click on the "Select a role" pulldown menu. You'll see a variety of options (see below), some more granular than others. For this codelab, choose Project → Viewer. Then click Continue. On this 3rd screen (see below), we will skip granting specific users access to this service account, but we do need to make a private key our application script can use to access the Vision API with. To that end, click the "+ Create Key" button. Creating a key is straightforward on the next screen. Take the default of a JSON key structure. (P12 is only used for backwards-compatibility, so it is not recommended for new projects.) Click the "Create" button and save the private key file when prompted. The default filename will be long and possibly confusing, i.e., PROJECT_ID-HASH.json, so we recommend renaming it to something more digestible such as key.json or svc_acct.json. Once the file is saved, you'll get the following confirmation message: Click the "Close" button to complete this task from the console. Summary One last step whether you created your service account from the command-line or in the Cloud console: direct your cloud project to use this as the default service account private key to use for your application by assigning this file to the GOOGLE_APPLICATION_CREDENTIALS environment variable: export GOOGLE_APPLICATION_CREDENTIALS=~/key.json The environment variable should be set to the full path of the credentials JSON file you saved. It's not necessary to do so, but if you don't, you can only use that key file from the current working directory. You can read more about authenticating the Google Cloud Vision API, including the other forms of authorization, i.e., API key, user authorization OAuth2 client ID, etc. We're going to use the Vision API client library for Python which should already be installed in your Cloud Shell environment. Verify it's installed with with pip or pip3: $ pip3 freeze | grep google-cloud-vision google-cloud-vision==2.0.0 If you're using a local development environment or using a new virtual environment you just created, install/update the client library (including pip itself if necessary) with this command: $ pip3 install -U pip google-cloud-vision ... Successfully installed google-cloud-vision-2.0.0 Confirm the client library can be imported without issue like the below, and then you're ready to use the Vision API from real code! $ python3 -c "import google.cloud.vision" $ One of the Vision API's basic features is to identify objects or entities in an image, known as label annotation. Label detection identifies general objects, locations, activities, animal species, products, and more. The Vision API takes an input image and returns the most likely labels which apply to that image. It returns the top-matching labels along with a confidence score of a match to the image. In this example, you will perform label detection on an image of a street scene in Shanghai. To do this, copy the following Python code into your IPython session (or drop it into a local file such as label_detect.py and run it normally): from __future__ import print_function from google.cloud import vision image_uri = 'gs://cloud-samples-data/vision/using_curl/shanghai.jpeg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri response = client.label_detection(image=image) print('Labels (and confidence score):') print('=' * 30) for label in response.label_annotations: print(label.description, '(%.2f%%)' % (label.score*100.)) You should see the following output: Labels (and confidence score): ============================== People (95.05%) Street (89.12%) Mode of transport (89.09%) Transport (85.13%) Vehicle (84.69%) Snapshot (84.11%) Urban area (80.29%) Infrastructure (73.14%) Road (72.74%) Pedestrian (68.90%) Summary In this step, you were able to perform label detection on an image of a street scene in China and display the most likely labels associated with that image. Read more about Label Detection. Text detection performs Optical Character Recognition (OCR). It detects and extracts text within an image with support for a broad range of languages. It also features automatic language identification. In this example, you will perform text detection on an image of an Otter Crossing. Copy the following snippet into your IPython session (or save locally as text_dectect.py): from __future__ import print_function from google.cloud import vision image_uri = 'gs://cloud-vision-codelab/otter_crossing.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri response = client.text_detection(image=image) for text in response.text_annotations: print('=' * 30) print(text.description) vertices = ['(%s,%s)' % (v.x, v.y) for v in text.bounding_poly.vertices] print('bounds:', ",".join(vertices)) You should see the following output: ============================== CAUTION Otters crossing for next 6 miles bounds: (61,243),(251,243),(251,340),(61,340) ============================== CAUTION bounds: (75,245),(235,243),(235,269),(75,271) ============================== Otters bounds: (65,296),(140,297),(140,315),(65,314) ============================== crossing bounds: (151,295),(247,297),(247,318),(151,316) ============================== for bounds: (61,322),(94,322),(94,340),(61,340) ============================== next bounds: (106,322),(156,322),(156,340),(106,340) ============================== 6 bounds: (167,321),(180,321),(180,339),(167,339) ============================== miles bounds: (191,321),(251,321),(251,339),(191,339) Summary In this step, you were able to perform text detection on an image of an Otter Crossing and display the recognized text from the image. Read more about Text Detection. Landmark detection detects popular natural and man-made structures within an image. In this example, you will perform landmark detection on an image of the Eiffel Tower. To perform landmark detection, copy the following Python code into your IPython session (or save locally as landmark_dectect.py). from __future__ import print_function from google.cloud import vision image_uri = 'gs://cloud-vision-codelab/eiffel_tower.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri response = client.landmark_detection(image=image) for landmark in response.landmark_annotations: print('=' * 30) print(landmark) You should see the following output: ============================== mid: "/g/120xtw6z" description: "Trocad\303\251ro Gardens" score: 0.925706148147583 bounding_poly { vertices { x: 339 y: 54 } vertices { x: 531 y: 54 } vertices { x: 531 y: 371 } vertices { x: 339 y: 371 } } locations { lat_lng { latitude: 48.861596299999995 longitude: 2.2892823 } } ============================== mid: "/m/02j81" description: "Eiffel Tower" score: 0.6325246095657349 bounding_poly { vertices { x: 435 y: 180 } ... Summary In this step, you were able to perform landmark detection on an image of the Eiffel Tower. Read more about Landmark Detection. Facial features detection detects multiple faces within an image along with the associated key facial attributes such as emotional state or wearing headwear. In this example, you will detect the likelihood of emotional state from four different emotional likelihoods including: joy, anger, sorrow, and surprise. To perform emotional face detection, copy the following Python code into your IPython session (or save locally as face_dectect.py): from __future__ import print_function from google.cloud import vision uri_base = 'gs://cloud-vision-codelab' pics = ('face_surprise.jpg', 'face_no_surprise.png') client = vision.ImageAnnotatorClient() image = vision.Image() for pic in pics: image.source.image_uri = '%s/%s' % (uri_base, pic) response = client.face_detection(image=image) print('=' * 30) print('File:', pic) for face in response.face_annotations: likelihood = vision.Likelihood(face.surprise_likelihood) vertices = ['(%s,%s)' % (v.x, v.y) for v in face.bounding_poly.vertices] print('Face surprised:', likelihood.name) print('Face bounds:', ",".join(vertices)) You should see the following output for our face_surprise and face_no_surprise examples: ============================== File: face_surprise.jpg Face surprised: LIKELY Face bounds: (93,425),(520,425),(520,922),(93,922) ============================== File: face_no_surprise.png Face surprised: VERY_UNLIKELY Face bounds: (120,0),(334,0),(334,198),(120,198) Summary In this step, you were able to perform emotional face detection. Read more about Facial Features Detection. Congratulations... you learned how to use the Vision API with Python to perform several image detection features! Also check out the code samples in this codelab's open source repo—while the code in this tutorial works for both 2.x (2.6+) and 3.x, the code in the repo requires 3.6+. Clean up You're allowed to perform a fixed amount of (label, text/OCR, landmark, etc.) detection calls per month for free. Since you only incur charges each time you call the Vision API, there's no need to shut anything down nor must you disable/delete your project. More information on billing for the Vision API can be found on its pricing page. In addition to the source code for the four examples you completed in this codelab, below are additional reading material as well as recommended exercises to augment your knowledge and use of the Vision API with Python. Learn More - Cloud Vision API documentation: cloud.google.com/vision/docs - Cloud Vision API home page & live demo: cloud.google.com/vision - Vision API label detection/annotation: cloud.google.com/vision/docs/labels - Vision API facial feature recognition: cloud.google.com/vision/docs/detecting-faces - Vision API landmark detection: cloud.google.com/vision/docs/detecting-landmarks - Vision API optical character recognition (OCR): cloud.google.com/vision/docs/ocr - Vision API "Safe Search": cloud.google.com/vision/docs/detecting-safe-search - Vision API product/corporate logo detection: cloud.google.com/vision/docs/detecting-logos - Python on Google Cloud Platform: cloud.google.com/python - Google Cloud Python client: googlecloudplatform.github.io/google-cloud-python - Codelab open source repo: github.com/googlecodelabs/cloud-vision-python Additional Study Now that you have some experience with the Vision API under your belt, below are some recommended exercises to further develop your skills: - You've built separate scripts demoing individual features of the Vision API. Combine at least 2 of them into another script. For example, add OCR/text recognition to the first script that performs label detection ( label_detect.py). You may be surprised to find there is text on one of the hats in that image! - Instead of our random images available on Google Cloud Storage, write a script that uses one or more of your images on your local filesystem. Another similar exercise is to find images online (accessible via http://). - Same as #2, but with local images on your filesystem. Note that #2 may be an easier first step before doing this one with local files. - Try non-photographs to see how the API works with those. - Migrate some of the script functionality into a microservice hosted on Google Cloud Functions, or in a web app or mobile backend running on Google App Engine. If you're ready to tackle that last suggestion but can't think of any ideas, here are a pair to get your gears going: - Analyze multiple images in a Cloud Storage bucket, a Google Drive folder (use the Drive API), or a directory on your local computer. Call the Vision API on each image, writing out data about each into a Google Sheet (use the Sheets API) or Excel spreadsheet. (NOTE: you may have to do some extra auth work as G Suite assets like Drive folders and Sheets spreadsheets generally belong to users, not service accounts.) - Some people Tweet images (phone screenshots) of other tweets where the text of the original can't be cut-n-pasted or otherwise analyzed. Use the Twitter API to retrieve the referring tweet, extract and pass the tweeted image to the Vision API to OCR the text out of those images, then call the Cloud Natural Language API to perform sentiment analysis (to determine whether it's positive or negative) and entity extraction (search for entities/proper nouns) on them. (This is optional for the text in the referring tweet.) License This work is licensed under a Creative Commons Attribution 2.0 Generic License.
https://codelabs.developers.google.com/codelabs/cloud-vision-api-python
CC-MAIN-2020-50
refinedweb
3,602
56.35
How to Fill area with color in matplotlib with Python In this article, we are going to learn how to fill the area of any figure with color in matplotlib using Python. For this, we need some basic concept of two popular modules of Python in the world of plotting figure or any diagram i.e. “numpy” and “matplotlib“. Filling area with color in matplotlib using Python We will use the inbuilt function “plt.fill_between()“, this function takes two argument x and y, which denotes from where to where the color will be filled in the figure. Let us see with some examples:- Fill color between x-axis and curve in matplotlib import matplotlib.pyplot as plt import numpy as np x = np.arange(0,10,0.1) y = x**2 plt.plot(x,y,'k--') plt.fill_between(x, y, color='#539ecd') plt.grid() plt.show() Output: In the above example, first, we imported two required module named as matplotlib and numpy by writing these two lines of codes :- - import matplotlib.pyplot as plt - import numpy as np then we create a numpy array and stored in a variable named as x and then we established a relation between x and y that “y = x**2” and then we used the function “plt.fill_between” to fill the color between x-axis and curve. And then we use “plt.grid()” to mark grid on the figure. Filling color between y-axis and curve in matplotlib import matplotlib.pyplot as plt import numpy as np x = np.arange(0,10,0.1) y = x**2 plt.plot(x,y,'k--') plt.fill_between(x, y, np.max(y), color='#539ecd') plt.grid() plt.show() Output: The explanation for the above example is as same as the first example. The only change we added to this is that we added one extra argument inside the “plt.fill_between” function. Fill color between two curves in matplotlib import matplotlib.pyplot as plt import numpy as np def f1(x): return 1.0 / np.exp(x) def f2(x): return np.log(x) x = np.arange(0.01,10,0.1) y1 = f1(x) y2 = f2(x) plt.plot(x,y1,'k--') plt.plot(x,y2,'k--') plt.fill_between(x, y1, y2, color='#539ecd') plt.grid() plt.xlim(0,10) plt.ylim(-1,2.5) plt.show() Output: In the above example, the only changes we made are that we plotted two curves in a single figure, and then we use the function for filling the color between these two curves. And we also used the “plt.xlim” and “plt.ylim” to limit the x-axis and y-axis coordinates. You can also read the following articles:-
https://www.codespeedy.com/fill-area-with-color-in-matplotlib-with-python/
CC-MAIN-2020-50
refinedweb
450
76.82
dtdGenerator — Generates the ADMIT DTD files.¶ This module defines the DTD generator for ADMIT. The DTDs are used to validate the XML I/O. In addition to generating the DTDs for each AT and BDP, the following files are also created: - bdp_types.py — constants and types definition file - __init__.py — in both admit/at and admit/bdp Usage: import dtdGenerator dtdGenerator.generate() - class admit.xmlio.dtdGenerator. DtdGenerator[source]¶ Class to generate the dtd files for ADMIT. The dtd’s are generated by searching for all BDP, AT, and utility class files (those located in admit/util and inherit from UtilBase). Each file is then loaded and introspected, and dtd’s are generated by both the name and the type of each class attribute. Attributes Methods generate()[source]¶ Main method for generating the dtd. Searches through the BDP and AT directories and generates dtds for each file found. getType(i, val, key)[source]¶ Method to generate a string version of the data type of the input value. write_at(key, val)[source]¶ Method which takes a class name and AT sub-class instance. It creates a dtd file for the class and collects relevant info for the file.
http://admit.astro.umd.edu/module/admit.xmlio/dtdGenerator.html
CC-MAIN-2021-31
refinedweb
196
50.94
I am not sure if we are utilizing this for Jetty...I didn't see a virtual-server style parameter in the geronimo-jetty.xsd. Perhaps it is somewhere else...which leads me to a discussion we have had in the past... I would very much be interested in taking the geronimo-jetty.xml and geronimo-tomcat.xml files and merge them as a common version, such as a geronimo-web.xml...and remove the jetty namespace attributes from the xsd. This way both containers can surf off the same file....and reuse the xmlbean code. I did not see anything in the xds that was container specific. If anything is container specific, the builder could ignore the parameter. Thoughts? Jeremy Boynes wrote: > Jeff Genender wrote: > >> >> I also added a Tomcat Builder so we will no longer rely on the Jetty >> version. The main reason for this is that Tomcat supports virtual >> hosts (declared by additional HostGBean objects in the plan). The web >> applications can now include a geronimo-tomcat.xml file in the WEB-INF >> file which is very similar to the Jetty version. The only difference >> is support for the <virtual-server> parameter. This allows you to >> deploy your web application to a specific virtual host. >> > > IIRC Jetty has this function as well - can we merge the vhost changes > back into the jetty-builder? > > -- > Jeremy
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200504.mbox/%[email protected]%3E
CC-MAIN-2014-42
refinedweb
227
60.41
Hi, This approach works, but there is another way.... Have you considered using a bridging firewall? All you need to do is bridge the external and internal NICS, apply the bridge netfilter patch (or use etables). Then perform all of your filtering on the bridged interface. Some advantages of this approach: - the firewall requires no ip of it's own and is harder to attack as a result (also good if you only have a limited number of public IP's at your disposal) - no NAT required (NAT is well, ugly:) - potentially, no reconfiguration of your existing servers is required Check out : (older) (far more up to date) Hope this helps.. regards charlie On Thu, 2004-07-08 at 10:09, Nathan Barham wrote: > Ognen Duzlevski wrote: > > Hi, > > > > we have several boxes with unique public IP addresses which are part of > > a big .edu namespace. I would like to put these > > machines behind one single firewall and still keep their names. Is it > > possible to have all names point to the firewall > > machine and then have the firewall direct the specific request to a > > specific box behind it? > > > > So, if F is firewall.x.edu and I have A.x.edu, B.x.edu and C.x.edu I > > want to have A, B and C behind F. A, B and C > > should now point to F and F will direct all outside requests to A, B or > > C based on the name. > > > > Thanks, > > Ognen > > > > > > Ognen, > > You could do it like this: > > 1) Change the public IP's of the servers you want to protect to > something in a private range (192.168.x.x etc.). > > 2) Create interface aliases for their existing public IP's on the > external interface of your firewall > > 3) Forward incoming/outgoing traffic through your firewall with iptables. > > You can assign interface aliases on a Debian box in /etc/network/interfaces. > > As an example, lets say your firewall's external interface is eth0, and > it's public IP is 66.224.54.118. Your firewall has another interface > (eth1) which is the gateway to your DMZ, and has IP 192.168.1.1. You > have a web server in that DMZ with IP 192.168.1.2, and you want it to > handle incoming traffic for. Your DNS A record for > currently resolves to 66.224.54.117, and you don't want to change that. > > To set this up, your /etc/network/interfaces file would look something > like the following: > > auto eth0 > iface eth0 inet static > address 66.224.54.118 > netmask 255.255.255.248 > network 66.224.54.112 > broadcast 66.224.54.119 > gateway 66.224.54.113 > > auto eth0:1 > iface eth0:1 inet static > address 66.224.54.117 > netmask 255.255.255.248 > > auto eth1 > iface eth1 inet static > address 192.168.1.1 > netmask 255.255.255.0 > network 192.168.1.0 > broadcast 192.168.0.255 > > ( I don't know offhand what (or if) the limit is for the number of > aliases allowed per interface, but I think I recall doing three > successfully. Also note that though eth0:1 will show up with ifconfig, > AFAIK iptables will only refer to that interface as eth0.) > > > OK, now you want to get the incoming www traffic that is headed for > 66.224.54.117 through your firewall to your www server and back out. > The iptables rules would go something like this: > > # VARIABLES > IPTABLES=/sbin/iptables # Path to iptables > EXT_IP="66.224.54.118" # eth0 IP > EXT_IF="eth0" # External interface > DMZ_IF="eth1" # DMZ interface > DMZ_IP="192.168.1.1" # eth1 > WWW_IP="66.224.54.117" # Virtual external www IP > DMZ_WWW_IP="192.168.1.2" # WWW server in DMZ > > > # PREROUTING CHAIN - DNAT the incoming tcp port 80 and 443 > # so it can be forwarded > > > $IPTABLES -t nat -A PREROUTING -p tcp -i $EXT_IF -d $WWW_IP / > --dport 80 -j DNAT --to-destination $DMZ_WWW_IP > > $IPTABLES -t nat -A PREROUTING -p tcp -i $EXT_IF -d $WWW_IP / > --dport 443 -j DNAT --to-destination $DMZ_WWW_IP > > > # FORWARD CHAIN > > # Let already established forwarded conversations continue. > $IPTABLES -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT > > # Now forward the DNAT'ed packets to your www machine > # in the DMZ > > $IPTABLES -A FORWARD -i $EXT_IF -o $DMZ_IF -p tcp -d $DMZ_WWW_IP / > --dport 80:80 -j ACCEPT > > $IPTABLES -A FORWARD -i $EXT_IF -o $DMZ_IF -p tcp -d $DMZ_WWW_IP / > --dport 443:443 -j ACCEPT > > > # POSTROUTING - Now SNAT outgoing packets from your www server to your > # public WWW IP > > $IPTABLES -t nat -A POSTROUTING -o $EXT_IF -s $DMZ_WWW_IP -j / > SNAT --to $WWW_IP > > # Make sure everything else going out eth0 is SNAT'ed to the > # firewall's IP. > $IPTABLES -t nat -A POSTROUTING -o $EXT_IF -j SNAT --to $EXT_IP > > > Of course your firewall will likely also have a third interface for your > private LAN. If so, and you want the machines on your LAN to use the > services provided in your DMZ, you will probably need to find a way for > them to resolve to 192.168.1.2 (instead of your public www IP) > and then provide access through your firewall for them as well. > > You might want to look into Shorewall or other iptables frontends to > help you out if you don't like writing your own rule sets. > > Hope that helps. > > -Nathan -- ============================ Charles Kidson Systems Administrator General Pants Group [email protected] ph 02 9290 0813 fx 02 9299 6485 mb 0428 61 7766 ============================
https://lists.debian.org/debian-firewall/2004/07/msg00022.html
CC-MAIN-2016-07
refinedweb
907
72.26
!Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos ([email protected]), CBLU, University of Leeds > This section discusses some of the general issues in compiling and linking with the SSM libraries. Still, the best way to learn about writing and compiling a VAS and client is to read the tutorial, Writing Value-Added Servers with the Shore Storage Manager. All of the include files needed to build servers and clients are located in SHROOT/include. Any server code using the SSM should include sm_vas.h. Since clients do not need all of the SSM functionality, they need only include sm_app.h. The RPC package include files are located in SHROOT/include/rpc and are usually included with this line: #include <rpc/rpc.h> All of the libraries needed to build servers and clients are located in SHROOT/lib. Clients only need libshorecommon.a. Servers need both libsm.a and libshorecommon.a The RPC package library is librpclib.a. There are two pre-compiled version of these libraries. The are included in the debugging and no-debugging binary releases. The debugging version not only includes symbol table information ( -g option to gcc), but also has considerable additional auditing and assert checking code. This includes code that audits data pages whenever an update is made, performs monitoring to detect thread stack overflow, and checks over 1,400 additional assertions. See the Shore Release document for more information on these releases. Note: when compiling for linkage with the debugging release, the compiler flag -DDEBUG should be used. The SSM uses a number of templates One of the issues that is often confusing is controlling template instantiation. All of the template instantiations needed by the SSM are already included in the libraries. However, due to a bug in gcc 2.6.* (supposedly to be fixed in 2.7.0), it is possible to have problems during linking due to multiple definitions of template code. To avoid this, and to have smaller executables, we use the gcc option -fno-implicit-templates in the Makefile from the tutorial example. This causes gcc not to emit any template code unless the template is explicitly instantiated. Here is an example of explicit instantiation from the tutorial #ifdef __GNUG__ // Explicitly instantiate lists of client_t. template class w_list_t<client_t>; #endif The SSM has been used to build a number of value-added servers. Some of these are publicly available. You may find these helpful in writing your own. The Shore Server is the server for the Shore object repository. The Shore Server actually has two interfaces. One is used by SDL applications and the other is the NFS interface. The Shore Server code is available in src/vas. The SSM testing shell is a server with a TCL interface designed to test the SSM. The code is available in src/sm/ssh. No documentation is available yet. Paradise is a GIS system still under development. It will be publicly available in the future. See for more information.
http://www.cs.wisc.edu/shore/1.0/ssmapi/node11.html
crawl-001
refinedweb
503
59.5
Making your WinForm applications User Friendly is important. One aspect of a good user experience is informing your user that your application is unresponsive during short periods of work. This article introduces an effective and simple way of adding application wide WaitCursors using one line of start-up code. Recently, I have been working on a WinForms project at home that occasionally performs some short (less than 5 seconds) running tasks. I wanted the user to be informed that during that short period the UI is unresponsive, so I chose to use the Cursors.WaitCursor to indicate this. This article shows how I came to my final WaitCursor library implementation which I believe is a completely re-usable library.�s it! You can of course use any Cursor you like, you can use one of the predefined Cursors or you can create a new cursor and use that instead. You can also fine tune the amount of work time that will elapse before the Cursor is shown: ApplicationWaitCursor.Delay = new TimeSpan(0, 0, 0, 1, 0); // Delay of 1 second During development of a recent WinForms project at home, I had decided to use the cursor to indicate short running tasks to the user, like so: private void DoShortRunningTask() { Cursor.Current = Cursors.WaitCursor; .. do some work .. Cursor.Current = Cursors.Default; } Now, before you say "Where�s the exception handling code?", I am trying to illustrate how I eventually came to my final cut of the WaitCursor library. The above code works, or, at least it works most of the time. I, of course, found that without any exception handling I could end up with the WaitCursor on permanently, so I quickly came up with: private void DoShortRunningTask() { Cursor.Current = Cursors.WaitCursor; try { .. do some work .. } finally { Cursor.Current = Cursors.Default; } } Now that�s better, I've guaranteed that the Cursor will always be returned to the Default cursor even if an exception occurs. However, I found it arduous to wrap that code around every short running task that I was developing. Then I remembered how I used to use C++ stack based destructors to perform tear down work upon exiting of a method: void DoShortRunningTask() { StWaitCursor cursor = new StWaitCursor(); .. do some work .. // Implicitly called ~StWaitCursor returns the Cursor to Default } But I couldn't use C# destructors the same way since you can't guarantee when a C# destructor is called since the Garbage Collector thread is responsible for that. Instead, C# uses a different language feature, the using statement which implicitly calls the Dispose method of objects that implement IDisposable. Although (I find it's) not quite as easy to use as the C++ destructor, you can achieve the same result with the using statement: private void DoShortRunningTask() { using (new StWaitCursor()) { .. do some work .. } } This code, of course, requires the StWaitCursor class: public class StWaitCursor : IDisposable { public StWaitCursor() { Cursor.Current = Cursors.WaitCursor; } public void Dispose() { Cursor.Current = Cursors.Default; } } There, nice and simple. I found, however, that if my short running task was too short then the user sees a quick flickering of the Cursor to and from the WaitCursor, quite annoying. So I decided I needed a way of turning the WaitCursor on after a predefined amount of time during a task. After quite a few iterations and refactorings, I came up with what I called the StDelayedCallback class (see source code) which is a generic class that once instantiated will wait for a specified amount of time before calling the Start method of an IDelayedCallbackHandler, and if Start was called that it was guaranteed to call the Finish method of the same interface. Once I had this, I could easily implement the interface such that the WaitCursor was turned on when Start was called and returned the Default cursor when Finish was called. During development of the WaitCursor library, I discovered that I could not set Cursor.Current on any thread other than the GUI thread. This is where the Win32 AttachThreadInput method can be used to get around this problem effectively. So I developed the StThreadAttachedDelayedCallback class which wraps up the call to the AttachThreadInput. So eventually I ended up with a generic StDelayedCallback class, a ThreadInput attached version of it called StThreadAttachedDelayedCallback and a specific Cursor implementation called StWaitCursor. Incidentally, I used the prefix St since I had originally intended on using these classes in a similar way to the C++ Stack based classes I had used in the past. So up until now, I had intended on using the following: private void DoShortRunningTask() { using (new StWaitCursor(new TimeSpan(0, 0, 0, 0, 500)) { .. do some work .. } } However, it still meant I had to be explicit about where I wanted my WaitCursor to show. Then I had an attack of brilliance and came up with the ApplicationWaitCursor singleton class which neatly wraps the StWaitCursor such that whenever the application is working, or rather, whenever it�s not regularly calling OnApplicationIdle, then the StWaitCursor kicks in and shows the WaitCursor. The only caveat I found was when I dragged a window around which blocks the OnApplicationIdle call. So, I also intercept the WM_NCLBUTTONDOWN message (which is sent at the beginning of the window dragging) to temporarily disable the StWaitCursor. That's where I am up to with the current WaitCursor implementation and in brief how I got there. You can easily derive from the StDelayedCallback class to create similar effects. Perhaps you would like to show an Indefinite progress bar during your long running tasks, or indicate "Saving Changes..." in a StatusBar control: private void SaveChanges() { using (new StStatusBar(stbMain, "Saving Changes...") { .. do some work .. } // StStatusBar.IDispose could return the Text // of the StatusBar to its original value } Note that the source code does not contain the StStatusBar class, I am just using it as an example of other ways you could use the StDelayedCallback class. Version 1.0.0.0 (12th March 2005) Version 1.0.1.0 (16th March 2005) General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/cpp/WaitCursor.aspx
crawl-002
refinedweb
1,003
52.29
TM::Tau - Topic Maps, Tau Expressions use TM::Tau; # read a map from an XTM file $tm = new TM::Tau ('test.xtm'); # or $tm = new TM::Tau ('file:test.xtm'); # or $tm = new TM::Tau ('file:test.xtm >'); # or $tm = new TM::Tau ('file:test.xtm > null:'); # read it now and write it back to the file when object goes out of scope $tm = new TM::Tau ('test.xtm > test.xtm'); # create empty map at start and then let it automatically flush onto file $tm = new TM::Tau ('null: > test.xtm'); # or $tm = new TM::Tau ('> test.xtm'); # read-in at the start (i.e. constructor time) and then flush it back $tm = new TM::Tau ('> test.xtm >'); # load and merge maps at constructor time $tm = new TM::Tau ('file:test.xtm +'); # load map and filter it with a constraint at constructor time $tm = new TM::Tau ('mymap.atm * myontology.ont'); # convert between different formats $tm = new TM::Tau ('test.xtm > test.atm'); When you need to make maps persistent, then you can resort to either using the prefabricated packages TM::Materialized::*, or you can build your own persistent forms using any of the available synchronizable traits. In either case your application will have to invoke methods like sync_in and sync_out to copy content from the resource into memory and back. While this gives you great flexibility, in some cases your needs may be much simpler: A map should be sourced into memory when the map object is created. A typical use case is a web server application which accesses the map on disk with every request and which returns parts of the map to an HTTP client. A map is created first in memory and is flushed onto disk at destruction time. One example here is a script which extracts content from a relational database, puts it into a map in memory. At the end all map content is copied onto disk. A map is sourced from the disk at map object creation time, you update it and it will be flushed back to the same disk location at object destruction. Your application may be started with with new content to be put into an existing map. So first the map will be loaded, the new content added, and after that the map will be written back from where it came. A map is sourced from the disk, is translated into some other representation and is written back to disk to another location or format. As an example, you might want to convert between XTM and CTM format. A map is sourced from some backend, is transformed and/or filtered before being used. Your application could be one which only needs a particular portion of the map. So before processing the map is filtered down to the necessary parts. One or more maps are sourced from backends and are merged before processing. If you want to provide a consolidated view over several different data resources, you could first bring them all into topic map form, and then merge them before handing it to the application. What is common to all these cases is that there is a breath-in phase when the map object is constructed, and a breath-out phase when it is destroyed. In between theses phases the map object is just a normal instance of TM. To control what happens in these two phases, this package provides a simple expression language, call Tau. With it you can control Here the language provides a URI mechanism for addressing, such as file:tm.atm or To merge two (manifested or virtual) topic maps together the + operator can be used file:tm.atm + To transform product data to only something a customer is supposed to see, the * can be used: product_data.atm * file:customer_view.tmql NOTE: Later versions of this package will heavily overload the operators to also operate on other objects. The Tau expression language supports two binary operators, + and *. The + operator intuitively puts things together, the * applies the right-hand operand to the left-hand operand and behaves as a transformer or a filter. The exact semantics depends on the operands. In any case, the * binds stronger than the +, and that precedence order can be overridden with parentheses. The parser understands the following syntax for Tau expression: tau_expr -> mul_expr mul_expr -> source { ('>' | '*') filter } source -> '(' add_expr ')' | primitive add_expr -> mul_expr { '+' mul_expr } filter -> '(' filter ')' | primitive primitive -> uri [ module_spec ] module_spec -> '{' name '}' Terms in quotes are terminals, terms inside {} can appear any number of times (also zero), terms inside [] are optional. All other terms are non-terminals. NOTE: Filters are planned to be composite, hence the optional bracketing in the grammar. The (pre)parser supports the following shortcuts (I hate unnecessary typing): -as source is interpreted as STDIN (via the TM::Serializable::AsTMa trait). Unless you override that. -as filter is interpreted as STDOUT (via the TM::Serializable::Dumper trait). Unless you override that. # memory-only map null: > null: # read at startup, sync out when map goes out of scope file:test.atm > file:test.atm # copy AsTMa= to XTM file:test.atm > file:test.xtm # using a dedicated driver to load a map, store it onto a file dns:my.dns.server { My::DNS::Driver } > file:dns_snapshot.atm # this will only work if the My::DNS::Driver supports to materialize # the whole map # read a map and compute the statistics file:test.atm * URIs are used to address maps. An XTM map, for example, stored in the file system might be addressed as file:mydir/somemap.xtm for a relative URL (relative to an application's current working directory), or via an absolute URI such as The package supports all those access methods (file:, http:, ...) which LWP supports. Obviously a different deserializer package has to be used for an XTM file than for an AsTMa or LTM file. Some topic map content may be in a TM backend database, some content may only exist virtually, being emulated by a dedicated package. While you may be mostly fine with system defaults, in some cases you may want to have precise control on how files and other external sources are to be interpreted. By their nature, drivers for sources must be subclasses of TM. A similar consideration applies to filters. Also here the specified URI determines which filter actually has to be applied. It also can define where the content eventually is stored to. Drivers for filters must be either subclasses of TM::Tau::Filter, or alternatively must be a trait providing a method sync_out. When a Tau expression is parsed, the parser tries to identify which driver to use for which part of that composite map denoted by the expression. For this purpose a pattern matching approach is used to map regular expression patterns to driver package names. If you would like to learn about the current state of affairs do a use Data::Dumper; print Dumper \%TM::Tau::sources; print Dumper \%TM::Tau::filters; Obviously, there is a distinction made between the namespace of resources (residing data) and filters (and transformers). Each entry in any of the hashes contains as key a regular expression and as value the name of the driver to be used. That key is matched against the parsed URI and the first match wins. Since the keys in a hash are not naturally ordered, that is undefined. At any time you can override values there: $TM::Tau::sources{'null:'} = 'TM'; $TM::Tau::sources{'tm:server\.com'} = 'My::Private::TopicMap::Driver'; or delete existing ones. The only constraint is that the driver package must already be required into your Perl program. During parsing of a Tau expression, two cases are distinguished: TM::Tau::sourceshash. The value of that entry will be used as class name to instantiate an object whereby one component ( uri) will be passed as parameter like this: $this_class_name->new (uri => $this_uri, baseuri => $this_uri) This class should be a subclass of TM. Class::Trait->apply ( $node => $trait => { exclude => [ 'mtime', 'sync_out', 'source_in' ] } ); If there is no match, this results in an exception. Another way to define which package should be used for a particular map is to specify this directly in the tau expression: { My::BrokenXTM } In this case the resource is loaded and is processed using My::BrokenXTM as package to parse it (see TM::Materialized::Stream on how to write such a driver). The constructor accepts a string following the Tau expression "Syntax". If that string is missing, null: will be assumed. An appropriate exception will be raised if the syntax is violated or one of the mentioned drivers is not preloaded. Examples: # map only existing in memory my $map = new TM::Tau; # map will be loaded as result of this tau expression my $map = new TM::Tau ('file:music.atm * file:beatles.tmql'); Apart from the Tau expression the constructor optionally interprets a hash with the following keys: sync_in(default: 1) If non-zero, in-synchronisation at constructor time will happen, otherwise it is suppressed. In that case you can trigger in-synchronisation explicitly with the method sync_in. sync_out(default: 1) If non-zero, out-synchronisation at destruction time will happen, otherwise it is suppressed. Example: my $map = new TM::Tau ('test.xtm', sync_in => 0); # dont want to let it happen now .... # time passes $map->sync_in; # but now is a good time This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~drrho/TM/lib/TM/Tau.pm
CC-MAIN-2016-36
refinedweb
1,577
62.98
I was thinking to make a Desk Notifier which will notify me about my new email, Facebook & Twitter notification and finally I made it. I used the coolest single board computer Raspberry Pi to bring the thing in reality. This Desk Notifier will notify you about your new Gmail, Facebook notification and will show you the total number of new emails, total number of likes of your Facebook page, total number of notifications of your Facebook account and number of your Twitter followers. You can easily modify it to show others information such as number of tweets, number of friend requests, number of messages etc. I used only two seven segment for each information to show and for that it can show maximum 99 notifications. You can easily extend it if you like. Step 1: Required Tools Components: - Raspberry Pi (Dexter Industries) (Gearbest) - SD Card with Raspbian operating system (Dexter Industries) - Raspberry Pi WiFi Adapter (Dexter Industries) - Raspberry Pi Power Supply (Dexter Industries) - MAX7219CNG LED Driver IC (Sparkfun) - Seven Segment Display (Common cathode- 8pcs) - LED (5pcs) - Resistor (10k- 1pc, 220ohm - 5pcs) - PCB Board (8" x 4") - PVC Board (12" x 10") Tools: - PCB Drill - Soldering Iron - Hot Glue Gun - Scale - Jumper Wire - Wire Cutter Step 2: Setup Raspberry Pi To get started with Raspberry Pi you need an operating system. NOOBS (New Out Of the Box Software) is an easy operating system install manager for the Raspberry Pi. If you already bought a sd card with pre-installed operating system then just escape this step. If not follow NOOBS SETUP guide. You can follow the video from Raspberry Pi Foundation. For more: Instructables: All-in-One Raspberry Pi Getting Started Guide Instructables:Getting started with Raspberry PI Step 3: Setup & Test of Email Notification You need to download and install few things in order for Python to be able to properly check your Gmail inbox and show the number of unread email in the seven segment display. Connect your Raspberry Pi to your PC using Putty and enter the following command to the terminal: sudo apt-get install python-dev sudo apt-get install python-pip sudo pip install feedparser sudo easy_install -U distribute sudo apt-get install python-rpi.gpio After executing all the command successfully try the following code snippet. import RPi.GPIO as GPIO, feedparser, time DEBUG = 1 USERNAME = "xxxxxxxx" # just the part before the @ sign, add yours here PASSWORD = "********" # password of your email NEWMAIL_OFFSET = 1 # my unread messages never goes to zero, yours might MAIL_CHECK_FREQ = 60 # check mail every 60 seconds while True: newmails = int(feedparser.parse("https://" + USERNAME + ":" + PASSWORD +"@mail.google.com/gmail/feed/atom")["feed"]["fullcount"]) if DEBUG: print "You have", newmails, "new emails!" time.sleep(MAIL_CHECK_FREQ) Copy the code into a file gmail-test.py or download the code directly from the link below. Transfer the file to home directory of Raspberry Pi using FileZilla and run the following command from the terminal: sudo python gmail-test.py If everything works well you will get following output. Step 4: Setup & Test of Facebook Notification & Like Count We will access our Facebook account using Facebook Graph API. Most request. To get Access Token follow the steps: 1. Go to 2. Click the button on the right to Get Token Drop Down came down to it. 3. Choose to Page Access Token 4. Select necessary field from the left side and click Submit. 5. Copy Access Token & id. We will use it to our program. OK, we got the Access Token. Now, we need to install some python module to work with Facebook. Go to terminal window and type the following command: sudo pip install urllib2 Wait for a while. OK, your urllib2 module is now installed. Copy the code snippet into facebook-test.py or download the attached file directly. import urllib2 import json import time def get_page_data(page_id,access_token): api_endpoint = "" fb_graph_url = api_endpoint+page_id+"?fields=id,name,likes,unread_notif while 1: page_id = "1664109577184012" # username or id token = "CAACEdEose0cBAKBSd9olmJZA3rMZCUy4XZB8qDXwiM49G4OgfYbJQHYNWmyzcFnuTeunGyQZBZChcaEoC8uEjTZCNyWpPtvIWEOkEY7H5AZBFEmZBAFeXEYjzCCOob1ZAK6qwIMskMBQdLjcWBJpM5ZCIeUytLAWTgJwkeXZCwZChzeX5hZCk3kEZC86w25KfmItZBHepMJIZA67VYBYCgZDZD" # Access Token page_data = get_page_data(page_id,token) print "Page Name:"+ page_data['name'] print "Likes:"+ str(page_data['likes']) print "Unread notifications:"+ str(page_data['unread_notif_count']) time.sleep(0.5) Transfer the file to Raspberry pi and run it using the command: sudo python facebook-test.py If everything works well you will get output like following. Step 5: Setup & Test of Twitter To access your Twitter account you need Access Key. Click here to learn how to get Twitter Access Key. Got it? Now, we need to download tweepy, an excellent python module to access Twitter API. To download and install just type the command: sudo pip install tweepy We are ready with Access Key and required module. It is the time to make fun with it. Copy the code snippet and save it to twitter-test.py. import oauth, tweepy, sys, locale, threading from time import localtime, strftime, sleep def init(): global api consumer_key = "xxxxxxxxxxxxxxxxxxxxxxx" # your access key consumer_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" access_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" access_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_key, access_secret) api = tweepy.API(auth) user = api.get_user(3318190836) print "User: " + user.screen_name print "Followers: " + str(user.followers_count) print "Friends: " + str(user.friends_count) print "Favorites: " + str(user.favourites_count) #print api.direct_messages() init() If it works well you will get following output. Step 6: Interfacing of Raspberry Pi & MAX7219 LED Driver The MAX7219 lets us control lots and lots of LEDs using just a few Raspberry Pi pin-outs. No hassles with multiplexing, latching, refreshing or using up all your outputs – it handles everything for us. We just send commands to the MAX7219 and we can control up to 64 LEDs (a 8x8 LED Matrix) or eight seven (8 including the decimal point) segment displays, you can even chain multiple MAX7219s together to drive loads more. All this via just a few pins. Both the Raspberry Pi and the MAX7219 support SPI (Serial Peripheral Interface), a good idea then to the get the RPi to talk to a MAX7219 via its very own SPI interface. By default SPI protocol is turned off but you can enable it very easily and can send and receive data. Before going further, lets connect Raspberry Pi to MAX7219 IC. Here is the pin out: We have completed connection. Now, lets enable SPI interface of Raspberry Pi. To do this, 1. Open terminal and type: sudo raspi-config A configuration window will appear like below. 2. Press down arrow key and select Advanced Options & click Enter. 3. Select SPI and click Enter 4. It will ask for confirmation, just press Enter to yes. 5. After confirming a new window will appear asking you like to load kernel default or not. Select yes. 5. You may ask to restart your Pi. Restart it. Now your SPI interface is enable. Cascading, power supply & level shifting The MAX7219 chip supports cascading devices by connecting the DIN of one chip to the DOUT of another chip. You can control lots of seven segment display or led matrix by cascading several MAX7219 IC. Raspberry PI can only supply a limited amount of power from the 5V and 3.3V rail, so it is recommended that any LED matrices or seven segment are powered separately by a 5V supply, and grounded with the Raspberry PI. It is possible to power one or two LED matrices directly from a Raspberry PI, but any more is likely to cause intermittent faults & crashes. Raspberry Pi GPIO ports used 3.3V for SPI, and MAX7219 IC operate on 5V so a simple level shifter should be employed on the DIN, CS and CLK inputs to boost the levels to 5V. It is possible to drive the IC directly by the 3.3V GPIO pins and in case of mine it works well. As I am driving the IC from 3.3V GPIO pins directly for that I used 3.3V supply for the VCC pin of the IC. I experimented with 5V but I got better stability from 3.3V supply. A 3.7V Li-ion battery works very well. You can use Li-ion battery directly to bias MAX7219 IC. Step 7: Make the Circuit Diagram The circuit diagram is designed by Eagle schematic editor. Source file is attached below. If you notice, a 10k resistor is connected between ISET and VCC pin of MAX7219CNG LED driver is. This resistor act as current limiter. I used 10k for 40mA segment current. You can find details about the value of this resistor to manufacturer datasheet. Eight common cathode seven segment LED display is used for displaying four information. Two 7 segment for each information and for that it can show maximum value of 99. You can use more display for displaying greater value. The reason for using common cathode display is that, MAX7219 driver IC can drive common cathode display only. I used 8 LED into four pin, each has two parallel LED which will blink when new notification or email arrived. Pin header SV1 is used to connect with Raspberry Pi and power source. Step 8: Design of PCB PCB layout was designed using Eagle software. Four display group was created containing two seven segment in each group as I want to show four information. I created a provision for connecting LEDs above the displays. I did not implement it now but you can use it to blink when a new notification will come. In the last stage you will see i used the logo of Gmail, Facebook & Twitter at the bottom of the display to specify which information it is showing. You can use led at the bottom of the logo to make it bright. Option is there, you can use it in your own way. Step 9: Make the PCB (Toner Transfer Method) I made the PCB for my project using toner transfer method. Toner transfer method is an excellent method for DIY PCB making. There are many excellent methods are available in the internet about DIY PCB making, so I don't like to explain it here. If you are interested about toner transfer method then you can follow: 1. PCB Etching Using Toner Transfer Method 2. Most Simple Home-Made PCB by Toner Transfer 3. Cheap and Easy Toner Transfer for PCB Making Step 10: Drill the PCB After etching the PCB you have to drill the PCB board. I use home made mini drill to drill the PCB. For details just follow the instructables: DIY Pcb Hand Drilling Machine. Step 11: Solder All the Components If you had made and drilled your PCB now it is the right time to put all the components to the PCB board and solder them. Some soldering technique is required to do the job nicely. Are you new in soldering? Have no previous experience in soldering? Don't panic! Soldering is an easy task!! Just follow the links: 1. Instructables: How to solder - the secrets of good soldering 3. How to Solder - Through-hole Soldering Step 12: Make the Box (Cut PVC) A box is required to fit all the component and Raspberry Pi. I made the box using 4mm thick PVC board. Just cut the board as your requirement depends on the size of your box. I made the box of dimension 20mm x 9mm x 5.5mm. For that I made: 2 pcs 20mm x 9mm 2pcs 20mm x 5.5mm 2pcs 9mm x 5.5mm If you have access to leaser cutter then you can make more nice and elegant box using plywood. Step 13: Make the Box (Use Hot Glue) As I have no enough tools to make a nice box, I used hot glue to make a DIY box and I discovered that it is strong enough to make a box for such project. I just use the glue only inner side of the box and enough for it. I uploaded some image of my PVC box. Make your box without the upper part. For upper part follow the next step. Step 14: Make the Box (Upper Part) Yes, it is the time to make the upper part of our box. Make four holes according to the seven segment display to fix the circuit previously made. Consider the distance between the holes when making the hole. Make a measurement using roller. I used pencil to make the draft drawing to the back side of the board. This will help you to make the hole perfect. Step 15: Attach the Circuit to the Upper Part If you cut your board perfectly it will fit with the circuit board made earlier. Attach the circuit board to the upper part of the box. If it fit perfectly then congratulation! Now you can test your circuit either it is working or not. I am sure your circuit will work perfectly. I attached some image of working circuit. It is not our complete project. Just for demo testing. Step 16: Let's Test We are almost ready to test our circuit and configuration. Before testing we need some more download. To communicate with MAX7219 from Raspberry Pi we will use max7219 Python module. To download and install the module type the following commands: sudo pip install spidev git clone sudo python max7219/setup.py install We are completely ready to go. Type the following command into terminal python You will get like bellow Now, type the following commands import max7219.led as led device = led.sevensegment() device.write_number(deviceId=0, value=3.14159) Is the value of pi is displayed into your display board? Congratulation!!! You made it. Step 17: Complete Your Setup Now fix the upper part along with the circuit to the box included with Raspberry Pi. Run the python program attached with the next step. The notifier will notify you about your new unread email, your Facebook notifications, total like of your Facebook page, Total follower of your Twitter account. You can easily modify the program to display others information such as friend requests, total unread Facebook message etc. Even you can use it to display the time & date as use it as a desk clock. Are you ready to run the complete program? Follow the net step. Step 18: Run the Complete Program We completed all of the steps to make the physical thing. Now it is the high time to make the thing live using complete Python program. Clearly follow the program and replace "xxxxxxxxxxxxxxx" marks with your information. #!/usr/bin/env python import RPi.GPIO as GPIO, feedparser, time import urllib2 import json import time import oauth, tweepy, sys, locale, threading from time import localtime, strftime, sleep import max7219.led as led device = led.sevensegment() ###### Gmail DEBUG = 1</a> token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Access Token page_data = get_page_data(page_id,token) print "Page Name:"+ page_data['name'] print "Likes:"+ str(page_data['likes']) like_count = page_data['likes'] print "Link:"+ page_data['link'] print "Unread notifications:"+ str(page_data['unread_notif_count']) notification_count = page_data['unread_notif_count'] print "Unread message:"+ str(page_data['unread_message_count']) #time.sleep(0.5) def twitter(): global follower global api # consumer_key = "xxxxxxxxxxxxxxxxxxxxxx" # use your access key consumer_secret = "xxxxxxxxxxxxxxxxxxxxxxxxx" access_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" access_secret = "xxxxxxxxxxxxxxxxxxxxxxxxx" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_key, access_secret) api = tweepy.API(auth) user = api.get_user(xxxxxxxxxx) # your user id print user.screen_name print user.followers_count follower = user.followers_count print user.friends_count print user.favourites_count def reverse(n): if(n<10): return n*10 else: return int(str(n)[::-1]) def display(): # form a number from all information all_value = (reverse(follower) * 1000000) + (reverse(like_count)*10000) + (reverse(notification_count)*100) + reverse(mail) print all_value device.write_number(deviceId=0, value=all_value) while True: gmail() facebook() twitter() display() time.sleep(0.6) You can download the source file attached below. Run the program using the following command: sudo python iot-dashboard.py If it works fine then CONGRATULATION!!!! If you want to load the program automatically after reboot of raspberry pi then follow the instructions: 1. Make your file executable using the command: sudo chmod +x iot-dashboard.py You can test it by running the program directly by typing: ./iot-dashboard.py Even though you didn't call upon Python the program should still run the same as if you'd typed python iot-dashboard.py. The program can only be run by calling it with it's full location /home/pi/iot-dashboard.py or from the current directory by using ./ as the location. 2. Make the program auto running:. 2.1 On your Pi, edit the file /etc/rc.local using the editor of your choice. You must edit with root, for example: sudo nano /etc/rc.local 2.2 Add commands below the comment, but leave the line exit 0 at the end, then save the file and exit. python /home/pi/iot-dashboard.py & Now reboot your Pi and enjoy Your Desk Notifier. You Made it!!!
http://www.instructables.com/id/Raspberry-Pi-Desk-Notifier/
CC-MAIN-2017-22
refinedweb
2,794
66.54
Building RJMetrics Pipeline How the 25-person RJMetrics engineering team tackled one of today’s most persistent data challenges. Update: RJMetrics Pipeline is now Stitch. Read the launch announcement here. I run the 25-person engineering team at RJMetrics. Since our founding in 2009, we’ve built a full-stack business intelligence product. Full-stack BI involves a lot of technology: we manage the entire ETL process, the data warehouse (built on Redshift), have our own custom caching layer, and an entire analysis front-end. We sell this entire stack of technology as an integrated solution. Earlier today, we announced the launch of our new product: RJMetrics Pipeline. Pipeline is a modified version of the data infrastructure layer from our full-stack product, released as a standalone product. It solves a very straightforward, yet always vexing, problem — consolidating all of an organization’s data into a single, high-performance data warehouse on an ongoing basis. It does this in near-real-time (seconds to minutes, depending on data source), and it can scale to support arbitrarily large data volumes. In this post, I’m going to do a deep dive on the technology that’s at the heart of Pipeline. I’ll lay out our initial requirements, our core challenges, and the design decisions we made as a result. Aside: why did we build Pipeline? We think that today’s biggest analytical challenge is data proliferation. Data no longer just lives in a handful of relational databases on the corporate network. Today, data is located throughout the cloud, stored in varying formats by many different providers, and consolidating it into a single warehouse for analysis is challenging. Many are solving this problem with a combination of home-grown and open-source technology, but the frequency with which data sources and business needs change means these solutions require frequent updates and maintenance. RJMetrics Pipeline is our solution to that problem. Pipeline design criteria Pipeline has a straightforward job: Continuously extract data from many source systems and load it into a single destination. It sounds simple at this level of abstraction, but the realities of doing this quickly and reliably across a large universe of source systems are complex. Bringing any sufficiently complex system to life involves trade-offs, so we created design criteria at the outset of the project to inform how such trade-offs would be made. We arrived at these criteria after an extensive market validation process that we plan on covering in a separate post. Here they are: - Latency Some decisions need to be made in real time, so data freshness is critical. While there will be latency constraints imposed by particular source data integrations, data should flow through the Pipeline with as close to zero latency as possible. - Scalability We can’t have the system break down when large volumes of data start pouring in. All components of the Pipeline should scale to support arbitrarily large throughput. - Accuracy Data cannot be dropped or changed in a way that corrupts its meaning. Every data point should be auditable at every stage in the pipeline. - Flexibility Since we intend to consolidate data from many sources, we need to accommodate data of different shapes and types. Latency and scalability were the most important criteria to establish at the outset of this project. Implementing a high latency (batch-based) pipeline is technically very different than implementing a low-latency pipeline. And scalability needed to be built in from the ground up: it isn’t something you can bolt on later. Solidifying these criteria up front enabled us to parallelize our development effort while knowing that each of our teams was working towards the same goal. We honestly didn’t face any serious gotchas in the development process, which allowed us to move quickly. Core engineering challenges With the team aligned on functional requirements, the next step was to distill out our technical requirements. We concluded that a service for delivering the above functionality would need to also meet these technical goals: - Multi-tenancy RJMetrics has always been a cloud platform, and we want to maintain that. Running a separate instance for each customer was not an option given our go-to-market strategy. Instead, each client maintains their own destination database, while the pipeline uses shared resources to load those databases. Multi-tenancy interacts in challenging ways with scalability. One customer may send us a billion rows of data in an hour and that can’t delay the fact that another customer sent us a thousand rows in that same hour. More on the solution to this later. - Availability This is critical for the data receiving component — unavailability means we are missing data, and we can’t assume that all data producers will resend data when availability is restored. We intend to guarantee high availability once Pipeline hits general availability, which means just a small number of hours of unavailability per year. - Consistency We have to distribute our workload among many machines in order to meet our scalability and availability requirements, but making this leap introduces complexity. Once a system is distributed, coordination between individual components is a challenge. In Pipeline, this manifests as an ordering problem; we need to guarantee that two data points submitted in order are realized in the same order in Redshift, even if they are handled by different machines. - Monitorability If latency is important, then we need to measure it at every step along the way. Any problems with any stage in the pipeline should be pinpointed and appropriate alerts triggered. A trip down the pipeline Let’s follow a single data point through the pipeline to see how these requirements have influenced the system design. Step 1: make data This is the easy part, or so everyone says. Let’s take the following data point as an example: [ { “client_id”: 1, “namespace”: “MyWebsite.com Activity”, “table_name”: “events”, “action”: “upsert”, “sequence”: 1444355267, “key_names”: [“id”], “data”: { “id”: 312, “action”: “click”, “path”: “/index.html”, “referrer”: null } } ] Many of the design decisions mentioned above are visible right here. First, the data format, as you can probably tell, is JSON. JSON is great for usability; it’s easy to read and write, and pretty much every language out there has library support for it. It also supports arbitrary schemas, nested objects and collections, which is necessary for our flexibility requirement. But, sometimes JSON is too simple. For example, JSON doesn’t have a datatype for dates, and doesn’t differentiate between integer and floating point numbers. To make up for these shortcomings, we also accept data in Transit format. Transit is essentially JSON with an extended type system, and, although it’s not as widely supported as JSON, it has libraries in a number of popular languages. Next, the client_id, namespace and table_id fields serve our multi-tenancy requirement. We separate data based on these values, and they dictate the ultimate destination for the data. (Data must also be submitted with a secret credential to ensure the request is authorized.) Finally, the “sequence” field in the request is required to meet our consistency goal. That field allows the user to specify how to order data points in the request relative to all the others that have been received. Without it, two updates to the same data point could be received in the wrong order, resulting in inaccurate data. In most cases the sequence can simply be the timestamp when the datapoint was generated. Step 2: send data To send the above data to Pipeline, it is attached to the body of an HTTP POST request to pipeline-gate.rjmetrics.com. This fact alone reveals some design choices. First, the Pipeline has a single entry point for data. This makes basic auditing easy — we can just compare the volume of data flowing into the pipe against the volume flowing out and know at a glance that we’re not missing any. Second, we use HTTP in service of our flexibility goal. It’s not the most efficient protocol to use for data transfer, but it is the most widely used. We have built dozens of replicators that submit data this way, and we can be confident that clients on any platform will be able to send data via HTTP. Step 3: into the queues In order to meet our scalability and availability requirements, once data enters the pipeline it must be handled by systems that are resilient to node failure and scale horizontally. The HTTP request is load balanced to an available server with capacity to take on the workload. When data hits the HTTP server it’s a hot potato — now that it has been received, we can’t lose it! The next stop — dubbed the fat queue — is an Apache Kafka queue that serves as persistent, low-latency storage for the HTTP servers to pass the potato off to. Kafka is an ideal choice at this stage because it can support a large volume of low-latency writes while guaranteeing durability. Data submitted to the fat queue isn’t acknowledged until it is replicated to three different Kafka servers to mitigate the possibility of data loss, a requirement of our accuracy goal. The HTTP servers get acknowledgement that the data has been persisted in a small number of milliseconds — fast enough that they can wait for this confirmation before themselves confirming that the data was accepted. Data is picked off of the fat queue by an Apache Spark streaming cluster that performs “demultiplexing” — separating data by client, namespace and table — and micro-batching. We chose Spark because it guarantees at least once delivery of all data points, and has a simple language for defining the operations we required. Our Spark jobs write to a second set of Kafka queues — the “thin queues” — that are dedicated to each client’s data set. This ensures the data stays separated, and protects against the possibility that a single large data set could crowd out smaller data sets, fulfilling our multi-tenancy and scalability requirements. Step 4 — to Redshift The final step is to copy the data from the thin queue into its final destination, Redshift. At this point our flexibility and accuracy requirements crash head-first into each other, presenting some difficult challenges. The incoming data is a JSON blob with a flexible, possibly nested, schema. Redshift, on the other hand, doesn’t support nesting, and requires the schema to be specified up front. We have to use some complex logic to break nested records apart into separate tables, and evolve schemas as new columns and datatypes are identified. Why Redshift? This was purely a question of market pull. Redshift has become, at least for the companies we find ourselves talking to, the default option for warehousing. And as Pipeline loads a warehouse that you manage, we obviously prefer to support the warehouse that most of our potential customers use. We’ve been incredibly impressed with Redshift’s performance for analytical and data write workloads. Over the next few months I’m going to share some data we’ve gathered in our testing of the platform in a series of “Benchmarking Redshift” posts on our blog. What about Transformation? You’ll notice our design criteria didn’t mention anything about data modeling or transformation. In our market validation, we certainly found many use cases where modeling and transformation were important to customers, but we also found that transformation needs have shifted significantly because of the performance profile of Amazon Redshift. Simply put, Redshift is performant enough to handle most transformations, and performing transformations in a familiar language — SQL — and alongside the original raw data is great for auditability. These facts have given rise to the “ELT” model of data processing. While we did need to build some basic transformation functionality to convert data types, decode JSON, and normalize nested data structures, we decided to do as little modeling and transformation in Pipeline as possible. We may build functionality like this in the future, but specifically leaving it out of scope allowed us to focus on executing against the problems that we cared most about: latency and scalability. Conclusion I’ve been reading articles on how other engineering organizations have built their data pipelines for a year or so at this point, and now I’ve finally written my own. I hope you learned something. There is, of course, the hook. If you’re looking for a data pipeline, I’m not going to discourage you from building one yourself. But we’ve spent tens of thousands of engineering hours solving this problem, and we’re giving you 5MM events per month for free, forever. Before you dive into a major engineering effort, give RJMetrics Pipeline a try and see what you think.
https://medium.com/@cmerrick/building-rjmetrics-pipeline-9ad5636deb3
CC-MAIN-2018-34
refinedweb
2,129
52.29
Another month, another amazing XSS Challenge from Intigriti, made by Ivars Vids. My first solution for this was not the intended one, but I hope you guys somehow appreciate it. 🤗 In the end of the writeup, I am going to be presenting you the intended solution, which I just figured out with a few hours of challenge remaining. 🕵️ In-Depth Analysis When we access the page it's possible to see that there is a list of security issues, as known as the 2021 edition of OWASP TOP 10. There is also a search bar from where it's possible to look for specific vulnerabilities. Whatever we type into this input will appear with the s query parameter when submitted. If we try to submit, for example, a s value like <h1>example</h1>, we will get this text being present on two different parts of the page: <html> <head> <title>You searched for '<h1>test</h1>'</title> // First one ... </head> <body> <div id="app"> ... <p>You searched for <h1>test</h1></p> // Second one ... </div> </body> </html> It's worth mentioning two points: - The second part where our <h1>appears, that one inside the <p>tag, actually comes to our browser as <p>You searched for v-{{search}}</p>, and we can verify this by opening the page source. So there is a client-side method for the use of templates happening here. - The first part, which is that one inside the <title>tag, is being escaped just like the second part, so our <h1>example</h1>is treated like a normal text instead of an HTML element. But there's a thing, the <title>tag is not meant to have child elements and the browser will not parse as HTML something that simply goes inside this element. In view of this, we can close the tag and insert our <h1>example</h1>after it. 😄 🏞️ Getting to Know the Scenario By using our payload </title><h1>example</h1>, now our <h1> tag goes to the page body and the browser treats it like a normal HTML element. So...what if we try to replace this <h1> for something like a <script>? Well, if we try a payload like </title><script>alert(document.domain)</script>, it will actually be reflected to the page, but no alert is going to be popped out, and the reason can be found on the page response header: content-security-policy: base-uri 'self'; default-src 'self'; script-src 'unsafe-eval' 'nonce-r4nd0mn0nc3' 'strict-dynamic'; object-src 'none'; style-src 'sha256-dpZAgKnDDhzFfwKbmWwkl1IEwmNIKxUv+uw+QP89W3Q=' There is a Content Security Policy (CSP) defined, which is great because it will not trust in every single thing that pops into the page. For those who are not familiar, a CSP is a security standard that can be defined in order to tell to the environment (in this case, our browser) what should be trusted and what should be restricted. The definition of a Content Security Policy helps to mitigate the risks of a XSS. By looking at what it has to tell us about scripts, we have: script-src 'unsafe-eval' 'nonce-r4nd0mn0nc3' 'strict-dynamic'; I remember from the last XSS Challenge, by reading these slides, that when the strict-dynamic policy is defined, we are able to execute JavaScript if its created by using document.createElement("script"). It would be really terrible if this function was being used somewhe...what!?! function addJS(src, cb){ let s = document.createElement('script'); // Script tag being created s.src = src; // Source being defined s.onload = cb; // Onload callback function being defined let sf = document.getElementsByTagName('script')[0]; sf.parentNode.insertBefore(s, sf); // Inserting it before the first script tag } So we have this function, which creates a script that's supposed to load external code, okay. But where is it used? Let's see: <script nonce="r4nd0mn0nc3"> var delimiters = ['v-{{', '}}']; // Apparently, delimiters for templates addJS('./vuejs.php', initVUE); // addJS being called </script> Our addJS function is being called, the defined source is ./vuejs.php (???) and the onload callback function is initVUE (???), which is defined down below. I promise it will all make sense in the end! 😅 function initVUE(){ if (!window.Vue){ setTimeout(initVUE, 100); } new Vue({ // new instance of Vue being created el: '#app', // All the magic will happen inside div#app delimiters: window.delimiters, // Custom delimiters v-{{ }} being defined data: { "owasp":[ // All the OWASP list inside here ].filter(e=>{ return (e.title + ' - ' + e.description) .includes(new URL(location).searchParams.get('s')|| ' '); }), "search": new URL(location).searchParams.get('s') } }) } If you are not familiar with Vue.js, it's a very popular framework based on JavaScript, just like ReactJS or Angular, and it aims to simplify not only the experience of creating web Interfaces, but also anything that's being handled on the client-side. Also, Vue.js is actually the responsible for picking up that v-{{search}} from the page source and converting it to the value of your s query parameter. It does that by picking the search value you can find in the data object above. The original delimiters recognized by Vue.js are actually {{ }}, but for this challenge, the delimiters are custom ones. That ./vuejs.php request is actually redirecting to a CDN hosted JavaScript file containing the basis of Vue.js, so it can be initialized on the initVUE function. 🚧 HTML Injection Leads to CSTI By assuming that the only way we can directly use JavaScript is calling addJS, we have to find a different place from where it's being called. Here's the only place left: <script nonce="r4nd0mn0nc3"> if (!window.isProd){ // isProd may not be true, hm... let version = new URL(location).searchParams.get('version') || ''; version = version.slice(0,12); let vueDevtools = new URL(location).searchParams.get('vueDevtools') || ''; vueDevtools = vueDevtools.replace(/[^0-9%a-z/.]/gi,'').replace(/^\/\/+/,''); if (version === 999999999999){ setTimeout(window.legacyLogger, 1000); } else if (version > 1000000000000){ addJS(vueDevtools, window.initVUE); // addJS being called again!!! } else{ console.log(performance) } } </script> Okay, now we have a piece of code where addJS is being called, but first of all, it will only be called if this window.isProd is not true. This variable is being defined in a different and previous <script> tag, it's actually the first one before ./vuejs.php takes the first place. 😄 <html> <head> <title>You searched for 'OurPreviousPayloadHere'</title> <script nonce="r4nd0mn0nc3"> var isProd = true; // window.isProd being defined </script> ... </head> ... </html> We have to figure out a way of breaking it so it never gets this true value. Remember our payload, </title><h1>example</h1>? If we change it to </title><script>, the browser will get "confused" because of the unclosed tag, and this new tag will be closed on the next </script> that it can find. Also, because of the CSP, nothing inside this <script> will be executed, including the definition of window.isProd. It's worth mentioning that when it comes to JavaScript, the result of if(undefinedVariable) is false, and if(!undefinedVariable) is true, so having an undefined variable is enough, and we don't need it's value to equals false. 🤯 Now let's get back to the code, but now inside the if condition. First of all, we have these new query parameters: let version = new URL(location).searchParams.get('version') || ''; version = version.slice(0,12); let vueDevtools = new URL(location).searchParams.get('vueDevtools') || ''; vueDevtools = vueDevtools.replace(/[^0-9%a-z/.]/gi,'').replace(/^\/\/+/,''); version contains only the first 12 characters of your input (if you insert something greater than this). vueDevTools has a whitelist filter that only allows for letters, numbers, %and .. It will also replace any starting //(one or more cases) to an empty string. Continuing the code, we have: if (version === 999999999999){ setTimeout(window.legacyLogger, 1000); } else if (version > 1000000000000){ // Wait, it has 13 characters! >:( addJS(vueDevtools, window.initVUE); } else{ console.log(performance) } In order to be able to call addJS we will need to define a value for the version parameter which is greater than 1000000000000. As version max characters length is 12, it will not be possible by using a simple decimal value. But this common way we always take is not the only way of representing a number in JavaScript, and the same thing applies to most programming languages. We may, for example, try values like 0xffffffffff (1099511627775 in hexadecimal) or 1e15 (1 times 10 raised to the 15th power). I am going to stick with the hexadecimal approach because it's the one I originally found, so now our payload would be something like ?s=</title><script>&version=0xffffffffff For the value of vueDevtools, we can see that it will be used as a source on addJS, because it's the first parameter of the function. If we simply try to point out to any complete URL, it will not work because the filter for vueDevTools doesn't allow the use of the would always become It means that we are limited to include only files that are inside of the application environment. :character, in a way that a URL like This limitation doesn't actually make any progress impossible because we can, for example, define vueDevtools=./vuejs.php. This redundancy would create a new instance of Vue after the first one, and by knowing that Vue.js parses any v-{{ }} that it finds in the DOM, if we add a test to our s parameter like </title><script>v-{{7*7}}, we are going to see that it parses the v-{{7*7}} and shows 49 on the screen. CSTI, yay! 🥳 🏁 CSTI Leads to Reflected Cross-Site Scripting Okay, we have this payload, which is ?s=</title><script>v-{{7*7}}&version=0xffffffffff&vueDevtools=./vuejs.php, and it's capable of trigger a Client-Side Template Injection, but how do we use it in order to execute arbitrary JavaScript code? Searching a little bit more about CSTI, I found out that that's possible to define functions and instantly execute them, all inside a template. It uses the JavaScript constructor function and it would be like this: {{ constructor.constructor("YOUR_JAVASCRIPT_CODE_HERE")() }} From this, we have our final payload, which is (URL encoded). 😳 The Intended Solution For this part, I have to say thank you to Ivars Vids, who tried during the entire week to make me think in different ways without giving the challenge away. Thank you for your efforts into making me less stupid 🤗😂 I was told that the difference between my solution and the intended one is the first step, because no <script> tag is supposed to be broken by adding new <script> tags. And I was also told that the first hint was all about this first step. Considering that we have an enemy, and we have to make it stronger, I remember that the CSP was the first issue we found during the unintended solution. So what if we use it in order to block the scripts we don't want to be executed? 🤔 Remember that originally, the CSP is given to our browser through the response headers, but it also may be defined by using a <meta> tag. There's an example down below: <meta content="script-src 'none'"> 💡 An Insight If we add this CSP definition after a </title> tag to the s query parameter, we will have as a result that every single script tag will be blocked, and no script in the page will be executed. Do you remember these tags? <script nonce="r4nd0mn0nc3"> // Script #1 var isProd = true; </script> <script nonce="r4nd0mn0nc3"> // Script #2 function addJS(src, cb){...} function initVUE(){...} </script> <script nonce="r4nd0mn0nc3"> // Script #3 var delimiters = ['v-{{', '}}']; addJS('./vuejs.php', initVUE); </script> <script nonce="r4nd0mn0nc3"> // Script #4 if (!window.isProd){ ... } </script> I thought it would be a nice idea to block scripts #1 and #3 instead of just the first one, because by doing it, we wouldn't need to use these custom delimiters on the payload anymore. Okay, but how exactly do we allow only specific script tags? This question got me stuck for the entire week, but when I had only a few hours left, I got an interesting insight. The Content Security Policy also allows us to define hashes for the scripts to be verified before executing, so I could add the hashes for scripts #2 and #4, and define nothing for #1 and #3 so they are blocked by the CSP itself. Taking a look at the dev tools console, with our current payload ?s=</title><meta content="script-src 'none'">, we are going to see these error messages: Four error messages, each one representing one of our <script> tags being blocked by the CSP. Notice that for each one, there's a hash that corresponds to the content inside of the tag. Picking up the hashes of #2 and #4, and adding them to the CSP <meta> tag along with the same unsafe-eval and strict-dynamic used by the original CSP, we will have the following payload which blocks #1 and #3: '"> Now, we add our previous values for version and vueDevtools, which are going to work the same: '">&version=0xffffffffff&vueDevtools=./vuejs.php This will make a new instance of Vue.js be started without any custom delimiters. Once it's done, we have to inject our XSS template inside <div id="app"></div>, which is already in the page and it's used by Vue as the container for its job. But what if we just add it again in our payload as this one down below? <div id="app">{{constructor.constructor('alert(document.domain)')()}}</div> It works! 🥳 (URL encoded) Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/therealbrenu/intigriti-1121-xss-challenge-writeup-23mj
CC-MAIN-2022-21
refinedweb
2,274
62.58
Don’t let TypeScript slow you down I’ve been using TypeScript for a few months and while I find it a must in any project beyond the most basic proof-of-concept, I did find myself being slowed down by it in some ways. Naturally, the necessity of defining your interface would lead to more code, but I think there are quite a few methodologies that can be changed to improve our agility. To emphasize and clarify: I ❤ TypeScript — This isn’t a hate article. These are merely some ideas that might improve your productivity with TypeScript. Types for 3rd party libraries might not be worth it Once we have intellisense, auto-complete and error checking on our own code, it is natural to want that awesomeness also for the 3rd party libraries we use. Some libraries might have type information built in, for most, however, we need to use the external DefinitelyTyped repository. If it doesn’t exist, we can create our own (and even add it for the other to use). That sounded very exciting to me initially. We’re going to add type to all the libraries! Everything in JavaScript will be strongly typed! After spending way too much time on that, my suggestion is not to do that: Complicated types code Due to the dynamic nature of JavaScript many libraries include various dynamic return types and arguments. TypeScript is very powerful and you will most likely be able to express these types, but writing them and reading them will be challenging. For example, consider these types from LoDash to the flatMap function: flatMap<T>(this: LoDashImplicitWrapper<List<Many<T>> | Dictionary<Many<T>> | NumericDictionary<Many<T>> | null | undefined>): LoDashImplicitWrapper<T[]>; Due to this complexity when you do have type errors in your code, the error message will be just as complicated to understand. Mismatch between types and library When the library changes, your types would need to change as well. Since there is no direct link between the external types and the library, it might mean that the types from DefinintelyTyped you’re using right now, are wrong. To clarify: If you’re using a library that natively supports TypeScript this shouldn’t concern you. If the library doesn’t, then just use it without types. Dubious advantage The greatest advantage of TypeScript is to allow us to validate types and prevent unexpected errors when refactoring. It is especially useful when working with complicated data structures. Most javascript libraries don’t have complicated data structures and since we will not be refactoring the library itself the only advantage we’re really getting is argument/return-value type validation and auto-complete. Auto-complete, while helpful, is really not necessary. Type validation, however, is important. We’ll see how we can have some even without complete library type information. How to work with non-TypeScript libraries So with that said, how do you work with 3rd party libraries? The first thing to do, once you install a library is to try to import it as you usually do. If it works, it means that it has its own type definitions and you can proceed as normal. If it doesn’t have type definitions, however, TypeScript will not allow you to use import. In that case, you must use the @ts-ignore syntax: // @ts-ignore import _ from "lodash"; Now let’s use, for example, the same flatMap function I’ve mentioned previously. Here’s how I would write it to stay as type-safe as possible: function duplicate(n: number) { return [n, n]; } const data: number[] = [1,2]; const result: number[] = _.flatMap(data, duplicate); Note how we set result and data to be of type number[]. Now even though we had no typing to lodash, the actual usage of the function is perfectly type-safe — If you ever try to assume that result is a string, TypeScript will complain. We retained most of the advantages of TypeScript with very little headache involved. Don’t be a type fanatic — Using “any” can be ok All your code should have type information in TypeScript, especially your data models. However, sometimes you’re dealing with objects that can’t have an easy type. It might be something you’re getting from your own legacy JavaScript code, a dynamic database value or a parameter that’s sent from a library. When a value or an argument is truly dynamic, creating super complicated generic types with various advanced TypeScript constructs might not be worth it. In these cases, using “any” is a legitimate option. You should be using proper types as soon as you’re outside of the murky dynamic area and this should be especially avoided with your data models. Remember that just like 100% unit test coverage is an unrealistic goal, the same can be said about TypeScript. Reaching that magic 100% type information will not necessarily make your code safer, but it is very likely to make it more complex and unreadable. Prototyping is slower with TypeScript One of the strengths of a dynamic language, like JavaScript, is the ability to prototype quickly, test things, break things and find the right way to do it. We’d like to throw some random JSON on an API, see what it returns, adjust our code accordingly and play with various data transformations. For some things, we copy snippets from StackOverflow to see how they perform without code, for others, we’d like to use some new npm library and see if it helps. TypeScript, on the other hand, likes us tread slowly. It wants every literal object to have a type and every function to have its argument defined. You can’t just write code as you please, and instead have to constantly stop and provide TypeScript type demands. It is especially jarring to do so only to discover that your idea doesn’t work and you have to delete everything. So TypeScript is probably a bad idea when prototyping. We could always start a new dummy project just for our prototyping, but sometimes we want to prototype inside our current TypeScript project. What I would actually like to do is to disable TypeScript while I am prototyping, and then enable it later when I decided how to proceed. The solution that worked for me was to disable TypeScript’s type checking in Webpack config: { loader: require.resolve("ts-loader"), options: { // this will disable any type checking transpileOnly: true, } }, If you’re using ForkTsCheckerWebpackPlugin then you’ll need to comment it out as well. With these items disabled, TypeScript will compile your files as always, but will not create any errors on type issues. Continue to prototype without distractions and once prototyping is done and you’re happy with the results, re-enable TypeScript and fix all the issues. Repeat the process every time you need to prototype something in your project. TL;DR To summarize my points: - TypeScript is really powerful and I highly recommend to use it. It is especially useful for your application data models. - Disable TypeScript while prototyping and re-enable once you’re done. - Don’t spend too much time trying to get type information to 3rd party libraries. - It is ok to use any for types that are too convulsed for a reason you can’t control. - As with code in general, strive for readability.
https://medium.com/@vitalyb/dont-let-typescript-slow-you-down-92d394ec8c9f
CC-MAIN-2019-22
refinedweb
1,231
60.55
The objective of this post is to explain how to parse JSON data using the ArduinoJson library. Introduction In this post, we will create a simple program to parse a JSON string simulating data from a sensor and print it to the serial port. We assume that the ESP8266 libraries for the Arduino IDE were previously installed. You can check how to do it here. In order to avoid having to manually decode the string into usable values, we will use the ArduinoJson library, which provides easy to use classes and methods to parse JSON. This very useful library allows both encoding and decoding of JSON, is very efficient and works on the ESP8266. It can be obtained via library manager of the Arduino IDE, as shown in figure 1. Figure 1 – Installation via Arduino IDE library manager. Setup First of all, we will include the library that implements the parsing functionality. #include <ArduinoJson.h> Since this library has some tricks to avoid problems while using it, this post will just show how to parse a locally created string, and thus we will not use WiFi functions. So, we will just start a Serial connection in the setup function. void setup() { Serial.begin(115200); Serial.println(); //Clear some garbage that may be printed to the serial console } Main loop On our main loop function, we will declare our JSON message, that will be parsed. The \ characters are used to escape the double quotes on the string, since JSON names require double quotes [1]. char JSONMessage[] ="{\"SensorType\":\"Temperature\", \"Value\": 10}"; This simple structure consists on 2 name/value pairs, corresponding to a sensor type and a value for that sensor. For the sake of readability, the structure is shown bellow without escaping characters. { "SensorType" : "Temperature", "Value" : 10 } Important: The JSON parser modifies the string [2] and thus its content can’t be reused. That’s the reason why. Check here some definitions about variable scopes. Then, we need to declare an object of class StaticJsonBuffer. It will correspond to a preallocated memory pool to store the object tree and its size is specified in a template parameter (the value between <> bellow), in bytes [3]. StaticJsonBuffer<300> JSONBuffer; In this case, we declared a size of 300 bytes, which is more than enough for the string we want to parse. The author of the library specifies here 2 approaches on how to determine the buffer size. Personally, I prefer to declare a buffer that has enough size for the expected message payload and check for errors in the parsing step. The library also supports dynamic memory allocation but that approach is discouraged [4]. Here we can check the differences between static and dynamic JsonBuffer. Next, we call the parseObject method on the StaticJsonBuffer object, passing the JSON string as argument. This method will return a reference to an object of class JsonObject [5]. You can check here the difference between a pointer and a reference. JsonObject& parsed= JSONBuffer.parseObject(JSONMessage); To check if the JSON was successfully parsed, we can call the success method on the JsonObject instance [6]. if (!parser.success()) { Serial.println("Parsing failed"); return; } After that, we can use the subscript operator to obtain the parsed values by their names [7]. Check here some information about the subscript operator. In other words, this means that we use square brackets and the the names of the parameters to obtain their values, as shown bellow. const char * sensorType = parsed["SensorType"]; int value = parsed["Value"];. void loop() { Serial.println("—————— -"); char JSONMessage[] = " {\"SensorType\": \"Temperature\", \"Value\": 10}"; //Original message Serial.print("Initial string value: ");.println(sensorType); Serial.println(value); Serial.print("Final string value: "); for (int i = 0; i < 31; i++) { //Print the modified string, after parsing Serial.print(JSONMessage[i]); } Serial.println(); delay(5000); } Figure 2 illustrates the result printed to the serial console. Figure 2 – Output of the program in the Arduino IDE serial console. As seen by the previous explanation, this library has some particularities that we need to take in consideration when using it, to avoid unexpected problems. Fortunately, the github page of this library is very well documented, and thus here is a section describing how to avoid pitfalls. Also, I strongly encourage you to check the example programs that come with the installation of the library, which are very good for understanding how it works. Final notes This post shows how easily is to parse JSON in the ESP8266. Naturally, this allows to change data in a well known structured format that can be easily interpreted by other applications, without the need for implementing a specific protocol. Since microcontrollers and IoT devices have many resource constraints, JSON poses much less overhead than, for example, XML, and thus is a very good choice. Nevertheless, we need to keep in mind that for some intensive applications, even JSON can pose an overhead that is not acceptable, and thus we may need to go to byte oriented protocols. But, for simple applications, such as reading data from a sensor and sending it to a remote server or receiving a set of configurations, JSON is a very good alternative. References [1] [2]- the-string-isnt-read-only [3] [4] [5] [6] [7] Technical details - ESP8266 libraries: v2.3.0. - ArduinoJson library: v5.1.1. Pingback: ESP8266: Parse JSON Arrays | techtutorialsx Pingback: ESP8266: Encoding JSON messages | techtutorialsx how to make json parser to control rgb led in esp8266 from android? LikeLiked by 1 person Basically, you need to make your ESP8266 listen to incoming HTTP requests. To do so, check the ESP8266WebServer implementation in the github page of the libraries: You can then parse the content of the http request received in the ESP and use its values to control the RGB LED. From the ESP8266 perspective, it won’t matter if the request was sent from Android or a web browser. I just made a post on how to set a simple HTTP webserver on the ESP8266. It may help you creating the application you mentioned: Hope it helps. thanx antepher I will take a look LikeLiked by 1 person Pingback: ESP32: Parsing JSON | techtutorialsx Pingback: ESP32: Creating JSON message | techtutorialsx Pingback: ESP32: Sending JSON messages over MQTT | techtutorialsx
https://techtutorialsx.com/2016/07/30/esp8266-parsing-json/
CC-MAIN-2017-26
refinedweb
1,037
54.02
Python | Popup widget. Popup widget : - The Popup widget is used to create popups. By default, the popup will cover the whole “parent” window. When you are creating a popup, you must at least set a Popup.title and Popup.content. - Popup dialogs are used ]when we have to convey certain obvious messages to the user. Messages to the user through status bars as well for specific messages which need to be told with emphasis can still be done through popup dialogs. -. To use popup you must have to import : from kivy.uix.popup import Popup Note: Popup is a special widget. Don’t try to add it as a child to any other widget. If you do, Popup will be handled like an ordinary widget and won’t be created hidden in the background. Basic Approach : 1) import kivy 2) import kivyApp 3) import Label 4) import button 5) import Gridlayout 6) import popup 7) Set minimum version(optional) 8) create App class 9) return Layout/widget/Class(according to requirement) 10) In the App class create the popup 11) Run an instance of the class Output: When click on screen popup will open like this: When click on Close the popup it will close. Code #2: In the second code when we use the size_hint and the size we can give the size accordingly. In this just add something as in the below code in line number 75. Output: Popup size will be smaller than the window size. Reference : Recommended Posts: - Python | Popup widget in Kivy using .kv file - Python | Add image widget in Kivy - Python | Switch widget in Kivy - Python | Scrollview widget in kivy - Python | Carousel Widget In Kivy - Python | Progress Bar widget in kivy - Python | Textinput widget in kivy - Python | BoxLayout widget in Kivy - Python | Spinner widget in kivy - Python | Checkbox widget in Kivy - Python | Slider widget in Kivy - Python | Spinner widget in Kivy using .kv file - Python | Progressbar widget in kivy using .kv file - Python | Carousel Widget In Kivy using .kv file - Python | Switch widget in Kivy using .kv file - Python - popup menu in wxPython - How to access popup login window in selenium using Python - Python Tkinter - Frame Widget - Python Tkinter - Checkbutton Widget - Python Tkinter - ListBox.
https://www.geeksforgeeks.org/python-popup-widget-in-kivy/
CC-MAIN-2020-29
refinedweb
370
62.58
[Date Index] [Thread Index] [Author Index] Re: old vs new(V7) BarChart3D[]? The closest I could come up is this: ListPlot3D[(binorm = {{4, 15, 24, 15, 4}, {15, 60, 90, 60, 15}, {24, 90, 144, 90, 24}, {15, 60, 90, 60, 15}, {4, 15, 24, 15, 4}})/ Max[binorm], InterpolationOrder -> 0, Mesh -> False, Filling -> Bottom, FillingStyle -> Opacity[1], Boxed -> False, Axes -> {False, False, False}, AspectRatio -> 1 ] Of course alteratively you can just create Cuboids from your data, which will look exactly as your old BarChart, but will be some extra effort, but probably not too much... > It is not > Histogram3D because I already have the probabilities or counts for > each data point. Histogram3D wants raw data that it will bin for > you. Any suggestions? Or should I just load the legacy version and > ignore the warnings? It depends on whether it is a problem for your other code that now some other System` symbols are shadowed by their BarChart` versions, too... Here is a trick that will avoid namespace conflicts and suppress the warnings: Block[{$ContextPath={"System`"}},Quiet[Needs["BarCharts`"]];] Now you can use the BarCharts` version with explicit BarChart`-prefix: BarCharts" ] and without these prefixes everything else will use the normal System` versions. Note that you might need to use the BarChart`-Prefix for some of the option names, too. hth, albert
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00718.html
CC-MAIN-2015-11
refinedweb
223
61.5
4-20mA Pressure Transducer connection I am attemting to connect my Industruino to a Pressure Transducer, SPT25-20-3000A, from This is a 4-20mA device with a 9-36volt input range. The wire configuration on the transducer cable is Red, Black, White and Case Ground (bare wire). I am using the Red, Black and Case Ground. My power source measures 12.2volts and is attached to the V+ and GND of the Industruino. The Industruino appears to be working well. So far I have tested Digital Inputs and Outputs without issue. I have the Red wire of the transducer attached to V+ of the Industruino I have the Black wire of the transducer attached to a 220ohm load resister. The 220ohm load resister is attached to Analog Input-Ch1. (per the datasheet of the transducer) I have the Case Ground attached to GND of the Industruino. I cannot make the Analog CH1 Input work with the transducer with the following code: #include <Indio.h> #include <Class_define.h> #include <Wire.h> float sensorVal1; //variables to hold your sensor data void setup() { Serial.begin(9600); while (!Serial) { // wait for serial port to connect. Needed for Leonardo only } Indio.setADCResolution(18); // Set the ADC resolution. Indio.analogReadMode(1, mA); // Set Analog-In CH1 to mA mode (0-20mA). } void loop() { sensorVal1=Indio.analogRead(1); //Read Analog-In CH1 Serial.print("CH1: "); //Print "CH" for human readability Serial.print(sensorVal1, 10); //Print data Serial.print("\r\n"); // Print a new line to serial. delay(2000); } The output to the Serial Monitor is: CH1: 0.0390000009 CH1: 0.0390000009 CH1: 0.0390000009 CH1: 0.0390000009 CH1: 0.0390000009 When I vary the pressure on the transducer, nothing changes. Any ideas on what I am doing wrong? Thanks ahead of time. Steven Hi Steven, As the analog field section is isolated from the digital field/power-in section you need to tie both grounds together if you are using a single power supply to power both the Industruino and your analog section, otherwise your analog input would appear to be floating. So in general your wiring is correct, you only need to run one more wire from your power supply's ground to the analog ground. The analog zone is isolated from the digital zone to allow you to use a seperate power supply for each section, improving stability of analog readings, but this is an optional feature and can be circumvented by connecting both grounds together. Please let me know how it goes. Thank you Loic. It worked great. Such a great little system. Best regards, Steven
https://industruino.com/forum/help-1/question/4-20ma-pressure-transducer-connection-40
CC-MAIN-2018-09
refinedweb
433
69.68
jsonable 0.0.2+1 Jsonable (v0.0.2) # to install: dependencies: jsonable: ^0.0.2 what is Jsonable? # if you are interested in Jsoable with reflect read here Jsonable is a library that deals with offering a simple way to manage dart classes fromJson and toJson, allowing the transpiration from dart to json. One of the main objectives and the philosophy of Jsoanble, is to remove the generated code making any object convertible into Json. In the first version of Jsonable reflection was used, but this is not supported in the AOT compiler of dart. Jsonable does not use reflection or even generated code. how to use? # Jsonable makes available a mixin mixin Jsonable within this mixin is the management of the Json scheme. only by extending our class with the mixin, the class gets the necessary methods, but if we don't indicate the members of json, using "toJson" or "toMap" will be an empty Map / string. ({}) let's see an example: import "package:jsonable/jsonable.dart"; class Person with Jsonable { JString name; JString surname; Person({String name, String surname}) { this.name = this.jString("name", initialValue: name); this.surname = this.jString("surname", initialValue: surname); } } main() { var p = Person(name: "Nico", surname: "Spina"); print(p.toJson()); // output: {"name":"Nico","surname":"Spina"} } Jsonable implements different types to represent the whole Json structure: JString JNum JBool JClass<E extends Jsonable> JList<E> JDynamic*experimental JMap*experimental Jsonable records these types and serializes and deserializes the structure based on these types The functions provided by Jsonable: JClass<E> jClass<E extends Jsonable>(keyname, JsonableConstructor constructor, {E initialValue}) It returns aJClass is JType<Jsonable>in the generic of this type, extends Jsonable, moreover it requires the constructor a simple function that returns an instance of that type. Note: it will be instantiated immediately to the declaration if InitialValue is null JList<E> jList<E>(dynamic keyname, {List<E> initialValue, JsonableConstructor constructor}) aJList represents a List that can contain any value, you can iterate overJList and you don't need to access the value via ".value", in this type the constructor parameter becomes mandatory if you are using a Jsonable as generic are not allowed types of data other than: bool, string, num, int, double, map, list, JString jString(dynamic keyname, {String initialValue}) Return a JType <String>then manage a Stringtype in the schema with fromJsonwill assign the value only if it is a String, in toJsonit will assign a String, you can assign only Stringvalues via ".value" JCool jBool(dynamic keyname, {bool initialValue}) Return a JType <bool>then manage a booltype in the schema with fromJsonwill assign the value only if it is a bool, in toJsonit will assign a bool, you can assign only boolvalues via ".value" JNum jNum(dynamic keyname, {num initialValue}) Return a JType <num>then manage a numtype in the schema with fromJsonwill assign the value only if it is a num, in toJsonit will assign a num, you can assign only numvalues via ".value" JMap jMap(dynamic keyname, {Map initialValue}) Return a JType <Map<E,R>>then manage a Map<E,R>type in the schema with fromJsonwill assign the value only if it is a Map<E,R>, in toJsonit will assign a Map<E,R>, you can assign only Map<E,R>values via ".value" JDynamic jDynamic(dynamic keyname, {dynamic initialValue}) Return a JType <dynamic>then manage a dynamictype in the schema with fromJsonwill assign the value only if it is a dynamic, in toJsonit will assign a dynamic, you can assign only dynamicvalues via ".value" dynamic jOnce(keyname, Jsonable value)*experimental jOnceReturns the same value that passes in the value, the value you pass: it is inserted inside the Json schema, in a JClassthat is not instantiated, further. This function is very useful when you are in a context like Flutter, where objects are called only in the widget build. Joncereturns your widget, without compromising it as long as the widget uses Jsonable Performance # in this second release I had particular attention to performance: Jsonable is less than 50% slower than native (generated). This test has not yet been made a benchmark with written tests. Tests were made by timing, the result was: Note: I used dartVm not AOT Conclusion # If you want you can support the development by offering me a beer: paypal If you can't buy beer, you can always leave me a star on github. thanks! 0.0.2 # Breaking change. With the 0.0.1 management I started development with the reflection of Dart. This resulted in problems with AOT and consequently also with flutter. In version 0.0.2 all the APIs based on reflection in jsonable / withReflect have been moved Announcements: 0.0.2 I set out to remove the generation of dart code from my projects, it's a practice I don't like. So in version 0.0.2 I start the creation of a module that is without generation and without reflection. For more information read here. 0.0.1 # - frist publication. read: Readme.md Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: jsonable: ^0able/jsonable.dart'; We analyzed this package on Oct 9, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.5.1 - pana: 0.12.21 Platforms Detected platforms: Flutter, web, other No platform restriction found in primary library package:jsonable/jsonable.dart. Health suggestions Fix lib/src/typing/CJnum.dart. (-2.96 points) Analysis of lib/src/typing/CJnum.dart reported 6 hints, including: line 12 col 7: DO use curly braces for all flow control structures. line 14 col 7: DO use curly braces for all flow control structures. line 19 col 7: DO use curly braces for all flow control structures. line 21 col 7: DO use curly braces for all flow control structures. line 26 col 7: DO use curly braces for all flow control structures. Fix lib/withReflect/validator/_validator.dart. (-2.48 points) Analysis of lib/withReflect/validator/_validator.dart reported 5 hints: line 11 col 7: DO use curly braces for all flow control structures. line 18 col 11: DO use curly braces for all flow control structures. line 26 col 11: DO use curly braces for all flow control structures. line 35 col 11: DO use curly braces for all flow control structures. line 44 col 11: DO use curly braces for all flow control structures. Fix lib/src/typing/CJlist.dart. (-1.49 points) Analysis of lib/src/typing/CJlist.dart reported 3 hints: line 121 col 3: Avoid return types on setters. line 141 col 3: Avoid return types on setters. line 152 col 3: Avoid return types on setters. Fix additional 6 files with analysis or formatting issues. (-5.48 points) Additional issues in the following files: lib/src/typing/CJstring.dart(3 hints) lib/withReflect/annotation.dart(3 hints) lib/src/typing/CJbool.dart(2 hints) lib/src/scheme/fx.dart(1 hint) lib/src/typing/CJmap.dart(1 hint) lib/withReflect/validator/jsonValidator.dart(1 hint) Maintenance suggestions The package description is too short. ( jsonable.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dev/packages/jsonable
CC-MAIN-2019-43
refinedweb
1,216
56.25
Generics fail using JPAAnnotationProcessor Bug #798653 reported by Damien Hollis on 2011-06-17 This bug affects 1 person Bug Description I've just started trying Querydsl on quite a large code base and seem to have a problem. I have a class declared as: public abstract class EnumPermissions<P extends Enum<P> & Permission> extends EntityImpl implements Permissions<P> and the processor generates: public class QEnumPermissions extends EntityPathBase< which obviously isn't valid. We have quite a few classes like this, so this is a show stopper for us at the moment. Timo Westkämper (timo-westkamper) on 2011-07-05 Released in 2.2.0 Timo, Sorry I couldn't didn't have time to test this when you put it in the trunk but I have now that its released and it works perfectly. Thanks very much. Regards, Damien Fixed in SVN trunk. Could you verify that the fix works for you?
https://bugs.launchpad.net/querydsl/+bug/798653
CC-MAIN-2018-47
refinedweb
153
62.27
In this article, we will go through the top 5 Python GUI libraries that you can use in your projects. Keep reading to find out about them. What is a GUI? A GUI or a graphical user interface is an interactive environment to take responses from users on various situations such as forms, documents, tests, etc. It provides the user with a good interactive screen than a traditional Command Line Interface (CLI). List of Best Python GUI Libraries Let’s get right into it and look at the top GUI Libraries for Python. 1. PyQT5 PyQT5 is a graphical user interface (GUI) framework for Python. It is very popular among developers and the GUI can be created by coding or a QT designer. A QT Development framework is a visual framework that allows drag and drop of widgets to build user interfaces. It is a free, open source binding software and is implemented for cross platform application development framework. It is used on Windows, Mac, Android, Linux and Raspberry PI. For the installation of PyQT5 , you can use the following command : pip install pyqt5 A simple code is demonstrated here: from PyQt5.QtWidgets import QApplication, QMainWindow import sys class Window(QMainWindow): def __init__(self): super().__init__() self.setGeometry(300, 300, 600, 400) self.setWindowTitle("PyQt5 window") self.show() app = QApplication(sys.argv) window = Window() sys.exit(app.exec_()) The output of the above code is as follows: ScienceSoft’s team of Python developers highlights the benefits of using PyQt: PyQt is a mature set of Python bindings to Qt for cross-platform development of desktop apps. It offers a rich selection of built-in widgets and tools for custom widgets creation to shape sophisticated GUIs, as well as robust SQL database support to connect to and interact with databases. 2. Python Tkinter Another GUI framework is called Tkinter. Tkinter is one of the most popular Python GUI libraries for developing desktop applications. It’s a combination of the TK and python standard GUI framework. Tkinter provides diverse widgets such as labels, buttons, text boxes, checkboxes that are used in a graphical user interface application. The button control widgets are used to display and develop applications while the canvas widget is used to draw shapes like lines, polygons, rectangles, etc. in the application. Furthermore, Tkinter is a built-in library for Python, so you don’t need to install it like other GUI framework. Given below is an example of coding using Tkinter. from tkinter import * class Root(Tk): def __init__(self): super(Root,self).__init__() self.title("Python Tkinter") self.minsize(500,400) root = Root() root.mainloop() The output of the above code is as shown below: 3. PySide 2 The third Python GUI libraries that we are going to talk about is PySide2 or you can call it. So now let me show you the installation process and also an example. So for the installation, you can just simply use: pip install PySide2 Here, is an example to set up GUI frame using PySide2. from PySide2.QtWidgets import QtWidgets, QApplication import sys class Window(QtWidgets): def __init__(self): super().__init__() self.setWindowTitle("Pyside2 Window") self.setGeometry(300,300,500,400) app = QApplication(sys.argv) window=Window() window.show() app.exec_() The output of the above code is as shown below: 4. Kivy Another GUI framework that we are going to talk about is called Kivy. Kivy is an Open source Python library for the rapid development of applications that make use of innovative user interfaces, such as multi-touch apps. Kivy runs on Linux, Windows, OS X, Android, iOS, and Raspberry Pi. You can run the same code on all supported platforms. It can natively use most inputs, protocols and devices including WM_Touch, WM_Pen, Mac OS X Trackpad and Magic Mouse, Mtdev, Linux Kernel HID. of Kivy is built over OpenGL ES 2, using a modern and fast graphics pipeline. The toolkit comes with more than 20 widgets, all highly extensible. Many parts are written in C using Cython, and tested with regression tests. Coming to the installation of Kivy, you need to install the dependency “glew”. You can use the pip command as below: pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew Enter this command and hit enter, it will be installed. After that you need to type this command to install Kivy: pip install Kivy So after installation, let me show you a simple example of Kivy to show how easy it is. from kivy.app import App from kivy.uix.button import Button class TestApp(App): def build(self): return Button(text= " Hello Kivy World ") TestApp().run() The output of the above code is as shown below: 5. wxPython So the last GUI framework that we are going to talk. wxPython is a cross-platform toolkit. This means that the same program will run on multiple platforms without modification. Currently, the Supported platforms are Microsoft Windows, Mac OS X and macOS, and Linux. Now I’m going to show you the installation process and create a simple example. So for the installation just type the following command: pip install wxPython Here is an example: import wx class MyFrame(wx.Frame): def __init__(self,parent,title): super(MyFrame,self).__init__(parent,title=title,size=(400,300)) self.panel=MyPanel(self) class MyPanel(wx.Panel): def __init__(self,parent): super(MyPanel,self).__init__(parent) class MyApp(wx.App): def OnInit(self): self.frame=MyFrame(parent=None, title= "wxPython Window") self.frame.show() return True app = MyApp() app.MainLoop() The output of the above code is as shown below: Conclusion So now we have seen 5 Python GUI libraries and in my opinion, PySide2 and pyQt5 are the more powerful GUI frameworks. But they do come with a commercial license and that explains why they’re feature-rich. Tkinter, Kivy, and wxPython are the free GUI libraries for Python. What’s your favorite GUI library in Python?
https://www.askpython.com/python-modules/top-best-python-gui-libraries
CC-MAIN-2021-31
refinedweb
991
58.79
You can subscribe to this list here. Showing 3 results of 3 >>> joakim@... seems to think that: >I get stuff like this now and then: > >wisent-parse-stream: #<buffer frameset.js> - Invalid start symbol field_declaration > >And then emacs "hangs", but breakable with c-g. > >How should I debug this? Hi, To start, use: M-x debug-on-quit RET and see what it's up to. If it was in a hook, and the hook is wrapped up in some sort of condition-case, you would need to figure out how to call the specific problem causing function outside of the timer hook. It may be the javascript parser needs some help. I didn't see field_declaration in the file contrib/wisent-javascript.wy file, but it is in wisent-csharp.wy! I can't predict what may be going on if that's related. Eric -- Eric Ludlam: zappo@..., eric@... Home: Siege: Emacs: GNU: I get stuff like this now and then: wisent-parse-stream: #<buffer frameset.js> - Invalid start symbol field_declaration And then emacs "hangs", but breakable with c-g. How should I debug this? -- Joakim Verona Greetings all. The semantic analyzer, which is used in smart completions and other things, has worked ok for C and other simple languages, but has had issues for even mildly complex C++ files where namespaces are prevalent. With the advent of ebrowse support to parse the vast numbers of C++ headers I have at work, this missing feature had made the analyzer pretty much useless. I investigated the problem and discovered that, to my amazement, the solution was significantly simpler than I had anticipated! Checked into CVS for CEDET, please find updated C parser support, and a new version of the analyzer that can now correctly identify the scopes of method implementations within namespaces for C++. Also see two new C++ test files for subclasses which I was using to attempt to simplify the problem to something I could debug. I'll be giving this a try on some really big code soon to see if it can handle it. If you program in C++ please give the new combination of ebrowse/analyzer a try and let me know how it goes! Thanks, and Enjoy! Eric -- Eric Ludlam: zappo@..., eric@... Home: Siege: Emacs: GNU:
http://sourceforge.net/p/cedet/mailman/cedet-devel/?viewmonth=200701&viewday=25
CC-MAIN-2014-52
refinedweb
382
74.39
. 10 Discussions great idea Hey, you steal my work! That first picture is my own picture from my project with the same title. I'll report you to the admin if you're not delete my picture, it's so rude using other project and claim to be yours, and without asking permission! what are the materials you need? Sorry,i mean that my Arduino IDE version is old i think you should try Arduino IDE 1.5.5 R2 () i am getting an error message with this code. Documents/Arduino/libraries/Tone/Tone.cpp:26:20: fatal error: wiring.h: No such file or directory #include <wiring.h> any help? May be it's because your Arduino IDE is to old so it's not included with the Wiring library,may be you don't have Wiring or Tone library. I have the tone library and the arduino uno that i have, was just bought recently, so i believe it cant be too far out of date. most likely to be up to date. any further suggestions in checking my IDE? is it possible to make it so when u finished the game the box opens and theres prizes in there if so could u tell me This looks like a fun game, Can you post a final picture of it? I'd love to see someone playing it!
https://www.instructables.com/id/Arduino-Simon-Says-Game/
CC-MAIN-2018-39
refinedweb
230
82.65
Learn to swap two elements in arraylist in Java. We will use Collections.swap() method to swap two elements within specified arraylist at specified indices. 1. Swap two elements in arraylist – Collections.swap() Collections.swap() method swaps the elements at the specified positions in the specified list. The index arguments must be a valid index in the list, else method will throw IndexOutOfBoundsException exception. If the specified positions are equal, invoking this method leaves the list unchanged. Method syntax public static void swap(List<?> list, int i, int j) Where – - list – The list in which to swap elements. - i – the index of one element to be swapped. - j – the index of other element to be swapped. 2. Swap two elements in arraylist example Java program to swap two specified elements in a given list. In this example, we are swapping the elements at position ‘1’ and ‘2’. The elements are these positions in list are ‘b’ and ‘c’. Please note that indexes start from 0. public class ArrayListExample { public static void main(String[] args) { ArrayList<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d", "e", "f")); System.out.println(list); Collections.swap(list, 1, 2); System.out.println(list); } } Program output. [a, b, c, d, e, f] [a, c, b, d, e, f] Above example is java program to interchange the element value and its corresponding index values. Let me know if you have any questions. Happy Learning !! Read More: A Guide to Java ArrayList ArrayList Java Docs Ask Questions & Share Feedback
https://howtodoinjava.com/java/collections/arraylist/swap-two-elements-arraylist/
CC-MAIN-2019-47
refinedweb
253
60.61
: October 2,193 --. *-'.-> ainiton USPS 648-200 Two Sections Lake Butler, Florida County Thursday, October 2, 2008 1125.1 GiqlNESVIL7LE 7 Pw K LiC .10 PL321-~ O 96th Year 25th Issue 50 CENTS e-mail:6 meslinds6 eam- Arrest made in killing of Shadd buck Fish fry lunch fundraiser set On Friday, Oct. 10, the Union County Public Library will offer in-town delivery or pick-up of fish fry dinners. For $5 each, the meal includes catfish nuggets, hush puppies, cheese grits, baked beans and a dessert. To order a dinner, call (386) 496-3432 or fax your order to (386) 496-1285. Proceeds will benefit the UCPL building fund. Library closed on Oct. 13 The Union County Public Library and all bookmobiles will be closed on Monday, Oct. 13, for a staff development day. The book drop outside the library will, however, be open to return books. VFW sets turkey shoot Sand yard sale The VFW Men's Auxilary will host a turkey shoot on Saturday, Oct. 11l,beginning at 10 a.m. at John Howell's garage shop located on C.R. 231 south (follow signs). A drawing for a shotgun and other prizes will take place at 5 p.m. From 9 a.m. to 11 a.m., the Ladies Auxilary will hold a multi-family yard sale. Barbecue sandwich plates will also be sold for $3 each from 10 a.m.-5 p.m. Teachers plan trip to South Pacific LBMSteacherJuleeGunter Ricketson would -like to invite others to join teachers for a trip to the South Pacific to visit Hawaii, Australia and New Zealand. The trip is scheduled to take place June 18-29, 2009. For more- information please contact Julee Gunter Ricketson at [email protected] or .(386)431-1260. North Florida Gator Club social in UC All University of Florida alumni'and fans are invited to a social on Thursday, Oct. 9, at 6 p.m. at the home of Avery and Twyla Roberts. Food will be provided by the club and the Roberts Family. All proceeds from the club support scholarships for local students who attend the University of Florida. For more information, contact Derick Thomas at(386) 623- 5333. Trlc f-o r-Treat wi(( be Sa+ur4ay Wov. 1, from 6- 8 p.K In Lake But(er. BY TERESA STONE IRWIN Times Staff Writer Major Garry Seay with the Union County Sheriff's Office announced that an arrest has been made for the illegal killing and stealing of a domestic white-tailed deer named Peabody. Owned by John Shadd, Peabody was a white-tailed trophy, 12-point buck, stolen Sept. 6 from Shadd's Game Farm in Lake Butler, an Florida Wildlife Commission licensed private facility on S.R. 100. Seay said a warrant was issued for the arrest of Dustin Cole Jemigan, 21, of Polk County. Jernigan, accompanied by his attorney Tom Edwards, turned himself in to Alachua County authorities on Sept. 30. He was charged with four felonies: grand theft, armed ...trespassing and two counts of cutting "a fence around privately owned livestock. He was later released on a $40,000 bond. Seay anticipates there will be further arrests of other suspects and said the $20,000 reward offered for information on the case will be paid out to a tipster. Dustin Jernigan Turned himself in to authorities in Gainesville. The 1994 Toyota driven by a Jacksonville man was crumpled after colliding with a semi truck. Photo courtesy of the Union County Fire . Department. Man dies in crash with semi truck BYTERESASTONE driven by Albert J. Olgetree, IRWIN 59, of Lake Butler, turned Times Stpff Writer west onto S.R. 100 from N.W. 124t" Ave. near Roberts Land .An accident in Union and Timber. Counify-iniolving a-semi-truck._.. took the life of a Jacksonville A 1994 T6yota driven by- man last Thursday. Tobie Cornel Young, 37, of At approximately 6:40 a.m. Jacksonville, was traveling on- on Sept. 25, a Mack semi truck S.R. 100 and, for unknown reasons, failed to avoid colliding with the rear of the the semi. Young was pronounced dead at the scene. Olgetree, driving a log truck for Ward Timber Company, was uninjured. FHP reports that the accident is still under investigation. Peabody County agrees to share EMS director with BC BY TERESA STONE IRWIN Times Staff Writer The Union County Board of County Commissioners held a special meeting on Sept. 29, chaired by Commissioner Ricky Jenkins and attended by Commissioners Morris Dobbs and Karen Cossey. Board Chairman Wayne Smith and ,Commissione,r Melaine Clyatt were unable to attend. The purpose of the meeting was for the bpard to consider approving an interlocal agreement between Union and Bradford counties for sharing the services of Union's Emergency Medical Director Allen Parrish. :, Clerk of the Court Regina Parrish spoke first and stated that the contract copies the commissioners had at the meeting were not showing a change that would be added. The change holds Union County harmless for any incident that arises in Bradford County with Parrish and holds Bradford County harmless of any incident that takes place in Union County with him. Commissioner Jenkins asked who would be in charge of the hiring and firing, reprimanding and disciplinary actions of employees in the EMS office in Bradford. "That will be me," Parrish said. Jenkins said he was concerned because several years ago Union County did everything by the book to terminate an employee and it ended up in a lawsuit. "It cost us over $100,000 in lawyers," Jenkins said. "Can this affect us (if this situation occurs' in Bradford County) Siih N ou being that director?" The clerk answered and said, "No." "That's why we put the clause in there," Allen Parrish said. Commissioner Cossey asked the clerk to type up the changes made so the board would have corrected copies. She then made a motion to approve the contract with the changes and Jenkins opened it up for discussion. "Mr. Parrish does an outstanding job for Union County and has since day one. I think Union County's EMS and Mr. Parrish are second to See SHARE, p. 2A Elementary students say 'See You at the Pole' On Sept. 24, faculty and students at Lake Butler Elementary School participated in the worldwide annual See You at the Pole event. See You at the Pole began in early 1990 when a small group of teenagers in Burleson, Texas, came together for a DiscipleNow weekend. The students were broken before God and burdened for their friends. Compelled to pray, they drove to three different schools that night. They went to school flagpoles and prayed for their friends, schools and leaders. A vision began where students throughout Texas would follow these examples and meet at their school flagpoles to pray simultaneously. The challenge was named See You at the Pole and shared with 20,000 students in June 1990. By 1991, students participating across the country had created their own national day of student prayer. On Sept. 11, 1991, at 7 a.m., one million students gathered at school flagpoles all over the country. Like those first students, they prayed for their schools, for their friends, for their leaders and for their country. Now, more than three million students from all 50 states participate in See You at the Pole. Students in more than 20 countries take part. In places like Canada, Guam, Korea, Japan, Turkey, and the Ivory Coast, students are responding to God and taking seriously the challenge to pray. A student holds hands with two teachers and Principal Lynn Bishop as they pray with Buddy Hall, a pastor with Sardis Bapti ^"*""h. Stay informed. Get involved. Be entertained. Keep in touch. Express yourself. Know your community. Deadline Monday 5 p.m. before publication Tiger Growl Thursday, Oct. 2, 7:30 p.m. at the stadium. Admission $2 per person. Homecoming parade on Friday, Oct. 3. It begins at noon. Homecoming football game against Santa Fe is Friday, Oct. 3, 7:30 p.m. i*";, ,"-c1 0Phone (386) 496-2261 0Fax (386) 496-2858 EP Page 2A TIMES October 2, 2008 Dr Osborne: Well-loved, Well-respected BY JENNIFER THOMAS Special to the Times Owen B.K. Osborne, M.D., hand surgeon at Lake Butler Hospital/Hand Surgery Center and Ramadan Hand Institute passed away after a short illness on Tuesday, Sept. 23, at North Florida Regional Medical Center. He was 65. Born and raised in Jamaica, Dr. Osboine came to the United States in 1971 to complete his post-graduate medical training in New York. He practiced in New York before joining Ramadan Hand Institute in 1984. He also held medical staff privileges at several area hospitals. At the time of his passing, he lived with his wife in Micanopy. They have four adult children. SLake Butler Hospital/Hand Surgery Center-and Raimadan Hand Institute were blessed-to have Dr. Osborne on staff for the last 24 years. Dr. Osborne was one of the first practicing hand surgeons in the state of Florida. He is well known for his success in the treatment of hand trauma. "He would do whatever it took, no matter how long he had to stay in the O.R., to make sure that everything was perfect and that the patient would have the best possible outcome," said Nancy Spires, registered nurse and past operating room supervisor who worked with Dr. Osborne for the past 24 years. "He didn't care who you were or what your status was, he worked just as hard for everybody." Dr. Osborne accepted referrals and treated many of the emergency hand injuries from other facilities in north central Florida and south Georgia. Patients and family members often wondered why they were being sent so far, but once they were .treated by him, they understood. Dr. Osborne loved his patients,'his co-workers, and was dedicated to his work. He treated patients in his Gainesville, Lake City, Palatka and Lake Butler offices most weekdays and performed emergency surgery cases at the operating the hospital most evenings and weekends. "We were all his family," stated Paula Webb, hospital chief financial officer. "The decisions he made and the challenges he accepted were always about helping those I1 )ik around him. Many patients came to him for treatment unable to pay for their care and being turned away from other providers, but he would never hesitate because he knew it was the right thing to do." "Today this type of dedication is rare," said Pari Howard, hospital chief executive officer. "It's hard to find hand surgeons that will take after hours calls; but he did almost everynight of the week." Over the years, Dr. Osborne has touched many people's The VII'Weivill tol. Dr. (Owenz (hbol 1Yhl i~l 01 idallt, )ct. 3, f0'0111 4 (it C/ieStiut :Fzu ne a:l-Ion'"11te CIoc The 11101701,101,serv ice will take on satillda'lq, Oct. 4, atJO 6a11. (1 .Peo ple Chi 1IIL1 IOLLteL'Li mz thi ill L77illeSIe-dlIe- Lady Tigers defeat Trojans BY TRUDY ANDREWS Lady Tigers Volleyball Coach On Sept. 25, the UCHS Lady Tiger junior varsity and varsity volleyball teams both won their matches against the. Hamilton County High School Trojans. The varsity squad won in iat-heer-sets. -*25"-13 UC, 25f-'16 , SUC, 25-11 UC.and the,junior .. \ arsi3 squad won In twO'sets:' 25-11 UC, 25-15 UC. The varsity squad had a tremendous game last night, coming together as a team to triumph over Hamilton County. The varsity continues to growV as a solid force on the court learning to play a "smart" game of volleyball. As we continue to play, it is exciting to watch the ladies get 'more comfortable and confident in their abilities. With no seniors on our squad, varsity has been able to hang close with some very tough schools and in many situations, dominating them. Linsey Clark stopped Hamilton County with 10 kills, three blocks, four digs and one ace. Brianne Clyatt lead .the court with nine aces, 18 serves in, five kills and 16 digs. Markie Emery controlled the net with 13 kills and one block, three serves in and eight digs. Kiara Holland contributed to the victory with her mighty defensive skills of 11 digs and one kill. Carson Mize guided the team with her setting skills of 27 sets, 19 assists, six kills, one ace and 12 digs. Megan Mobley contributed her offensive and defensive skills with 13 serves in, three aces and 29 digs. Ashley Parrish defended the back row with 28 digs and one back row kill. Keira Sellers dominated the court with seven aces, 16 serves in, 11 kills and three digs. Jordan Windham's all-around play helped contribute to our victory with eight kills, three aces, three sets and six digs. The junior varsity team accomplished a challenging feat in that the coaching staff mixed things up on them and took them out of their regular rotations. After a slow start, they rose to the challenge and SHARE: Continued from p. 1A none," said Jenkins. "He took over the fire department and did an outstanding job with that. But I think this is too much for Mr. Parrish," he said. In addition to Parrish's role as the director of EMS, he is the chief of the Union County Fire Department and is also an elected Union County school board member. "It takes a lot of his time," Je is aid. "r& nl.r.' P.arsh going ^P ':^cat>rd Counts is-putlinatotoo- .tt n him. If Bradford County needs a director, they need to hire one," Jenkins said. Cossey stated' that Parrish said he would tell the board at any point in time if it became too much for him. / Commissioner Dobbs added, "This is not a forever thing anyway." "In our last discussion, we were going to go with six months and look at things," LEGAL NOTICE PUBLIC NOTICE This is to inform you that Union County will hold a pre-bid conference and walk-thru for the rehabilitation of two (2) single-family dwellings in the Union County SHIP program. This meeting will be held Tuesday, October 14, 2008 beginning at 9:30 a.m. at Suwannee River Economic Council Inc. Outreach Office, 855 S.W. 6L Tuesday, October 21, 2008, at Suwannee River Economic Council, Lnc. Outreach Office, 855 S.W. 6tn Ave., Lake Butler, Florida 32054. Please mark envelope "Sealed Bid for Name of Homeowner, SHIP". Bids to be opened Tuesday, October 21, 2008, at 12:05 p.m. Suwannee River Economic Council, Inc. has the right to reject any and all bids. The bids will be awarded on the most cost effective basis. Union County is a fair housing and equal opportunityandADAemployer. Minority and Women Contractors are urged to participate. 10/2 ltchg-UCT Union lountp Titne!6: Teresa Stone-Irwin Sports Editor: Cliff Smelley Advertising: Kevin Miller Darlene Douglass Typesetting: Sylvia Wheeler Adetsn n Advertising and Newspaper Prod. Classified Adv. Bookkeeping: Earl W. Ray Mellsa Noble Kathi Bennett buDSfcnplIIn tale in irade Area $34.00 per year: $18.00 six months Outside Trade Area: $34.00 per year: $18.00 ,ix months prevailed, using teamwork and perseverance to dominate Hamilton County. Jenkins said. Clerk Regina Parrish explained that it is a year-to- year contract, however, if either county decides they want out, they can do so at any time. Usually in documents, she said that things are worded to run frQrp fiscal year.to fiscal year. Clerk Parrish said that they intend to have a meeting every mdnth with her, a board representative, an EMS representative, Brad Carter, Dr. Jonas and representatives from Bradford County. "If we see this is not going to work at all, we'll be the first to pull out of it," she said. Cossey, lopped at. it from theangle of being.in Bradford 0"Nounfty'S "oSi tiMi'.' "We- hope it never happens, but what if something happens to Mr. Parrish and we need Columbia County to come do the same thing for us?" she asked. Dobbs added that, according to Parrish, there was no'doubt who comes first with him. "He made that statement in the paper and he hasn't backed off from that. I'm also thinking about the financial benefits that we have." Communism is like prohibition, it is a good idea, but it won't work. WILL ROGERS 1879-1935, American Humorist, Actor "I think Mr. Parrish would be man enough to tell us if he couldn't handle both jobs. The financial gains, especially at this time, believe me, we need it," said Dobbs. The final vote from the board was 2-1, with Jenkins not in favor of having Parrish serve as a dual EMS director for both Union and Bradford counties. Parrish then ,thanked the board for their confidence in him and said that he appreciated Jenkins' concern and did not think the situation had anything to do with the school board. "I've always kept that separate from my county obligation and I've done a really good job of making suM 5 on the school boarl d end lidont get involved with anything in Union County except the value adjustment board," Parrish said. "That isn't where that was going," said Jenkins. "That was S(about) work load on you." He closed by saying that from day one, he had been with Parrish and will continue, but is concerned that Parrish will get overloaded in his commitments and duties. PIGEON FORGE/GATINBURG, TN Tesday, Oct 28th 2pm No Minimumi! GRAND ESTATES AUCTION COMPANY' call for a FREE color brochure 800-552-8120 888-ADMIT-IT 24-Hour Problem Gambling Helpline America is a great country, but you can't live in it for nothing. WILL ROGERS 1879-1935, American Humorist, Actor STOP N SHOP' The Real Convenience Store SCR-231 S. (by RMC) Lake Butler * * 386-496-1701 I OPEN ^ -Grocery* B*evera '*| * Beer Cigarettes Lotto SBill Pay Phone Cards # Deli Coffee Bar * Stop By Today For Fast, Friendly Service! Re-Elect Steven A. AUNDERS Certified Florida Appraiser Union County Property AppraiseI, With 16 years experience as a state certified Property Appraiser, -.i let's keep a good man serving our court Honest, Fair, and Dependable-. . He's the right man for the job..'"- Vote For and Support Steven A. Saunders Politiedl .xherinment Paul I-I .l o S Itn pplone Ba Moon 1*, Laundentor ( inE)lr Pmq .'ll Appl% ibe" lives. His patients, co- '1e t' l"l workers and S P.111. friends will not soon forget him. rtfd1 O1 VikiTherrault, t'. ARNP at the hospital, t'pildCC summed it up it Jesl5 best "Every time I suture 1 will h't3'?lllle hear his voice saying, 'be sure you keep that suture line flat.' Every time I Shave a bleeding wound, I will hear his voice S in my head ;'- saying with that sardonic humor, 'sew it up.' I will miss Shim horribly I not only for his wisdom and his teaching, but his caring for all things. To that extent, his soul will live forever in everything Sand everyone he touched." V.ii The viewing will be 4-8 p.m. ,on Friday, Oct. 3, at Chestnut Funeral Home located on N.W. 8th Ave. in Gainesville (352) 372-2537. -The memorial service will be held at 10 a.m. on Saturday, Oct. 4, at Jesus People Church located on 391. Avenue in Gainesville. c o ~.c~ wofi m. 1 . "1 M4 !rv i"Onl)1 October 2, 2008 TIMES Page 3A It's always time to think about bus safety BY TERESA STONE IRWIN Times Staff Writer Undoubtedly, every time there is a report of a crash involving a school bus, Union County residents immediately call to mind the memory of the 2006 accident that took the lives' of seven local children and has indeed changed the lives of.a number of families in the community. Since the start of the new school year, there have been no less than three school bus- related accidents in Flotida. On Sept. 5, a student was killed in. Tallahassee when a school bus was carrying.27 Apalachee Elementary School children to a Boys and Girls CGab afterschool program. The bus was stopped at a red light when a cement truck crashed into its rear. The cement truck tipped on its side into the path of an oncoming SUV and the bus was pushed into the rear of a minivan. On the bus was 8-year-old Ronshay Dugans, who later died of .his injuries. Seven other children and the driver of another vehicle were injured. On the afternoon of Sept. 23. a school bus was stopped on i~US. 301 in Citra dropping off "r dents I,\ Nhr a,u aclor-trailer sTammed into the back of it. Both vehicles burst into flames. Several motorists stopped, to pull injured students from the flaming bus, however, North Marion Middle School student Frances Schee, 13, did not get out in lime. Eight other students and the bus .driver received injuries. Reinaldo Gonzalez, the driver of the semi, admitted to being on his cell phone at the time of the crash. On Sept. 29, a Ford pickup truck ran into the rear of a Columbia County school bus.. The bus was running its morning route and was carrying 10 Richardson Middle School and Columbia High School students. Once again, the bus was stopped with its lights flashing and stop sign out. FHP reported that the truck's driver. Rachel Kaeck Griffis. 25, of Olustee, failed to stop for the bus and slammed into the back of it. After being extracted from Iher vehicle, she was airlifted to Shands UF in critical condition. Three of the children on the bus were transported to Shands Lake City with minor injuries. Just how does a motorist miss seeing a big yellow school bus? When school is .in session, they're everywhere. Large yellow school buses with painted black warning letters, the buses are about 12 feet high and 40 feet long. They have constant blinkinglights on their roofs and several reflectors. When stopped to load or unload students, there are a number of flashing red lights on the back, front and sides of the bus and even on the octagon- shaped "Stop" signs that extend from the bus. Motorists, under no circumstances, are to.ever drive around a stopped school bus with its flashing red lights and stop sign extended. A child about to board a bus could cross in front of the path of a vehicle at any time. The standard yellow color of school buses was created for use on U.S. school buses in 1939. The color is known as National School Bus Glossy Yellow. There are no less than eight flashing lights on the back, front and sides of buses when stopped to load or unload children. An octagon-shaped sign resembling the common stop sign also comes with flashing red lights and the word. "STOP" painted on both sides. What Florida law says about stopping for school buses Section 316.172, Florida Statutes, states: (1)(a) Any person using, operating; or dii'ing a vehicle on or over the roads or highways of this state shall, upon approaching any school bus which displays, ATTENTION! Set your clock get started early! OWNa a i, Lake Butler in Lake Butler. It's time to prune and fertilize trees, plant a flower! IQI 4Aain this yearduring October, the City will provide removal of extra trash and debris, at no additional costs, in order to help clean up & beautify our community. Clean-up, Fix-up, Repair & Paint! 'h1 ^^-11'1 rcm '"*"~ dla0 'rh & I- Got ajunk car in your yard? tI Arrange to have it hauled away BII- free! Call City Hall at.496-3401: for help. Show your PRIDE in your Community..... Do your part to beautify Lake Butler during October! City of Lake B." OK, now you know what the law says, but what does this mean in non-statute language? The Florida Highway Patrol has provided these explanations: divided highway has either a five-foot unpaved space (median) or a physical barrier separating the roadways. Examples of a barrier are a chain-link fence or a concrete abutment. An unpaved space is a grass, dirt, gravel, water, etc. division between one- or multi-lane roadways going:.in opposite directions. The- lahing lights. Do you have to stop for a bus that is flashing yellow lights? Flashing yellow lights are for warning purposes, primarily to let you know the driver is nearing a loading or unloading zone and will ,soon:be stopping and displaying. flashing red lights and extending the stop. arm. Youi should not stop for .flashing yellowlights;ghowever,. youshould slow and be prepared to come to a complete stop. When you see,a school bus with yellow lights flashing, start preparing to stop; Signal your intentions by slowing down and activating your brake lights. Don't surprise the drivers behind you by your actions. 'Always let other drivers, wiho may not know the law or may not be as safety conscious as ii utler you, know what you are doing. See SAFE, p. 4A WOW! - -- I I I Disabled employment program avail. Do you have a significant physical or mental disability and eed .help to work? Vocatiblal Rehaib"'may be able to assist you. Vocational Rehab is an employment program that provides services for eligible people who have 'a significant physical or mental impairment that limits them working. The services are designed to enable people to prepare for, get, keep, or regain employment. For more information in Union and Columbia counties, call (386) 754-1675. ComingSoon! WOW! Music IGER Music! Available on CD, October 2008 sail to Get Away, Without Going Far-Celebrate the S 296th anniversary of the U.S. Navy in Jacksonville / 6/ C'P'(r a October 10-13. Jacksonville will welcome the USS f Where FlorJa Begins. Stephen W Groves, a guided-missile frigate, and offer tours of the ship. On October 13, come take Part In the Jacksonvllle Navy Memorials 20th anniversary celebration. While y you're here. enjoy all the water by playing in it at the beach, avoiding it at golf or spending tfine dining beside It-either way, you're sure to make a splash. Find great values on acatlon packages and info on.0ther events you won't want to miss at Vsluacksonvilte.com/escape awkno atVil_ aksnvlle on II 180-73-66]L Woman Digs Tunnel From Her House to Grocery Store BEXAR COUNTY-After applying Thera-Gesic"to her sore shoulders, Mary Ann W. dug a 3,927 foot tunnel from her house directly to the entrance of her favorite grocery store. When asked by curious onlookers why she didn't just drive her car there, she painlessly replied, "None of your dang business!" l m , Go painlessly with Them-Gesic" 'Russell A. Wade III, P.A. S\ Attorney at Law .,- (386) 496-9656 WE ARE PLEASED TO ANNOUNCE WE ARE NOW TAKING NEW CLIENTS AND HAVE EXPANDED OUR REPRESENTATION Estate Planning Will Trusts Probate Corporate/LLC Formation Business Law Real Estate Transactions Contracts Evictions Divorce Custody Adoptions Several and Corporate Litigation Personal Injury 155 SE 6t' Place Lake Butler, FL S(Directly behind Badcock Furniture Store off of Main Street) VOTE Doyle Williams for County Commissioner District 3 * Born and raised in Union County. * 5th generation Union County resident. f " STaxpayer and landowner. * Member of Sardis Baptist Church of Worthington Springs. * Member of National Wild Turkey Federation. * Member of American Cancer Society. * Wants YOUR tax money spent wisely and efficiently. i * Self-employed and part business owner. * Honest, Fair, Dependable and Hard Working. . * On original committee for the Farmers Market in Lake Butler. ; '::, * Presently on Union County Extension Agricultural'Advisory Board, 10 plus years. ;i i i; * Will actively work to keep our community a place where people want to work and live. 'l fl * Will actively work to make sure your tax money is spent wisely and efficiently. i; " Your questions or concerns are welcome 352-745-6015. I Thank your for your consideration and support on November 4th. Pd. Pol. Adv. Pd. for and approved by Doyle Williams, Democrat for County Commissioner, District 3. October isLB clean-up month October is Fall Beautification Month in Lake Butler. Throughout the month and free of charge, the city will remove any extra trash, debris, scrap metals, old appliances, junk cars, dilapidated building or other nondesirable items in order beautify the community and help keep it clean. The city commission stated it recognizes that in order for the community to prosper, it must be a desirable and attractive community, and is asking each resident and business to take this time to repair, clean, improve and landscape their properties. Businesses and residents should place the items at the street during regular garbage pick-up days. I I j J :.. ." , .... 7i. .. . V,14 A, y fN i - Page 4A TIMES Driggers sets Masonic milestone BY TED BARBER Special to the Times 'Sixty-five years ago, Clayborn Driggers was initiated an entered apprentice mason in Lake Butler Lodge No. 52, Free and Accepted Masons. The date was June 17, 1943, and a month later, he was passed to the degree of fellow craft then raised to the sublime degree of master mason on Sept. 16 the same year. Driggers, 92, earned the title of worshipful (master of Lake Butler Lodge) in 1950. He served as a junior steward in 1989 and earned his 40-, 50-, 55- and 60-year Service Certificates of Good Standing in 1988, 1993, 1998.and 2003, respectively. On Sept. 16 of this year, the Grand Lodge of Florida awarded Driggers with his 65-year Certificate of Good Standing. Clayborn and Irene spent most of their lives in Union County. Presently they are residing at Still Waters West, an assisted living facility in Lake City. They have been there for almost a year. Disability awareness observed On Sept. 23, the Union County School Board adopted a resolution designating the fTrst two weeks of October 2008 as Disability History and Awareness weeks. Within the resolution' the- school board encourages schools to provide instruction on disability history, people with disabilities and the disability rights movement during the first two weeks of October as well as periodically throughout the school year. The Legislature also encourages cooperation between the school system, ppst secondary institution and, the. community at large to promote better treatment and fairer hiring practices for people with' disabilities. The Americans with Disabilities Act of 1990 was founded on the four principles of inclusion, full participation, economic self-sufficiency and equality of opportunity for all people with disabilities. There are more than 400,000 students with disabilities in'Florida's K- 12 education system. There should be one day when there is open season on senators. WILL ROGERS 1879-1935, American Hunmorist, Actor Driggers has been a Gator and Tiger fan all of his 92 years. He and Irene have two children, six grandchildren and seven great-grandchildren. Daughter Marguerite and her husband, Wayne Larkin, reside in Gainesville and son Clay and his wife, Cassie, reside in Lake Butler. Joey Delecruz, the present district deputy grand master of the "Friendly" 10th Masonic District (representing Most Worshipful Joe Fleites, the grand master of Free and Accepted Masons of Florida) and Worshipful Master Leaman Alvarez of Lake Butler Lodge presented Driggers with certificates, pins and a grand master's coin. Right Worshipful Ted Barber served as the chaplain and Right Worshipful George Barber read Worshipful Drigger's masonic history during the ceremony. The ceremony and honor of receiving the 65-year Grand Lodge Certificate of Good Standing is one that rarely happens and is one that all who participated will not forget. SAFE: Continued from p. 3A FHP school bus test Test your knowledge of the law by taking this short True or False quiz provided by FHP. 1. On a two-way street or highway, only vehicles moving in the same direction as the school bus must stop for a legally stopped school bus- displaying red loading lights and stop signal arm. 2. Vehicles must remain stopped until all red loading lights ha e.been ~trped ff, -; - "3. Vehicles tral ilrng"'i'the opposite direction as the school bus displaying red loading lights and extended stop arm do not have to stop if the street or highway is divided by a center two-way left turning lane. 4. When vehicles are traveling in the same direction as a school bus displaying its yellow warning lights but has not yet come to a complete stop, they must stop. 5. When vehicles are traveling in the same direction as a legally stopped school bus displaying red loading lights and an extended stop arm, they do not have to stop if the street or highway is a four-lane divided by an unpaved-(grassy) median at least five-feet wide. Perhaps in His wisdom the Almighty is trying to show us that a leader may chart the way, may point out the road to lasting peace, but that many leaders and many peoples must do the building. Eleanor Roosevelt 1884-1962, American First Lady, Columnist, 6. When vehicles are traveling in the same direction as a school bus displaying red loading lights and extended stop arm, they must stop if the street or highway is a four-lane roadway divided by a raised median-. - See how you did .l, False. .On any two-way street or highway, all vehicles must stop. 2. True. All vehicles that are required to stop must remained stopped until the red lights are turned off and the stop arm retracted. 3. False. Only a five-foot wide unpaved median or a physical barrier provides "the sepatafion needed to permit Vehkcle' iri elring i' the opposite direction of the school bus to continue without stopping. 4. False. Vehicles traveling in either direction on any type of road do not have to stop for a school bus displaying yellow flashing lights. These are warnings that a stop will occur very shortly. 5. False. Vehicles going in the same direction must always stop for a legally stopped school bus displaying flashing red lights and extended stop arms. Only those vehicles traveling in the opposite direction on the other side of the median are not required to stop. 6. True. Same rule as above applies. Craft fair and yard sale set Oct. 18 A craft fair and yard sale will be held at the Worthington Springs Community Park on S.R. 121 on Saturday, Oct. 18, from 8 a.m.-2 p.m. Breakfast, funnel cakes, cake walks, quilt and afghan raffle, hot dogs, barbecue chicken lunches, music, yard sale booths and craft sale booths are the main attractions for this event. The fundraiser is sponsored by Sardis Baptist Church to raise funds for its new Family Life Center building. Booths are still available. Contact Ayn McDermott at (386) 496-8480 or call Sardis Baptist Church at (386) 496-0892. Never mistake knowledge for wisdom. One helps you make a living; the other helps you make a life. Sandara Carey Without wisdom, knowledge is either useless or destructive. Source, Unknown PIIP First Baptist Church of Lake Butler invites all married & "soon to be" married couples to see FIREPROOF4his Saturday, October 4th at the Florida Twin Theatre in Starke. Please come by the office and pick up your FREE tickets for the 4:45 pm or 7:00 pm showing. These tickets also include free childcare for ages infant 6th grade. For more info call us at 386-496-3704 1I1 Too H I For Senior To' ATTENTION ALL CARE PARKSIDE WILL CARE FOR I ONES WHILE YOU GO ON V We offer: GREAT -FACITI GOOD FOOD CARING STAFF LIMITED SPACE AVAILA caot iajheomatio cat Cdathe.y Pitts, 904 PazA~i 1 Assisted Livin( You v-iome w 'ay 7t On Church Street across from WJ License #: ALI)278 ot ^ Florida Credit Union At Florida Credit Union, members come first. They offer honest deals to every member, every time. Therefore, they give you low rates on auto and home loans, free checking accounts with generous savings rates and other y 'no-cost and low-cost services. Y OUa When you deposit money in Florida Credit Union, you become a member, not just a customer, because your o u r deposit is considered your share of the ownership in the credit union. Florida Credit Union specializes in helping people save money. They provide objective financial advice that is focused on reducing loan payments, increasing T ra v e lu I savings, and improving credit scores. Florida Credit Union is not your typical "bank." In addition to traditional T ra v e l? banking services, they also finance land/home packages, provide financing for businesses, and offer loan services 24 hours a day, 7 days a week. Their goal is to help you take advantage of the benefits of a locally owned, not- for-profit credit union. GIVERS ... The authors of this 2008 Fall Preview Edition Local Business Update suggest that you call Florida Credit GIV SU nion at (904) 964-1427 or visit them in front of the Super Wal-Mart at 2460 Commercial Drive in Starke, YOUR LOVED ormore information. Bradford Pre-School VACATION. Home of the Future Tornados! Have you been hopelessly searching for the best possible environment for your child or children to attend while you're away? Allow the caring and dependable staff at Bradford Pre-School to give you a tour of their facilities. ..-.--- .. They are always happy to answer your questions and will keep you informed of your child's development. Bradford Pre-School is a participant in the VPK Program and spaces are still available. -Y Their emphasis is on providing a nurturing and healthy environment while-allowing the children that attend to grow and explore their endless curiosity. Offering the latest development of learning toys and equipment, the staff' at Bradford Pre-Schaoelwelcomes your child to a world full of acceptance and warmth and their trained teachers encourage the-children to learn at their own pace. They know that when it comes to something as priceless as your child or children, you deserve to find the very best. At Bradford Pre-School they can offer you just that. From spacious rooms full of friendly faces to a playground made for good'times, they desire to make each individual child feel special from A to Z! BLE The authors of this 2008 Fall Edition Local Business Update suggest that you stop by Bradford Pre-School at 407 W. Washington St. in Starke, phone (904) 964-4361, and check out the best in childcare! , C & G Manufactured Homes Building Homes For NE Florida Over 25 Years -964-2220 I2 ) Trying to buy a home can sometimes be frustrating. It could be credit problems or finding a mortgage that is reasonable and that you can afford. Manufactured and modular homes have been the latest trend for homebuyers all across the nation. C & G Manufactured Homes can help you achieve the "American Dream," and get youin your very own home with very little money down. SWhen you purchase your home from. C & G Manufactured Homes, you can save thousands in transportation costs because the factory is right in Lake City. They would be pleasgD to give you a tour of the factory so you can see their top-quality construction for yourself. Their homes have'all the same features from the interior to the exterior construction that are found in many site-built homes. Kitchens provide spacious cabinets as well as brand-name appliances and storage areas. Many baths offer luxury features including roman tubs and skylights. S Facility Bedrooms and living areas reflect the most contemporary design concepts including built-in cabinets, elegant window treatments and, in some models, energy-efficient fireplaces. Financing options are available for qualified buyers. If you would like to see their floor plans, or would like more om c n'(te information on any of their model lines, please stop by their office located in Lake City at Highway 90West, Phone (386) 752-3743; or 278 Deputy J. Davis Lane, phone (386) 755-8885, or call either office to schedule your personal tour of their factory. ainwright Park The editors of this 2008 Fall Edition Local Business Update are pleased to suggest that C & G Manufactured Homes could be your best opportunity in the hunt for your new home. Visit their website for a virtual tour at \ v~.cimffhomes.com. Pictured (sitting) are Irene and Clayborn Driggers. Standing behind the couple are Leaman Alvarez, George Barber, Ted Barber, Bob Gaubatz, Joey Delecruz and Wayne Larkin. Photo by Jerry Couts. Local Business Update Prepared By County News, Inc. 2008 All Rights Reserved (800) 580-0485 I--A I I i: October 2, 2008 TIMES Page 5A 2 new fellow A craft Masons BYTED BARBER Special to the Times . These two are really dying to get out of here. saamcaPI:` .- -m e,, -!- r j --- -V---- ..-J This saloon will convince anyone about the evils of drink-(and bartenders?) Tombstone returns Halloween Carnival and Spook Trail to be hosted by Lake Butler Rotary and Kirby Pharmacy The Ghost Town of Tombstone is returning to Lake Butler this Halloween, along with an added carnival, promising fun for all ages. The Lake Butler Rotary Club, in conjunction with Kirby and Company Pharmacy, are busy planning the event that will take place two weekends this year: Friday and Saturday, Oct. 24-25, and Friday and Saturday, Oct. 31-Nov. 1. Due to a scheduled football game on Halloween night (spooky!), the city -i4si, !hekli jgri k-or-Troat ,tis year on Saturday, Nov. 1. The event will be located on C.R. 231, just past the entrance to Reception and Medical Center. Follow the signs. "We wanted to bring back the best elements of the old Worthington Springs Halloween Carnival that we remembered from childhood," said Rotary Club President Russ Wade, "and by combining the carnival with Daniel Richard Ryder of Bradford Lodge No. 35 and Leavy Shane Robinson of Lake Butler Lodge No. 52 were passed from Entered Apprentice Degree to Fellow Craft Degree in Free Masonry at Lake Butler Lodge on Sept. 24. --Right Worshipful Ted Barber served as the worshipful master for the degree and Worshipful Don Noland, an honorary member of Lake Butler Lodge, gave the Winding Stair Lecture. Right Worshipful George Barber read the Fellow Craft Charge. Twenty-seven Freemasons were present for the degree. the Tombstone Spook Trail we plan to take itto the next level for the new generation." The Rotary Club hopes that if the event is a success, it can become an annual event. The Halloween carnival will feature food, a number of games and activities, including the familiar kiddy games such as the fishing booth, lollipop pull, cupcake walk and duck pond, as well as traditional carnival games of skill for older children and adults. Many games will be available for $1, while most kiddie games will cost only 50 cents. There will also be family games that See TOMB, p. 8A (Front, I-r) is Don Noland, Dan Ryder, Leavy Robinson, Greg Cameron, (back, I-r) Don Hicks, Ron Ratliff, Colan Coody, Ted Barber, Tim Geibeig, Gary Ranard and George Barber. SSWIFT CREEK P E A L Y r| IN ., E I T- i R A T I TURKEY RIDGE 1 +/- ACRE LOTS STARTING AT $22,500 If you have rental property and need tenants, Owner Financing CALL US! Available * *~ i Read our Classifieds on the where one call CIl SSltied A IS World Wide Web does itall! ""WWWs--d- 19041964-6305*13521473-2210 *3861 496-2261 U- Tri-County Classifieds Bradford Union Clay Reach over 20.500 Readers Every Week! 40 Notice 41 Vehicles Accessories 42 Motor Vehicles 43 RV's & Campers 44 Boals 45 Land for Sale 46 Real Istate Outi of Area 47 Commercial Property Rent, ease. Sale 48 Hiumes for Sale 49 Mohile HIomesI rr Sale 50 For Rent INDEX 51 Lost/Fouind 52 Animals & Pels 53 Yard Sales 54 Keystone Yard Sales 55 Wanted 56 Trade or Swap 57 FonrSale 58 building! Materials 59 Ier.stnul Services 60 Secretarial Services 61 Scriplures U2 VNacaioli'l'rasvel 63 Love lines 64 Business Opportunity 65 Help Wanted 66 Investment Opportunity 67 llunting Land for Renl 68 Carpet Cleaning 69 F[od Supplenlants 70 Self Storage 72 Sporlaig Goods 73 Fuarmn l(iipnment 74 Computers & Computer Accesisories CLASSIFIED DEADLINES Word Ad Clussilied Tuesday, 12:01) noon Chlssiried Display' Tuesday, 12:00 noon ^'% A., %A. . 9U0464-3bb5U 352-473-2210 386-496-2261 Ieoi w1w N NOTICE Clussified Adveni.ing should be paid in advance unless, credit lis already been esihabliled with tlhe n sp;i'r A I3 (It service charge sill be addeJd to all billiingF t crer pisi. igpnd li.ndhiiI. All Iad. pl.Iad by phone: .1Ie icaid ciilk iLto lh adveneril ile illme or pl.ilesien. However. lie cl;ssifile i rwli c;nil be lilld rLspollhlbl lr inlsaikce, i11 Cla.iiied advernisin Uikenl by phone. TlI news.rp;aper rerrve thlie rigtl to correctly cli.aify ;ald edit ll cnp) or it) rejecil r caincel ;iy, adveen-senlis al un loinme. Only :.indard ;abbre\.ionsn uill he ;iaccpled. 40 Notice EQUAL HOUSING OP- PORTUNITY. All real estate advertising in this newspaper is subject to the Federal Fair Housing Act of 1968 which makes it illegal to advertise "any preference, limitation or discrimination based on race, color, religion, sex or national origin,oran in- tei'tiori to make any such preference, limitation or discrimination." Familial status includes children under the age of 18 living with parents or legal cus- St6diahs,-pregnant women and people securing cus- tody of children under 18. This newspaper will rpt knowingly accept any advertising for real estate which is in violation of -the-.law:.. Our readers -are bereby informed that Small dwellings advertised in' fthir newspaper are- available on an equal opportunity basis. To complain of discrimina- tion, call HUD toll-free at '"1 800-'669-9777, the toll- ,-frde telephone number for,the hearing impaired is 1-8'00-927-9275. For :iithe'r -information call Florida Commission on ,Human Relations, Lisa SSutherland 850-488-7082 ext' #1005: CLASSIFIED ADVERTIS- ING should be submitted to the Starke office in writing & paid in advance unless credit has already been established with this office THE CLASSIFIED STAFF CANNOT BE HELD RE- SPONSIBLE FUH MiS- TAKES IN CLASSIFIED ADVERTISING TAKEN OVER THE PHONE. Deadline is Tuesday at 12 noon prior to that Thursday's publication. Minimum charge is S9.50 for the first 20 words, then 20 cents per word thereafter. Auctions 41 ESTATE AUCTION SAT- URDAY, OCTOBER 4TH, 10am across from Harvey's Supermar- ket on SR21 between Keystone and Melrose. Estate of the late Mary Jane Hoover. A house full of furniture, china, kitchen items, jewelry, hats, clothing, tools, yard and garden tools and lots more. Cash, check, Visa, MC or debit. 12% buyer's premium plus tax. 2% BP discount with cash or check. Keystone Auction Service, AB1648/AU2225, 352-283-6297 42 Motor Vehicles LOOKING FOR TEN PEO- PLE FOR CREDIT RE- BUILDING PROGRAM with payments under $300/mth. Call 866-665- 2372. TAKE OVER PAYMENTS UNDER $300/MTH on Honda Pilot or Nissan Altima. Call 866-665- 2372. 2000 PETERBILT MODEL 379, $45K. Owner fi- nancing. Call Anthony at 904-964-7537. 43 RVs and Campers 2004 32' CEDAR CREEK FIFTH WHEEL 2 slides, rear kitchen and lots of. storage. Excellent condi- -tin, $18.500. Call 904- 21f'99. 47 Commercial Property (Rent, Lease, Sale) SO FT. Bradford Indus- trial Park. $1,000 for each bay. Smith & Smith Realty, 904-964-9222. 48 Homes for Sale 2/1 HOME COMPLETELY REMODELED. Asking $77K, owner will pay closing costs and no down payment to qualify- ing buyer. 2 miles N. of Starke on 301.. Mobile Homes for Sale FOR SALE BY OWNER SINGLEWIDE MO- BILE HOMES starting at $7,900. 2/2 or 3/2. Also, 3/2 on one acre for sale. Call Jesse at 352-318- 9262. FOR SALE BY OWNER 3/2 SINGLEWIDE MO- BILE HOME. Early 2000 and/or late '90's. Call for details, 352-318-9262. 1/1-$400/MTH, FIRSTHAND LAST. Low utility (FPL). Hwy 301 N, Starke, 904- 769-6020. MACCLENNY LAND HOME PACKAGE- New 1579 sq ft 3/2 with deluxe kitchen appliances, island, lots of cabinets, formal dining and more on 1.5 shaded acres on the St. Mary's River. Was $135K, re- duced to $120K. Call 904-259-8028. BRAND NEW 1369 SQ FT 3/2- DELIVERED, set-up, A/C, skirting and steps all installed. $52K. call 904- 259-8028. BRAND NEW 4/2, 2280 SO FT Delivered, set-up, A/C, skirting and steps all installed, $69,400. Call 904-259-8028. BRAND NEW 4/2, 1560 SO. YOU CAN OWN YOUR OWN HOME 3/2 mobile home, all redone. Seller will finance. $750 down, $365/mth plus $195 lot rent. Hidden Oaks Mobile Home Park, 386-496- 8111. 50 For Rent STARKE 4/2 $950/MTH PLUS $1,000 DEPOSIT. Service animals only. Call S352. MOBILE HOME IN SMALL PARK LOCATED ON SR121. Close to all pris- ons, CH/A. Call 904- 364-8535. 2/2 MOBILE HOME WITH LAWN SERVICE AND PEST CONTROL. Nice private area. Call for info, 904-964-3359. $625/ mth, first and last plus $300/dep. 12X70 TRAILER, 2/1.5 - VERY CLEAN. $500/mth, $400/dep. First and last month's rent required. Call 904-782-3380 or 904-782-3367. KEYSTONE 2/1, CH/A, HOUSE ON HALF MOON SLAKE. 2 acres, 12x24 shed, partially fenced. References, $595/mth. Call 352-246-1450. KEYSTONE 4/2 WITH CA- THEDRAL CEILINGS, CH/A, remodeled, carpet, wooded lot, close to bike path. References, $795/ mth. Call 352-246-1450. 2/1 SINGLEWIDE MOBILE HOME $500/mth plus $500/dep. 22515 NW 53rd Ave., Lawtey. Ser- vice animals only. Call 904-312-3999 or 904- 782-3867. HAMPTONLAKEAREA2/2 MOBILE HOME. $500/ mth plus deposit. Call 352-473-8981. FOR RENT BY OWNER - 2/2 ON 1/4 ACRE. Brand new DWMH with appli- ances. Call for more info, 352-318-9262. FOR LEASE (OR SALE) KEYSTONE HEIGHTS MOBILE HOME 2/1 on one acre fenced lot, paved road. Close to town. First, last and se- Keystone Hauling & Handyman Service, LLC *Carpefr -IlawRepvki *Odd.Tobs *YardWErk "CkGiden P ADInd " Lkffmd & InsautJ -BushHogMowin *lve11iming&Rmmfli *SktesanUp *PkwBark&CpiNMukh *hewodForSale *FFeEatimatc Owner: Kerrv Whitford S; .-ita *. curity. $500/mth, call 352-475-3094. 631 FRANCES ST. -NEWLY REMODELED 2/1 block home in the city of Starke. Carport, corner lot, fenced . backyard, within walk- ing distance to schools. $700/mth, $700/dep with one year lease. Call 904- 964-5017 or 352-745- 6463. 4/2 COUNTRY HOME - SOME PASTURE. Lo- cated in Providence/ Worthington Springs area. CH/A, $925/mth. Call 386-496-2354. 3/1 IN STARKE'NEAR SCHOOLS. $750/mth plus $750/dep. Rental references required, 6 mth lease. Call 904-964- 2167, leave message.. WATERFRONT 2/2 FLEET- WOOD Has handicap ramp, front porch, full deck on back facing Deer Springs. Cute cottage style, large yard, covered carport. $695/mth plus deposit. Call 352-473- 2252, service animals only. TRAILER LOT FOR RENT -UNDER SHADE TREES in country. Lots of space, service animals only. Utilities included. Call 352-468-2684. SMALL TRAILER FOR RENT IN COUNTRY. Utilities included, call 352- 468-2684. DOUBLEWIDE MOBILE HOME FOR RENT- Key- stone Heights. Newer 3/2 beautiful DWon large 1/3 acre lot with new carpet, fully equipped kitchen, washer/dryer hook-up: $650/mth plus deposit Call 904-571-4264. FURNISHED ROOMS FOR RENT! COMPLETE with CH/A, cable provided, all utilities paid! Central loca- tion. 10% discount on first month's rent for senior cit- WE HAVE TENANTS! CUSTOM COMPUTER SERVICE Now Located at 665 E. Main St. LB (386) 496-1990 [email protected] r .I I A .... >: * 1/1 Mobile Home w/lake access $450/mo & security. * 2/1 Cottage on Lake Geneva. $595/mo & security. S1 * 2/1 Cottage in /Earleton $925/mo & security. * 42 5 Home on Bedford Lake $950/mo & secuwlty * 3/2 Home on Lake-a-wana $995/mo & security. * 2/2 Home on Swisher Lake $1,100/mo & security. S4/3 on Lake Santa Fe $1,500/mo & security. 19 3,.Icra No Job to Small Over 30 Years Experience PO. Box 183 Lawtey FL. 32058 Perry Nicula Len Eaves Cell 904-364-7451 Cell 352-745-0650 ER-13013402 Email [email protected] Page 6A TIMES October 2, 2008 SRead our Classifieds on the Where one call Classified Ads WorldWideWeb does/ta// 19041 964-6305 -(3521 473-2210 -*3861496-2261. Oall352-468-1323. SPECIAL RENTAL 2 & 3 BRORLAKEFRONT2. HOUSE FOR RENT - COUNTRY SETTING. 2/2, CH/A, washer/dryer hook-ups, carport, shed. Bradford County. $675 plus deposit. Call 352- 473-7208 or 352-745- 607 $550/mth plus electric. Also, studio apartment, utilities included. $450/- mth. Both first and secu- rity. Call 352-473-2919. 2/2 FOR RENT SERVICE '-*NIMALS ONLY, NO SMOKING, credit report required. $950/mth plus security. 525 Hebron Ave., Park of the Palms, Keystone Heights. Calf 352-235-1586. FOR RENT NEW APART- MENT, 1BR FURNISHED with cable and carport. $500/mth. Call 352-283- 4644. Lake Geneva, SR100. 2/1 HOUSE WITH FENCED YARD AND A/C on St. Clair St. $450/mth plus deposit and security. Ser- vice animals only. Call Joan at 904-964-4303. 3/2 MOBILE HOME FOR RENT ON 2.5 ACRES. $700/mth plug security 'deposit. In Lawtey. Call 904-894-2552. 5/2 HOUSE WITH BIG FENCED YARD 4 miles from Lake Butler, 10 miles to Starke. $300/dep, $750/mth. Call 904-284- 9223 or 904-305-8287. 2/1 HOME WITH CH/A, COMPLETELY REMOD- ELED. New carpet and paint. $470/mth plus deposit. Call 904-368- 0832. consider small pet. Call 352-473-5214. STUDIO APARTMENT FOR RENT- $450/mth includes electric and water utilities. 226S Thompson St. First month rent plus deposit. Call Mr. Corbin at 904- 563-5410. 52 Animals & Pets _ DOG TAGS DOG TAGS - DOG TAGSI Buy them at the Office Shop in Starke on Call St. Only $4.75, including postage. Many colors, shapes and styles to choose from. Call 904-964-5764 for more information. SHIHTZU AND DACHS- HUND MIX PUPPIES, $150. Chihuahua and Dachshund mix puppies, $150. 40-45 chickens and rooster also for sale. Call 386-431-1404. 53A Yard-Sales 'SATURDAY, OCTOBER 4TH, 7:30AM-12PM lo- cated at Smith Brothers Body Shop, north 301. Junior jeans (0 and 1, Hy- draulic, Charlotte Russe), tops, shoes (sizes 6 and 7), and purses. Baby and toddler boy clothes (size newborn to2T) and shoes (2-5). Medical scrubs size XS, toys, housewares, and holiday items. SATURDAY ONLY, 8AM- 3PM. Furniture, appli- ances, books, Avon signs, little girls stuff, etc. 3 miles on 100S on SE 129th St., follow signs. Call 904-964-6604. SIDEWALK SALE/MULTI FAMILY YARD SALE. Saturday, October 4, 9:30am-? Lyri's, 103 Ed- wards Rd., Starke. MOVING SALE SATUR- DAY, OCTOBER 4, 8AM- ? Lafayette St., Starke. Lots of furniture, clothing, toys and misc. THURSDAY, FRIDAY AND SATURDAY, 9AM-4PM. 640 Glendale St. off SR16W, look for signs. CDs, movies, washer and dryer, and house- hold items. MOVING SALE FRIDAY, 10/3 AND SATURDAY, 10/4, 8AM-3PM. 431 North Broadway St., Starke, across from en- trance to BHS football field. Furniture, tools, clothing, firewood, house- hold items. Tob much to list. Faulkner Realty, Inc. Susan Faulkner-O'Neal, Broker 19041904-5069 405 W. Georgia St. Starke [email protected] 3BR/2BA in Starke Walk to schools! Huge screened porch, fireplace. For Sale at $145,000 or Rent for $900/mo Broker/Owner 3BR/2BA Newly renovated for rent $800 (BrokerlOwner) Rhonda Stifel 904-769-9699 Ann Ryan 904-364-6148 Ken Ryan 904-364-8213 HOMETOWN Amanda Williams 904-364-8340 sNs aSSEnn Ronnie Norman 904-364-6985 "Where You Comei First" Gayle Van Wagenen 904-449-3938 ww .Hmeon^ rs^aly o HOMES FOR SALE 4BR/4BA 2834sf 2-story on Walnut St., carports, screened porch, detached game room & workshop.................$290,000 3BR/2.5BA 4016sf Newly Renovated 2- story home with large room, FPs, sunroom, formal living & dining rooms $399,900 3BR/1BA Block Home in town with bonus room. Large workshop /shed, partially fenced & carport. ................ ......................... $106,000 4BR 2-story Victorian on Cherry St. Wrap around porches, formal dining & living rooms. ............................................... 165,000 3BRI1.5BA 1477sf on oversized fenced lot.............................$:0........ $160,000 3BR/2BA on 1 acre Newer windows, vinyl siding, deck, shed & more. Only...... ........................................... $99,250 3BR/1BA on 1.5 acres. Needs some TLC............................................$89,900 3BR/2BA Never lived in Hardiplank & brick. Must See. Only..............$135,000 Large Home in Keystone, over 2000sf, Keystone area .................................... ....$109,000 3BRI2BA, on 1 acre. Deck, landscaped, motivated seller, move in ready.$79,900 LAND FOR SALE Mobile Home Lots: Just reduced & many to choose from. 77 Acres Paved road, would be good hunting land or get-a-way. 1 Acre Only..............................$15,000 1 Acre near Bayless Highway ..................................................$ 16 ,500 WATERFRONT FOR SALE Lake Elizabeth, crystal clear, 1+ acre with power pole, well, shed. Ready to build on....................................$119,000 Sampson Lake lot with well. 50" on the water. Mobile homes allowed Only $54,500 We Have Commercial Property! Call for information. RedoSe (904) 964-7330 107 E. Call Street Starke, FL MOVING SALE 6623 CR18, SATURDAY ONLY. Hampton, 8am-? Large variety of items, cheap, linens, appliances, chairs, tables and much more. HUGE YARD SALE FRI- DAY AND SATURDAY At hospital, turn left by First Presbyterian Church, go one mile, L on NE 21st Ave, follow signs. SATURDAY ONLY, 8AM-? AT JULIA'S FLORIST on Hwy 301. BABY AND ADULT CLOTH- ING, BEDDING, TOYS, and misc. Saturday, Oc- tober 4th, 8am-lpm be- hind Lawtey Apartments under covered bridge. LARGE FAMILY YARD SALE SATURDAY, OC- TOBER 4th, 7am until everything is gone. 540 Weldon St., Starke. Lots of baby items, household items, frames, curtains, vases, adult size clothes as well as baby clothes. ESTATE SALE FARM ITEMS, CHINA, FURNI- TURE, pottery, cast iron and lots more. Thurs- day, Friday and Saturday, 8am-2pm, 305 Clark St. FRIDAY AND SATURDAY, 8AM-4PM, GRIFFIS LOOP. Baby items, mov- ies, DVDs, fishing stuff, plants and lots more. 2 BLOCKS NORTH FROM NORMAN'S FOOD STAND on Hwy 301, Fri- day and Saturday. Car for sale. SATURDAY ONLY COLD- WELL BANKER parking lot on Call St. Tons of nice items, 8am-12pm. FRIDAY ONLY CONERLY ESTATES ON SR16W. Tons of nice items, 8am- 1pm. "RELAY FOR LIFE" MULTI FAMILY YARD SALE. Friday and Saturday, Oc- tober 3rd and 4th, 1609 Raiford Rd. Queen tu- bular water bed, electric stove, Halloween and Christmas decorations, books, crafts, sheets, tow- els, shoes, clothing. Too many items to list. Come on out and help us with the fight for a cure. SATURDAY AND SUNDAY, 100A (GRIFFIS LOOP), turn by Kangaroo, follow signs to SE 144th St (dirt GUARNTED FNANING Fianig il e rvie bider f al pste bis. nin bdder. Al pucha ser wh.wih. t obai road off curve). Dodge truck, clothing, furniture. 8am-3pm. 3 FAMILY YARD SALE - FRIDAY AND SATUR- DAY, October 3rd and 4th, 4447 Nw 173rd St. (Market Rd) off SR16 (toward the prison). Adult and children's clothing, religious and cookbooks, razor electric pavement go-cart, toys, ceiling fans, large conference tables and home decor. 2.5 MILES NORTH OF LAWTEY on 301. Lots of clothes and shoes. Saturday, 8am-? 53B Keystone Yard Sales SATURDAY ONLY 8AM- 1PM. 1001 SE 50TH ST. BATHROOM EMODELING + MORE HANDYMAN SERVICES " Complete athroom remodeling, including wallan tioortlework.Tuband showerconversions_ Remodeling. From kitchen bath to exterior repairs, wall-floor-tile work, built-in shower seating. References Available Lic. #202105 . S' CALL STEVE 904-465-0078 . ,.s nira~r ? -Si (off 21B). Furniture, ap- pliances, kitchen utensils, sporting goods, fishing stuff. .MULTI FAMILY SATUR- DAY ONLY, 7AM-? 6523 Triest Ave., off Commer- cial Circle. Furniture, pod and patio Turniture, women's and children's clothing. FRIDAY AND SATURDAY, 9AM-2PM OFF CR214 Sat 6220 Colgate Rd. Din- 'ing table with 6 chairs, 'more furniture, baby stuff, household goods, clothes and more. For info, call 352-473-4757. 3 FAMILY YARD SALE - SATURDAY, OCTOBER 4, 8am-? 6860 Crystal Lake Rd. Desk, dog pen, basketball hoop, tires and wheels (285/75 R16's, fit County Rd 241 Union County Providence. FL 32054 ... %., 'Ajj An, U1'.4 4.,, ill .k if , _A^r) 'I '. A4AAA" AA nA . A AAi~t*. A ".A~k, A', A '4*.~," -: A -! ra r 4 AAU An. I: .An' ". 103 Acres Property Features: *Working Sod Farm *Over 2.5 miles of Paved Road Frontage *Well on property an F150), and clothes. SATURDAY, 8AM-12PM. 7695 CLOVER LANE near Friendship Bible Church. Call 352-473- 7556. FRIDAY ONLY 8:30AM- 3PM. Crystal Lake Home- sites, 383 SE 73rd St, follow signs. Nice ladies and children's clothes, household items. 53C Lake Butler Yard Sales YARD SALE MEN, WOM- EN'S AND CHILDREN'S CLOTHES, glassware, cookware, cookbooks, larger than ever sale. Friday and Saturday, 9am-4pm. 1/2 mile of Worthington Springs on 18 East. GRAND OPENING FOR TY- LER'S THRIFT SHOP at new location. Go SR100 to SR121 in Lake Butler, turn left, go 7 miles to 2nd road past Quiett Well Drilling, look for signs. All items on tables and in shed. Call 386-233-2039, lots of stuff. 57 For Sale BED KING SIZE Pillowtop mattress and boxspring with manufactures war- ranty. Brand new still in plastic. Can deliver. Sell for $200. Call 352-372- 7490. BED-OUEEN orthopedic Pillowtop mattress and box. Name brand, new in plastic, with warranty. Can deliver. Sacrifice $120. 'Call 352-372-8588. II Ro&eRs Built-UpRoos PO Box 82 Ft. White. FL 32038 Office: 386-497-1419 Toll Free 1-866-9LW-ROOF Fax:386-497-1452 Licensed Bonded Insured Workers Comp. License # RC0067442 Florida Hwy 238 Union County Prmidence, FI.32o54 .4*'" An, ti Ar .I ;- : I 'i v .. .. 59. Acrec Potential use: *Minifarms *Pastureland .Cattleland oHomesites CertifiedRealEstateAuctions. com AU2 627.P. Rtn io% B.P.Y Bidnline 1U .. United _,. Certified Real Estate Auctions Marketing Brokerage iA Smith & Smith Realty Shela Daugherty, Il Reallor I' 2 (904) 964-6708 or (352) 235-1131 cell * 2BR/IBA, Corer of Oak St. & North St., in Starke, Remodeled REDUCED.....$59,500 * 3BR/IBA on Orange Street, Hardwood Floors & Above-ground Pool. Seller will pay up to $1500 of Buyer's Closing Cost..... .......... ......... ...... ........... $106,000 * 1.25 A4"VE q7NE. __12th Ave & 171st S"DI i l......................$18,000 * 2 Wooded Acres Just OffGriffis Loop ................................REDUCED $29,950 * I Wooded Acr1~ ffitiW 177th in Pleasant C I ?..... ................ BRi2 BA: " 1246 sq. ft. Starke...... .............$150,300, Saturday, October 18th @ 10 a.m. I 1 1-1-4 txul qg2 --- I I Mml * i r!ii:. i i-r .. 11 i i lula r~ ian. r- - .. --- .--------: -- S-S Page7A Read our Classifieds on the Where one call Classified Ads World Wide Web doesita (9041 964-6305 [3521473-2210 .3861 496-2261. BULK COW MANEUR FOR SALE Pure, dry-stacked. Call Anthony at 904-964- 7537. 1992 DODGE EXTENDED CUSTOM VAN, excellent condition, $1,500. Elec- tric wheelchair lift with re- mote for van, $1,000. '90 Camaro, new valve train, new tires, 5sp. Hot car but needs work, $1,000. 18ft boat with trailer and motor, $600. Call 352- 473-7425 or 904-226- 4346. STEEL BUILDINGS FAC- TORY DEALS, can erect. http//> Source# 16H. 9D4-838- 1399. STEEL BUILDINGS -$1000 TO' RESERVE. Factory direct, local consultant. Can erect (discount- ed).. Source#16H. Phone 904- 838-1399. CERAMIC TILE TOP TABLE WITH 4 CHAIRS, SEATS -" 10, $300, Cast iron king and full size headboards. Burgundy recliner, $75. Ab lounge, $40. Rod iron outside table with 4 chairs, $50. 3 small din- ing chairs, $5 each. Call 352-473-7535 or 904- 314-2798. KENMORE AND WHIRL- POOL WASHERS and dryers, new type $95 and up each. Electric stove, written guarantee, delivery available. For appointments, call 904- 964-8861. USED COMPUTERS, $99. WESTERN AUTO- IN STARKE, call 904-964- 6841 GO GREEN BUILD ROUND. Venture domes can help you build a new 3/2 home on your land for under $60K. Over 1,200 sq ft of living space. Cut your utility by 50%. With- stands winds in excess of 150mph. sulhrncmfrt@ windstream.net. 59 Personal Services CHILDCARE IN MY KEY- STONE HOME. Licensed CPR/FirstAid. Hot meals, indoor play room, lots of activities. 12 years expe- rience. Near Twin Lake Soccer Park. Infants and up. Call 352-478-8040, Lic#F04CL0116. PAUL MILLER TREE SER- VICE. Licensed and in- sured, free estimates. Call 904-796-2430. DO YOU OR YOUR LOVED ONE NEED AN IN-HOME CAREGIVER? Non- smoking only, housesitter, anifnal caretaker, drive to appts., etc. 20+ years experience. Give Helen a call, 352-473-7845.! Will pick up anywhere. $150 and up. Call 904-219-9365 or 904-782-9822. 64 Business Opportunity LIQUOR LICENSE Brad- ford County. No transfer fee. RealtyMasters, Real- tors. 800-523-7651. ? Works Alachus/Bradfl.n A Cmmnimitl Partaership. Homes For Rent Homes, Lake Homes, Mobile Homes & Vacation Properties for Rent in the Keystone, Melrose, Starke, Hawthorne Area ranging from $550 to $1,200 per month. Call for Free List Professional Property Management Services Offered by Trevor Waters Realty 60&f Refinance & Purchases - FHA- VA - Conventional -~ New Construction - Home Equity Loans - Mobile Home/Land EQUAL HOUSING LENDER 65 Help Wanted POSTAL JOBS $17 89- $28.27/HR NOW HIR- ING. Paid training is pro- vided. For appointment and free government job info, call American As- sociation of Labor at 913- 599-8226, 24hrs, emp. serv. ATLANTIC PUBLISHING HAS FT/PT WARE- HOUSE/Inventory Con- trol position available at our new Starke, FL warehouse. Warehouse and corn) or fax to 352-622- 1875. No walk-ins. *Land Clearning *Ponds *Dozer Work -Road Building *Driveways *Heavy Brush Mowing Kay Colson Waters Licensed Mortgage Broker 704 North Lake Street Suite A Starke, Florida 32091 VERY BUSY CARRIER 100% 0/0. Pull vans, flats or tanks in Florida SE, Midwest and West, out and back. Paid empty and loaded, fuel card, no fees, paid, fuel tax, home weekends. Call904-781- 0457 or 800-606-8344.. SENIOR SERVICES CASE MANAGER/UNION COUNTY. Responsible for client case records, home'visits, client as- sessments, case plans and case management. Desirable qualifications: 4yr college degree with course work in Social Work, Sociology, Psychol- ogy, Nursing, Gerontol- ogy, and/or related fields. Two years experience in Gerontology and/or related field. Experience may be substituted for the college required. Submit resume to SREC Inc., PO Box 70, Live Oak, FL 32064, 386-362-4115. Deadline: October 6, 2008. Voice/fDDAffirma- tive Action Employer. lite --^ Q 'Demolition *Road Grading R.E. Jones *Fill Dirt *Limerock Owner .Washout ,Site Prep Licensed 'Fire Line & Insured Plowing Office: 904-966-0065 Coll: 904-364-833 .. ~c- 16 Sw 66ri- Ljar,- 1 ke FL 32091 TREE CLIMBER GAS- TON'S TREE SERVICE LLC seeks Tree Climber. Experienced only. Valid driver's license required. CDL a plus. Full time, year-round work with ben- efits. Call 352-378-5801 ext 4. PART-TIME STUDENT DE- VELOPMENT SPECIAL- IST. Bachelor's degree required. Experience in academic and career advisement. Pick up application from Santa Fe Community College, Andrews Center, 209 W Call St., Starke, FL 32091, or download from -humanresourc. Submit completed application to Cheryl Canova, Direc- tor, at the address listed above by close of busi- ness, Friday, October 10,2008. LAKE BUTLER HOSPI- TAL: Housekeeping Aide FT/days. For further information, please visit our website:- butlerhospital.com. Call OFFICE (904) 964-7400 FAX (904) 964-5290 [email protected] RESIDENTIAL-COMMERCIAL MOBILE HOMES FHA- INVESTMENT & EQUITY LOANS Property Manager/Maintenance Position Established, progressive management company seeks confident, experienced real estate management candidate to oversee small apartment community in Baldwin/Callahan area. Subsidy experience helpful; basic computer 8iowledge, strong communication skills andthe- ability to follow-through mandatory. * Life, health and dental insurance * 401k, paid sick days, vacation and holidays * 35 hour work week Email resume to C. Saunders at csaunders(hallmarkco.com or fax resume to 352-224-2058 1107 S. Walnut St. Starke, Florida Jenny W. Mann (Located behind Bradford County Eye Center) Branch Manager/ S Mortgage Consultant iaFidelity FUNDING MORTGAGE CORP. MORTGAGE 904-964-4000 BANKERS ASSOvesllTIg n communtes lnveoenig 0 0lmit00i S 386-496-2323, ext 258 or fax 386-496-1611. Equal Employment Opportunity/ Drug Free Workplace LAKE BUTLER HOSPITAL: ARNP/PA current FL license, 1-3 years and ER experience preferred but not required. RN PRN positions. CNA PRN positions. LPN PRN positions. Pharmacist PRN position. Lab MLT/ MT Medical Technolo- gists FT, PT and PRN positions. Wonderful working environment, low stress, competitive salary. For further information, please visit our website:. com. Call 386-496-2323, ext 258 or fax 386-496- 1611. Equal Employment Opportunity/Drug Free Workplace. BRADFORD TERRACE IS NOW ACCEPTING AP- PLICATIONS for LPNs and RNs, full time for all shifts. Excellent pay and benefits. Apply in person at 808 S Colley ROOMS FOR RENT Economy Inn Lawte, FL Low Daily & Weekly Rates Daily Rm Service Microwave- Cable/HBO Refrigerator- Local Phone (904) 782-3332 ED'S APPLIANCE Sales Service Nice selection of Pre-Owned Refrigerators Starting at $ 165 GREAT FOR - SUMMER VEGGIES Or RENTAL PROPERTY 904-964-2966 355 N Temple Ave Stalke NO IRN .r. ELECTRICIAN WITH expe- rience, Prestige Electric. Call 352-745-0650. HORSE FARM HELP NEEDED LIVE-IN PO- SITION in exchange for free rent and electric. Horse experience a plus. Must have work ethic. Call 904-364-8526. SILKSCREENER SHEET METAL FABRICATION company interested in ex- perience silkscreener or individual willing to train. Full time, good benefits. DFWP, 352-473-4984. EQUIPMENT OPERATOR Bradford County is cur- rently accepting applica- tions for one full time EDUCATION INSTRUCTION City SCollege Accredited by the Accrediting Councl for Independent Colleges and Schools (ACICS) position for equipment operator for operating heavy equipment and other duties that may be assigned from time to time. All applicants must have a valid Florida driv- er's license, CDL (class B) preferred. Salary will be based on the applicant's qualifications. Applica- tions may be turned in or mailed to the Bradford County Road Department at 812B N Grand St., Starke, FL 32091. The deadline for accepting applications is 4:00pm, Thursday, October 16, 2008. Application forms may be picked up at the Road Dept. Equal Op- portunity Employer. SALES PERSON NEEDED Looking for individual with the right personality to meet and greet cus- tomers Must have good work ethics Will con- sider both experienced and inexperienced sales people in the car industry Good pay and excellent benefits. Need to work Saturday. Must apply in person at Noegels Auto Sales. Drug Free Work- place, 1018 N. Temple Ave.. Starke, FL. 72 Computers and Accessories COMPUTER NEW DELL 2-GIG XP PENTIUM 512 MB, 2 speakers, wireless mouse and keyboard. 17" LCD screen, many extras with 4yr warranty, $900 Call 386-496-0016. qe+t afL f-cAcafi [o(. Ca, Lean Ofn Our small class sizes and personalized attention make it easier for you to succeed in: ASSOCIATE OF SCIENCE IN ALLIED HEALTH SMedical Assisting * Medical Office Administration with a Track in Irsurance Billing and Coding ASSOCIATE OF SCIENCE IN BUSINESS ADMINISTRATION (ASBA) * Management ASSOCIATE OF SCIENCE SLegal Assisting/Paralegal * Mental Health Technology BACHELOR OF SCIENCE IN BUSINESS ADMINISTRATION (BSBA) * Management SProject Management DIPLOMA *Phlebotomy *Financial Aid For Those Who Qualify *Flexible Schedules with Day and Evening Classes *Lifetime Career Placement Assistance '. 110 WEST CALL ST., STARKE (904) 964-5764 Fax (904) 964-6905 Fastd, Fndy, Prfesslonal Help. Call this newspaper or (866)742-1373 for more details or visit: w w w florida - classifieds 3 3 5 wwwv,Gul fCoastSupply.co m. Business Opportunities ALL CASH CANDY ROUTE Do you earn $800 in a day? 30 Local Machines and CandyV $9.995. (X88)629-99(.8 B02000033. CALl. US We will not be undersold! Financial Freedom Ibr you, SI000idav returning phone calls. Not MI.M. No buying or selling products. Legal. moral and e t h i c a I . vw vw., mygoldplan.com.' bigmoney (888)276-8596. OWN A RECESSION Proof Business Established accounts vnth the average owner Earning o\ er S200K a year call 24'7 (866)622- 8892 Code X. Employment Services Post Ollice Nosw HIiring' Avg Pay S20 hr or 557K yr Including Federal IBencfits and (OT. Placed bh adSource not alil iated w USPS iho hires. Call (866)713 4492. Learn to O(perate a Ci'ane ior Bull Doier Illeai IF:uipnielLt Training N.ilional (.'crillfication FIinancial & Placement: Assistance, (iColg la School ol ( 'onlslUclion %%\ 1, i ]eFIN"\ \5 com L c c'..le FIT CNIF >.or <.ill (866)218-2763. Out of Area Classifieds Job1-air,-Otober-, NS- 17uut~ I. Help Wanted DRIVERS: CALL ASAP! $$ Sign-On Bonus $$ 35- 41cpm Earn over $1000 weekly! Excellent Benefits Need CDL-A & 3 mos recent OTR ( 8 7 7 )2 5 8 8 7 8 2. Guaranteed Weekly Settlement Check. Join Wil-Trans Lease Operator Program. Get the Benefits of Being a Lease Operatol without any of the Risk. (866))906-2982. Must be 23 Need a career.'? IBecome a Nationally Cerilledd Ialintig AC( Tech 15\\k Nallona.ll Acciediled progiaml. Get l'P\ OSIlA N(( I R ( eililicd I.ocal loh pliaccniL'ilt I'l ainciIll \\ ,4l'lblc n7Is 4-1s-1.1 4.' (ro ing Specialized Car Ilaul Div. 21 da\s out. 7 days home. Top Pay! FRIEE C'o. Benefits. Mil exp Iyr (DL.-A req. Min age 23. no felony. Call John WAiGG)ONlERS TRUCKING (912)571- 9668. i1 \RT YOUR ('ARI IFR. S.\R If IT RIGT' (t' ipa t i Spil.t.cd (.DI) Ih .llll II11 \\l..D I Mhll I .1 h11'111 .11ac 11 < h1 h1 (8O66)917-2778. Homes For Rent Venice New I and 2 bedroom homes from $900 per month in active lifestyle community with waterfront sites, resort amenities, on-site activities and events. (866)823-9860. Miscellaneous ATTEND COLLEGE ONLINE from Home. * Medical. *Business. * Paralegal. *Computers, * Criminal Justice. Job placement assistance. Computer available. fmiancial Aid if qualified. Call (866)858-2121. % %ww w.(CeiluraOnl ine.comn. AIRLINES AREI IIRIN[G Train lor high pai ing A latiion MI.inllt.'in, ce ('iareer I'AA .appmro cd piogriamn I 1 oflacI ilkd t iItlalificd - Inh p laI c m e nI t.'lll v e' l sta n 'e CALL Aviation Institute of Maintenance (888)349-5387. NOW AVAILABLE! 2008 POST OFFICE JOBS. $18- $20/HR. NO EXPERIENCE, PAID TRAINING, FED BENEFITS, VACATIONS. CALL (800)910-9941 TODAY! REF #FLO8. Real Estate). ST E AL MARRSIIFRONT s.crliice!!! Drop MY 0wner dead gorgeous Marshfront. My neighbor paid $389,900. I'll sell mine for less than the bank repo's. My six figure loss is your gain. $229,900. Call: (888)306-4734.. TENNESSEE LAND RUSH! I+acre to 2acre homesites,. wood. views. Starting at $59,900. Tenn River & Nick-a-Jack view tracts now a allable! Retirement guide rales this, area #2 is U.S. places to retire. Low cost of living, no impact fee. (330)699- 2741 or (866)550-5263, Ask About Mini Vacation! NC MOUNTAINS 2+ acres with great view, very private, big trees, waterfalls & large public lake nearby, 549,500 call now (866)789-8535. *LOW $ DOWN HOMES* Gov't & Bank Repos! Little $ Down! Call Now! (800)861-5890 Sporing Goods GUN SHOW, OCTOBER 4-5. SAT. 9-5 & SUN. 10- 5. ATLANTA, GA ATLANTA EXPO CENTER. (3650 JONESBORO RD). EXIT # 239 OFF 1-75 OVER 1200 TABLES! BUY- SELL-TRADE. INFO: ( 5 6 3 ) 9 2 7 8 1 7 6 . NATIONAL ARMS StOW. dSj SERVICE Tired of Banks & Loan Centers saying NO, WE CAN HELP Good credit Bad credit WE CAN HELP Bankruptcy WE CAN HELP We offer many types of loans: business loans home & trailer loans personal loans car loans We say YES when others say NO! Call toll free 1-877-367-0130 CNAs Parklands Rehab & Nursing Seeks new additions to our growing family Full/Part-Time Openings 3-11 + 11-7 COMPETITIVE SALARY!!! STRONG BENEFITS DRUG/BCKGRND CHK REQ. CALL & APPLY TODAY! Call 800-442-1353 Fax 877-571-1952 [email protected] 1000 S.W. 1 6th Ave. Gainesville, FL SOUTHERN PROFESSIONAL FINANCIAL, LLC. Ask About The New $7,500 First Time Homebuyer Credit! CALL TODAY., It's Time To Make A Move! Earn your degree and prepare for a great career by contacting us at: 1.877.455.0092 | www. MyCtyCollege.com 2400 S.W. 13th Street Gainesville, 32608 ,, LA--W . Pan.e 8A TIMES October 2, 2008 BY TERESA STONE IRWIN Times Staff Writer Ivory Hunter begins his day ironing clothes and preparing breakfast for his family. From there, he heads to his day job as wellness director at the Reception and Medical Center, the Department of Corrections institution located in Lake Butler. Known by all as Coach Hunter, he teaches wellness, physical fitness, care and prevention, and smoking cessation to youthful offenders in the mornings.and the same programs to adult offenders in the afternoons. Coach Hunter said the Wellness Program serves to educate inmates regarding appropriate lifestyle choices, getting in shape. both mentally and physically and proper hygiene. Hunter also provides individual fitness programs to keep inmates, active. Fitness activities for inmates include things like basketball, softball, flag football, "'volleyball, track and weight and strength training, but such 'privileges must be earned. "Inmates have to be cleared with nowrite-upsordisciplinary action on their record to get recreational privileges," Hunter said. TOMB:. Continued from p. 5A are like life-size board games that children and parents can play together. Prizes will be awarded for each of the games played. "We've worked hard to make sure there are activities and games for little ones, teenagers and for parents," said Wade, "and we are also focusing on keeping things reasonably priced." The event will also feature a variety of entertainment options. For a small admission price, moonwalks will be made available for children too young to enjoy being frightened on the spook trail. Kirby Pharmacy is also sponsoring Treat Street, where little ones can trick or treat in a scaled-down, fright-free trail of their own. A western- themed jail-and-bail will allow everyone to lock up their friends and family and a hay ride will ferry visitors from the carnival to the haunted trail. Of course, the centerpiece of the entertainment will be the Tombstone Ghost Town and Haunted Trail. A labor of love for Pharmacist Keith Kirby, preparations for 'the spook trail have been underway for several months. Kirby designed the western ghost town theme and layout and has been working on this year's version of the trail for more than a year, adding to the movie back-lot-type ghost town originally opened to the public two years ago. Kirby said, "The new version of the'spook trail brings back all of.the old elements and adds even more. We have a haunted hotel, saloon, jail, general store, graveyard, and even an Indian village." The trail also features a gold mine that twists and turns like a maze. RMC is one of only seven prisons in Florida with an indoor gymnasium. It is also the only prison in Florida with a dialysis center for those with special needs. "We have exercise programs for all inmates, including those who are wheelchair bound," Hunter said. Once cleared to participate, Hunter and his wellness center staff try to motivate inmates by first writing up an individual program involving flexibility, bodybuilding, endurance, weightlifting and basic use of the wellness center equipment. The program keeps inmates with health risks from becoming idle and gives them something to focus on. Hunter said one inmate who started out at 302 pounds dropped to 187 pounds in one year on the program. "We've also produced a few good athletes and a few even went into bodybuilding. We've also helped several inmates stop smoking," Hunter said. Hunter began his career in corrections as a social worker at the RMC Hospital in 1982.- Several months later, he was working as the wellness director at New River East Correctional Institution and' later returned as wellness director at RMC' in 1995. "I love my job," said Hunter. "My motivation comes from home." Hunter has been married to the light of his life, Joan Hunter, for 24 years and they have four children: Joanna, 23, currently attending FSU for a degree in special education, Michelle, 19, a sophomore studying hospitality at Floiola A&M, Christina, a 15-year-ol ninth grader at Union County High School and Joey, 21, who resides in Gainesville. Ask Hunter and he'll tell you he considers his assistant at the wellness center, Officer Steve Kelly, to be family as well. Not only do the men work together at RMC, but during football season, they both coach corner backs for the UCHS Tigers team. "Coach Kelly is my right- hand man. He does a great job, and at work, I consider him my equal. I continue to learn for him all the time," Hunter said. No doubt, if you see one of them, you can rest-assured, the other's nearby. Except maybe when it comes to church. Kelly serves as a deacon at Victory Christian Center in Lake Butler and Hunter attends the Church of God by Faith with his family in Starke. RMC Warden Tim Cannon and Assistant Warden Chris Southerland allowed Hunter and Kelly to mentor students at Lake Butler Middle School, Union County High School, the Outpost alternative school and the juvenile detention center. "My superiors and the entire staff at RMC are A-I," Hunter said. "The closeness and togetherness experienced there is the best." At the schools, Hunter and Kelly also talk with students who are given in-school suspension and offer any student the opportunity to come talk to one of them if they need help. The men do what they can to-encourage kids to stay out of trouble and respect their teachers. This, Hunter said, is their way of giving back to kids. If you watch Coach Hunter on the sidelines at a football game, he's not there just to coach the players, but he's mentoring and motivating them as well. Prior to coaching at UCHS, he coached at Santa Fe High School for three years and the Mighty Mites for another two years. He was also the one-time defensive coordinator for the no longer existent semi-pro Enforcers football team that used to be in Union County. Hunter graduated from Leon High School in Tallahassee where he was inducted into his alma mater's hall of fame as a member of the 1974 state championship football team where he played defensive corner back. A prospective scholarship athlete, he made a verbal commitment to follow his dream and attend Georgia Tech. AIn 1974, it was his senior year in high school and Hunter lost his mother, Savannah Hunter, to-an aneurism. Hunter decided to stay close to home so he could help his father, Samuel A. Hunter Jr., raise Hunter's two younger brothers, Leroy and James. He received a scholarship to play football at FSU. Then, in 1976, his father needed a kidney operation and Hunter was a good match for a donor. Hunter recalls, "But my father wouldn't let me do it. He said he didn't want .anything to stop me'from continuing with my football and becoming the first one in my family to graduate from college.-- One year later, Hunter's father was on the side of a road with a gas can -putting fuel into his uncle's car when he was struck by another motorist and killed. Leroy and James were in high school and were now being raised by their grandparents. During his senior year at FSU, Hunter broke his leg playing football and was granted an additional year of playtime. Hunter holds the record as FSU's only team captain for two consecutive years, 1978 and 1979. Right before he was to graduate from college, his grandfather, Samuel Hunter Sr., 'died from heart complications. A year after receiving his degrees in communications and in coaching, his grandmother, Willie Mae Hunter, who had diabetes that led to blindness and her being wheelchair- bound, passed away. Through it all, Hunter was able to overcome his many tragedies and press on. He didn't have a parent to go home to at night and tell how practice went or what happened at school that day. He didn't have his mom or dad sitting in him with a positive influence in his life. He credits Pastor James McKnight and Pastor Byron Ramseur at Church of God by Faith for providing him with spiritual motivation. His wife holds a special place, though. "Joan is my-backbone. What I want is to live to see our - kids become-successful in life," he said. Ivory Joe Hunter "The mine was a big hit two years ago," said Kirby, "and the new maze is more than twice as long." Tombstone was not able to open last year, but in 2006, there were more than 1,000 visitors on trick-or-treat night alone. Families can participate in the carnival game and admission into the Tombstone spook trail for $2 per person or $5 for a one-time all night pass. "Come hungry," said Wade, "we will have plenty of good things to eat." The Tombstone Ghost Town and Haunted Trail will be open at dusk on Friday and Saturday, Oct. 24-25 and Friday and Saturday, Oct. 31-Nov. 1. Carnival booths will be open Nov. 1, from 6-10 p.m., with the Spook Trail staying open until late that evening. "I wouldn't be surprised if, once the word gets out, that you don't see people coming over from surrounding communities next year," added Wade. the stands at his games. "So when I talk to these kids who are hurt or feel left out because -heir parents don't attend their practices or their games, I really do know how they feel," Hunter said. Hunter praises two of his in-laws, who he affectionately refers to as Miss Karen and Miss Wilson for subbing for his own parents and providing Worship if theHoos e os the rdtk.. Somewhere this week! The churches and businesses listed below urge you to attend the church of your choice! JACKSON BUILDING SUPPLY Custo Where Quality & Service e Oare a Family Tradition. Automotive Shop: 386-496-8l2I0 Fa:38-9620 - Business & Service Directory - Building Supply Catering Services Dentistry aJ COUNTRY CATERERS I V My Dentist. Jackson We Cater AllEvns... Large or Small! BUILDING SUPPLY WILL COOK ON-SITE We will match any Gregory Allen, D.M.D. P.A. "Where Quality & Service Competitors priceon James C. Brummett, D.M.D. "IThe same product. are a Family Tradition" WE RENT: Tents, Tables & Chairs i STAR_..WEALSODO: Cosmetic, Restorative, -US 3LS..*-S tE Waterslides, Bounce Houses, and General Dentistry 964-6078 Giant Slides, Rock Wall, Cotton Candy, Shaved Ice, 145 SW 6TH AVE Popcorn & many Games! 255 SW Main Blvd LAKE BUTLER PICK UP OR DELIVERY 255 SW Main Blvd IS AVAILABLE! Lake City, FL 32025 496-3079 1-800-940-3728 S352-473-3728 386-752-2480 r44 = 1 Handvman Services Mike's Handyman Services *Carpentry SPainting *Plumbing *Electrical SMobile Home j"-~ pP~mh. ~r .--i 1~-. ~fj~ . I-':~i L' T', Repair * And Much More! Home (352) 473-225 Cell (352) 745-0614 F Michael Home Serving the Lake Region Heating/Ar Conditioning BERTIE Heating & Air Conditioning, Inc. 352331-2005 /Ir '-. Prevenative Maintenance Pays... Schedule your Summer Air Conditioning and Check! 1730 NE 23rd Ave Gainesville, FL Painting O\ er Parkers 2\oJ Painting Custom Work Painting *** FREE ESTIMATES *** CA ^^N.Color Matching S, Cabinet Glazing -*Minor Sheetrock Repair S Pressure Washing I' -and Much More! REAT IPEAS FOR YOUR 0OME... CALL US TOPAY! Timothy & Elisa Parker 352-481-0782 Weight Loss LEr I A * KEYSTONE CALLTODAY 352-47-8808904.94.630 GENESIS CHRISTIAN ACADEMY OPEN ENROLLMENT for the 2008-09 School Year Kthru 12 We use the Ace Curriculum S-' ( We accept the McKay & Hero S Scholarships 386-496-2515 Located on SSR 121 in Worthington Springs JSN F ADVEKIIiNG NETWORKS OF FLORIDA CiJa.ica DIU.SDiy IMetro Uiaiy The key to advertising success r 'r 1-866-742-1373 Hunter, Allmaround mentor ,_ ___u _I lr i:.~I~_~ .. _~-;--~- -:.---- --..-;.--- .1.' ~...~~1~5~1. -'-i:L.I'-"-~.~-l. ... .- Section B: Thursday, October 2, 2008 Regional News News from Bradford County, Union County and the Lake Region area Santa Fe wants to create more scholarship opportunities John and Julee Tinsler used their scholarships. as springboards to careers in Bradford County schools BY CLIFF SM ELLEY Telegraph Staff Writer There are 12 endowed scholarships to Santa Fe College made available specifically to Bradford County students (another three are in the process of being established), but many qualified students are turned away because'the money isn't there. With that in mind, the college's endowment corporation is having a fundraising drive in an effort to better serve more students. Cheryl Canova, the director of the Santa Fe College Andrews Center in Starke, said 40 of the Bradf6rd County- specific scholarships, including renewals, were awarded this year. However, 20-25 students had to be turned away "simply because we did not have the funds." Several donations have been made recently. The Starke Rotary and Kiwanis clubs donated $25,000 and $5,000, respectively, to the endowed scholarships in their names, while $16500 was received from a private donor for the Becky Reddish Memorial Scholarship. If you would like to help or learn more about the scholarships available, please call Canova at (904) 964-5382 or call the' Santa Fe College Endowment- Corporation at (352) 395-5200. You may also visit the Web site. Donations will all be matched by the state eventually and do not have to be cash. People can make general donations or donate toward specific scholarships. --Two ~ isplheLi '- of -the Bradford Couni\ public school system can attest to the ,importance 'of scholarships. John Tinsler; -a--sixth-grade science teacher at Bradford Middle School, and his wife, Julee Tinsler, the district's finance director, both graduated from Bradford High School-in the I1980s and-both received scholarships to Santa Fe. John said the understanding in his,family is that he wouldd attend college, but the family had no money set aside.for that purpose. Therefore, John said "it was just truly a blessing" when he received the news from Jim Duncan, the principal of Bradford High School at the time, that he had received a scholarship. "It was great because I really had no financial planning as to how I was going to pay for college when it came time to go to college," John said. Julee, who graduated third in her class, was intent on going to college, too. Her family would've found a way to make it happen if she had not Jacksonville man dies in Union crash ...A 37-year-old Jacksonville man died Sept. 25 when his car collided with a truck on S.R. 100.in Union County. Tobie Cornel Young was westbound on S.R. 100 when a Mack truck pulled from Northwest 124h Avenue in front of Young's 1994 Toyota, according to FHP Trooper C.A. Scott. The Toyota Camry hit the rear of the truck which had reportedly been hauling lumber. Young was .pronounced dead at the scene of the 6:40 a.m. crash, Trooper Scott said. The truck's driver, Albert J. Olgctree, 59, of Lake Butler was not injured in the accident, Trooper Scott said, To get what you want, STOP doing what isn't working. Dennis Weaver received the scholarships she did, but the scholarships did make it a lot easier, she said. For example, she did not have to rely on a job to get her through school. She did work, but just 20 hours a week. "That was a huge help," Julee said. "If got me through school a whole lot faster." John laughed when he thought about the fact that he worked 40 hours a week and took a full load of classes. "Looking back, I wouldn't suggest anyone do that ever," he said. John remembered how Santa Fe classes were still being held at Bradford High School when he began college, but the Andrews Center did eventually open while he was still a See SANTA FE, p. 2B BECK CHIVROLET ~ic 4oa T '05 CHEVY CAVALIER SGreat First Ride!............... 4995 '04 FORD MUSTANG Low Miles, 31k.................$7,995 '04 CHEVY TRACKER ZR2, 4x4, 41,000 Miles........ '05 PONTIAC VIBE 34mpg & Fun................ 9995 '02 NISSAN FRONTIER Crew Cab, Low Miles......... ,9$ 95 '07 CHEVY SILVERADO Z71 w/Lift.................... $l1 995 To '03 CHEVY MALIBU Low Miles, NICE!............... 99/mo '02OLDS ALERO 40 kMiles & Clean!.......... 149/mo '06DODGE CARAVAN Carry The Whole Family!.. 1.59*/mo '07 CHEVY COBALT 32mpg, 4-door, Automatic, .$l99*/mo '05CADILLAC DTS Like New! Luxury!...........299*/mo '08 CHEVY SILVERADO Crew, 4-door, Full Size...... 299/mo *Payments with $3500.00 down plus tax, tag, and fees. w.a.c. Call Us For PreApproval 1-866-852-1834 ^^Vrie I I ^V JI Ben Mitts fany Jack Sales Makis An4 ~ifer Aie BECK CHEVROLET Hwy 301 N. Starke, FL Certified umM81 Vas BUY WITH COMPLETE CONFIDENCE BEST SELECTION OF PRE-OWNED VFHICL' S IN NORTH CENTRAL F nRInlAI * GM CERTIFIED * 101 POINT INSPECTION FREE BUMPER TO BUMPER 3mos/3,000 mi. Guarantee * 12mos/12,000 MILE POWERTRAIN GUARANTEE 1rrI"I. L.&lliLL-i,1Lr VEHICLE HISTORY RGPORTC ON EVERY VEHICLE IN STOCK! I --I, II V-- nV I ------- i% F VL8,a FL1 II RAW6JiQ8 ---- -~~-- - -- r PWAA I~B ~ fr Page 2B TELEGRAPH, TIMES & MONITOR--B-SECTION Oct. 2, 2008 Bradford Middle School sixth-grade science teacher John Tinsler assists students Ronda McCormick and Corey Robinson with a classroom exercise. Tinsler and his wife, Julee, '*. were both recipients of Santa Fe College scholarships coming out of high school. SANTA FE Continued from page 1 B student. From Santa Fe, he went to the University of North Florida and earned a bachelor's degree in elementary education. Prior to graduation in August, though, John said he and other students spent May-June looking for jobs. The UNF professors had the students scared that jobs Extension service offers small- ruminant workshops .Agents from Bradford and Alachua County have teamed together to provide two small- ruminant workshops this month: Tuesday, Oct. 14, at the Bradford County Extension Office and Monday, Oct. 27, at the Alachua County Extension Office. The workshops are scheduled to begin at 5:30 p.m. with a light supper. The following topics will then be covered: internal parasite Kirk Cameron in IflLmrn1bu Fri 7:00, 9:10 Sat, 9:10 Sun, 4:45, 7:00 Wed-Thurs, 7:15 would be hard to come by, John said. However, John did find a job-teaching fourth grade in Baker County. The fact that a "Tornado" was teaching "Wildcats" led to some good- natured ribbing from family. ,John taught fourth grade in Baker County. He thoroughly enjoyed the job and the people he worked with, but the commute wore on him. He wanted to be closer to home and his and Juice's son, Noah. control, toxic plant identification and control, and winter annual planting decisions. One CEU will be available in one of the following categories: core, private applicator, and research and demonstration. Reservations for these Now Showing Shia Lebeouf in l^Z! Fri, 8:00 Sat, 5:30, 8:00 Sun, 4:50, 7:05 Wed-Thurs, 7:30 NOW PLAYING! FLORIDA TWIN CALL FOR SHOWTIMES 101 West Call Street (904) 964-5451 GREAT WATER! At 1/2 The Cost I MI E" s G R E AT WT E R SI.- And Save Money Too! i The "easy" way to healthier water * A *gg // 90 Day Trials $995 PER MO. Rent With Option to Buy Non-Electric & Electric Systems Available ( [ate6 r. 4 Kinetico Since 1946 horno wate(2r y s K.1 st 10 37 w .cert m.c After nine years in Baker County, John returned to Bradford County. He was hired at Southside Elementary School in Starke. He spent two years there teaching first and fourth grades. From Southside, John went to Bradford Middle School. He loves it there. "I think working at the middle school is wonderful," John said. Julee enjoys the fact that her husband teaches science. She working days prior to the program in order for proper consideration to be given to the request. FloridaWorks offers corrections officer exam Those interested in becoming Department of Cdffections officers no longer have to drive to Lake City or Jacksonville to take the required Florida Basic Abilities Test as FloridaWorks in Starke now offers the test. Please call Susan or Pam at (904) 964-5278 to schedule an appointment, or complete a registration form online at. $ 00 Hair 00 Cuts Hairy Business Men Women -Children WALK-INS WELCOME NO WAITING Now Hiring For 2nd Location Next to Auto Zone on S. Walnut St. Starke, FL 904-964-3338 Mon-Sat 10-5 enjoys science and math, hut when she was a student at Bradford High School, she said she and "everybody else expected me to go into some type of medical field." However, Julee took an accounting class in college and fell in love with it. "There's something satisfying about doing a double underline and saying, 'This is the answer,'" Julee said. Accounting made sense, too, since Julee's mother worked for the city of Starke as city clerk and finance director. "I had been exposed to accounting basically my entire life," Julee said. Julee went to the University of Florida after Santa Fe. She worked for a CPA firm before returning to school to earn her bachelor's degree so she could sit for the CPA exam. She then worked for PRIDE (Prison Rehabilitative Industries and Diversified Enterprises), Children's Home Society, another CPA firm, the Clay County Clerk of the Court's office and the Clay County School Board before being Open c^ Sunday 12 Nc ... , ^.4 ~ '3 r .* -_ JM 1165 Wilson Road, S 3 BR/ 2 BA SR 16 West 1 Acre Fireplace Stainless Steel Appliances Jacuzzi Tub --u9-Reuee00- - $229,000 VICTORY THE NEW AMERICAN MOTORCYCLt Who says you can't buy an i A TITUDE? . 2008 ViclolrV Vegas Jackput r-iKe. ~~3s4S a * hired in her current position in Bradford County. It's nice to have a job in Bradford County, Julee said. She, like John, had done the commute and was ready to be closer to home. "I like working here," she said. "I enjoy my job." Both Juice and John will tell you they are glad they went to Santa Fe before attending larger institutions. Julee found the staff at Santa Fe to be caring. She was injured in a car accident and had to wear a neck brace toward the end of a semester, but the professors bent over backward, she said, to assist her in any way necessary so she could take her finals. "They were so accommodating," Julee said. John keeps ties with Santa Fe. He enlists the help of Santa Fe professor Van Dubolsky for his "Night Under the Stars" astronomy project and is also a member of the college's Kika Silva Pla Planetarium advisory committee. Because of their experiences, John and Julee Julee Tinsler would love for more Bradford County students to get the chance to attend Santa Fe. The generosity of others in helping to increase available scholarship funds will go a long way toward making that happen. "We need to make (attending college) as affordable as possible," Julee said. I Housi , October 5th ion to 3 PM 4 BR/ 2.5 BA City of Starke Security Syster Sprinkler Syste Fireplace S Stainless Stee M Appliances Just Reduced: tarke $229,000 m I )_ \I ow i FREE gas for a Year! With any new Victory motorcycle purchase. (offer expires October 13, 2008) PLUS 5.99% on 2008 models . 2.99% on 2007 models *Offers are subject to credit approval Q POLRRIS Of Gainesvilleff r N' North Central Florida's , Largest Powersports & Marine Dealer -:;" .'i 12556 NWHWY 441 .. &,r'1: (6 miles north of HWY Patrol) /. Gainesville (386) 418-4244..., B ^ tPolarisofGainesville.com w ~ -~ -~ Florida Twin Theatre-, (All '.aBi 'i5. BLfore 6 p.m 9 *9--5451 CLOSED MON & TUES. Visit us on-line at I* I r _ Take NW 203rd off SR 16 20334 NW 57th Place I I ." " Oct. 2, 2008 TELEGRAPH, TIMES & MONITOR--B-SECTION Page 3B Whitney Shannon and James Houlin Shannon, Houlin to wed Oct. 11 Matt and Jenny Shannon of Lake Butler announce the engagement of their daughter, Whitney Lynn Shannon, to James Patrick Houlin, son of William and Patricia Houlin of Goose Creek, S.C. The couple will exchange vows in the LDS Salt Lake City, Utah Temple, Saturday Oct. 11, 2008. A reception will honor the newlyweds following the marriage in the Salt Lake City area and also on Oct. 18, 2008, at the Shannon residence. Attending the bride will be matron of honor Jennifer Lake Wilson maid of honor; Michaela Shannon and Kimber Shannon, Shanti Rose and Kristen Vibber attendants. Best men attending the groom will be Jon Banks and Geoffrey Clarke. Gr6omsmen will be Mathew Houlin, Ryan Shannon and Spencer Shannon. Honored guests will ,bq their friends and family. The bride-elect is a graduate of Union County High School, LDS seminary and Santa Fe College. She is currently in the elementary education program at Brigham Young University on full scholarship. The groom-elect is a graduate of Miami Killiam Senior High and LDS seminary in Miami. He served a two-year LDS church mission in Rome, Italy. He is currently attending Brigham Young University, studying Italian and political science and will graduate in the spring of 2009U Upon graduation, the groa-elect will- begin a- graduate program in accounting. Following the wedding, the couple will make their home in Orem, Utah. April Adamson and Mark A. Mitola Adamson, Mitola to wed Oct. 25 April Ann Adamson, daughter of Michael and Janet Adamson, all of Starke, and Mark Anthony Mitola, son of Paul and Betty Mitola, all of Hawthorne, ,announce their engagement and upcoming marriage. The bride-elect is a member of Madison Street Baptist Church and is a senior at Liberty University. The groom-elect is also a member of Madison Street Baptist Church and is a student at the University of Florida. The wedding will be an event of October 25, 2008, at Bayless Highway Church. with a reception to follow. " *.A - \- "' , ^i^M Dale L. Jones Dale Jones crowned UC HC king Dale L. Jones, 8, of Starke, son of Steven and Karen Jones, was crowned Union County Mighty Mite Gold Homecoming King September 27,2008. Dale is the grandson df John and Nadine Jones of Lawtey and Dr. Lyle and Jane Ishol of Cumming, Ga. and is the great-grandson of Mrs. Coy Cruce of Lawtey. Dale is a third grade student at Lake Butler Elementary School. Starling family reunion set The Starling family reunion will be held at the Lake Butler Community Center Saturday, Oct. 4, 2008, from 10 a.m. to 2 p.m. Descendants of Joe E. and Alma Starling, plus defendants of Jimmy and Dolly Starling, are welcome. Please bring a covered dish for the noon meal. Bring photographs, family trees and tall tales. For more information, contact Loretta Starling Draper, at 904-772-9309 or ljsdraper@comcast. net. BHS Class of '78 planning class reunion The Bradford High School class of 1978 is planning its 30"' reunion. Dates are Oct. 31 and Nov. I to [email protected]. Learn more about Haven Hospice Oct. 9 in Keystone Those interested in learning about Haven Hospice's available services and volunteering opportunities are invited to the city hall in Keystone Heights Thursday, Oct. 9, from 10 a.m. until 11:30 a.m. The city hall is located at 555 Lawrence Blvd. To register, please call Bettye Zowarka at (352) 473- 3069 or Bonna Yates at (352) 473-7918. Ah, but a man's reach should exceed his grasp, or what's a heaven for? Robert Browning Woman's Club plan events The Starke Woman's Club luncheon will be held Wednesday, Oct. 8, with social hour beginning a I1:30 and lunch at 12 noon. JoAnn Rowe will present the Hugh O'Brien award to the 2008 recipient during the program. Please R.S.V.P. for the luncheon. Volunteers will help register Kids for Coats at the club house Saturday, Oct. I 8 a.m. to 3 p.m. Get your tickets for a card party and luncheon planned for Oct. 20 at the Woman's Club from JoAnn Rowe. Sgt. Rimes returns following tour Army National Guard Sgt. Terrill M. Rimes has returned to the 2361' Military Police Company, San Antonio, Texas, he Sauce g crnling bodies assume full so\ erein powers to go\ern the peoIples of Iraq. Members from all branches of the IU.S. military and multinational forces are also assisting in rebuilding Iraq's iuthoritx and responsibility in defending and prescr\iing Iraq's so\ ereignt and independence as a democracy. Rimes is a military police team leader with I I years of military service. He is the son of I.inda K..l Jones of .ake Butler and Thomas E. Rimes of ()Old Tox\wn. The sergeant is a 1997 graduate, of Union mountt ) High School. My most brilliant achievement was my ability to be able to persuade my wife to marry me. Winston Churchill * Headaches Dr. Virgil A. Berry CHIROPRACTIC * Neck and Back Pain PHYSICIAN 601 E. Call St. Hwy. 230, Starke 964-8018 Boss is Back! 7/ He Sings the Blues He Cooks the Gumbo SCome See Him at He Plays the Guitar SCoWll He Feeds the Masses 1Oct10-- m -- He Makes his SOc Very Own Hot Sauce S,,, ., I o't MiDSHitM lll!"~hat's why they call him... 1' "'" h- THE 5AUCc BOSS! Mini Fastrak, 16hp Honda.......................$3,795 Fastrak Super Duty, 25hp Kawasaki.....$6,095 Super Z, 27hp Kohler................................ $8,495 o Monthly Payments for 6 Months & No Interest If Paid Within 6 Months* QlVoid~aputohat oIlOOol ralH.1 WHIUtlEREIUPMEN ndl fly Iop:mbl pd~, 1. henMywHAuth f k f r dt aU, .I4 5Il) SPumo f l ppi ti q f 1M W APR, l2.1 reyfflmnt ppl Il HUSXILER , , , 9-5 Sat. 9-2 (90) 94-238 ..lzm'nbeqipmet~ol1163US30JS "A True Community Bank" Community State Bank makes it simple... We offer Loans to fit your Personal & Financial needs. HOME IMPROVEMENT . MORTGAGE LOANS CAR LOANS (New & Used) l PERSONAL LOANS ' RECREATIONAL LOANS At Community Bank your Deposits are Insured up 1, to $100,000 per person per bank, with Retirement I It.- Vki s onth Wb t:w- Comuit-ttea-flco ST/ RKE 811 S Walnult St 904-964-7830 CCommunity State Bank "The Same Yesterday...Today and Tomorrow!" II.NME 1)IC0 LAKE BUTLER 255 SE 6th St. 386-496-3333 -i~b------ ---- eCOWBOy , ' *^^??^7??c^7^> Rau~?^^ Your Satellite Sales Additional Cable Runs and Services IEdW RK &Phone Lines NETWORK %ILl-V-"'Authorized Local Dealer Paul Jones L.L.C. Office: (904) 364-6612 Cell: (904) 622-6492 Fax: (904) 964-2447 Email: gjones01 @embarqmail.com Hustle3G6" Hustler 60" Hustler4 accounts insured up to $250,000. Internet Banking & On-Line Bill Pay I, I a P ;m~j~g~5CiISSSmJSS~)~~;~~ I 1----~ d -1 a a dS a a T: Ed irtorial0 pinion Thursday, October 2, 2008 Page 4B Powering the city of Starke Part one of three Some time ago, Publix supermarkets sent a few residences were lighted with home- representatives into Starke to determine the owned, 32-volt power plants with banks of plausibility of establishing a store here. Their batteries, which were expensive to own and report was positive, saying a local store would operate, limiting the systems to individuals draw patrons from Waldo, Keystone Heights, and families with disposable incomes. Power Lake Butler and smaller communities in company executives (in 1935) claimed their Bradford County. Established competition firms provided electricity to essentially all in Starke was not a deterrent to the largest farms requiring electric power for "major employee-owned supermarket chain in the farm operations." The statement was patently world, but the Starke electric rates proved to untrue and executives were shown to be in be the stumbling block, and the city lost an error when President Franklin D. Roosevelt outstanding business to another community. signed into law the Rural Electrification If this information is correct, then it's time Administration Act, establishing rural for city officials and residents to reassess electric cooperatives. Prior to establishing the ownership of the electric utility with an the REA, the government had passed, the in-depth study to bring city electric rates in Tennessee Valley Authority Act, authorizing competition with private enterprise, the hydroelectric facility to serve "farms and A city that does not progress will either small villages that are not otherwise supplied stagnate or regress. There is no way for a with electricity at reasonable rates." (National community to stand still comimerci~ily, and qural Electric Cooperative Association) city fathers must-be-eognizarit of the need to In late 1937, Clay Electric Cooperative make commercial utility rates competitive was formed and Keystone Heights chosen or face a declining business community. for its general office and generating plant, The loss of Publix is disconcerting to loyal providing electricity for 14 Central Florida Publix customers who drive to Gainesville or counties. A frenzy of house wiring began in Middleburg to shop for groceries. 1937 with exposed wiring in existing homes, The last half of the 20th century saw a each having two outlets, one each for a radio watershed change in grocery stores, from and toaster, and homeowners paying the independently operated or small chain stores minimum $1.25 per month for electricity. to the mammoth big-box stores of today. Within a short time, Florida boasted--17 - Electricity bills in those. stores may exceed electric co-osstatlewidc tviith Clay Electric $50,000 a month. ,, number four in size. The REA program has Starke is far from a Johngy-.come-la'tly been one of the more successful government - irnproviditfg street Trghting with electricity, ventures, self-supporting and meeting needs having begun with halting steps, according of millions of Americans living outside urban to the 1979 centennial issue of the Telegraph. areas. The first electric-generating equipment Florida Power and Light, based in South failed because of the loud noise level, and Florida, Florida Power Corporation, based in was discontinued. In August 1899, a Starke St. Petersburg, and a Pensacola distributor, committee inspected new equipment in did a fantastic job in providing electricity Jacksonville and recommended its purchase to to urban areas throughout the state with replace the original machinery. The purchase generating plants strategically located to was made and the generators installed, but serve population centers. Electricity, unlike breakdowns continued to plague the system, fluids, cannot be transmitted over long and three years later (1902) Starke voters distances, requiring widespread distribution approved an $18,000 bond issue for sewer of generators, but small generating plants and electricity. Realizing the initial amount are inefficient, resulting in expensive would be insufficient, Starke Commissioners electricity. The big players, Florida Power appealed to Judge A.V. Long to increase the- and Light, Gulf Power, Progress Energy and amount another $6,000 without an additional the member-owned co-ops, blanket Florida vote. The wood-burning equipment was with transmission lines and sell the surplus. installed and operational in (or about) 1904 Several cities, including Tampa, Gainesville, and operated with wood as a fuel until oil Orlando and Jacksonville, own generating burners were installed in 1916. 1 /' fh'doilties*a rtt!sorrtime sell s ihplusdhei-gy. STh centennial issue included a story h The .neAiessity Tor hig lfpifoductibfi .to "Utilities have always been a problem," produce electricity -economically forced which outlined the history of generating smaller facilities to close. All seventeen electricity by the city, and the problems Florida co-ops discontinued producing encountered. Although the city no longer electricity (Clay Electric quit in 1968), but operates generators, it does own a modern each retained generators at-the-ready to come electrical distribution system that makes online at peak times or other emergencies. a profit and serves to reduce city taxes, Starke retained the capacity, and actually according to city officials. produced electricity into the late 1990s, but It's true, the electric department transfers has now sold the generators and razed the some $900,000 per year to the general fund old building. Clay Electric has an abandoned based on existing city electric rates, but Starke building in Worthington Springs where it just completed a $4 million dollar upgrade, to once operated a generator. We are witnesses be amortized over a 10-year period, which to the end of an era. amounts to paying $400,000 per year plus fThis review of the Starke electrical system interest for 10 years, reducing the net income isn'taclarioncall.tosell its distribution system, of the facility during the payback period, but to enlighten home and business owners The history of generating electricity by the to the complexities that accompany city- city is replete with continually buying and ...owneduitilities, and to put city commissioners installing equipment. Early ._, the eity -ften on notice of displeasure in the commercial -boughrsee-'idhariii generators in an attempt community with rates that are unacceptably.._ to provide electricity at affordable rates, high when compared to other providers in' and paid a price in breakdowns and repairs; the area. but each acquisition saw the city improve How high are commercial rates, and how in the quality of generating equipment and do Starke's rates compare with Clay Electric, increasing generating capacity. Prior to whose power lines provide electricity adjacent 1925, when Florida Power and Light, was to the city limits, and in a few cases, to ..founded, there were no large distributors customers within Starke city limits? In a and each city had to provide its own succeeding article, we will look at figures generating equipment and distribution. In in reference to the cost of electricity posted a few years, Florida was crisscrossed with by the .city of Starke .and Clay Electric power lines serving urban areas of thy,.state," Co-op, and review comments by responsible - bMLyillages and farms-remained neglected, participants in the drama. relegated to kerosene or carbide lights and By Buster Rahn,; manually operated equipment. Stores and Telegraph Editorialist Vote 'No' on Amendment 2 Dear Editor: Many people think, and, it seems to be pushed, that Amendment 2 will prevent gay marriage in Florida. Well, look up Florida Statute 741.212(1). Gay marriage is already banned in the state of Florida. What Amendment 2 will do is take away benefits from all unmarried Floridians, regardless of orientation. Amendment 2 will block civil unions, domestic partnerships and repeal existing protections and benefits afforded to millions of Floridians. Many seniors in Florida form domestic partnerships after being widowed, rather than remarry, to keep from losing essential benefits.. Amendment 2 will take these benefits away from them. It's true that same-sex couples may get a few rights under domestic partnership, but it's not marriage. So, who cares? They are people, too, and should have some rights towards benefits. Backers of Amendment 2 want to make people believe that it is a "gay thing" whereas it will do more harm to non-g Florida than people the figures on seni domestic partners really want to tak rights to their benef their lives earning? Don't be scared "Yes" on Amendm base your vote on oi the facts. " j i gay citizens of because of the Friday Festival. e realize. Get I firmly believe we should ors who form continue to do more events in ips. Do you downtown Starke and make it the ;e away their fun, warm place it used to be. fits they spent Let's encourage the city of Starke, Main Street Starke Inc. Into voting and the chamber of commerce to lent 2. Don't continue its investment in pinion. Get all downtown Starke, giving our children, our families, and our David Stegall neighbors something positive to talk about for years to come. SMark Santiago Starke rriaay Fest is a hit for business Dear Editor: Last Friday was the first time my business, The House, had been opened for a Friday Fest. I wanted to publicly thank the Main Street Starke Manager, Kim Skidmore, as well as the entire staff at the chamber of commerce, Ron, Deanna, Pam and Susan, for a great night. We moved our business to downtown Starke because we believed in its potential. Last Friday, our coffee house had its highest night of gross sales to date The truth about voter verification There have been many misstatements and confusion over the recent implementation of the Voter Verification law, otherwise known as the "No-Match-No Vote" law. The Division of Elections' mission along with local supervisors of elections is to register voters and make sure that they can cast a ballot on Election Day that will be counted. And just to clarify, this law will not affect the status of the 10.7 million already registered voters. The law will apply to all NEW applications received on or after Sept. 8, 2008. The Voter Verification law regarding new voter registration applications became effective January 2006. It was in effect until December 2007 when a court first ordered the department to stop the almost two-year old process. That ruling was overturned on appeal. The law was re-implemented Sept. 8, 2008. The law is being implemented now because the court order denying the injunction became final in July. The implementation was delayed by pending litigation until June 2008, waiting for U.S. Department of Justice preclearance in July 2008, time needed to reprogram the system to automatically notice voters and set up revised procedures, and the time needed to prepare supervisors who were otherwise engaged in administering the 2008 primary election. Unlike 1'hat activists are saying, obvious errors, including nicknames or typos will be resolved and that applicant will be registered to vote. Every voter registration applicant must provide (if issued) a Florida driver's license number, state identification card number or the last four digits of the Social Security number. The identification number is automatically cross-checked against the Florida driver's license database (DHSMV) or the Social Security Administration database. If that number does not match, the Bureau of Voter Registration Services manually reviews for identifiable typographical errors-or-a difference-between-a-nicinfiaide and formal name based on available records and the actual voter registration application. If the number still cannot be matched, the applicant is notified to provide a photocopy of their identification by mail, by fax, or by e-mail; or the applicant may show their identification in person. If proof is provided before the election, the applicant becomes registered and the person is able to vote a regular ballot. If proof is not provided IMPORTANT ELECTION DATES October 6 Deadline for New Voter Registrations October 20 -Early Voting Begins October 29 Deadline for Requests for Absentee Ballots to be Mailed to Voters November 2 -, Early Voting: Ends . November.4 Election Day Nurses at Shands are awesome Dear Editor: My 6-year-old daughter was seen in the emergency room at Shands Starke on Sept. 22, 2008. The back of her earring had made its way into her earlobe and it had to be cut out. From the time I got there until the time I left, everyone was so kind to me and did their best to get my daughter straight back to see a doctor. The nurses who actually took care of my daughter were just so great! Ms. Colleen Thompson was the one who-removed the earring and she was awesome! She took the time to help my daughter calm down and explained to her what she was going to do. I didn't get the other nurse's name, but she was great also. I was in and out of there within an hour and a half. I am truly grateful to them for how they handled my daughter and me. It -is amazing how the bad and negative things can spread so fast, but .when something good happens or when IDENTIFICATION OPTIONS FOR VOTING AT THE POLLS (a) Florida driver's license. (b) Florida identification card issued by the Department of Highway Safety and Motor Vehicles. (c) United States passport. (d) Debit or credit card. (e) Military identification. (f) Student identification. (g) Retirement center identification. (h) Neighborhood association identification. (i) Public assistance identification. before the election, the person may vote a provisional ballot. The person may provide proof up until 5 p.m. of the second day after the election for the ballot to be counted. Keep in mind, this is just for new applications since Sept. 8,2008. This law does not keep any person with an unverified number from being able to vote. This law is about verifying identity at the time of registration, so that when the voter goes to the polls the voter can vote a regular ballot, not a provisional ballot. A voter can show a driver's license, a Florida identification card from DSHMV, a passport, a debit or credit card, military identification, student identification, retirement center identification, neighborhood association identification and public assistance accuracy and less fraud, while creating minimal inconvenience for prospective voters. We encourage you to register now, review your application before submission, and call your local supervisor if you have any questions. See you on Election Day. ,, 'By Secretary of State Kurt S. Browning someone is nice to you, it way too often overlooked. I felt as if I needed not only to let the nurses who helped my daughter know how much I appreciate everything they did, but to also let the public know as well. I just want to say to Ms. Colleen Thompson and the other nurses who helped my daughter, thank you and God bless. Samantha Bradley Every man who is high up loves to think that he has done it all himself and the wife smiles, and lets it go at that. Sir James M. Barrie 1860-1937, British Playwright a~f, 9b W k Stig& Si.wl 1982 Whitney Heather Laura Martha Gayle Salautha --- COUPON --- N OW OPEN r-- COUPON $3 OFF EVERYDAY $5 FF 1 (904) 1 S CUT (904) Hi-Lite or Perm Good thru 10/31/08 -- 964-4 l1 L Goodthru10/31/08 Call us at: 1-866-228-5135 or on the web at: Open to the public: Mon-Fri 8:00 3:00 Done inyhoe An ocsin -aei;l'r vdd I I --- m M t " *nb~uWmplpo~onh Oct. 2, 2008 TELEGRAPH, TIMES & MONITOR--B-SECTION Page 5B Extension service offers landscaping tips lMi\c \ou ce\er \\allied to gi\e tMir lindsciape a facelift bill just didn't kno \\ here to start. f11 \oinr mlaswer is yes, then picasc krcp reading. Allhoulih il i ltoo late to plant sllulmlecr perennials sho\\ n in the pictures, the timing \\ ill soon bhe perfect to pkint flo\\ers that like cool catcherr As :Iin added bonus, \ou clan take advantage of the rain \\ce ha\e heen getting to put ill shrublhS that can be planted .\ear- rontmd. Before goimg out and purclasi ng plans \ou can save youtrselt time and money by following these simple steps: *Make a dra\\ ingl of the area \.oil \\ ant to landscape. Be sure to include surrounding structures that cast shade or \ill I'ramet the appearance. You need to kno\\ the area \our \\ant to fill. *Get the soil tested for pH before selecting plants. Choose plants that are compatible with existing soil pHt *Pult the right plant in the right place. choosee plants that are compatible \with existing soil drainage and light characteristics. When possible, use dIrought-tolerant plants to save water. *Setting out plants during rainy periods makes it easier to establish your ne\\ landscape. -A nice thing about drought tolerant landscapes is that once established, they require minimal watering. If it is not rainy when you set out your plants, a temporary watering system like a soaker hose or a temporary mist systern can be used to establish )our plants. The black hose intertwined among the blue da/e at the base of the sign is a soaker hose that pluts out a fiine spray about I foot on either side the hose. Both of these systems can be attached to a water spigot and are available at most garden stores. *Mist systems, or any portable sprinkler, can be used in combination with many types of inexpensive timers or water flow volume controllers. Using the volume controller is simple. Turn the water on, set the controller to the first notch and time how long it takes to turn itself off. Using a water- controlling device saves a lot of water because you don't have to remember to turn off the sprinkler. If this article has been helpful, then you should consider attending two workshops scheduled for Tuesday, Oct. 7, and Monday, Oct. 20, at the Bradford County Extension Service from 6:30 p.m. until 8:30 p.m. There is a $10 registration fee that will cover the cost of dinner and materials for each workshop. Seating is limited to 25 participants. Pre- registration and payment three days in advance is required for .both workshops. Call (904) 966-6299 to register or for more information. October FHP checkpoint locations The Florida Highway Patrol Troop G will be conducting driver's license and vehicle inspection checkpoints during October in Bradford and Union counties during the daylight I-- hours., S.R. 121. S.R. 16, C.R. 18, S.R. 231, C.R. 229, S.R. 121, C.R. 231. Bradford County S.R. 230, C.R. 100A, C.R. 231, C.R. 225, C.R. 225, C.R. 229, Speedville Road, C.R. 221, Southwest 75th Avenue, C.R. 18, C.R. 221, C.R. 18, C.R. 225, C.R. 229, S.R. 16, Market Road, C.R. 18, S.R. 227. Upward Basketball registration under way in Starke Madison Street Baptist Church in Starke is currently accepting Upward Basketball registration forms and fees, which can be turned in at the church office between 8 a.m. and 5 p.m., Monday-Thursday. Early registration, which lasts through today, Oct. 2, is $80, while regular registration, which lasts through Saturday, Oct. 25, is $90. Checks may be made payable to Madison Street Baptist. (Basketball shorts are included in the registration cost.) Every child interested in participating must attend one of two evaluation sessions. Sessions, which are 9 a.m.- noon at the church's family life center gym, are scheduled for Oct. 18 and Oct. 25. League practice is scheduled to begin Dec. 9. with the first game tipping off Jan. 10. For more information, please call the church at (904) 964-7557. :. up to POLRRIS 200 R EB ES Of Gainesville 2000 IN REBATES Powersorts&Marine Dealer S.,., -,-,,:,,,,,,, 12556 NW, Highway 441 (6 miles north of Highway Patrol) D(38i) 41 8-4 "a. Polaris .u 70XW I I Ic I'. a. wIn -.le RULES OF THE GAME I. n ,n .'ne. except Tl/,ti/aphI enplh' ee_, and their immediate L.inil, nmenibers. is welcomem e ti.. enctr iUne entr\ per person per - '..cek please Persons "inning one " ct'-k are f nrteligfhlK 'ro)-in a.inh fur .a1 I.a-'thrrh t- 4 e <-- 2 \ in picking up \, ilnnin, the \ inni r \ ll ha',e his or her photograph taken for the paper. 3. Entry must be on an official form from the Telegraph and submitted to one of our offices: .11 \\ C.ll (t Stike. 125 E MaNl i Si Like Buller ..r '3.2 S.R I N. ke', .,i.ie Heci.lih before 5 p n, .n F ,d.,',s Fill in all the '1.1'l iii Ihe ..irnle i t ihe leanm .; I1i , l 11l in The perit )n " ', t '. ,. 'n ..- i ;. .). ,- V. .411:' 1 i ll ... ,n 'b l'il ca ,h 4 Inl L. c I . the i.ta. l po.il nt, scored in the GATORS game this week is the tie breaker. Please fill in the points you think will be scored by the GATORS and their opponent, combined. in the tie breaker blank. IFor instance, if the Scile cf the GATORS game was GATORS I). opponent 7. the co-rrect score ill be 26 points. SDE)Ic.'lsntll O. the judge is Imnal' A cY.nJ i.i t broker ill be used. It necesars\ Results \ ill be tabuliaed on Tuesday and winners notified by telephone. Don't forget to list a phone number where you can be reached. SPORTINC TGa Little Caesar mpa at HANCE 207 Orange St. 964-3300 Denver Snta Feat Cincinnati at Dallas Santa Fe at Come K S Union County 5 $00 LARGE PEPPERONI PIZZA Great 211 S. ORANGE ST., STARKE 964-7434J All Day Every Day Zhis I LaL 4Z Spe Watch any F Food 6 Bev( 904-964-' 01 E Call St '( Week's Winner ura Theus Missed 6 arts Pub football Game With Us! erage Specials Daily! WALE (9253) in D),i. nt, n Sil[. re ou SpiresO "Hometown LAWTEY SUPERMARKET P r6 Proud" "Your Game Day Headquarters" Fernandina Beach at Kestone 86-496-3361 Smoked & Fresh Meats Fresh Oysters in Stock oernanina Beach at K Aranet* Meat Freezer Packages Ice Cold Beer 2 miles south of Starke on US-301 oria at rkansasTennessee at Baltimore 904-964-7200 610 SW 1st St., Lake Butler Web address: isit and t at: piresiga.m (904)782-3161 CornerofUS 301 8CR 125 Web address: Visit and contact us at: spiresiga.com (904) 782-3161 Lawtev, FL SAWYER GAS Capital C Your Local Full-Service Propane Dealer Ban Wendell Davis, District Manager O an a FSU at Miami Oregon at USC 350 N. Temple Ave. 500 Green Way S1 sMi uto he (352)468-1500 Starke, FL 32091 Keystone Heights, Gate Station At 301 8 18 1-800-683-1005 (904) 964-7050 (352) 473- S.R. 100E FL 32656 4952 GREAT STEAKS 'hM /e W (S Comn unity AT A GREAT PRICE! Beck Chevrolet Estblished in1957 State Bank Ohio St. at ) r Dea/ler For Lyfe Wisconsin Illinois at Michigan Kentucky at Alabama STARKE LAKE BUTLER FL US-301 North 811 S. Walnut St. 255 SE Sixth St. ,FL .964806 Starke (904) 964-7500 (904-964-7830) ME (386-496-3333) --, I-T- 0 i 1^ ---- IiB - . . . i . . G AVAILABLE! Atlanta at Green Bay US-30 I S in Starke S* Office Supplies Legal Forms Gift Notions Greeting Cards SCalculators Typewriters Coi [Nomif9,u FRE LRY I, s~'hington at Philadelphia rlTfl Cr'\ I TREET* STARKE* (904) 9645764 www. theofficeshopofstarke.com Fax (904) 964-6905 J St US 3R 964 Jackson Soutnern rroiessionai Buidun Title Services, Inc. Building Supply ,ook Fior ihe lii) Ioor Proudly scivitn our commui ty for over 48 years! For All Your land Tilh le Neds San Diego at Miami South Carolina at Ole Miss take Lake Butler Lake, Butler Starke 01 South f 145 SW 6th Ave. 185 SE l" Street 704 N. Lake St. -6078 496-3079 386-496-0089 904-964-6872 V, DOFOR o U ACV C Ss since 1879 co i H A Y ES b Corner of S.R. 16 & 301 N O(904) 964-8744 131 W. Call St. Starke, FL , Pittsburgh at Jacksonville Au Email: [email protected] 'Y Vanderbrnt CIENIAI 904-964-6305 Fax: 904-964-8628 U ,ER00015 as7' R00.644 Insured -- J. ^.^ x u ~~i ___________ Cly\. MT 7U- m )_0 JP OJler *E.01SS.RA0364 n iure Western Auto For all your s DR.C w Satellite Needs. - Repairs & Same Day Installation Seattle at N.Y. Giants (904) 964-6841 L """'i, & 312 W. Call St. (94 6U4 o,1.vrr40e..rs. Starke FL Jackson Building Supply Hayes Electric CowboysSteak House Capital City Bank Sawyer Gas Beck of Starke Whale Tales Bradford County Telegraph Little Caesars Sporting Chance, Town and Country Ford Office Shop Community State Bank_ Lawtey Supermarket Spires Grocery Beck Chevrolet Western Steer Southern Professional Title Handi-House Western Auto TIEBREAKER SCORE: Name: Address: Phone: bIm CHRY sIER 4 Bradford 904-964-3200 15000 U.S. 301 South 1-866-665-2372 Starke L2 I , I I- .- I -J pte rLCL*LLLaZZ*~aL~C-LLLLU ~-~~- ------------------------------ -C- .N! < Page 6B TELEGRAPH, TIMES & MONITOR--B-SECTION Oct. 2, 2008 George W. Teston George Teston LAND OF LAKES-George Weaks Teston, 76, of Land Of Lakes, died Monday, Sept. 29, 2008. Born in Ocoee, he was the son of Blaine Teston and Ruby Brown Teston. An avid bass fisherman who also loved to farm and garden, Mr. Tetston was preceded in death by twoobrothers, Earl and Lonnie. Mr. Teston was alsb a deacon in his ,church for many years. :" Stivivors of Mr. Teston include his wife of 56 years Juanita; two sons, Randall (Darlene) Teston of New Market, Tenn. and Philip (Barbara) Teston of Seattle, Wash.; a brother, Charles Teston and two sisters, Clara and Mildred; and four grandchildren. I grandchildren, 36 great- grandchildren and several great- great-grandchildren. Funeral 'services were held Friday. Sept. 19 in the DeWitt C. Jones Chapel with the Rev. Charlie Clark officiating. Burial was in Dyal Cemetery under the care of Jones-Gallagher Funeral Home. Florence Ohlscwager KEYSTONE HEIGHTS- Florence May Ohlscwager, 80, of Keystone Heights died Wednesday, Sept. 24, 2008 at Robert's Care Center in Palatka. Mrs. Ohiscwager moved to Keystone Heights in 1980 from Monroe. Conn. She was a retired school bus driver. She was born in West Hartford, Conn. to Asmus and Rose Oppelt Bonniksen. Survivors include her husband of 14 years, Jack Ohlschwager of Keystone Heights; three daughters, Sherry Malson of Great Mills, Md., Debi Greenberg of West Palm Beach and Donna Dilo of Davie; a sister, Mina Bedel of Old Saybrook, Conn.; five grandchildren, one step granddaughter; four great grandchildren and one step great- granddaughter. Graveside funeral services were held Saturday, Sept. 27, at Jacksonville Memory Gardens with Mr. Jessie Absher officiating. Arrangements were under the care of Jones-Gallagher of Keystone Heights. Funeral services for Mr. Teston Bessie W illiams will be held Thursday, Oct. 2, -B L 2008, at 11 a.m. in Archie Tanner ames Sullivan TAMPA-Bessie Le Wiliams, ner James Sullivan 88, of Tampa, 'died Wednesday, Funeral Home, Starke. The family 88, of Tam died Wednesday, will receive friends at 10 a.m. STARKE-James P. Sullivan Jr., Sept.24,2008. prior to the service. Arrangements 73, of Starke, died Monday, Sept. Born in Hampton, Ms. Williams are under the care of Archie 29, 2008 at E.T. York Hospice was the daughter of Charles N. Tanner Funeral Services. Center in Gainesville. Williams and LaNora Adams Born in Jacksonville, Mr. Williams. She was educated in the M artha Nettles Sullivan was the son of James P. Bradford County public schools. Se leS and Bertyce Gaskins Sullivan Sr. She was a member of the First HAMPTON-Martha Ann Prior to moving to Alabama, he Baptist Church of College Hill Cooper Nettles, 77, of Hampton grew up in the Evergreen where she sang in the choir, and died Tuesday, Sept. 16, 2008 in community in Lawtey and the served as recording secretary. She Lake Butler Hospital and Hand Keystone Heights area. Following was a avid fan of golf and tennis. Surgery Center following an 30 years of dedication, Mr. She was a retired employee of extended illness. Sullivan retired from Reynolds Seminole Laundry and Sunstate Born in Blackshear, Ga., Mrs. Alloy. Garment Factory. Nettles was a long time resident of Survivors include three Survivors include two children, Starke She was a member of Air daughters, Cheryl Gasque of Constance (Thomas) Bush of Ft. Park Baptist Church and a Tuscambia, Ala., Gayle Dobbins Pierce and a son, John Williams; a homemaker. Mrs. Nettles was of Trinity, Ala. and Teresa sister, Mary L. Green of Hampton; preceded in death by her Stanfield of Muscle Shoal, Ala.; a brother, Emanuel (Georgann) husbands, Rob Cooper and Melvin four sisters, Wanda Goedert of Williams of Orlando:. eight Nettles. Alma, Ga., .Sarah Howard of grandchildren and six great- Survivors include four Dunn, N.C., Patricia Rice of grandchildren. daughters, Katie Hardin and Betty Blairsville, Ga. and Jean Brookens A wake for Ms. Williams will Kight, both of Lakeland; Bea of Pinetta, Fla.; six grandchildren be held Friday, Oct. 3, from 6-7 Strickland and Audrey Thornton, and a special friend, Martha Brock p.m. at Aikens Funeral Home in both of Starke: three sons, Henry of Starke. Tampa. Funeral services will be Cooper of Lakeland; Leon Cooper Funeral services for Mr. held Saturday, Oct. 4, 2008 at 1 I and William Cooper, both of Sullivan will be-held Thursday, a.m. at First Baptist Church of Starke; two step-daughters, Linda Oct. 2, 2008, at 11:00 a.m. in the College Hill, Tampa. The family Sanders and Sandra Summerlin, DeWitt C. Jones Chapel with can be reached by calling (813) both of Georgia; two sisters, viewing held one hour prior to the 247-2657. Arrangements are Melvyryn Roe of Callahan and service. Please omit flowers and under the care of Aikens Fungral Jane Stolier of Hilliad; "(F(1 "qfk'e "'dnfriButid&is "fo' -ve;" Home. , .ilorf:t 4rv.''u*Z b'mTh*1 'Pl ~ i- ' ; nqcr t '"J'islP A? v'"'nv "A'Mr "Rn-lf3 hnC .,wO( vql -c yd b lnr i irn! r i t q i ner i atRM qvnA IsiM1ADiIIO r ,i- We offer these Special Packages for at-need and first-time pre-need subscribers D Arc he Pre-paymrent accepted Funeral Home "Within Your Means Now, Peace ofMind Always" ri:Ut '- ur 386-496-2008 386-496-2056 55 North: Lake Avenue Lake Butler, Florida 32054 ,n HAVEN In Memory Hospice or the American Cancer Society' Arrangements are under the care of Jones-Gallagher Funeral IHome olfStarke. PAID OBITUARY James Tyson CHICAGO-Deacon .lames Norris Tyson. 74, of Chicago, 1ll. died Monday, Sept. 8, 2008 following an extended illness. Born in Starke, Mr. Tyson moved to Philadelphia, Penn. and then later to Chicago. He was preceded in death by his wife, Mary Williams. Mr. Tyson was employed with the city of Chicago for 30 years. He began his education at Thurston Elementary School in Bradford County; was a member of Greater Cannon's of Chicago and served on the deacon board. Survivors include four daughters, Jacqueline, Gwendolyn, Carolyn, and Marilyn; five brothers, Ivie Norris of St. Petersburg, Joseph Norris of Webster, Eddie Neal, Sr. of St. Petersburg and Douglas Fcrgerson of Starke; a sister, Carnell Williams of Starke; 11 grandchildren and 12 great- grandchildren. Funeral services were held at I a.m. Saturday, Sept. .13, at Greater Cannon Missionary Baptist Church with the Rev. Leon Thompson, eulogist. Arrangements were under the care of Jones Funeral Home of Chicago. Local information courtesy of Haile Funeral Home. In Memory Babs Montpetit In Loving Memory of Babs Montpetit Ms. Babs, I can't believe it's been a year! won't be any doubt, you are so wonderful to think of and so hard to be without. Not a day goes by I don't think ofyoy. I miss your warm heart and kind words, your beautiful smile, your love and guidance. Until we meet again one day, I'll keep you in my heart I love you and miss you dearly. Love, Debbie If life were measured by accomplishments, most of us would die in infancy. A. P. Gouthey **** Charlene West LAKE BUTLER-Charlene B. West, 57, of Lake Butler died Sunday, Sept. 28, 2008 at Shands Starke following an extended illness. Born in Orlando, Mrs. West had lived most of her life in Lake Butler. She was a graduate of Union County High School and was a member of First Baptist Church of Lake Butler. Mrs. West was also the daughter of Sidney West Jr. and Lucille Keller West. Survivors are her sister, Kathryn J. West of Lake Butler and her caregiver, Harriett Maines. Funeral services will be held Tuesday, Oct. 7, 2008, at 10:00 a.m. in the Chapel of Archer Funeral Home with the Rev. Jason Johns officiating. Burial will follow in Woodlawn Cemetery in Ocoee at 2:30 p.m. Archer Funeral Home is in charge of arrangements.. L .. "When Yeu Say It With Flowers Its leeutifully Said" JSince W19 7 Florist (904)964-7711 Swwwjiasf lorstff.com 218 N. Temple Ave.*Starke Not Registered to Vote? OCTOBER 6 IS THE DEADLINE This is the all-important deadline to register to vote to be eligible for the historic November 4th General Election. If you are already registered but have moved, had a name change, or had any other change in your record, it is important that you update your record to avoid delays on Election Day. Drop by the Elections Office in the North Wing of the Courthouse or call us at 904-966-6266 "Haven was not only there to care for thefamily, but to make sure that my stepfather had everything he needed. I didn't have the answers for what my m mother was going through... but Haven . did. They comforted us in ways that only j i.. someone who has been there can. " Kein TIomas Family Member . e M s .om.- 110- 4- MNIR C [tz"Rl L S Serving North Florldo since 1979. Licensed as a not-for-proflt hospice since 1980 I ~ . ~. Ginger Lee Davis In Loving Memory of Ginger Lee Davis July 12, 1942 Oct. 9, 2004 You were a precious gift given to us from God. Our life with you was such a joyful journey. We laughed, we cried, we hugged and we cared for each other with love. Your life in Christ influenced so many. Your compassion for all encouraged us all to be better people. You were a true prayer warrior We miss you daily. We can "Only Imagine" your life in Heaven. Your love lives on in your friends and family Love, Your Family Enhancing life through our compussion and care. HrIP i+-S14i;rl;~*r Ci~.*;~. C I.OCBITUARIES Page 8B TELEGRAPH, TIMES & MONITOR--B-SECTION Oct. 2, 2008 CRIMEn Recent arrests in Bradford, Clay or Union The following individuals were arrested recently by local law enforcement officers in Bradford, Clay (Keystone Heights area) or Union County: Megan Howell, 24, of Keystone Heights was arrested Sept. 28 by Clay deputies for resisting a retail merchant. Justin D. White, 27, of Keystone Heights was arrested Sept. 23 by Starke Patrolman Cliht Lockhart for retail theft. White was charge with removing the packaging of a Playstation 2 controller, which he concealed in his pocket. He then left Wal- Mart without -paying for the item, Patrolman Lockhart said. Value of the controller is $53.42. A $500 surety bond was posted for White's release from custody. Joanne Fowler, 53, of Starke was arrested Sept. 26 by Patrolman Brown for theft. Fowler was charged with stealing the victim's dog and refusing to return it when asked to do so, Patrolman Brown said. She was released on her own recognizance. Stephanie Mcleoud Goad, 23, of Hampton was arrested Sept. 25 by Patrolman King for retail theft. Goad was charged with removing a bottle of shampoo from a shelf in Family Dollar and placing it in her purse, Patrolman King said. She then left the store and put the bottle in a trash can. Goad later returned and offered to pay for the shampoo valued at $3, Patrolman King said. A $1,000 surety bond was posted for Goad's release from custody. Termaine Alvin Byrd, 22, of Starke was arrested Sept. 25 by Patrolman King for trespass after warning. Byrd was arrested at the back side of T.H.E. Apartments, where he had been warned and trespassed on an earlier date, Patrolman King said. He was released from custody after a $1,000 surety bond was posted. Gabriel Stephen Cartwright, 30, of Starke was arrested Sept. 25 by Starke Patrolman James Stutler for disorderly intoxication and possession of marijuana. Police responded to a burglary in progress at Center Street at 2:02 a.m. Cartwright had been banging on the victim's windows and door, Patrolman Stutler said. Cartwright appeared to be intoxicated and was slurring his words, Patrolman Stutler said. During a search, marijuana was found on Cartwright, Patrolman Stutler said. Surety bonds totaling $2,000 were posted for his release. Anthony Leonard Aaron, 49, of Starke was arrested Sept. 28 by Patrolman Brown for resisting without violence and attempting to flee' or elude. Aaron accelerated to 90 mph in a 30 mph zone on Southeast 143rd Street when the patrolman activated his emergency lights. After a brief chase, Aaron left his vehicle and ran on foot. When he was apprehended, he refused to b6 handcuffed by placing his hands behind his back, Patrolman Brown said. The incident occurred at 3:28 a.m. Bond was set at $6,000. Ashley Michelle Mote, 22, of Jacksonville was arrested Sept. 23 by Bradford Sgt. R.W. White for disorderly intoxication. Mote was found loitering in the Allison Way area, where deputies had received a noise complaint. Mote became verbal and Sv --- v v v v v v v v v v v . v v 1 Spruce Up 0 4 y"Our Landscape Mulch Bulk Stone Cypress Blend Pea Gravel - *Colored Mulch *River Rock 4 Pine lark Red Rock 4 I "Bulk or bagged" *Marble Chips Horse Bedding Crushed Concrete 4 4 Baled Pine Straw Railroad Ties 4 STARKE LANDSCAPE SUPPLY 9620 SE S.R.100, Starke : STues.-Fri. 7-5:30 (904 964-3112 Prices & availability 4 Sat. 7-3:30 subject to change 4 Closed Sun.&Mon. Approx. 2-miles E of U.S. 301 without notice. Familiar Face, New Location! caused a disturbance, Sgt. White said. She was consuming a beer and refused to calm down, Sgt. White said. A $1,000 surety bond was posted for her release from custody. Hurbert Edenfield, 20, of Keystone Heights was arrested Sept. 23 by Patrolman Brown for disorderly intoxication. Edenfield created a disturbance in the Winn Dixie parking lot by causing a fight with a group of juveniles, Patrolman Brown said.' He was intoxicated when the incident occurred at 10 p.m. The juveniles were parked in the lot, Patrolman Brown said. Edenfield was released from custody after a $1,000 surety bond was posted. Thomas Jerrel Drawdy, 61, of Lake Butler was arrested Sept. 25 by Union Deputy John Gootee for trespassing on property and petit theft. Drawdy was charged with stealing PVC pipe from an abandoned building on the victim's property. The victim stated unknown subjects had looted his buildings for scrap metal over the past few weeks, Deputy Gootee said. William Tracy White, 40, of Middleburg was arrested Sept. 26 by Patrolman Schlofman for loitering and prowling. White was loitering in the Bradford Court Apts. area just before midnight. He does not reside there, Patrolman Schlofman said. Bond on the charge was set at $1,000. White was also charged on a warrant from Putnam County for failure to appear. Thomias Elton Wilkins, 59, of Starke and Marcus O'Neil Jenkins, 22, and Marie Clark, 29, both of Lawtey, were arrested .-Sept. 28 by Patrolman Brown for possession of cocaine. A gram of crack cocaine was found during a traffic stop on North Temple Avenue. Bond on the charge was set at $15,000. Wilkins was also charged with failing to notify DMV of a address change, Patrolman Brown said. David Mood'). 30.' 6T Hampton was arrested Sept. 23 by Clay deputies for violation of probation improper exhibition of firearm. Fred Lucas, 44, of Keystone Heights was arrested Sept. 26 by Clay deputies on a warrant for violation of probation domestic battery. Jerry Ryan Jones, 35, of Lake Butler was arrested Sept. 22 by Deputy Shane for failure to appear on a felony offense. Ronnie A. Baker, 27, of Starke was arrested by Alachua deputies on a warrant from Bradford County for violation of probation for a charge of possession of cannabis and failure to appear. He remains in custody of the Bradford County Jail without bond. Jeremy Tillman, 28, of Green Cove Springs was arrested Sept. 24 by. Patrolman King on a warrant from Clay County for sale and delivery of marijuana. Bond was set at $25,003. Keri Leanna Geiger, 21, of Lake City was arrested Sept. 25 by Bradford Deputy R.V. Melton on a Bradford warrant for violation of probation petit theft. Bond was set at $5,000. Danielle Monique Kates, 21, of Starke was arrested Sept. 25 by Bradford Deputy Keith E. McInnis on a warrant for felony welfare fraud. She was released on her own recognizance. Lisa Ann. Cowan, 39, of Melbourne was arrested Sept. 27 by Bradford Sgt. Robert Lyons on a capias for burglary and grand theft. Surety bonds totaling $15,000 were posted for her release from custody. Eric Lee Hunt, 18, of Keystone Heights was arrested Sept, 25 by Clay- Deputy Renee Scucci on a warrant for burglary motor vehicle and petit theft with no bond. Hunt was re-booked into the county jail. Traffic Irene Valinski, 52, of Starke was arrested Sept. 27 by Clay Deputy H.L. Lanier for driving under the influence (DUI); ValinSki, reported another vehicle struck her vehicle while she was driving on Kingsley Lake Drive. When the deputy ,responded at 7:48 p.m. he - ---- -- Gene Stephen Jordan, 58, Gene Stephen Jordan, 58, of Keystone Heights was arrested Sept. 29 by Clay deputies on a Bradford warrant for violation of probation DWLS, DUI. He was ordered to serve 40 days in jail. noticed Valinski smelled of an alcoholic beverage. After failing the field sobriety test, Valinski was placed under arrest, Deputy Lanier said. Her blood-alcohol level was .130 percent. Tiescha Latrese Mitchell, 27, of Starke was arrested Sept. 27 by Hampton Chief John Hodges for driving while license suspended or revoked (DWLS) knowingly. Mitchell's vehicle was stopped for running a red light at U.S. 301 and C.R. 18. She was also charged on a Union County warrant for failure to appear DWLS. Surety bonds totaling $11,000 were posted for her release from custody. Nancy Lynn Roberts, 42, of Keystone Heights was arrested Sept. 24 by Bradford Sgt. M.L. McKenzie for DWLS. A $500 surety bond was posted. for her release from custody. James E. Irving, 37, of Waldo was arrested Sept. 26 by Hampton patrolmen for DWLS. A $500 surety bond was posted for his release from custody. Allen Howard Johnson Jr., 20, of Ocklawaha was arrested Sept. 24 by Lawtey Lt. M.E. Jenkins for DWLS with knowledge. He was released from custody after a $500 surety bond was posted. Stephen Daniel Boswell, 45, of Bensalem, Pa., was arrested Sept. 27 by Lawtey Patrolman Kelly Brown for DWLS with knowledge. He posted a $500 cash bond for his release. Danny Keith Stanford, 51, of Waldo was arrested Sept. 24 by Hampton patrolmen for DWLS knowingly. Stanford posted a $500 cash bond for his release from custody. Timothy John Fox, 20, of Starke was arrested Sept. 29 by Orange Park Patrolman R. Baltram for DWLS knowingly and attaching a tag not assigned. Fox's vehicle was stopped at the intersection of Kingsley and Park avenues for faulty equipment. SReginald Bernard. Crum, 38, of Starke was arrested Sept. 27 by Hampton patrolmen for DWLS. A $500 surety bond was posted for his release from custody. A passenger in the vehicle, Johnnie Lee Simmons Sr., 60, of Pomona Park, was charged on a writ of attachment for child support. He purged by paying $1,000 in cash. Patricia Gilbert Deese, 57, of Lawtey was arrested Sept. 24 by Bradford Deputy Scott Konkel on a warrant from Putnam County for DWLS habitual. Bond was set at $5004. 4200 SW 34th Street Gainesville, FL 32653 352-224-2700 cell 352-478-7267 Swww. alarionbank. com Myrna Jackson Loan Officer Myr More income at retirement? , 6.40%* Elbert Arnold Southall, Agent 119 N Walnut Street Starke, FL 32091 Bus.'904-964-5391 [email protected] Single rnemium Defed Anuity 'Current effective annual interest rate for 1 -year Interest Rate Guarantee Period based on premium of $50,000 as of 09/16/2008. Rate includes a 1.00% bonus in the first year. At the end of the first year, the interest rate is reduced by 1.00% and is guaranteed for the remainder of the guarantee period. A tax benefit'today, plus retirement income you can't outlive. Let's talk about the FUTURE INCOME PLUS deferred annuity from State Farm. statefarm.com om Lower rates apply for lower premium Rate subject to change without notice. Actual rate credited will be rate in effect on the day premium is received. After 10 years, a new guaranteed interest rate, not less MT, A03047 & A03097 in NY, 03047 & 03097 in OR. PA. TX. and A03040 & A03090 in WI State Farm Insurance Company. Bloomington. IL (Not licensed in MA. NY. and WI) State Farm Life and Accident Assurance Company. Bloomington. IL (Licensed in NY and WII (rolo 'I1,. I N Not FDIC Insured No Bank Guarantee Ma Lose Value I SAN MATEO SEAFOI -0II0 OFF *lidi.Shimp Lake Butler woman home after crash A 68-year-old Lake Butler woman is at home recovering from injuries received when her vehicle was struck by a patrol vehicle north of Brooker. Linda T. Barber, driving a 1999 Lincoln, was northbound on S.R. 231, according to FHP Sgt. Thomas Stebbins. Bradford Deputy Keith Mclnnis made a U-turn in front of Barber's vehicle. Barber took evasive action but was unable to avoid hitting McInnis. Her vehicle traveled off the west shoulder and down an embankment, striking a barbed-wire fence and tree with its left front, Sgt. Stebbins said. Barber was taken to North Florida Regional Hospital. She was released Sunday and is at home recovering from her injuries. Barber said she is very bruised and has three broken ribs. McInnis stated the glare of the sun blocked his view of the other vehicle. The crash remains under investigation. McInnis, who had minor injuries, was patrolling for traffic violators in the' Brooker area at the time of the crash. He was in an unmarked Dodge Charger police vehicle: Damages totaled $22,500. Starke woman injured in crash A 30-year-old Starke woman was injured Sept. 25 when her vehicle was struck on U.S. 301 north of C.R. 227. Heather Seay was taken to Shands University of Florida with serious : injuries, according to Florida Highway Patrol Troodper A. Cummings. Seay1 Wdisfdt 'i'r "f '' the hospital system as of press time. Two semi trucks were southbound on U.S. 301 at 5:11 a.m. Seay, driving a 1996 Chrysler van, was northbound. She entered the direct path of one of the trucks, Trooper Cummings said. The truck drivers attempted to avoid a collision but were unable to do so, Trooper Cummings said. Charges are pending in the! crash. There were no reported' injuries to the truck drivers,, both of Jacksonville. Total damages were: $13,500. rr -~-nnrPnn~*n~ola~lrarr*norr-r~-r~~ Oct. 2, 2008 TELEGRAPH, TIMES & MONITOR--B-SECTION Page 9B Robinson has 4 scores in Indians' 35-7 district win Bradford quarterback --Trey Winkler (left) sprints out of the pocket in the Tornadoes' 19- 17 win over District 3-2A opponent Fernandina Beach. Tornadoes top Pirates by 2 in see-saw district battle BY ARNIE HARRIS Telegraph Staff Writer Long offensive drives and a tenacious defense that forced five turnovers led to the first win of the season for the Bradford football team, which defeated District 3-2A opponent Fernandina Beach 19-17 on Sept. 26 in Fernandina Beach. Bradford head coach Steve Hoard said he was very pleased with the team's victory-especially the fact that the defense, which had four interceptions and one fumble recovery, kept the pressure on for the whole contest. "It was still far from a perfect game from us,' Hoard said, "but I'm pleased that the guys got the message that they didn't play as hard as they were capable of in their first two games." Those first two games were losses by a combined score of 72-15, and the game against Fernandina (2-2, 0-1 in District 3) did not get off to a good start. Bradford (1-2, 1-0) muffed a punt, which resulted in the Pirates taking control of the ball on the Bradford 15. However, the stubborn See BHS, p. 10B BY CLIFF SMELLEY Telegraph Staff Writer Marcel Robinson rushed for 216 yards and four touchdowns to help lead the Keystone Heights football team to its first win of the season, a 35-7 victory over District 3-2A opponent Interlachen on Sept. 26 in Interlachen. The winless Rams hung tough with Keystone, trailing just 14-7 at the half. The Indians, though, pulled away with a pair of 80-yard scoring plays-a reception by Ryan Lather and a run by Robinson. Robinson averaged 12 yards per carry in the second half as the Indians seemed to wear Interlachen down. Keystone head coach Chuck Dickinson said the Rams were playing a lot more players on both sides of the ball. "If you've got a lot of kids playing both ways, it takes a toll on you," he said. ' Keystone (1-2, 1-0 in District 3) can chalk up its t\wo losses this season to playing quality competition (Lafayette and Fort White are both undefeated and ranked in the top five in their respective classificationss, but mistakes were also prevalent in those games, 'especially on the offensive side of the ball. Dickinson said his team was making the kind of mistakes it hasn't made in three or four years. That comes with losing an experienced offensive line and basically having 10 new starters on that side of the ball. "(In the first two games), we were just turning people loose, having missed blocking assignments and jumping offsides at inopportune times," Dickinson said. "That's something we really worked on these last two weeks- getting back to the basics." Turnovers, however, continue to be a problem. The Indians had three against Interlachen, including one to start the game. Keystone received the opening kickoff, but Interlachen recovered a fumble on the return, giving the Rams the ball at the Keystone 36. Interlachen (0-3, 0-1) moved right down the field on three straight runs by Quell Brown, the last.a 4-yard touchdown that, with the extra point, put the home team up 7-0 approximately a minute into the game. The Rams did not enjoy the lead for long as Robinson made his presence felt immediately. He carried the See KHHS, p. 11B BHS looks for second win against struggling Warriors BY CLIFF SMELLEY Telegraph Staff Writer Bradford snapped its losing streak with its win over Fernandina Beach last week and will now host a team looking to end its own streak. The Tornadoes play their first home game this Friday, Oct. 3, when they host District 3-2A opponent West Nassau at 7:30 p.m. The Warriors have lost three., in a row, most recently losing 12-6 to Yulee in overtime last week. West Nassau (1-3, 0-1 in District 3) got off to a good start this season, avenging a loss last year,,, ,by Idefeating,; Episcopal 35-34 The Warriors, then dropped games against Clay (49-20) and Union County (34-24) prior to playing district opponent Yulee. Against Union County, a team that defeated Bradford 12-0 in a half of play in the preseason, the Warriors amassed 238 yards rushing and 106 passing. They outgained the Tigers, but turned the ball over five times. Three first- half turnovers helped put West Nassau in a 28-12 hole at the half. A turnover in the latter stages of the game killed a rally after the Warriors closed the gap to four points. Senior running back Horace Wilson, one of the team's key returners from last season, was the main workhorse when the Warriors were able to hang onto the ball. He scored three times, including the game's first touchdown when he got :' tiL bulk,,of his;leam's touches on a 67-yard game-opening drive. Defensively, the Warriors held Union to 109 yards rushing and 94 yards passing. They had held the Tigers to 51 yards rushing until Deven Perry broke free for a 58-yard touchdown run in the game's final minute. The fact that West Nassau was involved in an overtime game last week brings to mind last year's game between the Warriors and Bradford. The two teams battled through a scoreless second half before the Tornadoes came out on top 20-17 in overtime. Graduate Rob Harris was the hero of the game with a 1-yard touchdown run after West Nassau's Austin Guffin kicked a 21-yard field goal in overtime. Harris accounted for all of Bradford's points, scoring on runs of 60 and 80 yards in the first halft, Guffin, like the Warriors' other key players in the game, has since graduated. West Nassau's two scores in regulation came on pass plays of 25 and 35 yards from quarterback AJ. Higginbotham to receiver Kent Thomas., Bradford finished the game with 255 yards rushing on 33 carries and no turnovers. The Warriors had 278 total yards, including 218 through the air. No amnsfo otsAvailale onAllMdl!- Of Gainesville 12556 NW Highway 441 386-41i84244 flo. Li Mrfoecu~i rw~i4..amwlana*buhadiOniL45Spun Sa~ sdsrpi~wandi 0h mat flc.%M*6n ,ftrtci~ols en sMAoeiiS Carnge565m l~l.iis *5uc v4 ndnrsisim 4n vetyt Mite a .mr m .~pardiioL i.ttaLsa, aflrrunga a oaiv 4.5,iiemainingpsoiar CiamiW~ ca a S..enY ~ bnc.aaaut ipitsiaahanpaiac fhihY pumna~iwi ,5534 1aihnbv raiach, MiwitsaL i iyii4a aypui i 313114Pr U 5 lsa 4.it~n gPaii.ia 34tp,4..a .4 PVS~ti ass, alrM aam i 4. a 1MraahaatVM7 UP fraWksaidai.5c lko ~dlO Paoi l seOR 2058 ig LOStM .5 M itts ii LuiSa Su i P 0SsCun~ ~ m 5 h*. 20 iaaM.,3,.&ai. CROSSWORD Sow, S.eiS GRACEFULLY GROWING Hairy Business STARKE Hair Nails Tanning L eAR ng CNE 6 HAIR CUTS CHIROPRACTIC INC. Hair Nails Tanning "We Make Learning Child's Play" Walk-ins Welcome 01 q 904-964-9969 Men- Women -Children 904-368-0011 904-966-2300 s9.s5 9 904-964-3338 Hwy 301 S Starke 743 S. Walnut St. Hwy 301 N Starke, FL Next to Auto Zone in Starke 225 S. Orange St. Starke, FL CS Community LB State Bank STARKE LAKE BUTLER 904-964-7830 386-496-3333 The Law Offices of Douglas E. Massey CRIMINAL DEFENSE a$ (904) 964-6465 19580 NW SR-16 Starke, FL Serving Bradford and Union Counties ROBERTS INSURANCE -All Your Insurance Needs! Starke 904-964-7826 Keystone *352-473-7209 Lake Butler *386-496-3411 COWBOYS TWO CAN DINE FOR $20 Beverage Included! Hwy 301 S Starke 368-3800 Delivery, Installation and Turf Care CALL 386-496-2174 YOU LOOKED.... and so will others! Place your Ad here for $30 Call Darlene 904-964-6305 darlene()bctelearaoh.com Capital City Bank 350 North Temple Ave Starke 904-964-7050 500 Green Way Keystone Heights 352-473-4952 CROSSWORD PUZZLE 9-25-08 51."My _I' 52. Sundae topper, perhaps 53. Bomba, Western name for the RDS-220 hydrogen bomb 55. undertake, with out 56. Clinker Answers to 9-25-08 puzzle G A R B SBH'M]'O A P P L 3E ALOEM" P EEL MIN A IIAD B A S E B A L L D I A M O N D O 1MAPN I ID IOT N E O NO R B1A P S TO1A y E L L OW H A M ME IR S N E G RIO Y ODI E L R I S C A T L-AS R 0 P Y O L E OS F R E Y A CONDE MN E DCELL E SAU ETA APE A L A T IGON S T YE S F I L L IIN G IS T A T I T .ON( S A S P ElN E L L S TUNA R A S TA TO Y S S T AY Across 22. Its motto is industry' 1. Like It Hot 23. Montana city 5. Kuwaiti, e.g. 24. Nonchalantly unconcerned 9. Katherine _, German-American botanist 25. Personnel director 13. Certain surgeon's patientf 26. Gastric woe 14. 0. Henry's -he Gift of the -. 27. School mos. 15. Editor Harold 28. Mythical, female water creature in 16. Acts of increasing wealth German folk tales 19. Swedish shagtrug 29. Perennials of the genus Geum 20. Animal with a mane 30. Dissolutions or destruction of cells 21. Hang around 32. Express 22. Abreast of 35. Someone who leaves one country to 23. Bundle settle in another 24. High land 36. Masked omnivorous nocturnal 27. Of heaven or the spirit mammal 31. Air freshener option 37. Pool site, maybe 32. Docs for dachshunds 42. Pertaining to a group of organic 33. Brown, e.g. compounds of nitrogen derived from 34. Extinct primitive toothed birds of the Jurassic ammonia period 43. Asian weight units 38. "Didn't I tell you?" 44. Negative particles 39. Bit 46. Forever (archaic) 40. Nitrogen compound 47. Advil target 41. Rock boulders that differ from surrounding 48. FedEx, say bedrock due to glacier transportation, 49. LP player 44. Admission 50. Christiania, now 45. Magician 1 2 3 4 5 46. Small buffalo of the Celebes 47. More pale -13 1 50. Hodgepodge 51. All the rage 54. Englishmen from a high social class who think 16 17 they are intelligent and important but are not 57. Reproductions of sound with little or no distortion 19 20 58. Dash 59. Used in animal feed 22 60. Beowulf, e.g. 61. 1992 Robin Williams movie 24 25 26 62. It may get into a jamb Down 1. Antares, for one 34 3 2. Bacchanal 3. Prefix with phone 38 4. Always, in verse 5. Embryonic sac 41 42 43 6, Heaviest of the inert gases 7. Not 'fer 8. Show 45 9. Plane, e.g. 10. Caroled47 48 49 11. Achip, maybe 54 12. Union of Soviet Socialist Republics 15. 'South Pacific" hero 57 17. Petting zoo animal 18. Go by, as time 60 No Payments for 6 Months! Page 10B TELEGRAPH, TIMES & MONITOR--B-SECTION Oct. 2, 2008 We are the champions The Lawtey Crue team took home the championship trophy of the Bradford County co-ed softball league with its win over the Scared Hitless team. Pictured are: (front, I-r) Rebecca Wise, Lori Norman, Wesley White, Scott Jones, Tommy Wise, Kevin Wise, (back, I-r) Jared Chapman, Derek Donnely, Kayla Alvarez, Jesse Alvarez, Chris Wise, Sonya White, Clayton Norman, Justin Fogarty, Brad Thomas, Rob Norman, Jimmy Brown, Joy Stafford and Will Hobbs. Not pictured: Brian Outlaw. St. Francis hands Indians 3-0 loss BY CLIFF SMELLEY Telegraph Staff Writer St. Francis handed the Keystone Heights volleyball team its second straight loss outside of tournament play, but the Indians impressed their coach with their play in the Keystone Heights High School Invitational this past weekend. "The team is progressing," head coach Belinda Smith said, "but like any other team, we still have our moments." St. Francis swept the Indians (9-6 prior to Sept. 30) by scores of 25-10, 25-23 and 25- 18 on Sept. 29 in Gainesville. Katie McCollum tallied 17 assists, while Maranda Gibbs had 16 digs. Carey Taylor led the team with 10 kills, while Morgan Maxwell and Katie Easton had three and two blocks, respectively... On Sept. 23, the Indians suffered their first setback in District 6-3A, losing 3-2 (15- 25, 25-23, 25-22, 15-25, 13- 15) to Pierson Taylor in Pierson. McCollum had 26 assists, while the team got 24 kills from Taylor (nine), Maxwell (eight) and Shannon Gray (seven). Gray also contributed eight service aces, while defensively, Gibbs and' Maxwell had 15 digs and three blocks, respectively. The loss gave the Indians a 2-1 district record prior to its match against Union County on Sept. 30. During the Sept. 26-27 Keystone Heights High School Invitational, the Indians placed fourth out of I1 teams, winning three matches and losing two. "The girls really played well this past weekend," Smith said. "Many of my seniors really stepped up their games." Leading the team in kills for the tournament were Taylor with 28, Maxwell with 26 and Gray with 20. Gibbs and Taylor had 33 and 28 digs, respectively, while McCollum had 75 assists. Maxwell tallied nine blocks. Keystone defeated Baker County 2-0 (25-20, 25-22), Lecanto 2-0 (25-21, 25-14) and North Marion 2-1 (21-25, 25-18, 15-6). The Indians' losses were 2-0 (25-19, 25-18) against Menendez and 2-1 (25- 18, 23-25, 15-13) against St. Johns Country Day. The Indians host Pierson Taylor tonight, Oct. 2, at 6 p.m. That's followed by another home match against a district opponent when Union County pays a visit Tuesday, Oct. 7, at 6:30 p.m. Tigers get swept by Suwannee BY CLIFF SMELLEY Telegraph Staff Writer It was a chance for the Union County volleyball team to even its record, but the Suwannee Bulldogs got the best of the Tigers, defeating them 3-0 (26-24,25-17,25-22) on Sept. 29 in Live Oak. The Tigers (4-6 prior to Sept. 30), who got eight and seven service points from Markie Emery and Jordan Windham, respectively, had a couple of chances to put consecutive wins, together this past week. They defeated Newberry on Sept. 22, but then lost 3-2 (25-15, 25-13, 21-25, 19-25, 15-5) to District 6-3A opponent Interlachen on Sept.; 23 iiinLake Butler The loss tod' Suwannee came after a 3-0 (25-13, 25-16, 25-11) win over visiting Hamilton County on Sept. 25. In the loss to Interlachen, Linsey Clark and Emery led the offense with 31 and 19 kills, respectively, while Keira Sellers added 10. Emery had 16 service points, Windham 15, Carson Mize 14 and Brianne Clyatt 11. Mize also led the team with 47 assists. Megan Mobley had 54 digs, followed by Clyatt with 38, Emery with 34, Mize and Ashley Parrish with with 24 each, Windham with 22 and Kiara Holland with 17. . In the win over Hamilton County, Mobley and Parrish had 29 and 28 digs, respectively, while Clyatt had 16, Mize 12 and Holland 11. Emery led the team in kills with 13, while Clark had 10. Mize had 19 assists, while the service points leaders were Clyatt with 18, Sellers with 16 and Mobley with 13. Union played Keystone Heights this past Tuesday in an attempt to pick up its first district win. (The Tigers fell to 0r3 after the loss to Interlachen.) The Tigers host district opponent Crescent City tonight, Oct. 2, at 5:30 p.m., followed by another home match against Suwannee on Monday, Oct. 6, at 6:30 p.m. On Tuesday, Oct. 7, 7 the Tigers. travel to play Keystone at 6:30 p.m. 1874-1936, British Author BHS Continued from page 9B Bradford defense forced Fernandina Beach to settle for a 34-yard field goal at the 9:18 mark of the first quarter. On the next series, the Tornadoes fumbled on their first running play, with the Pirates recovering the ball on their opponents' 16. After two rushing attempts were pushed backward by Bradford's defense, Tramaine Harris picked off a pass, giving the Tornadoes the ball on their own 19. From there, Bradford commenced an 81-yard drive behind the rushing of Reggie Thomas and quarterback Trey Winkler. A crucial moment in the drive was when Hoard rolled the dice and had his team go for a first down on fourth-and- 4 at the Fernandina 40. The gamble worked and was rewarded four plays later when, from the 20, quarterback Rodney Mosely successfully faked a handoff, kept the ball and ran virtually unmolested, for a touchdown with 27 seconds to go in the first quarter. The PAT put Bradford up 7-3. For most of the second quarter, neither team could U --- L --~--~ --- - mount a successful offensive drive until the Pirates took advantage of a Bradford penalty that nullified a Pirates' punt and gave them new life on Bradford's 45. A 16-yard pass completion by quarterback Emory Wingard and two runs set the Pirates up at Bradford's 2-yard line. They punched the ball in from there for a 10-7 lead with 2:28 left in the half. The Tornadoes came roaring hack, and as the seconds ticked down to halftime, they mounted a 72-yard drive largely on the strong throwing arm of quarterback Trey Winkler. Winkler completed four consecutive passes, including a 12-yarder to receiver Seth Upthegrove and a 16-yard touchdown strike to Harris with 21 seconds remaining in the half. The PAT failed, but Bradford retired to the locker room ahead 13-10. The third-quarter was mostly a standoff as both teams' defenses stifled each other's offensive efforts. However, in the closing minutes of the quarter, the Pirates penetrated Bradford's red zone. After a field-goal attempt went wide, a penalty gave Fernandina another chance. The Pirates quickly capitalized with a touchdown with 26 seconds remaining in the quarter, putting them back on top by a score of 17-13. Once again, the Tornadoes quickly responded in the opening minute of the final quarter. Beginning at their own 20, they drove to their 40, where, in a quarterback-keeper almost identical to Bradford's first score, Winkler raced the ball 60 yards for a touchdown at the 11:12 mark. The two- point attempt failed, but Bradford now held a fragile 19-17 lead with most of the last quarter to go. Bradford's defense stepped up once again and blanked the Pirates for the rest of the contest, forcing two interceptions, one nabbed by Charles Jones and the other by Harris. The Pirates added to their troubles by coughing up the ball at their own 34, with the ball being recovered by defensive lineman Casey Hines. "They came in with-and showed-a never-say-die attitude and brought us home a big win," Hoard said of his players. A positive attitude may not solve all your problems, but it will annoy enough people to make it worth the effort. Herm Albright "LOOK" We Accept MEDICARE and MEDICAID Call Vision Tech, Grace by Dr. Gary Williams for your Pre-Eligibility at Independent Doctor of Optometry 904-769-9593 & Starke Wal-Mart COUPON REQUIRED CONTACT LENS EXAM Expires 10'31/08 E COUPON REQUIRED E YE GLASS EXAM Eqo 1r I 11 DR. WILLIAMS NOW ACCEPTS MEDICAID FOR EYE EXAMS S964-2250 WAL*MART ieco. IC# CBC1254779 --]acclwlo 190 West Main St Lake Butler, FL 32054 SCapital City Bank is here for you. The financial industry is rapidly changing, yet the Capital. ^ *Capital City S Bank More than your bank. Your banker. I L. -w Integrity First, Last, Always Renovation, Remodeling New Construction Residential and Commercial Richard O. Tillis Contracting, Inc. 386-496-1360 Call for a Free Estimate FOR A LIMITED TIME CHOOSE FROM 8 FULL-SIZE DINNERS UNDER $8. Sliced Pork Bar-B-Q Chicken Pulled Beef Brisket Sliced Beef Smoked Turkey Pulled Pork NEW High Springs Chicken NEW Pulled Chicken Offer Go* Through 1O0/5/8 230 S. Temple Ave. Starke, FL 32091 904-964-8840 ww.sonnysbbq.com I I -- -- -~011~lllllllsCr-- -- ~L , - ' -., L I i _EGRAPh, Ilh iS & MONI i..--B-SECTION Page 11B KHHS Continued from page 9B ball on Keystone's first four offensive plays, gaining 28 yards. Thomas Ricketts had a 13-vard gain to the Interlachen 18, which was later followed by Robinson's 12-yard run to the 3. He scored from there with 6:52 on the clock. Tim Frysinger's PAT tied the game. At that point, it appeared as if the game was going to be a shootout. Interlachen picked up three first downs on its second possession, but the Indians forced a punt when its defense stepped up and made some plays. Dillon Van Wagner made a tackle for a loss of 2 yards, while his brother, Jacob Van Wagner, was in on a play that resulted in a loss of 2 yards. Garrett Strickland recovered a fumble on Interlachen's next series after the Rams had driven inside the Keystone 25. Keystone's offense began moving down the field and had a first al the Rams 45 after an 1 I-yard run by Robinson. A fumble, however, ended the drive. A halftime tie seemed possible as' time was expiring in the first half, but the Indians put another score on the board after a shanked Interlachen punt of 3 yards gave them the ball at the Rams 49. A 9-yard rut by Ricketts, along with a personal foul penalty on the Rams, moved the Indians to the 25. Three plays later, Robinson had his second score of the night on a 13-yard run. Frysinger's PAT put Keystone up 14-7 with 2:08 left in the half. Keystone forced the Rams to punt on the second half's opening series when Jacob Van Wagner held Brown to a 2-yard gain on third-and-4. The Indians, after the change of possession, were poised to pound the ball at the Rams, with Robinson gaining 6 yards on first down. However, Robinson was dropped for a I- yard loss on the next play, setting up third-and-5. Quarterback Brantley Lott then threw just his third-and last-pass of the night, hitting a leaping Latner between two defenders. Latner was able t6 shake loose and sprint a long way into the end zone, giving his team a two-touchdown lead. Shane Jennings sparked the Keystone defense when he dumped Brown for a 2-yard loss on the first play from scrimmage following Latner's 80-yard score. The Indians forced the Rams to go three- and-out. One play was all Keystone needed to get back into the end zone when .Robinson took a handoff and kept going, covering 80 yards for a score 'that put his team up 28-7 with 4:01 to play in the third quarter. The defense came up big again when Jacob Van Wagner recovered a fumble. That set up an eight-play, 40-yard scoring drive, which was capped by a 1-yard Robinson plunge into the end zone in the first minute of the fourth quarter. Keystone came close to tacking on another score, but a drive that began at its own 20 resulted in a turnover on downs at the Interlachen 22 in the game's final minute. Defensively, the Indians held Interlachen to 39 yards in the second half. Brown gained more than 100 yards in the first half, but was held to less than 40 yards in the second. Keystone will host last year's district runner-up Friday BY CLIFF SMELLEY Telegraph Staff Writer Keystone Heights, after getting its first win of the season, is looking to build \some momentum now that it has begun district play, but to move to 2-0 in District 3-2A, the Indians will have to defeat last year's runner-up. The Indians host Fernandina Beach Friday, Oct. 3, at 7:30 p.m. The Pirates (2-2, 0-1 in District 3) went 8-3 last year, winning their first six games, but they have been up and down so far this season. They opened the season with a 61-0 rout of Bishop Snyder, but followed that up with a 35-14 1tss to Bishop Kenny. Fernandina then defeated Baldwin 35-2, but then lost 19- 17 to district opponent Bradford last week (see related story). Against Bradford, Fernandina wasted an opportunity to take early control of the game. The Pirates, because of turnovers, had two possessions -start inside the Bradford 20 at the start of the game, but came away with just three points. The Pirates also committed five turnovers in the loss. Defensively, Fernandina allowed Bradford to engineer scoring drives of 72, 80 and 81 yards. One of the Pirates' key returning starters from last year is senior quarterback Emory Wingard, who picked apart the Keystone defense last year in leading his team to a 23-21 win. Wingard completed 21-of- 34 passes for 262 yards and three touchdowns ir that win. He was 5-of-7 for 70 yards on a 66-yard scoring drive in the final four minutes to give his team the win. The Indians, though, outgained the Pirates with 245 yards rushing and 117 passing (Fernandina had 302 total yards). However, Keystone squandered a first-half scoring opportunity with a first-and- goal at the 6-yard line. Two penalties ultimately forced Keystone to settle for a 32- yard field goal attempt. The kick was blocked. Keystone took the lead late in the fourth quarter after putting together an 80-yard drive that consumed more than five minutes. However, ithe drive still left Wingard and the Pirates plenty of time to come backhand retake the lead. fTTTTTVVVVV WVWITTVITT w TTVVVVVw1 VViWl t LVIM tIX5OM I S* Site Work Clearing *Excavation Ponds Fill & Sand Ball Diamond Clay* Limerock LL OP/IY Private Driveways-Tonsoil* Milling CWLL NE T TOBE (52) -O21 We Also do C C *E B Scp Stumpgrinding! sIc I I I Id, Yo a Mk ifeec VOTE JIM BIGGS.r ww .igsorislo Poiicladetseet adfo ndapovdbyJms .BipDmort o uprntnet fScol Score By Quarter KHHS 7' 7 14 IHS 7 0 0 7-35 0-7 Scoring Summary I: Brown 4 run (Baker kick) K: Robinson 3 run (Frysinger kick) K: Robinson 13 run (Frysinger kick) K: Latner 80 pass from Lott (Frysinger kick) K: Robinson 80 run (Frysinger kick) K: Robinson 1 run (Frysinger kick) Team Statistics K First Downs 17 Rushes/Yds. 44-285 Passing Yds. 94 Passes 3-3-0 Fumbles-Lost 3-2 I 7 40-157 0 0-2-0 2-2 CLOTH from 9 DENMARK FURNITURE "Serving thfe '-Area Since 1937" STOREI HOURS: 424 Wat M gtkoe{/4l M1ot. & Tues. 9-7 'e(, ,hru tri. 9-6 (904) 964W& S;dl ii n. 9-3 Keystone running back Marcel Robinson bulls his way through the Interlachen defense. GO GREEN BUILD ROUND 3bdrm 2bath New Homes from 50k Withstand winds over 150mph Cut Elecric Bil in 1/2 VENTURE DOMES [email protected] GREAT SELECTION OF CHAIRS... LEATHER & CLOTH! CASH IS KING Cash discounts for checks, cash or credit cards.. Credit terms available. 6 'e (904) 964ttSo27 C N a"s !6 ok (904) 964-5827 r | |l _ _I __ I Oct. 2 Clt 2 m Page 12B TELEGRAPH, TIMES & MONITOR--B-SECTION Oct. 2, 2uU8 Tigers lose 21-18 despite stellar performance from Perry BYCL-FFBSMEL-EY--S Telegraph Staff Writer Deven Perry outrushed the entire Newberry team, but it S wasn't enough for the-Union County football team, which lost 21-18 to last year's District 4-2B champs on Sept. 26 in Lake-Butler. Perry gained 188 yards on 22 carries, scoring twice as well as adding another touchdown on a reception. However, the Tigers' failure to convert on all three of their two-point plays following their three touchdowns would prove costly, as did a 92-yard interception return that set Newberry up for what would prove to be the game's decisive score. Union (2-2, 0-I in District 4) was poised to take the lead on the opening possession of the second half. The Tigers drove downfield and had a first-and- goal at the 10, but Newberry's Tay Ross intercepted a pass in the end zone, returning rt to the "Union 8-yard line. Newberry running back' Ryan Brown, who rushed for 1 I yards and two touchdowns on 26 carries, Deven Perry crosses the goal line for one of his three touchdowns against Newberry. -later scored from 2 yards out. The PAT put the Panthers up 21-12. The score remained unchanged until the fourth quarter when Perry scored on a 2-yard run. The play was set up by a Bryan Holmes reception off of a deflection. Approximately seven minutes remained in the game, but the Newberry offense wasv able to hold onto the ball all that time. The Tigers got big plays from Mason Dukes and Lonnie Gosha, who recorded a sack, but they allowed themselves to be drawn offsides on a fourth-down play, giving the Panthers a first down with approximately three minutes on the clock. That proved to be just the seventh first down for Newberry, which finished the game with 143 total yards. In comparison, Union totaled 348 yards and 18 first downs. Dukes, Brodie Ellis and Robbie Jarvis were the Tigers' top tacklers. Ellis and Gosha each had two tackles behind the line of scrimmage. The defense gave up a touchdown on the game's opening possession, hut came up big when the offense fumbled the ball away on its first series. That gave the Panthers the ball inside the Union 25 and a chance to go up 14-0. The Tigers forced the Panthers to settle for a field- goal try, which failed. Getting the ball back, the Union offense held onto it and drove down the field, eventually scoring on a 2-yard touchdown run by Perry. Perry was stopped on the ensuing two-point play, however, leaving the Tigers trailing 7-6. Perry was at it again on the next series. He had an 8-yard run to convert a fourth-down play, then scored on an II- yard reception from this story. Score By Quarter NHS 7 7 7 UCHS 0 12 0 0-18 6-21 quarterback Chris Alexander. The two-point play failed again, but the Tigers led 12-7. The lead was short-lived. There were offsetting penalties on the ensuing kickoff, so the Tigers were forced to kick again. Newberry's Brown fielded the second kick and returned it 89 yards for a touchdown. Newberry, with the PAT, went up 14-12, which was the score at the half. Perry's fourth-quarter touchdown gave him a team- high seven for the year. He leads the team in rushing also, having gained 383 yards on 52 carries. Union County Times staff writer Teresa Stone Irwin provided the information for Team Statistics N First Downs 7 Rushes/Yds. 45-143 Passing Yds. 0 Passes 0-0-0 Fumbles-Lost 3-0 UC 18 41-260 88 6-7-1 2-.1 Accepting All Insurances Natalia Shiriaeva, M.D. and Self-Pay Patients Family Medi,:ne Annual Preventative Health E ,ams Annual Well-Woman Exams SchoollSports Physical Primar Care tleurology Opiate and Alcohol Dependency Trearment Pain Management i i i ,i H I,,, i i i ,, i , ,, i L ' Struggling Raiders are next opponent for Union County BY CLIFF SMELLEY Telegraph Staff Writer Union County will play a larger school for the third time this season, but the Santa Fe Raiders' struggles the last three years have continued so far this season. The Tigers host the Raiders Friday, Oct. 3, at .7:30 p.m. Santa Fe (1-3) is coming off of a 35-14 loss to Dunnellon and has been outscored 71-20 in its past two games. Union and Santa Fe have .played one common opponent- Newberry. The Tigers are coming off of a 21- 18 loss to Newberry (see related story), while Santa Fe opened the season with a 47- 18 loss to the Panthers. The Raiders had 245 total yards of offense in that game, with senior quarterback Frank Snead, a returning starter from Medicare & Most Insurance Plans Accepted (904) 964-8076 last year, completing 1 I-of-27 passes for 128 yards and a touchdown. Jamal White, who had a 47-yard touchdown reception, caught four passes for 69 yards. Defensively, the Raiders allowed Newberry to gain 370 yards. Santa Fe also committed three turnovers. Following that loss, the Raiders defeated Middleburg 28-14, with Snead throwing three touchdowns and rushing for another. Santa Fe, though, would come back the following week afndlose 36-6 to Taylor County. Santa Fe entered this season with a 5-25 record over the last three seasons. The Raiders return five players from last year's 2-8' team-Snead, running back White and lineman 'Nick Owens on offense, and linebackers Vernis Jenkins and Justin pecidlists White on defense. The Raiders' two wins last season were both close affairs. They defeated Taylor County 20-19 and put together a strong second half in defeating Union 34-27. Union took a seven-point lead into the half, but the Raiders outscored the Tigers 20-6 in the second half. Santa Fe's winning score occurred on a third-and-9 play with 1:27 left in the game. The Tigers scored on touchdown runs. -by- Shandale-- Lee and Sammy Simmons, as well as getting a touchdown reception from Bryan Holmes. The defense also got into the act when Vinson Wintons returned a fumble. for a score. It was a pretty even game in terms of stats. Union rushed for 164 yards and passed for 120, while Santa Fe had 187 yards rushing and 110 passing. Saturday Morning Appointments Available 1105 S. Walnut St. (US 301 South) Starke, FL Kuabol,-o <,bolo Kbota tK l' t<,bol'a K, ',oa m2ssbol',:t, Kboti,3 Kjbol'^ STime to "Accessorize ZERO Down! ZERO % On New Kubota Tractors and Implements S0% for 36 Months All New Kubota Tractors, Implement; SAccessories and Selected Land Pride Implements S0% Down 0% for 42 Months NewKiuoFtaiTractor Implement, Accessory ZD, F BX B and Selected Land Pride Implements .-.- 3 0% for 54 Months New Kubota Tractor Implement. Accessory A L, M B21, B26 CE and Selected Land Pride Implements , S0% Down SII With Approved Credit Program ends f2-31-08 4502 NW 13th Street Gainesville A S HI.6(, Iai,,i ir d.. ii R(,HT.\uNinlo,,,,, Iiw(, i, I.' 35 2-376-4506 -' OPE: Mlnday. Friday: 8 P. -. 5 P.M., <,,,b ,, IJbol'Q I<,jbo',a I I Ni 9 Mum-. DO YOU IMOIf Smoking DO YOU DIP It DO YoU SP t "D 1 jyuwawtNq to eul ? FREE Free Group Sessions - cotine Replacement Mondays Oct. 20th Nov. 24th Patches provided! 6:00 7:30 pm at Bradford Public LIWaiy To Register call: Bradford CHD 904-964-7732 or 866-341-2730 I Quttline) "MOTORCRAFT BRAKES INSTALLED! AFTER $25 MAIL-INREBATE Get the brakes engineered for BEA your vehicle and save! ' WE'LL BEAT Dealer-nstaltd retail Mofcrat' or Gen Ford brake pads shoes only, limit one redempnon per ale. per customer YOUR BEST PRICE... M"mum rebate o 25pe r axle. by lPador es oly Srr Emos cars and lighl trucks Fron orrear axle.Excludes machining rotoror rums. Taxes extra Redemption orm mu II be pstmared by 1231108 See pe rpalng dealership GUARANTEED! =ean itr, mat in redemption certificate and delays through 11130108 , On all name-brand tires we sell MOTORCRAFT TESTED Including Goodyear, TOUGH" PLUS BATTERY Continental, Michelin and More 5 95 The right tire at the right price. $5 9 M2RP A C_ AFTER $20 MAIL-IN REBATE COOD E R ntlnenta Engineered for your vehicle. iWith 84-month warranty. n o. oae ,,r P er en a,,ae. ,, ,iii ,,, ., .,. Open Saturday! 1 I AFTER $10 MAIL-IN REBATE r Motorcraft Premium Synthetic inspectt broke system Blend Oil and Filter chang Test battery SRotate and inspect our tie Check belts and holes or Check air & cabin iir flters Top off all fluid On Up lo tfe uals oof Molocratt i and Molorcra oll fiter. Taxes dse vehicles and disposal lees extra. Hyrid bakery ust lest eduded Redemplon form musl be mailed by 12I31108. See participating dealership for madl-n redempon certl.caie. vehicle applicahons and deals through 1130/08. BUY TIRES, GET FUEL! BUY 4 MICHELIN or BF GOODRICH' TIRES AND GETA $50 GAS CARD ON US! DFGaodrfc GENdUINE 1I3447US HWIY 30 SU ,T KF-94610Wi W(.TOUFR.O orU NO PAYMENTS f,,- ONE YEAR'"-'? .'.-- MALTILE i CARPET ONE' FLOME 131 N. Cherry St., Starke (904) 964-7423 "W'c e your /trhihb,,'rhkst" Iftm" >I I I I S For a free room measure and financing P Oi THREE EASY WAYS TO SHOP. F0 1, preapproval visit Capene.comn t I--'L".J..J pre-approvwl visit CarpetOno.com, LteTHIMC IFSIALLTEN Sm61` ,lM, Ni IN i Mtll i. . .i i i ^'"i.'! i-'.riil .ai.< i, Jl t p loi I "i t l u[lrr 0i'1, 11C o wu t ot, truer ; A'tAtol ir i otw t F .w + 1t *l11"i 1,1it 1w 0t Palt i s Not "J, I BRADFORD COUNTY EYE CENTER isypeasedto announce the association of Dr. Michael Schlofman Optometrist Dr. Leonard Schlofman ~ Optometrist Dr. Kevin McAuliffe Opthalmologist Dr. James Staman Diabetic Retina Specialist i s~-------------- - L Save h Us! I I Scoring Summary N: Brown 22 run (DaFeo kick) U: Perry 2 run (run failed) U: Perry 11 pass,' from Alexander (run failed) ' N: Brown 89 kickoff return (DaFeo kick) N: Brown 2 run (DaFeo kick) U: Perry 2 run (run failed) t~;~Z~S~- Abet A -1i Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs
https://ufdc.ufl.edu/UF00028314/00193
CC-MAIN-2020-40
refinedweb
32,778
75.3
I am creating a view with Reduce function returning grouped Map results in JSON object. Clearly, I am exceeding the expected return (non-scalar) value by Reduce function. And getting "reduce_overflow_error". I have to set the reduce_limit to FALSE in order to avoid the error and make my view work. When I was making the change in 'default.ini' I read a comment there that "If you think you're hitting reduce_limit with a "good" reduce function, please let us know on the mailing list so we can fine tune the heuristic" Please advise what needs to be done here. -- Regards, Rohit Sharma
http://mail-archives.apache.org/mod_mbox/couchdb-dev/201402.mbox/%3CCAH9fQEUbPwUHrepvB2sCQ5kDDhxRYPYqD+_iwY42wZeHmCZOMw@mail.gmail.com%3E
CC-MAIN-2019-22
refinedweb
103
73.88
Update instructions to match current requirements diff --git a/README.google b/README.google index cfc4e18..b537f91 100644 --- a/README.google +++ b/README.google @@ -1,36 +1,52 @@ Name: Root certificates for trusted CAs Short Name: root_certificates URL: -Version: 0.1 -Date: June 29, 2015 +Version: 0.2 +Date: Jan 9, 2017 License: MPL/2.0, Description: This directory contains the root CA certificates chosen to be trusted by Mozilla's NSS library, reformatted into an array in C source code, to be used by the default SecurityContext obect in Dart's dart:io library for -secure networking (TLS, SSL). +secure networking (TLS, SSL) for operating systems that don't have a supported +certificate store. -The certificates are fetched from Mozilla with the command line +The files can be updated as follows: + +1. Fetch the certificates from Mozilla with this command line: curl -o certdata.txt -and then a derived file, with the certificates in PEM format, is created by -running the utility at: +2. Convert from Mozilla format to PEM format by running the utility +at: go run convert_mozilla_certdata.go > certdata.pem Note that this utility produces a warning about one certificate with a negative serial number. This is expected. -Comments are stripped from this file, to decrease the size of the string +3. Strip comments from this file to decrease the size of the string that will be compiled into the Dart executable: sed '/^#/d' ./certdata.pem > ./certdata.stripped -The stripped file is converted to a C character array with the xxd utility: +4. Convert the stripped file to a C character array with the xxd utility: xxd -i certdata.stripped > certdata.cc -And finally the name of the array is changed to root_certificates and the -MPL copyright header is added back to the file, creating root_certificates.cc. +5. Make the following changes to certdata.cc: + - Copy the MPL copyright header from root_certificates.cc to certdata.cc. + - Update the conversion date in the copyright comment. + - Copy the #ifdef/#endif and namespace declarations from + root_certificates.cc into certdata.cc. + - Rename the array variable from certdata_stripped to root_certificates_pem_. + - Rename the variable containing the array length from + certdata_stripped_length to root_certificates_pem_length. + - Above the declaration for root_certificates_pem_length, add this line: + const unsigned char* root_certificates_pem = root_certificates_pem_; + +6. Update root_certificates.cc as follows: + +mv certdata.cc root_certificates.cc +
https://dart.googlesource.com/root_certificates/+/050faf5c9180fd405fe7bde23dc4fd786f9f8964%5E%21/
CC-MAIN-2020-29
refinedweb
383
59.3
This is my GPA Calculator, for a homework assignment. Please share what you would have added or remove from the code. Arrays is a requirement in this code. # include <iostream> using namespace std; int main () { float b [100], c [100], a, a1, average_GPA, total_credits, Cummalative_GPA; cout <<"Enter The # of Courses: "; cin>>a; cout <<"-------------------------------------------- \n"; a1=0; total_credits=0; for (int i=1; i<=a;i++) { cout <<"Enter the credit hours for course "<< i <<": " ; cin >> b[i]; cout<<"Enter the grade you earned for the course \n"; cout<<"(A= 4, B= 3, C= 2, D= 1) :"; cin>>c[i]; cout <<"-------------------------------------------- \n"; average_GPA = b[i] * c[i]; a1 = average_GPA + a1;total_credits = b[i] + total_credits; } Cummalative_GPA = a1 / total_credits; cout<< "Your cummalative GPA for the semester is a \n" <<" (" << Cummalative_GPA <<")\n"; return (0); }
https://www.daniweb.com/programming/software-development/threads/96302/homework-gpa-calculator
CC-MAIN-2018-34
refinedweb
129
51.01
) Suthahar J(8) Jinal Shah(4) Gourav Jain(4) Syed Shanu(3) Viral Jain(3) Manpreet Singh(3) Vijai Anand Ramalingam(3) Ibrahim Ersoy(2) Sumit Singh Sisodia(2) Dennis Thomas(2) Mangesh Kulkarni(2) Ankit Sharma(2) Nirav Daraniya(2) Mani Gautam(2) Mahender Pal(2) Swatismita Biswal(2) John Kocer(2) Abubackkar Shithik(2) Tahir Naushad(2) Akshay Deshmukh(2) Shiv Sharma(1) Neel Bhatt(1) Prashant Kumar(1) Lakpriya Ganidu(1) P K Yadav(1) Jayanthi P(1) Kantesh Sinha(1) Shweta Lodha(1) Mushtaq M A(1) Gowtham K(1) Munish A(1) Puja Kose(1) Ahsan Siddique(1) Najuma Mahamuth(1) Bhuvanesh Mohankumar(1) Iqra Ali(1) Nithya Mathan(1) Sarath Jayachandran(1) Rahul Dagar(1) Areeba Moin(1) Ali Ahmed(1) Allen O'neill(1) Srashti Jain(1) Vincent Maverick Durano(1) Mohamed Elqassas Mvp(1) Lou Troilo(1) Santhakumar Munuswamy(1) Shantha Kumar T(1) Nilesh Shah(1) Talha Bin Afzal(1) Deepak Kaushik(1) Vinoth Rajendran(1) David Mccarter(1) Shakti Singh Dulawat(1) Yusuf Karatoprak(1) Hussain Patel(1) Ankit Saxena(1) Hamid Khan(1) Resources No resource found.. Navigation Drawer Activity In Android Dec 27, 2017. In this article, we will learn how to use a single navigation drawer for different activities. Getting Started With Azure Service Bus Dec 26, 2017. From this article you will learn an overview of Azure service bus and ow to create an Azure service bus namespace using the Azure portal. Dec 14, 2017.. Getting Started With Angular 5 Using Visual Studio Code Dec 07, 2017. In this article, we are going to set up Angular 5 app using Visual Studio Code. Audit Made Easy Without Audit Log - Part One Dec 07, 2017. In Microsoft SQL Server, the activity of each of the database table is tracked in the other table and that is called the Audit trail or Audit log of the database table. Callback Concept And Events In Node.js Dec 06, 2017. Hello friends, today I explain you about callbacks and events in Node JS. People who are new to Node JS please learn previous articles NodeJS - Getting Started With Some Basic!!. Getting Started With Angular 5 And ASP.NET Core Nov 13, 2017. I hope you all know that Angular 5 has been released. In this article, we will see how to start working with Angular 5 and ASP.NET Core using Angular5TemplateCore.. Leadership Challenge 002 - What Is Your Coaching Fitbit Oct 07, 2017. Many people have embraced the "Fitbit" craze. The company has great marketing: “Fitbit tracks every part of your day—including activity, exercise, food, weight and sleep—to help you find your fit, stay motivated, and see how small steps make a big impact.”. People wear it daily, slapping it on their wrist the second they get out of bed…or even sleeping with it Sensor Android App Using Android Studio Oct 03, 2017. In this article, I will show you how to create a Sensor Android App using Android studio. we are going to create a sensor application that changes the background color of an activity when a device is shaken. Migrate Your On-Premises / Enterprise Data Warehouse Into Azure SQL Data Warehouse Sep 21, 2017. I will share how you can start migrating your data into the Azure SQL Data Warehouse! How To Create A Camera Application In Android Using Android Studio Sep 12, 2017. Android is one of the most popular operating systems for mobile. In this article, I will show you how to start the camera application in Android using Android Studio ASP.NET Core 2.0. I Am A Programmer And I Love To Exercise Sep 04, 2017. here I am going to talk about fitness and health. I will start with very basic things with which you can improve your health and business.. Tables Sep 01, 2017. In this article, we will walk through some important concepts of Azure Tables through Emulator. Create the Azure Tables by using Microsoft Azure Storage Explorer... start-activity NA File APIs for .NET Aspose are the market leader of .NET APIs for file business formats – natively work with DOCX, XLSX, PPT, PDF, MSG, MPP, images formats and many more!
http://www.c-sharpcorner.com/tags/start-activity
CC-MAIN-2018-05
refinedweb
703
63.8
What is Python? Python is an interpreted, interactive object-oriented programming language; it incorporated modules, classes, exceptions, dynamic typing and high level data types. Python is also powerful when it comes to clear syntax. It is a high-level general-purpose programming language that can be applied to many different classes of problems — with a large standard library that encapsulates string processing (regular expressions, Unicode, calculating differences between files), Internet protocols (HTTP, FTP, SMTP, XML-RPC, POP, IMAP, CGI programming), software engineering (unit testing, logging, profiling, parsing Python code), and operating system interfaces (system calls, filesystems, TCP/IP sockets). Here are some of Python’s features: - An interpreted (as opposed to compiled) language. Contrary to C, for example, Python code does not need to be compiled (an object’s type can change during the course of a program). What does it mean to be an object-oriented language? Python is a multi-paradigm programming language. Meaning, it supports different programming approach. One of the popular approach to solve a programming problem is by creating objects. This is known as Object-Oriented Programming (OOP). An object has two characteristics: 1) attributes 2) behavior Let’s take an example: Dog is an object: a) name, age, color are data b) singing, dancing are behavior We call data as attributes and behavior as methods in object oriented programming. Again: Data → Attributes & Behavior → Methods The concept of OOP in Python focuses on creating reusable code. This concept is also known as DRY (Don’t Repeat Yourself). In Python, the concept of OOP follows some basic principles: Inheritance — A process of using details from a new class without modifying existing class. Encapsulation — Hiding the private details of a class from other objects. Polymorphism — A concept of using common operation in different ways for different data input. Class A class is a blueprint for the object. We can think of class as an sketch of a dog with labels. It contains all the details about the name, colors, size etc. Based on these descriptions, we can study about the dog. Here, dog is an object. The example for class of dog can be : class Dog: pass Here, we use class keyword to define an empty class Dog. From class, we construct instances. An instance is a specific object created from a particular class. A Class is the blueprint from which individual objects are created. In the real world we often find many objects with all the same type. Like cars. All the same make and model (have an engine, wheels, doors, …). Each car was built from the same set of blueprints and has the same components. Object. >>> a = 1 >>>>> print(a) I am a string now Every object has its own identity/ID that stores its address in memory. Every object has a type. An object can also hold references to other objects. For example, an integer will not have references to other objects but if the object is a list, it will contain references to each object within this list. We will touch up on this when we look at tuples later. The built-in function id() will return an object’s id and type() will return an object’s type: >>> list_1 = [1, 2, 3] # to access this object's value >>> list_1 [1, 2, 3] # to access this object's ID >>> id(list_1) 140705683311624 # to access object's data type >>> type(list_1) <class 'list'> So, an object (instance) is an instantiation of a class. When class is defined, only the description for the object is defined. Therefore, no memory or storage is allocated. The example for object of class Dog can be: obj = Dog() Here, obj is object of class Dog. Suppose we have details of Dog. Now, we are going to show how to build the class and objects of Dog. class Dog: #class attribute species = "animal" # instance attribute def __init__(self, name, age): self.name = name self.age = age # instantiate the Dog class blu = Dog("Blu", 10) woo = Dog("Woo", 15) # access the class attributes print("Blu is an {}".format(blu.__class__.species)) print("Woo is also an {}".format(woo.__class__.species)) # access the instance attributes print("{} is {} years old".format( blu.name, blu.age)) print("{} is {} years old".format( woo.name, woo.age)) When we run the program, the output will be: Blu is an animal Woo is also an animal Blu is 10 years old Woo is 15 years old In the above program, we create a class with name Dog. Then, we define attributes. The attributes are a characteristic of an object. Then, we create instances of the Dog. Let’s try to understand how value and identity are affected if you use operators “==” and “is” The “==” operator compares values whereas “is” operator compares identities. Hence, a is b is similar to id(a) == id(y), but two different objects may share the same value, but they will never share the same identity. Example: >>> a = ['blu', 'woof'] >>> id(a) 1877152401480 >>> b = a >>> id(b) 1877152401480 >>> id(a) == id(b) True >>> a is b True >>> c = ['blu', 'woof'] >>> a == c True >>> id(c) 1877152432200 >>> id(a) == id(c) False Hashability What is a hash? According to Python , “An object is hashable if it has a hash value which never changes during its lifetime”, if and only if the object is immutable. A hash is an integer that depends on an object’s value, and objects with the same value always have the same hash. (Objects with different values will occasionally have the same hash too. This is called a hash collision.) While id() will return an integer based on an object's identity, the hash() function will return an integer (the object's hash) based on the hashable object's value: >>> a = ('cow', 'bull') >>> b = ('cow', 'bull') >>> a == b True >>> a is b False >>> hash(a) 6950940451664727300 >>> hash(b) 6950940451664727300 >>> hash(a) == hash(b) True Immutable objects can be hashable, mutable objects can’t be hashable.This is important to know, because (for reasons beyond the scope of this post) only hashable objects can be used as keys in a dictionary or as items in a set. Since hashes are based on values and only immutable objects can be hashable, this means that hashes will never change during the object’s lifetime. Hashability will be covered more under the mutable vs immutable object section, as sometimes a tuple can be mutable and how does that change values and understanding of mutable objects and immutable objects. To summarize, EVERYTHING is an object in Python the only difference is some are mutable and some immutable. Wait but what kind of objects are possible in Python and which ones are mutable and which ones aren’t? Objects of built-in types like (bytes, int, float, bool, str, tuple, unicode, complex) are immutable. Objects of built-in types like (list, set, dict, array, bytearray) are mutable. Custom classes are mutable. To simulate immutability in a class, one should override attribute setting and deletion to raise exceptions. Now how would a newbie know which variables are mutable objects and which ones are not? For this we use 2 very handy built-in functions called id() and type() What is id() and type()? Syntax to use id() id(object) As we can see the function accepts a single parameter and is used to return the identity of an object. This identity has to be unique and constant for this object during the lifetime. Two objects with non-overlapping lifetimes may have the same id() value. If we relate this to C, then they are actually the memory address, here in Python it is the unique id. This function is generally used internally in Python. Examples: The output is the identity of the object passed. This is random but when running in the same program, it generates unique and same identity. Input : id(2507) Output : 140365829447504 Output varies with different runs Input : id("Holberton") Output : 139793848214784 What is an Alias? >>> a = 1 >>> id(a) 1904391232 >>> b = a #aliasing a >>> id(b) 1904391232 >>> b 1 An alias is a second name for a piece of data. Programmers use/ create aliases because it’s often easier and faster to refer data than to copy it. If the data that is being created and assigned is immutable then aliasing does not matter as the data won’t change, but there will be a lot of bugs if the data is mutable as it will lead to some issues like see below — >>> a = 1 >>> id(a) 1904391232 >>> b = a #aliasing a >>> id(b) 1904391232 >>> b 1 >>> a = 2 >>> id(2) 1904391264 >>> id(b) 1904391232 >>> b 1 >>> a 2 as it can be seen a now points to 2 and id is different as compared to b which is still pointing to 1. In Python, aliasing happens whenever one variable’s value is assigned to another variable, because variables are just names that store references to values. type() method returns class type of the argument(object) passed as parameter. type() function is mostly used for debugging purposes. Two different types of arguments can be passed to type() function, single and three argument. If single argument type(obj) is passed, it returns the type of given object. Syntax : type(object) We can find out what class an object belongs to using the built-in type()function: >>> Blue = [1, 2, 3] >>> type(Blue) <class 'list'> >>> def my_func(x) ... x = 89 >>> type(my_func) <class 'function'> Now that we can compare variables to see their type and id’s, we can dive in deeper to understand how mutable and immutable objects work. Mutable Objects vs. Immutable Objects Not all Python objects handle changes the same way. Some objects are mutable, meaning they can be altered. Others are immutable; they cannot be changed but rather return new objects when attempting to update. What does this mean when writing Python code? The following are some mutable objects: - list - dict - set - bytearray - user-defined classes (unless specifically made immutable) The following are some immutable objects: - int - float - decimal - complex - bool - string - tuple - range - frozenset - bytes The distinction is rather simple: mutable objects can change, whereas immutable objects cannot. Immutable literally means not mutable. A standard example are tuple and list: A tuple is filled on creation, and then is frozen - its content cannot change anymore. To a list, one can append elements, set elements and delete elements at any time. Although keep in mind exceptions: tuple is an immutable list whereas frozenset is an immutable set. Quoting stackoverflow answer- Tuples are indeed an ordered collection of objects, but they can contain duplicates and unhashable objects, and have slice functionality frozensets aren't indexed, but you have the functionality of sets - O(1) element lookups, and functionality such as unions and intersections. They also can't contain duplicates, like their mutable counterparts. Let’s create a dictionary with immutable objects for keys — >>> a = {‘blu’: 42, True: ‘woof’, (‘x’, ‘y’, ‘z’): [‘hello’]} >>> a.keys() dict_keys([‘blu’, True, (‘x’, ‘y’, ‘z’)]) As seen above keys in a are immutable, hashable objects, but if you try to call hash() on a mutable object(such as sets), or trying to use a mutable object for a dictionary key, an error will be raised: >>> spam = {['hello', 'world']: 42} Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'list' >>> d = {'a': 1} >>> spam = {d: 42} Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'dict' So, tuples, being immutable objects, can be used as dictionary keys? >>> spam = {('a', 'b', 'c'): 'hello'} Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'list' As seen above, if a tuple contains a mutable object, according to the previous explanation about hashability it cannot be hashed. So, immutable objects can be hashable, but this doesn’t necessarily mean they’re alwayshashable. And remember, the hash is derived from the object’s value. This is an interesting corner case: a tuple (which should be immutable) that contains a mutable list cannot be hashed. This is because the hash of the tuple depends on the tuple’s value, but if that list’s value can change, that means the tuple’s value can change and therefore the hash can change during the tuple’s lifetime. So far it is now understood that some tuples are hashable — immutable but some other tuple are not hashable — mutable. According to official Python documentation immutable and mutable are defined as — “An object with a fixed value” and “Mutable objects can change their value”. This can possibly mean that mutability is a property of objects, hence it makes sense that some tuples will be mutable while others won’t be. >>> a = ('dogs', 'cats', [1, 2, 3]) >>> b = ('dogs', 'cats', [1, 2, 3]) >>> a == b True >>> a is b False >>> a[2].append(99) >>> a ('dogs', 'cats', [1, 2, 3, 99]) >>> a == b False In this example, the tuples a and b have equal (==) values but are different objects, so when list is changed in tuple a the values get changed as a is not longer == b and did not change values of b. This example states that tuples are mutable. While Python tends towards mutability, there are many use-cases for immutability as well. Here are some straightforward ones: - Mutable objects are great for efficiently passing around data. Let’s say object antonand bertahave access to the same list. antonadds “lemons”to the list, and bertaautomatically has access to this information. If both would use a tuple, anton would have to copy the entries of his shopping-tuple, add the new element, create a new tuple, then send that to berta. Even if both can talk directly, that is a lot of work. - Immutable objects are great for working with the data. So bertais going to buy all that stuff - she can read everything, make a plan, and does not have to double check for changes. If next week, she needs to buy more stuff for the same shopping-tuple, bertajust reuses the old plan. She has the guarantee that antoncannot change anything unnoticed. If both would use a list, bertacould not plan ahead. She has no guarantee that “lemons”are still on the list when she arrives at the shop. She has no guarantee that next week, she can just repeat what was appropriate last week. You should generally use mutable objects when having to deal with growing data. For example, when parsing a file, you may append information from each line to a list. Custom objects are usually mutable, buffering data, adjusting to new conditions and so on. In general, whenever something can change, mutable objects are much easier. Immutable objects are sparingly used in python — usually, it is implicit such as using int or other basic, immutable types. Often, you will be using mutable types as de-facto immutable - many lists are filled at construction and never changed. There is also no immutable dict. You should enforce immutability to optimise algorithms, e.g. to do caching. Interestingly enough, python’s often-used dict requires keys to be immutable. It is a data structure that cannot work with mutable objects, since it relies on some features being guaranteed for its elements. Mutable example >>> my_list = [10, 20, 30] >>> print(my_list) [10, 20, 30] >>> my_list = [10, 20, 30] >>> my_list[0] = 40 >>> print(my_list) [40, 20, 30] Immutable example >>> tuple_ = (10, 20, 30) >>> print(tuple_) [10, 20, 30] >>> tuple_ = [10, 20, 30] >>> tuple_[0] = 40 >>> print(tuple_) Traceback (most recent call last): File "test.py", line 3, in < module > my_yuple[0] = 40 TypeError: 'tuple' object does not support item assignment. Interning, integer caching and everything called: NSMALLPOSINTS & NSMALLNEGINTS Easy things first — NSMALLNEGINTS is in the range -5 to 0 and NSMALLPOSINTS is in the 0 to 256 range. These are macros defined in Python — earlier versions ranged from -1 to 99, then -5 to 99 and finally -5 to 256. Python keeps an array of integer objects for “all integers between -5 and 256”. When creating an int in that range, it is actually just getting a reference to the existing object in memory. If x = 42, what happens actually is Python performing a search in the integer block for the value in the range -5 to 256. Once x falls out of the scope of this range, it will be garbage collected (destroyed) and be an entirely different object. The process of creating a new integer object and then destroying it immediately creates a lot of useless calculation cycles, so Python preallocated a range of commonly used integers. There are exception to immutable objects as stated above by making a tuple “mutable”. As it is known a new object is created each time a variable makes a reference to it, it does happen slightly differently for a few things - a) Strings without whitespaces and less than 20 characters b) Integers between -5 to 256 (including both as explained above) c) empty immutable objects (tuples) These objects are always reused or interned. This is due memory optimization in Python implementation. The rationale behind doing this is as follows: - Since programmers use these objects frequently, interning existing objects saves memory. - Since immutable objects like tuples and strings cannot be modified, there is no risk in interning the same object. So what does it mean by “interning”? interning allows two variables to refer to the same string object. Python automatically does this, although the exact rules remain fuzzy. One can also forcibly intern strings by calling the intern()function. Guillo’s articleprovides an in-depth look into string interning. Example of string interning with more than 20 characters or whitespace will be new objects: >>>>>>> a is b False but, if a string is less than 20 char and no whitespace it will look somewhat like this: >>>>>>> a is b True As a and b refer to the same objects. Let’s move on to integers now. As explained above in macro definition integer caching is happening because of preload python definition of commonly used integers. Hence, variables referring to an integer within the range would be pointing to the same object that already exists in memory: >>> a = 256 >>> b = 256 >>> a is b True This is not the case if the object referred to is outside the range: >>> a = 1024 >>> b = 1024 >>> a is b False Lastly, let’s talk about empty immutable objects: >>> a = () >>> b = () >>> a is b True Here a and b refer to the same object in memory as it is an empty tuple, but this changes if the tuple is not empty. >>> a = (1, 2) >>> b = (1, 2) >>> a == b True >>> a is b False Passing mutable and immutable objects into functions: Immutable and mutable objects or variables are handled differently while working with function arguments. In the following diagram, variables a, band name point to their memory locations where the actual value of the object is stored. Major Concepts of Function Argument Passing in Python Arguments are always passed to functions by object def foo1(a): # function block a += 1 print(‘id of a:’, id(a)) # id of y and a are same return a # main or caller block x = 10 y = foo1(x) # value of x is unchanged print(‘x:’, x) # value of y is the return value of the function foo1 # after adding 1 to argument ‘a’ which is actual variable ‘x’ print(‘y:’, y) print(‘id of x:’, id(x)) # id of x print(‘id of y:’, id(y)) # id of y, different from x Result: id of a: 1456621360 x: 10 y: 11 id of x: 1456621344 id of y: 1456621360
https://www.linux.com/blog/holberton/2018/6/python3-sometimes-Immutable-mutable-and-Everything-Object
CC-MAIN-2019-30
refinedweb
3,284
60.24
For the next few weeks I’ll be participating in Microsoft’s Frontline program. It’s a great opportunity for me to get out and meet with customers using our products. This week I was down in Los Colinas TX at Microsoft Customer Support Services group. These guys are awesome. It’s unbelievable what their able to diagnose from the sketchy information they get. One of their engineers was explaining a customer who could only communicate a memory dump over the phone. Talk about tough…. I spent most of my time with the ASP.Net support team. These engineers are experts at diagnosing problems in deployed applications. They often don’t get the luxury having a repro in house that they can attach a debugger to and catch the failure. More often than not they collect dump files from the customer site and perform a post-mortem analysis. They’re like forensic CSI experts. They provide support 24x7. While I was there I saw everything from simple configuration issues to complex hangs or memory leaks in distributed systems. There were some general issues I saw repeatedly. One of which was excessive memory consumption due to misuse of strings. There is a great article on MSDN covering performance Checkout the section on strings. In short don’t use concatenation += with Strings to build up a response page. Consider using Response.Write() directly or using StringBuilder to build up large strings. StringBuild is in the System.Text namespace. You’d be surprised how much of the overall memory consumption is in strings. They’ve developed some awesome tools to help narrow down the problems customers face. More often than not it is a problem somewhere in the application or in a 3rd party component. And yes, occasionally in ASP.Net. Many of the techniques they use are documented on MSDN. Check out this article:
http://blogs.msdn.com/b/bradleyb/archive/2005/04/23/411081.aspx
CC-MAIN-2014-23
refinedweb
312
69.18
I have a python property (created with the builtin type "property") named <key> in a class derived from NSObject. Getting and setting the property seem to work just fine with the normal python syntax. Key-value coding ends up calling the getter and setter (as long as there is not a variable named '_<key>'; if there is, it gets set directly without the setter being called). However, if I call willChangeValueForKey_ on that class, with <key> as argument, the lookup fails and valueForUndefinedKey_ is called. Now, functionality-wise, everything seems to go back to working if I also implement `valueForUndefinedKey_`. In this case, it's an exceedingly simple implementation: def valueForUndefinedKey_(self, key): return getattr(self,key) Can anyone shed light on why the lookup for willChangeValueForKey_ fails? Is there any other way of solving this issue other than implementing valueForUndefinedKey_ as I have above? Thanks! Matt
http://sourceforge.net/p/pyobjc/mailman/message/21698449/
CC-MAIN-2015-35
refinedweb
148
51.07
(‘macro’,’regex’) from surlex.dj import surl This function allows one to use surlex syntax in the urlpatterns and simply put surl() around it instead of url() like so: surl(r’^<project:s>/photos/<photo:#>$’, ‘show_photo’,name=’show_photo’), This is a real URL pattern that I’ve put here. The ‘normal’ way to do this would have been: url(r’^(?P<project>[\w-]+)/photos/(?P<photo>\d+)$’, ‘show_photo’,name=’show_photo’), That may make perfect sense to anyone who understands named groups in regex, but I would rather not have to type such a complicated syntax for such a common task in Django. register_macro(‘macro’,’regex’) This one is really useful for all sorts of macros. I recently wrote a yardbird-based IRC bot for work and needed a bunch of macros for the irc message resolving portion: name = settings.YARDBIRD_NAME register_macro(‘ch’,’#[\w-]+’) register_macro(‘nm’,'[\w_-]+’) register_macro(‘botmsg’,'(?:chanmsg/%s[:,. ] ?|privmsg/)’ % name) register_macro(‘msg’,'(?:chanmsg|privmsg)’) To be honest, I’ve severely modified Yardbird and will eventually send the patches back, so this stuff doesn’t make as much sense without my patches that increase the flexibility of Yardbird. The ‘ch’ macro is for matching IRC channels, ‘nm’ is for nicknames, ‘botmsg’ is for messages directed to the bot, and ‘msg’ is for messages in the channel or to the bot versus other types of messages, e.g. nick changes come in as ‘nick/’ so we don’t want to act on those as possible commands. The patterns for these macros are really simply and much easier to read as well: surl(r’^<:botmsg>tell <nick:nm> <message=.*>$’, ‘tell’), That command is for telling the bot to later tell a user a message when they log in again. It checks if the command was to the bot through ‘botmsg’, sees the command as ‘tell’ and the command has two parameters, ‘user’ and ‘message’ the latter of which matches the rest of the statement. All of this is accessible from a very quick glance at the pattern instead of attempting to read this: ‘^(?:chanmsg/bot_name[:,. ] ?|privmsg/)tell (?P<nick>[\\w_-]+) (?P<message>.*)$’ The length of that regex is 77 characters, more than twice the 38 characters of the surlex pattern. As I said before, Surlex is Awesome.
http://fahhem.com/blog/2010/09/surlex-is-awesome/
CC-MAIN-2015-14
refinedweb
376
60.24
John. I'd like to thank my family for their continuous support. Elena Renard and Joakim Erdfelt for their many contributions to the book. Ruel Loehr. Jason. and the teammates during my time at Softgal. Emmanuel Venisse and John Tolentino. Stephane Nicoll. Allan Ramirez. Felipe Leme. Also. Thanks also to all the people in Galicia for that delicious food I miss so much when traveling around the world. Napoleon Esmundo C. Tim O'Brien. Lester Ecarma. Bill Dudney. Abel Rodriguez. It is much appreciated. Carlos Sanchez Many thanks to Jesse McConnell for his contributions to the book. All of us would like to thank Lisa Malgeri. we would like to thank all the reviewers who greatly enhanced the content and quality of this book: Natalie Burdick.I would like to thank professor Fernando Bellas for encouraging my curiosity about the open source world. especially my parents and my brother for helping me whenever I needed. Fabrice Bellingard. Mark Hobson. Chris Berry. Ramirez. David Blevins. for accepting my crazy ideas about open source. Brett and Carlos . Vincent. Jerome Lacoste. Finally. and in 2005. John Casey became involved in the Maven community in early 2002. When he's not working on Maven. where he is the technical director of Pivolis. roasting coffee. Brett has become involved in a variety of other open source projects. John enjoys amateur astrophotography. Australia. and is a Member of the Apache Software Foundation. software development. Additionally. Florida with his wife. when he began looking for something to make his job as Ant “buildmeister” simpler. Carlos Sanchez received his Computer Engineering degree in the University of Coruña. discovering Maven while searching for a simpler way to define a common build process across projects. Emily. In addition to his work on Maven. He enjoys cycling and raced competitively when he was younger. which has led to the founding of the Apache Maven project. joining the Maven Project Management Committee (PMC) and directing traffic for both the 1. financial. This is Vincent's third book. a company which specializes in collaborative offshore software development using Agile methodologies. CSSC. Jason van Zyl focuses on improving the Software Development Infrastructure associated with medium to large scale projects. as well as to various Maven plugins. Brett Porter has been involved in the Apache Maven project since early 2003. of course..About the Authors Vincent Massol has been an active participant in the Maven community as both a committer and a member of the Project Management Committee (PMC) since Maven's early days in 2002. where he hopes to be able to make the lives of other developers easier. John was elected to the Maven Project Management Committee (PMC). Brett is a co-founder and the Director of Engineering at Mergere. specializing in open source consulting. Vincent has directly contributed to Maven's core. He continues to work directly on Maven and serves as the Chair of the Apache Maven Project Management Committee. . John lives in Gainesville. supporting both European and American companies to deliver pragmatic solutions for a variety of business problems in areas like e-commerce.. and today a large part of John's job focus is to continue the advancement of Maven as a premier software development tool.0 major releases. he founded the Jakarta Cactus project-a simple testing framework for server-side Java code and the Cargo project-a J2EE container manipulation framework. Jason van Zyl: As chief architect and co-founder of Mergere. published by O'Reilly in 2005 (ISBN 0-596-00750-7). telecommunications and. his focus in the Maven project has been the development of Maven 2. published by Manning in 2003 (ISBN 1-930-11099-5) and Maven: A Developer's Notebook. Build management and open source involvement have been common threads throughout his professional career. Brett became increasingly involved in the project's development. he is a co-author of JUnit in Action. Inc.0 and 2. Immediately hooked. Spain. Vincent lives and works in Paris. He was invited to become a Maven committer in 2004. Inc. He is grateful to work and live in the suburbs of Sydney. Since 2004. He created his own company. and working on his house. and started early in the open source technology world. This page left intentionally blank. . Maven’s Principles 1. Using Project Inheritance 3.1. Coherent Organization of Dependencies Local Maven repository Locating dependency artifacts 22 22 23 24 25 26 27 27 28 28 28 31 32 34 1. Packaging and Installation to Your Local Repository 2. Creating Applications with Maven 38 39 40 42 44 46 48 49 52 53 54 55 3.2. Using Snapshots 3.6. Resolving Dependency Conflicts and Using Version Ranges 3.2.2.4.1.2. Using Maven Plugins 2. Convention Over Configuration Standard Directory Layout for Projects One Primary Output Per Project Standard Naming Conventions 1. Reuse of Build Logic Maven's project object model (POM) 1. Handling Classpath Resources 2.3.8. What is Maven? 1.1.1.3. Preventing Filtering of Binary Resources 2.6.1.4.1. Maven's Origins 1. Compiling Application Sources 2.1. Maven Overview 1. Introducing Maven 17 21 1. Setting Up an Application Directory Structure 3.1.2. Introduction 3. Getting Started with Maven 35 37 2. Summary 3. What Does Maven Provide? 1.3. Utilizing the Build Life Cycle 3. Creating Your First Maven Project 2. Compiling Test Sources and Running Unit Tests 2.5.5.2. Filtering Classpath Resources 2.1.2.3.6.6.Table of Contents Preface 1.3.8. Handling Test Classpath Resources 2. Maven's Benefits 2.2.2. Preparing to Use Maven 2.6.7. Using Profiles 56 56 59 61 64 65 69 70 9 .3.7. Managing Dependencies 3. Bootstrapping into Plugin Development 5. Summary 5.9. Deploying with SSH2 3. Building a Web Application Project 4.2. Building J2EE Applications 74 74 75 75 76 77 78 84 85 4. Introduction 4.3.9. Deploying with an External SSH 3. Deploying with FTP 3.11. Introduction 5.9. Plugin Development Tools Choose your mojo implementation language 5. Deploying Web Applications 4. Deploying with SFTP 3. Improving Web Development Productivity 4.3.2.2.4.1. Deploying to the File System 3.3.9.10. BuildInfo Example: Notifying Other Developers with an Ant Mojo The Ant target The mojo metadata file 141 141 141 142 142 145 146 147 148 148 149 10 .5. Testing J2EE Application 4.4.13.14.3. Deploying a J2EE Application 4.9.3. Developing Custom Maven Plugins 86 86 87 91 95 100 103 105 108 114 117 122 126 132 133 5.11.3. Deploying EJBs 4.3. BuildInfo Example: Capturing Information with a Java Mojo Prerequisite: Building the buildinfo generator project Using the archetype plugin to generate a stub plugin project The mojo The plugin POM Binding to the life cycle The output 5. Organizing the DayTrader Directory Structure 4.2. Introducing the DayTrader Application 4.1. Building an EAR Project 4.6.3.10.4. Creating a Web Site for your Application 3.12.9.5.9. Deploying your Application 3.1.7.2. The Plugin Framework Participation in the build life cycle Accessing build information The plugin descriptor 5. Summary 4.1. Building a Web Services Client Project 4. Building an EJB Project 4.4.1. Developing Your First Mojo 5.4. A Review of Plugin Terminology 5. A Note on the Examples in this Chapter 134 134 135 135 136 137 137 138 140 140 5.8. Building an EJB Module With Xdoclet 4. 3.5. Accessing Project Sources and Resources Adding a source directory to the build Adding a resource to the build Accessing the source-root list Accessing the resource list Note on testing source-roots and resources 5.2. Summary 6.8. Advanced Mojo Development 5.2. Monitoring and Improving the Health of Your Releases 6.9.7.12. Monitoring and Improving the Health of Your Source Code 6. What Does Maven Have to do With Project Health? 6. Attaching Artifacts for Installation and Deployment 153 153 154 154 155 156 157 158 159 160 161 163 163 5. Creating a Standard Project Archetype 7. Assessing Project Health with Maven 165 167 6.1.1. The Issues Facing Teams 7.6.2.5.2.5. Cutting a Release 7. Creating a Shared Repository 7. Adding Reports to the Project Web site 6.7. Summary 8.5. Summary 7. Separating Developer Reports From User Documentation 6.5.1. Creating POM files 242 242 244 250 11 .3. Viewing Overall Project Health 6.9.Modifying the plugin POM for Ant mojos Binding the notify mojo to the life cycle 150 152 5. Where to Begin? 8. Creating Reference Material 6. Introduction 8.8.1.4.6.11.1. Introducing the Spring Framework 8. Choosing Which Reports to Include 6.4.3. Continuous Integration with Continuum 7. Creating an Organization POM 7. Team Collaboration with Maven 168 169 171 174 180 182 186 194 199 202 206 206 207 7.6.4. Monitoring and Improving the Health of Your Dependencies 6.5. Migrating to Maven 208 209 212 215 218 228 233 236 240 241 8.1. Accessing Project Dependencies Injecting the project dependency set Requiring dependency resolution BuildInfo example: logging dependency versions 5. Configuration of Reports 6.5. Gaining Access to Maven APIs 5. How to Set up a Consistent Developer Environment 7.3. Monitoring and Improving the Health of Your Tests 6.10. Team Dependency Management Using Snapshots 7. Ant Metadata Syntax Appendix B: Standard Conventions 272 272 273 273 274 274 278 278 279 279 283 B.2. Using Ant Tasks From Inside Maven 8.6. Mojo Parameter Expressions A. The site Life Cycle Life-cycle phases Default Life Cycle Bindings 266 266 266 268 269 270 270 270 271 271 271 A. Compiling Tests 8.7. Some Special Cases 8.6.2.6.3. The default Life Cycle Life-cycle phases Bindings for the jar packaging Bindings for the maven-plugin packaging A.5.8.2.2. Standard Directory Structure B.5.1.1.4.2. Testing 8. Summary Appendix A: Resources for Plugin Developers 250 254 254 256 257 257 258 258 261 263 263 264 264 265 A.4. Running Tests 8. Building Java 5 Classes 8. Complex Expression Roots A.2. Restructuring the Code 8. Other Modules 8.1.2.5.1. Maven’s Super POM B. Non-redistributable Jars 8.1. Compiling 8.2.1.6.6. Java Mojo Metadata: Supported Javadoc Annotations Class-level annotations Field-level annotations A.8.6.1.2. Referring to Test Classes from Other Modules 8.3. Maven’s Default Build Life Cycle Bibliography Index 284 285 286 287 289 12 . Simple Expressions A. The clean Life Cycle Life-cycle phases Default life-cycle bindings A. Avoiding Duplication 8.1.1.2.4.3.2. Maven's Life Cycles A. The Expression Resolution Algorithm Plugin metadata Plugin descriptor syntax A.6.5.5.6. This page left intentionally blank. 16 . this guide is written to provide a quick solution for the need at hand.Preface Preface Welcome to Better Builds with Maven. Maven 2 is a product that offers immediate value to many users and organizations. For first time users. Perhaps. it is recommended that you step through the material in a sequential fashion. but Maven shines in helping teams operate more effectively by allowing team members to focus on what the stakeholders of a project require -leaving the build infrastructure to Maven! This guide is not meant to be an in-depth and comprehensive resource but rather an introduction. 17 . This guide is intended for Java developers who wish to implement the project management and comprehension capabilities of Maven 2 and use it to make their day-to-day work easier and to get help with the comprehension of any Java-based project. As you will soon find.x). For users more familiar with Maven (including Maven 1. Maven works equally well for small and large projects.0. an indispensable guide to understand and use Maven 2. reading this book will take you longer. which provides a wide range of topics from understanding Maven's build platform to programming nuances. We hope that this book will be useful for Java project managers as well. it does not take long to realize these benefits. Finally. From there. discusses Maven's monitoring tools. Introducing Maven. and install those JARs in your local repository using Maven. and document for reuse the artifacts that result from a software project. including a review of plugin terminology and the basic mechanics of the Maven plugin framework. After reading this second chapter. At the same time. Chapter 7. Chapter 3 builds on that and shows you how to build a real-world project. looks at Maven as a set of practices and tools that enable effective team communication and collaboration. illustrates Maven's best practices and advanced uses by working on a real-world example application. Web Services). you will be revisiting the Proficio application that was developed in Chapter 3. you will be able to keep your current build working. In this chapter. compiling and packaging your first project. goes through the background and philosophy behind Maven and defines what Maven is. EAR. In this chapter you will learn to set up the directory structure for a typical application and the basics of managing an application's development with Maven. Creating Applications with Maven. These tools aid the team to organize. Assessing Project Health with Maven. the chapter covers the tools available to simplify the life of the plugin developer. Chapter 6 discusses project monitoring issues and reporting. After reading this chapter. Team Collaboration with Maven. Building J2EE Applications. Chapter 7 discusses using Maven in a team development environment. reporting tools. create JARs. explains a migration path from an existing build in Ant to Maven. you will be able to take an existing Ant-based build. Chapter 1. how to use Maven to build J2EE archives (JAR. Chapter 6. Migrating to Maven. they discuss what Maven is and get you started with your first Maven project. it discusses the various ways that a plugin can interact with the Maven build environment and explores some examples.Better Builds with Maven Organization The first two chapters of the book are geared toward a new user of Maven 2. Chapter 2. At this stage you'll pretty much become an expert Maven user. Chapter 3. and how to use Maven to generate a Web site for your project. Chapter 8. Chapter 5. and how to use Maven to deploy J2EE archives to a container. You will learn how to use Maven to ensure successful team development. shows how to create the build for a full-fledged J2EE application. focuses on the task of writing custom plugins. Chapter 4. and learning more about the health of the project. and Chapter 8 shows you how to migrate Ant builds to Maven. Chapter 4 shows you how to build and deploy a J2EE application. EJB. Chapter 5 focuses on developing plugins for Maven. compile and test the code. It starts by describing fundamentals. Getting Started with Maven. Developing Custom Maven Plugins. WAR. 18 . split it into modular components if needed. visualize. gives detailed instructions on creating. you should be up and running with Maven. Once at the site. so occasionally something will come up that none of us caught prior to publication. go to. post an update to the book’s errata page and fix the problem in subsequent editions of the book.com and locate the View Book Errata link.mergere.com. On this page you will be able to view all errata that have been submitted for this book and posted by Maven editors. We offer source code for download. 19 .mergere. click the Get Sample Code link to obtain the source code for the book. However. You can also click the Submit Errata link to notify us of any errors that you might have found. We’ll check the information and.0 installed.Preface Errata We have made every effort to ensure that there are no errors in the text or in the code.com. if appropriate. So if you have Maven 2. errata. How to Contact Us We want to hear about any errors you find in this book.mergere. and technical support from the Mergere Web site at. we are human. Simply email the information to community@mergere. then you're ready to go. How to Download the Source Code All of the source code used in this book is available for download at. To find the errata page for this book.com. Better Builds with Maven This page left intentionally blank. 20 .. but not any simpler. .Albert Einstein 21 . Better Builds with Maven 1. to view it in such limited terms is akin to saying that a web browser is nothing more than a tool that reads hypertext. What is Maven? Maven is a project management framework. While you are free to use Maven as “just another build tool”. and software. “Well.” 22 . distribution. This book focuses on the core tool produced by the Maven project. but the term project management framework is a meaningless abstraction that doesn't do justice to the richness and complexity of Maven. and deploying project artifacts. In addition to solving straightforward. testing. 1 You can tell your manager: “Maven is a declarative project management tool that decreases your overall time to market by effectively leveraging cross-project intelligence. It simultaneously reduces your duplication effort and leads to higher code quality . they expect a short. it will prime you for the concepts that are to follow. to distribution. Maven also brings with it some compelling second-order benefits. It is a combination of ideas. sound-bite answer. Perhaps you picked up this book because someone told you that Maven is a build tool. Maven 2. you can stop reading now and skip to Chapter 2. what exactly is Maven? Maven encompasses a set of build standards. From compilation. richer definition of Maven read this introduction. standards. and with repetition phrases such as project management and enterprise software start to lose concrete meaning. It provides a framework that enables easy reuse of common build logic for all projects following Maven's standards. Don't worry.1. Revolutionary ideas are often difficult to convey with words.. uninspiring words. to team collaboration. and many developers who have approached Maven as another build tool have come away with a finely tuned build system. Maven provides the necessary abstractions that encourage reuse and take much of the work out of project builds. 1. first-order problems such as simplifying builds. but this doesn't tell you much about Maven. So. It defines a standard life cycle for building. Maven Overview Maven provides a comprehensive approach to managing software projects. and the technologies related to the Maven project. If you are reading this introduction just to find something to tell your manager1. Maven can be the build tool you need. When someone wants to know what Maven is. it is a build tool or a scripting framework. to documentation. a framework that greatly simplifies the process of managing a software project. are beginning to have a transformative effect on the Java community.1. an artifact repository model. If you are interested in a fuller. Too often technologists rely on abstract phrases to capture complex topics in three or four words. It's the most obvious three-word definition of Maven the authors could come up with.1. You may have been expecting a more straightforward answer. and the deployment process.” Maven is more than three boring. documentation. Maven. The build process for Tomcat was different than the build process for Struts. distribution. While there were some common themes across the separate builds. the barrier to entry was extremely high. Maven entered the scene by way of the Turbine project. as much as it is a piece of software. generating documentation. common build strategies. for a project with a difficult build system.Introducing Maven As more and more projects and products adopt Maven as a foundation for project management.2. to answer the original question: Maven is many things to many people. Developers at the ASF stopped figuring out creative ways to compile. Prior to Maven. and package software. developers were building yet another build system. Soon after the creation of Maven other projects. and Web site generation. each community was creating its own build systems and there was no reuse of build logic across projects. Instead of focusing on creating good component libraries or MVC frameworks. predictable way. In addition. Once you get up to speed on the fundamentals of Maven. started focusing on component development. and instead. Ultimately. 1. this copy and paste approach to build reuse reached a critical tipping point at which the amount of work required to maintain the collection of build systems was distracting from the central task of developing high-quality software. they did not have to go through the process again when they moved on to the next project. If you followed the Maven Build Life Cycle. Maven provides standards and a set of patterns in order to facilitate project management through reusable. This lack of a common approach to building software meant that every new project tended to copy and paste another project's build system. So. Maven is a way of approaching a set of software as a collection of highly-interdependent components. The same standards extended to testing. Using Maven has made it easier to add external dependencies and publish your own project components. Whereas Ant provides a toolbox for scripting builds. your project gained a build by default. it becomes easier to understand the relationships between projects and to establish a system that navigates and reports on these relationships. test. Maven is not just a build tool. and deploying. The ASF was effectively a series of isolated islands of innovation. and not necessarily a replacement for Ant.1. you will wonder how you ever developed without it. It is a set of standards and an approach to project development. and the Turbine developers had a different site generation process than the Jakarta Commons developers. Maven's Origins Maven was borne of the practical desire to make several projects at the Apache Software Foundation (ASF) work in the same. which can be described in a common format. projects such as Jakarta Taglibs had (and continue to have) a tough time attracting developer interest because it could take an hour to configure everything in just the right way. generating metrics and reports. and it immediately sparked interest as a sort of Rosetta Stone for software project management. Many people come to Maven familiar with Ant. knowing clearly how they all worked just by understanding how one of the components worked. so it's a natural association. but Maven is an entirely different creature from Ant. Developers within the Turbine project could freely move between subcomponents. Once developers spent time learning how one project was built. 23 . Maven's standard formats enable a sort of "Semantic Web" for programming projects. It is the next step in the evolution of how individuals and organizations collaborate to create software systems. Maven's standards and centralized repository model offer an easy-touse naming system for projects. such as Jakarta Commons. every project at the ASF had a different approach to compilation. the Codehaus community started to adopt Maven 1 as a foundation for project management. more reusable. 1. install) is effectively delegated to the POM and the appropriate plugins. to provide a common layout for project documentation. if your project currently relies on an existing Ant build script that must be maintained.3. The key value to developers from Maven is that it takes a declarative approach rather than requiring developers to create the build process themselves. if you've learned how to drive a Jeep. assemble. What Does Maven Provide? Maven provides a useful abstraction for building software in the same way an automobile provides an abstraction for driving. Maven takes a similar approach to software projects: if you can build one Maven project you can build them all. Maven provides you with: • A comprehensive model for software projects • Tools that interact with this declarative model Maven provides a comprehensive model that can be applied to all software projects. and much more transparent. An individual Maven project's structure and contents are declared in a Project Object Model (POM). and if you can apply a testing plugin to one project. in order to perform the build. Much of the project management and build orchestration (compile. and output. documentation. and you gain access to expertise and best-practices of an entire industry. more maintainable. which forms the basis of the entire Maven system. declarative build approach tend to be more transparent. existing Ant scripts (or Make files) can be complementary to Maven and used through Maven's plugin architecture. You describe your project using Maven's model. 24 . you can apply it to all projects. and to retrieve project dependencies from a shared storage area makes the building process much less time consuming. and easier to comprehend.1. Plugins allow developers to call existing Ant scripts and Make files and incorporate those existing functions into the Maven build life cycle. the car provides a known interface. referred to as "building the build". test. Projects and systems that use Maven's standard. Maven allows developers to declare life-cycle goals and project dependencies that rely on Maven’s default structures and plugin capabilities. you can easily drive a Camry. When you purchase a new car. Developers can build any given project without having to understand how the individual plugins work (scripts in the Ant world). and the software tool (named Maven) is just a supporting element within this model.Better Builds with Maven However. Maven’s ability to standardize locations for source files. Given the highly inter-dependent nature of projects in open source. The model uses a common project “language”. it is improbable that multiple individuals can work productively together on a project. when code is not reused it is very hard to create a maintainable system. and focus on building the application. You will see these principles in action in the following chapter. • • • Without these advantages. but also for software components.Organizations that adopt Maven can stop “building the build”. there is little chance anyone is going to comprehend the project as a whole.Maven is built upon a foundation of reuse. Maven projects are more maintainable because they follow a common. 25 .Maven allows organizations to standardize on a set of best practices. Without visibility it is unlikely one individual will know what another has accomplished and it is likely that useful code will not be reused. As mentioned earlier. and aesthetically consistent relation of parts. home-grown build systems. When you adopt Maven you are effectively reusing the best practices of an entire industry. along with a commensurate degree of frustration among team members. This is a natural effect when processes don't work the same way for everyone. when you create your first Maven project. Further.Maven lowers the barrier to reuse not only for build logic. The definition of this term from the American Heritage dictionary captures the meaning perfectly: “Marked by an orderly. Each of the principles above enables developers to describe their projects at a higher level of abstraction. logical. Developers can jump between different projects without the steep learning curve that accompanies custom. publicly-defined model. 1. Agility .2. Because Maven projects adhere to a standard model they are less opaque. Maven’s Principles According to Christopher Alexander "patterns help create a shared language for communicating insight and experience about problems and their solutions". Maven makes it is easier to create a component and then integrate it into a multi-project build. Maintainability .“ Reusability . This chapter will examine each of these principles in detail.Introducing Maven Organizations and projects that adopt Maven benefit from: • Coherence . Maven provides a structured build life cycle so that problems can be approached in terms of this structure. The following Maven principles were inspired by Christopher Alexander's idea of creating a shared language: • Convention over configuration • Declarative execution • Reuse of build logic • Coherent organization of dependencies Maven provides a shared language for software development projects. allowing more effective communication and freeing team members to get on with the important work of creating value at the application level. When everyone is constantly searching to find all the different bits and pieces that make up a project. As a result you end up with a lack of shared knowledge. or deploying. the notion that we should try to accommodate as many approaches as possible. makes it easier to communicate to others. and allows you to create value in your applications faster with less effort. Convention Over Configuration One of the central tenets of Maven is to provide sensible default strategies for the most common tasks. With Maven you slot the various pieces in where it asks and Maven will take care of almost all of the mundane aspects for you.2. you gain an immense reward in terms of productivity that allows you to do more. Rails does. which all add up to make a huge difference in daily use. This is not to say that you can't override Maven's defaults. One characteristic of opinionated software is the notion of 'convention over configuration'. The class automatically knows which table to use for persistence. but the use of sensible default strategies is highly encouraged.”2 David Heinemeier Hansson articulates very well what Maven has aimed to accomplish since its inception (note that David Heinemeier Hansson in no way endorses the use of Maven. Well. and I believe that's why it works. you're rewarded by not having to configure that link. that we shouldn't pass judgment on one form of development over another.Better Builds with Maven 1. sooner. All of these things should simply work. 2 O'Reilly interview with DHH 26 . If you follow basic conventions. he probably doesn't even know what Maven is and wouldn't like it if he did because it's not written in Ruby yet!): that is that you shouldn't need to spend a lot of time getting your development infrastructure functioning Using standard conventions saves time. generating documentation. One of those ideals is flexibility. such as classes are singular and tables are plural (a person class relates to a people table). you trade flexibility at the infrastructure level to gain flexibility at the application level. You don’t want to spend time fiddling with building.. so stray from these defaults when absolutely necessary only. and better at the application level. It eschews placing the old ideals of software in a primary position. so that you don't have to think about the mundane details. and this is what Maven provides. If you are happy to work along the golden path that I've embedded in Rails. With Rails.1. We have a ton of examples like that. One Primary Output Per Project The second convention used by Maven is the concept that a single Maven project produces only one primary output. and a project for the shared utility code portion. separate projects: a project for the client portion of the application. In this scenario. the code contained in each project has a different concern (role to play) and they should be separated. Maven pushes you to think clearly about the separation of concerns when setting up your projects because modularity leads to reuse. but. If you have placed all the sources together in a single project. but you can also take a look in Appendix B for a full listing of the standard conventions. server code. You will be able to look at other projects and immediately understand the project layout. extendibility and reusability.consider a set of sources for a client/server-based application that contains client code. It is a very simple idea but it can save you a lot of time. when you do this. The separation of concerns (SoC) principle states that a given problem involves different kinds of concerns. makes it much easier to reuse. the boundaries between our three separate concerns can easily become blurred and the ability to reuse the utility code could prove to be difficult. you need to ask yourself if the extra configuration that comes with customization is really worth it. To illustrate. and documentation. If you do have a choice then why not harness the collective knowledge that has built up as a result of using this convention? You will see clear examples of the standard directory structure in the next chapter. These components are generally referred to as project content. maintainability. First time users often complain about Maven forcing you to do things a certain way and the formalization of the directory structure is the source of most of the complaints. you will be able to navigate within any Maven project you build in the future. project resources. Follow the standard directory layout. Having the utility code in a separate project (a separate JAR file).Introducing Maven Standard Directory Layout for Projects The first convention used by Maven is a standard directory layout for project sources. default locations. In this case. even if you only look at a few new projects a year that's time better spent on your application. but Maven would encourage you to have three. you will be able to adapt your project to your customized layout at a cost. which should be identified and separated to cope with complexity and to achieve the required engineering quality factors such as adaptability. increased complexity of your project's POM. If this saves you 30 minutes for each new project you look at. Maven encourages a common arrangement of project content so that once you are familiar with these standard. and you will make it easier to communicate about your project. You can override any of Maven's defaults to create a directory layout of your choosing. 27 . If you have no choice in the matter due to organizational policy or integration issues with existing systems. and shared utility code. You could produce a single JAR file which includes all the compiled classes. you might be forced to use a directory structure that diverges from Maven's defaults. configuration files. generated output. a project for the server portion of the application. It is the POM that drives execution in Maven and this approach can be described as model-driven or declarative execution. It's happened to all of us.2 of Commons Logging. The intent behind the standard naming conventions employed by Maven is that it lets you understand exactly what you are looking at by.the POM is Maven's currency. Maven can be thought of as a framework that coordinates the execution of plugins in a well defined way. because the naming convention keeps each one separate in a logical. but with Maven. is the use of a standard naming convention for directories and for the primary output of each project. well. The naming conventions provide clarity and immediate comprehension. Even from this short list of examples you can see that a plugin in Maven has a very specific role to play in the grand scheme of things. Reuse of Build Logic As you have already learned. you would not even be able to get the information from the jar's manifest. The execution of Maven's plugins is coordinated by Maven's build life cycle in a declarative fashion with instructions from Maven's POM. and many other functions. a plugin for creating Javadocs. which results because the wrong version of a JAR file was used. a plugin for creating JARs.2. in a lot of cases. Maven is useless .2.jar. easily comprehensible manner. and the POM is Maven's description of a single project. It is immediately obvious that this is version 1. Systems that cannot cope with information rich artifacts like commons-logging-1. It doesn't make much sense to exclude pertinent information when you can have it at hand to use. when something is misplaced. Declarative Execution Everything in Maven is driven in a declarative fashion using Maven's Project Object Model (POM) and specifically. the plugin configurations contained in the POM. Moreover. a set of conventions really. you'll track it down to a ClassNotFound exception. Maven's project object model (POM) Maven is project-centric by design. 28 . A simple example of a standard naming convention might be commons-logging-1. later in this chapter.2. This is important if there are multiple subprojects involved in a build process. One important concept to keep in mind is that everything accomplished in Maven is the result of a plugin executing. Maven puts this SoC principle into practice by encapsulating build logic into coherent modules called plugins. looking at it.Better Builds with Maven Standard Naming Conventions The third convention in Maven. This is illustrated in the Coherent Organization of Dependencies section. If the JAR were named commonslogging. and it doesn't have to happen again.2. 1. In Maven there is a plugin for compiling source code.jar you would not really have any idea of the version of Commons Logging. Maven promotes reuse by encouraging a separation of concerns . a plugin for running tests. Without the POM.jar are inherently flawed because eventually. Plugins are the key building blocks for everything in Maven. maven. The answer lies in Maven's implicit use of its Super POM.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.This element indicates the unique identifier of the organization or group that created the project. For example org.lang. The POM is an XML document and looks like the following (very) simplified example: <project> <modelVersion>4. The key feature to remember is the Super POM contains important default information so you don't have to repeat this information in the POMs you create.Object.apache. In Java.Introducing Maven The POM below is an example of what you could use to build and test a project.xml files. myapp-1. • groupId . Maven's Super POM carries with it all the default conventions that Maven encourages.<extension> (for example. You.0</modelVersion> <groupId>com. The POM contains every important piece of information about your project. Likewise.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.lang.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM will allow you to compile.This element indicates the unique base name of the primary artifact being generated by this project.This required element indicates the version of the object model that the POM is using. Additional artifacts such as source bundles also use the artifactId as part of their file name.0.0.mycompany. but still displays the key elements that every POM contains.Object class. • • project . will ask “How this is possible using a 15 line file?”. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization.jar). and is the analog of the Java language's java.8. so if you wish to find out more about it you can refer to Appendix B. The version of the model itself changes very infrequently. being the observant reader. but it is mandatory in order to ensure stability when Maven introduces new features or other model changes. in Maven all POMs have an implicit parent in Maven's Super POM. and generate basic documentation. 29 . The Super POM can be rather intimidating at first glance. The POM shown previously is a very simple POM.This is the top-level element in all Maven pom. modelVersion .plugins is the designated groupId for all Maven plugins. A typical artifact produced by Maven would have the form <artifactId>-<version>. • artifactId . all objects have the implicit parent of java. test. or other projects that use it as a dependency. Maven plugins provide reusable build logic that can be slotted into the standard build life cycle. The default value for the packaging element is jar so you do not have to specify this in most cases. or test. generate-sources.This element indicates where the project's site can be found. which indicates that a project is in a state of development. So. if you tell Maven to compile. For a complete reference of the elements available for use in the POM please refer to the POM reference at element indicates the version of the artifact generated by the project. In Maven.apache.This element indicates the package type to be used by this artifact (JAR. testing. or package. For example. etc. For example. installation. • • url . packaging. Any time you need to customize the way your project builds you either use an existing plugin. EAR. For now. and during the build process for your project. process-sources.html. Maven goes a long way to help you with version management and you will often see the SNAPSHOT designator in a version. well-trodden build paths: preparation. The standard build life cycle consists of many phases and these can be thought of as extension points. related to that phase. WAR. This not only means that the artifact produced is a JAR. or create a custom plugin for the task at hand.This element provides a basic description of your project. just keep in mind that the selected packaging of a project plays a part in customizing the build life cycle. This is often used in Maven's generated documentation. but also indicates a specific life cycle to use as part of the build process. or goals. The actions that have to be performed are stated at a high level.7 Using Maven Plugins and Chapter 5 Developing Custom Maven Plugins for examples and details on how to customize the Maven build. It is important to note that each phase in the life cycle will be executed up to and including the phase you specify. When you need to add some functionality to the build life cycle you do so with a plugin. The path that Maven moves along to accommodate an infinite variety of projects is called the build life cycle. or install. description .). or EAR. and Maven deals with the details behind the scenes. See Chapter 2. Maven's Build Life Cycle Software projects generally follow similar.This element indicates the display name used for the project.Better Builds with Maven • packaging .org/maven-model/maven. WAR. and compile phases that precede it automatically. etc. compilation. In Maven you do day-to-day work by invoking particular phases in this standard build life cycle. initialize. version . the compile phase invokes a certain set of goals to compile a set of classes. generate-resources. The life cycle is a topic dealt with later in this chapter. • • name . the build life cycle consists of a series of phases where each phase can perform one or more actions. you tell Maven that you want to compile. Maven will execute the validate. 30 . Maven tries to satisfy that dependency by looking in all of the remote repositories to which it has access. we can describe the process of dependency management as Maven reaching out into the world.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.3. artifactId and version. or EAR file.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM states that your project has a dependency on JUnit. In Java. If you recall. In the POM you are not specifically telling Maven where the dependencies are physically located. instead you deal with logical dependencies.8.mycompany. but you may be asking yourself “Where does that dependency come from?” and “Where is the JAR?” The answers to those questions are not readily apparent without some explanation of how Maven's dependencies.8. which is straightforward. grabbing a dependency. you are simply telling Maven what a specific project expects. SAR. artifacts and repositories work. A dependency is uniquely identified by the following identifiers: groupId. There is more going on behind the scenes. in order to find the artifacts that most closely match the dependency request. In “Maven-speak” an artifact is a specific piece of software. but a Java artifact could also be a WAR. instead it depends on version 3. Your project doesn't require junit-3.1 of the junit artifact produced by the junit group.8. but the key concept is that Maven dependencies are declarative. you stop focusing on a collection of JAR files. and it supplies these coordinates to its own internal dependency mechanisms. With Maven. Coherent Organization of Dependencies We are now going to delve into how Maven resolves dependencies and discuss the intimately connected concepts of dependencies. A dependency is a reference to a specific artifact that resides in a repository. Dependency Management is one of the most powerful features in Maven. In order for Maven to attempt to satisfy a dependency.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. artifacts. 31 . At a basic level. and providing this dependency to your software project. When a dependency is declared within the context of your project. Maven needs to know what repository to search as well as the dependency's coordinates.0. the most common artifact is a JAR file.0</modelVersion> <groupId>com.1.2.jar. Maven takes the dependency coordinates you provide in the POM.Introducing Maven 1. and repositories. If a matching artifact is located. Maven transports it from that remote repository to your local repository for project use. our example POM has a single dependency listed for Junit: <project> <modelVersion>4. You must have a local repository in order for Maven to work. By default. The following folder structure shows the layout of a local Maven repository that has a few locally installed dependency artifacts such as junit-3. Maven creates your local repository in ~/. but when a declared dependency is not present in your local repository Maven searches all the remote repositories to which it has access to find what’s missing.Better Builds with Maven Maven has two types of repositories: local and remote. it will create your local repository and populate it with artifacts as a result of dependency requests.jar: 32 .m2/repository. Local Maven repository When you install and run Maven for the first time.8. Read the following sections for specific details regarding where Maven searches for these dependencies. Maven usually interacts with your local repository.1. . Above you can see the directory structure that is created when the JUnit dependency is resolved.Introducing Maven Figure 1-1: Artifact movement from remote to local repository So you understand how the layout works. a repository is just an abstract storage mechanism. In theory. On the next page is the general pattern used to create the repository layout: 33 .1.jar artifact that are now in your local repository.8. We’ll stick with our JUnit example and examine the junit-3. but in practice the repository is a directory structure in your file system. take a closer look at one of the artifacts that appeared in your local repository. 8.m2/repository/junit/junit/3.8.1” in ~/. 34 . artifactId of “junit”. Maven will generate a path to the artifact in your local repository.apache.Better Builds with Maven Figure 1-2: General pattern for the repository layout If the groupId is a fully qualified domain name (something Maven encourages) such as z. Locating dependency artifacts When satisfying dependencies.maven. If this file is not present. Maven will attempt to find the artifact with a groupId of “junit”.y.1/junit-3. and a version of “3.1. Maven will fetch it from a remote repository.8.jar.x then you will end up with a directory structure like the following: Figure 1-3: Sample directory structure In the first directory listing you can see that Maven artifacts are stored in a directory structure that corresponds to Maven’s groupId of org. for example. Maven attempts to locate a dependency's artifact using the following process: first. if your project has ten web applications. Like the engine in your car or the processor in your laptop. To summarize. artifacts can be downloaded from a secure.1.com/. you don't have to jump through hoops trying to get it to work. which can be managed by Mergere Maestro. you simply change some configurations in Maven. rather than imposing it. 3 Alternatively. all projects referencing this dependency share a single copy of this JAR. Before Maven. If you were coding a web application.6 of the Spring Framework. it doesn't scale easily to support an application with a great number of small components. Maven will attempt to fetch an artifact from the central Maven repository at. simplifies the process of development. modular project arrangements. While this approach works for a few projects. 1. active open-source community that produces software focused on project management.3. Your local repository is one-stop-shopping for all artifacts that you need regardless of how many projects you are building. internal Maven repository.3 If your project's POM contains more than one remote repository. Maven provides such a technology for project management. Maven is a set of standards. Maestro is an Apache License 2. From this point forward.Introducing Maven By default. You don't have to worry about whether or not it's going to work. In other words.0 JARs to every project.mergere. into a lib directory. be a part of your thought process. every project with a POM that references the same dependency will use this single copy installed in your local repository. Storing artifacts in your SCM along with your project may seem appealing.ibiblio. it should rarely. and you would add these dependencies to your classpath. if ever. Maven is a framework. and they shouldn't be versioned in an SCM. Each project relies upon a specific artifact via the dependencies listed in a POM. Once the dependency is satisfied.jar for each project that needs it. Maven will attempt to download an artifact from each remote repository in the order defined in your POM. shielding you from complexity and allowing you to focus on your specific task.8. Declare your dependencies and let Maven take care of details like compilation and testing classpaths. which all depend on version 1. For more information on Maestro please see:. Using Maven is more than just downloading another JAR file and a set of scripts. in the background.2. the common pattern in most projects was to store JAR files in a project's subdirectory. 35 . Maven's Benefits A successful technology takes away burden.0 distribution based on a pre-integrated Maven. Maven is a repository. you don’t store a copy of junit3. it is the adoption of a build life-cycle process that allows you to take your software development to the next level. a useful technology just works. Instead of adding the Spring 2.org/maven2. you would check the 10-20 JAR files. Maven is also a vibrant. there is no need to store the various spring JAR files in your project. the artifact is downloaded and installed in your local repository. With Maven. but it is incompatible with the concept of small. upon which your project relies. Dependencies are not your project's code. and Maven is software.0 by changing your dependency declarations. in doing so. and. Continuum and Archiva build platform. and it is a trivial process to upgrade all ten web applications to Spring 2. 36 .Better Builds with Maven This page left intentionally blank. The terrible temptation to tweak should be resisted unless the payoff is really noticeable.Jon Bentley and Doug McIlroy 37 . it is assumed that you are a first time Maven user and have already set up Maven on your local system. then please refer to Maven's Download and Installation Instructions before continuing. then you should be all set to create your first Maven project. so for now simply assume that the above settings will work.com/maven2</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> </settings> In its optimal mode. Now you can perform the following basic check to ensure Maven is working correctly: mvn -version If Maven's version is displayed.xml file with the following content. If you are behind a firewall. then note the URL and let Maven know you will be using a proxy. ask your administrator if there if there is an internal Maven proxy.xml file.1.m2/settings.mycompany. The settings. If you have not set up Maven yet.Better Builds with Maven 2.m2/settings.com</id> <name>My Company's Maven Proxy</name> <url> file will be explained in more detail in the following chapter and you can refer to the Maven Web site for the complete details on the settings. Depending on where your machine is located. it may be necessary to make a few more preparations for Maven to function correctly.mycompany.com</host> <port>8080</port> <username>your-username</username> <password>your-password</password> </proxy> </proxies> </settings> If Maven is already in use at your workplace.xml file with the following content: <settings> <proxies> <proxy> <active>true</active> <protocol>http</protocol> <host>proxy. create a <your-homedirectory>/. To do this. 38 . Maven requires network access. Create a <your-home-directory>/. If there is an active Maven proxy running. Preparing to Use Maven In this chapter. <settings> <mirrors> <mirror> <id>maven. then you will have to set up Maven to understand that. This chapter will show you how the archetype mechanism works.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.mycompany.8. After the archetype generation has completed. execute the following: C:\mvnbook> mvn archetype:create -DgroupId=com. and that it in fact adheres to Maven's standard directory layout discussed in Chapter 1.1</version> <scope>test</scope> </dependency> </dependencies> </project> At the top level of every project is your pom.xml file. you will notice that the following directory structure has been created. please refer to the Introduction to Archetypes. you will notice that a directory named my-app has been created for the new project. To create the Quick Start Maven project.xml.0</modelVersion> <groupId>com.0. In Maven.app \ -DartifactId=my-app You will notice a few things happened when you executed this command. 39 . you know you are dealing with a Maven project. Creating Your First Maven Project To create your first project.Getting Started with Maven 2. An archetype is defined as an original pattern or model from which all other things of the same kind are made. Whenever you see a directory structure. which looks like the following: <project> <modelVersion>4.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. First. which is combined with some user input to produce a fullyfunctional Maven project. you will use Maven's Archetype mechanism.2. an archetype is a template of a project. but if you would like more information about archetypes. which contains a pom.xml file. and this directory contains your pom.mycompany.apache.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>. but the following analysis of the simple compile command shows you the four principles in action and makes clear their fundamental importance in simplifying the development of a project. compile your application sources using the following command: C:\mvnbook\my-app> mvn compile 40 . you are ready to build your project. in order to accomplish the desired task. Change to the <my-app> directory. testing. Compiling Application Sources As mentioned in the introduction. In this first stage you have Java source files only. in one fell swoop.3. you tell Maven what you need. at a very high level. Before you issue the command to compile the application sources. in a declarative way. Now that you have a POM. the site. some application sources. Then. and so on). and deploying the project (source files.Better Builds with Maven Figure 2-1: Directory structure after archetype generation The src directory contains all of the inputs required for building. note that this one simple command encompasses Maven's four foundational principles: • Convention over configuration • Reuse of build logic • Declarative execution • Coherent organization of dependencies These principles are ingrained in all aspects of Maven. and some test sources. but later in the chapter you will see how the standard directory layout is employed for other project content. documenting. various descriptors such as assembly descriptors. configuration files. The <my-app> directory is the base directory. for the my-app project. ${basedir}. 2. in the first place? You might be guessing that there is some background process that maps a simple command to a particular plugin. [INFO] artifact org.maven. Instead. The next question. The same build logic encapsulated in the compiler plugin will be executed consistently across any number of projects. 41 . In fact. and how Maven invokes the compiler plugin.. inherited from the Super POM. Even the simplest of POMs knows the default location for application sources.plugins:maven-compiler-plugin: checking for updates from central . how was Maven able to decide to use the compiler plugin. in fact. [INFO] [resources:resources] .apache. What actually compiled the application sources? This is where Maven's second principle of “reusable build logic” comes into play. You can. along with its default configuration. application sources are placed in src/main/java. So. By default. is the tool used to compile your application sources. This means you don't have to state this location at all in any of your POMs. is target/classes.plugins:maven-resources-plugin: checking for updates from central .. of course. Although you now know that the compiler plugin was used to compile the application sources.. This default value (though not visible in the POM above) was. if you poke around the standard Maven installation. but there is very little reason to do so.apache. Maven downloads plugins as they are needed. what Maven uses to compile the application sources.Getting Started with Maven After executing this command you should see output similar to the following: [INFO-------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [compile] [INFO]------------------------------------------------------------------[INFO] artifact org.. override this default location. The standard compiler plugin. you won't find the compiler plugin since it is not shipped with the Maven distribution. now you know how Maven finds application sources.maven. there is a form of mapping and it is called Maven's default build life cycle.. how was Maven able to retrieve the compiler plugin? After all. The same holds true for the location of the compiled classes which. if you use the default location for application sources. How did Maven know where to look for sources in order to compile them? And how did Maven know where to put the compiled classes? This is where Maven's principle of “convention over configuration” comes into play. by default.. 0 distribution based on a pre-integrated Maven. For more information on Maestro please see:. From a clean installation of Maven this can take quite a while (in the output above. 42 .Better Builds with Maven The first time you execute this (or any other) command. it took almost 4 minutes with a broadband connection). Maven will download all the plugins and related dependencies it needs to fulfill the command. This implies that all prerequisite phases in the life cycle will be performed to ensure that testing will be successful. you probably have unit tests that you want to compile and execute as well (after all. or where your output should go. it won't download anything new. Again.4 The next time you execute the same command again. Therefore. and eliminates the requirement for you to explicitly tell Maven where any of your sources are. Compiling Test Sources and Running Unit Tests Now that you're successfully compiling your application's sources. which is specified by the standard directory layout.4. Use the following simple command to test: C:\mvnbook\my-app> mvn test 4 Alternatively. Maestro is an Apache License 2. internal Maven repository. artifacts can be downloaded from a secure. programmers always write and execute their own unit tests *nudge nudge. If you're a keen observer you'll notice that using the standard conventions makes the POM above very small. because Maven already has what it needs. simply tell Maven you want to test your sources. As you can see from the output.com/. which can be managed by Mergere Maestro. the compiled classes were placed in target/classes. Maven will execute the command much quicker.mergere. wink wink*). Continuum and Archiva build platform. By following the standard Maven conventions you can get a lot done with very little effort! 2. .all classes are up to date [INFO] [resources:testResources] [INFO] [compiler:testCompile] Compiling 1 source file to C:\Test\Maven2\test\my-app\target\test-classes . remember that it isn't necessary to run this every time.apache.. and execute the tests. Failures: 0.app. 43 ..mycompany. Time elapsed: 0 sec Results : [surefire] Tests run: 1. Failures: 0. since we haven't changed anything since we compiled last).maven.AppTest [surefire] Tests run: 1. Errors: 0 [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 15 seconds [INFO] Finished at: Thu Oct 06 08:12:17 MDT 2005 [INFO] Final Memory: 2M/8M [INFO]------------------------------------------------------------------- Some things to notice about the output: • Maven downloads more dependencies this time. [INFO] [resources:resources] [INFO] [compiler:compile] [INFO] Nothing to compile . as well as all the others defined before it. compile the tests. If you simply want to compile your test sources (but not execute the tests). • Before compiling and executing the tests.. Now that you can compile the application sources. Errors: 0.plugins:maven-surefire-plugin: checking for updates from central .Getting Started with Maven After executing this command you should see output similar to the following: [INFO]------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [test] [INFO]------------------------------------------------------------------[INFO] artifact org. Maven compiles the main code (all these classes are up-to-date. [INFO] [surefire:test] [INFO] Setting reports dir: C:\Test\Maven2\test\my-app\target/surefire-reports ------------------------------------------------------T E S T S ------------------------------------------------------[surefire] Running com. mvn test will always run the compile and test-compile phases first. These are the dependencies and plugins necessary for executing the tests (recall that it already has the dependencies it needs for compiling and won't download them again). how to package your application. you'll want to move on to the next logical step. you can execute the following command: C:\mvnbook\my-app> mvn test-compile However. Time elapsed: 0.0-SNAPSHOT.0-SNAPSHOT\my-app-1. This is how Maven knows to produce a JAR file from the above command (you'll read more about this later).Better Builds with Maven 2.0-SNAPSHOT. Errors: 0 [INFO] [jar:jar] [INFO] Building jar: <dir>/my-app/target/my-app-1. Failures: 0.m2/repository is the default location of the repository. The directory <your-homedirectory>/.jar to <localrepository>\com\mycompany\app\my-app\1. you'll want to install the artifact (the JAR file) you've generated into your local repository. Packaging and Installation to Your Local Repository Making a JAR file is straightforward and can be accomplished by executing the following command: C:\mvnbook\my-app> mvn package If you take a look at the POM for your project. To install. Take a look in the the target directory and you will see the generated JAR file. Errors: 0. Failures: 0.001 sec Results : [surefire] Tests run: 1.5.app. Now. It can then be used by other projects as a dependency.0-SNAPSHOT.jar [INFO] [install:install] [INFO] Installing c:\mvnbook\my-app\target\my-app-1.jar [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 5 seconds [INFO] Finished at: Tue Oct 04 13:20:32 GMT-05:00 2005 [INFO] Final Memory: 3M/8M [INFO]------------------------------------------------------------------- 44 .mycompany. you will notice the packaging element is set to jar. and if you've noticed. there are a great number of Maven plugins that work out-of-the-box. This chapter will cover one in particular. it will update the settings rather than starting fresh.java You have now completed the process for setting up. building. you must keep making error-prone additions. simply execute the following command: C:\mvnbook\my-app> mvn site There are plenty of other stand-alone goals that can be executed as well. as it is one of the highly-prized features in Maven. So. there is far more functionality available to you from Maven without requiring any additions to the POM. so it is fresh. to get any more functionality out of an Ant build script.java • **/*TestCase. Perhaps you'd like to generate an IntelliJ IDEA descriptor for the project: C:\mvnbook\my-app> mvn idea:idea This can be run over the top of a previous IDEA project. everything done up to this point has been driven by an 18-line POM. alternatively you might like to generate an Eclipse descriptor: C:\mvnbook\my-app> mvn eclipse:eclipse 45 .java Conversely. and installing a typical Maven project. this POM has enough information to generate a Web site for your project! Though you will typically want to customize your Maven site.java **/Test*. the following tests are included: • • **/*Test. what other functionality can you leverage. In this case. as it currently stands. for example: C:\mvnbook\my-app> mvn clean This will remove the target directory with the old build data before starting. Or. For projects that are built with Maven.Getting Started with Maven Note that the Surefire plugin (which executes the test) looks for tests contained in files with a particular naming convention. Of course. packaging. given Maven's re-usable build logic? With even the simplest POM. Without any work on your part.java **/Abstract*TestCase. In contrast. By default. testing. this covers the majority of tasks users perform. if you're pressed for time and just need to create a basic Web site for your project. the following tests are excluded: • • **/Abstract*Test. starting at the base of the JAR. you need to add the directory src/main/resources. In the following example. Figure 2-2: Directory structure after adding the resources directory You can see in the preceding example that there is a META-INF directory with an application. you can package resources within JARs. For this common task. The rule employed by Maven is that all directories or files placed within the src/main/resources directory are packaged in your JAR with the exact same structure. simply by placing those resources in a standard directory structure. If you unpacked the JAR that Maven created you would see the following: 46 .Better Builds with Maven 2. which requires no changes to the POM shown previously. This means that by adopting Maven's standard conventions. is the packaging of resources into a JAR file. Maven again uses the standard directory layout.6.properties file within that directory. Handling Classpath Resources Another common use case. That is where you place any resources you wish to package in the JAR. MF. simply create the resources and META-INF directories and create an empty file called application. You can create your own manifest if you choose. One simple use might be to retrieve the version of your application. If you would like to try this example.Getting Started with Maven Figure 2-3: Directory structure of the JAR file created by Maven The original contents of src/main/resources can be found starting at the base of the JAR and the application. The pom. as well as a pom. should the need arise. Operating on the POM file would require you to use Maven utilities. 47 .properties file.xml and pom.xml and pom.properties files are packaged up in the JAR so that each artifact produced by Maven is self-describing and also allows you to utilize the metadata in your own application. but Maven will generate one by default if you don't. These come standard with the creation of a JAR in Maven.properties file is there in the META-INF directory. but the properties can be utilized using the standard Java APIs. Then run mvn install and examine the jar file in the target directory. You will also notice some other files like META-INF/MANIFEST.xml inside. . At this point you have a project directory structure that should look like the following: Figure 2-4: Directory structure after adding test resources In a unit test. except place resources in the src/test/resources directory.1.getResourceAsStream( "/test.. follow the same pattern as you do for adding resources to the JAR.. // Do something with the resource [.. Handling Test Classpath Resources To add resources to the classpath for your unit tests.] // Retrieve resource InputStream is = getClass().Better Builds with Maven 2.properties" ). you could use a simple snippet of code like the following for access to the resource required for testing: [.] 48 .6. 2. To accomplish this in Maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestFile>META-INF/MANIFEST. you can use the follow configuration for the maven-jarplugin: <plugin> <groupId>org. Filtering Classpath Resources Sometimes a resource file will need to contain a value that can be supplied at build time only.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> </project> 49 . a property defined in an external properties file.6.xml. you can filter your resource files dynamically by putting a reference to the property that will contain the value into your resource file using the syntax ${<property name>}. or a system property.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.0.Getting Started with Maven To override the manifest file yourself.xml.0</modelVersion> <groupId>com.maven.mycompany.8.apache.apache. To have Maven filter resources when copying. The property can be either one of the values defined in your pom.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1. simply set filtering to true for the resource directory in your pom.MF</manifestFile> </archive> </configuration> </plugin> 2. a value defined in the user's settings.xml: <project> <modelVersion>4.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>. version=${project.xml. all you need to do is add a reference to this external file in your pom. and resource elements . add a reference to this new file in the pom. In fact. So ${project.] <build> <filters> <filter>src/main/filters/filter.xml file: [. when the built project is packaged. create an src/main/resources/application. First. you can execute the following command (process-resources is the build life cycle phase where the resources are copied and filtered): mvn process-resourcesThe application. resources. which weren't there before.properties my.build.properties: # filter.properties file. To reference a property defined in your pom.name=${project. which will eventually go into the JAR looks like this: # application.properties application. In addition.have been added.xml..value=hello! Next.] 50 .version} refers to the version of the project.properties application. the property name uses the names of the XML elements that define the value.. ${project. and ${project. To continue the example..name} application.xml to override the default value for filtering and set it to true. whose values will be supplied when the resource is filtered as follows: # application.0-SNAPSHOT To reference a property defined in an external file.finalName} refers to the final name of the file created.properties file under target/classes.filter.version} With that in place. the POM has to explicitly state that the resources are located in the src/main/resources directory.Better Builds with Maven You'll notice that the build. create an external properties file and call it src/main/filters/filter.properties</filter> </filters> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> [.which weren't there before .name} refers to the name of the project.version=1.name=Maven Quick Start Archetype application.. any element in your POM is available when filtering resources. All of this information was previously provided as default values and now must be added to the pom. either the system properties built into Java (like java.version} message=${my.name} application.prop=hello again" 51 .value> </properties> </project> Filtering resources can also retrieve values from system properties.apache.properties file to look like the following: # application.Getting Started with Maven Then. the application.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.properties file as follows: # application.filter.prop=${command.filter.version=${project.line. you could have defined it in the properties section of your pom.line.filter.line.version} command.8.filter.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> <properties> <my. when you execute the following command (note the definition of the command.value>hello</my.home). change the application. To continue the example.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>{project.0</modelVersion> <groupId>com.line.prop property on the command line).version or user.value} The next execution of the mvn process-resources command will put the new property value into application.properties.version=${java.properties application. mvn process-resources "-Dcommand.xml and you'd get the same effect (notice you don't need the references to src/main/filters/filter.properties file will contain the values from the system properties.prop} Now. or properties defined on the command line using the standard Java -D parameter. As an alternative to defining the my.value property in an external file.0. add a reference to this property in the application.properties java.mycompany.properties either):<project> <modelVersion>4. 6.. <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> <excludes> <exclude>images/**</exclude> </excludes> </resource> <resource> <directory>src/main/resources</directory> <includes> <include>images/**</include> </includes> </resource> </resources> </build> . In addition you would add another resource entry. and an inclusion of your images directory. with filtering disabled. The build element would look like the following: <project> ..3. Preventing Filtering of Binary Resources Sometimes there are classpath resources that you want to include in your JAR..Better Builds with Maven 2. for example image files. then you would create a resource entry to handle the filtering of resources with an exclusion for the resources you wanted unfiltered. but you do not want them filtered. </project> 52 . This is most often the case with binary resources.. If you had a src/main/resources/images that you didn't want to be filtered. or settings. and in some ways they are. or configure parameters for the plugins already included in the build.. but in most cases these elements are not required. This is often the most convenient way to use a plugin.Getting Started with Maven 2. For example. If it is not present on your local system. you must include additional Maven plugins. plugin developers take care to ensure that new versions of plugins are backward compatible so you are usually OK with the latest release. This is as simple as adding the following to your POM: <project> .apache.. but you may want to specify the version of a plugin to ensure reproducibility.apache. then Maven will default to looking for the plugin with the org. the groupId and version elements have been shown. To illustrate the similarity between plugins and dependencies.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2. If you do not specify a version then Maven will attempt to use the latest released version of the specified plugin. you may want to configure the Java compiler to allow JDK 5. </project> You'll notice that all plugins in Maven 2 look very similar to a dependency.xml.5</source> <target>1.0</version> <configuration> <source>1. For the most part.you can lock down a specific version. The configuration element applies the given parameters to every goal from the compiler plugin. You can specify an additional groupId to search within your POM. If you do not specify a groupId.plugins or the org.maven.7.maven.5</target> </configuration> </plugin> </plugins> </build> .0 sources. <build> <plugins> <plugin> <groupId>org. but if you find something has changed . In the above case.. Using Maven Plugins As noted earlier in the chapter. 53 . this plugin will be downloaded and installed automatically in much the same way that a dependency would be handled. the compiler plugin is already used as part of the build process and this just changes the configuration. to customize the build for a Maven project.mojo groupId label.codehaus.. The next few chapters provide you with the how-to guidelines to customize Maven's behavior and use Maven to manage interdependent software projects. Summary After reading Chapter 2. If you were looking for just a build tool. In eighteen pages. you should be up and running with Maven. read on. testing a project.org/plugins/ and navigating to the plugin and goal you are using. By learning how to build a Maven project. and packaging a project. If someone throws a Maven project at you. 2. 54 .apache.8. You've learned a new language and you've taken Maven for a test drive. you've seen how you can use Maven to build your project. If you want to see the options for the maven-compiler-plugin shown previously. If you are interested in learning how Maven builds upon the concepts described in the Introduction and obtaining a deeper working knowledge of the tools introduced in Chapter 2. you have gained access to every single project using Maven. although you might want to refer to the next chapter for more information about customizing your build to fit your project's unique needs. use the mvn help:describe command. compiling a project.Better Builds with Maven If you want to find out what the plugin's configuration options are. you could stop reading this book now.apache. You should also have some insight into how Maven handles dependencies and provides an avenue for customization using Maven plugins. you'll know how to use the basic features of Maven: creating a project. use the following command: mvn help:describe -DgroupId=org.plugins \ -DartifactId=maven-compiler-plugin -Dfull=true You can also find out what plugin configuration is available by using the Maven Plugin Reference section at. Berard 55 . ..3.Edward V. In this chapter. which consists of all the classes that will be used by Proficio as a whole. it is important to keep in mind that Maven emphasizes the practice of standardized and modular builds. • Proficio Model: The data model for the Proficio application. and operate on the pieces of software that are relevant to a particular concept. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). In doing so. but you are free to name your modules in any fashion your team decides. 56 . Moreover.1. Introduction In the second chapter you stepped though the basics of setting up a simple project. 3. The only real criterion to which to adhere is that your team agrees to and uses a single naming convention. you will see that the Proficio sample application is made up of several Maven modules: • Proficio API: The application programming interface for Proficio. So. Now you will delve in a little deeper. using a real-world example.2. more manageable and comprehensible parts. The natural outcome of this practice is the generation of discrete and coherent components. a key goal for every software development project. encapsulate. lets start by discussing the ideal directory structure for Proficio. houses all the store modules. • Proficio Core: The implementation of the API. • These are default naming conventions that Maven uses. SoC refers to the ability to identify. each of which addresses one or more specific concerns. you will be guided through the specifics of setting up an application and managing that application's Maven structure. The interfaces for the APIs of major components. Proficio has a very simple memory-based store and a simple XStream-based store. which is Latin for “help”. Concerns are the primary motivation for organizing and decomposing software into smaller. like the store.Better Builds with Maven 3. As such. • Proficio Stores: The module which itself. or purpose. and be able to easily identify what a particular module does simply by looking at its name. which enable code reusability. goal. everyone on the team needs to clearly understand the convention. are also kept here. • Proficio CLI: The code which provides a command line interface to Proficio. task. you are going to learn about some of Maven’s best practices and advanced uses by working on a small application to manage frequently asked questions (FAQ). The application that you are going to create is called Proficio. Setting Up an Application Directory Structure In setting up Proficio's directory structure. which consists of a set of interfaces. 0-SNAPSHOT</version> <name>Maven Proficio</name> <url>. A module is a reference to another Maven project. but the Maven team is trying to consistently refer to these setups as multimodule builds now.. <modules> <module>proficio-model</module> <module>proficio-api</module> <module>proficio-core</module> <module>proficio-stores</module> <module>proficio-cli</module> </modules> . This setup is typically referred to as a multi-module build and this is how it looks in the top-level Proficio POM: <project> <modelVersion>4.proficio</groupId> <artifactId>proficio</artifactId> <packaging>pom</packaging> <version>1.apache. which you can see is 1. For an application that has multiple modules.x documentation. It is recommended that you specify the application version in the top-level POM and use that version across all the modules that make up your application. it is very common to release all the sub-modules together.org</url> . You should take note of the packaging element.0</modelVersion> <groupId>org.. For POMs that contain modules. which in this case has a value of pom. you can see in the modules element all the sub-modules that make up the Proficio application.maven. which really means a reference to another POM. the packaging type must be set to value of pom: this tells Maven that you're going to be walking through a set of modules in a structure similar to the example being covered here..Creating Applications with Maven In examining the top-level POM for Proficio.x these were commonly referred to as multi-project builds and some of this vestigial terminology carried over to the Maven 2. so it makes sense that all the modules have a common application version.0.. If you were to look at Proficio's directory structure you would see the following: 57 . </project> An important feature to note in the POM above is the value of the version element. Currently there is some variance on the Maven web site when referring to directory structures that contain more than one Maven project. In Maven 1.0-SNAPSHOT.apache. 0. If you take a look at the POM for the proficio-stores module you will see a set of modules contained therein: <project> <parent> <groupId>org.apache. but the interesting thing here is that we have another project with a packaging type of pom.</modelVersion> <artifactId>proficio-stores</artifactId> <name>Maven Proficio Stores</name> <packaging>pom</packaging> <modules> <module>proficio-store-memory</module> <module>proficio-store-xstream</modul </modules> </project> 58 . which is the proficio-stores module.0-SNAPSHOT</version> </parent> <modelVersion>4.maven.Better Builds with Maven Figure 3-1: Proficio directory structure You may have noticed that the module elements in the POM match the names of the directories in the prior Proficio directory structure.proficio</groupId> <artifactId>proficio</artifactId> <version>1. enabling you to add resources where it makes sense in the hierarchy of your projects... You can nest sets of projects like this to any level. This is the snippet in each of the POMs that lets you draw on the resources stated in the specified toplevel POM and from which you can inherit down to the level required .proficio</groupId> <artifactId>proficio</artifactId> <version>1. or state your common dependencies ..3. state your deployment information. Let's examine a case where it makes sense to put a resource in the top-level POM. organizing your projects in groups according to concern. just as has been done with Proficio’s multiple storage mechanisms..maven.0-SNAPSHOT</version> <. which are all placed in one directory. Being the observant user. 59 .all in a single place. <parent> <groupId>org. Using project inheritance allows you to do things like state your organizational information.apache. 3. you have probably taken a peek at all the POMs in each of the projects that make up the Proficio project and noticed the following at the top of each of the POMs: . using our top-level POM for the sample Proficio application. Using Project Inheritance One of the most powerful features in Maven is project inheritance. .codehaus. The dependency is stated as following: <project> .0. you never have to declare this dependency again.1</version> <scope>test</scope> </dependency> </dependencies> .0-SNAPSHOT</version> </parent> <modelVersion>4..apache.apache.. is that each one inherits the dependencies section of the top-level POM. So. by stating the dependency in the top-level POM once.plexus</groupId> <artifactId>plexus-container-default</artifactId> </dependency> </dependencies> </project> 60 .Better Builds with Maven If you look at the top-level POM for Proficio.proficio</groupId> <artifactId>proficio</artifactId> <version>1.maven. </project> What specifically happens for each child POM. In this case the assumption being made is that JUnit will be used for testing in all our child projects.8. in any of your child POMs.proficio</groupId> <artifactId>proficio-api</artifactId> </dependency> <dependency> <groupId>org. So. if you take a look at the POM for the proficio-core module you will see the following (Note: there is no visible dependency declaration for Junit): <project> <parent> <groupId>org.maven. you will see that in the dependencies section there is a declaration for JUnit version 3..8.1. <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.0</modelVersion> <artifactId>proficio-core</artifactId> <packaging>jar</packaging> <name>Maven Proficio Core</name> <dependencies> <dependency> <groupId>org. take a look at the resulting POM. <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. as the results can be far from desirable. In order to manage. of all your dependencies. for example.Creating Applications with Maven In order for you to see what happens during the inheritance process.. 3. making dependency management difficult to say the least.8. to end up with multiple versions of a dependency on the classpath when your application executes.4. Maven's strategy for dealing with this problem is to combine the power of project inheritance with specific dependency management elements in the POM. Managing Dependencies When you are building applications you typically have a number of dependencies to manage and that number only increases over time. You want to make sure that all the versions. individual projects. After you move into the proficio-core module directory and run the command. 61 . So in this case.. or align. </project> You will have noticed that the POM that you see when using the mvn help:effective-pom is bigger than you expected. <dependencies> . across all of your projects are in alignment so that your testing accurately reflects what you will deploy as your final result.... the proficio-core project inherits from the top-level Proficio project.1</version> <scope>test</scope> </dependency> .. This command will show you the final result for a target POM. You don't want.8.1 dependency: <project> . it is likely that some of those projects will share common dependencies. versions of dependencies across several projects. But remember from Chapter 2 that the Super POM sits at the top of the inheritance hierarchy. you use the dependency management section in the top-level POM of an application. you will see the JUnit version 3. you will need to use the handy mvn help:effective-pom command. which in turn inherits from the Super POM. When you write applications which consist of multiple. so that the final application works correctly.. When this happens it is critical that the same version of a given dependency is used for all your projects. Looking at the effective POM includes everything and is useful to view when trying to figure out what is going on when you are having problems.. </dependencies> . apache.. As you can see within the dependency management section..maven.version}</version> </dependency> <dependency> <groupId>org. </project> Note that the ${project.apache.apache.proficio</groupId> <artifactId>proficio-model</artifactId> <version>${project.proficio</groupId> <artifactId>proficio-api</artifactId> <version>${project. let's look at the dependency management section of the Proficio top-level POM: <project> . which is the application version. we have several Proficio dependencies and a dependency for the Plexus IoC container..plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1.maven. 62 .maven.Better Builds with Maven To illustrate how this mechanism works..codehaus.version} specification is the version specified by the top-level POM's version element.proficio</groupId> <artifactId>proficio-core</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.version}</version> </dependency> <dependency> <groupId>org. There is an important distinction to be made between the dependencies element contained within the dependencyManagment element and the top-level dependencies element in the POM. <dependencyManagement> <dependencies> <dependency> <groupId>org.0-alpha-9</version> </dependency> </dependencies> </dependencyManagement> ... 63 . The dependencies stated in the dependencyManagement only come into play when a dependency is declared without a version. <dependencies> <dependency> <groupId>org. If you take a look at the POM for the proficio-api module.version}) for proficio-model so that version is injected into the dependency above.. to make it complete. whereas the top-level dependencies element does affect the dependency graph.0-SNAPSHOT (stated as ${project. The dependencyManagement declares a stated preference for the 1. you will see a single dependency declaration and that it does not specify a version: <project> .apache.maven. plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1. it is usually the case that each of the modules are in flux. Your APIs might be undergoing some change or your implementations are undergoing change and are being fleshed out. but you can use the -U command line option to force the search for updates. 64 . If you look at the top-level POM for Proficio you will see a snapshot version specified: <project> .0-alpha-9</version> </dependency> </dependencies> </dependencyManagement> .apache... Controlling how snapshots work will be explained in detail in Chapter 7. When you specify a non-snapshot version of a dependency Maven will download that dependency once and never attempt to retrieve it again. A snapshot in Maven is an artifact that has been prepared using the most recent sources available.5. By default Maven will look for snapshots on a daily basis.version}</version> </dependency> <dependency> <groupId>org. or you may be doing some refactoring. Snapshot dependencies are assumed to be changing.codehaus.apache.proficio</groupId> <artifactId>proficio-model</artifactId> <version>${project. Using Snapshots While you are developing an application with multiple modules. so Maven will attempt to update them.proficio</groupId> <artifactId>proficio-api</artifactId> <version>${project.0-SNAPSHOT</version> <dependencyManagement> <dependencies> <dependency> <groupId>org. and this is where Maven's concept of a snapshot comes into play. Your build system needs to be able to deal easily with this real-time flux.Better Builds with Maven 3.version}</version> </dependency> <dependency> <groupId>org..maven. <version>1. </project> Specifying a snapshot version for a dependency means that Maven will look for new versions of that dependency without you having to manually specify a new version.maven.. 0-alpha-9 (selected for compile) plexus-utils:1. and allowing Maven to calculate the full dependency graph. it is inevitable that two or more artifacts will require different versions of a particular dependency.Creating Applications with Maven 3. this has limitations: • The version chosen may not have all the features required by the other dependencies..0.1-alpha-2 (selected for compile) junit:3. To manually resolve conflicts. Resolving Dependency Conflicts and Using Version Ranges With the introduction of transitive dependencies in Maven 2.0-SNAPSHOT (selected for compile) plexus-utils:1. then the result is undefined. see section 6.4 (selected for compile) classworlds:1. For example.that is. as the graph grows. In Maven.. to compile. there are ways to manually resolve these conflicts as the end user of a dependency. However.1 (not setting.8. if you run mvn -X test on the proficio-core module. In this case. • If multiple versions are selected at the same depth. or you can override both with the correct version.8. Maven must choose which version to provide.9 in Chapter 6). A dependency in the POM being built will be used over anything else. Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this. and more importantly ways to avoid it as the author of a reusable library. the version selected is the one declared “nearest” to the top of the tree .1 (selected for compile) 65 .0-SNAPSHOT (selected for compile) proficio-model:1.0. it became possible to simplify a POM by including only the dependencies you need directly. While further dependency management features are scheduled for the next release of Maven at the time of writing.1 (selected for test) plexus-container-default:1. the output will contain something similar to: proficio-core:1. local scope test wins) proficio-api:1.6. Maven selects the version that requires the least number of dependencies to be traversed. However.0-SNAPSHOT junit:3. you can remove the incorrect version from the tree. 1 be used.plexus</groupId> <artifactId>plexus-utils</artifactId> </exclusion> </exclusions> </dependency> . In fact.codehaus..0-alpha-9</version> <exclusions> <exclusion> <groupId>org... The reason for this is that it distorts the true dependency graph.1 version is used instead.. You'll notice that the runtime scope is used here. This is because.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>1. if the dependency were required for compilation.codehaus.regardless of whether another dependency introduces it. The alternate way to ensure that a particular version of a dependency is used.1</version> <scope>runtime</scope> </dependency> </dependencies> .codehaus.plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1.Better Builds with Maven Once the path to the version has been identified. the dependency is used only for packaging. but it is possible to improve the quality of your own dependencies to reduce the risk of these issues occurring with your own build artifacts. use version ranges instead.xml file as follows: . However. To accomplish this. This ensures that Maven ignores the 1.0. <dependencies> <dependency> <groupId>org. Neither of these solutions is ideal. 66 . for stability it would always be declared in the current POM as a dependency . a WAR file).. and Proficio requires version 1. To ensure this. as follows: . for a library or framework. In this example. not for compilation. you can exclude the dependency from the graph by adding an exclusion to the dependency that introduced it. which will accumulate if this project is reused as a dependency itself.4 version of plexus-utils in the dependency graph. is to include it directly in the POM... so that the 1. that will be used widely by others.. in this situation. plexus-utils occurs twice. This is extremely important if you are publishing a build. <dependency> <groupId>org. this approach is not recommended unless you are producing an artifact that is bundling its dependencies and is not used as a dependency itself (for example. modify the plexus-container-default dependency in the proficio-core/pom. 1. then the next nearest will be tested. will be retrieved from the repository. In this case.1.1).1. this indicates that the preferred version of the dependency is 1. as shown above for plexus-utils.1. and so on. but less than 2. you need to avoid being overly specific as well. except 1. To understand how version ranges work. Maven has no knowledge regarding which versions will work.1.) Less than or equal to 1.5 Any version. For instance.) (.codehaus. while the nearest dependency technique will still be used in the case of a conflict. The notation used above is set notation. Figure 3-3: Version parsing 67 . This means that the latest version.0 Between 1.3 (inclusive) Greater than or equal to 1.1.1. the dependency should be specified as follows: <dependency> <groupId>org.5.0. Table 3-2: Examples of Version Ranges Range Meaning (. However. Finally. which is greater than or equal to 1.1. the version that is used must fit the range given. the version you are left with is [1.2. the build will fail.0.0) [1. However. it is necessary to understand how versions are compared. you can see how a version is partitioned by Maven.Creating Applications with Maven When a version is declared as 1. Maven assumes that all versions are valid and uses the “nearest dependency” technique described previously to determine which version to use. you may require a feature that was introduced in plexus-utils version 1.3] [1. it is possible to make the dependency mechanism more reliable for your builds and to reduce the number of exception cases that will be required. if none of them match.1 By being more specific through the use of version ranges.1.2. so in the case of a conflict with another dependency.(1. or there were no conflicts originally.).0 Greater than or equal to 1. If the nearest version does not match.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>[1.1. In figure 3-1. if two version ranges in a dependency graph do not intersect at all.)</version> </dependency> What this means is that.0] [1. but that other versions may be acceptable.2 and 1. and table 3-2 shows some of the values that can be used... <!-.Profiles for the two assemblies to create for deployment --> <profiles> <!-..Creating Applications with Maven If you take a look at the POM for the proficio-cli module you will see the following profile definitions: <project> .xml</descriptor> </descriptors> </configuration> </plugin> </plugins> </build> <activation> <property> <name>memory</name> </property> </activation> </profile> <!-. SFTP deployment. FTP deployment. It should be noted that the examples below depend on other parts of the build having been executed beforehand. In order to deploy.9.jar file only. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>{basedir}/target/deploy</url> </repository> </distributionManagement> . you need to correctly configure your distributionManagement element in your POM. and external SSH deployment. You will also notice that the profiles are activated using a system property. Here are some examples of how to configure your POM via the various deployment mechanisms. SSH2 deployment. so that all child POMs can inherit this information.Better Builds with Maven You can see there are two profiles: one with an id of memory and another with an id of xstream. This is a very simple example. Deploying to the File System To deploy to the file system you would use something like the following: <project> . Deploying your Application Now that you have an application assembly.jar file only. Currently Maven supports several methods of deployment.9. </project> 74 ... you will see that the memory-based assembly contains the proficiostore-memory-1. 3.0-SNAPSHOT. which would typically be your top-level POM. In each of these profiles you are configuring the assembly plugin to point at the assembly descriptor that will create a tailored assembly. If you wanted to create the assembly using the memory-based store. so it might be useful to run mvn install at the top level of the project to ensure that needed components are installed into the local repository.. while the XStream-based store contains the proficio-store-xstream-1.1. including simple file-based deployment. you would execute the following: mvn -Dxstream clean assembly:assembly Both of the assemblies are created in the target directory and if you use the jar tvf command on the resulting assemblies.0-SNAPSHOT.. 3. it is now time to deploy your application assembly. but it illustrates how you can customize the execution of the life cycle using profiles to suit any requirement you might have. you would execute the following: mvn -Dmemory clean assembly:assembly If you wanted to create the assembly using the XStream-based store. you’ll want to share it with as many people as possible! So. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>s Applications with Maven 3...yourcompany.com/deploy</url> </repository> </distributionManagement> .. Deploying with SSH2 To deploy to an SSH2 server you would use something like the following: <project> . <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>scp://sshserver.. </project> 75 ..com/deploy</url> </repository> </distributionManagement> .3.9. Deploying with SFTP To deploy to an SFTP server you would use something like the following: <project> .9. </project> 3....2.yourcompany. Deploying with an External SSH Now.4...9. the first three methods illustrated are included with Maven. <project> . 76 . </project> The build extension specifies the use of the Wagon external SSH provider.0-alpha-6</version> </extension> </extensions> </build> .wagon</groupId> <artifactId>wagon-ssh-external</artifactId> <version>1.apache.yourcompany. but to use an external SSH command to deploy you must configure not only the distributionManagement element.Better Builds with Maven 3.maven.com/deploy</url> </repository> </distributionManagement> <build> <extensions> <extension> <groupId>org.. but also a build extension.. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>scpexe://sshserver. Wagon is the general purpose transport mechanism used throughout Maven. which does the work of moving your files to the remote server. so only the distributionManagement element is required. 0-alpha-6</version> </extension> </extensions> </build> .yourcompany.Creating Applications with Maven 3. and you are ready to initiate deployment. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url></url> </repository> </distributionManagement> <build> <extensions> <extension> <groupId>org... simply execute the following command: mvn deploy 77 .apache.9.5..maven. Deploying with FTP To deploy with FTP you must also specify a build extension. </project> Once you have configured your POM accordingly.wagon</groupId> <artifactId>wagon-ftp</artifactId> <version>1.. To deploy to an FTP server you would use something like the following: <project> . you will see that we have something similarly the following: Figure 3-4: The site directory structure Everything that you need to generate the Web site resides within the src/site directory. For applications like Proficio.10. Maven supports a number of different documentation formats to accommodate various needs and preferences. testing and deploying Proficio. it is recommended that you create a source directory at the top-level of the directory structure to store the resources that are used to generate the web site. If you take a look. 78 . there is a subdirectory for each of the supported documentation formats that you are using for your site and the very important site descriptor. Creating a Web Site for your Application Now that you have walked though the process of building.Better Builds with Maven 3. it is time for you to see how to create a standard web site for an application. Within the src/site directory. A simple XML format for managing FAQs. • The APT format (Almost Plain Text). Maven also has limited support for: • The Twiki format. • The DocBook Simple format. the most well supported formats available are: • The XDOC format. . which is the FAQ format. • The Confluence format. which is a wiki-like format that allows you to write simple. We will look at a few of the more well-supported formats later in the chapter. which is a simple XML format used widely at Apache. • The FML format. structured documents (like this) very quickly. which is a less complex version of the full DocBook format. • The DocBook format. which is a popular Wiki markup format.Creating Applications with Maven Currently. A full reference of the APT Format is available. which is another popular Wiki markup format. 4.Helen Keller 85 . . Building J2EE Applications Building J2EE Applications This chapter covers: • • • • • Organizing the directory structure Building J2EE archives (EJB. EAR. Web Services) Setting up in-place Web development Deploying J2EE archives to a container Automating container start/stop Keep your face to the sun and you will never see the shadows. WAR. As importantly. Figure 4-1: Architecture of the DayTrader application 86 . You'll learn not only how to create a J2EE build but also how to create a productive development environment (especially for Web application development) and how to deploy J2EE modules into your container. you’ll learn how to build EARs. EJBs. This chapter will take you through the journey of creating the build for a full-fledged J2EE application called DayTrader.4 application and as a test bed for running performance tests. Web services. This chapter demonstrates how to use Maven on a real application to show how to address the complex issues related to automated builds.2. you’ll learn how to automate configuration and deployment of J2EE application servers. The functional goal of the DayTrader application is to buy and sell stock. Whether you are using the full J2EE stack with EJBs or only using Web applications with frameworks such as Spring or Hibernate. and Web applications.1.Better Builds with Maven 4. and its architecture is shown in Figure 4-1. 4. Introducing the DayTrader Application DayTrader is a real world application developed by IBM and then donated to the Apache Geronimo project. Its goal is to serve as both a functional example of a full-stack J2EE 1. Through this example. Introduction J2EE (or Java EE as it is now called) applications are everywhere. it's likely that you are using J2EE in some of your projects. As a consequence the Maven community has developed plugins to cover every aspect of building J2EE applications. • • • A typical “buy stock” use case consists of the following steps that were shown in Figure 4-1: 1. and a JMS Server for interacting with the outside world. This request is handled by the Trade Session bean. Once this happens the Trade Broker MDB is notified 6.3. The easy answer is to follow Maven’s artifact guideline: one module = one main artifact. In addition you may need another module producing an EAR which will contain the EJB and WAR produced from the other modules. The Data layer consists of a database used for storing the business objects and the status of each purchase. A new “open” order is saved in the database using the CMP Entity Beans. The Web layer offers a view of the application for both the Web client and the Web services client. • A module producing a WAR which will contain the Web application. • A module producing a JAR that will contain the Quote Streamer client application. Asynchronously the order that was placed on the queue is processed and the purchase completed. • A module producing another JAR that will contain the Web services client application. Organizing the DayTrader Directory Structure The first step to organizing the directory structure is deciding what build modules are required. using Web services. and using the Quote Streamer. The Trade Broker calls the Trade Session bean which in turn calls the CMP entity beans to mark the order as “completed". The order is then queued for processing in the JMS Message Server. This EAR will be used to easily deploy the server code into a J2EE container. The user gives a buy order (by using the Web client or the Web services client). cancel an order. It uses container-managed persistence (CMP) entity beans for storing the business objects (Order. Account. 3. and Message-Driven Beans (MDB) to send purchase orders and get quote changes. The EJB layer is where the business logic is. Holding. 2. buy or sell a stock. and so on. Thus you simply need to figure out what artifacts you need.Building J2EE Applications There are 4 layers in the architecture: • The Client layer offers 3 ways to access the application: using a browser. It uses servlets and JSPs. 87 . 4. The Quote Streamer is a Swing GUI application that monitors quote information about stocks in real-time as the price changes. The creation of the “open” order is confirmed for the user. The user is notified of the completed order on a subsequent request. get a stock quote. logout. you can see that the following modules will be needed: • A module producing an EJB which will contain all of the server-side EJBs. 5. 4.Quote and AccountProfile). The Trade Session is a stateless session bean that offers the business services such as login. Looking again at Figure 4-1. it is usually easier to choose names that represent a technology instead.the module containing the client side streamer application wsappclient .the module containing the EJBs web .. Best practices suggest to do this only when the need arises. As a general rule. However. For example.. The next step is to give these modules names and map them to a directory structure. it is better to find functional names for modules.the module producing the EAR which packages the EJBs and the Web application There are two possible layouts that you can use to organize these modules: a flat directory structure and a nested one. if you needed to physically locate the WARs in separate servlet containers to distribute the load. This file also contains the list of modules that Maven will build when executed from this directory (see the Chapter 3.] 88 .. for more details): [.the module containing the Web application streamer .the module containing the Web services client application ear .Better Builds with Maven Note that this is the minimal number of modules required. It is possible to come up with more.. Figure 4-2: Module names and a simple flat directory structure The top-level daytrader/pom. For the DayTrader application the following names were chosen: • • • • • ejb . On the other hand.] <modules> <module>ejb</module> <module>web</module> <module>streamer</module> <module>wsappclient</module> <module>ear</module> </modules> [. It is flat because you're locating all the modules in the same directory. you may want to split the WAR module into 2 WAR modules: one for the browser client and one for the Web services client. For example.xml file contains the POM elements that are shared between all of the modules. it is important to split the modules when it is appropriate for flexibility. If there isn't a strong need you may find that managing several modules can be more cumbersome than useful. Figure 4-2 shows these modules in a flat directory structure. Creating Applications with Maven. Let's discuss the pros and cons of each layout. as shown in Figure 4-4. In this case. not nested within each other. Figure 4-4: Nested directory structure for the EAR. The other alternative is to use a nested directory structure. Having this nested structure clearly shows how nested modules are linked to their parent. if you have many modules in the same directory you may consider finding commonalities between them and create subdirectories to partition them. and is the structure used in this chapter. the ejb and web modules are nested in the ear module. Note that in this case the modules are still separate. EJB and Web modules 89 . you might separate the client side modules from the server side modules in the way shown in Figure 4-3. However.Building J2EE Applications This is the easiest and most flexible structure to use. For example. each directory level containing several modules contains a pom.xml file containing the shared POM elements and the list of modules underneath. Figure 4-3: Modules split according to a server-side vs client-side directory organization As before. This makes sense as the EAR artifact is composed of the EJB and WAR artifacts produced by the ejb and web modules. Now that you have decided on the directory structure for the DayTrader application. you're going to create the Maven build for each module.0. EJB project. but then you’ll be restricted in several ways.m2\repository\org\apache\geronimo\samples\daytrader\daytrader\1.xml to C:\[. For example. These examples show that there are times when there is not a clear parent for a module. [INFO] --------------------------------------------------------------------[INFO] Building DayTrader :: Performance Benchmark Sample [INFO] task-segment: [install] [INFO] --------------------------------------------------------------------[INFO] [site:attach-descriptor] [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\pom. In addition. but by some client-side application.. Or the ejb module might be producing a client EJB JAR which is not used by the EAR.]\.xml of the project. the nested strategy doesn’t fit very well with the Assembler role as described in the J2EE specification. For example. the three modules wouldn’t be able to have different natures (Web application project.pom. Depending on the target deployment environment the Assembler may package things differently: one EAR for one environment or two EARs for another environment where a different set of machines are used.. A flat layout is more neutral with regard to assembly and should thus be preferred. starting with the wsappclient module after we take care of one more matter of business.. so before we move on to developing these sub-projects we need to install the parent POM into our local repository so it can be further built on. etc. the ejb or web modules might depend on a utility JAR and this JAR may be also required for some other EAR. EAR project). In those cases using a nested directory structure should be avoided. even though the nested directory structure seems to work quite well here. • It doesn’t allow flexible packaging. We are now ready to continue on with developing the sub-projects! 90 .. it has several drawbacks: • Eclipse users will have issues with this structure as Eclipse doesn’t yet support nested projects. The Assembler has a pool of modules and its role is to package those modules for deployment.. You’d need to consider the three modules as one project.0\daytrad er-1.Better Builds with Maven However. The modules we will work with from here on will each be referring to the parent pom. and this will be used from DayTrader’s wsappclient module.org/axis/java/userguide. the WSDL files are in src/main/wsdl. Building a Web Services Client Project Web Services are a part of many J2EE applications. For example. which is the default used by the Axis Tools plugin: Figure 4-5: Directory structure of the wsappclient module 91 ..Building J2EE Applications 4. the Maven plugin called Axis Tools plugin takes WSDL files and generates the Java files needed to interact with the Web services it defines. and Maven's ability to integrate toolkits can make them easier to add to the build process. the plugin uses the Axis framework (. Figure 4-5 shows the directory structure of the wsappclient module.apache. As you may notice. As the name suggests. see). We start our building process off by visiting the Web services portion of the build since it is a dependency of later build stages.4.html#WSDL2JavaBuildingStubsSkeletonsAndDataTypesFromWSDL.apache. Similarly. For example: [.wsdl file.] <plugin> <groupId>org. it would fail.xml.] In order to generate the Java source files from the TradeServices.xml file must declare and configure the Axis Tools plugin: <project> [....Better Builds with Maven The location of WSDL source can be customized using the sourceDirectory property.] <build> <plugins> [. it is required for two reasons: it allows you to control what version of the dependency to use regardless of what the Axis Tools plugin was built against. and more importantly.mojo</groupId> <artifactId>axistools-maven-plugin</artifactId> <configuration> <sourceDirectory> src/main/resources/META-INF/wsdl </sourceDirectory> </configuration> [.codehaus. This is because after the sources are generated. you will require a dependency on Axis and Axis JAXRPC in your pom. any tools that report on the POM will be able to recognize the dependency. it allows users of your project to automatically get the dependency transitively...] <plugin> <groupId>org. the wsappclient/pom..codehaus. At this point if you were to execute the build.. While you might expect the Axis Tools plugin to define this for you... apache.Building J2EE Applications As before. Thus. 93 .2</version> <scope>provided</scope> </dependency> <dependency> <groupId>org. they are not present on ibiblio and you'll need to install them manually.specs</groupId> <artifactId>geronimo-j2ee_1.geronimo.2</version> <scope>provided</scope> </dependency> <dependency> <groupId>axis</groupId> <artifactId>axis-jaxrpc</artifactId> <version>1.0</version> <scope>provided</scope> </dependency> </dependencies> The Axis JAR depends on the Mail and Activation Sun JARs which cannot be redistributed by Maven. Thus add the following three dependencies to your POM: <dependencies> <dependency> <groupId>axis</groupId> <artifactId>axis</artifactId> <version>1.4_spec</artifactId> <version>1. you need to add the J2EE specifications JAR to compile the project's Java sources. Run mvn install and Maven will fail and print the installation instructions. 0.org/axistoolsmaven-plugin/. running the build with mvn install leads to: C:\dev\m2book\code\j2ee\daytrader\wsappclient>mvn install [.jar [.] [INFO] [axistools:wsdl2java {execution: default}] [INFO] about to add compile source root [INFO] processing wsdl: C:\dev\m2book\code\j2ee\daytrader\wsappclient\ src\main\wsdl\TradeServices.Better Builds with Maven After manually installing Mail and Activation... Now that we have discussed and built the Web services portion...] Note that the daytrader-wsappclient JAR now includes the class files compiled from the generated source files. [INFO] [jar:jar] [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\wsappclient\ target\daytrader-wsappclient-1.. The Axis Tools plugin boasts several other goals including java2wsdl that is useful for generating the server-side WSDL file from handcrafted Java classes..]\.0\daytrader-wsappclient-1.m2\repository\org\apache\geronimo\samples\daytrader\ daytrader-wsappclient\1.0. 94 .0. But that's another story. [INFO] [compiler:compile] Compiling 13 source files to C:\dev\m2book\code\j2ee\daytrader\wsappclient\target\classes [INFO] [resources:testResources] [INFO] Using default encoding to copy filtered resources..jar to C:\[. in addition to the sources from the standard source directory. The Axis Tools reference documentation can be found at. [INFO] [compiler:testCompile] [INFO] No sources to compile [INFO] [surefire:test] [INFO] No tests to run.codehaus.wsdl [INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources. lets visit EJBs next.. The generated WSDL file could then be injected into the Web Services client module to generate client-side Java files.jar [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\wsappclient\ target\daytrader-wsappclient-1. Tests that require the container to run are called integration tests and are covered at the end of this chapter. Unit tests are tests that execute in isolation from the container. • Unit tests in src/test/java and classpath resources for the unit tests in src/test/resources.. • Runtime classpath resources in src/main/resources.xml.Building J2EE Applications 4. 95 . Any container-specific deployment descriptor should also be placed in this directory.5. More specifically. the standard ejb-jar.xml deployment descriptor is in src/main/resources/META-INF/ejbjar. 0</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.xml file.4_spec</artifactId> <version>1. 96 .samples.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <configuration> <generateClient>true</generateClient> <clientExcludes> <clientExclude>**/ejb/*Bean.3</version> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.specs</groupId> <artifactId>geronimo-j2ee_1.0</modelVersion> <parent> <groupId>org.class</clientExclude> </clientExcludes> </configuration> </plugin> </plugins> </build> </project> As you can see. take a look at the content of this project’s pom.0</version> </parent> <artifactId>daytrader-ejb</artifactId> <name>Apache Geronimo DayTrader EJB Module</name> <packaging>ejb</packaging> <description>DayTrader EJBs</description> <dependencies> <dependency> <groupId>org.maven. you're extending a parent POM using the parent element.0. This is because the DayTrader build is a multi-module build and you are gathering common POM elements in a parent daytrader/pom.apache. If you look through all the dependencies you should see that we are ready to continue with building and installing this portion of the build.apache.Better Builds with Maven Now.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.apache.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <version>1.0.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1.apache.geronimo.samples.geronimo.xml file: <project> <modelVersion>4.geronimo. **/*Session. • Lastly. 97 .class. so you must explicitly tell it to do so: <plugin> <groupId>org. Fortunately.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <configuration> <generateClient>true</generateClient> <clientExcludes> <clientExclude>**/ejb/*Bean.xml file is is a standard POM file except for three items: • You need to tell Maven that this project is an EJB project so that it generates an EJB JAR when the package phase is called. this prevents the EAR module from including the J2EE JAR when it is packaged. You should note that you're using a provided scope instead of the default compile scope.xml contains a configuration to tell the Maven EJB plugin to generate a Client EJB JAR file when mvn install is called. the Geronimo project has made the J2EE JAR available under an Apache license and this JAR can be found on ibiblio. By default the EJB plugin does not generate the client JAR. The Client will be used in a later examples when building the web module.Building J2EE Applications The ejb/pom. You could instead specify a dependency on Sun’s J2EE JAR. This is achieved by specifying a dependency element on the J2EE JAR.class and **/package. You make this clear to Maven by using the provided scope. This is done by specifying: <packaging>ejb</packaging> • As you’re compiling J2EE code you need to have the J2EE specifications JAR in the project’s build classpath. The reason is that this dependency will already be present in the environment (being the J2EE application server) where your EJB will execute.class</clientExclude> </clientExcludes> </configuration> </plugin> The EJB plugin has a default set of files to exclude from the client EJB JAR: **/*Bean. it still needs to be listed in the POM so that the code can be compiled.maven. this JAR is not redistributable and as such cannot be found on ibiblio.apache.class. Even though this dependency is provided at runtime. However. **/*CMP.html. the pom. m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ejb\1. Time elapsed: 0. Relax and type mvn install: C:\dev\m2book\code\j2ee\daytrader\ejb>mvn install [INFO] Scanning for projects.samples.FinancialUtilsTest [surefire] Tests run: 1.. Errors: 0 [INFO] [ejb:ejb] [INFO] Building ejb daytrader-ejb-1. Errors: 0.0.0-client.02 sec Results : [surefire] Tests run: 1.0-client.0\daytrader-ejb-1.jar [INFO] Building ejb client daytrader-ejb-1..jar Maven has created both the EJB JAR and the client EJB JAR and installed them in your local Maven repository. Thus you're specifying a pattern that only excludes from the generated client EJB JAR all EJB implementation classes located in the ejb package (**/ejb/*Bean.daytrader.jar [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.jar to C:\[. Note that it's also possible to specify a list of files to include using clientInclude elements.0. 98 . you need to override the defaults using a clientExclude element because it happens that there are some required non-EJB files matching the default **/*Bean.jar to C:\[.0-client [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.0 [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.]\.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ejb\1. You’re now ready to execute the build.0\daytrader-ejb-1. [INFO] [compiler:compile] Compiling 49 source files to C:\dev\m2book\code\j2ee\daytrader\ejb\target\classes [INFO] [resources:testResources] [INFO] Using default encoding to copy filtered resources.ap [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.class pattern and which need to be present in the generated client EJB JAR. [INFO] ----------------------------------------------------------[INFO] Building DayTrader :: EJBs [INFO] task-segment: [install] [INFO] ----------------------------------------------------------[INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources.0..]\. Failures: 0. Failures: 0.Better Builds with Maven In this example.0-client.class).geronimo. Early adopters of EJB3 may be interested to know how Maven supports EJB3.Building J2EE Applications The EJB plugin has several other configuration elements that you can use to suit your exact needs. Stay tuned! 99 . There is a working prototype of an EJB3 Maven plugin. Please refer to the EJB plugin documentation on. however in the future it will be added to the main EJB plugin after the specification is finalized.apache. At the time of writing.org/plugins/maven-ejb-plugin/. the EJB3 specification is still not final. samples. When writing EJBs it means you simply have to write your EJB implementation class and XDoclet will generate the Home interface.TradeHome" * @ejb.JMSException. Exception […] 100 .samples.geronimo.6. Building an EJB Module With Xdoclet If you’ve been developing a lot of EJBs (version 1 and 2) you have probably used XDoclet to generate all of the EJB interfaces and deployment descriptors for you.daytrader.geronimo.bean * display-name="TradeEJB" * name="TradeEJB" * view-type="remote" * impl-class-name= * "org.ejb. the container-specific deployment descriptors.daytrader.apache.interface * generate="remote" * remote-class= * "org.TradeBean" * @ejb. and the ejb-jar. Note that if you’re an EJB3 user.Trade" * […] */ public class TradeBean implements SessionBean { […] /** * Queue the Order identified by orderID to be processed in a * One Phase commit […] * * @ejb.xml descriptor.ejb. you can run the XDoclet processor to generate those files for you.home * generate="remote" * remote-class= * "org. Using XDoclet is easy: by adding Javadoc annotations to your classes.jms.Better Builds with Maven 4.apache.ejb.geronimo.transaction * type="RequiresNew" *[…] */ public void queueOrderOnePhase(Integer orderID) throws javax. the Remote and Local interfaces.samples.apache.interface-method * view-type="remote" * @ejb. you can safely skip this section – you won’t need it! Here’s an extract of the TradeBean session EJB using Xdoclet: /** * Trade Session EJB manages all Trading services * * @ejb.daytrader. this has to be run before the compilation phase occurs. This is achieved by using the Maven XDoclet plugin and binding it to the generate-sources life cycle phase.directory}/generated-sources/xdoclet"> <fileset dir="${project.java"></include> <include name="**/*MDB.Building J2EE Applications To demonstrate XDoclet.xml that configures the plugin: <plugin> <groupId>org. Figure 4-7: Directory structure for the DayTrader ejb module when using Xdoclet The other difference is that you only need to keep the *Bean. but you don’t need the ejb-jar.build.build.outputDirectory}/META-INF"/> </ejbdoclet> </tasks> </configuration> </execution> </executions> </plugin> 101 .1" destDir= "${project.java classes and remove all of the Home.sourceDirectory}"> <include name="**/*Bean.build.mojo</groupId> <artifactId>xdoclet-maven-plugin</artifactId> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>xdoclet</goal> </goals> <configuration> <tasks> <ejbdoclet verbose="true" force="true" ejbSpec="2.java"></include> </fileset> <homeinterface/> <remoteinterface/> <localhomeinterface/> <localinterface/> <deploymentdescriptor destDir="${project. As you can see in Figure 4-7. Since XDoclet generates source files. Here’s the portion of the pom. Now you need to tell Maven to run XDoclet on your project. create a copy of the DayTrader ejb module called ejb-xdoclet. Local and Remote interfaces as they’ll also get generated.xml file anymore as it’s going to be generated by Xdoclet.codehaus. the project’s directory structure is the same as in Figure 4-6. apache. In practice you can use any XDoclet task (or more generally any Ant task) within the tasks element.geronimo. It’s based on a new architecture but the tag syntax is backwardcompatible in most cases.geronimo. in the tasks element you use the ejbdoclet Ant task provided by the XDoclet project (for reference documentation see start INFO: Running <deploymentdescriptor/> Generating EJB deployment descriptor (ejb-jar.sourceforge. […] 10 janv.samples. 2006 16:53:50 xdoclet.directory}/generated-sources/xdoclet (you can configure this using the generatedSourcesDirectory configuration element).samples. […] 10 janv.ejb.ejb.Better Builds with Maven The XDoclet plugin is configured within an execution element.0 […] You might also want to try XDoclet2. However.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask. It also tells Maven that this directory contains sources that will need to be compiled when the compile phase executes. 2006 16:53:51 xdoclet.TradeBean'.xml). but here the need is to use the ejbdoclet task to instrument the EJB class files.apache. nor does it boast all the plugins that XDoclet1 has.AccountBean'.daytrader. […] [INFO] [ejb:ejb] [INFO] Building ejb daytrader-ejb-1.TradeBean'.geronimo. […] INFO: Running <remoteinterface/> Generating Remote interface for 'org.apache.ejb. 102 .XDocletMain start INFO: Running <homeinterface/> Generating Home interface for 'org.build.daytrader.daytrader.AccountBean'.samples.. Finally.samples.org/Maven2+Plugin. The plugin generates sources by default in ${project. 2006 16:53:50 xdoclet.codehaus.daytrader. In addition.html). 2006 16:53:51 xdoclet.XDocletMain start INFO: Running <localhomeinterface/> Generating Local Home interface for 'org. This is required by Maven to bind the xdoclet goal to a phase. the XDoclet plugin will also trigger Maven to download the XDoclet libraries from Maven’s remote repository and add them to the execution classpath.apache.XDocletMain start INFO: Running <localinterface/> Generating Local interface for 'org. […] 10 janv. it should be noted that XDoclet2 is a work in progress and is not yet fully mature.geronimo. There’s also a Maven 2 plugin for XDoclet2 at. To do so you're going to use the Maven plugin for Cargo.2 distribution from the specified URL and install it in ${installDir}.] <plugin> <groupId>org.net/ sourceforge/jboss/jboss-4. stopping.7. Edit the ejb/pom. In this example.. First.codehaus.dl. Cargo is a framework for manipulating containers. configuring them and deploying modules to them.. Deploying EJBs Now that you know how to build an EJB project. Netbeans.x (containerId element) and that you want Cargo to download the JBoss 4.build. you will also learn how to test it automatically. It offers generic APIs (Java.Building J2EE Applications 4.0.sourceforge. In the container element you tell the Cargo plugin that you want to use JBoss 4.. the JBoss container will be used.2.xml file and add the following Cargo plugin configuration: <build> <plugins> [. 103 .] See</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>jboss4x</containerId> <zipUrlInstaller> <url> for full details. Later. you can use the log element to specify a file where Cargo logs will go and you can also use the output element to specify a file where the container's output will be dumped. Maven 2. IntelliJ IDEA. The location where Cargo should install JBoss is a user-dependent choice and this is why the ${installDir} property was introduced.. you will learn how to deploy it. Ant. In order to build this project you need to create a Profile where you define the ${installDir} property's value.codehaus.log</output> <log>${project.log</log> [.build. Maven 1.directory}/cargo. Let's discover how you can automatically start a container and deploy your EJBs into it.directory}/jboss4x.) for performing various actions on containers such as starting. etc.0. For example: <container> <containerId>jboss4x</containerId> <output>${project.zip</url> <installDir>${installDir}</installDir> </zipUrlInstaller> </container> </configuration> </plugin> </plugins> </build> If you want to debug Cargo's execution. you will need to have Maven start the container automatically. in the Testing J2EE Applications section of this chapter. Nor should the content be shared with other Maven projects at large. The Cargo plugin does all the work: it provides a default JBoss configuration (using port 8080 for example). 104 . For example: <home>c:/apps/jboss-4. It's also possible to tell Cargo that you already have JBoss installed locally. [INFO] [talledLocalContainer] JBoss 4.. In this case.0. Of course..xml file defines a profile named vmassol. and the EJB JAR has been deployed. you can define a profile in the POM. the EJB JAR should first be created.. In that case replace the zipURLInstaller element with a home element. [INFO] Searching repository for plugin with prefix: 'cargo'. or in a settings... as the content of the Profile is user-dependent you wouldn't want to define it in the POM.2 starting. it detects that the Maven project is producing an EJB from the packaging element and it automatically deploys it when the container is started.2 started on port [8080] [INFO] Press Ctrl-C to stop the container.xml file.0. in a profiles...2] [INFO] [talledLocalContainer] JBoss 4.xml file.xml file. That's it! JBoss is running. Thus the best place is to create a profiles.Better Builds with Maven As explained in Chapter 3. in a settings. [INFO] ----------------------------------------------------------------------[INFO] Building DayTrader :: EJBs [INFO] task-segment: [cargo:start] [INFO] ----------------------------------------------------------------------[INFO] [cargo:start] [INFO] [talledLocalContainer] Parsed JBoss version = [4. so run mvn package to generate it.0.2</home> That's all you need to have a working build and to deploy the EJB JAR into JBoss. activated by default and in which the ${installDir} property points to c:/apps/cargo-installs.0. to stop the container call mvn cargo:stop. If the container was already started and you wanted to just deploy the EJB.org/Maven2+plugin. JSPs. Finally.Building J2EE Applications As you have told Cargo to download and install JBoss. let’s focus on building the DayTrader web module. Building a Web Application Project Now. and more. WEB-INF configuration files. Subsequent calls will be fast as Cargo will not download JBoss again. Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources 105 .8. etc. modifying various container parameters. you would run the cargo:deploy goal. 4. Check the documentation at. deploying on a remote machine. Cargo has many other configuration options such as the possibility of using an existing container installation. except that there is an additional src/main/webapp directory for locating Web application resources such as HTML pages. (see Figure 4-8). especially if you are on a slow connection. The layout is the same as for a JAR module (see the first two chapters of this book). the first time you execute cargo:start it will take some time. daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1. Depending on the main EJB JAR would also work.0.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1. you specify the required dependencies.4_spec</artifactId> <version>1. 106 .geronimo.apache. The reason you are building this web module after the ejb module is because the web module's servlets call the EJBs.samples. Therefore.samples. for example to prevent coupling.0</version> <scope>provided</scope> </dependency> </dependencies> </project> You start by telling Maven that it’s building a project generating a WAR: <packaging>war</packaging> Next.geronimo.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1. you need to add a dependency on the ejb module in web/pom. Therefore.specs</groupId> <artifactId>geronimo-j2ee_1.xml: <dependency> <groupId>org.geronimo.apache.samples. It’s always cleaner to depend on the minimum set of required classes.0</modelVersion> <parent> <groupId>org.apache. the servlets only need the EJB client JAR in their classpath to be able to call the EJBs.0</version> <type>ejb-client</type> </dependency> <dependency> <groupId>org.xml file: <project> <modelVersion>4. This is why you told the EJB plugin to generate a client JAR earlier on in ejb/pom.0</version> <type>ejb-client</type> </dependency> Note that you’re specifying a type of ejb-client and not ejb. This is because the servlets are a client of the EJBs.geronimo.0</version> </parent> <artifactId>daytrader-web</artifactId> <name>DayTrader :: Web Application</name> <packaging>war</packaging> <description>DayTrader Web</description> <dependencies> <dependency> <groupId>org.xml.apache.Better Builds with Maven As usual everything is specified in the pom. but it’s not necessary and would increase the size of the WAR file. mortbay. isn’t it? What happened is that the Jetty6 plugin realized the page was changed and it redeployed the Web application automatically.SelectChannelConnector"> <port>9090</port> <maxIdleTime>60000</maxIdleTime> </connector> </connectors> <userRealms> <userRealm implementation= "org.html. There are various configuration parameters available for the Jetty6 plugin such as the ability to define Connectors and Security realms.HashUserRealm"> <name>Test Realm</name> <config>etc/realm. Now imagine that you have an awfully complex Web application generation process. and so on.mortbay. The Jetty container automatically recompiled the JSP when the page was refreshed.nio.properties. For a reference of all configuration options see the Jetty6 plugin documentation at. possibly generating some files.xml configuration file using the jettyConfig configuration element.mortbay.xml file will be applied first.org/jetty6/mavenplugin/index.jetty. In that case anything in the jetty.jetty</groupId> <artifactId>maven-jetty6-plugin</artifactId> <configuration> […] <connectors> <connector implementation= "org. that you have custom plugins that do all sorts of transformations to Web application resource files. you would use: <plugin> <groupId>org.mortbay. It's also possible to pass in a jetty. Fortunately there’s a solution.security. By default the plugin uses the module’s artifactId from the POM. For example if you wanted to run Jetty on port 9090 with a user realm defined in etc/realm.Better Builds with Maven That’s nifty. 112 . The strategy above would not work as the Jetty6 plugin would not know about the custom actions that need to be executed to generate a valid Web application.jetty.properties</config> </userRealm> </userRealms> </configuration> </plugin> You can also configure the context under which your Web application is deployed by using the contextPath configuration element. war [INFO] [jetty6:run-exploded] [INFO] Configuring Jetty for project: DayTrader :: Web Application [INFO] Starting Jetty Server .SimpleLogger@78bc3b via org...xml and pom.0.Slf4jLog [INFO] Context path = /daytrader-web 2214 [main] INFO org.. [INFO] Copy webapp resources to C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.mortbay. To demonstrate.slf4j.xml file is modified. Then the plugin deploys the WAR file to the Jetty server and it performs hot redeployments whenever the WAR is rebuilt (by calling mvn package from another window. Then it deploys the unpacked Web application located in target/ (whereas the jetty6:run-war goal deploys the WAR file). The Jetty6 plugin also contains two goals that can be used in this situation: • jetty6:run-war: The plugin first runs the package phase which generates the WAR file.Started SelectChannelConnector @ 0.Building J2EE Applications The WAR plugin has an exploded goal which produces an expanded Web application in the target directory..log .. The plugin then watches the following files: WEB-INF/lib.impl.0.xml. 0 [main] INFO org. Calling this goal ensures that the generated Web application is the correct one.Logging to org.log .log.0 [INFO] Assembling webapp daytrader-web in C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1. WEB-INF/web.. any change to those files results in a hot redeployment. execute mvn jetty6:run-exploded goal on the web module: C:\dev\m2book\code\j2ee\daytrader\web>mvn jetty6:run-exploded [.. WEB-INF/classes.mortbay.0. [INFO] Scan complete at Wed Feb 15 11:59:00 CET 2006 [INFO] Starting scanner at interval of 10 seconds..0. • jetty6:run-exploded: The plugin runs the package phase as with the jetty6:runwar goal.0 [INFO] Generating war C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web1.] [INFO] [war:war] [INFO] Exploding webapp. 113 .mortbay.0:8080 [INFO] Scanning .war [INFO] Building war: C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web1. for example) or when the pom. codehaus. First. Restart completed.. so now the focus will be on deploying a packaged WAR to your target container.org/Containers).. Scanning .. edit the web module's pom.port>8280</cargo.. You're now ready for productive web development. Stopping webapp .Better Builds with Maven As you can see the WAR is first assembled in the target directory and the Jetty plugin is now waiting for changes to happen..codehaus.servlet. If you open another shell and run mvn package you'll see the following in the first shell's console: [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] Scan complete at Wed Feb 15 12:02:31 CET 2006 Calling scanner listeners .. Deploying Web Applications You have already seen how to deploy a Web application for in-place Web development in the previous section..10..xml file and add the Cargo configuration: <plugin> <groupId>org.port> </properties> </configuration> </configuration> </plugin> 114 . Reconfiguring webapp . This example uses the Cargo Maven plugin to deploy to any container supported by Cargo (see. This is very useful when you're developing an application and you want to verify it works on several containers.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>${containerId}</containerId> <zipUrlInstaller> <url>${url}</url> <installDir>${installDir}</installDir> </zipUrlInstaller> </container> <configuration> <properties> <cargo. Listeners completed. No more excuses! 4.servlet... Restarting webapp . There are two differences though: • Two new properties have been introduced (containerId and url) in order to make this build snippet generic. the containerId and url properties should be shared for all users of the build..sourceforge. However.apache.30/bin/ jakarta-tomcat-5.Building J2EE Applications As you can see this is a configuration similar to the one you have used to deploy your EJBs in the Deploying EJBs section of this chapter. Those properties will be defined in a Profile.30.org/dist/jakarta/tomcat-5/v5. add the following profiles to the web/pom.zip</url> </properties> </profile> <profile> <id>tomcat5x</id> <properties> <containerId>tomcat5x</containerId> <url> file. 115 .2.servlet. A cargo. • As seen in the Deploying EJBs section the installDir property is user-dependent and should be defined in a profiles. Thus. You could add as many profiles as there are containers you want to execute your Web application on..0. This is very useful if you have containers already running your machine and you don't want to interfere with them.xml file: [.zip</url> </properties> </profile> </profiles> </project> You have defined two profiles: one for JBoss and one for Tomcat and the JBoss profile is defined as active by default (using the activation element).net/sourceforge/jboss/jboss4.] </build> <profiles> <profile> <id>jboss4x</id> <activation> <activeByDefault>true</activeByDefault> </activation> <properties> <containerId>jboss4x</containerId> <url> element has been introduced to show how to configure the containers to start on port 8280 instead of the default 8080 port.dl. 0.remote..Better Builds with Maven Executing mvn install cargo:start generates the WAR. To deploy the DayTrader’s WAR to a running JBoss server on machine remoteserver and executing on port 80..hostname> <cargo.. you would need the following Cargo plugin configuration in web/pom.] [.2] [INFO] [talledLocalContainer] JBoss 4. once this is verified you'll want a solution to deploy your WAR into an integration platform...0. [INFO] [CopyingLocalDeployer] Deploying [C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.codehaus.username> <cargo.0.0.servlet.remote.. However.servlet.hostname>${remoteServer}</cargo..port>${remotePort}</cargo...] [INFO] [cargo:start] [INFO] [talledLocalContainer] Tomcat 5..username>${remoteUsername}</cargo... This is useful for development and to test that your code deploys and works.war] to [C:\[. [INFO] [talledLocalContainer] JBoss 4. One solution is to have your container running on that integration platform and to perform a remote deployment of your WAR to it. [INFO] [talledLocalContainer] Tomcat 5..cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>jboss4x</containerId> <type>remote</type> </container> <configuration> <type>runtime</type> <properties> <cargo..password> </properties> </configuration> </configuration> </plugin> 116 ..0.remote.password>${remotePassword}</cargo.30 started on port [8280] [INFO] Press Ctrl-C to stop the container.port> <cargo.remote.2 starting.]\Temp\cargo\50866\webapps]. starts the JBoss container and deploys the WAR into it: C:\dev\m2book\code\j2ee\daytrader\web>mvn install cargo:start [.0.30 starting..2 started on port [8280] [INFO] Press Ctrl-C to stop the container.xml: <plugin> <groupId>org. It’s time to package the server module artifacts (EJB and WAR) into an EAR for convenient deployment. Check the Cargo reference documentation for all details on deployments at file (see Figure 4-11).apache. • Several configuration properties (especially a user name and password allowed to deploy on the remote JBoss container) to specify all the details required to perform the remote deployment. the changes are: • A remote container and configuration type to tell Cargo that the container is remote and not under Cargo's management. Figure 4-11: Directory structure of the ear module As usual the magic happens in the pom. Start by defining that this is an EAR project by using the packaging element: <project> <modelVersion>4. Note that there was no need to specify a deployment URL as it is computed automatically by Cargo..0. Building an EAR Project You have now built all the individual modules. 4.11.0</version> </parent> <artifactId>daytrader-ear</artifactId> <name>DayTrader :: Enterprise Application</name> <packaging>ear</packaging> <description>DayTrader EAR</description> 117 .daytrader</groupId> <artifactId>daytrader</artifactId> <version>1..org/Deploying+to+a+running+container.codehaus.geronimo.xml file) for those user-dependent.xml file. All the properties introduced need to be declared inside the POM for those shared with other users and in the profiles.0</modelVersion> <parent> <groupId>org.xml file (or the settings. it solely consists of a pom. The ear module’s directory structure can't be any simpler.Building J2EE Applications When compared to the configuration for a local deployment above. Web modules. jar. 118 .geronimo.samples. define all of the dependencies that need to be included in the generated EAR: <dependencies> <dependency> <groupId>org.0</version> </dependency> </dependencies> Finally. the EAR plugin supports the following module types: ejb. sar and wsr.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.samples. and the J2EE version to use.geronimo. par.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1. the description to use.geronimo. and EJB modules.apache.apache. you need to configure the Maven EAR plugin by giving it the information it needs to automatically generate the application.0</version> <type>ejb</type> </dependency> <dependency> <groupId>org. ejb-client.apache. war.Better Builds with Maven Next. It is also necessary to tell the EAR plugin which of the dependencies are Java modules.samples.daytrader</groupId> <artifactId>daytrader-web</artifactId> <version>1.apache. rar. This includes the display name to use. At the time of writing.samples.xml deployment descriptor file. ejb3.0</version> <type>war</type> </dependency> <dependency> <groupId>org.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <version>1.geronimo. the contextRoot element is used for the daytrader-web module definition to tell the EAR plugin to use that context root in the generated application.geronimo. 119 .samples.geronimo.xml file.samples.samples.maven.daytrader</groupId> <artifactId>daytrader-web</artifactId> <contextRoot>/daytrader</contextRoot> </webModule> </modules> </configuration> </plugin> </plugins> </build> </project> Here.geronimo.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <webModule> <groupId>org.4</version> <modules> <javaModule> <groupId>org. only EJB client JARs are included when specified in the Java modules list.apache. it is often necessary to customize the inclusion of some dependencies such as shown in this example: <build> <plugins> <plugin> <groupId>org. However.apache.apache.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> <configuration> <displayName>Trade</displayName> <description> DayTrader Stock Trading Performance Benchmark Sample </description> <version>1.Building J2EE Applications By default.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <javaModule> <groupId>org.apache. all dependencies are included. or those with a scope of test or provided. You should also notice that you have to specify the includeInApplicationXml element in order to include the streamer and wsappclient libraries into the EAR. with the exception of those that are optional. By default. Run mvn install in daytrader/streamer..Better Builds with Maven It is also possible to configure where the JARs' Java modules will be located inside the generated EAR.org/plugins/maven-ear-plugin.] <defaultBundleDir>lib</defaultBundleDir> <modules> <javaModule> [: [.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> <bundleDir>lib</bundleDir> </javaModule> <javaModule> <groupId>org.. if you wanted to put the libraries inside a lib subdirectory of the EAR you would use the bundleDir element: <javaModule> <groupId>org. 120 . The streamer module's build is not described in this chapter because it's a standard build generating a JAR. For example..samples..samples..apache.] </javaModule> [.geronimo.apache.geronimo.] There are some other configuration elements available in the EAR plugin which you can find out by checking the reference documentation on. However the ear module depends on it and thus you'll need to have the Streamer JAR available in your local repository before you're able to run the ear module's build.apache.. Generating one [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.war] [INFO] Copying artifact [ejb:org.0] to [daytrader-streamer-1.0.jar] [INFO] Copying artifact [jar:org.0] to[daytrader-ejb-1.ear to C:\[..0.MF .0\daytrader-ear-1.xml [INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources.jar] [INFO] Copying artifact [ejb-client:org..samples.0. [INFO] [ear:ear] [INFO] Copying artifact [jar:org.apache.daytrader: daytrader-ejb:1.samples.geronimo.0] to [daytrader-web-1.daytrader: daytrader-streamer:1.ear [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.daytrader: daytrader-ejb:1.0] to [daytrader-ejb-1.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ear\1.jar] [INFO] Copying artifact [war:org.samples.apache.geronimo.0.samples.jar] [INFO] Could not find manifest file: C:\dev\m2book\code\j2ee\daytrader\ear\src\main\application\ META-INF\MANIFEST.daytrader: daytrader-wsappclient:1.daytrader: daytrader-web:1.0.]\.samples.0. run mvn install: C:\dev\m2book\code\j2ee\daytrader\ear>mvn install […] [INFO] [ear:generate-application-xml] [INFO] Generating application.geronimo.geronimo.0] to [daytrader-wsappclient-1.0-client.apache.geronimo.apache.apache.ear 121 .Building J2EE Applications To generate the EAR.0. jar</java> </module> <module> <java>daytrader-wsappclient-1. In this example.Better Builds with Maven You should review the generated application. A plan is an XML file containing configuration information such as how to map CMP entity beans to a specific database. Like any other container. it is recommended that you use an external plan file so that the deployment configuration is independent from the archives getting deployed. 4.0" encoding="UTF-8"?> <application xmlns=". The DayTrader application does not deploy correctly when using the JDK 5 or newer.jar</java> </module> <module> <web> <web-uri>daytrader-web-1.com/xml/ns/j2ee" xsi: <description> DayTrader Stock Trading Performance Benchmark Sample </description> <display-name>Trade</display-name> <module> <java>daytrader-streamer-1.sun.war</web-uri> <context-root>/daytrader</context-root> </web> </module> <module> <ejb>daytrader-ejb-1. you'll deploy the DayTrader EAR into Geronimo.xml to prove that it has everything you need: <?xml version="1.com/xml/ns/j2ee" xmlns:xsi=". 122 . Deploying EARs follows the same principle.sun. The next section will demonstrate how to deploy this EAR into a container.0. how to map J2EE resources in the container.com/xml/ns/j2ee/application_1_4.sun.12. Geronimo is somewhat special among J2EE containers in that deploying requires calling the Deployer tool with a deployment plan. Deploying a J2EE Application You have already learned how to deploy EJBs and WARs into a container individually.jar</ejb> </module> </application> This looks good. enabling the Geronimo plan to be modified to suit the deployment environment.xsd" version="1.w3.0. You'll need to use the JDK 1. Geronimo also supports having this deployment descriptor located within the J2EE archives you are deploying. However.4 for this section and the following.0.0. etc. xml configuration snippet: <plugin> <groupId>org.0.codehaus.xml</plan> </properties> </deployable> </deployables> </deployer> </configuration> </plugin> 123 ..cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url> J2EE Applications To get started.xml. You would need the following pom.apache. store the deployment plan in ear/src/main/deployment/geronimo/plan.0/ geronimo-tomcat-j2ee-1. xml </argument> </arguments> </configuration> </plugin> You may have noticed that you're using a geronimo. As you've seen in the EJB and WAR deployment sections above and in previous chapters it's possible to create properties that are defined either in a properties section of the POM or in a Profile.jar –user system –password manager deploy C:\dev\m2book\code\j2ee\daytrader\ear\target/daytrader-ear-1. in this section you'll learn how to use the Maven Exec plugin. You'll use it to run the Geronimo Deployer tool to deploy your EAR into a running Geronimo container.0-tomcat/bin/deployer.jar</argument> <argument>--user</argument> <argument>system</argument> <argument>--password</argument> <argument>manager</argument> <argument>deploy</argument> <argument> ${project. Modify the ear/pom. or when Cargo doesn't support the container you want to deploy into.build. the Exec plugin will transform the executable and arguments elements above into the following command line: java -jar c:/apps/geronimo-1.13 Testing J2EE Applications).build. put the following profile in a profiles.xml file: <profiles> <profile> <id>vmassol</id> <properties> <geronimo.xml to configure the Exec plugin: <plugin> <groupId>org. Even though it's recommended to use a specific plugin like the Cargo plugin (as described in 4.Better Builds with Maven However.codehaus. As the location where Geronimo is installed varies depending on the user.0.ear </argument> <argument> ${basedir}/src/main/deployment/geronimo/plan.0-tomcat</geronimo. learning how to use the Exec plugin is useful in situations where you want to do something slightly different.home>c:/apps/geronimo-1.directory}/${project.home> </properties> </profile> </profiles> At execution time.home}/bin/deployer.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <configuration> <executable>java</executable> <arguments> <argument>-jar</argument> <argument>${geronimo.home property that has not been defined anywhere.xml 124 .ear C:\dev\m2book\code\j2ee\daytrader\ear/src/main/deployment/geronimo/plan.xml or settings. This plugin can execute any process.finalName}. .jar [INFO] [INFO] `-> daytrader-wsappclient-1. Since Geronimo 1.0-SNAPSHOT.0/car If you need to undeploy the DayTrader version that you've built above you'll use the “Trade” identifier instead: C:\apps\geronimo-1.0-tomcat\bin>deploy undeploy Trade 125 .war [INFO] [INFO] `-> daytrader-ejb-1.0-SNAPSHOT.0-SNAPSHOT. start your preinstalled version of Geronimo and run mvn exec:exec: C:\dev\m2book\code\j2ee\daytrader\ear>mvn exec:exec [. You will need to make sure that the DayTrader application is not already deployed before running the exec:exec goal or it will fail.jar [INFO] [INFO] `-> daytrader-streamer-1.0-tomcat\bin>deploy stop geronimo/daytrader-derby-tomcat/1.0 comes with the DayTrader application bundled.. you should first stop it.jar [INFO] [INFO] `-> TradeDataSource [INFO] [INFO] `-> TradeJMS You can now access the DayTrader application by opening your browser to.] [INFO] [exec:exec] [INFO] Deployed Trade [INFO] [INFO] `-> daytrader-web-1.Building J2EE Applications First.0-SNAPSHOT. by creating a new execution of the Exec plugin or run the following: C:\apps\geronimo-1. Figure 4-13: The new functional-tests module amongst the other DayTrader modules You need to add this module to the list of modules in the daytrader/pom.Better Builds with Maven 4.xml so that it's built along with the others. see Chapter 7. modify.13. Testing J2EE Application In this last section you'll learn how to automate functional testing of the EAR built previously. At the time of writing. Maven only supports integration and functional testing by creating a separate module. create a functional-tests module as shown in Figure 4-13. To achieve this. so you can define a profile to build the functional-tests module only on demand. 126 . Functional tests can take a long time to execute. For example. take a look in the functional-tests module itself. but running mvn install -Pfunctional-test will. • The Geronimo deployment Plan file is located in src/deployment/geronimo/plan.xml. Figure 4-14: Directory structure for the functional-tests module As this module does not generate an artifact. the packaging should be defined as pom. Figure 4-1 shows how it is organized: • Functional tests are put in src/it/java.Building J2EE Applications This means that running mvn install will not build the functional-tests module. so these need to be configured in the functional-tests/pom. the compiler and Surefire plugins are not triggered during the build life cycle of projects with a pom packaging.xml file: 127 . • Classpath resources required for the tests are put in src/it/resources (this particular example doesn't have any resources). Now. However. samples.apache.] </plugins> </build> </project> 128 .plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <goals> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.maven.geronimo.0.maven...apache.Better Builds with Maven <project> <modelVersion>4.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1..apache.apache.0-SNAPSHOT</version> <type>ear</type> <scope>provided</scope> </dependency> [.geronimo.0-SNAPSHOT</version> </parent> <artifactId>daytrader-tests</artifactId> <name>DayTrader :: Functional Tests</name> <packaging>pom</packaging> <description>DayTrader Functional Tests</description> <dependencies> <dependency> <groupId>org.samples.daytrader</groupId> <artifactId>daytrader-ear</artifactId> <version>1.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <executions> <execution> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> [.0</modelVersion> <parent> <groupId>org..] </dependencies> <build> <testSourceDirectory>src/it</testSourceDirectory> <plugins> <plugin> <groupId>org. you'll bind the Cargo plugin's start and deploy goals to the preintegration-test phase and the stop goal to the postintegration-test phase. thus ensuring the proper order of execution.codehaus. you will usually utilize a real database in a known state.cargo</groupId> <artifactId>cargo-ant</artifactId> <version>0. and it is started automatically by Geronimo.8</version> <scope>test</scope> </dependency> </dependencies> 129 .8</version> <scope>test</scope> </dependency> <dependency> <groupId>org. It also ensures that the daytrader-ear module is built before running the functional-tests build when the full DayTrader build is executed from the toplevel in daytrader/. In addition. You're going to use the Cargo plugin to start Geronimo and deploy the EAR into it. To set up your database you can use the DBUnit Java API (see. This is because the EAR artifact is needed to execute the functional tests.xml file: <project> [. For integration and functional tests. You may be asking how to start the container and deploy the DayTrader EAR into it.codehaus.Building J2EE Applications As you can see there is also a dependency on the daytrader-ear module. However. so DBUnit is not needed to perform any database operations.. Start by adding the Cargo dependencies to the functional-tests/pom.net/). there's a DayTrader Web page that loads test data into the database.] <dependency> <groupId>org. Derby is the default database configured in the deployment plan.cargo</groupId> <artifactId>cargo-core-uberjar</artifactId> <version>0. As the Surefire plugin's test goal has been bound to the integration-test phase above.sourceforge.] <dependencies> [.. in the case of the DayTrader application... 0.apache. It is configured to deploy the EAR using the Geronimo Plan file.Better Builds with Maven Then create an execution element to bind the Cargo plugin's start and deploy goals: <build> <plugins> [.daytrader</groupId> <artifactId>daytrader-ear</artifactId> <type>ear</type> <properties> <plan>${basedir}/src/deployment/geronimo/plan.] <plugin> <groupId>org..samples.] The deployer element is used to configure the Cargo plugin's deploy goal.xml</plan> </properties> <pingURL></pingURL> </deployable> </deployables> </deployer> </configuration> </execution> [.codehaus...0/ geronimo-tomcat-j2ee-1..geronimo. In addition.apache.org/dist/geronimo/1. thus ensuring that the EAR is ready for servicing when the tests execute. 130 .cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <wait>false</wait> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url>. a pingURL element is specified so that Cargo will ping the specified URL till it responds. 8.net/) to call a Web page from the DayTrader application and check that it's working.1</version> <scope>test</scope> </dependency> 131 .6. with both defined using a test scope.. by wrapping it in a JUnit TestSetup class to start the container in setUp() and stop it in tearDown().1</version> <scope>test</scope> </dependency> <dependency> <groupId>httpunit</groupId> <artifactId>httpunit</artifactId> <version>1. as you're only using them for testing: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. Add the JUnit and HttpUnit dependencies. An alternative to using Cargo's Maven plugin is to use the Cargo Java API directly from your tests.. You're going to use the HttpUnit testing framework ( J2EE Applications Last. The only thing left to do is to add the tests in src/it/java.] <execution> <id>stop-container</id> <phase>post-integration-test</phase> <goals> <goal>stop</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> The functional test scaffolding is now ready. add an execution element to bind the Cargo plugin's stop goal to the post-integration-test phase: [. response. public class FunctionalTest extends TestCase { public void testDisplayMainPage() throws Exception { WebConversation wc = new WebConversation(). WebRequest request = new GetMethodWebRequest( "").Better Builds with Maven Next. In addition you've discovered how to automate starting and stopping containers. Errors: 0.geronimo. 132 . and more.*.geronimo. how to gather project health information from your builds. Summary You have learned from chapters 1 and 2 how to build any type of application and this chapter has demonstrated how to build J2EE applications.getResponse(request). import junit.14. type mvn install and relax: C:\dev\m2book\code\j2ee\daytrader\functional-tests>mvn install [. WebResponse response = wc.. add a JUnit test class called src/it/java/org/apache/geronimo/samples/daytrader/FunctionalTest. } } It's time to reap the benefits from your build.apache. Change directory into functional-tests. assertEquals("DayTrader".apache.samples. At this stage you've pretty much become an expert Maven user! The following chapters will show even more advanced topics such as how to write Maven plugins.FunctionalTest [surefire] Tests run: 1.httpunit. Time elapsed: 0.framework.getTitle()).daytrader.*.meterware. the URL is called to verify that the returned page has a title of “DayTrader”: package org.daytrader.531 sec [INFO] [cargo:stop {execution: stop-container}] 4..samples. import com.] effectively set up Maven in a team. deploying J2EE archives and implementing functional tests. Failures: 0. In the class.java. Richard Feynman 133 . Developing Custom Maven Plugins Developing Custom Maven Plugins This chapter covers: • • • • • How plugins execute in the Maven life cycle Tools and languages available to aid plugin developers Implementing a basic plugin using Java and Ant Working with dependencies. reality must take precedence over public relations.5. for Nature cannot be fooled. source directories. and resources from a plugin Attaching an artifact to the project For a successful technology. . This chapter will focus on the task of writing custom plugins. the common theme for these tasks is the function of compiling code. if your project requires tasks that have no corresponding plugin. it traverses the phases of the life cycle in order. This association of mojos to phases is called binding and is described in detail below. the maven-compiler-plugin incorporates two mojos: compile and testCompile. Additionally. When a number of mojos perform related tasks. the plugins provided “out of the box” by Maven are enough to satisfy the needs of most build processes (see Appendix A for a list of default plugins used to build a typical project). A mojo is the basic unit of work in the Maven application. injecting runtime parameter information. Maven's core APIs handle the “heavy lifting” associated with loading project definitions (POMs). the build process for a project is comprised of set of mojos executing in a particular. Recall that a mojo represents a single task in the build process. of the build process are executed by the set of plugins associated with the phases of a project's build life-cycle. including a review of plugin terminology and the basic mechanics of the the Maven plugin framework. called phases. This makes Maven's plugin framework extremely important as a means of not only building a project. Such supplemental plugins can be found at the Apache Maven project. Correspondingly. executing all the associated mojos at each phase of the build. When Maven executes a build. 134 . resolving dependencies. it is still likely that a plugin already exists to perform this task. It starts by describing fundamentals. Maven is actually a platform that executes plugins within a build life cycle. it will discuss the various ways that a plugin can interact with the Maven build environment and explore some examples. This ordering is called the build life cycle. Packaging these mojos inside a single plugin provides a consistent access mechanism for users. but also extending a project's build to incorporate new functionality. In this case. and is defined as a set of task categories. With most projects.2. it enables these mojos to share common code more easily.Better Builds with Maven 5.1. The actual functional tasks. 5. refer to the Plugin Matrix. it may be necessary to write a custom plugin to integrate these tasks into the build life cycle. they are packaged together into a plugin. Just like Java packages. Finally. It executes an atomic build task that represents a single step in the build process. resolving project dependencies. the loosely affiliated CodeHaus Mojo project. and organizing and running plugins. in order to perform the tasks necessary to build a project. and more. Each mojo can leverage the rich infrastructure provided by Maven for loading projects. or even at the Web sites of third-party tools offering Maven integration by way of their own plugins (for a list of some additional plugins available for use. well-defined order. the chapter will cover the tools available to simplify the life of the plugin developer. plugins provide a grouping mechanism for multiple mojos that serve similar functions within the build life cycle. For example. or work. let's begin by reviewing the terminology used to describe a plugin and its role in the build. Even if a project requires a special task to be performed. However. Introduction As described in Chapter 2. A Review of Plugin Terminology Before delving into the details of how Maven plugins function and how they are written. such as integration with external tools and systems. From there. allowing shared configuration to be added to a single section of the POM. Each execution can specify a separate phase binding for its declared set of mojos. to ensure compatibility with other plugins. dependency management. successive phases can make assumptions about what work has taken place in the previous phases. a mojo can pick and choose what elements of the build state it requires in order to execute its task. While Maven does in fact define three different lifecycles. Therefore. you will also need a good understanding of how plugins are structured and how they interact with their environment. since they often perform tasks for the POM maintainer. before a mojo can execute. Using the life cycle. Such mojos may be meant to check out a project from version control. Think of these mojos as tangential to the the Maven build process. mojos have a natural phase binding which determines when a task should execute within the life cycle. As a plugin developer. parameter injection and life-cycle binding form the cornerstone for all mojo development. so be sure to check the documentation for a mojo before you re-bind it. Bootstrapping into Plugin Development In addition to understanding Maven's plugin terminology. which is used for the majority of build activities (the other two life cycles deal with cleaning a project's work directory and generating a project web site). which correspond to the phases of the build life cycle. 5. The Plugin Framework Maven provides a rich framework for its plugins.Developing Custom Maven Plugins Together with phase binding. Indeed. or aid integration with external development tools. and as such. 5. Maven also provides a welldefined procedure for building a project's sources into a distributable archive. Using Maven's parameter injection infrastructure. While mojos usually specify a default phase binding. the discussion in this chapter is restricted to the default life cycle. Together. A discussion of all three build life cycles can be found in Appendix A. a mojo may be designed to work outside the context of the build life cycle. in addition to determining its appropriate phase binding.1. a given mojo can even be bound to the life cycle multiple times during a single build. sequencing the various build operations. plus much more. the ordered execution of Maven's life cycle gives coherence to the build process. or even create the directory structure for a new project. However. Binding to a phase of the Maven life cycle allows a mojo to make assumptions based upon what has happened in the preceding phases. it may still require that certain activities have already been completed. using the plugin executions section of the project's POM. 135 . Since phase bindings provide a grouping mechanism for mojos within the life cycle. In some cases. Most mojos fall into a few general categories.3.3. will not have a life-cycle phase binding at all since they don't fall into any natural category within a typical build process. including a well-defined build life cycle. it is important to provide the appropriate phase binding for your mojos. you must understand the mechanics of life-cycle phase binding and parameter injection. and parameter resolution and injection. These mojos are meant to be used by way of direct invocation. Understanding this framework will enable you to extract the Maven build-state information that each mojo requires. they can be bound to any phase in the life cycle. As a result. Then. the compile mojo from the maven-compiler-plugin will compile the source code into binary class files in the output directory. Indeed. This is not a feature of the framework. First.. is often as important as the modifications made during execution itself. then two additional mojos will be triggered to handle unit testing. As a specific example of how plugins work together through the life cycle. 136 . determining when not to execute. but until now they had nothing to do and therefore. providing functions as varied as deployment into the repository system. If this basic Maven project also includes source code for unit tests. did not execute. many more plugins can be used to augment the default life-cycle definition. the jar mojo from the maven-jarplugin will harvest these class files and archive them into a jar file. validation of project content. each of the resource-related mojos will discover this lack of non-code resources and simply opt out without modifying the build in any way. Maven will execute a default life cycle for the 'jar' packaging. Instead. at least two of the above mojos will be invoked. The testCompile mojo from the maven-compiler-plugin will compile the test sources. This level of extensibility is part of what makes Maven so powerful. generation of the project's website.Better Builds with Maven Participation in the build life cycle Most plugins consist entirely of mojos that are bound at various phases in the life cycle according to their function in the build process. Depending on the needs of a given project. Since our hypothetical project has no “non-code” resources. Maven's plugin framework ensures that almost anything can be integrated into the build life cycle. then the test mojo from the maven-surefire-plugin will execute those compiled tests. but a requirement of a well-designed mojo. and much more. none of the mojos from the maven-resources-plugin will be executed. These mojos were always present in the life-cycle definition. Only those mojos with tasks to perform are executed during this build. During this build process. In good mojo design. consider a very basic Maven build: a project with source code that should be compiled and archived into a jar file for redistribution. Maven allows mojos to specify parameters whose values are extracted from the build state using expressions. and the resulting value is injected into the mojo. see Appendix A. Environment information – which is more static. they require information about the state of the current build. At runtime. a mojo that applies patches to the project source code will need to know where to find the project source and patch files. under the path /META-INF/maven/plugin. along with any system properties that were provided when Maven was launched. the expression associated with a parameter is resolved against the current build state.Developing Custom Maven Plugins Accessing build information In order for mojos to execute effectively. 137 .compileSourceRoots} Then. how do you instruct Maven to instantiate a given mojo in the first place? The answers to these questions lie in the plugin descriptor. see Appendix A.xml. and consists of the user. It contains information about the mojo's implementation class (or its path within the plugin jar). until now you have not seen exactly how a life-cycle binding occurs. For example. and once resolved. That is to say. Using the correct parameter expressions.and machinelevel Maven settings. how do you associate mojo parameters with their expression counterparts. and the mechanism for injecting the parameter value into the mojo instance. This information comes in two categories: • Project information – which is derived from the project POM. using a language-appropriate mechanism. and what methods Maven uses to extract mojo parameters from the build state. each declared mojo parameter includes information about the various expressions used to resolve its value. • To gain access to the current build state. in addition to any programmatic modifications made by previous mojo executions. thereby avoiding traversal of the entire build-state object graph. the set of parameters the mojo declares. whether it is required for the mojo's execution. the expression to retrieve that information might look as follows: ${patchDirectory} For more information about which mojo expressions are built into Maven. This mojo would retrieve the list of source directories from the current build information using the following expression: ${project. The descriptor is an XML file that informs Maven about the set of mojos that are contained within the plugin. and more. The Maven plugin descriptor is a file that is embedded in the plugin jar archive. whether it is editable. The plugin descriptor Though you have learned about binding mojos to life-cycle phases and resolving parameter values using associated expressions. assuming the patch directory is specified as mojo configuration inside the POM. the life-cycle phase to which the mojo should be bound. Within this descriptor. For the complete plugin descriptor syntax. a mojo can keep its dependency list to a bare minimum. how do you instruct Maven to inject those values into the mojo instance? Further. Writing a plugin descriptor by hand demands that plugin developers understand low-level details about the Maven plugin framework – details that the developer will not use. the format used to write a mojo's metadata is dependent upon the language in which the mojo is implemented. it consists of a framework library which is complemented by a set of provider libraries (generally. This metadata is embedded directly in the mojo's source code where possible. it's a simple case of providing special javadoc annotations to identify the properties and parameters of the mojo. this flexibility comes at a price. one per supported mojo language). By abstracting many of these details away from the plugin developer. to generate the plugin descriptor). • Of course. For example. POM configurations.2.verbose}" default-value="false" */ private boolean verbose. Plugin Development Tools To simplify the creation of plugin descriptors. and direct invocations (as from the command line). 138 . Maven's development tools expose only relevant specifications in a format convenient for a given plugin's implementation language. except when configuring the descriptor. so it can be referenced from lifecycle mappings. In short. However. the maven-plugin-plugin simply augments the standard jar life cycle mentioned previously as a resource-generating step (this means the standard process of turning project sources into a distributable jar archive is modified only slightly. it uses a complex syntax.Better Builds with Maven The plugin descriptor is very powerful in its ability to capture the wiring information for a wide variety of mojos. Maven provides plugin tools to parse mojo metadata from a variety of formats. adding any other plugin-level metadata through its own configuration (which can be modified in the plugin's POM). Maven's plugindevelopment tools remove the burden of maintaining mojo metadata by hand. This is where Maven's plugin development tools come into play. The clean mojo also defines the following: /** * Be verbose in the debug log-level? * * @parameter expression="${clean. 5. These plugindevelopment tools are divided into the following two categories: • The plugin extractor framework – which knows how to parse the metadata formats for every language supported by Maven. The maven-plugin-plugin – which uses the plugin extractor framework. To accommodate the extensive variability required from the plugin descriptor. and orchestrates the process of extracting metadata from mojo implementations. the clean mojo in the maven-cleanplugin provides the following class-level javadoc annotation: /** * @goal clean */ public class CleanMojo extends AbstractMojo This annotation tells the plugin-development tools the mojo's name.3. and its format is specific to the mojo's implementation language. This framework generates both plugin documentation and the coveted plugin descriptor. Using Java. But consider what would happen if the default value you wanted to inject contained a parameter expression. consider the following field annotation from the resources mojo in the maven-resources-plugin: /** * Directory containing the classes. The first specifies that this parameter's default value should be set to false. it's implicit when using the @parameter annotation.build. it might seem counter-intuitive to initialize the default value of a Java field using a javadoc annotation. the underlying principles remain the same. For a complete list of javadoc annotations available for specifying mojo metadata. which references the output directory for the current project. these annotations are specific to mojos written in Java. This parameter annotation also specifies two attributes. 139 . especially when you could just declare the field as follows: private boolean verbose = false. expression and default-value.File instance.io. * * @parameter default-value="${project. namely the java. then the mechanism for specifying mojo metadata such as parameter definitions will be different. The second specifies that this parameter can also be configured from the command line as follows: -Dclean. it's impossible to initialize the Java field with the value you need. When the mojo is instantiated. However. this value is resolved based on the POM and injected into this field. Since the plugin tools can also generate documentation about plugins based on these annotations. At first. it's a good idea to consistently specify the parameter's default value in the metadata.Developing Custom Maven Plugins Here.outputDirectory}" */ private File classesDirectory. like Ant. For instance. If you choose to write mojos in another language. Remember. it specifies that this parameter can be configured from the POM using: <configuration> <verbose>false</verbose> </configuration> You may notice that this configuration name isn't explicitly specified in the annotation. rather than in the Java field initialization code.verbose=false Moreover. see Appendix A. the annotation identifies this field as a mojo parameter. In this case. This project can be found in the source code that accompanies this book. you risk confusing the issue at hand – namely. In these cases. in certain cases you may find it easier to use Ant scripts to perform build tasks.3. the specific snapshot versions of dependencies used in the build. and so on. 5. For many mojo developers. Ant-based plugins can consist of multiple mojos mapped to a single build script. and minimizes the number of dependencies you will have on Maven's core APIs.. To facilitate these examples. which is used to read and write build information metadata files. individual mojos each mapped to separate scripts. Whatever language you use. This is especially important during migration. when translating a project build from Ant to Maven (refer to Chapter 8 for more discussion about migrating from Ant to Maven). the particular feature of the mojo framework currently under discussion. this technique also works well for Beanshell-based mojos. Otherwise. this chapter will also provide an example of basic plugin development using Ant. Ant. For example. and because many Maven-built projects are written in Java. it's important to keep the examples clean and relatively simple. In addition. Maven lets you select pieces of the build state to inject as mojo parameters. During the early phases of such a migration. or any combination thereof. Maven can accommodate mojos written in virtually any language. Therefore. due to the migration value of Ant-based mojos when converting a build to Maven. A Note on the Examples in this Chapter When learning how to interact with the different aspects of Maven from within a mojo. called buildinfo. Maven can wrap an Ant build target and use it as if it were a mojo. Plugin parameters can be injected via either field reflection or setter methods. You can install it using the following simple command: mvn install 140 . Since Java is currently the easiest language for plugin development. To make Ant scripts reusable. mojo mappings and parameter definitions are declared via an associated metadata file.Better Builds with Maven Choose your mojo implementation language Through its flexible plugin descriptor format and invocation framework. Java is the language of choice. the examples in this chapter will focus on a relatively simple problem space: gathering and publishing information about a particular build. Since it provides easy reuse of third-party APIs from within your mojo. Simple javadoc annotations give the plugin processing plugin (the maven-plugin-plugin) the instructions required to generate a descriptor for your mojo. Maven currently supports mojos written in Java. Since Beanshell behaves in a similar way to standard Java. it is often simpler to wrap existing Ant build targets with Maven mojos and bind them to various phases in the life cycle. this chapter will focus primarily on plugin development in this language.3. and Beanshell. it also provides good alignment of skill sets when developing mojos from scratch. Such information might include details about the system environment. you will need to work with an external project. However. which will be deployed to the Maven repository system. since it can have a critical effect on the build process and the composition of the resulting Guinea Pig artifacts. reusable utility in many different scenarios.4.1. When triggered. by separating the generator from the Maven binding code. 5. Prerequisite: Building the buildinfo generator project Before writing the buildinfo plugin. this approach encapsulates an important best practice.4. Here. and take advantage of a single. consider a case where the POM contains a profile. providing a thin adapter layer that allows the generator to be run from a Maven build. This development effort will have the task of maintaining information about builds that are deployed to the development repository. perform the following steps: cd buildinfo mvn install 141 . eventually publishing it alongside the project's artifact in the repository for future reference (refer to Chapter 7 for more details on how teams use Maven). BuildInfo Example: Capturing Information with a Java Mojo To begin. The buildinfo plugin is a simple wrapper around this generator. As a side note. In addition to simply capturing build-time information. and this dependency is injected by one of the aforementioned profiles. Capturing this information is key. you are free to write any sort of adapter or front-end code you wish. This information should capture relevant details about the environment used to build the Guinea Pig artifacts. which will be triggered by the value of a given system property – say. this profile adds a new dependency on a Linux-specific library.name is set to the value Linux (for more information on profiles. called Guinea Pig. then the value of the triggering system property – and the profile it triggers – could reasonably determine whether the build succeeds or fails. When this profile is not triggered. Developing Your First Mojo For the purposes of this chapter. Therefore. this dependency is used only during testing. To build the buildinfo generator library. and has no impact on transitive dependencies for users of this project. if the system property os. the values of system properties used in the build are clearly very important. for the purposes of debugging. which allows the build to succeed in that environment.Developing Custom Maven Plugins 5. refer to Chapter 3). it makes sense to publish the value of this particular system property in a build information file so that others can see the aspects of the environment that affected this build. you will look at the development effort surrounding a sample project. you will need to disseminate the build to the rest of the development team. For simplicity. a default profile injects a dependency on a windows-specific library. you must first install the buildinfo generator library into your Maven local repository. If you have a test dependency which contains a defect. you'll find a basic POM and a sample mojo. It can be found in the plugin's project directory.Better Builds with Maven Using the archetype plugin to generate a stub plugin project Now that the buildinfo generator library has been installed. 142 .mergere.] /** * Write environment information for the current build to file.systemProperties}" */ private String systemProperties. you will need to modify the POM as follows: • Change the name element to Maven BuildInfo Plugin. Once you have the plugin's project structure in place. writing your custom mojo is simple. fairly simple Java-based mojo: [. • Remove the url element. Inside. as you know more about your mojos' dependencies. you should remove the sample mojo. Finally. This is a result of the Velocity template. simply execute the following: mvn archetype:create -DgroupId=com. interacting with Maven's own plugin parameter annotations. /** * The location to write the buildinfo file. it's helpful to jump-start the plugin-writing process by using Maven's archetype plugin to create a simple stub project from a standard pluginproject template. used to generate the plugin source code. For the purposes of this plugin..build. You will modify the POM again later.plugins \ -DartifactId=maven-buildinfo-plugin \ -DarchetypeArtifactId=maven-archetype-mojo When you run this command. * This is a comma-delimited list. since this plugin doesn't currently have an associated web site. The mojo You can handle this scenario using the following. This message does not indicate a problem. * @parameter expression="${buildinfo. under the following path: src\main\java\com\mergere\mvnbook\plugins\MyMojo. However.mvnbook. you're likely to see a warning message saying “${project. To generate a stub plugin project for the buildinfo plugin. since you will be creating your own mojo from scratch.. * @goal extract * @phase package */ public class WriteBuildInfoMojo extends AbstractMojo { /** * Determines which system properties are added to the file.java. this simple version will suffice for now.directory} is not a valid reference”. This will create a project with the standard layout under a new subdirectory called mavenbuildinfo-plugin within the current working directory. } } } While the code for this mojo is fairly straightforward. Properties sysprops = System.artifactId}-${project." ). outputFile ).outputDirectory}/${project.getProperty( key.split( ". In the class-level javadoc comment.trim(). BuildInfoConstants. } } try { BuildInfoUtils.addSystemProperty( key.length. it's worthwhile to take a closer look at the javadoc annotations. value ). i++ ) { String key = keys[i].build. Reason: " + e. for ( int i = 0.writeXml( buildInfo. if ( systemProperties != null ) { String[] keys = systemProperties. } catch ( IOException e ) { throw new MojoExecutionException( "Error writing buildinfo XML file.MISSING_INFO_PLACEHOLDER ). String value = sysprops.getProperties(). buildInfo.xml" * @required */ private File outputFile. e ). i < keys.outputFile}" defaultvalue="${project.Developing Custom Maven Plugins * @parameter expression="${buildinfo.version}buildinfo. public void execute() throws MojoExecutionException { BuildInfo buildInfo = new BuildInfo().getMessage(). there are two special annotations: /** * @goal extract * @phase package */ 143 . Therefore. you can specify the name of this parameter when it's referenced from the command line. you have several field-level javadoc comments.build. as follows: localhost $ mvn buildinfo:extract \ -Dbuildinfo.Better Builds with Maven The first annotation. This is where the expression attribute comes into play. it makes sense to execute this mojo in the package phase. Each offers a slightly different insight into parameter specification. the mojo cannot function unless it knows where to write the build information file. 144 . since you have more specific requirements for this parameter. using several expressions to extract project information on-demand.version}buildinfo. which are used to specify the mojo's parameters. To ensure that this parameter has a value. If this parameter has no value when the mojo is configured. @goal. you can see why the normal Java field initialization is not used. as execution without an output file would be pointless.version. The default output path is constructed directly inside the annotation. Take another look: /** * The location to write the buildinfo file. you may want to allow a user to specify which system properties to include in the build information file.systemProperties=java. However.xml" * * @required */ In this case. the outputFile parameter presents a slightly more complex example of parameter annotation. In this case. Using the expression attribute.artifactId}-${project. the mojo uses the @required annotation. the complexity is justified. with no attributes. The second annotation tells Maven where in the build life cycle this mojo should be executed. In this example. In this case. consider the parameter for the systemProperties variable: /** * @parameter expression="${buildinfo. In general.dir Finally. In addition. you're collecting information from the environment with the intent of distributing it alongside the main project artifact in the repository.systemProperties}" */ This is one of the simplest possible parameter specifications. Aside from the class-level comment. When you invoke this mojo. Using the @parameter annotation by itself. you want the mojo to use a certain value – calculated from the project's information – as a default value for this parameter. tells the plugin tools to treat this class as a mojo named extract.outputFile}" defaultvalue="${project.user. the expression attribute allows you to specify a list of system properties on-the-fly. First. * * @parameter expression="${buildinfo. However. the build will fail with an error. so they will be considered separately. you will use this name. will allow this mojo field to be configured using the plugin configuration specified in the POM. so it will be ready to attach to the project artifact.directory}/${project. attaching to the package phase also gives you the best chance of capturing all of the modifications made to the build state before the jar is produced. 145 . This mapping is a slightly modified version of the one used for the jar packaging.mergere. which simply adds plugin descriptor extraction and generation to the build process.plugins</groupId> <artifactId>maven-buildinfo-plugin</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>com.0.0</modelVersion> <groupId>com.0-SNAPSHOT</version> </dependency> </dependencies> </project> This POM declares the project's identity and its two dependencies. you can construct an equally simple POM which will allow you to build the plugin. note the packaging – specified as maven-plugin – which means that this plugin build will follow the maven-plugin life-cycle mapping. as follows: <project> <modelVersion>4. which provides the parsing and formatting utilities for the build information file.maven</groupId> <artifactId>maven-plugin-api</artifactId> <version>2.Developing Custom Maven Plugins The plugin POM Once the mojo has been written.0-SNAPSHOT</version> <packaging>maven-plugin</packaging> <dependencies> <dependency> <groupId>org. Also.apache. Note the dependency on the buildinfo project.mergere.mvnbook.mvnbook.shared</groupId> <artifactId>buildinfo</artifactId> <version>1. and capture the os. </plugins> .... as follows: <build> .mvnbook. </build> The above binding will execute the extract mojo from your new maven-buildinfo-plugin during the package phase of the life cycle.Better Builds with Maven Binding to the life cycle Now that you have a method of capturing build-time environmental information.java.. The easiest way to guarantee this is to bind the extract mojo to the life cycle.name system property.mergere. so that every build triggers it. This involves modification of the standard jar life-cycle.. you need to ensure that every build captures this information. <plugins> <plugin> <groupId>com.plugins</groupId> <artifactId>maven-buildinfo-plugin</artifactId> <executions> <execution> <id>extract</id> <configuration> <systemProperties>os. which you can do by adding the configuration of the new plugin to the Guinea Pig POM.name.. 146 .version</systemProperties> </configuration> <goals> <goal>extract</goal> </goals> </execution> </executions> </plugin> . Your mojo has captured the name of operating system being used to execute the build and the version of the jvm.0-SNAPSHOT-buildinfo. test the plugin by building Guinea Pig with the buildinfo plugin bound to its life cycle as follows: > C:\book-projects\guinea-pig > mvn package When the Guinea Pig build executes. you should see output similar to the following: [.. build the buildinfo plugin with the following commands: > C:\book-projects\maven-buildinfo-plugin > mvn clean install Next.name>Linux</os.version>1.4</java.xml In the file.] [buildinfo:extract {execution:extract}] ------------------------------------------------------------------BUILD SUCCESSFUL ------------------------------------------------------------------- Under the target directory. 147 . and both of these properties can have profound effects on binary compatibility... you will find information similar to the following: <?xml version="1.version> </systemProperties> </buildinfo> While the name of the OS may differ. you can build the plugin and try it out! First.. there should be a file named: guinea-pig-1.name> <java.Developing Custom Maven Plugins The output Now that you have a mojo and a POM. the output of of the generated build information is clear enough.] [INFO] [INFO] [INFO] [INFO] [.0" encoding="UTF-8"?><buildinfo> <systemProperties> <os. it's simpler to use Ant. so that other team members have access to it. and how. therefore. For now. Of course.outputFile}"> <to>${listAddr}</to> </mail> </target> </project> If you're familiar with Ant. BuildInfo Example: Notifying Other Developers with an Ant Mojo Now that some important information has been captured. you'll notice that this mojo expects several project properties. However. Information like the to: address will have to be dynamic. Your new mojo will be in a file called notify. After writing the Ant target to send the notification e-mail.name}" mailhost="${mailHost}" mailport="${mailPort}" messagefile="${buildinfo.Better Builds with Maven 5. and should look similar to the following: <project> <target name="notify-target"> <mail from="maven@localhost" replyto="${listAddr}" subject="Build Info for Deployment of ${project.2. and the dozens of well-tested. To ensure these project properties are in place within the Ant Project instance. such a task could be handled using a Java-based mojo and the JavaMail API from Sun. it's a simple matter of specifying where the email should be sent. The Ant target To leverage the output of the mojo from the previous example – the build information file – you can use that content as the body of the e-mail. it might be enough to send a notification e-mail to the project development mailing list. 148 .xml. From here. mature tasks available for build script use (including one specifically for sending e-mails). “deployment” is defined as injecting the project artifact into the Maven repository system. you just need to write a mojo definition to wire the new target into Maven's build process. it should be extracted directly from the POM for the project we're building.build. you need to share it with others in your team when the resulting project artifact is deployed. given the amount of setup and code required. It's important to remember that in the Maven world. simply declare mojo parameters for them.4. ]]></description> <parameters> <parameter> <name>buildinfo. the build script was called notify. which is associated to the build script using a naming convention.Developing Custom Maven Plugins The mojo metadata file Unlike the prior Java examples.build. The corresponding metadata file will be called notify.directory}/${project. In this example.xml and should appear as follows: <pluginMetadata> <mojos> <mojo> <call>notify-target</call> <goal>notify</goal> <phase>deploy</phase> <description><![CDATA[ Email environment information from the current build to the development mailing list when the artifact is deployed.xml.xml </defaultValue> <required>true</required> <readonly>false</readonly> </parameter> <parameter> <name>listAddr</name> <required>true</required> </parameter> <parameter> <name .name</name> <defaultValue>${project.mojos. metadata for an Ant mojo is stored in a separate file.version}-buildinfo.artifactId}${project.build.outputFile</name> <defaultValue> ${project. all of the mojo's parameter types are java. parameter injection takes place either through direct field assignment. and parameter flags such as required are still present. default value. Also. Fortunately. to develop an Ant-based mojo. its value is injected as a project reference. In this example. a more in-depth discussion of the metadata file for Ant mojos is available in Appendix A. expression.lang. by binding the mojo to the deploy phase of life cycle. The rule for parameter injection in Ant is as follows: if the parameter's type is java.or Beanshell-based mojos with no additional configuration. you will have to add support for Ant mojo extraction to the maven-plugin-plugin. you'd have to add a <type> element alongside the <name> element. Maven still must resolve and inject each of these parameters into the mojo. Any build that runs must be deployed for it to affect other development team members. Instead. Maven allows POM-specific injection of plugin-level dependencies in order to accommodate plugins that take a framework approach to providing their functionality. however.0 shipped without support for Ant-based mojos (support for Ant was added later in version 2. parameters are injected as properties and references into the Ant Project instance. some special configuration is required to allow the maven-plugin-plugin to recognize Ant mojos. mojo-level metadata describes details such as phase binding and mojo name. In an Antbased mojo however. otherwise.2).String (the default). First of all. the contents of this file may appear different than the metadata used in the Java mojo.Better Builds with Maven At first glance.0. 150 . each with its own information like name. in order to capture the parameter's type in the specification. As with the Java example. with its use of the MojoDescriptorExtractor interface from the maven-plugin-tools-api library. the overall structure of this file should be familiar. so it's pointless to spam the mailing list with notification e-mails every time a jar is created for the project. However. you will see many similarities. Finally. upon closer examination. the notification e-mails will be sent only when a new artifact becomes available in the remote repository. This allows developers to generate descriptors for Java.String. This library defines a set of interfaces for parsing mojo descriptors from their native format and generating various output from those descriptors – including plugin descriptor files. Modifying the plugin POM for Ant mojos Since Maven 2. In Java. but expressed in XML. The maven-plugin-plugin ships with the Java and Beanshell provider libraries which implement the above interface. since you now have a good concept of the types of metadata used to describe a mojo. notice that this mojo is bound to the deploy phase of the life cycle. If one of the parameters were some other object type. This is an important point in the case of this mojo. then its value is injected as a property. the difference here is the mechanism used for this injection.lang. because you're going to be sending e-mails to the development mailing list. metadata specify a list of parameters for the mojo. As with the Java example. The expression syntax used to extract information from the build state is exactly the same. and more. When this mojo is executed. or through JavaBeans-style setXXX() methods. The maven-plugin-plugin is a perfect example. it will be quite difficult to execute an Ant-based plugin. The second new dependency is.5</version> </dependency> [.6.. 151 . the specifications of which should appear as follows: <dependencies> [.] </project> Additionally. since the plugin now contains an Ant-based mojo.apache. a dependency on the core Ant library (whose necessity should be obvious).apache.] <build> <plugins> <plugin> <artifactId>maven-plugin-plugin</artifactId> <dependencies> <dependency> <groupId>org.] </dependencies> The first of these new dependencies is the mojo API wrapper for Ant build scripts.maven</groupId> <artifactId>maven-plugin-tools-ant</artifactId> <version>2.2</version> </dependency> <dependency> <groupId>ant</groupId> <artifactId>ant</artifactId> <version>1..] <dependency> <groupId>org.0.2</version> </dependency> </dependencies> </plugin> </plugins> </build> [.. quite simply.maven</groupId> <artifactId>maven-script-ant</artifactId> <version>2... If you don't have Ant in the plugin classpath. you will need to add a dependency on the maven-plugin-tools-ant library to the maven-plugin-plugin using POM configuration as follows: <project> [.. and it is always necessary for embedding Ant scripts as mojos in the Maven build process. it requires a couple of new dependencies.0...Developing Custom Maven Plugins To accomplish this. you should add a configuration section to the new execution section.] </plugins> </build> The existing <execution> section – the one that binds the extract mojo to the build – is not modified. 152 . Now. it behaves like any other type of mojo to Maven. Again.. and these two mojos should not execute in the same phase (as mentioned previously)..except in this case.] <plugins> <plugin> <artifactId>maven-buildinfo-plugin</artifactId> <executions> <execution> <id>extract</id> [. because non-deployed builds will have no effect on other team members. execute the following command: > mvn deploy The build process executes the steps required to build and deploy a jar . This is because an execution section can address only one phase of the build life cycle.Better Builds with Maven Binding the notify mojo to the life cycle Once the plugin descriptor is generated for the Ant mojo. which supplies the listAddr parameter value. Even its configuration is the same. Adding a life-cycle binding for the new Ant mojo in the Guinea Pig POM should appear as follows: <build> [. a new section for the notify mojo is created.] </execution> <execution> <id>notify</id> <goals> <goal>notify</goal> </goals> <configuration> <listAddr>dev@guineapig.. and send them to the Guinea Pig development mailing list in the deploy phase.... it will also extract the relevant environmental details during the package phase. notification happens in the deploy phase only.org</listAddr> </configuration> </execution> </executions> </plugin> [. In order to tell the notify mojo where to send this e-mail.codehaus. Instead. Therefore.5. you must add a dependency on one or more Maven APIs to your project's POM.maven</groupId> <artifactId>maven-artifact-manager</artifactId> <version>2. project source code and resources. if you also need to work with artifacts – including actions like artifact resolution – you must also declare a dependency on maven-artifact-manager in your POM.1. and are not required for developing basic mojos. like this: <dependency> <groupId>org. one or more artifacts in the current build. The following sections do not build on one another. The next examples cover more advanced topics relating to mojo development. modify your POM to define a dependency on maven-artifact by adding the following: <dependency> <groupId>org. Whenever you need direct access to the current project instance. Gaining Access to Maven APIs Before proceeding.0</version> </dependency> It's important to realize that Maven's artifact APIs are slightly different from its project API. However.apache. in that the artifact-related interfaces are actually maintained in a separate artifact from the components used to work with them. including the ability to work with the current project instance. To enable access to Maven's project API.maven</groupId> <artifactId>maven-artifact</artifactId> <version>2. Advanced Mojo Development The preceding examples showed how to declare basic mojo parameters. if you only need to access information inside an artifact.apache. or any related components. However. it's important to mention that the techniques discussed in this section make use of Maven's project and artifact APIs. and artifact attachments.maven</groupId> <artifactId>maven-project</artifactId> <version>2. modify your POM to define a dependency on maven-project by adding the following: <dependency> <groupId>org. then read on! 5. if you want to know how to develop plugins that manage dependencies. the above dependency declaration is fine.Developing Custom Maven Plugins 5. and how to annotate the mojo with a name and a preferred phase binding.0</version> </dependency> 153 .5.apache.0</version> </dependency> To enable access to information in artifacts via Maven's artifact API. this declaration has another annotation. To enable a mojo to work with the set of artifacts that comprise the project's dependencies. the mojo must tell Maven that it requires the project's dependencies be resolved (this second requirement is critical. As with all declarations. such as: -Ddependencies=[. you may be wondering. if the mojo works with a project's dependencies. Maven makes it easy to inject a project's dependencies.. In addition.. Accessing Project Dependencies Many mojos perform tasks that require access to a project's dependencies.] So. namely it disables configuration via the POM under the following section: <configuration> <dependencies>. This declaration should be familiar to you. only the following two changes are required: • First.</dependencies> </configuration> It also disables configuration via system properties. the mojo must tell Maven that it requires the project dependency set. the compile mojo in the maven-compiler-plugin must have a set of dependency paths in order to build the compilation classpath. This annotation tells Maven not to allow the user to configure this parameter directly.util. Injecting the project dependency set As described above..dependencies}" * @required * @readonly */ private java. • Second.Better Builds with Maven 5.2. since the dependency resolution process is what populates the set of artifacts that make up the project's dependencies). users could easily break their builds – particularly if the mojo in question compiled project source code. Fortunately. it must tell Maven that it requires access to that set of artifacts. this is specified via a mojo parameter definition and should use the following syntax: /** * The set of dependencies required by the project * @parameter default-value="${project. which might not be as familiar: @readonly. However. “How exactly can I configure this parameter?" The answer is that the mojos parameter value is derived from the dependencies section of the POM.5. For example.. so you configure this parameter by modifying that section directly. 154 .Set dependencies. If this parameter could be specified separately from the main dependencies section. since it defines a parameter with a default value that is required to be present before the mojo can execute. the test mojo in the maven-surefire-plugin requires the project's dependency paths so it can execute the project's unit tests with a proper classpath. you'll know that one of its major problems is that it always resolves all project dependencies before invoking the first goal in the build (for clarity. Rather. your mojo must declare that it needs them. and if so. this is a direct result of the rigid dependency resolution design in Maven 1.0 uses the term 'mojo' as roughly equivalent to the Maven 1. it will force all of the dependencies to be resolved (test is the widest possible scope. If the project's dependencies aren't available. If you've used Maven 1. the clean process will fail – though not because the clean goal requires the project dependencies.Developing Custom Maven Plugins In this case. In other words. Maven will resolve only the dependencies that satisfy the requested scope. the @readonly annotation functions to force users to configure the POM. If a mojo doesn't need access to the dependency list. To gain access to the project's dependencies. direct configuration could result in a dependency being present for compilation. at which scope. if a mojo declares that it requires dependencies for the compile scope. Maven 2 will not resolve project dependencies until a mojo requires it. It's important to note that your mojo can require any valid dependency scope to be resolved prior to its execution.x term 'goal'). the mojo is missing one last important step. if your mojo needs to work with the project's dependencies. the build process doesn't incur the added overhead of resolving them. You can declare the requirement for the test-scoped project dependency set using the following class-level annotation: /** * @requiresDependencyResolution test [. any dependencies specific to the test scope will remain unresolved. Returning to the example. Even then.. the mojo should be ready to work with the dependency set. Requiring dependency resolution Having declared a parameter that injects the projects dependencies into the mojo.] */ Now. 155 . encapsulating all others). but being unavailable for testing.x. Therefore. However. Maven provides a mechanism that allows a mojo to specify whether it requires the project dependencies to be resolved. Failure to do so will cause an empty set to be injected into the mojo's dependencies parameter. Maven 2 addresses this problem by deferring dependency resolution until the project's dependencies are actually required. if later in the build process.x. it will have to tell Maven to resolve them.. Maven 2. rather than configuring a specific plugin only. Maven encounters another mojo that declares a requirement for test-scoped dependencies. Consider the case where a developer wants to clean the project directory using Maven 1. To that end. rd. Once you have access to the project dependency set. one of the dependency libraries may have a newer snapshot version available.getArtifactId() ).getClassifier() != null ) { rd.hasNext().isOptional() ). you will need to iterate through the set. which enumerates all the dependencies used in the build. The code required is as follows: if ( dependencies != null && !dependencies.setClassifier( artifact. adding the information for each individual dependency to your buildinfo object.getVersion() ). rd.addResolvedDependency( rd ). In this case.setOptional( artifact. rd. knowing the specific set of snapshots used to compile a project can lend insights into why other builds are breaking. along with their versions – including those dependencies that are resolved transitively. you'll add the dependency-set injection code discussed previously to the extract mojo in the maven-buildinfo-plugin.setArtifactId( artifact. This will result in the addition of a new section in the buildinfo file. rd. ResolvedDependency rd = new ResolvedDependency(). } } 156 . } buildInfo.setType( artifact.Better Builds with Maven BuildInfo example: logging dependency versions Turning once again to the maven-buildinfo-plugin.setScope( artifact.iterator().getClassifier() ).setResolvedVersion( artifact. you will want to log the versions of the dependencies used during the build.getType() ). This is critical when the project depends on snapshot versions of other libraries. ) { Artifact artifact = (Artifact) it. rd. if ( artifact. For example.getGroupId() ).next(). so it can log the exact set of dependencies that were used to produce the project artifact.isEmpty() ) { for ( Iterator it = dependencies.getScope() ).setGroupId( artifact. rd. it. the extract mojo should produce the same buildinfo file. For instance. This won't add much insight for debuggers looking for changes from build to build. or new source code directories to the build.mvnbook.0-alpha-SNAPSHOT. it may be necessary to augment a project's code base with an additional source directory. If this plugin adds resources like images.] <resolvedDependency> <groupId>com.. it's important for mojos to be able to access and manipulate both the source directory list and the resource definition list for a project. Therefore.1. This is because snapshot time-stamping happens on deployment only.8.1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>test</scope> </resolvedDependency> [.guineapig</groupId> <artifactId>guinea-pig-api</artifactId> <resolvedVersion>1. it's possible that a plugin may be introduced into the build process when a profile is activated. junit. and other mojos may need to produce reports based on those same source directories. Once this new source directory is in place.. when a project is built in a JDK 1. has a static version of 3. 157 .5. and is still listed with the version 1. If you were using a snapshot version from the local repository which has not been deployed. The actual snapshot version used for this artifact in a previous build could yield tremendous insight into the reasons for a current build failure.] </resolvedDependencies> The first dependency listed here. with an additional section called resolvedDependencies that looks similar to the following: <resolvedDependencies> <resolvedDependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <resolvedVersion>3. This dependency is part of the example development effort.8.094434-1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>compile</scope> </resolvedDependency> [.4 environment. it can have dramatic effects on the resulting project artifact. 5. Accessing Project Sources and Resources In certain cases.mergere.0-20060210.Developing Custom Maven Plugins When you re-build the plugin and re-run the Guinea Pig build. but consider the next dependency: guinea-pigapi. the resolvedVersion in the output above would be 1... particularly if the newest snapshot version is different. the compile mojo will require access to it.3.0-alpha-SNAPSHOT in the POM. as in the following example: project. However. it's a simple matter of adding a new source root to it. Chapter 3 of this book). which tells Maven that it's OK to execute this mojo in the absence of a POM. It is possible that some builds won't have a current project. The current project instance is a great example of this. Maven's concept of a project can accommodate a whole list of directories. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. As in the prior project dependencies discussion. unless declared otherwise. It requires access to the current MavenProject instance only. which can be injected into a mojo using the following code: /** * Project instance. Once the current project instance is available to the mojo. The generally-accepted binding for this type of activity is in the generate-sources life-cycle phase.directory}/generated-sources/<plugin-prefix> While conforming with location standards like this is not required. So. This declaration identifies the project field as a required mojo parameter that will inject the current MavenProject instance into the mojo for use. Maven will fail the build if it doesn't have a current project instance and it encounters a mojo that requires one. as in the case where the mavenarchetype-plugin is used to create a stub of a new project. or simply need to augment the basic project code base. and no other project contains current state information for this build. when generating source code. This can be very useful when plugins generate source code. Mojos that augment the source-root list need to ensure that they execute ahead of the compile phase. mojos require a current project instance to be available.addCompileSourceRoot( sourceDirectoryPath ). it does improve the chances that your mojo will be compatible with other plugins bound to the same life cycle. This annotation tells Maven that users cannot modify this parameter. Further. instead. any normal build will have a current project.build. if you expect your mojo to be used in a context where there is no POM – as in the case of the archetype plugin – then simply add the class-level annotation: @requiresProject with a value of false. it refers to a part of the build state that should always be present (a more in-depth discussion of this annotation is available in section 3.Better Builds with Maven Adding a source directory to the build Although the POM supports only a single sourceDirectory entry. this parameter also adds the @readonly annotation. the accepted default location for the generated source is in: ${project. 158 .6. used to add new source directory to the build. allowing plugins to add new source directories as they execute. Maven's project API bridges this gap. xml files for servlet engines. the Maven application itself is well-hidden from the mojo developer.xml file found in all maven artifacts. Maven components can make it much simpler to interact with the build process. the process of adding a new resource directory to the current build is straightforward and requires access to the MavenProject and MavenProjectHelper: /** * Project instance. that it's not a parameter at all! In fact. * @component */ private MavenProjectHelper helper. used to add new source directory to the build. Normally. Many different mojo's package resources with their generated artifacts such as web. you should notice something very different about this parameter. Namely. it is a utility. or wsdl files for web services. 159 . It provides methods for attaching artifacts and adding new resource definitions to the current project. as in the case of Maven itself and the components. in some special cases. This could be a descriptor for binding the project artifact into an application framework. A complete discussion of Maven's architecture – and the components available – is beyond the scope of this chapter. this is what Maven calls a component requirement (it's a dependency on an internal component of the running Maven application). as it is particularly useful to mojo developers. used to make addition of resources * simpler. which means it's always present. the unadorned @component annotation – like the above code snippet – is adequate. the MavenProjectHelper component is worth mentioning here. the project helper is not a build state. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. as discussed previously. the MavenProjectHelper is provided to standardize the process of augmenting the project instance. However. The project helper component can be injected as follows: /** * project-helper instance. in most cases. Right away. however. Component requirements are not available for configuration by users. so your mojo simply needs to ask for it.Developing Custom Maven Plugins Adding a resource to the build Another common practice is for a mojo to generate some sort of non-code resource. This declaration will inject the current project instance into the mojo. Whatever the purpose of the mojo. For example. To be clear. to simplify adding resources to a project. This component is part of the Maven application. Component requirements are simple to declare. the mojo also needs access to the MavenProjectHelper component. However. which will be packaged up in the same jar as the project classes. and abstract the associated complexities away from the mojo developer. 160 . the entire build will fail. The classic example is the compile mojo in the maven-compiler-plugin. as in the following example: /** * List of source roots containing non-test code. they have to modify the sourceDirectory element in the POM. and the jar mojo in the maven-source-plugin. List excludes = null. it's important to understand where resources should be added during the build life cycle. in order to perform some operation on the source code. and exclusion patterns as local variables. this parameter declaration states that Maven does not allow users to configure this parameter directly.compileSourceRoots}" * @required * @readonly */ private List sourceRoots. includes. directory. Again. The parameter is also required for this mojo to execute. The most common place for such activities is in the generate-resources life-cycle phase.addResource(project. List includes = Collections. which actually compiles the source code contained in these root directories into classes in the project output directory. In a typical case. it will need to execute ahead of this phase. others must read the list of active source directories.Better Builds with Maven With these two objects at your disposal. Other examples include javadoc mojo in the maven-javadoc-plugin. excludes). Similar to the parameter declarations from previous sections. which may or may not be directly configurable. If your mojo is meant to add resources to the eventual project artifact. adding a new resource couldn't be easier. Simply define the resources directory to add. conforming with these standards improves the compatibility of your plugin with other plugins in the build. Accessing the source-root list Just as some mojos add new source directories to the build. along with inclusion and exclusion patterns for resources within that directory. * @parameter default-value="${project. and then call a utility method on the project helper. all you have to do is declare a single parameter to inject them. these values would come from other mojo parameters. for the sake of brevity.singletonList("**/*"). The prior example instantiates the resource's directory. instead. Gaining access to the list of source root directories for a project is easy. or else bind a mojo to the life-cycle phase that will add an additional source directory to the build. Again. inclusion patterns. The code should look similar to the following: String directory = "relative/path/to/some/directory". if it's missing. helper. Resources are copied to the classes directory of the build during the process-resources phase. as in the case of the extract mojo. You've already learned that mojos can modify the list of resources included in the project artifact. you need to add the following code: for ( Iterator it = sourceRoots. Accessing the resource list Non-code resources complete the picture of the raw materials processed by a Maven build. in order to incorporate list of source directories to the buildinfo object. for eventual debugging purposes. If a certain profile injects a supplemental source directory into the build (most likely by way of a special mojo binding).. buildInfo. which copies all non-code resources to the output directory for inclusion in the project artifact. binding this mojo to an early phase of the life cycle increases the risk of another mojo adding a new source root in a later phase. it's better to bind it to a later phase like package if capturing a complete picture of the project is important. In this case however.next(). } One thing to note about this code snippet is the makeRelative() method. now. the ${basedir} expression refers to the location of the project directory in the local file system.hasNext(). then this profile would dramatically alter the resulting project artifact when activated. binding to any phase later than compile should be acceptable. let's learn about how a mojo can access the list of resources used in a build. it. source roots are expressed as absolute file-system paths. since compile is the phase where source files are converted into classes. 161 . any reference to the path of the project directory in the local file system should be removed. it could be critically important to track the list of source directories used in a particular build. When you add this code to the extract mojo in the maven-buildinfo-plugin. ) { String sourceRoot = (String) it. However. By the time the mojo gains access to them. Returning to the buildinfo example.iterator(). To be clear. This is the mechanism used by the resources mojo in the maven-resources-plugin. applying whatever processing is necessary. Therefore.addSourceRoot( makeRelative( sourceRoot ) ). it can iterate through them. it can be bound to any phase in the life cycle. Remember. This involves subtracting ${basedir} from the source-root paths.Developing Custom Maven Plugins Now that the mojo has access to the list of project source roots. In order to make this information more generally applicable. it is important that the buildinfo file capture the resource root directories used in the build for future reference. the user has the option of modifying the value of the list by configuring the resources section of the POM. 162 .model.resources}" * @required * @readonly */ private List resources. along with some matching rules for the resource files it contains. it can mean the difference between an artifact that can be deployed into a server environment and an artifact that cannot.List.maven. } } As with the prior source-root example. capturing the list of resources used to produce a project artifact can yield information that is vital for debugging purposes.iterator(). you'll notice the makeRelative() method. and Maven mojos must be able to execute in a JDK 1. Therefore. containing * directory. and excludes. The parameter appears as follows: /** * List of Resource objects for the current build.isEmpty() ) { for ( Iterator it = resources. includes. since the ${basedir} path won't have meaning outside the context of the local file system. All POM paths injected into mojos are converted to their absolute form first.addResourceRoot( makeRelative( resourceRoot ) ). String resourceRoot = resource. this parameter is declared as required for mojo execution and cannot be edited by the user. if an activated profile introduces a mojo that generates some sort of supplemental framework descriptor. by trimming the ${basedir} prefix. Since the resources list is an instance of java.next(). ) { Resource resource = (Resource) it.getDirectory(). allowing direct configuration of this parameter could easily produce results that are inconsistent with other resource-consuming mojos. the resources list is easy to inject as a mojo parameter.util. mojos must be smart enough to cast list elements as org.Better Builds with Maven Much like the source-root list. It's necessary to revert resource directories to relative locations for the purposes of the buildinfo plugin. For instance. which in fact contain information about a resource root. It's a simple task to add this capability. This method converts the absolute path of the resource directory into a relative path. * @parameter default-value="${project. to avoid any ambiguity.apache. It's also important to note that this list consists of Resource objects.hasNext(). and can be accomplished through the following code snippet: if ( resources != null && !resources. it. As noted before with the dependencies parameter. Since mojos can add new resources to the build programmatically. Just like the source-root injection parameter.Resource instances. In this case. buildInfo.4 environment that doesn't support Java generics. Usually. Attaching Artifacts for Installation and Deployment Occasionally. the key differences are summarized in the table below. Like the vast majority of activities.addTestSourceRoot() ${project.4.addCompileSourceRoot() ${project. This chapter does not discuss test-time and compile-time source roots and resources as separate topics. That section should appear as follows: <resourceRoots> <resourceRoot>src/main/resources</resourceRoot> <resourceRoot>target/generated-resources/xdoclet</resourceRoot> </resourceRoots> Once more. by using the classifier element for that dependency section within the POM. Maven treats these derivative artifacts as attachments to the main project artifact. only the parameter expressions and method names are different. The concepts are the same. Note on testing source-roots and resources All of the examples in this advanced development discussion have focused on the handling of source code and resources.Developing Custom Maven Plugins Adding this code snippet to the extract mojo in the maven-buildinfo-plugin will result in a resourceRoots section being added to the buildinfo file.addTestResource() ${project. which may be executed during the build process. It's important to note however. 163 . due to the similarities. These artifacts are typically a derivative action or side effect of the main build process. a corresponding activity can be written to work with their test-time counterparts. collecting the list of project resources has an appropriate place in the life cycle.resources} project. javadoc bundles. mojos produce new artifacts that should be distributed alongside the main project artifact in the Maven repository system. Since all project resources are collected and copied to the project output directory in the processresources phase. which sets it apart from the main project artifact in the repository. Therefore. an artifact attachment will have a classifier. that for every activity examined that relates to source-root directories or resource definitions. instead. in that they are never distributed without the project artifact being distributed. any mojo seeking to catalog the resources used in the build should execute at least as late as the process-resources phase.testResources} 5.addResource() ${project. Once an artifact attachment is deposited in the Maven repository. and even the buildinfo file produced in the examples throughout this chapter. Table 5-2: Key differences between compile-time and test-time mojo activities Activity Change This To This Add testing source root Get testing source roots Add testing resource Get testing resources project. Classic examples of attached artifacts are source archives. This ensures that any resource modifications introduced by mojos in the build process have been completed. it's worthwhile to discuss the proper place for this type of activity within the build life cycle. it can be referenced like any other artifact. this classifier must also be specified when declaring the dependency for such an artifact. which must be processed and included in the final project artifact.testSourceRoots} helper.5.compileSourceRoots} helper. like sources or javadoc. produces a derivative artifact. From the prior examples. "xml". * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. which will make the process of attaching the buildinfo artifact a little easier: /** * This helper class makes adding an artifact attachment simpler.Better Builds with Maven When a mojo. or set of mojos. "buildinfo". since it provides information about how each snapshot of the project came into existence.attachArtifact( project. the meaning and requirement of project and outputFile references should be clear. an extra piece of code must be executed in order to attach that artifact to the project artifact. outputFile ). which is still missing from the maven-buildinfo-plugin example. These values represent the artifact extension and classifier. This extra step. Once you include these two fields in the extract mojo within the maven-buildinfo-plugin. For convenience you should also inject the following reference to MavenProjectHelper. Doing so guarantees that attachment will be distributed when the install or deploy phases are run. you'll need a parameter that references the current project instance as follows: /** * Project instance. * @component */ private MavenProjectHelper helper. While an e-mail describing the build environment is transient. for historical reference.5. However. See Section 5. The MavenProject instance is the object with which your plugin will register the attachment with for use in later phases of the lifecycle. respectively. Including an artifact attachment involves adding two parameters and one line of code to your mojo. and only serves to describe the latest build. 164 .2 for a discussion about MavenProjectHelper and component requirements. the distribution of the buildinfo file via Maven's repository will provide a more permanent record of the build for each snapshot in the repository. to which we want to add an attached artifact. can provide valuable information to the development team. there are also two somewhat cryptic string values being passed in: “xml” and “buildinfo”. First. the process of attaching the generated buildinfo file to the main project artifact can be accomplished by adding the following code snippet: helper. the maven-buildinfo-plugin is ready for action. In this chapter. you've also learned how a plugin generated file can be distributed alongside the project artifact in Maven's repository system. Summary In its unadorned state.pom Now. it can attach the buildinfo file to the main project artifact so that it's distributed whenever Maven installs or deploys the project. Maven can integrate these custom tasks into the build process through its extensible plugin framework. you can test it by re-building the plugin. Whether they be code-generation. enabling you to attach custom artifacts for installation or deployment. Using the default lifecycle mapping. It can extract relevant details from a running build and generate a buildinfo file based on these details.Developing Custom Maven Plugins By specifying an extension of “xml”. the maven-buildinfo-plugin can also generate an e-mail that contains the buildinfo file contents. 5.xml guinea-pig-core-1. then running Maven to the install life-cycle phase on our test project. Finally. Now that you've added code to distribute the buildinfo file.xml extension.0-SNAPSHOT dir guinea-pig-core-1.0-SNAPSHOT-buildinfo. This serves to attach meaning beyond simply saying. in certain circumstances. Since the build process for a project is defined by the plugins – or more accurately. reporting. you've learned that it's relatively simple to create a mojo that can extract relevant parts of the build state in order to perform a custom build-process task – even to the point of altering the set of source-code directories used to build the project.jar guinea-pig-core-1. you're telling Maven that the file in the repository should be named using a. From there. there is a standardized way to inject new behavior into the build by binding new mojos at different life-cycle phases. you should see the buildinfo file appear in the local repository alongside the project jar. 165 . Finally.6. a project requires special tasks in order to build successfully. and route that message to other development team members on the project development mailing list.0-SNAPSHOT. the mojos – that are bound to the build life cycle. when the project is deployed. Working with project dependencies and resources is equally as simple. Maven represents an implementation of the 80/20 rule.0-SNAPSHOT. By specifying the “buildinfo” classifier. or verification steps. “This is an XML file”. as follows: > > > > mvn install cd C:\Documents and Settings\jdcasey\. Maven can build a basic project with little or no modification – thus covering the 80% case. However. If you build the Guinea Pig project using this modified version of the maven-buildinfo-plugin. It identifies the file as being produced by the the maven-buildinfoplugin. you're telling Maven that this artifact should be distinguished from other project artifacts by using this value in the classifier element of the dependency declaration. as opposed to another plugin in the build process which might produce another XML file with different meaning.m2\repository cd com\mergere\mvnbook\guineapig\guinea-pig-core\1. you can integrate almost any tool into the build process. it's unlikely to be a requirement unique to your project. 166 . if you have the means. It is in great part due to the re-usable nature of its plugins that Maven can offer such a powerful build platform. only a tiny fraction of which are a part of the default lifecycle mapping. or the project web site of the tools with which your project's build must integrate. So. remember that whatever problem your custom-developed plugin solves. If not. If your project requires special handling. please consider contributing back to the Maven community by providing access to your new plugin. Mojo development can be as simple or as complex (to the point of embedding nested Maven processes within the build) as you need it to be. Using the plugin mechanisms described in this chapter.Better Builds with Maven Many plugins already exist for Maven use. However. the Codehaus Mojo project. developing a custom Maven plugin is an easy next step. chances are good that you can find a plugin to address this need at the Apache Maven project. it is an art.6.Samuel Butler 167 .. . Because the POM is a declarative model of the project. the project will meet only the lowest standard and go no further. When referring to health. It provides additional information to help determine the reasons for a failed build. Maven can analyze. how well it is tested. This is unproductive as minor changes are prioritized over more important tasks. and how well it adapts to change. unzip the Code_Ch06-1. new tools that can assess its health are easily integrated. To begin. In this chapter. For this reason. In this chapter. It is important not to get carried away with setting up a fancy Web site full of reports that nobody will ever use (especially when reports contain failures they don't want to know about!). which everyone can see at any time. because if the bar is set too high. why have a site. But. What Does Maven Have to do With Project Health? In the introduction. Project vitality . It is these characteristics that assist you in assessing the health of your project.zip for convenience as a starting point. many of the reports illustrated can be run as part of the regular build in the form of a “check” that will fail the build if a certain condition is not met. if the build fails its checks? The Web site also provides a permanent record of a project's health. there are two aspects to consider: • Code quality .zip file into C:\mvnbook or your selected working directory. and what the nature of that activity is. and whether the conditions for the checks are set correctly. and display that information in a single place. This is important. you'll learn how to use a number of these tools effectively. • Maven takes all of the information you need to know about your project and brings it together under the project Web site. and using a variety of tools. 168 . The code that concluded Chapter 3 is also included in Code_Ch06-1.finding out whether there is any activity on the project. and learning more about the health of the project.1. it was pointed out that Maven's application of patterns provides visibility and comprehensibility. Conversely.Better Builds with Maven 6.determining how well the code works. Maven has access to the information that makes up a project. Through the POM. if the bar is set too low. to get a build to pass. relate. and then run mvn install from the proficio subdirectory to ensure everything is in place. The next three sections demonstrate how to set up an effective project Web site. you will be revisiting the Proficio application that was developed in Chapter 3. there will be too many failed builds. Figure 6-1: The reports generated by Maven You can see that the navigation on the left contains a number of reports. The Project Info menu lists the standard reports Maven includes with your site by default. For newcomers to the project. and now shows how to integrate project health information.xml: 169 .Assessing Project Health with Maven 6. unless you choose to disable them. On a new project. Project Reports. by including the following section in proficio/pom. The second menu (shown opened in figure 6-1). adding a new report is easy. is the focus of the rest of this chapter. having these standard reports means that those familiar with Maven Web sites will always know where to find the information they need. These reports provide a variety of insights into the quality and vitality of the project.2. Adding Reports to the Project Web site This section builds on the information on project Web sites in Chapter 2 and Chapter 3. For example. and to reference as links in your mailing lists. To start. However. These reports are useful for sharing information with others. issue tracker. and so on. review the project Web site shown in figure 6-1. SCM. you can add the Surefire report to the sample application. this menu doesn't appear as there are no reports included. </project> This adds the report to the top level project. and is shown in figure 6-2. and as a result.apache. <reporting> <plugins> <plugin> <groupId>org. You can now run the following site task in the proficio-core directory to regenerate the site.maven. it will be inherited by all of the child modules.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> </plugin> </plugins> </reporting> .. Figure 6-2: The Surefire report 170 ..Better Builds with Maven .. C:\mvnbook\proficio\proficio-core> mvn site This can now be found in the file target/site/surefire-report..html. . Configuration of Reports Before stepping any further into using the project Web site.. Configuration for a reporting plugin is very similar. Maven knows where the tests and test results are.. 171 . the report can be modified to only show test failures by adding the following configuration in pom..Assessing Project Health with Maven As you may have noticed in the summary.maven.. 6.3. the defaults are sufficient to get started with a useful report.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <showSuccess>false</showSuccess> </configuration> </plugin> </plugins> </reporting> . <build> <plugins> <plugin> <groupId>org. it is important to understand how the report configuration is handled in Maven. For example.. For a quicker turn around.5</target> </configuration> </plugin> </plugins> </build> . however it is added to the reporting section of the POM.xml.. <reporting> <plugins> <plugin> <groupId>org. You might recall from Chapter 2 that a plugin is configured using the configuration element inside the plugin declaration in pom..xml: . and due to using convention over configuration.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.apache.maven. for example: .apache. the report shows the test results of the project..5</source> <target>1. consider if you wanted to create a copy of the HTML report in the directory target/surefirereports every time the build ran. “Executions” such as this were introduced in Chapter 3.. <build> <plugins> <plugin> <groupId>org.build.directory}/surefire-reports </outputDirectory> </configuration> <executions> <execution> <phase>test</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> </plugins> </build> . However.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <outputDirectory> ${project. is used only during the build.. The plugin is included in the build section to ensure that the configuration. even though it is not specific to the execution. as seen in the previous section. To do this. However. they will all be included. you might think that you'd need to configure the parameter in both sections. or in addition to. the reporting section: . If a plugin contains multiple reports.. some reports apply to both the site.Better Builds with Maven The addition of the plugin element triggers the inclusion of the report in the Web site. the plugin would need to be configured in the build section instead of. Fortunately. what if the location of the Surefire XML reports that are used as input (and would be configured using the reportsDirectory parameter) were different to the default location? Initially.. Any plugin configuration declared in the reporting section is also applied to those declared in the build section. and not site generation. while the configuration can be used to modify its appearance or behavior. Plugins and their associated configuration that are declared in the build section are not used during site generation. this isn't the case – adding the configuration to the reporting section is sufficient. and the build.apache. To continue with the Surefire report. 172 .maven. . there are cases where only some of the reports that the plugin produces will be required. <reporting> <plugins> <plugin> <groupId>org.. 173 . However.xml: .directory}/surefire-reports/perf </reportsDirectory> <outputName>surefire-report-perf</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> </reportSets> </plugin> </plugins> </reporting> . which is the reporting equivalent of the executions element in the build section. and cases where a particular report will be run more than once. once for unit tests and once for a set of performance tests.maven. by default all reports available in the plugin are executed once.apache.build.Assessing Project Health with Maven When you configure a reporting plugin. The configuration value is specific to the build stage When you are configuring the plugins to be used in the reporting section. Each report set can contain configuration. always place the configuration in the reporting section – unless one of the following is true: 1.directory}/surefire-reports/unit </reportsDirectory> <outputName>surefire-report-unit</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> <reportSet> <id>perf</id> <configuration> <reportsDirectory> ${project.build. consider if you had run Surefire twice in your build. Both of these cases can be achieved with the reportSets element. each time with a different configuration. you would include the following section in your pom. and that you had had generated its XML results to target/surefire-reports/unit and target/surefire-reports/perf respectively. The reports will not be included in the site 2.. and a list of reports to include..plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <reportSets> <reportSet> <id>unit</id> <configuration> <reportsDirectory> ${project. To generate two HTML reports for these results. For example. who isn't interested in the state of the source code.xml file: .4. 6. However. Consider the following: • The commercial product. where the developer information is available. where the end user documentation is on a completely different server than the developer information. which are targeted at the developers. running mvn surefire-report:report will not use either of these configurations.. as with executions. 174 .plugins</groupId> <artifactId>maven-project-info-reports-plugin</artifactId> <reportSets> <reportSet> <reports> <report>mailing-list</report> <report>license</report> </reports> </reportSet> </reportSets> </plugin> . This may be confusing for the first time visitor.apache. It is also possible to include only a subset of the reports in a plugin. there's something subtly wrong with the project Web site. but quite separate to the end user documentation. but in the navigation there are reports about the health of the project. this customization will allow you to configure reports in a way that is just as flexible as your build.html. • The open source graphical application. depending on the project. add the following to the reporting section of the pom. This approach to balancing these competing requirements will vary. Separating Developer Reports From User Documentation After adding a report. • The open source reusable library. The reports in this list are identified by the goal names that would be used if they were run from the command line. While the defaults are usually sufficient. and most likely doesn't use Maven to generate it.. If you want all of the reports in a plugin to be generated.. where much of the source code and Javadoc reference is of interest to the end user.maven. and an inconvenience to the developer who doesn't want to wade through end user documentation to find out the current state of a project's test coverage. they must be enumerated in this list. The reports element in the report set is a required element. <plugin> <groupId>org.Better Builds with Maven Running mvn site with this addition will generate two Surefire reports: target/site/surefirereport-unit. outside of any report sets. Maven will use only the configuration that is specified in the plugin element itself. For example. to generate only the mailing list and license pages of the standard reports.html and target/site/surefire-report-perf.. which are targeted at an end user. On the entrance page there are usage instructions for Proficio. When a report is executed individually. in some cases down to individual reports. However. Table 6-1 lists the content that a project Web site may contain. each section of the site needs to be considered. that can be updated between releases without risk of including new features. While there are some exceptions. and to maintain only one set of documentation. FAQs and general Web site End user documentation Source code reference material Project health and vitality reports This is the content that is considered part of the Web site rather than part of the documentation. You can maintain a stable branch. Javadoc) that in a library or framework is useful to the end user. It is also true of the project quality and vitality reports. For a single module library. The situation is different for end user documentation. and sometimes they are available for download separately. The Distributed column in the table indicates whether that form of documentation is typically distributed with the project. This is documentation for the end user including usage instructions and guides. This is typically true for the end user documentation. This is reference material (for example. as it is confusing for those reading the site who expect it to reflect the latest release. and a development branch where new features can be documented for when that version is released. Features that are available only in more recent releases should be marked to say when they were introduced. Table 6-1: Project Web site content types Content Description Updated Distributed Separated News. the Updated column indicates whether the content is regularly updated. including the end user documentation in the normal build is reasonable as it is closely tied to the source code reference. The Separated column indicates whether the documentation can be a separate module or project. It is good to update the documentation on the Web site between releases. the source code reference material and reports are usually generated from the modules that hold the source code and perform the build. the Javadoc and other reference material are usually distributed for reference as well. which are continuously published and not generally of interest for a particular release. It is important not to include documentation for features that don't exist in the last release. but usually not distributed or displayed in an application. source code references should be given a version and remain unchanged after being released. and not introducing incorrect documentation. These are the reports discussed in this chapter that display the current state of the project to the developers. like mailing list information and the location of the issue tracker and SCM are updated also. Yes No Yes Yes Yes No No Yes No Yes No No In the table. 175 . is to branch the end user documentation in the same way as source code. and the content's characteristics. Sometimes these are included in the main bundle. The best compromise between not updating between releases. This is true of the news and FAQs. which are based on time and the current state of the project. For libraries and frameworks.Assessing Project Health with Maven To determine the correct balance. It refers to a particular version of the software. regardless of releases. Some standard reports. The resulting structure is shown in figure 6-4. a module is created since it is not related to the source code reference material. In this case. in most cases. The current structure of the project is shown in figure 6-3. the documentation and Web site should be kept in a separate module dedicated to generating a site. or maybe totally independent.proficio \ -DarchetypeArtifactId=maven-archetype-site-simple This archetype creates a very basic site in the user-guide subdirectory. the site currently contains end user documentation and a simple report. In the following example. but make it an independent project when it forms the overall site with news and FAQs. While these recommendations can help properly link or separate content according to how it will be used. and is not distributed with the project. Figure 6-3: The initial setup The first step is to create a module called user-guide for the end user documentation. This avoids including inappropriate report information and navigation elements. You would make it a module when you wanted to distribute it with the rest of the project. In Proficio.Better Builds with Maven However. you are free to place content wherever it best suits your project. which you can later add content to. It is important to note that none of these are restrictions placed on a project by Maven. This separated documentation may be a module of the main project. This is done using the site archetype : C:\mvnbook\proficio> mvn archetype:create -DartifactId=user-guide \ -DgroupId=com.mergere. 176 .mvnbook. you will learn how to separate the content and add an independent project for the news and information Web site. 177 . while optional.mergere. and the user guide to. the URL and deployment location were set to the root of the Web site:{pom.. First..version} </url> </site> </distributionManagement> .Assessing Project Health with Maven Figure 6-4: The directory layout with a user guide The next step is to ensure the layout on the Web site is correct. whether to maintain history or to maintain a release and a development preview..com/mvnbook/proficio/user-guide..mergere. the development documentation would go to that location.. <url> scp://mergere.xml file to change the site deployment url: . Under the current structure. edit the top level pom. Adding the version to the development documentation. is useful if you are maintaining multiple public versions.com/mvnbook/proficio. <distributionManagement> <site> . Previously. In this example.. the development documentation will be moved to a /reference/version subdirectory so that the top level directory is available for a user-facing web site. maven. 183 . however the content pane is now replaced with a syntax-highlighted.apache.. A useful way to leverage the cross reference is to use the links given for each line number in a source file to point team mates at a particular piece of code. You can now run mvn site in proficio-core and see the Source Xref item listed in the Project Reports menu of the generated site. crossreferenced Java source file for the selected class... Including JXR as a permanent fixture of the site for the project is simple..xml: . if you don't have the project open in your IDE..Assessing Project Health with Maven Figure 6-6: An example source code cross reference Figure 6-6 shows an example of the cross reference.plugins</groupId> <artifactId>maven-jxr-plugin</artifactId> </plugin> . Or. The hyper links in the content pane can be used to navigate to other classes and interfaces within the cross reference. </plugins> </reporting> . <reporting> <plugins> <plugin> <groupId>org. Those familiar with Javadoc will recognize the framed navigation layout.. and can be done by adding the following to proficio/pom. the links can be used to quickly find the source belonging to a particular exception. In the online mode. the default JXR configuration is sufficient.maven.sun. you should include it in proficio/pom. Using Javadoc is very similar to the JXR report and most other reports in Maven. will link both the JDK 1.xml. browsing source code is too cumbersome for the developer if they only want to know about how the API works.xml as a site report to ensure it is run every time the site is regenerated: . Again.4. the following configuration.org/ref/1. <plugin> <groupId>org.. 184 .Better Builds with Maven In most cases. One useful option to configure is links. Now that you have a source cross reference.. many of the other reports demonstrated in this chapter will be able to link to the actual code to highlight an issue.0-alpha-9/apidocs</link> </links> </configuration> </plugin> .. the Javadoc report is quite configurable. in target/site/apidocs.apache.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <configuration> <links> <link>. with most of the command line options of the Javadoc tool available. A Javadoc report is only as good as your Javadoc! Make sure you document the methods you intend to display in the report. However. The end result is the familiar Javadoc output. see the plugin reference at</groupId> <artifactId>maven-javadoc-plugin</artifactId> </plugin> .. so an equally important piece of reference material is the Javadoc report. and if possible use Checkstyle to ensure they are documented. this will link to an external Javadoc reference at a given URL.apache.apache.. when added to proficio/pom. For example.codehaus.2/docs/api</link> <link>. Unlike JXR.4 API documentation and the Plexus container API documentation used by Proficio: .org/plugins/maven-jxr-plugin/.com/j2se/1.. <plugin> <groupId>org.. This setting must go into the reporting section so that it is used for both reports and if the command is executed separately. but conversely to have the Javadoc closely related. of course!).xml by adding the following line: .lang. <configuration> <aggregate>true</aggregate> .Assessing Project Health with Maven If you regenerate the site in proficio-core with mvn site again. One option would be to introduce links to the other modules (automatically generated by Maven based on dependencies. the next section will allow you to start monitoring and improving its health. this setting is always ignored by the javadoc:jar goal. this simple change will produce an aggregated Javadoc and ignore the Javadoc report in the individual modules.Object.html.. Instead. ensuring that the deployed Javadoc corresponds directly to the artifact with which it is deployed for use in an IDE. this is not sufficient. When built from the top level project. but it results in a separate set of API documentation for each library in a multi-module build. </configuration> .String and java. Try running mvn clean javadoc:javadoc in the proficio directory to produce the aggregated Javadoc in target/site/apidocs/index... but this would still limit the available classes in the navigation as you hop from module to module.. However.. Edit the configuration of the existing Javadoc plugin in proficio/pom. 185 .. as well as any references to classes in Plexus. you'll see that all references to the standard JDK classes such as java. are linked to API documentation on the Sun website.lang. the Javadoc plugin provides a way to produce a single set of API documentation for the entire project. Since it is preferred to have discrete functional pieces separated into distinct modules. Now that the sample application has a complete reference for the source code. Setting up Javadoc has been very convenient. which in turn reduces the risk that its accuracy will be affected by change) Maven has reports that can help with each of these health factors.7.sf. Figure 6-7: An example PMD report 186 . The result can help identify bugs.net/) • Checkstyle (. which is obtained by running mvn pmd:pmd.sf. and violations of a coding standard..Better Builds with Maven 6. Figure 6-7 shows the output of a PMD report on proficio-core.net/) • Tag List PMD takes a set of either predefined or user-defined rule sets and evaluates the rules across your Java source code. copy-and-pasted code. this is important for both the efficiency of other team members and also to increase the overall level of code comprehension. and this section will look at three: • PMD (. .maven.apache. The “unused code” rule set will locate unused private fields. by passing the rulesets configuration to the plugin. such as unused methods and variables.xml</ruleset> <ruleset>/rulesets/unusedcode. Adding the default PMD report to the site is just like adding any other report – you can include it in the reporting section in the proficio/pom. methods.. 187 . if you configure these.. <plugin> <groupId>org. <plugin> <groupId>org. The “basic” rule set includes checks on empty blocks. redundant or unused import declarations.maven.xml</ruleset> </rulesets> </configuration> </plugin> .plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> </plugin> ...plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>/rulesets/basic. some source files are identified as having problems that could be addressed. However.xml file: . since the JXR report was included earlier. to include the default rules. The default PMD report includes the basic. and imports rule sets. you must configure all of them – including the defaults explicitly. and the finalizer rule sets.Assessing Project Health with Maven As you can see.. Also. Adding new rule sets is easy.. The “imports” rule set will detect duplicate.apache. unused code. variables and parameters..xml</ruleset> <ruleset>/rulesets/finalizers. the line numbers in the report are linked to the actual source code so you can browse the issues.xml</ruleset> <ruleset>/rulesets/imports. add the following to the plugin configuration you declared earlier: . unnecessary statements and possible bugs – such as incorrect loop variables. For example. html. unusedcode. If you've done all the work to select the right rules and are correcting all the issues being discovered.xml" /> <rule ref="/rulesets/imports. but exclude the “unused private field” rule. you could create a rule set with all the default rules. override the configuration in the proficio-core/pom.sf. and imports are useful in most scenarios and easily fixed. you need to make sure it stays that way. select the rules that apply to your own project. Or.. <reporting> <plugins> <plugin> <groupId>org.apache. create a file in the proficio-core directory of the sample application called src/main/pmd/custom. Start small.html: • • Pick the rules that are right for you.xml" /> <rule ref="/rulesets/unusedcode.. with the following content: <?xml version="1. basic. 188 . For example. you may use the same rule sets in a number of projects.maven. For PMD. In either case. but not others. see the instructions on the PMD Web site at. try the following guidelines from the Web site at. To try this. It is also possible to write your own rules if you find that existing ones do not cover recurring problems in your source code.net/bestpractices. you can choose to create a custom rule set.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>${basedir}/src/main/pmd/custom. For more examples on customizing the rule sets.xml"> <exclude name="UnusedPrivateField" /> </rule> </ruleset> To use this rule set.Better Builds with Maven You may find that you like some rules in a rule set.xml file by adding: . There is no point having hundreds of violations you won't fix.xml</ruleset> </rulesets> </configuration> </plugin> </plugins> </reporting> . and add more as needed.net/howtomakearuleset. One important question is how to select appropriate rules..sf.. From this starting.xml.0"?> <ruleset name="custom"> <description> Default rules. no unused private field warning </description> <rule ref="/rulesets/basic. </plugins> </build> You may have noticed that there is no configuration here. [INFO] --------------------------------------------------------------------------- Before correcting these errors.Assessing Project Health with Maven Try this now by running mvn pmd:check on proficio-core. fix the errors in the src/main/java/com/mergere/mvnbook/proficio/DefaultProficio. If you need to run checks earlier.. You will see that the build fails.. add the following section to the proficio/pom. but recall from Configuring Reports and Checks section of this chapter that the reporting configuration is applied to the build as well. By default. you should include the check in the build. To do so. This is done by binding the goal to the build life.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . which occurs after the packaging phase. To correct this. the pmd:check goal is run in the verify phase.xml file: <build> <plugins> <plugin> <groupId>org. try running mvn verify in the proficio-core directory.maven.apache. you could add the following to the execution block to ensure that the check runs just after all sources exist: <phase>process-sources</phase> To test this new setting.java file by adding a //NOPMD comment to the unused variables and method: 189 . so that it is regularly tested. can make the check optional for developers. // NOPMD . int j. Figure 6-8: An example CPD report 190 .Better Builds with Maven . there is one that is in a separate report. and will appear as “CPD report” in the Project Reports menu. If you run mvn verify again.. While this check is very useful. it can be slow and obtrusive during general development. This is the CPD. // Trigger PMD and checkstyle int i. which is executed only in an appropriate environment. While the PMD report allows you to run a number of different rules... or copy/paste detection report... // NOPMD ...This report is included by default when you enable the PMD plugin in your reporting section. See Continuous Integration with Continuum section in the next chapter for information on using profiles and continuous integration.. adding the check to a profile. and it includes a list of duplicate code fragments discovered across your entire source base. An example report is shown in figure 6-8. but mandatory in an integration environment. private void testMethod() // NOPMD { } . For that reason. the build will succeed. and a commercial product called Simian (. or to enforce a check will depend on the environment in which you are working. resulting in developers attempting to avoid detection by making only slight modifications. With this setting you can fine tune the size of the copies detected. There are other alternatives for copy and paste detection.redhillconsulting. which defaults to 100. 191 . • Use it to check code formatting and selected other problems.net/availablechecks. in many ways. It was originally designed to address issues of format and style.html. Whether to use the report only. This may not give you enough control to effectively set a rule for the source code. refer to the list on the Web site at. If you need to learn more about the available modules in Checkstyle. Depending on your environment.au/products/simian/). rather than identifying a possible factoring of the source code.Assessing Project Health with Maven In a similar way to the main check. the CPD report contains only one variable to configure: minimumTokenCount. and rely on other tools for detecting other problems. Some of the extra summary information for overall number of errors and the list of checks used has been trimmed from this display. such as Checkstyle. Figure 6-9 shows the Checkstyle report obtained by running mvn checkstyle:checkstyle from the proficio-core directory. Checkstyle is a tool that is. • Use it to check code formatting and to detect other problems exclusively This section focuses on the first usage scenario. and still rely on other tools for greater coverage. Simian can also be used through Checkstyle and has a larger variety of configuration options for detecting duplicate source code. but has more recently added checks for other code issues. you may choose to use it in one of the following ways: • Use it to check code formatting only. However.sf. similar to PMD. pmd:cpd-check can be used to enforce a failure if duplicate source code is found.com. the rules used are those of the Sun Java coding conventions. so to include the report in the site and configure it to use the Maven style. add the following to the reporting section of proficio/pom. with a link to the corresponding source line – if the JXR report was enabled. but Proficio is using the Maven team's code style.. warnings or errors is listed in a summary.apache.maven. and then the errors are shown.Better Builds with Maven Figure 6-9: An example Checkstyle report You'll see that each file with notices. This style is also bundled with the Checkstyle plugin.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <configuration> <configLocation>config/maven_checks. That's a lot of errors! By default.xml</configLocation> </configuration> </plugin> 192 .xml: . <plugin> <groupId>org.. 0. or would like to use the additional checks introduced in Checkstyle 3. will look through your source code for known tags and provide a report on those it finds. It is a good idea to reuse an existing Checkstyle configuration for your project if possible – if the style you use is common.apache.apache. By default. and to parameterize the Checkstyle configuration for creating a baseline organizational standard that can be customized by individual projects. as explained at. However.sf. While this chapter will not go into an example of how to do this.0 and above. This report.org/turbine/common/codestandards.org/plugins/maven-checkstyle-plugin/tips. It is also possible to share a Checkstyle configuration among multiple projects.com/docs/codeconv/. one or the other will be suitable for most people. the Checkstyle documentation provides an excellent reference at. 193 .xml Description Reference Sun Java Coding Conventions Maven team's coding conventions Conventions from the Jakarta Turbine project Conventions from the Apache Avalon project. filter the results.xml config/turbine_checks. Before completing this section it is worth mentioning the Tag List plugin.net/config. if you have developed a standard that differs from these. The Checkstyle plugin itself has a large number of configuration options that allow you to customize the appearance of the report.org/guides/development/guidem2-development.xml No longer online – the Avalon project has closed.Assessing Project Health with Maven Table 6-3 shows the configurations that are built into the Checkstyle plugin.html. or a resource within a special dependency also.html#Maven%20Code%20Style. The built-in Sun and Maven standards are quite different. known as “Task List” in Maven 1.xml config/avalon_checks. The configLocation parameter can be set to a file within your build. a URL.html.apache. then it is likely to be more readable and easily learned by people joining your project.sun. These checks are for backwards compatibility only. you will need to create a Checkstyle configuration. and typically. this will identify the tags TODO and @todo in the comments of your source code.html config/maven_checks. Table 6-3: Built-in Checkstyle configurations Configuration config/sun_checks. and more plugins are being added every day. FIXME. While you are writing your tests. Cobertura (. add the following to the reporting section of proficio/pom.codehaus. JavaNCSS and JDepend. 6.mojo</groupId> <artifactId>taglist-maven-plugin</artifactId> <configuration> <tags> <tag>TODO</tag> <tag>@todo</tag> <tag>FIXME</tag> <tag>XXX</tag> </tags> </configuration> </plugin> . will ignore these failures when generated to show the current test state. have beta versions of plugins available from the. PMD...codehaus. or XXX in your source code. In addition to that. Another critical technique is to determine how much of your source code is covered by the test execution.. you saw that tests are run before the packaging of the library or application for distribution.. 194 . Monitoring and Improving the Health of Your Tests One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code. it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package. As you learned in section 6.sf. Knowing whether your tests pass is an obvious and important assessment of their health. using this report on a regular basis can be very helpful in spotting any holes in the test plan. <plugin> <groupId>org. the report (run either on its own.xml: . or as part of the site).2. At the time of writing. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. Checkstyle. for assessing coverage.org/ project at the time of this writing. In the build life cycle defined in Chapter 2.Better Builds with Maven To try this plugin. Some other similar tools. and Tag List are just three of the many tools available for assessing the health of your project's source code. however this plugin is a more convenient way to get a simple report of items that need to be addressed at some point later in time. Setting Up the Project Web Site. such as FindBugs. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances. While the default Surefire configuration fails the build if the tests fail. it is easy to add a report to the Web site that shows the results of the tests that have been run. based on the theory that you shouldn't even try to use something before it has been tested. @todo.8. This configuration will locate any instances of TODO. It is actually possible to achieve this using Checkstyle or PMD rules.net) is the open source tool best integrated with Maven. or for which all possible branches were not executed. and a line-by-line coverage analysis of each source file. For example. For a source file.html. Figure 6-10 shows the output that you can view in target/site/cobertura/index. you'll notice the following markings: • Unmarked lines are those that do not have any executable code associated with them. • • Unmarked lines with a green number in the second column are those that have been completely covered by the test execution. Each line with an executable statement has a number in the second column that indicates during the test run how many times a particular statement was run. The report contains both an overall summary. in the familiar Javadoc style framed layout. a branch is an if statement that can behave differently depending on whether the condition is true or false.Assessing Project Health with Maven To see what Cobertura is able to report. Lines in red are statements that were not executed (if the count is 0). Figure 6-10: An example Cobertura report 195 . This includes method and class declarations. comments and white space. run mvn cobertura:cobertura in the proficio-core directory of the sample application. Add the following to the reporting section of proficio/pom.. 196 . <build> <plugins> <plugin> <groupId>org. The Cobertura report doesn't have any notable configuration.. If you now run mvn site under proficio-core. If you now run mvn clean in proficio-core.codehaus. the report will be generated in target/site/cobertura/index.xml: . High numbers (for example. <plugin> <groupId>org.ser file is deleted. To ensure that this happens. add the following to the build section of proficio/pom. you might consider having PMD monitor it.. might indicate a method should be re-factored into simpler pieces.. If this is a metric of interest. which measures the number of branches that occur in a particular method.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> . there is another useful setting to add to the build section... you'll see that the cobertura. While not required. as it can be hard to visualize and test the large number of alternate code paths.ser.html.Better Builds with Maven The complexity indicated in the top right is the cyclomatic complexity of the methods in the class. the database used is stored in the project directory as cobertura.. and is not cleaned with the rest of the project. over 10). as well as the target directory.codehaus.xml: .. so including it in the site is simple.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <executions> <execution> <id>clean</id> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </build> . Due to a hard-coded path in Cobertura. The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage. If you now run mvn verify under proficio-core.. add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database: . the check passes. the configuration will be applied. <execution> <id>check</id> <goals> <goal>check</goal> </goals> </execution> </executions> .. You would have seen in the previous examples that there were some lines not covered. However. The Surefire report may also re-run tests if they were already run – both of these are due to a limitation in the way the life cycle is constructed that will be improved in future versions of Maven. <configuration> <check> <totalLineRate>80</totalLineRate> . This ensures that if you run mvn cobertura:check from the command line. If you run mvn verify again. looking through the report. and the tests are re-run using those class files instead of the normal ones (however. You can do this for Proficio to have the tests pass by changing the setting in proficio/pom. Normally.. so running the check fails. You'll notice that your tests are run twice. you may decide that only some exceptional cases are untested... these are instrumented in a separate directory. as in the Proficio example. and decide to reduce the overall average required...Assessing Project Health with Maven To configure this goal for Proficio. and 100% branch coverage rate. This wouldn't be the case if it were associated with the life-cycle bound check execution. Note that the configuration element is outside of the executions..xml: . This is because Cobertura needs to instrument your class files. The rules that are being used in this configuration are 100% overall line coverage rate.. you would add unit tests for the functions that are missing tests.. so are not packaged in your application). <configuration> <check> <totalLineRate>100</totalLineRate> <totalBranchRate>100</totalBranchRate> </check> </configuration> <executions> . the check will be performed. 197 . org/plugins/maven-clover-plugin/. This will allow for some constructs to remain untested. as it will discourage writing code to handle exceptional cases that aren't being tested. exceptional cases – and that's certainly not something you want! The settings above are requirements for averages across the entire source tree. although not yet integrated with Maven directly. It is just as important to allow these exceptions. You may want to enforce this for each file individually as well. It behaves very similarly to Cobertura. These reports won't tell you if all the features have been implemented – this requires functional or acceptance testing. and you can evaluate it for 30 days when used in conjunction with Maven. Remember. It also won't tell you whether the results of untested input values produce the correct results. and setting the total rate higher than both. 198 . and get integration with these other tools for free. the easiest way to increase coverage is to remove code that handles untested. there is more to assessing the health of tests than success and coverage.sf. may be of assistance there. To conclude this section on testing. For more information.net). it is possible for you to write a provider to use the new tool. so that they understand and agree with the choice. It is also possible to set requirements on individual packages or classes using the regexes parameter.Better Builds with Maven These settings remain quite demanding though. see the Clover plugin reference on the Maven Web site at support is also available. Consider setting any package rates higher than the per-class rate. Tools like Jester (. Some helpful hints for determining the right code coverage settings are: • • • • • • Like all metrics. as it is to require that the other code be tested. refer to the Cobertura plugin configuration reference at. only allowing a small number of lines to be untested. Remain flexible – consider changes over time rather than hard and fast rules. If you have another tool that can operate under the Surefire framework. The best known commercial offering is Clover. Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally.codehaus. Don't set it too high. which is very well integrated with Maven as well. Set some known guidelines for what type of code can remain untested. involve the whole development team in the decision. In both cases. or as the average across each package. and at the time of writing experimental JUnit 4. Surefire supports tests written with TestNG. Cobertura is not the only solution available for assessing test coverage. For example. it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the tools above will work for any type of runner introduced. Don't set it too low.apache. Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven. as it will become a minimum benchmark to attain and rarely more. these reports work unmodified with those test types. Jester mutates the code that you've already determined is covered and checks that it causes the test to fail when run a second time with the wrong code. For more information. using lineRate and branchRate. Of course. using packageLineRate and packageBranchRate. such as handling checked exceptions that are unexpected in a properly configured system and difficult to test.org/cobertura-maven-plugin. and a number of other features such as scoping and version selection. Monitoring and Improving the Health of Your Dependencies Many people use Maven primarily as a dependency manager. and browse to the file generated in target/site/dependencies. Maven 2. run mvn site in the proficio-core directory.9. Figure 6-11: An example dependency report 199 .html. but any projects that depend on your project. While this is only one of Maven's features.Assessing Project Health with Maven 6. If you haven't done so already. The result is shown in figure 6-11. This brought much more power to Maven's dependency mechanism.0 introduced transitive dependencies. but does introduce a drawback: poor dependency maintenance or poor scope and version selection affects not only your own project. the full graph of a project's dependencies can quickly balloon in size and start to introduce conflicts. Left unchecked. used well it is a significant time saver. The first step to effectively maintaining your dependencies is to review the standard report included with the Maven site. where the dependencies of dependencies are included in a build. Currently. This can be quite difficult to read. or an incorrect scope – and choose to investigate its inclusion. here is the resolution process of the dependencies of proficio-core (some fields have been omitted for brevity): proficio-core:1. run mvn site from the base proficio directory. this requires running your build with debug turned on. local scope test wins) proficio-api:1. To see the report for the Proficio project. Another report that is available is the “Dependency Convergence Report”. but that it is overridden by the test scoped dependency in proficio-core. such as mvn -X package. so at the time of this writing there are two features in progress that are aimed at helping in this area: • • The Maven Repository Manager will allow you to navigate the dependency tree through the metadata stored in the Ibiblio repository. A dependency graphing plugin that will render a graphical representation of the information. This will output the dependency tree as it is calculated. It's here that you might see something that you didn't expect – an extra dependency. and that plexus-container-default attempts to introduce junit as a compile dependency.0-SNAPSHOT junit:3. an incorrect version. and why. and must be updated before the project can be released. which indicates dependencies that are in development.html will be created. It also includes some statistics and reports on two important factors: • Whether the versions of dependencies used for each module is in alignment. but appears in a multi-module build only. • 200 . The report shows all of the dependencies included in all of the modules within the project.0-SNAPSHOT (selected for compile) proficio-model:1. using indentation to indicate which dependencies introduce other dependencies. but more importantly in the second section it will list all of the transitive dependencies included through those dependencies. This helps ensure your build is consistent and reduces the probability of introducing an accidental incompatibility.8.8.1-alpha-2 (selected for compile) junit:3.1 (selected for test) plexus-container-default:1.Better Builds with Maven This report shows detailed information about your direct dependencies.1 (not setting scope to: compile.4 (selected for compile) classworlds:1. Whether there are outstanding SNAPSHOT dependencies in the build. as well as comments about what versions and scopes are selected.0-SNAPSHOT (selected for compile) Here you can see that. proficio-model is introduced by proficio-api. The file target/site/dependencyconvergence. for example. For example.0.0-alpha-9 (selected for compile) plexus-utils:1. and is shown in figure 6-12. This report is also a standard report. try the following recommendations for your dependencies: • • • • Look for dependencies in your project that are no longer used Check that the scope of your dependencies are set correctly (to test if only used for unit tests. This is particularly the case for dependencies that are optional and unused by your project. Use a range of supported dependency versions. Add exclusions to dependencies to remove poorly defined dependencies from the tree.Assessing Project Health with Maven Figure 6-12: The dependency convergence report These reports are passive – there are no associated checks for them. However. 201 . they can provide basic help in identifying the state of your dependencies once you know what to find. or runtime if it is needed to bundle with or run the application but not for compiling your source code). rather than using the latest available. To improve your project's health and the ability to reuse it as a dependency itself. You can control what version is actually used by declaring the dependency version in a project that packages or runs the application. declaring the absolute minimum supported as the lower boundary. Monitoring and Improving the Health of Your Releases Releasing a project is one of the most important procedures you will perform. 6. there is no verification that a library is binary-compatible – incompatibility will be discovered only when there's a failure.Better Builds with Maven Given the importance of this task. but then expected to continue working as they always have. Libraries will often be substituted by newer versions to obtain new features or bug fixes. Figure 6-13: An example Clirr report This is particularly important if you are building a library or framework that will be consumed by developers outside of your own project. but it is often tedious and error prone. specification dependencies that let you depend on an API and manage the implementation at runtime. Because existing libraries are not recompiled every time a version is changed. An important tool in determining whether a project is ready to be released is Clirr (). While the next chapter will go into more detail about how Maven can help automate that task and make it more reliable. and more.10. 202 . Clirr detects whether the current version of a library has introduced any binary incompatibilities with the previous release. Two that are in progress were listed above. An example Clirr report is shown in figure 6-13. more tools are needed in Maven. and the information released with it. this section will focus on improving the quality of the code released. Catching these before a release can eliminate problems that are quite difficult to resolve once the code is “in the wild”. but there are plans for more: • • A class analysis plugin that helps identify dependencies that are unused in your current project Improved dependency management features including different mechanisms for selecting versions that will allow you to deal with conflicting versions.sf. add the following to the reporting section of proficio-api/pom. [INFO] [INFO] [INFO] [INFO] [INFO] . For example.8 release.codehaus. You can change the version used with the comparisonVersion parameter. While methods of marking incompatibility are planned for future versions. that is before the current development version. By default. by setting the minSeverity parameter. run the following command: mvn clirr:clirr -DcomparisonVersion=0. If you run either of these commands..xml: ...mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <minSeverity>info</minSeverity> </configuration> </plugin> </plugins> </reporting> . As a project grows. the answer here is clearly – yes.Assessing Project Health with Maven But does binary compatibility apply if you are not developing a library for external consumption? While it may be of less importance.8 203 . where the dependency mechanism is based on the assumption of binary compatibility between versions. Maven currently works best if any version of an artifact is backwards compatible. the interactions between the project's own components will start behaving as if they were externally-linked. However. the report will be generated in target/site/clirrreport. This gives you an overview of all the changes since the last release.9 ------------------------------------------------------------BUILD SUCCESSFUL ------------------------------------------------------------- This version is determined by looking for the newest release in repository. you'll notice that Maven reports that it is using version 0. back to the first release.. If you run mvn site in proficio-api. [clirr:clirr] Comparing to version: 0. you can configure the plugin to show all informational messages. To see this in action.. This is particularly true in a Maven-based environment. to compare the current code to the 0.... the Clirr report shows only errors and warnings.html. <reporting> <plugins> <plugin> <groupId>org.9 of proficio-api against which to compare (and that it is downloaded if you don't have it already): . Different modules may use different versions. even if they are binary compatible. or a quick patch may need to be made and a new version deployed into an existing application. You can obtain the same result by running the report on its own using mvn clirr:clirr. if the team is prepared to do so. To add the check to the proficio-api/pom.. If it is the only one that the development team will worry about breaking. delegating the code. there is nothing in Java preventing them from being used elsewhere. since this early development version had a different API. the harder they are to change as adoption increases. You'll notice there are a more errors in the report.. Even if they are designed only for use inside the project. then there is no point in checking the others – it will create noise that devalues the report's content in relation to the important components.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . add the following to the build section: . In this instance.. However. <build> <plugins> <plugin> <groupId>org. to discuss and document the practices that will be used. Like all of the quality metrics.. it is important to agree up front. and it can assist in making your own project more stable. and later was redesigned to make sure that version 1. rather than removing or changing the original API and breaking binary compatibility. The longer poor choices remain. however you can see the original sources by extracting the Code_Ch06-2. It is best to make changes earlier in the development cycle. on the acceptable incompatibilities. Once a version has been released that is intended to remain binary-compatible going forward.codehaus. The Clirr plugin is also capable of automatically checking for introduced incompatibilities through the clirr:check goal. it is a good idea to monitor as many components as possible. you are monitoring the proficio-api component for binary compatibility changes only.. 204 .zip file.. as it will be used as the interface into the implementation by other applications. </plugins> </build> . This is the most important one to check. it is almost always preferable to deprecate an old API and add a new one.Better Builds with Maven These versions of proficio-api are retrieved from the repository.0 would be more stable in the long run. and to check them automatically. so that fewer people are affected.xml file. codehaus.. and ignored in the same way that PMD does.Assessing Project Health with Maven If you now run mvn verify. you can create a very useful mechanism for identifying potential release disasters much earlier in the development process. This allows the results to be collected over time to form documentation about known incompatibilities for applications using the library. which is available at. you can choose to exclude that from the report by adding the following configuration to the plugin: . you will see that the build fails due to the binary incompatibility introduced between the 0.9 preview release and the final 1.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <excludes> <exclude>**/Proficio</exclude> </excludes> </configuration> . not just the one acceptable failure. and then act accordingly.. as well as strategies for evolving an API without breaking it.codehaus. However. the following articles and books can be recommended: • Evolving Java-based APIs contains a description of the problem of maintaining binary compatibility. and so is most useful for browsing. Built as a Javadoc doclet.org/jdiff-maven-plugin. This can be useful in getting a greater level of detail than Clirr on specific class changes.9 release. <plugin> <groupId>org. Hopefully a future version of Clirr will allow acceptable incompatibilities to be documented in the source code.0 version. With this simple setup. it will not pinpoint potential problems for you. • A similar tool to Clirr that can be used for analyzing changes between releases is JDiff. it takes a very different approach. it is listed only in the build configuration. and particularly so if you are designing a public API. so the report still lists the incompatibility. Since this was an acceptable incompatibility due to the preview nature of the 0... taking two source trees and comparing the differences in method signatures and Javadoc annotations. </plugin> This will prevent failures in the Proficio class from breaking the build in the future. 205 . A limitation of this feature is that it will eliminate a class entirely. While the topic of designing a strong public API and maintaining binary compatibility is beyond the scope of this book. Effective Java describes a number of practical rules that are generally helpful to writing code in Java. It has a functional Maven 2 plugin. Note that in this instance. It is important that developers are involved in the decision making process regarding build constraints. but none related information from another report to itself. the Dashboard plugin). 206 . While some attempts were made to address this in Maven 1. will reduce the need to gather information from various sources about the health of the project. The purpose. and few of the reports aggregated information across a multiple module build.xml).0 (for example. as there is a constant background monitor that ensures the health of the project is being maintained. and as the report set stabilizes – summary reports will start to appear. How well this works in your own projects will depend on the development culture of your team. individual checks that fail the build when they're not met. In the absence of these reports. The additions and changes to Proficio made in this chapter can be found in the Code_Ch06-1. Best of all. none of the reports presented how the information changes over time other than the release announcements. The next chapter examines team development and collaboration. a new set of information about your project can be added to a shared Web site to help your team visualize the health of the project. the model remains flexible enough to make it easy to extend and customize the information published on your project web site. it is important that your project information not remain passive. regularly scheduled. Most Maven plugins allow you to integrate rules into the build that check certain constraints on that piece of information once it is well understood. of the visual display is to aid in deriving the appropriate constraints to use. to a focus on quality.11. However. Summary The power of Maven's declarative project model is that with a very simple setup (often only 4 lines in pom. Some of the reports linked to one another. along with techniques to ensure that the build checks are now automated. However. These are all important features to have to get an overall view of the health of a project. then.12. Once established. it requires a shift from a focus on time and deadlines. and run in the appropriate environment. 6. they did not address all of these requirements. and will be used as the basis for the next chapter. Finally. and incorporates the concepts learned in this chapter. enforcing good.0. Viewing Overall Project Health In the previous sections of this chapter. In some cases. and have not yet been implemented for Maven 2. so that they feel that they are achievable. a large amount of information was presented about a project.Better Builds with Maven 6. it should be noted that the Maven reporting API was written with these requirements in mind specifically.zip source archive. this focus and automated monitoring will have the natural effect of improving productivity and reducing time of delivery again. each in discrete reports. 7.Tom Clancy 207 .. . Even when it is not localized. The Issues Facing Teams Software development as part of a team. This problem is particularly relevant to those working as part of a team that is distributed across different physical locations and timezones. faces a number of challenges to the success of the effort. the fact that everyone has direct access to the other team members through the CoRE framework reduces the time required to not only share information. rapid development. it's just as important that they don't waste valuable time researching and reading through too many information sources simply to find what they need.Better Builds with Maven 7. CoRE enables globally distributed development teams to cohesively contribute to high-quality software. it is obvious that trying to publish and disseminate all of the available information about a project would create a near impossible learning curve and generate a barrier to productivity. in rapid. project information can still be misplaced. However. These tools aid the team to organize. it does encompass a set of practices and tools that enable effective team communication and collaboration. and that existing team members become more productive and effective.1. An organizational and technology-based framework. although a distributed team has a higher communication overhead than a team working in a single location. and document for reuse the artifacts that result from a software project. While it's essential that team members receive all of the project information required to be productive. CoRE emphasizes the relationship between project information and project members. web-based communication channels and web-based project management tools. This problem gets exponentially larger as the size of the team increases. whether it is 2 people or 200 people. or forgotten. Even though teams may be widely distributed. component-based projects despite large. As teams continue to grow. A Community-oriented Real-time Engineering (CoRE) process excels with this information challenge. but also to incorporate feedback. iterative cycles. visualize. further contributing to the problem. While Maven is not tied directly to the CoRE framework. one of the biggest challenges relates to the sharing and management of development information. 208 . and dealing with differences in opinions. CoRE is based on accumulated learnings from open source projects that have achieved successful. Using the model of a community. widely-distributed teams. and asynchronous engineering. The CoRE approach to development also means that new team members are able to become productive quickly. However. This value is delivered to development teams by supporting project transparency. Many of these challenges are out of any given technology's control – for instance finding the right people for the team. real-time stakeholder participation. misinterpreted. repeating errors previously solved or duplicating efforts already made. which is enabled by the accessibility of consistently structured and organized information such as centralized code repositories. resulting in shortened development cycles. working on complex. every other member (and particularly new members). As each member retains project information that isn't shared or commonly accessible. will inevitably have to spend time obtaining this localized information. the key to the information issue in both situations is to reduce the amount of communication necessary to obtain the required information in the first place. xml file. Without it.m2 subdirectory of your home directory (settings in this location take precedence over those in the Maven installation directory). and to user-specific profiles. This file can be stored in the conf directory of your Maven installation. because the environment will tend to evolve inconsistently once started that way. you learned how to create your own settings. it's a good idea to leverage Maven's two different settings files to separately manage shared and user-specific settings. the set up process for a new developer can be slow. This chapter also looks at the adoption and use of a consistent development environment. such as proxy settings. through the practice of continuous integration. while an individual developer's settings are stored in their home directory. To maintain build consistency. varying operating systems. The settings. In a shared development environment. Additionally. but also several that are typically common across users in a shared environment. these variables relate to the user and installation settings files. demonstrating how Maven provides teams with real-time information on the builds and health of a project. Maven can gather and share the knowledge about the health of a project. While one of Maven's objectives is to provide suitable conventions to reduce the introduction of inconsistencies in the build environment.Team Collaboration with Maven As described in Chapter 6. 7. there are unavoidable variables that remain. error-prone and full of omissions. 209 . multiple JDK versions. the key is to minimize the configuration required by each individual developer. In Chapter 2. and other discrete settings such as user names and passwords. How to Set up a Consistent Developer Environment Consistency is important when establishing a shared development environment. or in the . Common configuration settings are included in the installation directory.2. and the use of archetypes to ensure consistency in the creation of new projects. In this chapter. it will be the source of timeconsuming development problems in the future. this is taken a step further.xml file contains a number of settings that are user-specific. and to effectively define and declare them. while still allowing for this natural variability. In Maven. such as different installation locations for software.></pluginGroup> </pluginGroups> </settings> 210 .mycompany.com/internal/</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>property-overrides</activeProfile> <activeProfile>default-repositories</activeProfile> </activeProfiles> <pluginGroups> <pluginGroup>com.mycompany. <maven home>/conf/settings.Better Builds with Maven The following is an example configuration file that you might use in the installation directory.mycompany. username}. you can easily add and consistently roll out any new server and repository settings. The plugin groups are necessary only if an organization has plugins. By placing the common configuration in the shared settings. issues with inconsistently-defined identifiers and permissions are avoided. This profile will be defined in the user's settings file to set the properties used in the shared file.username>myuser</website.3 for more information on setting up an internal repository. Using the basic template. with only specific properties such as the user name defined in the user's settings. which are run from the command line and not defined in the POM. across users. The server settings will typically be common among a set of developers.username> </properties> </profile> </profiles> </settings> To confirm that the settings are installed correctly.3 of this chapter for more information on creating a mirror of the central repository within your own organization. While you may define a standard location that differs from Maven's default (for example. See section 7. internal repositories that contain a given organization's or department's released artifacts. the local repository is defined as the repository of a single user. such as ${website. These repositories are independent of the central repository in this configuration. which is typically one that has been set up within your own organization or department. ${user. Another profile.Team Collaboration with Maven There are a number of reasons to include these settings in a shared configuration: • • • • • • If a proxy server is allowed. at a single physical location. without having to worry about integrating local changes made by individual developers. you can view the merged result by using the following help plugin command: C:\mvnbook> mvn help:effective-settings 211 . The user-specific configuration is also much simpler as shown below: <settings> <profiles> <profile> <id>property-overrides</id> <properties> <website. it is important that you do not configure this setting in a way that shares a local repository. You'll notice that the local repository is omitted in the prior example. The active profiles listed enable the profile defined previously in every environment.home}/maven-repo). property-overrides is also enabled by default. The mirror element can be used to specify a mirror of a repository that is closer to you. See section 7. it would usually be set consistently across the organization or department. The previous example forms a basic template that is a good starting point for the settings file in the Maven installation. The profile defines those common. In Maven. Now that each individual developer on the team has a consistent set up that can be customized as needed. download the Jetty 5. The following are a few methods to achieve this: • • • • Rebuild the Maven release distribution to include the shared configuration file and distribute it internally. Check the Maven installation into CVS. For more information on profiles. or if there are network problems. While any of the available transport protocols can be used. Apache Tomcat. or other custom solution.10-bundle.3.10 server bundle from the book's Web site and copy it to the repository directory. and when possible. Use an existing desktop management solution. organization's will typically want to set up what is referred to as an internal repository. so that multiple developers and teams can collaborate effectively. located in the project directory. easily updated. While it can be stored anywhere you have permissions. To set up Jetty. Subversion. or create a new server using Apache HTTPd. and run: C:\mvnbook\repository> java -jar jetty-5. This internal repository is still treated as a remote repository in Maven. Change to that directory. each execution will immediately be up-to-date. or other source control management (SCM) system. however it applies to all projects that are built in the developer's environment. but requires a manual procedure.jar 8081 212 . since not everyone can deploy to the central Maven repository. doing so will prevent Maven from being available off-line. Place the Maven installation on a read-only shared or network drive from which each developer runs the application. but it is also important to ensure that the shared settings are easily and reliably installed with Maven. create a new directory in which to store the files. To set up your organization's internal repository using Jetty. if M2_HOME is not set. You can use an existing HTTP server for this. it is possible to maintain multiple Maven installations.xml file. developers must use profiles in the profiles. Setting up an internal repository is simple. If necessary. or any number of other servers.Better Builds with Maven Separating the shared settings from the user-specific settings is helpful. For an explanation of the different types of repositories. If this infrastructure is available. just as any other external repository would be. Each developer can check out the installation into their own machines and run it from there. In some circumstances however. the most popular is HTTP.1. However. Jetty. A new release will be required each time the configuration is changed. To publish releases for use across different environments within their network. • Adjusting the path or creating symbolic links (or shortcuts) to the desired Maven executable. Retrieving an update from an SCM will easily update the configuration and/or installation. 7. see Chapter 2. Creating a Shared Repository Most organizations will need to set up one or more shared repositories. an individual will need to customize the build of an individual project.xml file covers the majority of use cases for individual developer customization. Configuring the settings. To do this. in this example C:\mvnbook\repository will be used. see Chapter 3. the next step is to establish a repository to and from which artifacts can be published and dependencies downloaded.1. by one of the following methods: • Using the M2_HOME environment variable to force the use of a particular installation. you will want to set up or use an existing HTTP server that is in a shared. configured securely and monitored to ensure it remains running at all times. It is deployed to your Jetty server (or any other servlet container) and provides remote repository proxies. but rather than set up multiple web servers. it is common in many organizations as it eliminates the requirement for Internet access or proxy configuration. You can create a separate repository under the same server. The server is set up on your own workstation for simplicity in this example. and reporting.apache. • The Maven Repository Manager (MRM) is a new addition to the Maven build platform that is designed to administer your internal repository. C:\mvnbook\repository> mkdir internal It is also possible to set up another repository (or use the same one) to mirror content from the Maven central repository. searching. Use rsync to take a copy of the central repository and regularly update it. The repository manager can be downloaded from. This will download anything that is not already present. sftp and more. there are a number of methods available: • • Manually add content as desired using mvn deploy:deploy-file Set up the Maven Repository Manager as a proxy to the central repository. and keep a copy in your internal repository for others on your team to reuse. However.Team Collaboration with Maven You can now navigate to and find that there is a web server running displaying that directory. ftp. For the first repository. While this isn't required.8G. However. the size of the Maven repository was 5. accessible location. To populate the repository you just created. and gives full control over the set of artifacts with which your software is built. and is all that is needed to get started. it is possible to use a repository on another server with any combination of supported protocols including http. At the time of writing. it provides faster performance (as most downloads to individual developers come from within their own network). This creates an empty repository. Your repository is now set up. create a subdirectory called internal that will be available at. This chapter will assume the repositories are running from and that artifacts are deployed to the repositories using the file system. using the following command: C:\mvnbook\repository> mkdir central This repository will be available at. scp. separate repositories. by avoiding any reliance on Maven's relatively open central repository. For more information. 213 . as well as friendly repository browsing. In addition. Later in this chapter you will learn that there are good reasons to run multiple. you can store the repositories on this single server. refer to Chapter 3.org/repositorymanager/. and as a result. as shown in section 7. there are two choices: use it as a mirror. if you want to prevent access to the central repository for greater control.. You would use it as a mirror if it is intended to be a copy of the central repository exclusively. On the other hand. you must define a repository in a settings file and/or POM that uses the identifier central. it would be a nightmare to change should the repository location change! The solution is to declare your internal repository (or central replacement) in the shared settings.Better Builds with Maven When using this repository for your projects. or had it in their source code check out. it would need to be declared in every POM. to configure the repository from the project level instead of in each user's settings (with one exception that will be discussed next). and if it's acceptable to have developers configure this in their settings as demonstrated in section 7. Not only is this very inconvenient. This makes it impossible to define the repository in the parent. that declares shared settings within an organization and its departments.2. or to include your own artifacts in the same repository. there is a problem – when a POM inherits from another POM that is not in the central repository. this must be defined as both a regular repository and a plugin repository to ensure all access is consistent. for a situation where a developer might not have configured their settings and instead manually installed the POM. or have it override the central repository. If you have multiple repositories. The next section discusses how to set up an “organization POM”. it is necessary to declare only those that contain an inherited POM. unless you have mirrored the central repository using one the techniques discussed previously.2. otherwise Maven will fail to download any dependencies that are not in your local repository. you should override the central repository. Repositories such as the one above are configured in the POM usually.xml file. Developers may choose to use a different mirror. or the original central repository directly without consequence to the outcome of the build. so that a project can add repositories itself for dependencies located out of those repositories configured initially. Usually. It is still important to declare the repositories that will be used in the top-most POM itself. However. or hierarchy. To override the central repository with your internal repository. it must retrieve the parent from the repository. 214 . Since the version of the POM usually bears no resemblance to the software. 215 .0.0</modelVersion> <parent> <groupId>org.apache. By declaring shared elements in a common parent POM. It is important to recall. and is a project that. etc. consider the POM for Maven SCM: <project> <modelVersion>4.Team Collaboration with Maven 7.xml file in the shared installation (or in each developer's home directory). depending on the information that needs to be shared.org/maven-scm/</url> . that if your inherited projects reside in an internal repository. or the organization as a whole. Future versions of Maven plan to automate the numbering of these types of parent projects to make this easier. so it's possible to have one or more parents that define elements common to several projects. from section 7..0 – that is. These parents (levels) may be used to define departments. has a number of sub-projects (Maven.apache. Maven SCM. As an example. It is a part of the Apache Software Foundation. its departments. As a result.3. To continue the Maven example. as this is consistent information. wherein there's the organization. Any number of levels (parents) can be used. itself.. <modules> <module>maven-scm-api</module> <module>maven-scm-providers</module> . there are three levels to consider when working with any individual module that makes up the Maven project.). project inheritance can be used to assist in ensuring project consistency. While project inheritance was limited by the extent of a developer's checkout in Maven 1.. the easiest way to version a POM is through sequential numbering.. consider the Maven project itself. the current project – Maven 2 now retrieves parent projects from the repository. Maven Continuum. consistency is important when setting up your build infrastructure.maven</groupId> <artifactId>maven-parent</artifactId> <version>1</version> </parent> <groupId>org. and then the teams within those departments. </modules> </project> If you were to review the entire POM.apache.maven. you'd find that there is very little deployment or repositoryrelated information. Creating an Organization POM As previously mentioned in this chapter. which is shared across all Maven projects through inheritance. then that repository will need to be added to the settings. This project structure can be related to a company structure.4.scm</groupId> <artifactId>maven-scm</artifactId> <url>. You may have noticed the unusual version declaration for the parent project. org/</url> .org</post> ..apache</groupId> <artifactId>apache</artifactId> <version>1</version> </parent> <groupId>org.apache. </mailingList> </mailingLists> <developers> <developer> ..apache.maven</groupId> <artifactId>maven-parent</artifactId> <version>5</version> <url></modelVersion> <parent> <groupId>org.. <mailingLists> <mailingList> <name>Maven Announcements List</name> <post>announce@maven.. you'd see it looks like the following: <project> <modelVersion>4..apache.Better Builds with Maven If you look at the Maven project's parent POM.0. </developer> </developers> </project> 216 .. modified.apache. 217 . the Maven Repository Manager will allow POM updates from a web interface). and deployed with their new version as appropriate.apache. </snapshotRepository> </distributionManagement> </project> The Maven project declares the elements that are common to all of its sub-projects – the snapshot repository (which will be discussed further in section 7. <distributionManagement> <repository> .apache</groupId> <artifactId>apache</artifactId> <version>1</version> <organization> <name>Apache Software Foundation</name> <url>. you can retain the historical versions in the repository if it is backed up (in the future.. For this reason.apache. and the deployment locations. there is no best practice requirement to even store these files in your source control management system. is regarding the storage location of the source POM files...6). Source control management systems like CVS and SVN (with the traditional intervening trunk directory at the individual project level) do not make it easy to store and check out such a structure. it is best to store the parent POM files in a separate area of the source control tree. where they can be checked out. most of the elements are inherited from the organization-wide parent project. Again.snapshots</id> <name>Apache Snapshot Repository</name> <url></modelVersion> <groupId>org.Team Collaboration with Maven The Maven parent POM includes shared elements.org/</url> ... and less frequent schedule than the projects themselves. when working with this type of hierarchy. An issue that can arise.0. In fact. <repositories> <repository> <id>apache. These parent POM files are likely to be updated on a different. such as the announcements mailing list and the list of developers that work across the whole project.org/maven-snapshot-repository</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories> ...org/</url> </organization> <url>. in this case the Apache Software Foundation: <project> <modelVersion>4. </repository> <snapshotRepository> . continuous integration can enable a better development culture where team members can make smaller.Better Builds with Maven 7. and learn how to use Continuum to build this project on a regular basis. and you must stop the server to make the changes (to stop the server. For most installations this is all the configuration that's required. The configuration on the screen is straight forward – all you should need to enter are the details of the administration account you'd like to use. The examples discussed are based on Continuum 1. as well as the generic bin/plexus. First.3. press Ctrl-C in the window that is running Continuum).3> bin\win32\run There are scripts for most major platforms.0. iterative changes that can more easily support concurrent development processes. As of Continuum 1. The first screen to appear will be the one-time setup page shown in figure 7-1.3.org/. and the company information for altering the logo in the top left of the screen. continuous integration enables automated builds of your project on a regular interval. you will need to install Continuum. As such. however.sh for use on other Unix-based platforms. More than just nightly builds. you will pick up the Proficio example from earlier in the book. This is very simple – once you have downloaded it and unpacked it.0. ensuring that conflicts are detected earlier in a project's release life cycle.5. In this chapter. however newer versions should be similar. if you are running Continuum on your desktop and want to try the examples in this section. The examples also assumes you have Subversion installed. Continuous Integration with Continuum If you are not already familiar with it. you can run it using the following command: C:\mvnbook\continuum-1. Continuum is Maven's continuous integration and build server. Starting up continuum will also start a http server and servlet engine.tigris. continuous integration is a key element of effective collaboration. which you can obtain for your operating system from. some additional steps are required. 218 . You can verify the installation by viewing the web site at. rather than close to a release.0. these additional configuration requirements can be set only after the previous step has been completed. you can cut and paste field values from the following list: Field Name Value working-directory Working Directory Build Output Directory build-output-directory Base URL 219 .Team Collaboration with Maven Figure 7-1: The Continuum setup screen To complete the Continuum setup page. If you do not have this set up on your machine. You can then check out Proficio from that location. To enable this setting. you will also need an SMTP server to which to send the messages. since paths can be entered from the web interface. This requires obtaining the Code_Ch07. By default.org/continuum/guides/mini/guide-configuration. you can start Continuum again.. this is disabled as a security measure. refer to.. For instructions. <implementation> org.Better Builds with Maven In the following examples.. edit apps/continuum/conf/application.xml and verify the following line isn't commented out: ..apache.validation. POM files will be read from the local hard disk where the server is running. edit the file above to change the smtp-host setting. for example if it was unzipped in C:\mvnbook\svn: C:\mvnbook> svn co \ proficio 220 . <allowedScheme>file</allowedScheme> </allowedSchemes> </configuration> .codehaus. After these steps are completed.formica.html... The next step is to set up the Subversion repository for the examples.plexus. The default is to use localhost:25. To have Continuum send you e-mail notifications.UrlValidator </implementation> <configuration> <allowedSchemes> .zip archive and unpacking it in your environment. The ciManagement section is where the project's continuous integration is defined and in the above example has been configured to use Continuum locally on port 8080... <scm> <connection> scm:svn: </connection> <developerConnection> scm:svn: </developerConnection> </scm> .xml to correct the e-mail address to which notifications will be sent..version} </url> </site> </distributionManagement> . Once these settings have been edited to reflect your setup.Team Collaboration with Maven The POM in this repository is not completely configured yet..xml You should build all these modules to ensure everything is in order. with the following command: C:\mvnbook\proficio> mvn install 221 ...3 for information on how to set this up. from the directory C:\mvnbook\repository.. If you haven't done so already. <ciManagement> <system>continuum</system> <url> <notifiers> <notifier> <type>mail</type> <configuration> <address>youremail@yourdomain. This assumes that you are still running the repository Web server on localhost:8081. by uncommenting and modifying the following lines: .com</address> </configuration> </notifier> </notifiers> </ciManagement> .. since not all of the required details were known at the time of its creation. refer to section 7. Edit proficio/pom. The distributionManagement setting will be used in a later example to deploy the site from your continuous integration environment. and edit the location of the Subversion repository. <distributionManagement> <site> <id>website</id> <url> /reference/${project. commit the file with the following command: C:\mvnbook\proficio> svn ci -m 'my settings' pom. You have two options: you can provide the URL for a POM. a ViewCVS installation. under the Continuum logo. Figure 7-2: Add project screen shot This is all that is required to add a Maven 2 project to Continuum.0. If you return to the location that was set up previously. When you set up your own system later. Once you have logged in. enter the file:// URL as shown. or a Subversion HTTP server. you must either log in with the administrator account you created during installation. and each of the modules will be added to the list of projects. The login link is at the top-left of the screen. Instead.3 this does not work when the POM contains modules. you can now select Maven 2. Before you can add a project to the list. After submitting the URL. you will see an empty project list. as in the Proficio example. Continuum will return to the project summary page.Better Builds with Maven You are now ready to start using Continuum. the builds will be marked as New and their checkouts will be queued.0+ Project from the Add Project menu. or with another account you have since created with appropriate permissions. The result is shown in figure 7-3. in Continuum 1. or upload from your local drive. Initially. While uploading is a convenient way to configure from your existing check out. This will present the screen shown in figure 7-2. or perform other tasks. 222 . you will enter either a HTTP URL to a POM in the repository. Team Collaboration with Maven Figure 7-3: Summary page after projects have built Continuum will now build the project hourly. the build will show an “In progress” status. The Build History link can be used to identify the failed build and to obtain a full output log. but you may wish to go ahead and try them. restore the file above to its previous state and commit it again. First. If you want to put this to the test. and then fail. you might want to set up a notification to your favorite instant messenger – IRC.java. go to your earlier checkout and introduce an error into Proficio. This chapter will not discuss all of the features available in Continuum. Jabber. 223 .] Now.. MSN and Google Talk are all supported.java Finally. To avoid receiving this error every hour. The build in Continuum will return to the successful state.. marking the left column with an “!” to indicate a failed build (you will need to refresh the page using the Show Projects link in the navigation to see these changes). In addition.. remove the interface keyword: [.. For example. check the file in: C:\mvnbook\proficio\proficio-api> svn ci -m 'introduce error' \ src/main/java/com/mergere/mvnbook/proficio/Proficio. for example. you should receive an e-mail at the address you configured earlier.] public Proficio [. press Build Now on the Continuum web interface next to the Proficio API module. and send an e-mail notification if there are any problems. but regular schedule is established for site generation. based on selected schedules. While this seems obvious. Continuum can be configured to trigger a build whenever a commit occurs. for example. Continuous integration is most effective when developers commit regularly. iterative builds are helpful in some situations. and independent of the environment being used. test and production environments. Avoid customizing the JDK. This will be constrained by the length of the build and the available resources on the build machine. but rather keeping changes small and well tested. operating system and other variables. 224 . In addition. • • • • • • • In addition to the above best practices. Run a copy of the application continuously. if it isn't something already in use in other development. Consider a regular. Continuous integration is most beneficial when tests are validating that the code is working as it always has. commit often. Build all of a project's active branches. Fix builds as soon as possible. if the source control repository supports postcommit hooks. not just that the project still compiles after one or more changes occur.Better Builds with Maven Regardless of which continuous integration server you use. before the developer moves on or loses focus. Establish a stable environment. separate from QA and production releases. Continuous integration will be pointless if developers repetitively ignore or delete broken build notifications. and a future version will allow developers to request a fresh checkout. If the application is a web application. there are two additional topics that deserve special attention: automated updates to the developer web site. For these reports to be of value. or local settings. it is important that it can be isolated to the change that caused it. Run comprehensive tests. it is recommend that a separate. This will make it much easier to detect the source of an error when the build does break. This can be helpful for non-developers who need visibility into the state of the application. they need to be kept up-todate. you learned how to create an effective site containing project information and reports about the project's health and vitality. the continuous integration environment should be set up for all of the active branches. This doesn’t mean committing incomplete code. it is often ignored. it is also important that failures don't occur due to old build state. clean build. it is beneficial to test against all different versions of the JDK. In Chapter 6. but it is best to detect a failure as soon as possible. Continuum has preliminary support for system profiles and distributed testing. If multiple branches are in development. run a servlet container to which the application can be deployed from the continuous integration environment. there are a few tips for getting the most out of the system: • Commit early. Continuum currently defaults to doing a clean build. periodically. and profile usage. Run builds as often as possible. This is another way continuous integration can help with project collaboration and communication. This also means that builds should be fast – long integration and performance tests should be reserved for periodic builds. When a failure occurs in the continuous integration environment. enhancements that are planned for future versions. and your team will become desensitized to the notifications in the future. Though it would be overkill to regenerate the site on every commit. Run clean builds. While rapid. select Schedules. It is not typically needed if using Subversion. 16:00:00 from Monday to Friday.Team Collaboration with Maven Verify that you are still logged into your Continuum instance. only the default schedule is available. You will see that currently. since commits are not atomic and a developer might be committing midway through a update.html. 9:00:00. which will be configured to run every hour during business hours (8am – 4pm weekdays). The example above runs at 8:00:00. Next. The appropriate configuration is shown in figure 7-4.. Click the Add button to add a new schedule. This is useful when using CVS.. Figure 7-4: Schedule configuration To complete the schedule configuration. The “quiet period” is a setting that delays the build if there has been a commit in the defined number of seconds prior.opensymphony.. 225 . from the Administration menu on the left-hand side..com/quartz/api/org/quartz/CronTrig. In Continuum 1. so you will need to add the definition to each module individually. Maven Proficio.xml clean site-deploy --batch-mode -DenableCiProfile=true The goals to run are clean and site-deploy. as well – if this is a concern. click the Add button below the default build definition.0. there is no way to make bulk changes to build definitions. Figure 7-5: Adding a build definition for site deployment To complete the Add Build Definition screen. you can cut and paste field values from the following list: Field Name Value POM filename Goals Arguments pom. but does not recurse into the modules (the -N or --non-recursive argument). which will be visible from. and add the same build definition to all of the modules. use the non-recursive mode instead. 226 . and select the top-most project. Since this is the root of the multi-module build – and it will also detect changes to any of the modules – this is the best place from which to build the site. The Add Build Definition screen is shown in figure 7-5.Better Builds with Maven Once you add this schedule. To add a new build definition. In addition to building the sites for each module. The site will be deployed to the file system location you specified in the POM. return to the project list. when you first set up the Subversion repository earlier in this chapter. The project information shows just one build on the default schedule that installs the parent POM. In this example you will add a new build definition to run the site deployment for the entirety of the multi-module build. The downside to this approach is that Continuum will build any unchanged modules.3. it can aggregate changes into the top-level site as required. on the business hours schedule. which means that Build Now from the project summary page will not trigger this build. you can add the test. However. you'll see that these checks have now been moved to a profile. Profiles are a means for selectively enabling portions of the build.... In the previous example. You can see also that the schedule is set to use the site generation schedule created earlier.xml file in your Subversion checkout to that used in Chapter 6. The --non-recursive option is omitted. 227 . which can be a discouragement to using them. Any of these test goals should be listed after the site-deploy goal. a system property called enableCiProfile was set to true. . and view the generated site from. If you haven't previously encountered profiles. <profiles> <profile> <id>ciProfile</id> <activation> <property> <name>enableCiProfile</name> <value>true</value> </property> </activation> <plugins> <plugin> <groupId>org. the generated site can be used as reference for what caused the failure. which is essential for all builds to ensure they don't block for user input. so that if the build fails because of a failed check. However. If you compare the example proficio/pom.. You'll find that when you run the build from the command line (as was done in Continuum originally). these checks delayed the build for all developers. which sets the given system property. verify or integration-test goal to the list of goals. none of the checks added in the previous chapter are executed. In this particular case. However.Team Collaboration with Maven The arguments provided are --batch-mode.apache. In Chapter 6. please refer to Chapter 3. such as the percentage of code covered in the unit tests dropping below a certain value. each build definition on the project information page (to which you would have been returned after adding the build definition) has a Build Now icon. The meaning of this system property will be explained shortly. since most reports continue under failure conditions. Click this for the site generation build definition. a number of plugins were set up to fail the build if certain project health checks failed. if you want to fail the build based on these checks as well. to ensure these checks are run. the profile is enabled only when the enableCiProfile system property is set to true. The checks will be run when you enable the ciProfile using mvn -DenableCiProfile=true. It is rare that the site build will fail.maven.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <executions> . and -DenableCiProfile=true. and that it is not the default build definition. which are not changed. but the timing and configuration can be changed depending upon your circumstances.xml file in <maven home>/conf/settings. <activeProfile>ciProfile</activeProfile> </activeProfiles> . and clicking Edit next to the default build definition. the verify goal may need to be added to the site deployment build definition. the build involves checking out all of the dependent projects and building them yourself. In this section. As you saw before.6. or for the entire multi-module project to run the additional checks after the site has been generated. As Maven 2 is still executed as normal.. for all projects in Continuum. How you configure your continuous integration depends on the culture of your development team and other environmental factors such as the size of your projects and the time it takes to build and test them. The generated artifacts of the snapshot are stored in the local repository. these artifacts will be updated frequently. as discussed previously.xml file for the user under which it is running. it may be necessary to schedule them separately for each module.8 of this chapter.xml: . and in contrast to regular dependencies.home}/. you will learn about using snapshots more effectively in a team environment. Team Dependency Management Using Snapshots Chapter 3 of this book discussed how to manage your dependencies in a multi-module build.. indicates that the profile is always active when these settings are read. The guidelines discussed in this chapter will help point your team in the right direction. in an environment where a number of modules are undergoing concurrent development. Snapshots were designed to be used in a team environment as a means for sharing development versions of artifacts that have already been built. 228 . Usually. To enable this profile by default from these settings. For example. the team dynamic makes it critical. if the additional checks take too much time for frequent continuous integration builds. at least in the version of Continuum current at the time of writing. The other alternative is to set this profile globally. The first is to adjust the default build definition for each module. as well as the settings in the Maven installation. In this case the identifier of the profile itself. add the following configuration to the settings... <activeProfiles> . and how to enable this within your continuous integration environment.. Projects in Maven stay in the snapshot state until they are released.Better Builds with Maven There are two ways to ensure that all of the builds added in Continuum use this profile. and while dependency management is fundamental to any Maven build. by going to the module information page. So far in this book. in some cases.m2/settings. where projects are closely related. it is necessary to do this for each module individually. which is discussed in section 7. it reads the ${user. you must build all of the modules simultaneously from a master build.. 7. snapshots have been used to refer to the development version of an individual module. Additionally. rather than the property used to enable it. use binary snapshots that have been already built and tested.Team Collaboration with Maven While building all of the modules from source can work well and is handled by Maven inherently. this is achieved by regularly deploying snapshots to a shared repository.jar. <distributionManagement> <repository> <id>internal</id> <url></url> </repository> .. the version used is the time that it was deployed (in the UTC timezone) and the build number. building from source doesn't fit well with an environment that promotes continuous integration.131114-1. the Proficio project itself is not looking in the internal repository for dependencies..xml: . While this is not usually the case. locking the version in this way may be important if there are recent changes to the repository that need to be ignored temporarily. This technique allows you to continue using the latest version by declaring a dependency on 1. or to lock down a stable version by declaring the dependency version to be the specific equivalent such as 1. If you were to deploy again... </distributionManagement> Now. but rather relying on the other modules to be built first. <repositories> <repository> <id>internal</id> <url></url> </repository> </repositories> ..0-20060211. Considering that example.xml: .. Instead. Currently..0-20060211. In Maven. 229 . The filename that is used is similar to proficio-api-1. In this case.3. deploy proficio-api to the repository with the following command: C:\mvnbook\proficio\proficio-api> mvn deploy You'll see that it is treated differently than when it was installed in the local repository. it can lead to a number of problems: • • • • It relies on manual updates from developers.131114-1.. which can be error-prone. such as the internal repository set up in section 7. you'll see that the repository was defined in proficio/pom. add the following to proficio/pom.. To add the internal repository to the list of repositories used by Proficio regardless of settings.0SNAPSHOT. the time stamp would change and the build number would increment to 2. though it may have been configured as part of your settings files. proficio-api:1. add the following configuration to the repository configuration you defined above in proficio/pom. daily (the default). always. In this example. but you can also change the interval by changing the repository configuration. This is because the default policy is to update snapshots daily – that is. To see this... without having to manually intervene. no update would be performed. and then deploy the snapshot to share with the other team members.. If it were omitted. and interval:minutes. Several of the problems mentioned earlier still exist – so at this point. This causes many plugins to be checked for updates. build proficio-core with the following command: C:\mvnbook\proficio\proficio-core> mvn -U install During the build. similar to the example below (note that this output has been abbreviated): . <repository> . assuming that the other developers have remembered to follow the process... However. making it out-of-date. by default. However. This technique can ensure that developers get regular updates. and without slowing down the build by checking on every access (as would be the case if the policy were set to always). Whenever you use the -U argument. the updates will still occur only as frequently as new versions are deployed to the repository. as well as updating any version ranges. you may also want to add this as a pluginRepository element as well. it updates both releases and snapshots. or deployed without all the updates from the SCM. to check for an update the first time that particular dependency is used after midnight local time. you will see that some of the dependencies are checked for updates..xml: .Better Builds with Maven If you are developing plugins. any snapshot dependencies will be checked once an hour to determine if there are updates in the remote repository. this introduces a risk that the snapshot will not be deployed at all. The -U argument in the prior command is required to force Maven to update all of the snapshots in the build. The settings that can be used for the update policy are never. 230 . You can always force the update using the -U command. <snapshots> <updatePolicy>interval:60</updatePolicy> </snapshots> </repository> ..0-SNAPSHOT: checking for updates from internal . to see the updated version downloaded.. all that is being saved is some time.. Now. deployed with uncommitted code. It is possible to establish a policy where developers do an update from the source control management (SCM) system before committing.. this feature is enabled by default in a build definition.\apps\ continuum\build-output-directory C:\mvnbook\repository\internal Deployment Repository Directory Base URL Mergere Company Name..\.0.mergere. so let's go ahead and do it now. Log in as an administrator and go to the following Configuration screen. However. as well. Continuum can be configured to deploy its builds to a Maven snapshot repository automatically. it makes sense to have it build snapshots.gif Company Logo.. shown in figure 7-6.com Company URL Working Directory 231 . as you saw earlier.mergere..Team Collaboration with Maven A much better way to use snapshots is to automate their creation. To deploy from your server. you can cut and paste field values from the following list: Field Name Value C:\mvnbook\continuum-1.0.com/_design/images/mergere_logo.\apps\ continuum\working-directory Build Output Directory C:\mvnbook\continuum-1.3\bin\win32\. Since the continuous integration server regularly rebuilds the code from a known state. So far in this section..3\bin\win32\. you have not been asked to apply this setting. Figure 7-6: Continuum configuration To complete the Continuum configuration page. How you implement this will depend on the continuous integration server that you use.\. you must ensure that the distributionManagement section of the POM is correctly configured. If there is a repository configured to which to deploy them. To try this feature. when necessary. you can avoid all of the problems discussed previously. you can enter a full repository URL such as scp://repositoryhost/www/repository/internal. if you had a snapshot-only repository in /www/repository/snapshots. This will deploy to that repository whenever the version contains SNAPSHOT. with an updated time stamp and build number.snapshots</id> <url></url> </snapshotRepository> </distributionManagement> . With this setup..Better Builds with Maven The Deployment Repository Directory field entry relies on your internal repository and Continuum server being in the same location. and click Build Now on the Proficio API project.. 232 . return to your console and build proficio-core again using the following command: C:\mvnbook\proficio\proficio-core> mvn -U install You'll notice that a new version of proficio-api is downloaded.. For example. follow the Show Projects link. Once the build completes. you would add the following: . you can either lock a dependency to a particular build. If this is not the case... <distributionManagement> . This can be useful if you need to clean up snapshots on a regular interval. while you get regular updates from published binary dependencies.. Another point to note about snapshots is that it is possible to store them in a separate repository from the rest of your released artifacts. <snapshotRepository> <id>internal. and deploy to the regular repository you listed earlier. or build from source. If you are using the regular deployment mechanism (instead of using Continuum). but still keep a full archive of releases. when it doesn't. this separation is achieved by adding an additional repository to the distributionManagement section of your POM. Better yet. There are two ways to create an archetype: one based on an existing project using mvn archetype:create-from-project.. there is always some additional configuration required. using an archetype. either in adding or removing content from that generated by the archetypes. you can make the snapshot update process more efficient by not checking the repository that has only releases for updates. Beyond the convenience of laying out a project structure instantly. by hand. run the following command: C:\mvnbook\proficio> mvn archetype:create \ -DgroupId=com.7. The replacement repository declarations in your POM would look like this: . in a way that is consistent with other projects in your environment. While this is convenient.mergere.Team Collaboration with Maven Given this configuration. To avoid this. you can create one or more of your own archetypes. To get started with the archetype. <repositories> <repository> <id>internal</id> <url></url> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>internal. you have seen the archetypes that were introduced in Chapter 2 used to quickly lay down a project structure.mvnbook \ -DartifactId=proficio-archetype \ -DarchetypeArtifactId=maven-archetype-archetype 233 . 7. As you saw in this chapter. and replacing the specific values with parameters. the requirement of achieving consistency is a key issue facing teams. archetypes give you the opportunity to start a project in the right way – that is... and the other. Writing an archetype is quite like writing your own project.snapshots</id> <url></url> <snapshots> <updatePolicy>interval:60</updatePolicy> </snapshots> </repository> </repositories> .. Creating a Standard Project Archetype Throughout this book. you'll see that the archetype is just a normal JAR project – there is no special build configuration required. 234 . and the template project in archetype-resources.xml.java</source> </testSources> </archetype> Each tag is a list of files to process and generate in the created project.Better Builds with Maven The layout of the resulting archetype is shown in figure 7-7. testResources. so everything else is contained under src/main/resources. There are two pieces of information required: the archetype descriptor in META-INF/maven/archetype. and siteResources. Figure 7-7: Archetype directory layout If you look at pom.xml at the top level.java</source> </sources> <testSources> <source>src/test/java/AppTest. The archetype descriptor describes how to construct a new project from the archetype-resources provided. but it is also possible to specify files for resources. The example above shows the sources and test sources. The example descriptor looks like the following: <archetype> <id>proficio-archetype</id> <sources> <source>src/main/java/App. The JAR that is built is composed only of resources. w3. a previous release would be used instead). artifactId and version elements are variables that will be substituted with the values provided by the developer running archetype:create. the required version would not be known (or if this was later development. Continuing from the example in section 7.apache. the content of the files will be populated with the values that you provided on the command line. go to an empty directory and run the following command: C:\mvnbook> mvn archetype:create -DgroupId=com. You now have the template project laid out in the proficio-example directory. However. To do so. Releasing a project is explained in section 7.apache. For this example.xsd"> <modelVersion>4.0</modelVersion> <groupId>$groupId</groupId> <artifactId>$artifactId</artifactId> <version>$version</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. the archetypeVersion argument is not required at this point.org/POM/4.org/2001/XMLSchema-instance" xsi:schemaLocation=" of this chapter. since the archetype has not yet been released. 235 .mvnbook \ -DarchetypeArtifactId=proficio-archetype \ -DarchetypeVersion=1. Once you have completed the content in the archetype.0. install and deploy it like any other JAR.org/maven-v4_0_0. it has the correct deployment settings already.8.0. It will look very similar to the content of the archetype-resources directory you created earlier.0</version> <scope>test</scope> </dependency> </dependencies> </project> As you can see. From here. refer to the documentation on the Maven web site.3 of this chapter.mergere.Team Collaboration with Maven The files within the archetype-resources section are Velocity templates. the pom.mvnbook \ -DartifactId=proficio-example \ -DarchetypeGroupId=com.apache. For more information on creating an archetype. you need to populate the template with the content that you'd like to have applied consistently to new projects. now however. so you can run the following command: C:\mvnbook\proficio\proficio-archetype> mvn deploy The archetype is now ready to be used. the groupId. Since the archetype inherits the Proficio parent. Maven will build. These files will be used to generate the template files when the archetype is run.0" xmlns:xsi=". if omitted.0-SNAPSHOT Normally.0.org/POM/4. you will use the “internal” repository.xml file looks like the following: <project xmlns=". it is usually difficult or impossible to correct mistakes other than to make another. allowing them to be highly automated. it happens at the end of a long period of development when all everyone on the team wants to do is get it out there. 236 .8. Maestro is an Apache License 2. without making any modifications to your project. The perform step could potentially be run multiple times to rebuild a release from a clean checkout of the tagged version. Continuum and Archiva build platform. you will be prompted for values. It is usually tedious and error prone.0 distribution based on a pre-integrated Maven. run the following command: c:\mvnbook\proficio> mvn release:prepare -DdryRun=true This simulates a normal release preparation. You can continue using the code that you have been working on in the previous sections. To demonstrate how the release plugin works.5 The release plugin operates in two steps: prepare and perform. full of manual steps that need to be completed in a particular order. The release plugin takes care of a number of manual steps in updating the project POM. or check out the following: C:\mvnbook> svn co \ \ proficio To start the release process. For more information on Maestro please see:. Once the definition for a release has been set by a team. Maven provides a release plugin that provides the basic functions of a standard release process. Accept the defaults in this instance (note that running Maven in “batch mode” avoids these prompts and will accept all of the defaults). new release. and to perform standard tasks. You'll notice that each of the modules in the project is considered. which often leads to omissions or short cuts. Cutting a Release Releasing software is difficult.0. Finally. Worse.mergere. such as deployment to the remote repository.com/. the Proficio example will be revisited. updating the source control management system to check and commit release related changes. As the command runs.Better Builds with Maven 7. and released as 1. once a release has been made. The prepare step is run once for a release. and does all of the project and source control manipulation that results in a tagged version. 5 Mergere Maestro provides an automated feature for performing releases. and creating tags (or equivalent for your SCM). releases should be consistent every time they are built. the explicit version of plugins and dependencies that were used are added any settings from settings. to verify they are correct. This is because the prepare step is attempting to guarantee that the build will be reproducible in the future. 4. This can be corrected by adding the plugin definition to your POM.tag file written out to each module directory. However. In this POM. as they will be committed to the tag Run mvn clean integration-test to verify that the project will successfully build Describe other preparation goals (none are configured by default. a number of changes are made: • • • 1. including profiles from settings. or that different profiles will be applied. an error will appear.next respectively in each module directory. This contains a resolved version of the POM that Maven will use to build from if it exists. You'll notice that the version is updated in both of these files. even if the plugin is not declared in the POM. For that reason. there is also a release-pom. or part of the project. or obtained from the development repository of the Maven project) that is implied through the build life cycle. 5. not ready to be used as a part of a release. and is set based on the values for which you were prompted during the release process.xml. 6. these changes are not enough to guarantee a reproducible build – it is still possible that the plugin versions will vary. but this might include updating the metadata in your issue tracker. or creating and committing an announcement file) 8. and setting the version to the latest release (But only after verifying that your project builds correctly with that version!). if you are using a dependency that is a snapshot. all of the dependencies being used are releases. the appropriate SCM settings) Check if there are any local modifications Check for snapshots in dependency tree Check for snapshots of plugins in the build Modify all POM files in the build. and snapshots are a transient build.xml 237 . The prepare step ensures that there are no snapshots in the build. Describe the SCM commit and tag operations 9. 2. and this is reverted in the next POM. This is because you are using a locally installed snapshot of a plugin (either built yourself. To review the steps taken in this release process: Check for correct version of the plugin and POM (for example. In some cases. other than those that will be released as part of the process (that is. any active profiles are explicitly activated. named pom. However.Team Collaboration with Maven In this project.xml (both per-user and per-installation) are incorporated into the POM. you may encounter a plugin snapshot.xml.tag and pom. Describe the SCM commit operation You might like to review the POM files that are created for steps 5 and 9. as they will be committed for the next development iteration 10. 3.xml and profiles. that resulting version ranges will be different. Modify all POM files in the build. other modules).xml. 7. The SCM information is also updated in the tag POM to reflect where it will reside once it is tagged. Recall from Chapter 6 that you learned how to configure a number of checks – so it is important to verify that they hold as part of the release. the release still hasn't been generated yet – for that. Also. while locally.xml in the same directory as pom.5. you need to enable this profile during the verification step. use the following plugin configuration: [.apache. However. To include these checks as part of the release process. This is used by Maven.properties file that was created at the end of the last run.] <plugin> <groupId>org. this file will be release-pom. If you need to start from the beginning. Having run through this process you may have noticed that only the unit and integration tests were run as part of the test build. you can remove that file. you'll see in your SCM the new tag for the project (with the modified files). instead of the normal POM. Once this is complete.1-SNAPSHOT. This is not the case however. the version is now 1. and the updated POM files are committed.. the release plugin will resume a previous attempt by reading the release. as these can be established from the other settings already populated in the POM in a reproducible fashion.. when a build is run from this tag to ensure it matches the same circumstances as the release build. This is achieved with the release:perform goal.] Try the dry run again: C:\mvnbook\proficio> mvn release:prepare -DdryRun=true Now that you've gone through the test run and are happy with the results. or run mvn -Dresume=false release:prepare instead. To do so. When the final run is executed. you can go for the real thing with the following command: C:\mvnbook\proficio> mvn release:prepare You'll notice that this time the operations on the SCM are actually performed.. you need to deploy the build artifacts.xml. You won't be prompted for values as you were the first time – since by the default..maven. This is run as follows: C:\mvnbook\proficio> mvn release:perform 238 .Better Builds with Maven You may have expected that inheritance would have been resolved by incorporating any parent elements that are used. recall that in section 7. or that expressions would have been resolved. you created a profile to enable those checks conditionally.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <arguments>-DenableCiProfile=true</arguments> </configuration> </plugin> [. check out the tag: C:\mvnbook> svn co \. the release plugin will confirm that the checked out project has the same release plugin configuration as those being used (with the exception of goals). before you run the release:prepare goal. It is important in these cases that you consider the settings you want. both the original pom. you want to avoid such problems. this requires that you remember to add the parameter every time.xml file.apache.xml file. add the following goals to the POM: [. and the built artifacts are deployed. you'll see that a clean checkout was obtained from the created tag. during the process you will have noticed that Javadoc and source JAR files were produced and deployed into the repository for all the Java projects. it is necessary to know what version ranges are allowed for a dependency. you can examine the files that are placed in the SCM repository. These are configured by default in the Maven POM as part of a profile that is activated when the release is performed.. When the release is performed..org/plugins/maven-release-plugin/ for more information. you would run the following: C:\mvnbook\proficio> mvn release:perform -DconnectionUrl=\ scm:svn: Collaboration with Maven No special arguments are required. To release from an older version. rather than the specific version used for the release. To do so. because the release.0 If you follow the output above. and to deploy a copy of the site.apache.maven.properties file had been removed.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <goals>deploy</goals> </configuration> </plugin> [.0 You'll notice that the contents of the POM match the pom.properties file still exists to tell the goal the version from which to release. To ensure reproducibility.] You may also want to configure the release plugin to activate particular profiles. For the same reason.. If this is not what you want to run. Refer to the plugin reference at. you can change the goals used with the goals parameter: C:\mvnbook\proficio> mvn release:perform -Dgoals="deploy" However. Also. Since the goal is for consistency. This is the default for the release plugin – to deploy all of the built artifacts. The reason for this is that the POM files in the repository are used as dependencies and the original information is more important than the release-time information – for example. before running Maven from that location with the goals deploy site-deploy. 239 . and not the release-pom.. or if the release.xml file and the release-pom. or to set certain properties.] <plugin> <groupId>org. though.xml files are included in the generated JAR file. To do this. without having to declare and enable an additional profile.properties and any POM files generated as a result of the dry run.9. removing release. All of the features described in this chapter can be used by any development team... And all of these features build on the essentials demonstrated in chapters 1 and 2 that facilitate consistent builds. by making information about your projects visible and organized. define a profile with the identifier release-profile. real-time engineering style. Maven provides value by standardizing and automating the build process.maven. the only step left is to clean up after the plugin. as follows: [.Better Builds with Maven You can disable this profile by setting the useReleaseProfile parameter to false. and indeed this entire book.. The site and reports you've created can help a team communicate the status of a project and their work more effectively. 240 .] Instead. This in turn can lead to and facilitate best practices for developing in a community-oriented.apache. it can aid you in effectively using tools to achieve consistency in other areas of your development.Extra plugin configuration would be inserted here --> </build> </profile> </profiles> [.] After the release process is complete.. Simply run the following command to clean up: C:\mvnbook\proficio> mvn release:clean 7. the adoption of reusable plugins can capture and extend build knowledge throughout your entire organization. and while Maven focuses on delivering consistency in your build infrastructure through patterns.. To do this. Lack of consistency is the source of many problems when working in a team.. whether your team is large or small..plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <useReleaseProfile>false</useReleaseProfile> </configuration> </plugin> [. as follows: [.] <plugin> <groupId>org.] <profiles> <profile> <id>release-profile</id> <build> <!-.. Maven was designed to address issues that directly affect teams of developers. Summary As you've seen throughout this chapter. you may want to include additional actions in the profile. So. There are also strong team-related benefits in the preceding chapters – for example. rather than creating silos of information around individual projects. .8.you stay in Wonderland and I show you how deep the rabbit-hole goes. After this.the story ends. You take the blue pill . The Matrix 241 . using both Java 1. to a build in Maven: • • • • • Splitting existing sources and resources into modular Maven projects Taking advantage of Maven's inheritance and multi-project capabilities Compiling. You take the red pill . Migrating to Maven Migrating to Maven This chapter explains how to migrate (convert) an existing build in Ant. testing and building jars with Maven. there is no turning back. you wake up in your bed and believe whatever you want to believe.Morpheus.4 and Java 5 Using Ant tasks from within Maven Using Maven with your current directory structure This is your last chance. recommended Maven directory structure). which is the latest version at the time of writing. while still running your existing. how to split your sources into modules or components. and among other things. You will learn how to start building with Maven. For the purpose of this example.1. The Spring release is composed of several modules. you will be introduced to the concept of dependencies. Introduction The purpose of this chapter is to show a migration path from an existing build in Ant to Maven. which uses an Ant script.1.0-m1 of Spring. while enabling you to continue with your required work. This example will take you through the step-by-step process of migrating Spring to a modularized. You will learn how to use an existing directory structure (though you will not be following the standard.Better Builds with Maven 8. 8. This will allow you to evaluate Maven's technology. . Maven build. The Maven migration example is based on the Spring Framework build. Ant-based build system. we will focus only on building version 2. component-based. Introducing the Spring Framework The Spring Framework is one of today's most popular Java frameworks.1. how to run Ant tasks from within Maven. the Ant script compiles each of these different source directories and then creates a JAR for each module. you can see graphically the dependencies between the modules. These modules are built with an Ant script from the following source directories: • • src and test: contain JDK 1. The src and tiger/src directories are compiled to the same destination as the test and tiger/test directories. and each produces a JAR. using inclusions and exclusions that are based on the Java packages of each class. resulting in JARs that contain both 1. Each of these modules corresponds. TLD files.).4 and 1. Optional dependencies are indicated by dotted lines. with the Java package structure. 243 .Migrating to Maven Figure 8-1: Dependency relationship between Spring modules In figure 8-1.5 classes. more or less. properties files. For Spring. etc. ) per Maven project file. WAR. etc. the rule of thumb to use is to produce one artifact (JAR. that means you will need to have a Maven project (a POM) for each of the modules listed above. Inside the 'm2' directory.Better Builds with Maven 8. Where to Begin? With Maven. To start. you will need to create a directory for each of Spring's modules. you will create a subdirectory called 'm2' to keep all the necessary Maven changes clearly separated from the current build system. Figure 8-2: A sample spring module directory 244 . In the Spring example.2. war. in Spring. spring-parent) • version: this setting should always represent the next release version number appended with .org</url> <organization> <name>The Spring Framework Project</name> </organization> In this parent POM we can also add dependencies such as JUnit. <groupId>com. Recall from previous chapters that during the release process. the main source and test directories are src and test. and ear values should be obvious to you (a pom value means that this project is used for metadata only) The other values are not strictly required.m2book. primary used for documentation purposes. <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. however. which will be used for testing in every module. For this example.Migrating to Maven In the m2 directory.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. company.0-m1-SNAPSHOT</version> <name>Spring parent</name> <packaging>pom</packaging> <description>Spring Framework</description> <inceptionYear>2002</inceptionYear> <url>. in order to tag the release in your SCM.springframework. respectively. non-snapshot version for a short period of time. you will use com. each module will inherit the following values (settings) from the parent POM. the Spring team would use org. you will need to create a parent POM. 245 . • packaging: the jar. and it should mimic standard package naming conventions to avoid duplicate values. etc. • groupId: this setting indicates your area of influence. You will use the parent POM to store the common configuration settings that apply to all of the modules. as it is our 'unofficial' example version of Spring.mergere..mergere.springframework • artifactId: the setting specifies the name of this module (for example. the version you are developing in order to release.SNAPSHOT – that is.8. Let's begin with these directories. thereby eliminating the requirement to specify the dependency repeatedly across multiple modules.migrating. For example. project. Maven will convert to the definitive.m2book. department.1</version> <scope>test</scope> </dependency> </dependencies> As explained previously. Include Commons Attributes generated Java sources --> <src path="${commons.tempdir. you will need to append -Dmaven./test</testSourceDirectory> <plugins> <plugin> <groupId>org. Recall from Chapter 2./.3" debug="${debug}" deprecation="false" optimize="false" failonerror="true"> <src path="${src./. your build section will look like this: <build> <sourceDirectory>. deprecation and optimize (false).debug=false to the mvn command (by default this is set to true).classes. and failonerror (true) values. For the debug attribute.attributes.dir}" source="1./src</sourceDirectory> <testSourceDirectory>..dir}"/> <!-. At this point.maven.3)..apache.3</source> <target>1. you can retrieve some of the configuration parameters for the compiler.3</target> </configuration> </plugin> </plugins> </build> 246 .Better Builds with Maven Using the following code snippet from Spring's Ant build script. that Maven automatically manages the classpath from its list of dependencies..3" target="1. For now. you don't have to worry about the commons-attributes generated sources mentioned in the snippet.compiler. <javac destdir="${target. so to specify the required debug function in Maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1..src}"/> <classpath refid="all-libs"/> </javac> As you can see these include the source and target compatibility (1. as you will learn about that later in this chapter. These last three properties use Maven's default values. Spring's Ant script uses a debug parameter. so there is no need for you to add the configuration parameters. in the buildmain target. excludes}"/> </batchtest> </junit> You can extract some configuration information from the previous code: • forkMode=”perBatch” matches with Maven's forkMode parameter with a value of once.testclasses. Maven sets the reports destination directory (todir) to target/surefire-reports.dir}"> <fileset dir="${target.properties files etc take precedence --> <classpath location="${target. haltonfailure and haltonerror settings. this value is read from the project.includes}" excludes="${test. You will need to specify the value of the properties test.includes and test. • • formatter elements are not required as Maven generates both plain text and xml reports.mockclasses.properties file loaded from the Ant script (refer to the code snippet below for details). and this doesn't need to be changed.awt.Must go first to ensure any jndi.dir}" includes="${test. so you will not need to locate the test classes directory (dir). by default.headless=true -XX:MaxPermSize=128m -Xmx128m"/> <!-. You will not need any printsummary. as Maven prints the test summary and stops for any test error or failure.excludes from the nested fileset. Maven uses the default value from the compiler plugin. From the tests target in the Ant script: <junit forkmode="perBatch" printsummary="yes" haltonfailure="yes" haltonerror="yes"> <jvmarg line="-Djava. • • • • • 247 . classpath is automatically managed by Maven from the list of dependencies.Need files loaded as resources --> <classpath location="${test. since the concept of a batch for testing does not exist.testclasses. by default.Migrating to Maven The other configuration that will be shared is related to the JUnit tests.classes.dir}"/> <classpath location="${target.dir}"/> <!-.dir}"/> <classpath location="${target. The nested element jvmarg is mapped to the configuration parameter argLine As previously noted.dir}"/> <classpath refid="all-libs"/> <formatter type="plain" usefile="false"/> <formatter type="xml"/> <batchtest fork="yes" todir="${reports. plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <forkMode>once</forkMode> <childDelegation>false</childDelegation> <argLine> -Djava. 248 .4 to run you do not need to exclude hibernate3 tests. <plugin> <groupId>org. # Convention is that our JUnit test classes have XXXTests-style names.includes=**/*Tests. # Second exclude needs to be used for JDK. test.class</include> </includes> <excludes> <exclude>**/Abstract*</exclude> </excludes> </configuration> </plugin> The childDelegation option is required to prevent conflicts when running under Java 5 between the XML parser provided by the JDK and the one included in the dependencies in some modules.maven.headless=true -XX:MaxPermSize=128m -Xmx128m </argLine> <includes> <include>**/*Tests.4. Note that it is possible to use another lower JVM to run tests if you wish – refer to the Surefire plugin reference documentation for more information. It makes tests run using the standard classloader delegation instead of the default Maven isolated classloader. which are processed prior to the compilation.apache.excludes=**/Abstract* org/springframework/orm/hibernate3/** The includes and excludes referenced above.1 # being compiled with target JDK 1.5 . due to Hibernate 3. test.class # # Wildcards to exclude among JUnit tests. mandatory when building in JDK 1.excludes=**/Abstract* #test. Since Maven requires JDK 1.4.and generates sources from them that have to be compiled with the normal Java compiler. When building only on Java 5 you could remove that option and the XML parser (Xerces) and APIs (xml-apis) dependencies.awt. Spring's Ant build script also makes use of the commons-attributes compiler in its compileattr and compiletestattr targets. servlet.mojo</groupId> <artifactId>commons-attributes-maven-plugin</artifactId> <executions> <execution> <configuration> <includes> <include>**/metadata/*.codehaus.attributes. 249 .attributes.web.Compile to a temp directory: Commons Attributes will place Java Source here.java</include> <include>org/springframework/jmx/**/*. Maven handles the source and destination directories automatically. --> <attribute-compiler <fileset dir="${test.Compile to a temp directory: Commons Attributes will place Java Source here.dir}" includes="**/metadata/*.java"/> </attribute-compiler> From compiletestattr: <!-.test}"> <fileset dir="${test. --> <fileset dir="${src. --> <attribute-compiler </attribute-compiler> In Maven. this same function can be accomplished by adding the commons-attributes plugin to the build section in the POM.dir}" includes="org/springframework/aop/**/*.tempdir.java</include> </testIncludes> </configuration> <goals> <goal>compile</goal> <goal>test-compile</goal> </goals> </execution> </executions> </plugin> Later in this chapter you will need to modify these test configurations.dir}" includes="org/springframework/jmx/**/*.src}"> <!-Only the PathMap attribute in the org.Migrating to Maven From compileattr: <!-.java</include> </includes> <testIncludes> <include>org/springframework/aop/**/*.springframework.metadata package currently needs to be shipped with an attribute.handler. 0-m1-SNAPSHOT</version> </parent> <artifactId>spring-core</artifactId> <name>Spring core</name> Again.dir}"> <include name="org/springframework/core/**"/> <include name="org/springframework/util/**"/> </fileset> <manifest> <attribute name="Implementation-Title" value="${spring-title}"/> <attribute name="Implementation-Version" value="${spring-version}"/> <attribute name="Spring-Version" value="${spring-version}"/> </manifest> </jar> From the previous code snippet. As you saw before.mergere. compiler configuration. In each subdirectory. Compiling In this section. which centralizes and maintains information common to the project.3.). since the sources and resources are in the same directory in the current Spring build. However. The following is the POM for the spring-core module. you will start to compile the main Spring source. description. For the resources. in this case the defaults are sufficient. you will need to add a resources element in the build section.dir}/modules/spring-core.jar"> <fileset dir="${target. you need to create the POM files for each of Spring's modules.classes.java files from the resources. and organization name to the values in the POM. 8. JUnit test configuration. Maven will automatically set manifest attributes such as name. tests will be dealt with later in the chapter.m2book.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. or they will get included in the JAR. you will need to create a POM that extends the parent POM. 250 . setting the files you want to include (by default Maven will pick everything from the resource directory). review the following code snippet from Spring's Ant script. you will need to exclude the *. as those values are inherited from parent POM. <parent> <groupId>com.4.Better Builds with Maven 8. you can determine which classes are included in the JAR and what attributes are written into the JAR's manifest. etc. While manifest entries can also be customized with additional configuration to the JAR plugin. Creating POM files Now that you have the basic configuration shared by all modules (project information. This module is the best to begin with because all of the other modules depend on it. you won't need to specify the version or groupId elements of the current module. To begin. version. where spring-core JAR is created: <jar jarfile="${dist. you will need to tell Maven to pick the correct classes and resources from the core and util packages. Maven will by default compile everything from the source directory./src</directory> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> </configuration> </plugin> </plugins> </build> 251 . <build> <resources> <resource> <directory>. you will need to configure the compiler plugin to include only those in the core and util packages.Migrating to Maven For the classes.apache.java</exclude> </excludes> </resource> </resources> <plugins> <plugin> <groupId>org./.maven. because as with resources.. which is inherited from the parent POM.. As an alternative. you now know that you need the Apache Commons Logging library (commons-logging) to be added to the dependencies section in the POM.java:[31. But.commons.\. Specify site:www. beginning with the following: [INFO] -----------------------------------------------------------------------[ERROR] BUILD FAILURE [INFO] -----------------------------------------------------------------------[INFO] Compilation failure C:\dev\m2book\code\migrating\spring\m2\springcore\. you can search the repository using Google.commons.\.24] cannot find symbol symbol : class Log location: class org.apache.ibiblio. From the previous output.core.apache. Regarding the artifactId..\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. In the case of commons-logging..apache.34] package org.34] package org. located in the org/apache/commons/logging directory in the repository.apache.org/maven2/commons-logging/commons-logging/.java:[107.. 252 .springframework. you need to check the central repository at ibiblio. what groupid.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\. the convention is to use a groupId that mirrors the package name.logging. You will see a long list of compilation failures. commons-logging groupId would become org.org/maven2 commons logging.support.Better Builds with Maven To compile your Spring build.java:[19.\src\org\springframework\util\xml\SimpleSaxErrorHandler. If you check the repository.commons. for historical reasons some groupId values don't follow this convention and use only the name of the project.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\.ibiblio.34] package org. the option that is closest to what is required by your project.logging does not exist These are typical compiler messages.java:[30. Typically. artifactId and version should we use? For the groupId and artifactId. it's usually the JAR name without a version (in this case commonslogging). caused by the required classes not being on the classpath...io.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver.. you can now run mvn compile.\.. you will find all the available versions of commons-logging under. and then choose from the search results.PathMatchingResourcePatternResolver C:\dev\m2book\code\migrating\spring\m2\springcore\.. For example. changing dots to slashes.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. the actual groupId is commons-logging. However.\.commons. ibiblio. 253 .0.1. So.com/. However. search the ibiblio repository through Google by calculating the MD5 checksum of the JAR file with a program such as md5sum. Continuum and Archiva build platform.jar.org/maven2 to the query. we strongly encourage and recommend that you invest the time at the outset of your migration. component-oriented. to make explicit the dependencies and interrelationships of your projects. For instance.0 distribution based on a preintegrated Maven. we discovered that the commons-beanutils version stated in the documentation is wrong and that some required dependencies are missing from the documentation. you will find that there is documentation for all of Spring's dependencies in readme. has been developed and is available as part of Maestro.txt in the lib directory of the Spring source.ibiblio. there are some other options to try to determine the appropriate versions for the dependencies included in your build: • Check if the JAR has the version in the file name • Open the JAR file and look in the manifest file META-INF/MANIFEST. you could search with: site:). all submodules would use the same dependencies). and then search in Google prepending site:www. the previous directory is the artifactId (hibernate) and the other directories compose the groupId. you have to be careful as the documentation may contain mistakes and/or inaccuracies.Migrating to Maven With regard the version. with the slashes changed to dots (org.MF • For advanced users. using a Web interface. through inheritance. although you could simply follow the same behavior used in Ant (by adding all the dependencies in the parent POM so that. You can use this as a reference to determine the versions of each of the dependencies. For more information on Maestro please see:. explicit dependency management is one of the biggest benefits of Maven once you have invested the effort upfront. When needed. Maestro is an Apache License 2.hibernate) An easier way to search for dependencies. <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. For example. during the process of migrating Spring to Maven.ibiblio.md5 You can see that the last directory is the version (3. For details on Maven Archiva (the artifact repository manager) refer to the Maven Archiva project for details)6. 6 Maven Archiva is part of Mergere Maestro. modular projects that are easier to maintain in the long term. for the hibernate3.org/maven2 78d5c38f1415efc64f7498f828d8069a The search will return: provided with Spring under lib/hibernate. Doing so will result in cleaner.1/hibernate-3. While adding dependencies can be the most painful part of migrating to Maven.org/maven2/org/hibernate/hibernate/3.4</version> </dependency> </dependencies> Usually you will convert your own project. so you will have first hand knowledge about the dependencies and versions used.mergere. 5.this time all of the sources for spring-core will compile. we will cover how to run the tests.Better Builds with Maven Running again mvn compile and repeating the process previously outlined for commons-logging. Compiling Tests Setting the test resources is identical to setting the main resources.. <testResources> <testResource> <directory>. 8.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1. Now. setting which test resources to use. <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3. with the exception of changing the location from which the element name and directory are pulled.java</exclude> </excludes> </testResource> </testResources> 254 .properties file required for logging configuration. so log4j will not be included in other projects that depend on this.9</version> <optional>true</optional> </dependency> Notice that log4j is marked as optional.5. and setting the JUnit test sources to compile. Optional dependencies are not included transitively. After compiling the tests. 8. and it is just for the convenience of the users.2. Using the optional tag does not affect the current project. you may decide to use another log implementation. For the first step./test</directory> <includes> <include>log4j.1. Testing Now you're ready to compile and run the tests. run mvn compile again . you will repeat the previous procedure for the main classes./. This is because in other projects.properties</include> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*. you will need to add the log4j.. you will notice that you also need Apache Commons Collections (aka commons-collections) and log4j. In addition. 4</version> <scope>test</scope> </dependency> The scope is set to test. as well. as all the test classes compile correctly now.mock. not tests.Migrating to Maven Setting the test sources for compilation follows the same procedure.springframework. as before. in order to compile the tests: <dependency> <groupId>javax. <testIncludes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </testIncludes> You may also want to check the Log4JConfigurerTests. Inside the mavencompiler-plugin configuration. To exclude test classes in Maven. org. As a result. Therefore. add the testExcludes element to the compiler configuration as follows. you will get compilation errors. In other words. if spring-core depends on spring-beans and spring-beans depends on spring-core. we cannot add a dependency from spring-core without creating a circular dependency. It may appear initially that spring-core depends on spring-mock.java</exclude> <exclude>org/springframework/util/SerializationTestUtils. you will see the following error: package javax. if you try to compile the test classes by running mvn test-compile.springframework.java</exclude> <exclude>org/springframework/util/ClassUtilsTests.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.beans packages are missing. but rather require other modules to be present. spring-web and spring-beans modules. you will need to add the testIncludes element.servlet does not exist This means that the following dependency must be added to the POM. which one do we build first? Impossible to know.java</exclude> <exclude>org/springframework/util/ReflectionUtilsTests. <testExcludes> <exclude>org/springframework/util/comparator/ComparatorTests.springframework.java</exclude> <exclude>org/springframework/util/ObjectUtilsTests. depend on classes from springcore.java</exclude> <exclude>org/springframework/core/io/ResourceTests. but this time there is a special case where the compiler complains because some of the classes from the org. when you run mvn test-compile. you will see that their main classes. it makes sense to exclude all the test classes that reference other modules from this one and include them elsewhere. Now. 255 . If you run mvn test-compile again you will have a successful build.java</exclude> </testExcludes> Now. So.web and org.java class for any hard codes links to properties files and change them accordingly. the key here is to understand that some of the test classes are not actually unit tests for springcore. as this is not needed for the main sources. but if you try to compile those other modules. springframework. for the test class that is failing org.aopalliance package is inside the aopallience JAR.io. Failures: 1. However. as it will process all of the previous phases of the build life cycle (generate sources. The first section starts with java.core. Failures: 1.springframework.core. This indicates that there is something missing in the classpath that is required to run the tests.0</version> <scope>test</scope> </dependency> Now run mvn test again.015 sec <<<<<<<< FAILURE !! This output means that this test has logged a JUnit failure and error. so to resolve the problem add the following to your POM <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.5.2. Within this file.Better Builds with Maven 8. Time elapsed: 0.FileNotFoundException: class path resource [org/aopalliance/] cannot be resolved to URL because it does not exist. you will find the following: [surefire] Running org.support. you will get the following error report: Results : [surefire] Tests run: 113. run tests. You will get the following wonderful report: [INFO] -----------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ The last step in migrating this module (spring-core) from Ant to Maven. compile.PathMatchingResourcePatternResolverTe sts. This command can be used instead most of the time. [INFO] ------------------------------------------------------------------------ Upon closer examination of the report output. etc. is to run mvn install to make the resulting JAR available to other projects in your local Maven repository. there is a section for each failed test called stacktrace. simply requires running mvn test. when you run this command.txt. To debug the problem. Errors: 1. you will need to check the test logs under target/surefire-reports. compile tests.io.) 256 .support.PathMatchingResourcePatternResolverTests [surefire] Tests run: 5. The org. Errors: 1 [INFO] -----------------------------------------------------------------------[ERROR] BUILD ERROR [INFO] -----------------------------------------------------------------------[INFO] There are test failures.io. Running Tests Running the tests in Maven. For instance.groupId}: groupId of the current POM being built For example. you can refer to spring-core from spring-beans with the following.version}: version of the current POM being built ${project.4</version> </dependency> </dependencyManagement> The following are some variables that may also be helpful to reduce duplication: • • ${project.6.6. In the same way. you will find that you are repeating yourself. 8. If you follow the order of the modules described at the beginning of the chapter you will be fine. and remove the versions from the individual modules (see Chapter 3 for more information). move these configuration settings to the parent POM instead.groupId}</groupId> <artifactId>spring-core</artifactId> <version>${project.version}</version> </dependency> 257 . instead of repeatedly adding the same dependency version information to each module. Using the parent POM to centralize this information makes it possible to upgrade a dependency version across all sub-projects from a single location. Avoiding Duplication As soon as you begin migrating the second module. That way. you will be adding the Surefire plugin configuration settings repeatedly for each module that you convert. To avoid duplication. See figure 8-1 to get the overall picture of the interdependencies between the Spring modules. otherwise you will find that the main classes from some of the modules reference classes from modules that have not yet been built. since they have the same groupId and version: <dependency> <groupId>${project. <dependencyManagement> <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. Other Modules Now that you have one module working it is time to move on to the other modules. use the parent POM's dependencyManagement section to specify this information once.1.Migrating to Maven 8. each of the modules will be able to inherit the required Surefire configuration.0. Generally with Maven. this can cause previously-described cyclic dependencies problem. First. they will need to run them under Java 5. By splitting them into different modules. Although it is typically not recommended. and spring-web-mock. Building Java 5 Classes Some of Spring's modules include Java 5 classes from the tiger folder.maven. you can use it as a dependency for other components. with only those classes related to spring-context module. how can the Java 1.6. by specifying the test-jar type. attempting to use one of the Java 5 classes under Java 1.6. you can split Spring's mock classes into spring-context-mock.5 sources be added? To do this with Maven. that a JAR that contains the test classes is also installed in the repository: <plugin> <groupId>org.version}</version> <type>test-jar</type> <scope>test</scope> </dependency> A final note on referring to test classes from other modules: if you have all of Spring's mock classes inside the same module. So. As the compiler plugin was earlier configured to compile with Java 1. you will need to create a new spring-beans-tiger module.groupId}</groupId> <artifactId>spring-beans</artifactId> <version>${project. To eliminate this problem. you need to create a new module with only Java 5 classes instead of adding them to the same module and mixing classes with different requirements. any users. be sure to put that JAR in the test scope as follows: <dependency> <groupId>${project. 8. users will know that if they depend on the module composed of Java 5 classes.Better Builds with Maven 8.4.2.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> Once that JAR is installed. 258 . Referring to Test Classes from Other Modules If you have tests from one component that refer to tests from other modules. particularly in light of transitive dependencies. in this case it is necessary to avoid refactoring the test source code.3 and some compiled for Java 5 in the same JAR. make sure that when you run mvn install. would experience runtime errors. it's easier to deal with small modules.apache. there is a procedure you can use.3 compatibility. However. with only those classes related to spring-web.3.3 or 1. Consider that if you include some classes compiled for Java 1. the Java 5 modules will share a common configuration for the compiler. and then a directory for each one of the individual tiger modules.Migrating to Maven As with the other modules that have been covered. as follows: Figure 8-3: A tiger module directory The final directory structure should appear as follows: Figure 8-4: The final directory structure 259 . The best way to split them is to create a tiger folder with the Java 5 parent POM.>..apache. with all modules In the tiger POM.maven... you will need to add a module entry for each of the directories./tiger/src</sourceDirectory> <testSourceDirectory>./.5</target> </configuration> </plugin> </plugins> </build> 260 ././tiger/test</testSourceDirectory> <plugins> <plugin> <groupId>org.Better Builds with Maven Figure 8-5: Dependency relationship./.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1./....5</source> <target>1. Maven can call Ant tasks directly from a POM using the maven-antrun-plugin.springframework. you need to use the Ant task in the spring-remoting module to use the RMI compiler.classes.RmiInvocationWrapper" iiop="true"> <classpath refid="all-libs"/> </rmic> 261 .RmiInvocationWrapper"/> <rmic base="${target. In this case. but to still be able to build the other modules when using Java 1.remoting.5</jdk> </activation> <modules> <module>tiger</module> </modules> </profile> </profiles> 8. you just need a new module entry for the tiger folder. Using Ant Tasks From Inside Maven In certain migration cases. <profiles> <profile> <id>jdk1. with the Spring migration.classes. this is: <rmic base="${target.dir}" classname="org.4.6.5</id> <activation> <jdk>1. you may find that Maven does not have a plugin for a particular task or an Ant target is so small that it may not be worth creating a new plugin.Migrating to Maven In the parent POM.dir}" classname="org.rmi.remoting.rmi. From Ant.5 JDK.4 you will add that module in a profile that will be triggered only when using 1. For example.springframework. add: <plugin> <groupId>org. stub and tie classes from them. To complete the configuration.home}/.jar</systemPath> </dependency> </dependencies> </plugin> As shown in the code snippet above. and required by the RMI task.compile.directory} and maven.directory}/classes" classname="org.remoting.build.Better Builds with Maven To include this in Maven build.classpath"/> </rmic> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>com. So.sun</groupId> <artifactId>tools</artifactId> <scope>system</scope> <version>1.RmiInvocationWrapper" iiop="true"> <classpath refid="maven.4</version> <systemPath>${java..apache.RmiInvocationWrapper"/> <rmic base="${project.build. In this case./lib/tools. the most appropriate phase in which to run this Ant task is in the processclasses phase.springframework.rmi.build. will take the compiled classes and generate the rmi skeleton. which applies to that plugin only.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <configuration> <tasks> <echo>Running rmic</echo> <rmic base="${project.compile.directory}/classes" classname="org. 262 . you will need to determine when Maven should run the Ant task.maven.jar above. such as ${project.remoting. there are some references available already. the rmic task. such as the reference to the tools.rmi.classpath. There are also references for anything that was added to the plugin's dependencies section. which is a classpath reference constructed from all of the dependencies in the compile scope or lower. which is bundled with the JDK.springframework. 6. such as springaspects.html.apache. Sun's Activation Framework and JavaMail are not redistributable from the repository due to constraints in their licenses. These issues were shared with the Spring developer community and are listed below: • Moving one test class. There is some additional configuration required for some modules. You can then install them in your local repository with the following command. 8. which uses AspectJ for weaving the classes. NamespaceHandlerUtilsTests. special cases that must be handled. You may need to download them yourself from the Sun site or get them from the lib directory in the example code for this chapter.3.6.Migrating to Maven 8. For more information on dealing with this issue.2 -Dpackaging=jar You will only need to do this process once for all of your projects or you may use a corporate repository to share them across your organization. as these test cases will not work in both Maven and Ant. Some Special Cases In addition to the procedures outlined previously for migrating Spring to Maven.jar -DgroupId=javax.mail -DartifactId=mail -Dversion=1. to install JavaMail: mvn install:install-file -Dfile=mail. there are two additional. which used relative paths in Log4JConfigurerTests class. For example.5. 263 . see.. Using classpath resources is recommended over using file system resources. These can be viewed in the example code. mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging> For instance.6. Non-redistributable Jars You will find that some of the modules in the Spring build depend on JARs that are not available in the Maven central repository. these would move from the original test folder to src/test/java and src/test/resources respectively for Java sources and other files .you can delete that 80 MB lib folder. Finally. you would move all Java files under org/springframework/core and org/springframework/util from the original src folder to the module's folder src/main/java. you would eliminate the need to include and exclude sources and resources “by hand” in the POM files as shown in this chapter. The same for tests. you can realize Maven' other benefits . as Maven downloads everything it needs and shares it across all your Maven projects automatically . reports. and quality metrics. In the case of the Spring example. Restructuring the Code If you do decide to use Maven for your project. create JARs. By adopting Maven's standard directory structure. compile and test the code. Summary By following and completing this chapter. ObjectUtilsTests. Once you decide to switch completely to Maven. At the same time.Better Builds with Maven 8. 264 . you will be able to keep your current build working. reducing its size by two-thirds! 8. you will be able to take an existing Ant-based build. and install those JARs in your local repository using Maven. Maven can eliminate the requirement of storing jars in a source code management system. for the spring-core module.just remember not to move the excluded tests (ComparatorTests. SerializationTestUtils and ResourceTests). By doing this. ClassUtilsTests. For example.advantages such as built-in project documentation generation. in addition to the improvements to your build life cycle.7. Now that you have seen how to do this for Spring. ReflectionUtilsTests. you can apply similar concepts to your own Ant based build. you will be able to take advantage of the benefits of adopting Maven's standard directory structure.8. split it into modular components (if needed). Once you have spent this initial setup time Maven. you can simplify the POM significantly. All of the other files under those two packages would go to src/main/resources. it it highly recommended that you go through the restructuring process to take advantage of the many timesaving and simplifying conventions within Maven. I'll try not to take that personally. All systems automated and ready. Scott. A chimpanzee and two trainees could run her! Kirk: Thank you.Appendix A: Resources for Plugin Developers Appendix A: Resources for Plugin Developers In this appendix you will find: • Maven's Life Cycles • Mojo Parameter Expressions • Plugin Metadata Scotty: She's all yours. Mr. .Star Trek 265 . sir. initialize – perform any initialization steps required before the main part of the build can start.1.Better Builds with Maven A. For example. This is necessary to accommodate the inevitable variability of requirements for building different types of projects. and the content of the current set of POMs to be built is valid. generate-test-sources – generate compilable unit test code from other source formats. A. In other words. process-classes – perform any post-processing of the binaries produced in the preceding step. It continues by describing the mojos bound to the default life cycle for both the jar and maven-plugin packagings. as when using Aspect-Oriented Programming techniques. archiving it into a jar. performing any associated tests. corresponding to the three major activities performed by Maven: building a project from source. along with a short description for the mojos which should be bound to each. etc. 8. along with a summary of bindings for the jar and maven-plugin packagings. For the default life cycle. 4. This may include copying these resources into the target classpath directory in a Java build. It begins by listing the phases in each life cycle. 6. compile – compile source code into binary form. 3. Life-cycle phases The default life cycle is executed in order to perform a traditional build. generate-resources – generate non-code resources (such as configuration files. process-resources – perform any modification of non-code resources necessary.) from other source formats. in the target output location. validate – verify that the configuration of Maven. mojo-binding defaults are specified in a packaging-specific manner. 2. this section will describe the mojos bound by default to the clean and site life cycles. It contains the following phases: 1. The default Life Cycle Maven provides three life cycles. generate-sources – generate compilable code from other source formats. This section contains a listing of the phases in the default life cycle. and generating a project web site. 7. 9. cleaning a project of the files generated by a build. a mojo may apply source code patches here.1. Maven's Life Cycles Below is a discussion of Maven's three life cycles and their default mappings. such as instrumentation or offline code-weaving. Finally. and distributing it into the Maven repository system. 266 . it takes care of compiling the project's code. 5. process-sources – perform any source modification processes necessary to prepare the code for compilation.1. 16. process-test-resources – perform any modification of non-code testing resources necessary. a mojo may apply source code patches here. generate-test-resources – generate non-code testing resources (such as configuration files. install – install the distributable archive into the local Maven repository. 17. deploy – deploy the distributable archive into the remote Maven repository configured in the distributionManagement section of the POM. 11. before it is available for installation or deployment.Appendix A: Resources for Plugin Developers 10. preintegration-test – setup the integration testing environment for this project. test – execute unit tests on the application compiled and assembled up to step 8 above. test-compile – compile unit test source code into binary form. etc.) from other source formats. in the testing target output location. 14. package – assemble the tested application code and resources into a distributable archive. 18. 15. 21. This may include copying these resources into the testing target classpath location in a Java build. verify – verify the contents of the distributable archive. 12. post-integration-test – return the environment to its baseline form after executing the integration tests in the preceding step. For example. 13. 267 . using the environment configured in the preceding step. 19. 20. integration-test – execute any integration tests defined for this project. process-test-sources – perform any source modification processes necessary to prepare the unit test code for compilation. This could involve removing the archive produced in step 15 from the application server used to test it. This may involve installing the archive from the preceding step into some sort of application server. Alongside each. Compile unit-test source code to the test output directory. Copy non-source-code test resources to the test output directory for unit-test compilation.testRes maven-resourcesresources ources plugin test-compile test package install deploy testCom maven-compilerpile plugin test jar maven-surefireplugin maven-jar-plugin install maven-installplugin deploy maven-deployplugin 268 . Execute project unit tests. you will find a short description of what that mojo does.. Create a jar archive from the staging directory. Deploy the jar archive to a remote Maven repository. Filter variables if necessary. process-test. Install the jar archive into the local Maven repository.Better Builds with Maven Bindings for the jar packaging Below are the default life-cycle bindings for the jar packaging. Compile project source code to the staging directory for jar creation. and the rest. Indeed. compiling source code. As such. if one exists. packaging. testing. and metadata references to latest plugin version. 269 . maven-plugin-plugin Update the plugin registry. to install updateRegistry reflect the new plugin installed in the local repository. for example). maven-plugin artifacts are in fact jar files. the maven-plugin packaging also introduces a few new mojo bindings.. they undergo the same basic processes of marshaling non-source-code resources. and generate a plugin descriptor.Appendix A: Resources for Plugin Developers Bindings for the maven-plugin packaging The maven-plugin project packaging behaves in almost the same way as the more common jar packaging. However. to extract and format the metadata for the mojos within. addPluginArtifact Metadata maven-plugin-plugin Integrate current plugin information with package plugin search metadata. 2. 270 . post-clean – finalize the cleaning process. along with a summary of the default bindings. which perform the most common tasks involved in cleaning a project. along with any additional directories configured in the POM. Maven provides a set of default mojo bindings for this life cycle. Life-cycle phases The clean life-cycle phase contains the following phases: 1. you will find a short description of what that mojo does. effective for all POM packagings. clean – remove all files that were generated during another build process 3. the state of the project before it was built.Better Builds with Maven A. Table A-3: The clean life-cycle bindings for the jar packaging Phase Mojo Plugin Description clean clean maven-clean-plugin Remove the project build directory.1. Below is a listing of the phases in the clean life cycle. pre-clean – execute any setup or initialization procedures to prepare the project for cleaning 2. Default life-cycle bindings Below are the clean life-cycle bindings for the jar packaging. Alongside each. The clean Life Cycle This life cycle is executed in order to restore a project back to some baseline state – usually. and even deploy the resulting web site to your server. render your documentation source files into HTML. Table A-4: The site life-cycle bindings for the jar packaging Phase Mojo Plugin Description site site maven-site-plugin maven-site-plugin Generate all configured project reports. along with a summary of the default bindings. you will find a short description of what that mojo does.Appendix A: Resources for Plugin Developers A. site-deploy – use the distributionManagement configuration in the project's POM to deploy the generated web site files to the web server. which perform the most common tasks involved in generating the web site for a project. Below is a listing of the phases in the site life cycle.3. Maven provides a set of default mojo bindings for this life cycle. and prepare the generated web site for potential deployment 4. The site Life Cycle This life cycle is executed in order to generate a web site for your project. and render documentation into HTML 3. Life-cycle phases The site life cycle contains the following phases: 1.1. Alongside each. site-deploy deploy 271 . site – run all associated project reports. It will run any reports that are associated with your project. and render documentation source files into HTML. Deploy the generated web site to the web server path specified in the POM distribution Management section. pre-site – execute any setup or initialization steps to prepare the project for site generation 2. effective for all POM packagings. post-site – execute any actions required to finalize the site generation process. Default Life Cycle Bindings Below are the site life-cycle bindings for the jar packaging. apache.apache.re This is a reference to the local repository pository.util.ma List of reports to be generated when the site ven.maven. and extract only the information it requires. java. Using the discussion below.Better Builds with Maven A.apache.ma List of project instances which will be ven. It is used for bridging results from forked life cycles back to the main line of execution.MavenProject> processed as part of the current build.1. Simple Expressions Maven's plugin parameter injector supports several primitive expressions. Finally.List<org. It will summarize the root objects of the build state which are available for mojo expressions.2.ArtifactRepository used to cache artifacts during a Maven build. ${reactorProjects} ${reports} ${executedProject} java. They are summarized below: Table A-5: Primitive expressions supported by Maven's plugin parameter Expression Type Description ${localRepository} ${session} org. These expressions allow a mojo to traverse complex build state.2.apache. in addition to providing a mechanism for looking up Maven components on-demand.List<org.maven. org.maven. 272 . along with the published Maven API documentation.project.execution.project. which act as a shorthand for referencing commonly used build state objects.M The current build session. Mojo Parameter Expressions Mojo parameter values are resolved by way of parameter expressions when a mojo is initialized.apache. org. it will describe the algorithm used to resolve complex parameter expressions.MavenReport> life cycle executes. A. and often eliminates dependencies on Maven itself beyond the plugin API.artifact. This reduces the complexity of the code contained in the mojo. This contains avenSession methods for accessing information about how Maven was called.util.reporting. mojo developers should have everything they need to extract the build state they require.Mav This is a cloned instance of the project enProject instance currently being built. This section discusses the expression language used by Maven to inject build state and plugin configuration into mojos. The first is the root object. Complex Expression Roots In addition to the simple expressions above. 273 .File The current project's root directory.2.project.maven. This root object is retrieved from the running application using a hard-wired mapping. If at some point the referenced object doesn't contain a property that matches the next expression part. Otherwise.apache. The resulting value then becomes the new 'root' object for the next round of traversal. unless specified otherwise.m2/settings. org. merged from ings conf/settings. the value that was resolved last will be returned as the expression's value.xml in the maven application directory and from . A.maven. ${plugin} org. much like a primitive expression would. ptor.Sett The Maven settings. an expression part named 'child' translates into a call to the getChild() method on that object. From there. this reflective lookup process is aborted. The valid root objects for plugin parameter expressions are summarized below: Table A-6: A summary of the valid root objects for plugin parameter expressions Expression Root Type Description ${basedir} ${project} ${settings} java. and must correspond to one of the roots mentioned above. the expression is split at each '.PluginDescriptor including its dependency artifacts. successive expression parts will extract values from deeper and deeper inside the build state.apache. following standard JavaBeans naming conventions. the next expression part is used as a basis for reflectively traversing that object' state. During this process.' character. When there are no more expression parts. No advanced navigation can take place using is such expressions. rendering an array of navigational directions. Project org.io.Maven Project instance which is currently being built.3.apache. The Expression Resolution Algorithm Plugin parameter expressions are resolved using a straightforward algorithm.plugin. if the expression matches one of the primitive expressions (mentioned above) exactly. First. then the value mapped to that expression is returned.xml in the user's home directory.Appendix A: Resources for Plugin Developers A.descri The descriptor instance for the current plugin.2. Maven supports more complex expressions that traverse the object graph starting at some root object that contains build state. if there is one.maven. Repeating this.2.settings. resolved in this order: 1.plugins</groupId> <artifactId>maven-myplugin-plugin</artifactId> <version>2.The description element of the plugin's POM. an ancestor POM. as well as the metadata formats which are translated into plugin descriptors from Java. Its syntax has been annotated to provide descriptions of the elements.and Ant-specific mojo source files. then the string literal of the expression itself is used as the resolved value. If the parameter is still empty after these two lookups. If a user has specified a property mapping this expression to a specific value in the current POM. It includes summaries of the essential plugin descriptor.This element provides the shorthand reference for this plugin. This includes properties specified on the command line using the -D commandline option. |-> <goalPrefix>myplugin</goalPrefix> <!-.apache. For | instance. or method invocations that don't conform to standard JavaBean naming conventions. Currently. <plugin> <!-.maven.The name of the mojo. If the value is still empty. | this name allows the user to invoke this mojo from the command line | using 'myplugin:do-something'. Plugin descriptor syntax The following is a sample plugin descriptor. --> <description>Sample Maven Plugin</description> <!-. Combined with the 'goalPrefix' element above.This is a list of the mojos contained within this plugin. |-> <inheritedByDefault>true</inheritedByDefault> <!-. 2. Maven plugin parameter expressions do not support collection lookups.Whether the configuration for this mojo should be inherted from | parent to child POMs by default. or an active profile. array index references. it will be resolved as the parameter value at this point. --> <mojos> <mojo> <!-. this plugin could be referred to from the command line using | the 'myplugin:' prefix.0-SNAPSHOT</version> <!-. The POM properties.Better Builds with Maven If at this point Maven still has not been able to resolve a value for the parameter expression. Plugin metadata Below is a review of the mechanisms used to specify metadata for plugins. Maven will consult the current system properties. it will attempt to find a value in one of two remaining places.These are the identity elements (groupId/artifactId/version) | from the plugin POM. The system properties. |-> <groupId>org. |-> <goal>do-something</goal> 274 . | This allows the user to specify that this mojo be executed (via the | <execution> section of the plugin configuration in the POM).Ensure that this other mojo within the same plugin executes before | this one.Determines how Maven will execute this mojo in the context of a | multimodule build. |-> <phase>compile</phase> <!-. then execute that life cycle up to the specified phase. |-> <requiresDirectInvocation>false</requiresDirectInvocation> <!-. but the mojo itself has certain life-cycle | prerequisites. such mojos will | cause the build to fail. it will only | execute once. --> <description>Do something cool.Tells Maven that a valid project instance must be present for this | mojo to execute. it will be executed once for each project instance in the | current build.Tells Maven that this mojo can ONLY be invoked directly.Tells Maven that a valid list of reports for the current project are | required before this plugin can execute. It's restricted to this plugin to avoid creating inter-plugin | dependencies.Some mojos cannot execute if they don't have access to a network | connection. |-> <executeLifecycle>myLifecycle</executeLifecycle> <!-. Mojos that are marked as aggregators should use the | ${reactorProjects} expression to retrieve a list of the project | instances in the current build. If Maven is operating in offline mode. without | also having to specify which phase is appropriate for the mojo's | execution.Which phase of the life cycle this mojo will bind to by default. If the mojo is not marked as an | aggregator.Appendix A: Resources for Plugin Developers <!-.This tells Maven to create a clone of the current project and | life cycle. This is | useful to inject specialized behavior in cases where the main life | cycle should remain unchanged. via the | command line. It is a good idea to provide this. | and specifies a custom life-cycle overlay that should be added to the | cloned life cycle before the specified phase is executed.</description> <!-. regardless of the number of project instances in the | current build. to give users a hint | at where this task should run. If a mojo is marked as an aggregator. | This is useful when the user will be invoking this mojo directly from | the command line. This flag controls whether the mojo requires 275 . |-> <requiresProject>true</requiresProject> <!-. |-> <executePhase>process-resources</executePhase> <!-. |-> <requiresReports>false</requiresReports> <!-.Description of what this mojo does. |-> <executeGoal>do-something-first</executeGoal> <!-.This is optionally used in conjunction with the executePhase element. |-> <aggregator>false</aggregator> <!-. --> <parameters> <parameter> <!-.plugins. this parameter must be configured via some other section of | the POM.maven.SiteDeployMojo</implementation> <!-.The parameter's name. either via command-line or POM configuration.Tells Maven that the this plugin's configuration should be inherted | from a parent POM by default. |-> <inheritedByDefault>true</inheritedByDefault> <!-. |-> <requiresOnline>false</requiresOnline> <!-.The Java type for this parameter. |-> <alias>outputDirectory</alias> <!-.apache. |-> <implementation>org.site. the | mojo (and the build) will fail when this parameter doesn't have a | value. In Java mojos. --> <language>java</language> <!-.</description> </parameter> </parameters> 276 .Better Builds with Maven | Maven to be online. | It will be used as a backup for retrieving the parameter value. this will often reflect the | parameter field name in the mojo class.Description for this parameter.This is a list of the parameters used by this mojo.This is an optional alternate parameter name for this parameter. If set to | false.Whether this parameter is required to have a value. |-> <editable>true</editable> <!-. |-> <name>inputDirectory</name> <!-. specified in the javadoc comment | for the parameter field in Java mojo implementations.Whether this parameter's value can be directly specified by the | user.The class or script path (within the plugin's jar) for this mojo's | implementation. as in the case of the list of project dependencies. |-> <description>This parameter does something important.File</type> <!-. --> <type>java. If true. |-> <required>true</required> <!-.io. unless the user specifies | <inherit>false</inherit>.The implementation language for this mojo. WagonManager</role> <!-.manager. as | compared to the descriptive specification above.This is the list of non-parameter component references used by this | mojo. |-> <field-name>wagonManager</field-name> </requirement> </requirements> </mojo> </mojos> </plugin> 277 . The expression used to extract the | parameter value is ${project.File">${project. | | The general form is: | <param-nameparam-expr</param-name> | |-> <configuration> <!-.apache. |-> <requirements> <requirement> <!-. this parameter is named "inputDirectory".WagonManager |-> <role>org.Use a component of type: org.artifact. |-> <inputDirectory implementation="java.For example. Each parameter must | have an entry here that describes the parameter name.File.This is the operational specification of this mojo's parameters.reporting. | along with an optional classifier for the specific component instance | to be used (role-hint).artifact.outputDirectory}</inputDirectory> </configuration> <!-. Finally.Appendix A: Resources for Plugin Developers <!-.apache.outputDirectory}.Inject the component instance into the "wagonManager" field of | this mojo.io.io. Components are specified by their interface class name (role). | and the primary expression used to extract the parameter's value. and it | expects a type of java.maven. the requirement specification tells | Maven which mojo-field should receive the component instance.reporting. parameter type.manager.maven. Alphanumeric. executeLifecycle. life cycle name. with dash ('-') Any valid phase name true or false (default is false) true or false (default is true) true or false (default is false) true or false (default is false) Yes No No No No No 278 . Class-level annotations The table below summarizes the class-level javadoc annotations which translate into specific elements of the mojo section in the plugin descriptor. Classlevel annotations correspond to mojo-level metadata elements.2. Java Mojo Metadata: Supported Javadoc Annotations The Javadoc annotations used to supply metadata about a particular mojo come in two types. and field-level annotations correspond to parameter-level metadata elements.4. Table A-7: A summary of class-level javadoc annotations Descriptor Element Javadoc Annotation Values Required? aggregator description executePhase.. and requirements sections of a mojo's specification in the plugin descriptor. These metadata translate into elements within the parameter.The dependency scope required for this mojo. corresponding to the ability to map | multiple mojos into a single build script.The default life-cycle phase binding for this mojo --> <phase>compile</phase> <!-. NOTE: | multiple mojos are allowed here.Whether this mojo requires a current project instance --> <requiresProject>true</requiresProject> <!-. configuration. Ant Metadata Syntax The following is a sample Ant-based mojo metadata file. <pluginMetadata> <!-. Its syntax has been annotated to provide descriptions of the elements. |-> <requiresDependencyResolution>compile</requiresDependencyResolution> <!-.The name for this mojo --> <goal>myGoal</goal> <!-. |-> <mojos> <mojo> <!-.Whether this mojo requires access to project reports --> <requiresReports>true</requiresReports> 279 . Maven will resolve | the dependencies in this scope before this mojo executes.Appendix A: Resources for Plugin Developers Field-level annotations The table below summarizes the field-level annotations which supply metadata about mojo parameters.. Table A-8: Field-level annotations Descriptor Element Javadoc Annotation Values Required? alias.Contains the list of mojos described by this metadata file.5.2. |-> <property>prop</property> <!-.The list of parameters this mojo uses --> <parameters> <parameter> <!-.maven.Another mojo within this plugin to execute before this mojo | executes. |-> <inheritByDefault>true</inheritByDefault> <!-.apache.This is the type for the component to be injected. --> <name>nom</name> <!-.A named overlay to augment the cloned life cycle for this fork | only |-> <lifecycle>mine</lifecycle> <!-.artifact.The parameter name.The property name used by Ant tasks to reference this parameter | value.The phase of the forked life cycle to execute --> <phase>initialize</phase> <!-.Whether this mojo requires Maven to execute in online mode --> <requiresOnline>true</requiresOnline> <!-.Whether the configuration for this mojo should be inherited | from parent to child POMs by default.This describes the mechanism for forking a new life cycle to be | executed prior to this mojo executing.Whether this parameter is required for mojo execution --> <required>true</required> 280 .Whether this mojo operates as an aggregator --> <aggregator>true</aggregator> <!-. |-> <execute> <!-. |-> <goal>goal</goal> </execute> <!-.This is an optional classifier for which instance of a particular | component type should be used.List of non-parameter application components used in this mojo --> <components> <component> <!-.ArtifactResolver</role> <!-. |-> <requiresDirectInvocation>true</requiresDirectInvocation> <!-. --> <role>org.Whether this mojo must be invoked directly from the command | line.resolver.Better Builds with Maven <!-. |-> <hint>custom</hint> </component> </components> <!-. If this is specified.The description of what the mojo is meant to accomplish --> <description> This is a test. this element will provide advice for an | alternative parameter to use instead.Appendix A: Resources for Plugin Developers <!-.Whether the user can edit this parameter directly in the POM | configuration or the command line |-> <readonly>true</readonly> <!-. </description> <!-. |-> <deprecated>Use another mojo</deprecated> </mojo> </mojos> </pluginMetadata> 281 .maven.When this is specified.apache.MavenProject</type> <!-. it provides advice on which alternative mojo | to use.The description of this parameter --> <description>Test parameter</description> <!-.An alternative configuration name for this parameter --> <alias>otherProp</alias> <!-.artifactId}</defaultValue> <!-. |-> <deprecated>Use something else</deprecated> </parameter> </parameters> <!-.property}</expression> <!-.The expression used to extract this parameter's value --> <expression>${my.The Java type of this mojo parameter --> <type>org.project.The default value provided when the expression won't resolve --> <defaultValue>${project. . txt README. Standard location for test sources.txt target/ Maven’s POM. 284 . Standard location for application resources. Directory for all generated output. the generated site or anything else that might be generated as part of your build. which is always at the top-level of a project. Standard location for assembly filters. Standard location for application configuration filters. This would include compiled classes. Standard location for resource filters. Standard Directory Structure Table B-1: Standard directory layout for maven project content Standard Location Description pom. A license file is encouraged for easy identification by users and is optional.Better Builds with Maven B. Standard location for test resources. For example.xml LICENSE.1. Standard location for test resource filters. A simple note which might help first time users and is optional. generated sources that may be compiled. you src/main/java/ src/main/resources/ src/main/filters/ src/main/assembly/ src/main/config/ src/test/java/ src/test/resources/ src/test/filters/ Standard location for application sources. may generate some sources from a JavaCC grammar. target/generated-sources/<plugin-id> Standard location for generated sources. Maven’s Super POM <project> <modelVersion>4.2.org/maven2</url> <layout>default</layout> <snapshots> <enabled>false</enabled> </snapshots> <releases> <updatePolicy>never</updatePolicy> </releases> </pluginRepository> </pluginRepositories> <!-..org/maven2</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <!-.Appendix B: Standard Conventions B..maven.Reporting Conventions --> <reporting> <outputDirectory>target/site</outputDirectory> </reporting> . </project> 285 .0</modelVersion> <name>Maven Default Project</name> <!-.Plugin Repository Conventions --> <pluginRepositories> <pluginRepository> <id>central</id> <name>Maven Plugin Repository</name> <url>> <!-.Repository Conventions --> <repositories> <repository> <id>central</id> <name>Maven Repository Switchboard</name> <layout>default</layout> <url>. Run any checks to verify the package is valid and meets quality criteria. such as a JAR. Description Validate the project is correct and all necessary information is available. Copy and process the resources into the destination directory. for use as a dependency in other projects locally. Post-process the generated files from compilation. Compile the source code of the project. Compile the test source code into the test destination directory Run tests using a suitable unit testing framework. Done in an integration or release environment. Perform actions required before integration tests are executed. Process the test source code. These tests should not require the code be packaged or deployed. for example to do byte code enhancement on Java classes. Process the source code. 286 . This may including cleaning up the environment. ready for packaging. for example to filter any values.3. Take the compiled code and package it in its distributable format. This may involve things such as setting up the required environment.Better Builds with Maven B. Copy and process the resources into the test destination directory. Generate resources for inclusion in the package. for example to filter any values. Install the package into the local repository. Generate any test source code for inclusion in compilation. Perform actions required after integration tests have been executed.. copies the final package to the remote repository for sharing with other developers and projects. Process and deploy the package if necessary into an environment where integration tests can be run. Create resources for testing. Generate any source code for inclusion in compilation. apache. Sun Developer Network .org/Deploying+to+a+running+container Cargo Plugin Configuration Options .apache. Bibliography Online Books des Rivieres.net/config.sf.org/Containers Cargo Container Deployments .org/eclipse/development/java-api-evolution. Jim.org/ Checkstyle .html Bloch.codehaus.codehaus. Effective Java.codehaus.org/axis/java/ AxisTools Reference Documentation .org/Merging+WAR+files Cargo Reference Documentation .sun. Axis Tool Plugin .codehaus.codehaus.html 287 .. Evolving Java-based APIs. Web Sites Axis Building Java Classes from WSDL Cargo Merging War Files Plugin . 2001. June 8. Joshua.codehaus. Cargo Containers Reference . J2EE Specification . Clirr .apache.html Xdoclet2 .net/ EJB Plugin Documentation .codehaus. PMD Best Practices .com.au/products/simian/ Tomcat Manager Web Application . 288 .net/bestpractices.apache.apache.0-doc/manager-howto.html XDoclet Maven Plugin .html Introduction to the Build Life Cycle – Maven .html Jester .org Maven Downloads . XDoclet Reference Documentation Mojo .maven. Clover Plugin .org/plugins/maven-clover-plugin/ DBUnit Java API .com Introduction to Archetypes .org/download.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask.html Xdoclet .org/ Simian . Cobertura . POM Reference . XDoclet EjbDocletTask .sourceforge. ibiblio.html Maven Plugins .sourceforge. Maven 2 Wiki .redhillconsulting.net/howtomakearuleset.html Ruby on Rails .org/jdiff-maven-plugin Jetty 6 Plugin Documentation .sourceforge.sf.sourceforge.html Jdiff . XDoclet2 Maven Plugin .Better Builds with Maven Checkstyle Available Checks .ibiblio.html PMD Rulesets . Edward V. 107. 103-105. 190 B Bentley. 90-99. 286 Butler. 86. 111. 63-65. 87. 59 APT format 79 archetypes creating standard project 233. 224 Continuum continuous integration with 218. 194-198 code improving quality of 202-205 restructuring 264 code restructuring to migrate to Maven 264 Codehaus Mojo project 134. 43 tests 254. 55. 129. 135. 62. 129. 61. 57. 131. Jon 37 Berard. 186. 69. 130 Changes report 182 Checkstyle report 181. 222-233 creating standard project 234. 166 collaborating with teams introduction to 207 issues facing teams 208. 220. 116. 209 setting up shared development environments 209-212 Community-oriented Real-time Engineering (CoRE) 208 compiling application sources 40. 100. 84 managing dependencies 61. 61. 144. 84 modules 56 preparing a release 236-240 project inheritance 55. 126-132 creating 55-59. Samuel 167 289 . 101-110. Christopher 25 Ant metadata syntax 279-281 migrating from 241-264 Apache Avalon project 193 Commons Collections 254 Commons Logging library 252 Geronimo project 86.Index A Alexander. 80-82. 193 Clancy. 112. 202-205 Cobertura 181. 55 bibliography 287. 245. 160. 124. 83 creating a Web site for 78. 95. 114-122. 63. 122. 269. 255 Confluence format 79 container 62. 48. 84 DayTrader 86-88. 124. 268. 134-136. 235 conventions about 26 default 29 default build life cycle 286 Maven’s super POM 285 naming 56 single primary output per project 27 standard directory layout for projects 27 standard directory structure 284 standard naming conventions 28 copy/paste detection report 190 CPD report 181. 76. 165. 77. 215. 23. 97 HTTPd 212 Maven project 134. 234 definition of 39 artifactId 29. 163. 191. 166 Software Foundation 22. 50-52 preventing filtering of resources 52 testing 35 clean life cycle 270 Clirr report 182. 213. Tom 207 classpath adding resources for unit tests 48 filtering resources 49-51 handling resources 46. 41 main Spring source 250-254 test sources 42. 59. 288 binding 134. 112. 117. 84. 112. 90 deploying 55. 152. 252 ASF 23 aspectj/src directory 243 aspectj/test directory 243 C Cargo 103-105. 114. 217 Tomcat 212 application building J2EE 85-88. 74. 184. 84 Separation of Concerns (SoC) 56 setting up the directory structure 56. 271 build life cycle 30. 41. 87 building a Web module 105-108 organizing the directory structure 87. 91-99. 182. 77 to the file system 74 with an external SSH 76 with FTP 77 with SFTP 75 with SSH2 75 development environment 209-212 directories aspectj/src 243 aspectj/test 243 m2 244. 245 mock 243 my-app 39 src 40. 184. Albert EJB building a project canonical directory structure for deploying plugin documentation Xdoclet external SSH 21 95-99 95 103-105 99 100.lang. 124. 252 H Hansson. 90 Quote Streamer 87 default build life cycle 41. 185 JDK 248 290 .Object 29 mojo metadata 278-281 Spring Framework 242-246. David Heinemeier hibernate3 test 26 248 I IBM improving quality of code internal repository 86 202. 243. 88.xml 39. 127. 124 deploying applications methods of 74. 69.Better Builds with Maven D DayTrader architecture 86. 245. 245 standard structure 284 test 243. 47. 101 76 F Feynman. 286 conventions 29 location of local repository 44 naming conventions 56 pom. 76. 204. 101-110. 245 tiger 258. 112. 261 tiger/src 243 tiger/test 243 directory structures building a Web services client project 94 flat 88 nested 89 DocBook format 79 DocBook Simple format 79 E Einstein. 129-132 Java description 30 java. 125 Geronimo specifications JAR 107 testing applications 126. 114122. Richard filtering classpath resources preventing on classpath resources FindBugs report FML format FTP 133 49-51 52 194 79 77 G groupId 29. 126-132 deploying applications 122. 124. 49-51 structures 24 dependencies determining versions for 253 locating dependency artifacts 34 maintaining 199-201 organization of 31 relationship between Spring modules 243 resolving conflicts 65-68 specifying snapshot versions for 64 using version ranges to resolve conflicts 65-68 Dependency Convergence report 181 Deployer tool 122. 34. 205 212 J J2EE building applications 85-88. 250 url 30 Java EE 86 Javadoc class-level annotations 278 field-level annotations 279 report 181. 259. 286 developing custom 135. 48-51. 166 artifact guideline 87 build life cycle 30. 156-163 basic development 141. Helen 85 L life cycle default for jar packaging local repository default location of installing to requirement for Maven storing artifacts in locating dependency artifacts 266. 142. 245 Maven Apache Maven project 134. 245 135 134-136 137. 138 291 . 277-281 phase binding 134-136 requiring dependency resolution 155 writing Ant mojos to send e-mail 149-152 my-app directory 39 K Keller. 40 default build life cycle 69. 146-148. 144-163. 223-240 compiling application sources 40. 142. 41 configuration of reports 171-174 creating your first project 39. 136 developing custom plugins 133-140. 154. 32-35 26 P packaging parameter injection phase binding plugin descriptor 30. 53 groupId 34 integrating with Cobertura 194-198 JDK requirement 248 life-cycle phases 266. 45 32 35 35 M m2 directory 244. 267 migrating to 241-254 naming conventions 56 origins of 23 plugin descriptor 137 plugin descriptor 138 preparing to use 38 Repository Manager (MRM) 213 standard conventions 283-286 super POM 285 using Ant tasks from inside 261. 41 collaborating with 207-221.Index Jester JSP JXR report 198 105 181-183 McIlroy. 267 136 44 44. 150-152 capturing information with Java 141-147 definition of 134 implementation language 140 parameter expressions 272-275.. 79 getting started with 37-46.. 30. 165 documentation formats for Web sites 78. 144. 250 40. 181 separating from user documentation 174-179 standard project information reports 81 292 V version version ranges 30. 193 Clirr 182. 202-205 configuration of 171-174 copy/paste detection 190 CPD 181. 184 Dependency Convergence 181 FindBugs 194 Javadoc 181. 194. 237-239. 186-188. 190 selecting 180. 101. 135 using 53. 177179. 134 developer resources 265-274. 106-109. 183-185. 284 preparing to use Maven 38 profiles 55. 188-191. 170. 184. 165 development tools 138-140 framework for 135. 171. 124. 229. 103. 65-68 W Web development building a Web services client project 91-93. 172-174. 245 75 169. 221. 255 248 243 194-198 256 243 243 79 Q Quote Streamer 87 R releasing projects 236-240 reports adding to project Web site 169-171 Changes 182 Checkstyle 181. 206. 170. 72-74 project assessing health of 167-180. 84 monitoring overall health of 206 project management framework 22 Project Object Model 22 Surefire 169. 126. 66. 285 tiger 260 pom. 88-90. 194 repository creating a shared 212-214 internal 212 local 32. 117 improving productivity 108. 144-153. 192. 191. 142. 193-206 inheritance 55. 204. 63. 92. 115. 181. 198 Tag List 181. 190 POM 22. 185 JavaNCSS 194 JXR 181-183 PMD 181. 203. 182-184. 134. 230. 110-112. 117-122 deploying Web applications 114. 245 42. 196. 252 55. 35 manager 213 types of 32 restructuring code 264 Ruby on Rails (ROR) 288 running tests 256 S SCM SFTP site descriptor site life cycle snapshot Spring Framework src directory SSH2 Surefire report 35. 174. 230. 61. 64. 242-246. 187-189. 193. 137-140. 96. 186. 172-174. 129. 127. 236-239 75 81 271 55. 226. 215. 186. 43 254. 190 creating source code reference 182. 194 243. 197. 39. 235. 155-163. 215 creating an organization 215-217 creating files 250 key elements 29 super 29.xml 29. 186-188. 156. 181. 228 35. 70. 97. 243. 197. 227. 54 PMD report 181. 278-281 developing custom 133. 198 T Tag List report test directory testing sources tests compiling hibernate3 JUnit monitoring running tiger/src directory tiger/test directory Twiki format 181. 197. 211. 194. 169. 34. 245. 186. 214. 212. 136 Plugin Matrix 134 terminology 134. 101 102 . 181. 276. 114 X XDOC format Xdoclet XDoclet2 79 100. 59. 123. 182.Better Builds with Maven plugins definition of 28. 234. 193. 173. 194. 186. 113-117. 68. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue listening from where you left off, or restart the preview.
https://pt.scribd.com/doc/37780726/BetterBuildsWithMaven-1-0-2
CC-MAIN-2016-30
refinedweb
73,066
52.36
Introduction: Monitoring Acceleration Using Raspberry Pi and AIS328DQTR Using Python Acceleration is finite, I think according to some laws of Physics.- Terry Riley A cheetah utilizes amazing acceleration and quick changes in speed when chasing. The speediest creature ashore once in a while utilizes its top pace to catch prey. The creatures get this speeding up by applying almost five times more power than that of Usain Bolt amid his record-breaking 100m run. In the present time, individuals can't envision their existence without innovation. Surrounding us different innovations are assisting people to carry on with their existence with more extravagance. Raspberry Pi, the mini, single board Linux PC, gives a cheap and respectable base for electronics endeavors and cutting-edge advancements like IoT, Smart Cities, and School Education. As computer and gadgets fans , we've been taking in a considerable measure with the Raspberry Pi and chose to mix our interests. So what are the possible results that what we can do if we have a Raspberry Pi and a 3-axis Accelerometer close by ? In this task, we will incorporate AIS328DQTR, a digital 3-axis MEMS linear accelerometer sensor, to measure acceleration in 3 directions, X, Y, and Z, with the Raspberry Pi using Python. That's worth looking into. Step 1: Hardware We Require The issues were less for us since we have a huge measure of stuff lying around to work from. In any case, we know how it's troublesome for others to put away the right part in perfect time from the strong spot and that is protected paying little notice to every penny. So we would help you.DCUBE Store 3. 3-Axis accelerometer, AIS328DQTR Belonging to the STMicroelectronics motion sensors, the AIS328DQTR is an ultra-low-power high-performance 3-axis linear accelerometer with a digital serial interface SPI standard output. We acquired this sensor from DCUBE Store 4. Connecting Cable We acquired the I2C Connecting cable from DCUBE Store 5. Micro USB cable The humblest bewildered, yet most stringent to the degree power need is the Raspberry Pi! The most straightforward way to deal with the game plan is by the use of the Micro USB cable.GPIO pins or USB ports can in like manner be used to give ample power supply. 6. Web Access is a Need Get your Raspberry Pi associated with an Ethernet (LAN) cable and interface it to your especially to a Monitor or TV with an HDMI cable. Elective, you can utilize SSH to bring up with your Raspberry Pi from a Linux PC or Macintosh from the terminal. Also,PuTTY, a free and open-source terminal emulator sounds like a not all that bad choice. Step 2: Connecting the Hardware Make the circuit as indicated by the schematic showed up. In the graph, you will see the various parts, power fragments, and I2C sensor. Raspberry Pi and I2C Shield Connection Most importantly else, take the Raspberry Pi and spot the I2C Shield on it. Press the Shield carefully over the GPIO pins of Pi and we are done with this step as straightforward encourage view the Python Code for the Raspberry Pi and AIS328DQTR Sensor. # AIS328DQTR # This code is designed to work with the AIS328DQTR_I2CS I2C Mini Module available from dcubestore.com # import smbus import time # Get I2C bus bus = smbus.SMBus(1) # AIS328DQTR address, 0x18(24) # Select control register1, 0x20(32) # 0x27(39) Power ON mode, Data rate selection = 50Hz # X, Y, Z-Axis enabled bus.write_byte_data(0x18, 0x20, 0x27) # AIS328DQTR address, 0x18(24) # Select control register4, 0x23(35) # 0x30(48) Continuous update, Full-scale selection = +/-8G bus.write_byte_data(0x18, 0x23, 0x30) time.sleep(0.5) # AIS328DQTR address, 0x18(24) # Read data back from 0x28(40), 2 bytes # X-Axis LSB, X-Axis MSB data0 = bus.read_byte_data(0x18, 0x28) data1 = bus.read_byte_data(0x18, 0x29) # Convert the data xAccl = data1 * 256 + data0 if xAccl > 32767 : xAccl -= 65536 # AIS328DQTR address, 0x18(24) # Read data back from 0x2A(42), 2 bytes # Y-Axis LSB, Y-Axis MSB data0 = bus.read_byte_data(0x18, 0x2A) data1 = bus.read_byte_data(0x18, 0x2B) # Convert the data yAccl = data1 * 256 + data0 if yAccl > 32767 : yAccl -= 65536 # AIS328DQTR address, 0x18(24) # Read data back from 0x2C(44), 2 bytes # Z-Axis LSB, Z-Axis MSB data0 = bus.read_byte_data(0x18, 0x2C) data1 = bus.read_byte_data(0x18, 0x2D) # Convert the data zAccl = data1 * 256 + data0 if zAccl > 32767 : zAccl -= 65536 # Output data to screen print "Acceleration in X-Axis : %d" %xAccl print "Acceleration in Y-Axis : %d" %yAccl print "Acceleration in Z-Axis : %d" %zAccl Step 4: The Practicality of the Code Download (or git pull) the code from Github and open it in the Raspberry Pi. Run the commands to Compile and Upload the code in the terminal and see the yield on Screen. Taking following several minutes, it will exhibit every one of the parameters. In the wake of guaranteeing that everything works effortlessly, you can use this venture every day or make this venture a little part of a much greater assignment. Whatever your needs you now have one more contraption in your accumulation. Step 5: Applications and Features Manufactured by STMicroelectronics, ultra compact low-power high performance 3 axes linear accelerometer belonging to the motion sensors. The AIS328DQTR is appropriate for application such as Telematics and Black Boxes, In-Dash Car Navigation, Tilt / Inclination Measurement, Anti-Theft Device, Intelligent Power Saving, Impact Recognition and Logging, Vibration Monitoring and Compensation and Motion-Activated Functions. Step 6: Conclusion If you've been contemplating to explore the universe of the Raspberry Pi and I2C sensors, then you can shock yourself by making used of the hardware basics, coding, arranging, authoritative, etc. Behavior Tracker Prototype to monitor and depict the physical movements and body stances of animals with AIS328DQTR and Raspberry Pi using Python. In the above task, we have utilized fundamental computations of an accelerometer. The protocol is to create a system of accelerometer along with any Gyrometer and a GPS, and a supervised(machine) learning algorithm(support vector machine (SVM)) for automated behavior identification of animals. This to be followed by the collection of parallel sensor measurements and evaluation of the measurements by using support vector machine (SVM) classification. Use different combinations of independent measurements(sitting, walking or running) for training and validation to determine of robustness of the prototype. We will attempt to make a working rendition of this prototype sooner rather than later, the configuration, the code, and modeling works for more behavioral modes. We believe all of you like it! For your comfort, we have a charming video on YouTube which may help your examination. Trust this endeavor motivates further exploration. Start where you are. Use what you have. Do what you can. Be the First to Share Recommendations
https://www.instructables.com/Monitoring-Acceleration-Using-Raspberry-Pi-and-AIS/
CC-MAIN-2021-39
refinedweb
1,128
53.61
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also #include <port.h> int port_alert(int port, int flags, int events, void *user); The port_alert() function transitions a port into or out of alert mode. A port in alert mode immediately awakens all threads blocked in port_get(3C) or port_getn(3C). These threads return with an alert notification that consists of a single port_event_t structure with the source PORT_SOURCE_ALERT. Subsequent threads trying to retrieve events from a port that is in alert mode will return immediately with the alert notification. A port is transitioned into alert mode by calling the port_alert() function with a non-zero events parameter. The specified events and user parameters will be made available in the portev_events and the portev_user members of the alert notification, respectively. The flags argument determines the mode of operation of the alert mode: If flags is set to PORT_ALERT_SET, port_alert() sets the port in alert mode independent of the current state of the port. The portev_events and portev_user members are set or updated accordingly. If flags is set to PORT_ALERT_UPDATE and the port is not in alert mode, port_alert() transitions the port into alert mode. The portev_events and portev_user members are set accordingly. If flags is set to PORT_ALERT_UPDATE and the port is already in alert mode, port_alert() returns with an error value of EBUSY. PORT_ALERT_SET and PORT_ALERT_UPDATE are mutually exclusive. A port is transitioned out of alert mode by calling the port_alert() function with a zero events parameter. Events can be queued to a port that is in alert mode, but they will not be retrievable until the port is transitioned out of alert mode. Upon succesful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. The port_alert() function will fail if: The port identifier is not valid. The port argument is not an event port file descriptor. The port is already in alert mode. Mutually exclusive flags are set. See attributes(5) for descriptions of the following attributes: port_associate(3C), port_create(3C), port_get(3C), port_send(3C), attributes(5) Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i0999b/index.html
CC-MAIN-2015-22
refinedweb
354
64.1
I just started using subversion, had some minor problems getting it built and installed, and I think it's working now. But I'm confused about some details of svn import, export, and the trunk. I'll describe that at the end of the message, but first I'll explain what I've done so far. My appologies if this is too wordy. I just set up Subversion on a server and a client machine both running Red Hat 9. I didn't want to upgrade the stock Apache 2.0.40, mainly due to the huge cascade of other stuff I'd also have to upgrade. So on the client machine, which does not have Apache installed, I unpacked the tarball of 0.26.0, configured it with --enable-all-static, and built it. The links failed due to the lack of "-lpthread" in the link options, so I manually added that to the LIBS variable in the Makefile. (I saw that some people have claimed that doing a second configure fixes this, but it didn't for me.) I tried to do an "svnadmin create" using the resulting binary, and got errors from db4 (Red Hat db4-4.0.14-20) about an unimplemented function. I thought 4.0.14 was supported? I downloaded db-4.1.25.tar.gz and patch.4.1.25.1 from Sleepycat, unpacked it in the subversion directory and patched it, did a make clean, a fresh configure, and built it again. This time I was able to do an "svnadmin create" successfully. I copied the svn, svnadmin, svndumpfilter, svnlook, svnserve, and svnversion binaries to /usr/local/bin on the server, created a user for subversion, and set up xinetd and sshd appropriately. I created a /home/svn directory as a parent for my repositories, and within that directory did an "svnadmin create tsbutils" to set up a repository for my tsbutils project. On the client, from the directory above my tsbutils source directory, I tried to import my code based on an example in the book: svn import svn+ssh://svn.example.com/tsbutils/ tsbutils This resulted in an error. Apparently the book is incorrect, or the syntax has changed since it was written, and the URL comes after the local path rather than before. I tried: svn import tsbutils svn+ssh://svn.example.com/tsbutils/ and it seemed to work OK. Next I wanted to check out the code from Subversion into a new directory. Based on the examples in the book, I tried: svn checkout svn+ssh://svn.example.com/tsbutils/trunk tsbutils but this gave the error: svn: Bad URL passed to RA layer svn: Source URL doesn't exist: svn+ssh://svn.brouhaha.com/tsbutils/trunk. After some experimentation, I found that: svn checkout svn+ssh://svn.example.com/tsbutils tsbutils did in fact check out my sources as expected. So my question is, why didn't it work with /trunk at the end? Was I supposed to do something to manually create trunk? Or was I supposed to import to .../tsbutils/trunk/? Thanks, Eric Smith --------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected] Received on Sat Aug 2 03:46:38 2003 This is an archived mail posted to the Subversion Users mailing list.
https://svn.haxx.se/users/archive-2003-08/0038.shtml
CC-MAIN-2019-04
refinedweb
559
66.64
AWS DevOps Blog to take advantage of the large selection of community cookbooks or want to build and customize their own cookbooks. You can use the latest release of Chef 12 to support Linux-based stacks currently running Chef Client 12.5.1 (For those of you concerned about future Chef Client upgrades, be assured that new versions of the Chef 12.x Client will be made available shortly after their public release). OpsWorks now also prevents cookbook namespace conflicts by using two separate Chef runs (OpsWorks’s Chef run and yours run independently). Use Chef Supermarket Cookbooks Because this release focuses on providing you with full control and flexibility when using your own cookbooks, built-in layers and cookbooks will no longer be available for Chef 12 (PHP, Rails, Node.JS, MySQL, etc.,). Instead, Chef 12 users can use OpsWorks to leverage up-to-date community cookbooks to support the creation of custom layers. A Chef 12 Node.js sample stack (on Windows and Linux) is now available in the OpsWorks console. We’ll provide additional examples in the future. "With the availability of the Chef 12 Linux client, AWS OpsWorks customers can now leverage shared Chef Supermarket cookbooks for both Windows and Linux workloads. This means our joint customers can maximize the full potential of the vibrant open source Chef Community across the entire stack." – Ken Cheney, Vice President of Business Development, Chef Chef 11.10 and earlier versions for Linux will continue to support built-in layers. The built-in cookbooks will continue to be available at. Beginning in January 2016, you will no longer be able to create Chef 11.4 stacks using the OpsWorks console. Existing Chef 11.4 stacks will continue to operate normally, and you will continue to be able to create stacks with Chef 11.4 by using the API. Use Chef Search With Chef 12 Linux, you can use Chef search, which is the native Chef way to obtain information about stacks, layers, instances, and stack resources, such as Elastic Load Balancing load balancers and RDS DB instances. The following examples show how to use Chef search to get information and to perform common tasks. A complete reference of available search indices is available in our documentation. Use Chef search to retrieve the stack’s state: search(:node, “name:web1”) search(:node, “name:web*”) Map OpsWorks layers as Chef roles: appserver = search(:node, "role:my-app").first Chef::Log.info(”Private IP: #{appserver[:private_ip]}") Use Chef search to retrieve hostnames, IP addresses, instance types, Amazon Machine Images (AMIs), Availability Zones (AZs), and more: search(:aws_opsworks_app, "name:myapp") search(:aws_opsworks_app, ”deploy:true") search(:aws_opsworks_layer, "name:my_layer*") search(:aws_opsworks_rds_db_instance) search(:aws_opsworks_volume) search(:aws_opsworks_ecs_cluster) search(:aws_opsworks_elastic_load_balancer) search(:aws_opsworks_user) Use Chef search for ad-hoc resource discovery, for example, to find the database connection information for your applications or to discover all available app server instances when configuring a load-balancer. Explore a Chef 12 Linux or Chef 12.2 Windows Stack To explore a Chef 12 Linux or Chef 12.2 Windows stack, simply select the “Sample stack” option in the OpsWorks console: To create a Chef 12 stack based on your Chef cookbooks, choose Linux as the Default operating system: Use any Chef 12 open source community cookbook from any source, or create your own cookbooks. OpsWorks’s built-in operational tools continue to empower you to manage your day-to-day operations.
https://aws.amazon.com/blogs/devops/aws-opsworks-now-supports-chef-12-for-linux/
CC-MAIN-2018-13
refinedweb
570
53.81
in reply to short sorts The options could be simplified by mimicking the regex flags e.g. assuming case sensitivity as the default i.e. 'i' meaning case insensitive and no 'i' meaning case sensitive. And also leaving out the hyphens. It could be expanded by proving alternate equivalent options so you don't have to remember the order of them e.g. 'idr', 'rid' and 'dir' would all mean insensitive + descending + reversed. #! }, ); given ($type) { when ([qw[dr rd]]) { $sorter{$type} = sub { reverse($b) cmp re +verse($a) }; } when ([qw[ar ra]]) { $sorter{$type} = sub { reverse($a) cmp re +verse($b) }; } when ([qw[ai ia]]) { $sorter{$type} = sub { uc $a cmp uc $b }; + } when ([qw[di id]]) { $sorter{$type} = sub { uc $b cmp uc $a }; + } when ([qw[an na]]) { $sorter{$type} = sub { $a <=> $b }; } when ([qw[nd dn]]) { $sorter{$type} = sub { $b <=> $a }; } when ([qw[la al]]) { $sorter{$type} = sub { length($a) <=> len +gth($b) }; } when ([qw[ld dl]]) { $sorter{$type} = sub { length($b) <=> len +gth($a) }; } when ([qw[air ari iar ira rai ria]]) { $sorter{$type} = sub { uc reverse($a) cmp uc reverse($b) } +; } when ([qw[dir dri idr ird rdi rid]]) { $sorter{$type} = sub { uc reverse($b) cmp uc reverse($a) } +; } } if ($type) { return $sorter{$type} or die 'AAARGH!!'; } else { return (shuffle values %sorter)[0]; } } my @unsorted = qw(red lilac yelloW green cyan blue magenta); my $criteria = short_sorter('dir'); my @sorted = sort $criteria @unsorted; print "$_\n" for @sorted; __DATA__ yelloW green cyan blue red lilac magenta [download] #! }, ai => sub { uc $a cmp uc $b }, di => sub { uc $b cmp uc $a }, an => sub { $a <=> $b }, dn => sub { $b <=> $a }, al => sub { length($a) <=> length($b) }, dl => sub { length($b) <=> length($a) }, ar => sub { reverse($a) cmp reverse($b) }, dr => sub { reverse($b) cmp reverse($a) }, air => sub { uc reverse($a) cmp uc reverse($b) }, dir => sub { uc reverse($b) cmp uc reverse($a) }, ); given ($type) { when ('dr') { $sorter{$type} = $sorter{dr} } when ('ra') { $sorter{$type} = $sorter{ar} } when ('ia') { $sorter{$type} = $sorter{ai} } when ('id') { $sorter{$type} = $sorter{di} } when ('na') { $sorter{$type} = $sorter{an} } when ('nd') { $sorter{$type} = $sorter{dn} } when ('la') { $sorter{$type} = $sorter{al} } when ('ld') { $sorter{$type} = $sorter{dl} } when ([qw[ari iar ira rai ria]]) { $sorter{$type} = $sorter{air}; } when ([qw[dri idr ird rdi rid]]) { $sorter{$type} = $sorter{dir}; } } if ($type) { return $sorter{$type} or die "Unknown options: $type"; } else { return (shuffle values %sorter)[0]; } } my @unsorted = qw(red lilac yelloW green cyan blue magenta); my $criteria = short_sorter('dri'); my @sorted = sort $criteria @unsorted; print "$_\n" for @sorted; [download] Thank you for pointing out that the options should not be so rigid Arundear. I changed the dispatch table to include the option variations, even though I did not use switch like you did. I also kept case sensitive in because I like it. (I know that is not the best reason, but it is why case sensitivity is still there.) sr +a asr ars rsa ras); $sorts{$_} = sub { reverse($_[1]) cmp reverse($_[0]) } for qw(sdr sr +d { # A random sort was here, however, it was not acting as expected, +so the die is being used. die "A sort type was not selected."; } } [download] I'm still passing $a and $b since there are times where I am sorting hashes by a key within a hash of hashes. (sort { short_sorts($master_list{$a}{members},$master_list{$b}{members},'nd' +) || short_sorts($a,$b,'ia') } keys %master_list) [download] I am not sure how I am going to combine those just yet.].
http://www.perlmonks.org/index.pl?node_id=962629
CC-MAIN-2016-36
refinedweb
590
57.13
Trevor. Some time back, Kenzal was brought on as a senior developer to work on the e-commerce system. As he spelunked his way through the system, Kenzal would find some piece of puzzling code and ask Trevor what he was going for, or why he did it that way. Trevor would invariably respond: I had my reasons. Kenzal encounterd this particular snippet in the "critical logic" in the batch creation process, around 7,500 lines into in the 10K+ LOC invoice manger file, somewhere after running the query and checking for results: <?php $m = $SYSTEM->getValue('FULFILLMENT_CART_CONFIG'); if ($m == '') $m = 'LLLLSSSSSSSSLLLLLLLL'; $m = strtoupper($m); $t = $this->db->getDataset(); $n = sizeof($t); $sp = 0; $lp = $n - 1; $info = array(); for ($i=0; $i<$n; $i++) { $info[$i] = array(); if (substr($m,$i,1) == 'L') { foreach ($t[$lp] as $k => $v) $info[$i][$k] = $v; --$lp; } else { foreach ($t[$sp] as $k => $v) $info[$i][$k] = $v; ++$sp; } } return (array(0,$info)); ?> Rather than just simply returning the result set, Trevor decided that the results needed to be reordered according to the value of some random string, manually popping and de-queuing the values in the array. When queried as to why he would write something like that, Trevor replied with his usual: I had my reasons. Both Trevor and his code have since been replaced. When Trevor was asked to leave, he was told (among other things) that they had their reasons. All of the above code has since been replaced with: <?php `return (array(0,$this->db->getDataset()));` ?>
http://thedailywtf.com/articles/I-Had-My-Reasons
CC-MAIN-2018-05
refinedweb
259
66.98
Editor's note: Kathy Sierra and Bert Bates, the brains behind O'Reilly's Head First series, first wrote this article in 2003 at the time of the release of Head First Java. It was hugely popular and we are still receiving requests for more like it, so much so that we decided to bring it back again now that Head First Java, 2nd Edition has released. So, if you didn't catch this piece the first time around, let Kathy and Bert show you how to hold your own in conversation with Java geeks. It's all so predictable. There you are at a dinner party, sipping a second martini, when the conversation turns, inevitably, to distributed programming. What to do? Relax, because here we'll tackle three of the most interesting distributed technologies, complete with all the napkin-ready drawings you can use to up your whuffie. (Don't know about whuffie either? Read Richard Koman's interview with Cory Doctorow.) But first, in case you're actually at a party right now, we'll start with a few quick phrases that you can use even if you don't know Java. After that, for those who do know some Java, we'll dig a little deeper. "What's dynamic discovery? It just means that clients and services find each other, you know, dynamically, over the network, without any prior knowledge of one another. It's all based on IP multicast." "What's an automatic self-healing network? It means the Jini network is always reflecting the current status of all the available services -- like, 'OK, this one's up and this one's not ...' without any human administration!" Related Reading Head First Java By Kathy Sierra, Bert Bates "Of course, most routers have IP multicast disabled, so you aren't going to use Jini on the web. But Jini was designed to work for local networks, or collections of local networks, so that's not so much of a problem." "The cool things about J2EE are vendor-independence and that you get to focus on your business logic and leave all the big plumbing/heavy lifting to the server vendor. You work on the rules for your particular business, and leave the security, transactions, concurrency, persistence, and even the networking code implementations up to the server. And you have to learn only one API, and you can redeploy your J2EE apps to any vendor's J2EE-compliant server, so now the vendors have to kiss up to you instead of the way it used to be, where you were locked in and had to beg the vendor to add some new capability or fix bugs ..." "The cool thing about Web services is ... well, OK, so maybe in its current state there isn't anything really cool about Web services. But that could change. Sometime in the future. When all the standards are worked out, and the tools mature, and ..." "But if Web services were cool, it would be because you can take the business apps you already have, even legacy apps, and expose them on the Web using XML as the interface. A client sends an XML message (in a format called, cutely, SOAP) to a service in an interoperable way." "Security and transactions are the two glaring holes in Web services right now, which means everybody has to roll their own solutions. There's no big ol' transaction manager sitting out there on the Web, and the only security you've got is HTTPS for mutual authentication. "Jini and J2EE are both Java technologies. With J2EE, it's like the spec is saying, 'Oh, don't you worry your little developer head about all these big, hard, things. The vendor will take care of all those messy things so you can pay attention to your own domain-specific needs (like, how to sell more lingerie).' But with Jini, it's like the spec is saying, 'You are so out there on your own with this. Don't expect anybody to come to your rescue with a bunch of big infrastructure. This is lean and mean, baby, but you can do the most amazing and elegant things. And check out JavaSpaces while you're here.'" "Web services is not a Java-specific technology, but Java can sure make it a gazillion times easier to do, especially if you switch to J2EE 1.4. No, you're right, J2EE 1.4 isn't out yet, but it will be before the end of the year. You'll probably see some solid vendor support in early 2004." (Before we go further, some disclaimers: First, if you have any problem with the high-level nature of this article, re-read the title. We're taking huge bartender license with the content. We won't tell you anything that's not true, but you aren't getting the whole story, either. Each of these topics requires a book of its own. (And darned if we're not just about to come out with one, Head First EJB. What are the odds?) Let's just say the cocktail view of a topic probably isn't what you want driving your next architecture. (We know you know this, but for that one disturbed person who will somehow mistake this for a Serious Technical White Paper, we felt compelled to cover our a**es).) So now it's time to dive down a level. We'll start with a look at what it means to have a service, and, most importantly, how you expose yourself to others. In other words, how you tell potential clients about your service. With Jini, J2EE, and Web services, the goal is for a client to access a service. What's a service? Pretty much anything you can do as the result of a message sent from one piece of software to another (which may or may not involve human direction). You might have a service that does huge calculations for genetic matching. Or a service that plays a mean game of Go. Or a service that lets you buy concert tickets. Or books an exotic tropical cruise. Or even sends your text to a high-volume printer. In other words, a service is anything that starts as software, but the result doesn't necessarily stay in software. You might have a service that moves a video camera, prints to a printer, dials a phone. Doesn't matter. Or at least it shouldn't matter. One of the main goals, as it always with OO, is have the least amount of coupling — in this case, between the client and server. In other words, we want the players (the clients and services) to know as little about one another as possible. But let's say you've got a service ... now what? How do clients find you? And if they find you, how do they know what your service can do? In other words, how do they know the methods they can call on your service? Somehow, you have to expose yourself to clients, and it's a little different for each of the three technologies we're looking at. Regardless of whether you've talking Jini, J2EE, or Web services, though, there's always an interface in there somewhere. That interface declares what you can do. As a service developer, you have to tell folks what your service can do. In other words, you have to declare your service's methods. That includes declaring what the client can pass to your methods (the arguments) and what the service will return to the client (the return types). The interface above is for a service called Advice. It has one method, getAdvice(). You call the method and you get back a String of stunningly useful, randomly chosen advice. (We've chosen to not show the return types in our little fake UML-ish things. It was a really small napkin.) Advice getAdvice() String For Jini and EJB (Enterprise JavaBeans — the heart and soul of J2EE), that interface is usually Remote. A Remote interface means the service developer writes a Java interface that extends java.rmi.Remote. Oh yeah, the Remote methods (meaning all of the methods in the Remote interface) have to declare a java.rmi.RemoteException — a checked exception that says to the client, "Things can go horribly wrong, so be prepared." Remote java.rmi.Remote java.rmi.RemoteException For Web services, the interface is defined in a WSDL (pronounced "wizdle," rhymes with "fizdle") — a special type of XML document. So, to summarize, here's another little phrase you can drop in casual conversation: "In other words, with Jini and EJB you expose yourself with a Java interface, but with Web services, you expose yourself with an XML interface." (Actually, if you're a Java developer, you create the WSDL by first writing a Java interface, and then pushing a magic red button on your development tool that turns your Java interface into an XML WSDL, but that is, as they say, beyond the scope of cocktail party conversation.) OK, so now we've got a service, and we have an interface that exposes our service. Now what? How do clients know about the interface? How do they actually get the interface and whatever else they need to talk to your service? That depends on the type of service. And it's different for each of the three types. You'll access Jini, J2EE, and Web services each a little differently, and the rest of this article looks at some of the details and key differences. First, we'll start with a little background with Java's Remote Method Invocation (RMI). It's the backbone of most of Java's distributed technologies, and Jini and EJB depend on it. In fact, since virtually all distributed technologies work on something that is conceptually like RMI, anything you get here will help make sense of the whole distributed world, Java or non-Java. With RMI, you write your service and you make it a Remote object. It's a piece of cake to make it Remote — write a Remote interface, then create a Remote class that implements it. Extend java.rmi.Remote Declare a java.rmi.RemoteException for each method: public interface Advice extends java.rmi.Remote { String getAdvice() throws java.rmi.RemoteException; } Implement your Remote interface Write the actual business logic for the interface methods: public class AdviceImplementation implements Advice { public String getAdvice() { // monumentally important business logic here } } (OK, so code is maybe a bit much for a cocktail party, but we're all geeks here ...) Once you're got your interface and your class, you run your class through the RMI Compiler (rmic) that ships with J2SE. (If you've got javac, you've got rmic.) That process creates what you'll find in nearly all modern distributed programming models — the stub. The stub is just a helper on the client, which acts like the Remote object (the service) by implementing the Remote interface just as the service does. But really, the stub is just a little object that takes the method call, packages it up, and sends it over the wire to the real one true Remote Object. In other words, the stub knows how to phone home and ask the real service to do the real work. rmic javac The stub's methods are fake. Well, they are methods and they do have a lot of code, including all the networking and I/O stuff needed to contact the service. But they aren't the actual business methods. So for the Advice service, the stub has a getAdvice() method, but that method simply passes the client's request to the real AdviceImplementation on the server. AdviceImplementation That way, the client gets to pretend that he's making calls on the Remote object, but of course we know that can't really happen, since the Remote object (i.e., the service) is in a different JVM heap. (Yes, there is also something on the server side that accepts the Socket connection from the client and unpacks the method call, packages and ships the return values, etc., but we're not talking about that in this article. It's a more trivial process, because you don't have to worry about delivering that functionality to the client. So yes, there's a companion to the stub's functionality on the server, but no, you probably won't have to worry about it.) Socket If you're a client and you want something from a Remote service (in other words, a Remote object), you have to have the Remote service interface (at both compile time and runtime), and you have to get the stub object (runtime). It's the stub, remember, that actually knows how to send your method call to the Remote object (and give you back the return value), so you're dead without it. Chances are, you'll get both the interface and the stub .class file from the service developer. .class (There is, however, a much cooler way to get the stub class just in time, without having it prior to runtime, through a process called dynamic code downloading. It's one of the most powerful things you can do in Java, and not very hard, either. But, you can of course always do it the wimpy way and just have the developer email you the stub class file, or carry it down the hall to your cube.) In RMI, you look up the stub using the RMI Registry (which ships with J2SE). The RMI Registry is like a little white pages phone book; you look up the name and you get back the stub. Remember, when the stub object comes over, it's serialized. That means it has to be deserialized when it gets to the client, and for that to work, the stub class has to either be on the current classpath, or be findable using dynamic code downloading. That's just basic Java -- an instance can't be deserialized without its class! With RMI, you must know where the stub is located. That means you have to know the IP address and TCP port number for the RMI Registry. (Which also happens to be the server where the Remote object/service itself lives.) Actually you probably don't have to know the port number, since there's a default, but if the service deployer changes it, you'll need to know. The client does a simple one-line lookup, using a static method (java.rmi.Naming.lookup()), and then a miracle occurs and it has a freshly deserialized, live, stub. An object that knows how to phone home. java.rmi.Naming.lookup() Now that we've looked at RMI, this part'll make more sense. Jini is like extreme RMI. And we use the word extreme here in the sporting sense, not the XP way. This is where plain RMI and Jini differ most. In RMI, the client has to know a lot, and the service has to be explicitly registered (under a logical name) with the RMI Registry, and the registry must be on the same machine with the service! But with Jini, everything is just... out there. Somewhere. And nobody knows much of anything except interfaces. Here's a quote about it, which we've found especially helpful to drop at strategic moments in a conversation: "With Jini, services are always trying to discover lookup services, clients are always trying to discover lookup services, and lookup services are always trying to let services and clients know that they (the lookup services) are out there. And it's all automatic!" The basic Jini architecture uses RMI, although in Jini you can also send the client the whole service itself, rather than having the service be a 100 percent Remote object on the server. When the client asks for a reference to a service, it might get a stub to a Remote object (if the service is implemented as a Remote object), or it might get the whole darn service shipped over, on which it can make plain old local calls. In fact, with Jini, the client might even get a hybrid or "smart-proxy" (drop that phrase at the bar for extra credit) -- that is, a non-Remote Java object that contains a stub to a Remote object (like, in an instance variable). That way, the client makes local calls directly to the service, but the service itself might turn around and make Remote calls to something else. A smart-proxy can help performance on both the client and service by having some of the work done on the.
http://www.onjava.com/pub/a/onjava/2003/08/27/cocktails.html?CMP=NLC-TE9755903113
CC-MAIN-2016-22
refinedweb
2,796
70.63
Opened 4 years ago Closed 4 years ago Last modified 4 years ago #7667 closed bug (fixed) Template Haskell fails to recognize type operator/function + Description The following message is issued for a valid TH program. Main.hs:7:1: Illegal type constructor or class name: `+' When splicing a TH declaration: type instance GHC.TypeLits.+ 1 2 = 3 Failed, modules loaded: Test1. Code attached. The program is attempting to capture the name +, as used by Nat at the type level. The problem appears to be in Convert.hs -- Convert.hs okOcc :: OccName.NameSpace -> String -> Bool okOcc _ [] = False okOcc ns str@(c:_) | OccName.isVarNameSpace ns = startsVarId c || startsVarSym c | otherwise = startsConId c || startsConSym c || str == "[]" + is rejected, by okOcc, even though it is acceptable, the symbol neither starts with upper-case, or ':'. I have tried using reify to extract the *actual* name from other sources (rather than use mkNameG_tc), and it fails in the same way. Attachments (4) Change History (19) comment:1 Changed 4 years ago by comment:2 Changed 4 years ago by comment:3 Changed 4 years ago by comment:4 Changed 4 years ago by comment:5 Changed 4 years ago by comment:7 Changed 4 years ago by Applied, thanks! comment:8 Changed 4 years ago by Thanks for validating! comment:9 Changed 4 years ago by I would like to reinstate this check, having just spent several hours trying to track down a place where I had used VarE in some code where I should have used ConE. If the namespace check had been on, the error would have been very easy to spot. Instead, I just got "out of scope" errors for things that clearly were in scope. (Turned out, I had GHC looking in the wrong scope.) A smaller change than the one Simon proposed (that is, "remove the check") would be just to remove the requirement for colons at the beginning of a type symbol. Is there anything else that should be changed here? comment:10 Changed 4 years ago by OK with me OK so we could test flags etc make okOccaccept the type operator. Or alternatively we could simply omit the test, which would allow TH to generate names that the programmer could not write. In fact it currently only checks the name space; that is, checks that if the name claims to be a data constructor then it starts with an uppper case letter or colon. But it does not check the name is a legal one; it could be the data constructor C$$for example, which the programmer can't write. I'm inclined to go the whole way, and simply treat the name space in the TH name as authoritative, regardless of the string that used for the name. I'll that unless someone yells. Simon
https://ghc.haskell.org/trac/ghc/ticket/7667
CC-MAIN-2017-22
refinedweb
471
68.7
table of contents NAME¶rados - rados object storage utility SYNOPSIS¶ rados [ options ] [ command ] DESCRIPTION¶rados is a utility for interacting with a Ceph object storage cluster (RADOS), part of the Ceph distributed storage system. OPTIONS¶ - -p pool, --pool pool - Interact with the given pool. Required by most commands. - --pgid - As an alternative to --pool, --pgid also allow users to specify the PG id to which the command will be directed. With this option, certain commands like ls allow users to limit the scope of the command to the given PG. - -N namespace, --namespace namespace - Specify the rados namespace to use for the object. - -s snap, --snap snap - Read from the given pool snapshot. Valid for all pool-specific read operations. - ). - -b block_size - Set the block size for put/get/append ops and for write benchmarking. - --striper - Uses the striping API of rados rather than the default one. Available for stat, stat2, get, put, append, truncate, rm, ls and all xattr related operation - -O object_size - Set the object size for put/get ops and for write benchmarking the given pool and write to outfile. Instead of --pool if --pgid will be specified, ls will only list the objects in the given PG. -To view cluster utilization: rados df To get a list object in pool foo sent to stdout: rados -p foo ls - To get a list of objects in PG 0.6: rados --pgid 0.6
https://manpages.debian.org/testing/ceph-common/rados.8.en.html
CC-MAIN-2021-39
refinedweb
236
63.39
Introduction to Microsoft Project: Getting Started Course Length: 2 days Delivery Methods: Multiple delivery options Course Overview In this Introduction to Microsoft Project class, students will spend time getting comfortable with the Project user interface, including project views and the ribbon. They will also learn to enter, organize, and link tasks, work with resources, create basic reports, and create projects independently. The course allows time to practice fundamental basic skills essential for efficient use of this program. Our public classes use the most current version of the software, but if you’re on an earlier version, our instructor will point out any differences. For private classes, we will use the version of the software you use in your office. Course Benefits - Learn to create and manage simple projects. - Learn to enter and manage tasks. - Learn to work with a project calendar. - Learn to add and manage project resources and work with the resource sheet. - Learn to create basic reports for your - Components of a Project - Project Components - Demo and Exercise Projects Used in This Course - Demo Case Study - Exercise Case Study - Getting around Microsoft Project 2019 - Projects Class Materials Each student will receive a comprehensive set of materials, including course notes and all the class examples. Follow-on Courses Register for a Live Class $890.00 Request a Private Class Request Pricing - Private Class for your Team - Online or On-location - Customizable - Expert Instructors
https://www.webucator.com/microsoft-project-training/course/introduction-microsoft-project-getting-started/
CC-MAIN-2022-40
refinedweb
234
51.99
Statically typed languages are those in which you would need to specify the type of an object at the time when you define it. Examples of statically typed languages include C#, VB, and C++. On the contrary, in dynamically typed languages, the type of an object is determined at runtime -- only at the time when a value is assigned to the type. Python, Ruby, and JavaScript are examples of dynamically typed languages. The DLR (Dynamic Language Runtime) runs on top of the CLR (Common Language Runtime) and adds dynamism to the managed environment of .Net -- you can use it to implement dynamic features in your application. In essence, the DLR enables interoperability between statically typed and dynamically typed languages inside the context of the CLR. You can use the DLR to share libraries and objects with dynamic languages. In this article I would present an overview of the Dynamic Language Runtime environment in Microsoft .Net. You can get an open source version of the DLR from Codeplex. What is the DLR? The DLR is an outcome of Microsoft's effort to have services run on top of the CLR and provide interoperability amongst statically and dynamically typed languages. Support for the Dynamic Language Runtime environment is facilitated by the System.Dynamic namespace. The MSDN states: ." How is it helpful? The services provided by the DLR include support for a dynamic type system, a standard hosting model as well as dynamic code generation and dispatch. At a quick glance, the benefits provided by the DLR include: - Provides support for dynamic features in statically typed languages. With the DLR in place, you can create dynamically typed objects and use them together with your statically typed objects in your application. - Enables seamless porting of dynamic languages to the .Net Framework. The DLR enables you to port dynamic languages into the .Net Framework easily. To leverage the DLR features, all your dynamic language need to do have is the ability to produce expression trees and runtime helper routines. - Facilitates sharing of libraries and objects. The DLR enables you to create objects and libraries in one language to be accessed from another language. - Provides support for dynamic method dispatch and invocation. The DLR provides support for dynamic method invocation and dispatch using advanced polymorphic caching. The Dynamic Language Runtime Subsystem The DLR subsystem is basically comprised of the three layers. These include the following: - Expression trees -- the DLR makes use of expression trees to represent language semantics. - Call site caching -- method calls using dynamic objects are cached in the memory so that the DLR can use the cache history for subsequent calls to the same method for faster dispatch. - Dynamic object interoperability -- the DLR enables interoperability between statically and dynamically typed languages. The DLR includes a collection of types -- classes and interfaces in the System.Dynamic namespace. You can leverage the IDynamicMetaObjectProvider interface and the DynamicMetaObject, DynamicObject, and ExpandoObject classes to create dynamic frameworks. Language Binders The language binders in the DLR help it to talk to other languages. So, for each dynamic language you would typically have a binder that can interact with it. As an example the following are the commonly used binders in the DLR. - .Net Binder -- this is used to talk to .Net objects - JavaScript Binder -- this is used to talk to objects created in JavaScript objects - IronRuby Binder -- enables the DLR to talk to IronRuby objects - IronPython Binder -- helps the DLR to talk to IronPython objects - COM Binder -- this helps the DLR to talk to COM objects The "dynamic" keyword You can take advantage of the dynamic keyword to access a dynamic object. The dynamic keyword was first introduced in .Net Framework 4. It enables your application to interoperate with dynamic types. So, you can use the dynamic keyword to access a COM object or an object created in dynamic languages like, Python, Ruby or JavaScript. Here's is a code snippet that illustrates how the dynamic keyword can be used. using System.Dynamic; dynamic excelObj = System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application"); We no longer need to use reflection to access COM objects – your code is much clean without the reflection code that you would otherwise have had to write sans the dynamic keyword. Suggested readings This article is published as part of the IDG Contributor Network. Want to Join?
http://www.infoworld.com/article/2889320/microsoft-net/exploring-the-dynamic-language-runtime-in-net.html
CC-MAIN-2017-22
refinedweb
721
55.84
Save your program in a file called proj.bf in folders called proj1 and proj2 respectively for phases 1 and 2 of your project. I will test your code in the same environment as the lab machines in MI 302, using the command /usr/bin/beef proj.bf For phase 2 of your project, you will write a compiler from a highly simplified subset of the C programming language to brainfuck code. You will not submit brainfuck code for this part, but rather a parser and scanner generator using flex and bison, in two files called proj.lpp and proj.ypp. I will compile your code to the executable compiler proj as follows: flex -o proj.yy.cpp proj.lpp bison -d proj.ypp g++ -o proj proj.tab.cpp proj.yy.cpp Then, if test.c contains a program in the C subset described below, I will test your compiler as follows: ./proj < test.c > test.bf /usr/bin/beef test.bf Of course, the result of the second command should be the same as if I had compiled the test.c program using gcc and run the resulting executable. Now don't worry, the subset of C that we are going to compile from is going to be very restricted. Specifically, our C subset has the following properties: #character is ignored by your compiler. int main() {function. There must be no function prototypes or global variables. This main function must end with a return 0;statement, which is ignored by your compiler. main. int, but may never store any integer larger than 127. int x,y;is not allowed. int x = 5;is OK, but int x = 5 + 2is not. VAR = ANY OP ANY;. Here ANYis either a variable name or an integer in the range 0-127, and VARmust be a variable name. OPcan be any of the following: +, -, >, <, ==. Addition and subtraction work as usual, and the comparison operators set the variable to 1 if the statement is true, and to 0 if it is false. VAR = getchar();, and an output statement has the form putchar(ANY);. I suggest you write your program in the following steps. Of course you are free to develop however you wish. As always, you are encouraged to submit every time you get some small step of the program working. OPfor all the operators, but I advise against this. Ultimately your parser is going to have to be writing brainfuck code, and this code will be very different for something like 5 + 2compared to 5 == 2. So it might make your job easier to just have every operator be a different token. So for instance, given the following C program: #include <stdio.h> int main() { int x = 5; int y; y = getchar(); x = x + y; putchar(x); return 0; }your program might produce (written to standard out) the brainfuck program +++++ > , <[->>+>+<<<] >>>[-<<<+>>>] <<[->+>+<<] >>[-<<+>>] <<<[-] >>[-<<+>>] <<. Of course your actual program might differ from this one. As long as they behave identically, it's fine.
https://www.usna.edu/Users/cs/roche/courses/f11si413/project/brainfuck.php.html
CC-MAIN-2018-22
refinedweb
497
75.61
If you are reading this blog there may be two reasons. First, you are the programmer and second, you want to be a better programmer. So here we go, Even bad code can function, But if the code isn’t clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of the poorly written code. But it Doesn’t have to be that way. So there might be a question in your head what is clean code? The answer to that is your logic must be straight forward to make it hard for any bug to hide and which can be read and enhanced by a developer other than its original author. clean code always looks like written by someone who cares. So there are some rules which we will discuss: 1)Meaningful names It is easy to say that a name should be relevant intent. Choosing good names to take time but save more than it takes. So take care of your name and change them to better ones. Everyone who read your code will be happier including you. which can improve consistency, clarity and code integration . The name of the function or class should answer all big question. Suppose your variable name is val x = 10 // This variable name reveals nothing The name “x” doesn’t reveal anything. Use Intention revealing the name We should choose a name that specifies what is being measured and the unit of that measurement. val elapsedTimeinDays // This reveals what is being measured and unit of measurement Programmers must avoid leaving false clues that obscure the meaning of the code. We should avoid words whose entrenched meanings vary from our intended meaning. For example, hp, aix, and sco would be a poor variable name and avoid misleading names like which may create confusion def getAccount () def getAccounts() def getAccountInfo() Use pronounceable name because a name like “genymdhms” means (generate date,year,months,day,hours,minutes,seconds) so we can’t walk around and say “gen why emm dee aich emm ess” which is very hard to discuss on which can be write As “generateTimeStamp” would be a better choice. You also don’t need to prefix member variables with m_ anymore. Your classes and functions should be small enough that you don’t need them. And you should be using an editing environment that highlights or colorizes members to make them distinct. prefixes become unseen clutter and a marker of an older code class Part { var m_dsc = "manager"; def setName(name:String){ m_dsc = name; } } class Part{ var description:String = "Manager"; def setDescription(description:String) { this.description = description; } } Class and object should have a noun or noun phases name like Customer, wiki page, Account etc… avoid a name like Manager, Data or Info. Class name should not be a verb. Methods should have verb or verb phrase names like post payment, deletePage, or save. Accessors, mutators, and predicates should be named for their value and prefixed with getting, set. Pick one word for one abstract concept and stick with it. For instance, it’s confusing to have fetched, retrieve, and get as equivalent methods of different classes Example of good coding practice for the method shown below def genymdhms(t: Any): Timestamp = { val d1 = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss.SSS") val d2 = date1.parse(token_exp.toString) new Timestamp(parsed2.getTime) } Clean code Would be def getCalendarTimeStamp(token_exp: Any): Timestamp = { val dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss.SSS") val parsedDate = dateFormat.parse(token_exp.toString) new Timestamp(parsedDate.getTime) } 2)Functions “Functions should DO ONLY ONE THING They should Do It Well They should Do IT only”. The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. So this means that your function should not be large enough to hold nested structures. Therefore, the indent level of a function should not be greater than one or two. This technique makes it easier to read, understand and digest. Your function can take the minimum number of argument that. def registerUser(name: String, password: String, email: String,address:String,zip:Long): String = { implicit val session = AutoSession dBConnection.createConnectiontoDB() val token = UUID.randomUUID().toString import java.util.Calendar val calendar = Calendar.getInstance val token_gen = new Timestamp(calendar.getTime.getTime) calendar.add(Calendar.MINUTE, 30) val token_exp = new Timestamp(calendar.getTime.getTime) withSQL { insert.into(UserData).values(name, password, email, address,zip,token, token_gen, token_exp) }.update().apply() token } Which can be written as case class User(name: String, password: String, email: String,address:String,zip:Long) def registerUser(user:User): String // 3)TDD means “Test Driven Development” The primary goal of TDD is to make the code clearer, simple and bug-free. In TDD approach, first, the test is developed which specifies and validates what the code will do. How TDD work is?? - Write a test - Make it run. - Change the code to make it right i.e. Refactor. - Repeat process By now everyone knows that TDD asks us to write unit tests first before we write production code. But that rule is just the tip of the iceberg. Consider the following three laws. And Also that there should be a single concept per test and only one assertion per test which is said to be good practice example:- "document is empty" should { "not be able to convert a document into an entity" in { val result = UserDataDao.documentToEntity(Document()) assert(result.isFailure) } } References - clean code by robert c. martin
https://blog.knoldus.com/coding-best-practices-to-follow-with-scala/
CC-MAIN-2021-04
refinedweb
929
65.42
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python. from datetime import datetime import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline player = 'Roger Federer' filename = "data/{name}.csv".format( name=player.replace(' ', '-')) df = pd.read_csv(filename) The loaded data is a DataFrame, a 2D tabular data where each row is an observation, and each column is a variable. We can have a first look at this dataset by just displaying it in the IPython notebook. df tailmethod displays the last rows of the column. df['win'] = df['winner'] == player df['win'].tail() df['win']is a Seriesobject: it is very similar to a NumPy array, except that each value has an index (here, the match index). This object has a few standard statistical functions. For example, let's look at the proportion of matches won. print("{player} has won {vic:.0f}% of his ATP matches.".format( player=player, vic=100*df['win'].mean())) start datefield contains the start date of the tournament as a string. We can convert the type to a date type using the pd.to_datetimefunction. date = pd.to_datetime(df['start date']) df['dblfaults'] = (df['player1 double faults'] / df['player1 total points total']) headand tailmethods to take a look at the beginning and the end of the column, and describeto get summary statistics. In particular, let's note that some rows have NaNvalues (i.e. the number of double faults is not available for all matches). df['dblfaults'].tail() df['dblfaults'].describe() groupby. This function allows us to group together rows that have the same value in a particular column. Then, we can aggregate this group-by object to compute statistics in each group. For instance, here is how we can get the proportion of wins as a function of the tournament's surface. df.groupby('surface')['win'].mean() groupby. gb = df.groupby('year') gbis a GroupByinstance. It is similar to a DataFrame, but there are multiple rows per group (all matches played in each year). We can aggregate those rows using the meanoperation. We use matplotlib's plot_datefunction because the x-axis contains dates. plt.figure(figsize=(8, 4)) plt.plot_date(date.astype(datetime), df['dblfaults'], alpha=.25, lw=0); plt.plot_date(gb['start date'].max(), gb['dblfaults'].mean(), '-', lw=3); plt.xlabel('Year'); plt.ylabel('Proportion of double faults per match.'); You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter07_stats/01_pandas.ipynb
CC-MAIN-2017-47
refinedweb
437
60.11
I copied this function directly from my programing book, but it still does not work. The Line temp = tbp->entry[j]; gives me an error when compiling "cannot make assignment of this type" here is all the relevent code (I think ) If anyone can tell me what I am doing wrong i would appreciate it. #include <stdio.h> #include <ctype.h> #include <string.h> #include <stdlib.h> #define TEL_BOOK_SIZE 50 #define MAX_STRING_SIZE 81 struct tel_book_element {char *name; char *telNum; }; typedef struct tel_book_element TelBookElement; struct tel_book {TelBookElement entry[TEL_BOOK_SIZE]; int n; }; typedef struct tel_book TelBook; ... void sort(TelBook *tbp) { int j,k, small; TelBook temp; for (j=0;j<tbp->n-1;j++) {small = j; for (k = j + 1; k < tbp->n;k++) if (strcmp(tbp->entry[k].name, tbp->entry[small].name) < 0) small = k; temp = tbp->entry[j]; tbp->entry[j] = tbp->entry[small]; tbp->entry[small] = temp; } }
http://cboard.cprogramming.com/c-programming/6101-need-help-alphabetizing-struct.html
CC-MAIN-2015-40
refinedweb
149
65.12
Frequentism and Bayesianism: A Practical Introduction. I'll start by addressing the philosophical distinctions between the views, and from there move to discussion of how these ideas are applied in practice, with some Python code snippets demonstrating the difference between the approaches. Fundamentally, the disagreement between frequentists and Bayesians concerns the definition of probability. For frequentists, probability only has meaning in terms of a limiting case of repeated measurements. That is, if I measure the photon flux $F$ from a given star (we'll assume for now that the star's flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value. For frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense. For Bayesians, the concept of probability is extended to cover degrees of certainty about statements. Say a Bayesian claims to measure the flux $F$ of a star with some probability $P(F)$: that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement reasult will be. For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range. That probability codifies our knowledge of the value based on prior information and/or available data. The surprising thing is that this arguably subtle difference in philosophy leads, in practice, to vastly different approaches to the statistical analysis of data. Below I will give a few practical examples of the differences in approach, along with associated Python code to demonstrate the practical aspects of the resulting methods. Here we'll take a look at an extremely simple problem, and compare the frequentist and Bayesian approaches to solving it. There's necessarily a bit of mathematical formalism involved, but I won't go into too much depth or discuss too many of the subtleties. If you want to go deeper, you might consider — please excuse the shameless plug — taking a look at chapters 4-5 of our textbook. Imagine that we point our telescope to the sky, and observe the light coming from a single star. For the time being, we'll assume that the star's true flux is constant with time, i.e. that is it has a fixed value $F_{\rm true}$ (we'll also ignore effects like sky noise and other sources of systematic error). We'll assume that we perform a series of $N$ measurements with our telescope, where the $i^{\rm th}$ measurement reports the observed photon flux $F_i$ and error $e_i$. The question is, given this set of measurements $D = \{F_i,e_i\}$, what is our best estimate of the true flux $F_{\rm true}$? (Gratuitous aside on measurement errors: We'll make the reasonable assumption that errors are Gaussian. In a Frequentist perspective, $e_i$ is the standard deviation of the results of a single measurement event in the limit of repetitions of *that event*. In the Bayesian perspective, $e_i$ is the standard deviation of the (Gaussian) probability distribution describing our knowledge of that particular measurement given its observed value) Here we'll use Python to generate some toy data to demonstrate the two approaches to the problem. Because the measurements are number counts, a Poisson distribution is a good approximation to the measurement process: # Generating some simple photon count data import numpy as np from scipy import stats np.random.seed(1) # for repeatability F_true = 1000 # true flux, say number of photons measured in 1 second N = 50 # number of measurements F = stats.poisson(F_true).rvs(N) # N measurements of the flux e = np.sqrt(F) # errors on Poisson counts estimated via square root Now let's make a simple visualization of the "measured" data: %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5) ax.vlines([F_true], 0, N, linewidth=5, alpha=0.2) ax.set_xlabel("Flux");ax.set_ylabel("measurement number"); These measurements each have a different error $e_i$ which is estimated from Poisson statistics using the standard square-root rule. In this toy example we already know the true flux $F_{\rm true}$, but the question is this: given our measurements and errors, what is our best estimate of the true flux? Let's take a look at the frequentist and Bayesian approaches to solving this. We'll start with the classical frequentist maximum likelihood approach. Given a single observation $D_i = (F_i, e_i)$, we can compute the probability distribution of the measurement given the true flux $F_{\rm true}$ given our assumption of Gaussian errors: $$ P(D_i~|~F_{\rm true}) = \frac{1}{\sqrt{2\pi e_i^2}} \exp{\left[\frac{-(F_i - F_{\rm true})^2}{2 e_i^2}\right]} $$ This should be read "the probability of $D_i$ given $F_{\rm true}$ equals ...". You should recognize this as a normal distribution with mean $F_{\rm true}$ and standard deviation $e_i$. We construct the likelihood function by computing the product of the probabilities for each data point: $$\mathcal{L}(D~|~F_{\rm true}) = \prod_{i=1}^N P(D_i~|~F_{\rm true})$$ Here $D = \{D_i\}$ represents the entire set of measurements. Because the value of the likelihood can become very small, it is often more convenient to instead compute the log-likelihood. Combining the previous two equations and computing the log, we have $$\log\mathcal{L} = -\frac{1}{2} \sum_{i=1}^N \left[ \log(2\pi e_i^2) + \frac{(F_i - F_{\rm true})^2}{e_i^2} \right]$$ What we'd like to do is determine $F_{\rm true}$ such that the likelihood is maximized. For this simple problem, the maximization can be computed analytically (i.e. by setting $d\log\mathcal{L}/dF_{\rm true} = 0$). This results in the following observed estimate of $F_{\rm true}$: $$ F_{\rm est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = 1/e_i^2 $$ Notice that in the special case of all errors $e_i$ being equal, this reduces to $$ F_{\rm est} = \frac{1}{N}\sum_{i=1}^N F_i $$ That is, in agreement with intuition, $F_{\rm est}$ is simply the mean of the observed data when errors are equal. We can go further and ask what the error of our estimate is. In the frequentist approach, this can be accomplished by fitting a Gaussian approximation to the likelihood curve at maximum; in this simple case this can also be solved analytically. It can be shown that the standard deviation of this Gaussian approximation is: $$ \sigma_{\rm est} = \left(\sum_{i=1}^N w_i \right)^{-1/2} $$ These results are fairly simple calculations; let's evaluate them for our toy dataset: w = 1. / e ** 2 print(""" F_true = {0} F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements) """.format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N)) F_true = 1000 F_est = 998 +/- 4 (based on 50 measurements) We find that for 50 measurements of the flux, our estimate has an error of about 0.4% and is consistent with the input value. The Bayesian approach, as you might expect, begins and ends with probabilities. It recognizes that what we fundamentally want to compute is our knowledge of the parameters in question, i.e. in this case, $$ P(F_{\rm true}~|~D) $$ Note that this formulation of the problem is fundamentally contrary to the frequentist philosophy, which says that probabilities have no meaning for model parameters like $F_{\rm true}$. Nevertheless, within the Bayesian philosophy this is perfectly acceptable. To compute this result, Bayesians next apply Bayes' Theorem, a fundamental law of probability: $$ P(F_{\rm true}~|~D) = \frac{P(D~|~F_{\rm true})~P(F_{\rm true})}{P(D)} $$ Though Bayes' theorem is where Bayesians get their name, it is not this law itself that is controversial, but the Bayesian interpretation of probability implied by the term $P(F_{\rm true}~|~D)$. Let's take a look at each of the terms in this expression: - $P(F_{\rm true}~|~D)$: The posterior, or the probability of the model parameters given the data: this is the result we want to compute. - $P(D~|~F_{\rm true})$: The likelihood, which is proportional to the $\mathcal{L}(D~|~F_{\rm true})$ in the frequentist approach, above. - $P(F_{\rm true})$: The model prior, which encodes what we knew about the model prior to the application of the data $D$. - $P(D)$: The data probability, which in practice amounts to simply a normalization term. If we set the prior $P(F_{\rm true}) \propto 1$ (a flat prior), we find $$P(F_{\rm true}|D) \propto \mathcal{L}(D|F_{\rm true})$$ and the Bayesian probability is maximized at precisely the same value as the frequentist result! So despite the philosophical differences, we see that (for this simple problem at least) the Bayesian and frequentist point estimates are equivalent. You'll noticed that I glossed over something here: the prior, $P(F_{\rm true})$. The prior allows inclusion of other information into the computation, which becomes very useful in cases where multiple measurement strategies are being combined to constrain a single model (as is the case in, e.g. cosmological parameter estimation). The necessity to specify a prior, however, is one of the more controversial pieces of Bayesian analysis. A frequentist will point out that the prior is problematic when no true prior information is available. Though it might seem straightforward to use a noninformative prior like the flat prior mentioned above, there are some surprisingly subtleties involved. It turns out that in many situations, a truly noninformative prior does not exist! Frequentists point out that the subjective choice of a prior which necessarily biases your result has no place in statistical data analysis. A Bayesian would counter that frequentism doesn't solve this problem, but simply skirts the question. Frequentism can often be viewed as simply a special case of the Bayesian approach for some (implicit) choice of the prior: a Bayesian would say that it's better to make this implicit choice explicit, even if the choice might include some subjectivity. Leaving these philosophical debates aside for the time being, let's address how Bayesian results are generally computed in practice. For a one parameter problem like the one considered here, it's as simple as computing the posterior probability $P(F_{\rm true}~|~D)$ as a function of $F_{\rm true}$: this is the distribution reflecting our knowledge of the parameter $F_{\rm true}$. But as the dimension of the model grows, this direct approach becomes increasingly intractable. For this reason, Bayesian calculations often depend on sampling methods such as Markov Chain Monte Carlo (MCMC). I won't go into the details of the theory of MCMC here. Instead I'll show a practical example of applying an MCMC approach using Dan Foreman-Mackey's excellent emcee package. Keep in mind here that the goal is to generate a set of points drawn from the posterior probability distribution, and to use those points to determine the answer we seek. To perform this MCMC, we start by defining Python functions for the prior $P(F_{\rm true})$, the likelihood $P(D~|~F_{\rm true})$, and the posterior $P(F_{\rm true}~|~D)$, noting that none of these need be properly normalized. Our model here is one-dimensional, but to handle multi-dimensional models we'll define the model in terms of an array of parameters $\theta$, which in this case is $\theta = [F_{\rm true}]$: def log_prior(theta): return 1 # flat prior def log_likelihood(theta, F, e): return -0.5 * np.sum(np.log(2 * np.pi * e ** 2) + (F - theta[0]) ** 2 / e ** 2) def log_posterior(theta, F, e): return log_prior(theta) + log_likelihood(theta, F, e) Now we set up the problem, including generating some random starting guesses for the multiple chains of points. ndim = 1 # number of parameters in the model nwalkers = 50 # number of MCMC walkers nburn = 1000 # "burn-in" period to let chains stabilize nsteps = 2000 # number of MCMC steps to take # we'll start at random locations between 0 and 2000 starting_guesses = 2000 * np.random.rand(nwalkers, ndim) import emcee sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e]) sampler.run_mcmc(starting_guesses, nsteps) sample = sampler.chain # shape = (nwalkers, nsteps, ndim) sample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points If this all worked correctly, the array sample should contain a series of 50000 points drawn from the posterior. Let's plot them and check: # plot a histogram of the sample plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, normed=True) # plot a best-fit Gaussian F_fit = np.linspace(975, 1025) pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit) plt.plot(F_fit, pdf, '-k') plt.xlabel("F"); plt.ylabel("P(F)") <matplotlib.text.Text at 0x1075c7510> We end up with a sample of points drawn from the (normal) posterior distribution. The mean and standard deviation of this posterior are the corollary of the frequentist maximum likelihood estimate above: print(""" F_true = {0} F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements) """.format(F_true, np.mean(sample), np.std(sample), N)) F_true = 1000 F_est = 998 +/- 4 (based on 50 measurements) We see that as expected for this simple problem, the Bayesian approach yields the same result as the frequentist approach! Now, you might come away with the impression that the Bayesian method is unnecessarily complicated, and in this case it certainly is. Using an Affine Invariant Markov Chain Monte Carlo Ensemble sampler to characterize a one-dimensional normal distribution is a bit like using the Death Star to destroy a beach ball, but I did this here because it demonstrates an approach that can scale to complicated posteriors in many, many dimensions, and can provide nice results in more complicated situations where an analytic likelihood approach is not possible. As a side note, you might also have noticed one little sleight of hand: at the end, we use a frequentist approach to characterize our posterior samples! When we computed the sample mean and standard deviation above, we were employing a distinctly frequentist technique to characterize the posterior distribution. The pure Bayesian result for a problem like this would be to report the posterior distribution itself (i.e. its representative sample), and leave it at that. That is, in pure Bayesianism the answer to a question is not a single number with error bars; the answer is the posterior distribution over the model parameters! Let's briefly take a look at a more complicated situation, and compare the frequentist and Bayesian results yet again. Above we assumed that the star was static: now let's assume that we're looking at an object which we suspect has some stochastic variation — that is, it varies with time, but in an unpredictable way (a Quasar is a good example of such an object). We'll propose a simple 2-parameter Gaussian model for this object: $\theta = [\mu, \sigma]$ where $\mu$ is the mean value, and $\sigma$ is the standard deviation of the variability intrinsic to the object. Thus our model for the probability of the true flux at the time of each observation looks like this: $$ F_{\rm true} \sim \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[\frac{-(F - \mu)^2}{2\sigma^2}\right]$$ Now, we'll again consider $N$ observations each with their own error. We can generate them this way: np.random.seed(42) # for reproducibility N = 100 # we'll use more samples for the more complicated model mu_true, sigma_true = 1000, 15 # stochastic flux model F_true = stats.norm(mu_true, sigma_true).rvs(N) # (unknown) true flux F = stats.poisson(F_true).rvs() # observed flux: true flux plus Poisson errors. e = np.sqrt(F) # root-N error, as above The resulting likelihood is the convolution of the intrinsic distribution with the error distribution, so we have $$\mathcal{L}(D~|~\theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi(\sigma^2 + e_i^2)}}\exp\left[\frac{-(F_i - \mu)^2}{2(\sigma^2 + e_i^2)}\right]$$ Analogously to above, we can analytically maximize this likelihood to find the best estimate for $\mu$: $$\mu_{est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = \frac{1}{\sigma^2 + e_i^2} $$ And here we have a problem: the optimal value of $\mu$ depends on the optimal value of $\sigma$. The results are correlated, so we can no longer use straightforward analytic methods to arrive at the frequentist result. Nevertheless, we can use numerical optimization techniques to determine the maximum likelihood value. Here we'll use the optimization routines available within Scipy's optimize submodule: def log_likelihood(theta, F, e): return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2)) + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2)) # maximize likelihood <--> minimize negative likelihood def neg_log_likelihood(theta, F, e): return -log_likelihood(theta, F, e) from scipy import optimize theta_guess = [900, 5] theta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e)) print(""" Maximum likelihood estimate for {0} data points: mu={theta[0]:.0f}, sigma={theta[1]:.0f} """.format(N, theta=theta_est)) Optimization terminated successfully. Current function value: 502.839505 Iterations: 58 Function evaluations: 114 Maximum likelihood estimate for 100 data points: mu=999, sigma=19 This maximum likelihood value gives our best estimate of the parameters $\mu$ and $\sigma$ governing our model of the source. But this is only half the answer: we need to determine how confident we are in this answer, that is, we need to compute the error bars on $\mu$ and $\sigma$. There are several approaches to determining errors in a frequentist paradigm. We could, as above, fit a normal approximation to the maximum likelihood and report the covariance matrix (here we'd have to do this numerically rather than analytically). Alternatively, we can compute statistics like $\chi^2$ and $\chi^2_{\rm dof}$ to and use standard tests to determine confidence limits, which also depends on strong assumptions about the Gaussianity of the likelihood. We might alternatively use randomized sampling approaches such as Jackknife or Bootstrap, which maximize the likelihood for randomized samples of the input data in order to explore the degree of certainty in the result. All of these would be valid techniques to use, but each comes with its own assumptions and subtleties. Here, for simplicity, we'll use the basic bootstrap resampler found in the astroML package: from astroML.resample import bootstrap def fit_samples(sample): # sample is an array of size [n_bootstraps, n_samples] # compute the maximum likelihood for each bootstrap. return np.array([optimize.fmin(neg_log_likelihood, theta_guess, args=(F, np.sqrt(F)), disp=0) for F in sample]) samples = bootstrap(F, 1000, fit_samples) # 1000 bootstrap resamplings Now in a similar manner to what we did above for the MCMC Bayesian posterior, we'll compute the sample mean and standard deviation to determine the errors on the parameters. mu_samp = samples[:, 0] sig_samp = abs(samples[:, 1]) print " mu = {0:.0f} +/- {1:.0f}".format(mu_samp.mean(), mu_samp.std()) print " sigma = {0:.0f} +/- {1:.0f}".format(sig_samp.mean(), sig_samp.std()) mu = 999 +/- 4 sigma = 18 +/- 5 I should note that there is a huge literature on the details of bootstrap resampling, and there are definitely some subtleties of the approach that I am glossing over here. One obvious piece is that there is potential for errors to be correlated or non-Gaussian, neither of which is reflected by simply finding the mean and standard deviation of each model parameter. Nevertheless, I trust that this gives the basic idea of the frequentist approach to this problem. The Bayesian approach to this problem is almost exactly the same as it was in the previous problem, and we can set it up by slightly modifying the above code. def log_prior(theta): # sigma needs to be positive. if theta[1] <= 0: return -np.inf else: return 0 def log_posterior(theta, F, e): return log_prior(theta) + log_likelihood(theta, F, e) # same setup as above: ndim, nwalkers = 2, 50 nsteps, nburn = 2000, 1000 starting_guesses = np.random.rand(nwalkers, ndim) starting_guesses[:, 0] *= 2000 # start mu between 0 and 2000 starting_guesses[:, 1] *= 20 # start sigma between 0 and 20 sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e]) sampler.run_mcmc(starting_guesses, nsteps) sample = sampler.chain # shape = (nwalkers, nsteps, ndim) sample = sampler.chain[:, nburn:, :].reshape(-1, 2) Now that we have the samples, we'll use a convenience routine from astroML to plot the traces and the contours representing one and two standard deviations: from astroML.plotting import plot_mcmc fig = plt.figure() ax = plot_mcmc(sample.T, fig=fig, labels=[r'$\mu$', r'$\sigma$'], colors='k') ax[0].plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1) ax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10); The red dot indicates ground truth (from our problem setup), and the contours indicate one and two standard deviations (68% and 95% confidence levels). In other words, based on this analysis we are 68% confident that the model lies within the inner contour, and 95% confident that the model lies within the outer contour. Note here that $\sigma = 0$ is consistent with our data within two standard deviations: that is, depending on the certainty threshold you're interested in, our data are not enough to confidently rule out the possibility of a non-varying source! The other thing to notice is that this posterior is definitely not Gaussian: this can be seen by the lack of symmetry in the vertical direction. That means that the Gaussian approximation used within the frequentist approach may not reflect the true uncertainties in the result. This isn't an issue with frequentism itself (i.e. there are certainly ways to account for non-Gaussianity within the frequentist paradigm), but the vast majority of commonly applied frequentist techniques make the explicit or implicit assumption of Gaussianity of the distribution. Bayesian approaches generally don't require such assumptions. (Side note on priors: there are good arguments that a flat prior on $\sigma$ subtley biases the calculation in this case: i.e. a flat prior is not necessarily non-informative in the case of scale factors like $\sigma$. There are interesting arguments to be made that the [Jeffreys Prior]() would be more applicable. Here I believe the Jeffreys prior is not suitable, because $\sigma$ is not a true scale factor (i.e. the Gaussian has contributions from $e_i$ as well). On this question, I'll have to defer to others who have more expertise. Note that subtle — some would say subjective — questions like this are among the features of Bayesian analysis that frequentists take issue with). I hope I've been able to convey through this post how philosophical differences underlying frequentism and Bayesianism lead to fundamentally different approaches to simple problems, which nonetheless can often yield similar or even identical results. To summarize the differences: - Frequentism considers probabilities to be related to frequencies of real or hypothetical events. - Bayesianism considers probabilities to measure degrees of knowledge. - Frequentist analyses generally proceed through use of point estimates and maximum likelihood approaches. - Bayesian analyses generally compute the posterior either directly or through some version of MCMC sampling. In simple problems, the two approaches can yield similar results. As data and models grow in complexity, however, the two approaches can diverge greatly. In a followup post, I plan to show an example or two of these more complicated situations. Stay tuned! Update: see the followup post: Frequentism and Bayesianism II: When Results Differ This post was written entirely in the IPython notebook. You can download this notebook, or see a static view here.
https://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/
CC-MAIN-2019-18
refinedweb
4,032
50.87
A Brief Overview of ES6 for React Native Developers If you’re coming to React Native and you’re - New to JavaScript - Already familiar with JavaScript but haven’t used ES6/ES2015+ features then you may feel a bit lost at times. The syntax can seem weird, confusing, or sometimes you just don’t know what to look for. I’ve compiled a brief list of the most common ES6+ features that I see in React Native apps and tutorials. This is by no means comprehensive but it should at least get you started. Variables Since the advent of JavaScript we’ve had var. But now we have var, let, and const. They're all valid but what's the difference? let: Very similar to var but the scoping is different. var is function scoped (available and can be modified anywhere within a function) whereas let is block scoped, meaning it's available only within that block of code. I pretty much always use let in place of var now (honestly I can't remember the last time I used var). const: Same scoping (block) as let but you can't change the value of it, for example: const name = 'Spencer'; name = 'Johnny' // Can't do this However (and this is something that I was confused about at first) you can modify it if it’s of a type object or array, for example: const info = { name: 'Spencer', company: 'Handlebar Labs', }; info.job = 'Teaching'; // This is perfectly valid const roles = ['Student', 'Teacher']; roles.push('Developer'); // Good to go! Want more info? Arrow Functions Syntax There’s now a new way to declare functions in JavaScript called arrow functions, and you’ll see these a lot when working with React or React Native. The primary difference between standard/old functions and arrow functions is what this is bound to, so sometimes you'll want/need to use function. Creating an arrow function is simple const greet = (name) => { return 'Hello, ' + name + '!'; }; greet('Spencer'); // Hello, Spencer! Learn more about arrow function syntax Formatting Arguments With arrow functions you can format arrow functions in a few different ways, all of which are commonly used. These are the three rules I’ve commited to memory. 1. No arguments = parenthesis required const greet = () => { return 'Hi!'; }; 2. One argument = parenthesis optional const greet = (name) => { return 'Hello, ' + name + '!'; }; const greet = name => { return 'Hello, ' + name + '!'; }; 3. Two or more arguments = parenthesis required const greet = (name, company) => { return 'Hello, ' + name + '!' + 'How is ' + company + '?'; }; Learn more about formatting arguments Default Arguments This is one of my favorites — an extremely simple way to set default arguments for your functions by simply assigning them to a value when naming the argument. If the argument is passed it will use the argument you pass, otherwise it will fall back to the default. const greet = (name = 'Friend') => { return 'Hello, ' + name + '!'; }; greet(); // Hello, Friend! greet('Spencer'); // Hello, Spencer! Learn more about default arguments Implicit Return Have a simple function and sick of writing curly braces and returns? Fret no more! You’re now able to implicitly return from a function, like so const greet = (name) => 'Hello, ' + name + '!'; greet('Spencer'); // Hello, Spencer! Mmm saved keystrokes It gets better though! Say you want to return an object from a function, you can do so like so (you’ll often see this when working with Redux) const getInfo = () => ({ name: 'Spencer', company: 'Handlebar Labs', job: 'Teaching', }); getInfo(); // { name: 'Spencer', company: 'Handlebar Labs', job: 'Teaching' } (notice the parenthesis wrapping the object) And finally you’re also able to return a component in a very similar way as the object, let me demonstrate const Greeting = ({ name }) => ( <View> <Text>Hello, {name}!</Text> </View> ); Again we’re wrapping the component with parenthesis and we don’t have to do any returns. Learn more about implicit returns Objects We’ve now got a few very convenient tools (that previously would have required an external library) that make working with Objects in JavaScript easier. Destructuring Destructuring allows us to “destructure”, or break down, an object so that we can more easily access the information we care about. Let’s say we want to access some data on an object, in the past we would have had to do the following const info = { name: 'Spencer', company: 'Handlebar Labs', location: { city: 'Nashville', state: 'Tennessee', }, }; const name = info.name; const city = info.location.city; const state = info.location.state; That’s fine but now we’re able to save a bit of time defining the variables that access the info we care about. When you’re passing props around a React Native application it’s common to have some nested data and, as we see with city and state, we end up writing a lot of the same code. You’re able to destructure that object to more easily access data. const info = { name: 'Spencer', company: 'Handlebar Labs', location: { city: 'Nashville', state: 'Tennessee', }, }; const { name, location } = info; const { city, state } = location; // name is Spencer // city is Nashville // state is Tennessee You’ll often see this when accessing information from props, like this: const Info = ({ name, location }) => ( <View> <Text>{name} lives in {location.city}, {location.state}</Text> </View> ); Learn more about object destructuring Spread Object spreading allows us to copy information from one object to another. It’s a practice you’ll often see when using Redux because of the need for pure functions. Let’s say we have multiple people who work at Handlebar Labs and they’ve all got some of the same basic information. To save time we’ll copy that information from the “template” to an individual’s information. const handlebarLabsInfo = { company: 'Handlebar Labs', location: { city: 'Nashville', state: 'Tennessee', }, }; const spencerInfo = { ...handlebarLabsInfo, name: 'Spencer', } console.log(spencerInfo); // { name: 'Spencer', company: 'Handlebar Labs', location: { city: 'Nashville', state: 'Tennessee' } } Learn more about object spread Strings Template Literals Another personal favorite of mine. Notice how earlier, or in any of your older code/tutorials, you see 'Hello, ' + name + '!' + 'How is ' + company + '?'? Those + signs can be a pain to write and I know personally I would always forget a space, thus causing the formatting to look off. Template literals make it easier for us because we can much more naturally write strings with dynamic content. By using back ticks (``) to defined the string we can then pass variables in with ${}. Let me just show you... const greet = (name, company) => { // return 'Hello, ' + name + '!' + 'How is ' + company + '?'; return `Hello, ${name}! How is ${company}?`; }; So much better 😄 Learn more about template literals Modules For people first jumping over to React Native this one can be confusing. You’re probably used to seeing exports.greet = (name) => 'Hello, ' + name + '!'; // OR module.exports = (name) => 'Hello, ' + name + '!'; and likewise to actually use that code: const formalities = require('./formalities'); formalities.greet(); const greet = require('./formalities'); greet(); We’ve now got access to a different module syntax that takes advantage of the keywords import and export. Let's convert that first export block. export const greet = (name) => 'Hello, ' + name + '!'; // OR export default greet; Then to access that code we could use import { greet } from './formalities'; // OR import greet from './formalities'; What’s nice is that we can use both export and export default together. There's much more you can do with ES6 modules and I would definitely encourage you to check it out. require still has its place but I rarely use them now Wrapping Up There’s a lot of great stuff in ES6 and beyond, much of it I didn’t cover here. These are just the most common ones that I see in use. Did I forget something? Let me know! Want more React Native related content? Sign up for my email list or take my intro to React Native course (it’s free!).
https://medium.com/the-react-native-log/a-brief-overview-of-es6-for-react-native-developers-15e7c68315da
CC-MAIN-2018-43
refinedweb
1,289
64.91
In my map control article, I tried to parse user input to see if it was a latitude and longitude and display the result on the map. At the time, I didn't want to write a full parser for the user input so I (very lazily) just split the user input using a comma and tried to parse two decimal values. This has a number of problems. First, coordinates couldn't be in degree minute second format, which is quite popular for coordinates. The second point (which was also pointed out in the comments), some countries use a comma as a decimal separator (Spain for example). Jaime Olivares wrote an excellent article here that parses and serializes latitude and longitude coordinates according to the ISO 6709 standard (a nice guide on the standard is available on this page). The article is good at explaining what the standard is and provides nice and concise code to get the job done, but it's a bit unreasonable to expect users of an application to type a coordinate according to this format! For that reason, I'm going to use some simple regular expressions to try and parse as flexibly as possible, taking into account different user's language settings. The nice thing about the ISO 6709 format (from a developer's point of view) is that we know exactly what to expect in the string. For example, to separate multiple coordinates the '/' character is used. Also, the data will not vary depending on the cultural settings of the user; the decimal separator will always be '.' However, there's still a little guess work, as we don't know if it represents decimal degrees (from now on referred to as D), degrees and decimal minutes (DM) or degrees, minutes and decimal seconds (DMS). Also, we do not know if there will be an altitude component or not. Let's list what we do know though: string '/' '.' '+' '-' [±DD(.D)] [±DDMMSS(.S)] [±DDD(.D)] [±DDDMMSS(.S)] [±A(.A)] Now that we know what a valid format is, we can easily translate it into a regular expression (and use the Regex class). This is the regular expression we'll use (if you want to try it remember to use the RegexOptions.IgnorePatternWhitespace flag). Regex RegexOptions.IgnorePatternWhitespace ^\s* # Match the start of the string, ignoring any whitespace (?<latitude> [+-][0-9]{2,6}(?: \. [0-9]+)?) # The decimal part is optional. (?<longitude>[+-][0-9]{3,7}(?: \. [0-9]+)?) (?<altitude> [+-][0-9]+(?: \. [0-9]+)?)? # The altitude component is optional / # The string must be terminated by '/' This regular expression will tell us if the input string might be in the ISO 6709 format and, if it all matched, will allow us to get the various components from the string using the various named groups. I said the string might be in the correct format, because the expression shown also allows '+123+1234/' as a valid value (i.e. ±DDM±DDDM/) and doesn't perform any range checking on the values (e.g. minutes and seconds cannot be greater than or equal to 60). Therefore, we need to pass the output of a successful match onto another function to convert the string to a number that we can use in calculations. string '+123+1234/' ±DDM±DDDM/ For the altitude part, this is extremely easy; check the altitude Group.Success property and, if the altitude was found, convert the string value using double.Parse (making sure to pass in CultureInfo.InvariantCulture to avoid any localization issues). Note there is no need to use double.TryParse as we've already checked the input is valid using the regular expression. Group.Success double.Parse CultureInfo.InvariantCulture double.TryParse For longitude and latitude, it's a little trickier. The basic idea is to split the string into two parts; the integral part and the optional fractional part. Depending on the length of the integral part, we know whether the string is in D, DM or DMS format and can split the string and parse each component separately, making sure to add the fractional part (if any) to the last component. As mentioned in the introduction, the motivation of this article is to extract a coordinate from user supplied strings, whilst being friendly to different cultural settings. The approach I've taken is to split the string up into groups and then use the double.TryParse method (passing in the current cultural setting) to actually do the number processing, as I figured the .NET Framework can do a better job at localization than I can! This just begs the question on how to split the string into groups? What I've assumed is that the latitude and longitude will be separated by whitespace. I've also assumed that the latitude and longitude will be in the same format (i.e. if latitude is a DM then longitude is a DM too). Let's look at some examples of how we might write the latitude: 12° 34' 56? S -12° 34' 56? 'S' -12°34'56?S -12 34" 56' -12 34’ 56” +12 34 56 S S 12d34m56s S 12* 34' 56" Of course, there are many more combinations (only specifying one of the symbols, mixing smart quotes and plain quotes, etc). Also, this is just for DMS format and doesn't even look at decimal seconds (for example, is -12 34' 56.78" valid? Maybe in some countries, but in Spain it's not). There is also a possible source of ambiguity in regards to what 'S' should mean? If we allow 'D' to signify Degrees, 'M' signifies Minutes then naturally 'S' should be interpreted as Seconds. But in most of the examples, 'S' signifies that the latitude is in the Southern hemisphere. We’ll therefore exclude 'S' as a symbol for seconds, so 12d 34m 56s will be interpreted as 12° 34' 56? S -12 34' 56.78" 'D' 'M' 12d 34m 56s Since we're not going to try and validate the numbers, we just need to find a way of splitting the string into groups. As with the ISO format, we can use a regular expression and group together anything which isn't a symbol or whitespace. Here is the simplest case for degrees only: ^\s* # Ignore any whitespace at the start of the string (?<latitudeSuffix>[NS])? # The suffix could be at the start (?<latitude>.+?) # Match anything and we'll try to parse it later [D\*\u00B0]?\s* # Degree symbols (optional) followed by optional whitespace (?<latitudeSuffix>[NS])?\s+ # Optional suffix with at least some whitespace to separate Wow, what a mess! After skipping the whitespace at the start of the string, Regex will look for a North/South specifier and, if it's found, will store it in a group named latitudeSuffix. It will then match any character ('.') more than once but as few times as necessary ('+?'). What that means is that if it finds an optional degree symbol (such as '*' (a reserved character so needs to be escaped), 'D' or '°' (written as a Unicode number)) then the matching will stop. Failing that, it will look for any whitespace. If still no matches are found, it will look for the latitude suffix. Finally, if it still hasn't found any of these, then it must find at least one whitespace character (remember we said that the latitude and longitude must be separated by whitespace). Assuming the regular expression matches the whole string successfully, then we move on to phase two where we try to parse the extracted groups using the current cultural settings. This involves passing the latitude group to double.TryParse and altering the sign (if necessary) based on the latitudeSuffix group. latitudeSuffix '+?' '*' '°' latitude The Angle class serves as a base class for Latitude and Longitude and allows conversion between radians and degrees. It implements the IComparable<T>, IEquatable<T> and IFormattable interfaces, which means you can compare Angles with each other (or a Latitude or Longitude, but you cannot compare a Latitude to a Longitude - that doesn't make sense). It also means that you can choose how to display them: Angle Latitude Longitude IComparable<T> IEquatable<T> IFormattable var latitude = Latitude.FromDegrees(-5, -10, -15.1234); Console.WriteLine("{0:DMS1}", latitude); // 5° 10' 15.1? S Console.WriteLine("{0:DM3}", latitude); // 5° 10.252' S Console.WriteLine("{0:D}", latitude); // 5.17° S Console.WriteLine("{0:ISO}", latitude); // -051015.1234 The class does not have any public visible constructors, so you’ll need to use the static initializers. Here is the full list of methods and properties for the class: public static public class Angle : IComparable<Angle>, IEquatable<Angle>, IFormattable { // Gets the whole number of degrees from the angle. public int Degrees { get; } // Gets the whole number of minutes from the angle. public int Minutes { get; } // Gets the number of seconds from the angle. public double Seconds { get; } // Gets the value of the angle in radians. public double Radians { get; } // Gets the value of the angle in degrees. public double TotalDegrees { get; } // Gets the value of the angle in minutes. public double TotalMinutes { get; } // Gets the value of the angle in seconds. public double TotalSeconds { get; } // Creates a new angle from an amount in degrees. public static Angle FromDegrees(double degrees); public static Angle FromDegrees(double degrees, double minutes); public static Angle FromDegrees(double degrees, double minutes, double seconds); // Creates a new angle from an amount in radians. public static Angle FromRadians(double radians); // Returns the result of multiplying the specified value by negative one. public static Angle Negate(Angle angle);); // Compares this instance with a specified Angle object and indicates // whether the value of this instance is less than, equal to, or greater // than the value of the specified Angle object. public int CompareTo(Angle other); // Determines whether this instance and a specified object have // the same value. public override bool Equals(object obj); public bool Equals(Angle other); // Returns the hash code for this instance. public override int GetHashCode(); // Returns a string that represents the current Angle in degrees, // minutes and seconds form. public override string ToString(); // Formats the value of the current instance using the specified format. public virtual string ToString(string format, IFormatProvider formatProvider); } The Location class contains a Latitude, Longitude and optional altitude. It implements the IEquatable<T>, IFormattable and IXmlSerializable interfaces, using the ISO format to serialize/deserialize itself. It also accepts the same formatting strings as Latitude/Longitude. There are some static parsing methods that accept various options for allowing different formats to be recognised and the class also has a few helper functions as well, derived from Aviation Formulary V1.45 by Ed Williams. Location IXmlSerializable public sealed class Location : IEquatable<Location>, IFormattable, IXmlSerializable { // Initializes a new instance of the Location class. public Location(Latitude latitude, Longitude longitude); public Location(Latitude latitude, Longitude longitude, double altitude); // Gets the altitude of the coordinate, or null if the coordinate doesn't // contain altitude information. public double? Altitude { get; } // Gets the latitude of the coordinate. public Latitude Latitude { get; } // Gets the longitude of the coordinate. public Longitude Longitude { get; } // Converts the string into a Location. public static Location Parse(string value); public static Location Parse(string value, IFormatProvider provider); public static Location Parse(string value, LocationStyles style, IFormatProvider provider); // Converts the string into a Location (without throwing an exception). public static bool TryParse(string value, out Location location); public static bool TryParse(string value, IFormatProvider provider, out Location location); public static bool TryParse(string value, LocationStyles style, IFormatProvider provider, out Location location); public static bool operator !=(Location locationA, Location locationB); public static bool operator ==(Location locationA, Location locationB); // Determines whether this instance and a specified object have the // same value. public override bool Equals(object obj); public bool Equals(Location other); // Returns the hash code for this instance. public override int GetHashCode(); // Returns a string that represents the current Location in degrees, // minutes and seconds form. public override string ToString(); // Formats the value of the current instance using the specified format. public string ToString(string format, IFormatProvider formatProvider); // Calculates the initial course (or azimuth; the angle measured clockwise // from true north) from this instance to the specified value. public Angle Course(Location point); // Calculates the great circle distance, in meters, between this instance // and the specified value. public double Distance(Location point); // Calculates a point at the specified distance along the specified // radial from this instance. public Location GetPoint(double distance, Angle radial); } For the sake of completeness, there is also a serializable LocationCollection class that, like Location, uses the ISO format to serialize/deserialize itself. LocationCollection None of the Angle, Latitude or Longitude classes include a conversion (either implicit or explicit) from built in types (such as double). This is deliberate. In the Math class of the .NET Framework, the methods which work with angles (such as Math.Cos) expect the angles to be in radians. However, when dealing with latitude/longitude, degrees are far more common. It therefore seems inconsistent to be able to cast a double to an Angle and assume that the number is in radians but to have a cast from a double to Latitude assume that the number is in degrees. For this reason, it's best if the developer is explicit with what the number represents by using the FromDegrees or FromRadians static methods. implicit explicit double Math Math.Cos FromDegrees FromRadians static Also, for efficiency reasons, it would be nice if Latitude and Longitude were structs. However, you cannot use inheritance with structs and I think the code re-use between Angle and Latitude/Longitude justifies the use of classes, but would welcome any feedback with your opinions..
http://www.codeproject.com/Articles/151869/Parsing-Latitude-and-Longitude-Information?fid=1607550&select=4073088&tid=3753700
CC-MAIN-2015-35
refinedweb
2,272
54.02
BakeBit - Ultrasonic Ranger From FriendlyARM WiKi Contents 1 Introduction - The BakeBit - Ultrasonic Ranger is an ultrasonic module.The module's sensor emits a sound wave whose wave length is around 6mm and frequency is 40K Hz, which bounces off a reflective surface and returns to the sensor. Then, using the amount of time it takes for the wave to return to the sensor, the distance to the object can be computed. 2 Hardware Spec - Standard 2.0mm pitch 4-Pin BakeBit Interface - Range: 5cm - 300cm - Accuracy: 1cm - PCB dimension(mm): 24 x 42 - Pin Desription: 3 Code Sample:Ultrasonic Sensor with LED This code sample shows how to use the ultrasonic sensor module to measure a distance. When the module detects an object in front of itself the LED will be turned on. 3.1 Hardware Setup Connect the LED module to the NanoHat Hub's D3 and the ultrasonic sensor module to the NanoHat Hub's D4: 3.2 Source Code import bakebit import time # Connect the BakeBit Ultrasonic Ranger to digital port D4 # SIG,NC,VCC,GND ultrasonic_ranger = 4 # Connect the BakeBit LED to digital port D3 led = 3 bakebit.pinMode(led,"OUTPUT") light = 0 while True: try: # Read distance value from Ultrasonic distance = bakebit.ultrasonicRead(ultrasonic_ranger) print(distance) if distance > 0: if distance<10: if light == 0: print("\ton") bakebit.digitalWrite(led,1) light = 1 else: if light == 1: print("\toff") bakebit.digitalWrite(led,0) light = 0 time.sleep(.2) except KeyboardInterrupt: bakebit.digitalWrite(led,0) break except TypeError: print ("Error") except IOError: print ("Error") 3.3 Run Code Sample Before you run the code sample you need to follow the steps in bakebit tutorial to install the BakeBit package. Enter the "BakeBit/Software/Python" directory and run the "bakebit_prj_Ultrasonic_Sensor_with_LED.py" program: cd ~/BakeBit/Software/Python sudo python bakebit_prj_Ultrasonic_Sensor_with_LED.py 3.4 Observation When an object is in front of the sensor module within 10 cm the LED will be turned on otherwise the LED will be off. 4 Resources - [Schematic](BakeBit - Ultrasonic Ranger.pdf) - [BakeBit Github Project Page]() - [BakeBit Starter Kit User's Manual]() 5 Update Log 5.1 December-15-2016 - Released English version 5.2 Jan-19-2017 - Renamed "NEO-Hub" to "NanoHat-Hub" 5.3 Jan-20-2017 - Renamed "NanoHat-Hub" to "NanoHat Hub"
http://wiki.friendlyarm.com/wiki/index.php/BakeBit_-_Ultrasonic_Ranger
CC-MAIN-2019-09
refinedweb
381
57.16
#include <GURL.h> This class is used in the library to store URLs in a system independent format. The idea to use a general class to hold URL arose after we realized, that DjVu had to be able to access files both from the WEB and from the local disk. While it is strange to talk about system independence of HTTP URLs, file names formats obviously differ from platform to platform. They may contain forward slashes, backward slashes, colons as separators, etc. There maybe more than one URL corresponding to the same file name. Compare file:/dir/file.djvu# and. To simplify a developer's life we have created this class, which contains inside a canonical representation of URLs. File URLs are converted to internal format with the help of {GOS} class. All other URLs are modified to contain only forward slashes. Definition at line 119 of file GURL.h.
http://djvulibre.sourcearchive.com/documentation/3.5.22/classGURL.html
CC-MAIN-2018-13
refinedweb
150
63.7
POUND FOR POUND From: Ka-Ming Hello. I'm delighted that there is an online magazine for manga and anime and I do like your style. I want to complain about the amount of money it costs to be graphic novels in the UK. One volume of RANMA 1/2 costs over eleven pounds in the UK whilst if I bought a volume in Hong Kong it would only cost me around 26 HK dollars (around 2 pound 60p). Even across the channel in France, graphic novels cost less than 35 Francs (around 3 pound 50p) and they have a wider selection of stories available. Can anyone tell me why there are such discrepancies for the prices of graphic novels? While I'm not an expert in the European market, some basic rules hold true. Part of the expense is probably due to the fact that they're not printing large numbers of the graphic novels. For example, in the USA, a typical translated manga print run (regular comic book size) is about 10,000, while something more mainstream such as X-MEN will number in the hundreds of thousands. This obviously creates a large discrepancy in the cost per issue. Also, importing also usually adds a substantial cost to a product. If the manga you're buying is the American version imported to the UK, you're probably paying an additional cost to cover the import charges. Unfortunately, anime and manga are not cheap hobbies for those outside Japan (and even for those in Japan). On a QUEST From: Kyle Garrett You guys are doing a great job of educating the USA market on all things anime, keep up the good work, and thank you. Today I am writing to ask if you guys remember an anime TV show called DRAGON WARRIOR. Here is all of the information on it that I know... This show aired on the East Coast (NY and NJ specifically) on ABC at 6:30 in the morning on Saturdays during the 1989/1990 season. I have done some investigation with other Anime fans and I believe that the background story is known as DRAGON QUEST. There have been several video games made in Japan about DRAGON QUEST. The anime story that I know as DRAGON WARRIOR featured a boy named Able (or Abel) who was having his coming of "age celebration" in the first episode. A sleeping monster/lord awakens from his watery resting place to rise again and kidnap the "princess" of Able's rural town. He must embark on a quest to save the girl, and the fun begins from there. The characters were designed by Toriyama Akira, the master artist who brought us the DRAGON BALL characters, the CHRONO TRIGGER (SNES)characters and the TOBAL (PSX) series characters. I hope you guys can dig up some information on this show. It was great! I actually got up in the mornings that early on Saturdays to catch the show only to return happily to sleep after watching it. However I traded my VHS tape of the episodes that I had taped, for a store bought copy of AKIRA. The guy who I traded with switched schools and I have not seen anything from the show ever since. Thanks for listening, and I will keep on reading your mag. Well, you've found out pretty much all there is to the US release. DRAGON QUEST was released in the USA as DRAGON WARRIOR, and like most translated anime series, it suffered an early demise due to poor scheduling (6:30AM is not a good time slot!). In Japan, the DRAGON QUEST anime series lasted for about 52 episodes, and is usually available in Japanese video rental places. The USA broadcast run only lasted for 13 episodes, though. Of further note, the first three games of the DRAGON QUEST series is available as the DRAGON WARRIOR series on the NES. In any case, the anime in question was an original plot based on the world view of the game, much like DRAGON QUEST: DAI NO DAIBOUKEN (manga, movie, TV series) and DRAGON QUEST RETSUDEN: ROTO NO MONSHOU (manga, movie). Where Do I Start? From: Kevin Niemczyk Nice site. I am not really a really big fan but I have started to watch. I am writing this to ask if there is a part on your site that can me out. You see, I've only seen anime TV shows on cartoon network. This includes SAILOR MOON, ROBOTECH, and DBZ. I recently looked up ROBOTECH online and I found out that there has been numerous movies. Is there a site you can suggest that help me catch up on what I have been missing out on in the anime world. Thanks if you can or can't. I would really appreciate it. Well, the first thing I would suggest is that you browse the EX archive and read the things of interest to you. If you're interested in a specific show, you can always use our search engine. Another good site for information is the Anime Web Turnpike, which lists pages with information on specific series as well as general information. There's a wealth of information out there, you just have to start reading. What the FAQ? From: Anthony Hickman Do you know where I can find a game FAQ on DBZ LEGENDS that has a move guide? I suggest you check GameFAQs. They have one of the largest libraries of game FAQs online. If it's not there, I don't know what else to tell you. And this goes for all those people who still write in asking for FAQs and codes and cheats. Please don't ask us where you can get move guides for games. We'll just point you to GameFAQs or a search engine, since we don't happen to have a catalog of everything on the web floating around in our heads. Attention TOKIMEMO Fans! From: Chris Tobita This is the first time I have actually seen your magazine. It is very good. I am presently working on a petition drive to translate the anime game series, TOKIMEKI MEMORIAL. With the upcoming release of the two TOKIMEKI MEMORIAL OVAs, I was wondering if you were planning to review them. Also, if you do, I was wondering if you could mention my petition site on your magazine. I don't know if you do this kind of thing, so I understand if you refuse, but I would appreciate any assistance. Well, we'll probably review the OVAs sooner or later. It depends on what the interest is like from our staff. If they're good quality, though, they'll probably wind up reviewed in the virtual pages of our publication. Meanwhile, TOKIMEMO fans, if you're interested in helping out this petition drive, surf on over to the petition site. Are We Ready for MONONOKE? From: Alexander Harris Interesting web site. I like it, but that's not why I am writing this. I read the article that Chad Kime wrote called: American Anime: Blend or Bastardization ? A well thought-out article, I must say. At the end of it, he asked if anyone out there has an opinion. Well, I do. First off, even though I am utterly excited about the upcoming release of Hayao Miyazaki's PRINCESS MONONOKE (July 9) I am not sure that the American public has the open state of mind it would take to handle a film of this magnitude. I don't think at this point in time that Americans, with our thoughtless stereotypes and our quick-to-judge mentalities, are ready to handle any type of animation that contradicts the Disney idea of animation that has been handed to us without exception all of our lives. Change will come. It will take time, but it will come. Soon, the generation that is growing up and watching anime will come into power. We will be the writers, the artists, the executives. We will decide what movies will be released, and what shows will air on televison. I'm not sure that I agree with you entirely. After all, some of the people who are in power now have already grown up watching anime. That's not the problem. There are groups of people who believe that animation is a medium that can exist beyond the realm of children's programming. However, it is certainly not the majority of the population. Still, the fact that Miramax is taking such care with the film (worrying over the name, the rating, etc) seems to indicate that they are taking this very seriously. Of course, that won't really matter if the American public isn't ready for it. And on the whole, maybe they're not, but then again, we'll never know until it comes out, will we? It will be interesting to see how this all plays out.
http://www.ex.org/4.3/03-letters.html
crawl-001
refinedweb
1,497
72.16
ncl_dpsmth - Man Page Used to draw a curve when fractional coordinates are available and smoothing is to be done. Synopsis CALL DPSMTH (XCPF,YCPF,IFVL) C-Binding Synopsis #include <ncarg/ncargC.h> void c_dpsmth (float xcpf, float ycpf, int ifvl); Description - XCPF (an input expression of type REAL) specifies the X coordinate of a point in the fractional coordinate system. Ignored when IFVL = 2. - YCPF (an input expression of type REAL) specifies the Y coordinate of a point in the fractional coordinate system. Ignored when IFVL = 2. - IFVL (an input expression of type INTEGER) indicates what type of call is being done: "0" implies a "first-point" call, "1" implies a "vector" call, and "2" implies a "last-point" or "buffer-flush" call. C-Binding Description The C-binding argument descriptions are the same as the FORTRAN argument descriptions. Usage Use "CALL DPSMTH(XCPF,YCPF,0)" to do a "pen-up" move to the first point in a sequence of points defining a curve. Use "CALL DPSMTH(XCPF,YCPF,1)" to do "pen-down" moves to the second and following points in a sequence of points defining a curve. Use "CALL DPSMTH(0.,0.,2) to terminate a sequence of calls, finish drawing the curve, and flush internal. Examples Use the ncargex command to see the following relevant examples: tdshpk. Access To use DPSMTH or c_dpsmth, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order. See Also Online: dashpack, dashpack_params, dpcurv, dpdraw, dpfrst, dpgetc, dpgeti, dpgetr, dplast, dpline, dpsetc, dpseti, dpsetr, dpvect, ncarg_cbind. Hardcopy: None. University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement.
https://www.mankier.com/3/ncl_dpsmth
CC-MAIN-2021-21
refinedweb
275
55.34
Homework Homework Questions? Ask a Tutor for Answers ASAP Below is what they wanted to me to: Unzip to get project folder. Open eclipse. Click on File ->import->existing project into workspace->java project and select the unzipped folder. It will show the project in eclipse. Expand project by clicking [+] sign till you find java file. Right click on driver java file and click on Run as - java application! Hi LogicPro, you are so helpful, but my codes still not working. Can you take a look at what I sent you? -----------------------------below is a "BuyerTestDriver.java", ONE PROBLEM EXIST. package buyer;public class BuyerTestDriver ERROR HERE { public class BuyerTestDriver { /** * @param args */ public static void main(String[] args) { // create and test a BuyerImpl Buyer buyer = new BuyerImpl("XXXXX XXXXX"); System.out.println("The buyers name is: " + buyer.getName()); buyer.payForItem(new Product("Laptop", 14455.89)); // create and test a CashBuyer below Buyer cashBuyer = new CashBuyer("With Cash Buyer"); System.out.println("The buyers name is: " + cashBuyer.getName()); cashBuyer.payForItem(new Product("Desk", 194400.89)); // create and test a CreditBuyer Buyer creditBuyer = new CreditCardBuyer("The Motherboard Buyer"); System.out.println("The buyers name is: " + creditBuyer.getName()); creditBuyer.payForItem(new Product("Gold", 120000.89)); }} } -----------------------------------Below is "CashBuyer.java" and ONE ERROR SHOWS. package buyer;/** * A CashBuyer is a Buyer who pays with cash and no check accepted. * * @author of this code is Chan [email protected] * */public class CashBuyer extends BuyerImpl { /** * Here, create a new CashBuyer * * @param name - the name of the CashBuyer */ public CashBuyer (String name) ERROR HERE! @Override /** this code will override! * Pay for an item over here * * @param - the Product to pay for */ public void payForItem(Buyer item) { System.out.println(name + " is paying for item " + item.getName() + " with cash."); } }_________________Below is "CreditCardBuyer.java" package buyer;public class CreditCardBuyer { public static void main(String[] args) { /** * A CreditCardBuyer is a Buyer who pays with a credit card. * */ class CreditCardBuyer extends BuyerImpl "SHOWS ERROR HERE! { /** * Create a new CashBuyer * * @param name - the name of the CashBuyer */ public CreditCardBuyer (String name) { super(name); } @Override /** * Pay for an item * * @param - the Product to pay for */ public void payForItem(CreditCardBuyer item) SHOWS ERROR HERE { System.out.println(name + " is paying for item " + item.getName() + " with just cash buyer."); } } } } Something wrong in CreditCardBuyer java code i think because it kept saying the following" The nested type CreditCardBuyer cannot hide an enclosing type The method payForItem(CreditCardBuyer) of type CreditCardBuyer must override a superclass method at buyer.CreditCardBuyer.main(CreditCardBuyer.java:13)" No, I didn't touch anything. I'm still waiting. If we done with this tonight. I have another homework due in one day. I still encounter problem Sir Still on "CreditCardBuyer.java, line 13 error I tried so many time to send you a screen shoot but it says "Your Message is too long. (Over 70,000 characters after formatting tags)" I just sent you the ScreenShoot I think you will get it now. Thank for your help Screenshoot.docx Hi LogicPro, it works. I re-open a new Eclipe imported the files. There was a conflicted codes on previous Eclipe workplace. thank a lot Ok, thanks
http://www.justanswer.com/homework/82h8a-inheritance-method-overriding.html
CC-MAIN-2017-22
refinedweb
522
52.76
Source code for django.utils.encoding import codecs import datetime import locale from decimal import Decimal from urllib.parse import quote from django.utils import six from django.utils.functional import Promise class DjangoUnicodeDecodeError(UnicodeDecodeError): def __init__(self, obj, *args): self.obj = obj super().__init__(*args) def __str__(self): return '%s. You passed in %r (%s)' % (super().__str__(), self.obj, type(self.obj)) # For backwards compatibility. (originally in Django, then added to six 1.9) python_2_unicode_compatible = six.python_2_unicode_compatible[docs]def smart_text(s, encoding='utf-8', strings_only=False, errors='strict'): """ Return a string representing 's'. Treat bytestrings using the 'encoding' codec. If strings_only is True, don't convert (some) non-string-like objects. """ if isinstance(s, Promise): # The input is the result of a gettext_lazy() call. return s return force_text(s, encoding, strings_only, errors)_PROTECTED_TYPES = ( type(None), int, float, Decimal, datetime.datetime, datetime.date, datetime.time, )[docs]def is_protected_type(obj): """Determine if the object instance is of a protected type. Objects of protected types are preserved as-is when passed to force_text(strings_only=True). """ return isinstance(obj, _PROTECTED_TYPES)[docs), str): return s if strings_only and is_protected_type(s): return s try: if isinstance(s, bytes): s = str(s, encoding, errors) else: s = str(s) except UnicodeDecodeError as e: raise DjangoUnicodeDecodeError(s, *e.args) return s[docs]def smart_bytes(s, encoding='utf-8', strings_only=False, errors='strict'): """ Return)[docs, memoryview): return bytes(s) return str(s).encode(encoding, errors)smart_str = smart_text force_str = force_text smart_str.__doc__ = """ Apply smart_text in Python 3 and smart_bytes in Python 2. This is suitable for writing to sys.stdout (for instance). """ force_str.__doc__ = """ Apply force_text in Python 3 and force_bytes in Python 2. """[docs]def iri_to_uri(iri): """ Convert an Internationalized Resource Identifier (IRI) portion to a URI portion that is suitable for inclusion in a URL. This is the algorithm from section 3.1 of RFC 3987, slightly simplified since the input is assumed to be a string rather than an arbitrary byte stream. Take an IRI (string or UTF-8 bytes, e.g. '/I ♥ Django/' or b'/I \xe2\x99\xa5 Django/') and return a string containing the encoded result with ASCII chars only .parse.quote() already considers all # but the ~ safe. # The % character is also added to the list of safe characters here, as the # end of section 3.1 of RFC 3987 specifically mentions that % must not be # converted. if iri is None: return iri elif isinstance(iri, Promise): iri = str(iri) return quote(iri, safe="/#%[]=:;$&()+,!?*@'~")# List of byte values that uri_to_iri() decodes from percent encoding. # First, the unreserved characters from RFC 3986: _ascii_ranges = [[45, 46, 95, 126], range(65, 91), range(97, 123)] _hextobyte = { (fmt % char).encode(): bytes((char,)) for ascii_range in _ascii_ranges for char in ascii_range for fmt in ['%02x', '%02X'] } # And then everything above 128, because bytes ≥ 128 are part of multibyte # unicode characters. _hexdig = '0123456789ABCDEFabcdef' _hextobyte.update({ (a + b).encode(): bytes.fromhex(a + b) for a in _hexdig[8:] for b in _hexdig })[docs]def uri_to_iri(uri): """ Convert a Uniform Resource Identifier(URI) into an Internationalized Resource Identifier(IRI). This is the algorithm from section 3.2 of RFC 3987, excluding step 4. Take an URI in ASCII bytes (e.g. '/I%20%E2%99%A5%20Django/') and return a string containing the encoded result (e.g. '/I%20♥%20Django/'). """ if uri is None: return uri uri = force_bytes(uri) # Fast selective unqote: First, split on '%' and then starting with the # second block, decode the first 2 bytes if they represent a hex code to # decode. The rest of the block is the part after '%AB', not containing # any '%'. Add that to the output without further processing. bits = uri.split(b'%') if len(bits) == 1: iri = uri else: parts = [bits[0]] append = parts.append hextobyte = _hextobyte for item in bits[1:]: hex = item[:2] if hex in hextobyte: append(hextobyte[item[:2]]) append(item[2:]) else: append(b'%') append(item) iri = b''.join(parts) return repercent_broken_unicode(iri).decode()[docs(path, safe="/:@&+$,-_.!~*'()")def repercent_broken_unicode(path): """ As per section 3.2 of RFC 3987, step three of converting a URI into an IRI, repercent-encode any octet produced that is not part of a strictly legal UTF-8 octet sequence. """ while True: try: path.decode() except UnicodeDecodeError as e: # CVE-2019-14235: A recursion shouldn't be used since the exception # handling uses massive amounts of memory repercent = quote(path[e.start:e.end], safe=b"/#%[]=:;$&()+,!?*@'~") path = path[:e.start] + force_bytes(repercent) + path[e.end:] else: return path[docs]def filepath_to_uri(path): """Convert a file system path to a URI portion that is suitable for inclusion in a URL. Encode certain chars that would normally be recognized as special chars for URIs. Do not encode the ' character, as it is a valid character within URIs. See the encodeURIComponent() JavaScript function for details. """ if path is None: return path # I know about `os.sep` and `os.altsep` but I want to leave # some flexibility for hardcoding separators. return quote(path.replace("\\", "/"), safe="/~!*()'")def get_system_encoding(): """ The encoding of the default system locale. Fallback to 'ascii'()
https://docs.djangoproject.com/en/2.2/_modules/django/utils/encoding/
CC-MAIN-2020-16
refinedweb
839
52.05
Project Setup¶ We have just installed Pyrogram. In this page we’ll discuss what you need to do in order to set up a project with the library. Let’s see how it’s done. Contents API Keys¶ The very first step requires you to obtain a valid Telegram API key (API id/hash pair): Visit and log in with your Telegram Account. Fill out the form to register a new Telegram application. Done! The API key consists of two parts: api_id and api_hash. Important The API key is personal and must be kept secret. Note The API key is unique for each user, but defines a token for a Telegram application you are going to build. This means that you are able to authorize multiple users (and bots too) to access the Telegram database through the MTProto API by a single API key. Configuration¶ Having the API key from the previous step in handy, we can now begin to configure a Pyrogram project. There are two ways to do so, and you can choose what fits better for you: First option (recommended): create a new config.inifile next to your main script, copy-paste the following and replace the api_id and api_hash values with your own. This is the preferred method because allows you to keep your credentials out of your code without having to deal with how to load them: [pyrogram] api_id = 12345 api_hash = 0123456789abcdef0123456789abcdef Alternatively, you can pass your API key to Pyrogram by simply using the api_id and api_hash parameters of the Client class. This way you can have full control on how to store and load your credentials (e.g., you can load the credentials from the environment variables and directly pass the values into Pyrogram): from pyrogram import Client app = Client( "my_account", api_id=12345, api_hash="0123456789abcdef0123456789abcdef" ) Note To keep code snippets clean and concise, from now on it is assumed you are making use of the config.ini file, thus, the api_id and api_hash parameters usage won’t be shown anymore.
https://docs.pyrogram.org/intro/setup
CC-MAIN-2021-49
refinedweb
337
67.69
Perl 5.14 is now available. While this latest major release of Perl 5 brings with it many bugfixes, updates to the core libraries, and the usual performance improvements, it also includes a few nice new features. This series of articles provides a quick introduction to several of these features. One such feature is the package BLOCK syntax: package My::Class { ... } When you declare a package, you may now provide a block at the end of the declaration. Within that block, the current namespace will be the provided package name. Outside of that block, the previously effective namespace will be in effect. The block provides normal lexical scoping, so that any lexical variables declared within the block will be visible only inside the block. As well, any lexical pragmas will respect the block's scoping. You do not need a trailing semicolon after the closing curly brace. You may combine this with the package VERSION syntax introduced in Perl 5.12: package My::Class v2011.05.16 { ... } The VERSION must be an integer, a real number (with a single decimal), or a dotted-decimal v-string as shown in the previous example. When present, the VERSION declaration sets the package-scoped $VERSION variable within the given namespace to the provided value. perldoc -f package documents this syntax.
http://www.perl.com/pub/2011/05/new-features-of-perl-514-package-block.html
CC-MAIN-2014-52
refinedweb
217
57.47
Get notified when your long-running cell finishes execution. If you are a Jupyter Notebook user, there must have been scenarios when a particular cell took a lot of time to finish the execution. This is particularly common during model training in machine learning, hyperparameter optimization, or even when running lengthy computations, etc. If yes, then a browser notification that would inform you once the process is finished could come in real handy. This way, you would be able to navigate to other tabs and only return to your machine learning experiment once you get that completion notification. Well, it turns out that there is a Jupyter extension that does exactly this and is very aptly named Notify. In this article, we’ll see how to enable notifications both in Jupyter Notebook as well as Jupyter Lab using notify. Notify: A Jupyter Magic For Browser Notifications of Cell Completion Notify is a Jupyter Notebook extension that notifies the user once a long-running cell has finished executing. It does so through browser notification. As of now, notify is supported by Chrome (Version: 58.0.3029) and Firefox (Version: 53.0.3). Let’s now see how we can install the package and use it. Installation The installation can be done via pip pip install jupyternotify or through source as: git clone [email protected]:ShopRunner/jupyter-notify.git cd jupyter-notify/ virtualenv venv source venv/bin/activate pip install -r requirements.txt jupyter notebook Enabling notifications in Jupyter Notebook Firstly, we shall look at how we can enable notifications in Jupyter Notebooks. Enter the following text in the first cell of the Notebook: %load_ext jupyternotify When you’ll run this cell for the first time, your browser will ask you to allow notifications in your notebook. You should press ‘Yes.’ Now we are all set. Here I’ll show you a small code snippet that using the sleep() function. This function suspends (waits) the execution of the current thread for a given number of seconds. %%notify import time time.sleep(10) print('Finished!') On executing the above cell, you’ll get the following notification in your browser: When you click on the body of the notification, it will bring you directly to the browser window and the tab containing your notebook. Enabling notifications in Jupyter Lab While currently, there is no official support for Jupyter Lab, I found a workaround. Apparently, a user has submitted a pull request with the solution, but it hasn’t been merged as yet. However, the solution does work seamlessly. Paste the following in a Jupyter Lab cell followed by the code that you wish to run: !pip uninstall jupyternotify -y !pip install git+ %reload_ext jupyternotify We’ll now use the same code as above to check if the notifications are enabled or not. %%notify import time time.sleep(5) Customizing the Notification message If you wish to have a fancier notification message, the code can be easily tweaked to do so. %%notify -m "Congrats.The process has finished" import time time.sleep(3) There are many other options also available, for instance, firing a notification in the middle of a cell or automatically triggering the notification after a certain time. I’ll highly encourage you to check out the GitHub repository of the package for detailed information. Conclusion A feature as trivial as a notification can be beneficial, especially when you regularly work with processes that take a lot of time to execute. It is pretty handy to be able to navigate to another tab or even a desktop and still get notified when a running process finishes. Sometimes, small addons like jupyternotify can really help to increase the productivity of a user. Originally published here
https://parulpandey.com/2021/02/06/enabling-notifications-in-your-jupyter-notebooks-for-cell-completion/
CC-MAIN-2022-21
refinedweb
622
55.74
Python syntax highlighted Markdown doctest. Project description phmdoctest 1.4.0 Introduction Python syntax highlighted Markdown doctest Command line program and Python library to test Python syntax highlighted code examples in Markdown. - Creates a pytest Python module that tests Python examples in README and other Markdown files. - Reads these from Markdown fenced code blocks: - The test cases get run later by running pytest. - Simple use case is possible with no Markdown edits at all. - More features selected by adding HTML comment directives to the Markdown. - Set test case name. - Add a pytest custom marker. - Add a pytest.mark.skip decorator. - Promote names defined in a test case to module level globals. - Label any fenced code block for later retrieval (API). - Configurable. Discover and process many Markdown files in a single command. - Add inline annotations to comment out sections of code. - Get code coverage by running pytest with coverage. - Select Python source code blocks as setup and teardown code. - Setup applies to code blocks and optionally to session blocks. - An included Python library: Latest Development tools API. - Python function returns test file in a string. (testfile() in main.py) - Two pytest fixtures. (tester.py) - testfile_creator runs testfile(). Use with testfile_tester. - testfile_tester runs a pytest file with pytest's pytester in its isolated environment. - Runs phmdoctest and can run pytest too. (simulator.py) - Functions to read fenced code blocks from Markdown. (tool.py) - Test Markdown for Python examples. (tool.py) - Prepare directory for generated test files. (tool.py) - Extract testsuite tree and list of failing trees from JUnit XML. (tool.py) - Available as the pytest plugin pytest-phmdoctest. default branch status Website | Docs | Repos | pytest | Codecov | License Introduction | Installation | Sample usage | Sample Usage with HTML comment directives | CI usage | --report | Identifying blocks | Directives | skip | label on code and sessions | label on any fenced code block | pytest skip | pytest skipif | setup | teardown | share-names | clear-names | pytest mark decorator | label skip and mark example | setup and teardown example | share-names clear-names example | Configuration | Inline annotations | skipping blocks with --skip | --skip | short form of --skip | --fail-nocode | --setup | --teardown | Setup example | Setup for sessions | Execution context | Send outfile to stdout | Usage | Run as a Python module | Python API | pytest fixtures | Simulate command line | Hints | Directive hints | Related projects Changes | Contributions | About Installation It is advisable to install in a virtual environment. python -m pip install phmdoctest Sample usage Given the Markdown file example1.md shown in raw form here... # This is Markdown file example1.md ## Interactive Python session (doctest) ```py >>> print("Hello World!") Hello World! ``` ## Source Code and terminal output Code: ```python from enum import Enum class Floats(Enum): APPLES = 1 CIDER = 2 CHERRIES = 3 ADUCK = 4 for floater in Floats: print(floater) ``` sample output: ``` Floats.APPLES Floats.CIDER Floats.CHERRIES Floats.ADUCK ``` the command... phmdoctest doc/example1.md --outfile test_example1.py creates the python source code file test_example1.py shown here... """pytest file built from doc/example1.md""" from phmdoctest.functions import _phm_compare_exact def session_00001_line_6(): r""" >>> print("Hello World!") Hello World! """ def test_code_14_output_28(capsys): from enum import Enum class Floats(Enum): APPLES = 1 CIDER = 2 CHERRIES = 3 ADUCK = 4 for floater in Floats: print(floater) _phm_expected_str = """\ Floats.APPLES Floats.CIDER Floats.CHERRIES Floats.ADUCK """ _phm_compare_exact(a=_phm_expected_str, b=capsys.readouterr().out) Then run a pytest command something like this in your terminal to test the Markdown session, code, and expected output blocks. pytest --doctest-modules Or these two commands: pytest python -m doctest test_example1.py The line_6 in the function name session_00001_line_6 is the line number in example1.md of the first line of the interactive session. 00001 is a sequence number to order the doctests. The 14 in the function name test_code_14_output_28 is the line number of the first line of python code. 28 shows the line number of the expected terminal output. One test case function gets generated for each: - Markdown fenced code block interactive session - Python-code/expected-output Markdown fenced code block pair The --report option below shows the blocks discovered and how they are tested. Sample Usage with HTML comment directives Given the Markdown file shown in raw form here... <!--phmdoctest-mark.skip--> <!--phmdoctest-label test_example--> ```python print("Hello World!") ``` ``` incorrect expected output ``` the command... phmdoctest tests/one_mark_skip.md --outfile test_one_mark_skip.py creates the python source code file shown here... """pytest file built from tests/one_mark_skip.md""" import pytest from phmdoctest.functions import _phm_compare_exact @pytest.mark.skip() def test_example(capsys): print("Hello World!") _phm_expected_str = """\ incorrect expected output """ _phm_compare_exact(a=_phm_expected_str, b=capsys.readouterr().out) Run the --outfile with pytest... $ pytest -vv test_one_mark_skip.py test_one_mark_skip.py::test_example SKIPPED - The HTML comments in the Markdown are phmdoctest directives. - The mark.skip directive adds the @pytest.mark.skip() line. - The label directive names the test case function. - List of Directives - Directives are optional. - Markdown edits are optional. CI usage Test Python examples in README.md in Continuous Integration scripts. In this snippet for Linux the pytest test suite is in the tests folder. mkdir tests/tmp phmdoctest README.md --report --outfile tests/tmp/test_readme.py pytest --doctest-modules -vv tests This console shows testing Python examples in project.md. Look for the tmp tests at the bottom. Windows Usage on Appveyor. See this excerpt from ci.yml Actions usage example. It runs on Windows, Linux, and macOS. Please find the phmdoctest command at the bottom. No changes to README.md are needed here, look in the last job log. report option To see the GFM fenced code blocks in the MARKDOWN_FILE use the --report option like this: phmdoctest doc/example2.md --report which lists the fenced code blocks it found in the file example2.md. The test role column shows how each fenced code block gets tested. doc/example2.md fenced blocks ------------------------------------------------ block line test TEXT or directive type number role quoted and one per line ------------------------------------------------ python 9 code 14 output python 20 code 26 output 31 -- python 37 code python 44 code 51 output yaml 59 -- text 67 -- py 75 session python 87 code 94 output py 102 session ------------------------------------------------ 7 test cases. 1 code blocks with no output block. Identifying blocks The PYPI commonmark project provides code to extract fenced code blocks from Markdown. Specification CommonMark Spec and website CommonMark. Python code, expected output, and Python interactive sessions get extracted. Only GFM fenced code blocks are considered. A block is a session block if the info_string starts with py and the first line of the block starts with the session prompt: '>>> '. To be treated as Python code the opening fence should start with one of these: ```python ```python3 ```py3 plus the block contents can't start with '>>> '. The examples use the info_strings python for code and py for sessions since they render with coloring on GitHub, readthedocs, GitHub Pages, and Python package index. project.md has more examples of code and session blocks. It is ok if the info string is laden with additional text, it will be ignored. The entire info string will be shown in the block type column of the report. An output block is a fenced code block that immediately follows a Python block and starts with an opening fence like this which has an empty info string. ``` A Python code block has no output if it is followed by any of: - Python code block - Python session block - a fenced code block with a non-empty info string Test code gets generated for it, but there will be no assertion statement. Directives Directives are HTML comments containing test generation commands. They are edited into the Markdown file immediately before a fenced code block. It is OK if other HTML comments are present. See the <!--phmdoctest-skip--> directive in the raw Markdown below. With the skip directive no test code will be generated from the fenced code block. <!--phmdoctest-skip--> <!--Another HTML comment--> ```python print("Hello World!") ``` Expected Output ``` Hello World! ``` List of Directives Directive HTML comment | Use on blocks ---------------------------------- | --------------------- <!--phmdoctest-skip--> | code, session, output <!--phmdoctest-label IDENTIFIER--> | code, session <!--phmdoctest-label TEXT--> | any <!--phmdoctest-mark.skip--> | code <!--phmdoctest-mark.skipif<3.N--> | code <!--phmdoctest-setup--> | code <!--phmdoctest-teardown--> | code <!--phmdoctest-share-names--> | code <!--phmdoctest-clear-names--> | code <!--phmdoctest-mark.ATTRIBUTE--> | code skip The skip directive or --skip TEXT command line option prevents code generation for the code or session block. The skip directive can be placed on an expected output block. There it prevents checking expected against actual output. Example. label on code and sessions When used on a Python code block or session the label directive changes the name of the generated test function. Example. Two generated tests, the first without a label, shown in pytest -v terminal output: test_readme.py::test_code_93 FAILED test_readme.py::test_beta_feature FAILED label on any fenced code block On any fenced code block, the label directive identifies the block for later retrieval by the class phmdoctest.tool.FCBChooser(). The FCBChooser is used separately from phmdoctest in a different pytest file. This allows the test developer to write additional test cases for fenced code blocks that are not handled by phmdoctest. The directive value can be any string. # This is file doc/my_markdown_file.md <!--phmdoctest-label my-fenced-code-block--> ``` The label directive can be placed on any fenced code block. ``` Here is Python code to fetch it: import phmdoctest.tool chooser = phmdoctest.tool.FCBChooser("doc/my_markdown_file.md") contents = chooser.contents(label="my-fenced-code-block") print(contents) Output: The label directive can be placed on any fenced code block. pytest skip The <!--phmdoctest-mark.skip--> directive generates a test case with a @pytest.mark.skip() decorator. Example. pytest skipif The <!--phmdoctest-mark.skipif<3.N--> directive generates a test case with the pytest decorator @pytest.mark.skipif(sys.version_info < (3, N), reason="requires >=py3.N"). N is a Python minor version number. Example. setup A single Python code block can assign names visible to other code blocks by adding a setup directive or using the --setup command line option. Names assigned by the setup block get copied to the test module's global namespace after the setup block runs. Here is an example setup block from setup.md: import math mylist = [1, 2, 3] a, b = 10, 11 def doubler(x): return x * 2 Using setup modifies the execution context of the Python code blocks in the Markdown file. The names math, mylist, a, b, and doubler are visible to the other Python code blocks. The objects can be modified. Example. teardown Selects a single Python code block that runs at test module teardown time. A teardown block can also be designated using the --teardown command line option. Example. share-names Names assigned by the Python code block get copied to the test module as globals after the test code runs. This happens at run time. These names are now visible to subsequent test cases generated for Python code blocks in the Markdown file. share-names modifies the execution context as described for the setup directive above. The share-names directive can be used on more than one code block. Example. This directive effectively joins its Python code block to the following Python code blocks in the Markdown file. clear-names After the test case generated for the Python code block with the clear-names directive runs, all names that were created by one or more preceding share-names directives get deleted. The names that were shared are no longer visible. This directive also deletes the names assigned by setup. Example. pytest mark decorator The <!--phmdoctest-mark.ATTRIBUTE--> directive adds a @pytest.mark.ATTRIBUTE decorator to the generated test function. ATTRIBUTE is a valid Python attribute identifier. This defines a marker to pytest that is used to select and deselect tests. See the pytest documentation section "Working with custom markers". The file mark_example.md contains example usage of the user defined marker "slow". It generates test_mark_example.py label skip and mark example The file directive1.md contains example usage of label, skip, and mark directives. The command below generates test_directive1.py. phmdoctest doc/directive1.md --report produces this phmdoctest doc/directive1.md --outfile test_directive1.py setup and teardown example The file directive2.md contains example usage of label, skip, and mark directives. The command below generates test_directive2.py. phmdoctest doc/directive2.md --report produces this phmdoctest doc/directive2.md --outfile test_directive2.py share-names clear-names example The file directive3.md contains example usage of share-names and clear-names directives. The command below generates test_directive3.py. phmdoctest doc/directive3.md --report produces this phmdoctest doc/directive3.md --outfile test_directive3.py Configuration Supply a .ini, .cfg, or .toml configuration file in place of the Markdown file. Configuration features: - Choose Markdown files for test file generation. (glob wildcards). - Exclude Markdown files from test file generation. (glob wildcards). - Name the output directory. - Removes stale test files from output directory. - Enable printing. Place a [tool.phmdoctest] section in the configuration file. How to configure. Inline annotations Inline annotations comment out sections of code. They can be added to the end of lines in Python code blocks. They should be in a comment. phmdoctest:omitcomments out a section of code. The line it is on, plus following lines at greater indent get commented out. phmdoctest:passcomments out one line of code and prepends the pass statement. Here is a snippet showing how to place phmdoctest:pass in the code. The second block shows the code that is generated. Note there is no # immediately before phmdoctest:pass. It is not required. import time def takes_too_long(): time.sleep(100) # delay for awhile. phmdoctest:pass takes_too_long() import time def takes_too_long(): pass # time.sleep(100) # delay for awhile. phmdoctest:pass takes_too_long() Use phmdoctest:omit on single or multi-line statements. Note the two commented out time.sleep(99). They follow and are indented more that the if condition:line with phmdoctest:omit. import time # phmdoctest:omit condition = True if condition: # phmdoctest:omit time.sleep(99) time.sleep(99) # import time # phmdoctest:omit condition = True # if condition: # phmdoctest:omit # time.sleep(99) # time.sleep(99) Inline annotation processing counts the number of commented out sections and adds the count as the suffix _N to the name of the pytest function in the generated test file. Inline annotations are similar, but less powerful than the Python standard library doctest directive #doctest+SKIP. Improper use of phmdoctest:omit can cause Python syntax errors. The examples above are snippets that illustrate how to use inline annotations. Here is an example that produces a pytest file from Markdown. The command below takes inline_example.md and generates test_inline_example.py. phmdoctest doc/inline_example.md --outfile test_inline_example.py skipping blocks with skip option If you don't want to generate test cases for Python blocks precede the block with a skip directive or use the --skip TEXT option. More than one skip directive or --skip TEXTis allowed. The following describes using --skip TEXT. The code in each Python block gets searched for the substring TEXT. Zero, one or more blocks will contain the substring. These blocks will not generate test cases in the output file. - The Python code in the fenced code block gets searched. - The info string is not searched. - Output blocks are not searched. - Both Python code and session blocks get searched. - Case is significant. The report shows which Python blocks get skipped in the test role column, and the Python blocks that matched each --skip TEXT in the skips section. This option makes it very easy to inadvertently exclude Python blocks from the test cases. In the event no test cases get generated, the option --fail-nocode described below is useful. Three special --skip TEXT strings work a little differently. They select one of the first, second, or last of the Python blocks. Only Python blocks get counted. --skip FIRSTskips the first Python block. --skip SECONDskips the second Python block. --skip LASTskips the final Python block. skip option This command using --skip: phmdoctest doc/example2.md --skip "Python 3.7" --skip LAST --report --outfile test_example2.py Produces the report doc/example2.md fenced blocks ----------------------------------------------------- block line test TEXT or directive type number role quoted and one per line ----------------------------------------------------- python 9 code 14 output python 20 skip-code "Python 3.7" 26 skip-output 31 -- python 37 code python 44 code 51 output yaml 59 -- text 67 -- py 75 session python 87 code 94 output py 102 skip-session "LAST" ----------------------------------------------------- 5 test cases. 1 skipped code blocks. 1 skipped interactive session blocks. 1 code blocks with no output block. skip pattern matches (blank means no match) ------------------------------------------------ skip pattern matching code block line number(s) ------------------------------------------------ Python 3.7 20 LAST 102 ------------------------------------------------ creates the output file test_example2.py short form of skip option This is the same command as above using the short -s form of the --skip option in two places. It produces the same report and outfile. phmdoctest doc/example2.md -s "Python 3.7" -sLAST --report --outfile test_example2.py fail-nocode option The --fail-nocode option produces a pytest file that will always fail when no Python code or session blocks get found. Evem if no Python code or session blocks exist in the Markdown file a pytest file gets generated. This also happens when --skip eliminates all the Python code blocks. The generated pytest file will have the function def test_nothing_passes(). If the option --fail-nocode is passed the function is def test_nothing_fails() which raises an assertion. setup option A single Python code block can assign names visible to other code blocks by giving the --setup TEXT option. Please see the setup directive above. The rules for TEXT are the same as for --skip TEXT plus... - Only one block can match TEXT. - The block cannot match a block that is skipped. - The block cannot be a session block even though session blocks get searched for TEXT. - It is ok if the block has an output block. It will be ignored. teardown option A single Python code block can supply code run by the pytest teardown_module() fixture. Use the --teardown TEXT option. Please see the teardown directive above. The rules for TEXT are the same as for --setup above except TEXT won't match a setup block. Setup example For the Markdown file setup.md run this command to see how the blocks get tested. phmdoctest doc/setup.md --setup FIRST --teardown LAST --report doc/setup.md fenced blocks ------------------------------------------------- block line test TEXT or directive type number role quoted and one per line ------------------------------------------------- python 9 setup "FIRST" python 20 code 27 output python 37 code 42 output python 47 code 51 output python 58 teardown "LAST" ------------------------------------------------- 3 test cases. This command phmdoctest doc/setup.md --setup FIRST --teardown LAST --outfile test_setup.py creates the test file test_setup.py Setup for sessions The pytest option --doctest-modules is required to run doctest on sessions. pytest runs doctests in a separate context. For more on this see Execution context below. To allow sessions to see the variables assigned by the --setup code block, add the option --setup-doctest Here is an example with setup code and sessions setup_doctest.md. The first part of this file is a copy of setup.md. This command uses the short form of setup and teardown. -u for setup and -d for teardown. phmdoctest doc/setup_doctest.md -u FIRST -d LAST --setup-doctest --outfile test_setup_doctest.py It creates the test file test_setup_doctest.py Execution context When run without --setup - pytest and doctest determine the order of test case execution. - phmdoctest assumes test code and session execution is in file order. - Test case order is not significant. - Code and expected output run within a function body of a pytest test case. - If pytest is invoked with --doctest-modules: - Sessions are run in a separate doctest execution context. - Otherwise, sessions do not run. With --setup - Names assigned by setup code are visible to code blocks. - Code blocks can modify the objects created by the setup code. - Code block test case order is significant. - Session order is not significant. - If pytest is run with --doctest-modules: - pytest runs two separate contexts: one for sessions, one for code blocks. - setup and teardown code gets run twice, once by each context. - the names assigned by the setup code block are are notvisible to the sessions. With share-names - Only following code blocks can modify the shared objects. - Shared objects will not be visible to sessions if pytest is run with --doctest-modules. - After running a code block with clear-names - Shared objects will no longer be visible. - Names assigned by setup code will no longer be visible. With --setup and --setup-doctest Same as the setup section plus: - names assigned by the setup code block are visible to the sessions. - Sessions can modify the objects created by the setup code. - Session order is significant. - Sessions and code blocks are still running in separate contexts isolated from each other. - A session can't affect a code block, and a code block can't affect a session. - Names assigned by the setup code block are globally visible to the entire test suite via the pytest doctest_namespace fixture. See hint near the end Hints. pytest live logging demo The live logging demos reveals pytest execution contexts. pytest Live Logs show the execution order of setup_module(), test cases, sessions, and teardown_module(). There are 2 demo invocations in the workflow action called pytest Live Log Demo. GitHub login required. Send outfile to stdout To redirect the above outfile to the standard output stream use one of these two commands. Be sure to leave out --report when sending --outfile to standard output. phmdoctest doc/example2.md -s "Python 3.7" -sLAST --outfile - or phmdoctest doc/example2.md -s "Python 3.7" -sLAST --outfile=- Usage phmdoctest --help Usage: phmdoctest [OPTIONS] MARKDOWN_FILE MARKDOWN_FILE may also be .toml, .cfg, or .ini configuration file. Options: --outfile TEXT Write generated test case file to path TEXT. "-" writes to stdout. -s, --skip TEXT Any Python code or interactive session block that contains the substring TEXT is not tested. More than one --skip TEXT is ok. Double quote if TEXT contains spaces. For example --skip="python 3.7" will skip every Python block that contains the substring "python 3.7". If TEXT is one of the 3 capitalized strings FIRST SECOND LAST the first, second, or last Python code or session block in the Markdown file is skipped. --report Show how the Markdown fenced code blocks are used. --fail-nocode This option sets behavior when the Markdown file has no Python fenced code blocks or interactive session blocks or if all such blocks are skipped. When this option is present the generated pytest file has a test function called test_nothing_fails() that will raise an assertion. If this option is not present the generated pytest file has test_nothing_passes() which will never fail. -u, --setup TEXT The Python code block that contains the substring TEXT is run at test module setup time. Variables assigned at the outer level are visible as globals to the other Python code blocks. TEXT should match exactly one code block. If TEXT is one of the 3 capitalized strings FIRST SECOND LAST the first, second, or last Python code or session block in the Markdown file is matched. A block will not match --setup if it matches --skip, or if it is a session block. Use --setup-doctest below to grant Python sessions access to the globals. -d, --teardown TEXT The Python code block that contains the substring TEXT is run at test module teardown time. TEXT should match exactly one code block. If TEXT is one of the 3 capitalized strings FIRST SECOND LAST the first, second, or last Python code or session block in the Markdown file is matched. A block will not match --teardown if it matches either --skip or --setup, or if it is a session block. --setup-doctest Make globals created by the --setup Python code block or setup directive visible to session blocks and only when they are tested with the pytest --doctest-modules option. Please note that pytest runs doctests in a separate context that only runs doctests. This option is ignored if there is no --setup option. --version Show the version and exit. --help Show this message and exit. Run as a Python module To run phmdoctest from the command line: python -m phmdoctest doc/example2.md --report Python API Call main.testfile() to generate a pytest file in memory. Please see the Python API here. The example generates a pytest file from doc/setup.md and compares the result to doc/test_setup.py. from pathlib import Path import phmdoctest.main generated_testfile = phmdoctest.main.testfile( "doc/setup.md", setup="FIRST", teardown="LAST", ) expected = Path("doc/test_setup.py").read_text(encoding="utf-8") assert expected == generated_testfile pytest fixtures Use fixture testfile_creator to generate a test file in memory. Pass the test file to fixture testfile_tester to run the test file in the pytester environment. Fixture API | Example. See more uses in tests/test_examples.py, tests/test_details.py, and tests/test_many_markdown.py. The fixtures run pytest much faster than run_and_pytest() below since there is no subprocess call. In the readthedocs documentation see the section Development tools API 1.4.0. pytest's pytester is suitable for pytest plugin development. Simulate command line To simulate a command line call to phmdoctest from within a Python script phmdoctest.simulator offers the function run_and_pytest(). - it creates the --outfile in a temporary directory - optionally runs pytest on the outfile - pytest can return a JUnit XML report - useful during development to validate the command line and prevent use of a stale --outfile Please see the Latest Development tools API section or the docstring of the function run_and_pytest() in the file simulator.py. Pass pytest_options as a list of strings as shown below. import phmdoctest.simulator command = "phmdoctest doc/example1.md --report --outfile temporary.py" simulator_status = phmdoctest.simulator.run_and_pytest( well_formed_command=command, pytest_options=["--doctest-modules", "-v"] ) assert simulator_status.runner_status.exit_code == 0 assert simulator_status.pytest_exit_code == 0 Hints To read the Markdown file from the standard input stream. Use -for MARKDOWN_FILE. Write the test file to a temporary directory so that it is always up to date. In CI scripts the following shell command will create the temporary directory tmp in the tests folder on Windows, Linux, and macOS. python -c "from pathlib import Path; d = Path('tests') / 'tmp'; d.mkdir(mode=0o700)" It is easy to use --output by mistake instead of --outfile. If Python code block has no output, put assert statements in the code. Use pytest option --doctest-modulesto test the sessions. Markdown indented code blocks (Spec section 4.4) are ignored. simulator_status.runner_status.exit_code == 2 is the click command line usage error. Since phmdoctest generates code, the input file should be from a trusted source. An empty code block gets given the role del-code. It is not tested. Use special TEXT values FIRST, SECOND, LAST for the command line options --setupand --teardownsince they only match one block. The variable names managenamespace, doctest_namespace, capsys, and _phm_expected_strshould not be used in Markdown Python code blocks since they may be used in generated code. Setup and teardown code blocks cannot have expected output. To have pytest collect a code block with the label directive start the value with test_. With the --setup-doctestoption, names assigned by the setup code block are globally visible to the entire test suite. This is due to the scope of the pytest doctest_namespace fixture. Try using a separate pytest command to test just the phmdoctest test. The module phmdoctest.fixture is imported at pytest time to support setup, teardown, share-names, and clear-names features. The phmdoctest Markdown parser finds fenced code blocks enclosed by html <details>and </details>tags. The tags may require a preceding and trailing blank line to render correctly. See example in tests/test_details.py. Try redirecting phmdoctest standard output into PYPI Pygments to colorize the generated test file. python -m phmdoctest project.md --outfile - | pygmentize If the --outfile is written into a folder that pre-exists in the repository, consider adding the outfile name to .gitignore. If the outfile name later changes, the change will be needed in .gitignore too. # Reserved for generated test file. tests/test_readme.py Directive hints - Only put one of setup, teardown, share-names, or clear-names on a code block. - Only one block can be setup. Only one block can be teardown. - The setup or teardown block can't have an expected output block. - Label directive does not generate a test case name on setup and teardown blocks. - Directives displayed in the --reportstart with a dash like this: -label test_i_ratio. - Code generated by Python blocks with setup and teardown directives runs at the pytest fixture scope="module"level. - Code generated by Python blocks with share-names and clear-names directives are collected and run by pytest like any other test case. - A malformed HTML comment ending is bad. Make sure it ends with both dashes like -->. Running with --reportwill expose that problem. - The setup, teardown, share-names, and clear-names directives have logging. To see the log messages, run pytest with the option: --log-cli-level=DEBUG --color=yes - There is no limit to number of blank lines after the directive HTML comment but before the fenced code block. - The directive <!--phmdoctest-mark.xfail-->might be useful as an alternative to <!--phmdoctest-mark.skip-->for failing examples. - The directive <!--phmdoctest-mark.ATTRIBUTE-->will not be effective when used with <!--phmdoctest-setup-->or <!--phmdoctest-teardown-->because pytest marks can only be applied to tests. They have no effect on fixtures. Setup and teardown use fixtures. Related projects - rundoc - byexample - sphinx.ext.doctest - sybil - doxec - egtest - pytest-phmdoctest - pytest-codeblocks.
https://pypi.org/project/phmdoctest/
CC-MAIN-2022-27
refinedweb
4,927
60.51
Dynamic - The dynamic keywordAs almost .net community people know, the .Net 4.0 introduced many features like the optional and named parameters, the covariance, the countravariance and the dynamic which is a new introduced type that comes with serveral advantages. Frankly, I wasn't convinced by this special kind of object from the beginning and I had some doubts about this for many reasons such as the utility of that kind of object, the benefit of using that kind of dynamic programming as the reflection can do perfectly the job, the raison of ignoring the compiler and the compile time as it could help us to detect program bugs earlier and at last but not least the impact of this kind of programming on the objet inheritance and the polymorphism case. So I tried to parse this kind of object to see what are the benefits and what are the limits. In this article, I will expose those two issues not through the theorist manners but through real cases via some implementation techniques.First, let's begin by introducing the DLR Dynamic Language Runtime.1. Introduction to the Dynamic Language Runtime:At the contrast of the idea that might people have, the DLR unit is neither a dependent component that runs within the Common Language Runtime nor a new added component to the .Net Framwork at the core level. It is built upon both the .Net Framework kernel and the Common Language Runtime like the rest of the assemblies, I mean System.IO, System.Reflection and the others. The DLR is presented by the System.Dynamic namespace. The purpose of that component is to primary bring support to dynamic languages like IronRubby and IronPhyton and dealing with unknown objects structures at the run time such as dealing with DOM Document Object Model cases where the structure of a given XML unit is not known in advance. The DLR reprsents the same aim as the equivalent in the Java world ,I mean the Da vinci Machine which extends the Java Virtual Machine to enable deal dynamically with objects and dynamic languages environements.2.The dynamic vs. the object keywordFirst of all, it is necessary to have visual studio 2010 and .Net framework 4.0 to start programming with dynamic objects. It is possible to download an express version from this link. Let's create a simple console application to start test this object type. class Program { static void Main(string[] args) { //t is type of dynamic dynamic t = 2; Console.WriteLine(" Current assignement: {0} is {1}\r\n", t, t.GetType()); //t is now accepting a string t = "Bechir"; Console.WriteLine("Current assignement: {0} is {1}\r\n", t, t.GetType()); //t is now accepting a person t = new Person { Name = "Bechir" }; Console.WriteLine("Current assignement: {0} is {1}\r\n", t, t.GetType()); Console.Read(); } } public class Person { public string Name { get; set; } }As you can see, the t dynamic variable here can accept any kind of type as assignment. Well, the question that could be asked in this context is what's the new as the object type is also doing the same thing by accepting any kind of assignment?If we try to change the keyword dynamic by object and try to explore the intermediate language represented by the bellow figure 1.We run the ildasm.exe with the command /metadata against the executing assembly and we try to explore the main method IL code.figure 1In the figure 1, it is clear that the type t is defined as an instance of type object so the compiler knows it well. But if now we substitute the object keyword by dynamic and run the ildasm /metadata again against our assembly and start parsing again the main method instructions stack, we can remark that the instance type of object is no longer present. Instead of that, we can find a generic class called CallSite<> that belongs to the System.Runtime.CompilerServices namespace as shown in the figure 2. Figure 2This class plays the role of the intermediate between the caller and the new runtime created object. As the compiler doesn't know about that object and what is going to be at the runtime. The code is not emited according to the classic way but it is encapsulated as an expression tree (notion borrowed from System.Linq.Expressions) that holds necessary data to help resolve the correct object and bind it appropriatly at runtime. This will happen instead of emitting the IL Intermediate Language as usually.Although the CallSite<> class is exposed to the final user, it is not necessary to make use of it directly from the code. It is rather used internaly by the compiler.As a conclusion for this section, althought the code behaviour appears as the same, the way that the compiler deals with and the manner that objects are created differs completely from dynamic to object.3.The dynamic vs. the var keywordAlmost .Net community people know that C# 3.0 introduced the keyword var which helps us create implicit strongly typed objects, and that is particularly useful when dealing with linq.Let's try to substitute the dynamic keyword by var keyword and see what's happening.A compile time error is raised at compile time, although that var keyword gives an impression that this is generic as same as object type, this is a wrong impression as the result is illustrated by those two compiler errors Figure3If we try to modify the code a little class Program { static void Main(string[] args) { //t is now accepting a person var t = new { Name = "Bechir" }; Console.WriteLine("Current assignement: {0} is {1}\r\n", t, t.GetType()); Console.Read(); } }And then run the ildasm /metadata against the executing assembly to see the generated intermediate language. Figure 4The compiler behavior is now different when comparing with the previous cases. It creates an anonymous Type with a reference 0 at the compile time in order to wrap up our anonymous object at the difference to the dynamic case where the compiler doesn't know any thing about our object and leave this job to the CallSite<> and the runtime binder.As a conclusion for this section, the var keyword is totaly different when comparing with the dynamic one as the first type declared with the var keyword is resolved at the compile time as same as the other explecit strongly typed objects. And once that object is assigned, it couldn't accept subsequent assignements of differents types.4.The dynamic method dispatch paradigmTo understand well the issue let's suppose this case. First suppose this pattern: public class Base { public virtual void InstanceMethod() { Console.WriteLine("The base class method is called"); } } public class FirstClass : Base { public override void InstanceMethod() { Console.WriteLine("The first class method is called"); } } public class SecondClass : Base { public override void InstanceMethod() { Console.WriteLine("The second class method is called"); } } public class ThirdClass : Base { public override void InstanceMethod() { Console.WriteLine("The third class method is called"); } }To explain a little this above pattern, we can say that this class that we called Base serves as a base class for the first, second and the third class. An invoker class which plays the role of visitor (Visitor pattern) will invoke the corresponding method of the given class class Invoker { public void Method(Base instance) { instance.InstanceMethod(); } }Inspite of the fact that the defined argument is type of Base in the above method (Method), the right method (InstanceMethod) will be invoked if a given derived class (the first, the second or the third class) is used as an argument at the method level as the compiler will detect at compile time which method to invoke based on the derived instance member type.Thus, for example, if we run the bellow code: Invoker dummy = new Invoker(); SecondClass secondclass = new SecondClass(); dummy.Method(secondclass); Console.Read(); The result will be "The second class method is called". This mecanism is called single dispatch, it is the one where the compiler will comit to a given member at compile time which is type of second class in this case.Now imagine this case: public class FirstClass { public void InstanceMethod() { Console.WriteLine("The first class is invoked"); } } public class SecondClass { public void InstanceMethod() { Console.WriteLine("The second class is invoked"); } } public class ThirdClass { public void InstanceMethod() { Console.WriteLine("The third class is invoked"); } }Although the three classes implement the same method signature for the (InstanceMethod), they didn't have an explicit relationship like inheritance or even polymorphism through interfaces. In this case, if we try to use the invoker class as a visitor we will fall down into a serious pattern case.class Invoker{public void Method(?!!!! instance){instance.InstanceMethod();}}As we can observe, the issue is what can one put as an argument in this case? Hence, the compiler won't be able to determine which argument type to use.if we try this bellow implementation, then we fail to call the InvokeMethod as it is not a part of the object type inspite of the fact that all classes inherit from object type. So that it doesn't make sense here to put argument type of object. class Invoker { public void Method(object instance) { instance.InstanceMethod();//Not able to resolve the InstanceMethod } }Even the generic type doesn't make sens as a solution in this case class Invoker<T> where T : class { public void Method(T instance) { instance.InstanceMethod();//Not able to resolve the InstanceMethod } }The only solution for that is to refactor the code by making all the classes implementing an interface that defines InstanceMethod as the following code illustrates. interface IInterface { void InstanceMethod(); } #region first alternative public class FirstClass : IInterface { public void InstanceMethod() { Console.WriteLine("The first class is invoked"); } } public class SecondClass : IInterface { public void InstanceMethod() { Console.WriteLine("The second class is invoked"); } } public class ThirdClass : IInterface { public void InstanceMethod() { Console.WriteLine("The third class is invoked"); } }But suppose now that we don't have the code source of those classes?It won't be possible to refactor the code so that it suits our need.Using dynamic keyword is a solution key for this case. If we use this bellow code class Invoker { public void Method(dynamic instance) { instance.InstanceMethod();//Enable to resolve the InstanceMethod :) } }Then the problem is resolved as the type determination is reported to the run time then the corresponding method (InstanceMethod) to the given argument (instance) type will be invoked in that moment.As a conclusion for this section we can say that dynamic programming introduces a new kind of dispatching based not only on the member invoker but also on the dynamic arguments those overload a set of methods.5.The dynamic keyword vs. the generic typeSuppose now this case static void Add<T>(T operand1, T oprand2) { Console.WriteLine(operand1 + oprand2); //This will result a compile time error }Me personaly, I suffred a lot from that issue. But with the dynamic the problem is resolved static void Add(dynamic operand1, dynamic operand2) { Console.WriteLine(operand1 + operand2); }In the other hand, we have to be careful when calling this code. This bellow code for example will throw an exception at run time as there is not an overloaded operator + version that supports the addition between 2 and object.Add(2, new object());Hence a RuntimeBinderException will be thrown. Then we can refractor the code to avoid this kind of problems. static void Add(dynamic operand1, dynamic operand2) { try { Console.WriteLine(operand1 + operand2); } catch (RuntimeBinderException) { Console.WriteLine("Unsupported operantion"); } }As a conclusion for this section, it is possible to avoid some situations where generics illustrates some limits but one has to be careful to avoid application instability as all anomalies are reported to the runtime.6. The dynamic keyword vs. reflection mechanismTo illustrate the dynamic programming using dynamic keyword over reflection let's suppose this case: class DummyClass { public void Method(int x) { Console.WriteLine("{0} is an integer", x); } public void Method(string x) { Console.WriteLine("{0} is a string", x); } public void Method(double x) { Console.WriteLine("{0} is a double", x); } public void Method(Person x) { Console.WriteLine("{0} is a person", x); } } class Person { public string Name { get; set; } }Suppose now that we are in situation to deal with those above methods by retrieving and invoking them through reflection. Then the implementation in that case will be as follow static void Main(string[] args) { object[] arr = { 1, 2.3, "viva lalgery !!!", new Person { Name = "Bechir" } }; DummyClass instance = new DummyClass(); Type t = typeof(DummyClass); MethodInfo[] methods = t.GetMethods(); MethodInfo intMethod = methods[0]; MethodInfo stringMethod = methods[1]; MethodInfo doubleMethod = methods[2]; MethodInfo personMethod = methods[3]; foreach (var item in arr) { if (item.GetType() == typeof(int)) { intMethod.Invoke(instance, new object[] { item }); } else if (item.GetType() == typeof(string)) { stringMethod.Invoke(instance, new object[] { item }); } else if (item.GetType() == typeof(double)) { doubleMethod.Invoke(instance, new object[] { item }); } else if (item.GetType() == typeof(Person)) { personMethod.Invoke(instance, new object[] { item }); } } Console.Read(); }Now, if we try to use the dynamic keyword alternative instead, the code will be dramatically simplfied like this: object[] arr = { 1, 2.3, "viva lalgery !!!", new Person { Name = "Bechir" } }; DummyClass instance = new DummyClass(); foreach (dynamic item in arr) { instance.Method(item); } Console.Read();The both codes, namely the reflection one and the dynamic one give the same result.As a conclusion, the dynamic keyword comes to solve some problems that a developer can encounter especially when the static programming mode illustrate some constraints for the developer to implement some concepts.Good Dotneting!!! ©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/uploadfile/yougerthen/toying-with-the-C-Sharp-4-0/
CC-MAIN-2016-36
refinedweb
2,274
54.42
In this tutorial we will see how to use ultrasonic sensor as a counter and output will be shown on LCD display. It will count number of times object passes in front of sensor. So let’s get started. For this you will need - Arduino, - Ultrasonic sensor, - LCD display, - Potentiometer (for adjusting contrast of LCD), - Breadboard, - 4.7 K ohms Resistor, - Jumper wires. Do connection as shown in diagram. Provide separate ground for ultrasonic sensor. And connect VCC pin near to 5V pin. Otherwise it will not work and sensor will give constantly increment values even you are not doing anything with sensor. Sketch for Ultrasonic sensor counter with LCD #include <LiquidCrystal.h> #define trigPin 13 #define echoPin 8 // initialize the library with the numbers of the interface pins LiquidCrystal lcd(12, 11, 5, 4, 3, 2); int counter = 0; int currentState = 0; int previousState = 0; void setup() { pinMode(trigPin, OUTPUT); pinMode(echoPin, INPUT); lcd.begin(16, 2); lcd.setCursor(4, 0); lcd.print("counter"); } void loop() { long duration, distance; digitalWrite(trigPin, LOW); delayMicroseconds(2); digitalWrite(trigPin, HIGH); delayMicroseconds(10); digitalWrite(trigPin, LOW); duration = pulseIn(echoPin, HIGH); distance = (duration/2) / 29.1; if (distance <= 10){ currentState = 1; } else { currentState = 0; } delay(200); if(currentState != previousState){ if(currentState == 1){ counter = counter + 1; lcd.setCursor(4,1); lcd.print(counter); } } } This code is already explain in previous tutorial measuring distance with ultrasonic and ultrasonic as a counter. In this tutorial we have added LCD for output. Now let’s come to the programming part. Include liquid crystal library. Define trigger and echo pins. Initialize liquid crystal library. Define variables for detecting changes in state. In setup function declare pin mode for trigger and echo pin. Set cursor at little bit right side and print counter text on LCD. In loop function this will determine the distance. If any object comes within range of 10 cm. current state will be equal to 1. If current state equal to 1 it will count 1. LCD cursor is set to second row for printing counting values. You can see when I move box in front of ultrasonic sensor it counts how many time box passes. LIST OF COMPONENT BUY ONLINE: (Arduino) (LCD display) (Ultrasonic sensor) (Potentiometer) (Resistor) (Breadboard) (Jumper wire) TILL THEN KEEP LEARNING KEEP MAKING 🙂 really nice tutorial, but if keep the box in front of sensor it keeps on adding numbers to counter, what needs to be change in code if I want it to add 1 only wen something passed from its front….otherwise keep the same number….means count one at one time not continuous. you can change something like this.. while (distance =< 10) { if (distance > 10) { currentState = 1;} } i have not tested it. but you can do some trial and error.
http://roboticadiy.com/arduino-tutorial-ultrasonic-sensor-counter-with-lcd/
CC-MAIN-2018-22
refinedweb
461
68.16
16 September 2011 16:15 [Source: ICIS news] LONDON (ICIS)--?xml:namespace> “In our view, the European TDI market could be hit by ongoing [global] overcapacity,” said ING analyst Adam Milewicz. Noting that Hungary-based BorsodChem has started production at a 160,000 tonne/year TDI installation, while Germany-based Bayer MaterialScience should start production at a 250,000 tonne/year line in China this year and a 300,000 tonne/year line in Germany in 2014, Milewicz added: “Rising oversupply is the main risk for Ciech’s [75,000 tonne/year] business. Ciech could be the victim of expanding oversupply of TDI in the short term”. “In our view, large capacity additions coming on stream [globally] over the next four years (at a forecast 14.7% CAGR [compound annual growth rate] for 2010-2014) will not be absorbed by forecast growing demand (at a 5.2% CAGR for 2010-2014),” the analyst said. A further worry for Ciech is that pertrochemicals major BASF plans to build a TDI plant in Ciech, Europe's fifth largest TDI producer through its subsidiary Zachem, would still be operating against far larger TDI players even after its own 15,000 tonne/year expansion is completed in 2013, ING's figures showed. “The global TDI market is oligopolistic, with Bayer and BASF having a combined market share of 43% in 2010. The top five producers account for 65% of the global TDI market,” Milewicz said. Another headache for Ciech is that the third quarter has seen retreating TDI prices accompanied by high input costs. “In addition, the high cost of production technology means that Ciech could endure persistently weak TDI margins,” Milewicz added.
http://www.icis.com/Articles/2011/09/16/9493076/polands-ciech-pressured-by-rising-tdi-oversupply-ing.html
CC-MAIN-2015-18
refinedweb
279
58.21
01 September 2010 07:13 [Source: ICIS news] SINGAPORE (ICIS)--Asian petrochemical shares were trading higher on Wednesday, taking the cue from a rebound in ?xml:namespace> At 13:27 hours While PMI had steadily declined in May, June and July. With fears of a double-dip recession in the The market may be in for a disappointment later on Wednesday when the “The ISM (survey of manufacturing) is expected to drop by nearly 3 points to 52.7. That’s a hefty drop that would put manufacturing sector growth too close to zero for comfort,” said DBS Bank in a research note. In PTT edged up 1.51%, PTT Chem was 0.94% higher and PTT Aromatics rose 1.71%. Siam Cement was also up 1.01%. The Thai Cabinet had approved Tuesday night the list drafted by the National Environmental Board (NEB) that excluded downstream petrochemicals among those industries considered as environmentally harmful. The PTT group has 17 projects under court suspension, while conglomerate Siam Cement also has a number of projects stalled. The final hurdle for the projects’ restart was the approval of Shares of Japanese petrochemical companies tracked the rebound in the benchmark stock market index – the Nikkei 225, after slumping to a 16-month low on Tuesday. Asahi Kasei gained 1.20%, Mitsubishi Chemical was up 1.50% and Mitsui Chemicals was 0.93% higher, while the Nikkei 225 index added 76.05 points or 0.86% to 8,900.11. The Japanese government hoped to revive the slumping economy through an $11bn fiscal stimulus package announced this week but it may have to address the continued strength of the yen, which was hurting
http://www.icis.com/Articles/2010/09/01/9389573/asia-petchem-shares-up-on-china-data-ptt-up-on-mab-ta-phut.html
CC-MAIN-2015-06
refinedweb
279
65.32
It was asked in an RC Groups forum post if the gruvin9x features will be included in ER9X -- and conversely, if gruvin9x will keep up with feature and bug changes in ER9X. Hre's a copy of the answer I gave, including brief instructions for porting the gruvin9x Fr-Sky stuff over to ER9X, should anyone wish to do that at some point. This copy is here for instructional archive reasons. If you want to follow the conversation, you should start at the forum post I've been trying to keep a half an eye on the developments or ER9X since r262 -- but as you say, "nearly every day". So I'm afraid it's inevitable that the two projects will diverge -- if only for lack of time to manually merge in the changes on either side. Also, alas, it's inevitable that some core files get changed eventually, making keeping things in sync more difficult. In this case, it's happened already. For example, I've had to move a bunch of stuff out of menus.cpp and into a new file menus.h, so that the menu functions can be included for use in frsky.cpp. The alternative was to keep adding more and more code to menus.cpp, whereas keeping as much of the Fr-Sky stuff in one place made more sense to me. Of course, anyone who is set up to contribute code to ER9X (including Erazz of course) can do this themselves. I'm happy to help by answering questions where needed. Conversely, if there's any bug fixes/features introduced into ER9X that people think gruvin9x needs -- then present a case. I'm all ears. In case someone does decide to take this on, then ... Briefly, if you search all the .cpp and .h files for "FRSKY" (case sensitive), then you'll find all the places I've wedged in conditional code, between #ifdef FRSKY and #endif directives. (NOTE: There's changes in menus.cpp to the existing ER9X FRSKY code, which came before gruvin9x.) Off the top of my head, that list includes menus.cpp, menus.h (see below), myeeprom.h, file.h, file.cpp, er9x.h and er9x.cpp. Oh and pers.cpp. (NOTE: Files, er9x.* are named gruvin9x.* in the gruvin9x project.) The other thing then, is the splitting of menus.cpp into menus.cpp and menus.h. Essentially, you just have to copy my menus.h file, delete all the duplicate lines in menus.cpp and stick an #include "menus.h" at the top of menus.cpp. Then you just need to add frsky.h and frsky.cpp from the gruvin9x tree. (2010-12-16: Currently in branches/frsky/ -- not in trunk/.) Easy peasy! (not?) If anyone is going to take this on, then may I suggest you WAIT until I get to the point of merging the Fr-Sky features into the gruvin9x trunk -- otherwise you could be wasting a lot of effort and have to re-trace your tracks, several times. It's highly unlikely ER9X will change in a way that will break the Fr-Sky stuff. So waiting should do no harm. Hope that helps somehow, more that is frustrates! Finally -- I'm sure that any particular feature not in either version that you may want will be kindly considered by either developer(s).
http://code.google.com/p/gruvin9x/wiki/FrskyToER9X
crawl-003
refinedweb
559
85.08
Flex Mock -- Making Mocking Easy FlexMock is a simple, but flexible, mock object library for Ruby unit testing. Installation You can install FlexMock with the following command. $ gem install flexmock Changes Only significant changes (new APIs, deprecated APIs or backward-compatible changes) are documented here, a.k.a. minor or major version bumps. If you want a detailed changelog, go over the commit log in github (it's pretty low-traffic) 2.3.0: - implemented validation of call arity for partial mocks. By setting FlexMock.partials_verify_signatures = true flexmock will verify on partials that the number of arguments, and the keyword arguments passed to the mocked call match the existing method's signature 2.2.0: - #new_instances now mocks the #initialize method instead of mocking after the allocation was done. This allows to do mock methods called by #initialize itself. Behaviour when the allocator is explicitely provided is left unchanged, which means that the old behaviour is still available by passing :new to new_instances. 2.1.0: - added #and_iteratesto fix some shortcomings of #and_yieldwithout breaking backward compatibility - strict partial mocks (and "based mocks" if FlexMock.partials_are_basedis set to true) are now based on the object's singleton class, instead of its class. Simple Example We have a data acquisition class ( TemperatureSampler) that reads a temperature sensor and returns an average of 3 readings. We don't have a real temperature to use for testing, so we mock one up with a mock object that responds to the read_temperature message. Here's the complete example: require 'test/unit' require 'flexmock/test_unit' class TemperatureSampler def initialize(sensor) @sensor = sensor end def average_temp total = (0...3).collect { @sensor.read_temperature }.inject { |i, s| i + s } total / 3.0 end end class TestTemperatureSampler < Test::Unit::TestCase def test_sensor_can_average_three_temperature_readings sensor = flexmock("temp") sensor.should_receive(:read_temperature).times(3). and_return(10, 12, 14) sampler = TemperatureSampler.new(sensor) assert_equal 12, sampler.average_temp end end You can find an extended example of FlexMock in Google Example. Minitest Integration FlexMock integrates nicely with Minitest. Just require the 'flexmock/minitest' file at the top of your test file. The flexmock method will be available for mock creation, and any created mocks will be automatically validated and closed at the end of the individual test. It works with both tests unit-style (subclasses of Minitest::Test) and spec-style. Your test case will look something like this: require 'flexmock/minitest' class TestDog < Minitest::Test def test_dog_wags tail_mock = flexmock(:wag => :happy) assert_equal :happy, tail_mock.wag end end NOTE: If you don't want to automatically extend every Minitest::Test with the flexmock methods and overhead, then require the 'flexmock' file and explicitly include the FlexMock::Minitest module in each test case class where you wish to use mock objects. Test::Unit Integration FlexMock integrates nicely with Test::Unit. Just require the 'flexmock/test_unit' file at the top of your test file. The flexmock method will be available for mock creation, and any created mocks will be automatically validated and closed at the end of the individual test. Your test case will look something like this: require 'flexmock/test_unit' class TestDog < Test::Unit::TestCase def test_dog_wags tail_mock = flexmock(:wag => :happy) assert_equal :happy, tail_mock.wag end end NOTE: If you don't want to automatically extend every TestCase with the flexmock methods and overhead, then require the 'flexmock' file and explicitly include the FlexMock::TestCase module in each test case class where you wish to use mock objects. FlexMock versions prior to 0.6.0 required the explicit include. RSpec Integration FlexMock also supports integration with the RSpec behavior specification framework. Starting with version 2.11.0 of RSpec, you will be able to say: RSpec.configure do |config| config.mock_with :flexmock end describe "Using FlexMock with RSpec" do it "should be able to create a mock" do m = flexmock(:foo => :bar) m.foo.should === :bar end end NOTE: I often can't remember the proper RSpec configuration for flexmock without looking it up. If you are the same, you can put require 'flexmock/rspec/configure' in your spec helper to auto-configure RSpec to use flexmock. NOTE: Older versions of RSpec used the Spec::Runner for configuration. If you are running with a very old RSpec, you may need the following: # Configuration for RSpec prior to RSpec 2.x Spec::Runner.configure do |config| config.mock_with :flexmock end Quick Reference Creating Mock Objects The flexmock method is used to create mocks in various configurations. Here's a quick rundown of the most common options. See FlexMock::MockContainer#flexmock for more details. mock = flexmock("joe") Create a mock object named "joe" (the name is used in reporting errors). mock = flexmock(:foo => :bar, :baz => :froz) Create a mock object and define two mocked methods (:foo and :baz) that return the values :bar and :froz respectively. This is useful when creating mock objects with just a few methods and simple return values. mock = flexmock("joe", :foo => :bar, :bar => :froz) You can combine the mock name and an expectation hash in the same call to flexmock. mock = flexmock("joe", :on, User) This defines a strict mock that is based on the User class. Strict mocks prevent you from mocking or stubbing methods that are not instance methods of the restricting class (i.e. User in our example). This helps prevent tests from becoming stale with incorrectly mocked objects when the method names change. Use the explicitlymodifier to should_receiveto override the strict mock restrictions. partial_mock = flexmock(real_object) If you you give flexmocka real object in the argument list, it will treat that real object as a base for a partial mock object. The return value partial_mockmay be used to set expectations. The real_object should be used in the reference portion of the test. partial_mock = flexmock(real_object, :on, class_object) partial_mock = flexmock(real_object, :strict) Partial mocks can also take a restricting base, so that you cannot mock methods not in the class (without the explicitlymodifier). Since partials already have a class, you can use the :strictkeyword to mean the same thing as :on, real_object.class. partial_mock = flexmock(real_object, "name", :foo => :baz) Names and expectation hashes may be used with partial mocks as well. partial_mock = flexmock(:base, real_string_object) Since Strings (and Symbols for that matter) are used for mock names, FlexMock will not recognize them as the base for a partial mock. To force a string to be used as a partial mock base, proceed the string object in the calling sequence with :base. partial_mock = flexmock(:safe, real_object) { |mock| mock.should_receive(...) } When mocking real objects (i.e. "partial mocks"), FlexMock will add a handful of mock related methods to the actual object (see below for list of method names). If one or more of these added methods collide with an existing method on the partial mock, then there are problems. FlexMock offers a "safe" mode for partial mocks that does not add these methods. Indicate safe mode by passing the symbol :safe as the first argument of flexmock. A block is required when using safe mode (the partial_mock returned in safe mode does not have a should_receivemethod). The methods added to partial mocks in non-safe mode are: - should_receive - new_instances - flexmock_get - flexmock_teardown - flexmock_verify - flexmock_received? - flexmock_calls mock = flexmock(...) { |mock| mock.should_receive(...) } If a block is given to any of the flexmockforms, the mock object will be passed to the block as an argument. Code in the block can set the desired expectations for the mock object. mock_model = flexmock(:model, YourModel, ...) { |mock| mock.should_receive(...) } When given :model, flexmock()will return a pure mock (not a partial mock) that will have some ActiveRecord specific methods defined. YourModel should be the class of an ActiveRecord model. These predefined methods make it a bit easier to mock out ActiveRecord model objects in a Rails application. Other that the predefined mocked methods, the mock returned is a standard FlexMock mock object. The predefined mocked methods are: - id -- returns a unique ID for each mocked model. - to_params -- returns a stringified version of the id. - new_record? -- returns false. - errors -- returns an empty (mocked) errors object. - is_a?(other) -- returns true if other == YourModel. - instance_of?(class) -- returns true if class == YourModel - kind_of?(class) -- returns true if class is YourModel or one of its ancestors - class -- returns YourModel. mock = flexmock(... :on, class_object, ...) NOTE: Versions of FlexMock prior to 0.6.0 used flexstub to create partial mocks. The flexmock method now assumes all the functionality that was spread out between two different methods. flexstub is deprecated, but still available for backward compatibility. Expectation Declarators Once a mock is created, you need to define what that mock should expect to see. Expectation declarators are used to specify these expectations placed upon received method calls. A basic expectation, created with the should_receive method, just establishes the fact that a method may (or may not) be called on the mock object. Refinements to that expectation may be additionally declared. FlexMock always starts with the most general expectation and adds constraints to that. For example, the following code: mock.should_receive(:average).and_return(12) Means that the mock will now accept method calls to an average method. The expectation will accept any arguments and may be called any number of times (including zero times). Strictly speaking, the and_return part of the declaration isn't exactly a constraint, but it does specify what value the mock will return when the expectation is matched. If you want to be more specific, you need to add additional constraints to your expectation. Here are some examples: mock.should_receive(:average).with(12).once mock.should_receive(:average).with(Integer). at_least.twice.at_most.times(10). and_return { rand } Expectation are always matched in order of declaration. That means if you have a general expectation before a more specific expectation, the general expectation will have an opportunity to match first, effectively hiding the second expectation. For example: mock.should_receive(:average) # Matches any call to average mock.should_receive(:average).with(1).once # Fails because it never matches In the example, the second expectation will never be triggered because all calls to average will be handled by the first expectation. Since the second expectation is require to match one time, this test will fail. Reversing the order of the expections so that the more specific expectation comes first will fix that problem. If an expectation has a count requirement (e.g. once or times), then once it has matched its expected number of times, it will let other expectations have a chance to match. For example: mock.should_receive(:average).once.and_return(1) mock.should_receive(:average).once.and_return(2) mock.should_receive(:average).and_return(3) In the example, the first time average is called, the first expectation is matched an average will return 1. The second time average is called, the second expectation matches and 2 is returned. For all calls to average after that, the third expectation returning 3 will be used. Occasionally it is useful define a set of expecations in a setup method of a test and override those expectations in specific tests. If you mark an expectation with the by_default marker, that expectation will be used only if there are no non-default expectations on that method name. See "by_default" below. Expectation Criteria The following methods may be used to create and refine expectations on a mock object. See theFlexMock::Expectation for more details. should_receive(method_name) Declares that a message named method_name will be sent to the mock object. Constraints on this expected message (called expectations) may be chained to the should_receivecall. should_receive(method_name1, method_name2, ...) Define a number of expected messages that have the same constraints. should_receive(meth1 => result1, meth2 => result2, ...) Define a number of expected messages that have the same constrants, but return different values. should_receive(...).explicitly If a mock has a base class, use the explicitlymodifier to override the restriction on method names imposed by the base class. The explicitlymodifier must come immediately after the should_receivecall and before any other expectation declarators. If a mock does not have a base class, this method has no effect. should_expect { |recorder| ... } Creates a mock recording object that will translate received method calls into mock expectations. The recorder is passed to a block supplied with the should_expectmethod. See examples below. with(arglist) Declares that this expectation matches messages that match the given argument list. The ===operator is used on a argument by argument basis to determine matching. This means that most literal values match literally, class values match any instance of a class and regular expression match any matching string (after a to_sconversion). See argument validators (below) for details on argument validation options. with_any_args Declares that this expectation matches the message with any argument (default) with_no_args Declares that this expectation matches messages with no arguments zero_or_more_times Declares that the expected message is may be sent zero or more times (default, equivalent to at_least.never). once Declares that the expected message is only sent once. at_least/ at_mostmodifiers are allowed. twice Declares that the expected message is only sent twice. at_least/ at_mostmodifiers are allowed. never Declares that the expected message is never sent. at_least/ at_mostmodifiers are allowed. times(n) Declares that the expected message is sent n times. at_least/ at_mostmodifiers are allowed. at_least Modifies the immediately following message count constraint so that it means the message is sent at least that number of times. E.g. at_least.oncemeans the message is sent at least once during the test, but may be sent more often. Both at_leastand at_mostmay be specified on the same expectation. at_most Similar to at_least, but puts an upper limit on the number of messages. Both at_leastand at_mostmay be specified on the same expectation. ordered Declares that the expected message is ordered and is expected to be received in a certain position in a sequence of messages. The message should arrive after and previously declared ordered messages and prior to any following declared ordered messages. Unordered messages are ignored when considering the message order. Normally ordering is performed only against calls in the same mock object. If the "globally" adjective is used, then ordering is performed against the other globally ordered method calls. ordered(group) Declare that the expected message belongs to an order group. Methods within an order group may be received in any order. Ordered messages outside the group must be received either before or after all of the grouped messages. For example, in the following, messages flipand flopmay be received in any order (because they are in the same group), but must occur strictly after startbut before end. The message any_timemay be received at any time because it is not ordered. m = flexmock() m.should_receive(:any_time) m.should_receive(:start).ordered m.should_receive(:flip).ordered(:flip_flop_group) m.should_receive(:flop).ordered(:flip_flop_group) m.should_receive(:end).ordered Normally ordering is performed only against calls in the same mock object. If the "globally" adjective is used, then ordering is performed against the other globally ordered method calls. - globally.ordered globally.ordered(group_name) When modified by the "globally" adjective, the mock call will be ordered against other globally ordered methods in any of the mock objects in the same container (i.e. same test). All the options of the per-mock ordering are available in the globally ordered method calls. by_default Marks the expectation as a default. Default expectations act as normal as long as there are no non-default expectations for the same method name. As soon as a non-default expectation is defined, all default expectations for that method name are ignored. Default expectations allow you to setup a set of default behaviors for various methods in the setup of a test suite, and then override only the methods that need special handling in any given test. Expectation Actions Action expectations are used to specify what the mock should do when the expectation is matched. The actions themselves do not take part in determining whether a given expectation matches or not. and_return(value) Declares that the expected message will return the given value. and_return(value1, value2, ...) Declares that the expected message will return a series of values. Each invocation of the message will return the next value in the series. The last value will be repeatably returned if the number of matching calls exceeds the number of values. and_return { |args, ...| code ... } Declares that the expected message will return the yielded value of the block. The block will receive all the arguments in the message. If the message was provided a block, it will be passed as the last parameter of the block's argument list. returns( ... ) Alias for and_return. and_return_undefined Declares that the expected message will return a self-preserving undefined object (see FlexMock::Undefined for details). returns_undefined Alias for and_returns_undefined and_raise(_exception_, *args) Declares that the expected message will raise the specified exception. If exceptionis an exception class, then the raised exception will be constructed from the class with newgiven the supplied arguments. If exceptionis an instance of an exception class, then it will be raised directly. raises( ... ) Alias for and_raise. and_throw(symbol) and_throw(symbol, value) Declares that the expected messsage will throw the specified symbol. If an optional value is included, then it will be the value returned from the corresponding catch statement. throws( ... ) Alias for and_throw. and_yield(values, ...) Declares that the mocked method will receive a block, and the mock will call that block with the values given. Not providing a block will be an error. Providing more than one and_yieldclause one a single expectation will mean that subsquent mock method calls will yield the values provided by the additional and_yieldclause. yields( ... ) Alias for and_yield( ... ). and_iterates(value1, value2>, ...) Declares that the mocked method will receive a block, and the mock will iterate over the values given, calling the block once for each value. Not providing a block will be an error. Providing more than one and_iteratesor and_yieldclause one a single expectation will mean that subsquent mock method calls will yield the values provided by the additional and_iterates/ and_yieldclause. pass_thru pass_thru { |value| .... } Declares that the expected message will allow the method to be passed to the original method definition in the partial mock object. pass_thruis also allowed on regular mocks, but since there is no original method to be called, pass_thru will always return the undefined object. If a block is supplied to pass_thru, the value returned from the original method will be passed to the block and the value of the block will be returned. This allows you to mock methods on the returned value. Dog.should_receive(:new).pass_thru { |dog| flexmock(dog, :wag => true) } Other Expectation Methods mock Expectation constraints always return the expectation so that the constraints can be chained. If you wish to do a one-liner and assign the mock to a variable, the mockmethod on an expectation will return the original mock object. m = flexmock.should_receive(:hello).once.and_return("World").mock NOTE: Using mock when specifying a Demeter mock chain will return the last mock of the chain, which might not be what you expect. Argument Validation The values passed to the with declarator determine the criteria for matching expectations. The first expectation found that matches the arguments in a mock method call will be used to validate that mock method call. The following rules are used for argument matching: A withparameter that is a class object will match any actual argument that is an instance of that class. Examples: with(Integer) will match f(3) A regular expression will match any actual argument that matches the regular expression. Non-string actual arguments are converted to strings via to_sbefore applying the regular expression. Examples: with(/^src/) will match f("src_object") with(/^3\./) will match f(3.1415972) Most other objects will match based on equal values. Examples: with(3) will match f(3) with("hello") will match f("hello") If you wish to override the default matching behavior and force matching by equality, you can use the FlexMock.eq convenience method. This is mostly used when you wish to match class objects, since the default matching behavior for class objects is to match instances, not themselves. Examples: with(eq(Integer)) will match f(Integer) with(eq(Integer)) will NOT match f(3) Note: If you do not use the FlexMock::TestCase Test Unit integration module, or the FlexMock::ArgumentTypes module, you will have to fully qualify the eq method. This is true of all the special argument matches ( eq, on, any, hsh and ducktype). with(FlexMock.eq(Integer)) with(FlexMock.on { code }) with(FlexMock.any) with(FlexMock.hsh(:tag => 3)) with(FlexMock.ducktype(:wag, :bark)) - If you wish to match a hash on some of its values, the FlexMock.hsh(...)method will work. Only specify the hash values you are interested in, the others will be ignored. with(hsh(:run => true)) will match f(:run => true, :stop => false) - If you wish to match any object that responds to a certain set of methods, use the FlexMock.ducktypemethod. with(ducktype(:to_str)) will match f("string") with(ducktype(:wag, :bark)) will match f(dog) (assuming dog implements wag and bark) If you wish to match anything, then use the FlexMock.anymethod in the with argument list. Examples (assumes either the FlexMock::TestCase or FlexMock::ArgumentTypes mix-ins has been included): with(any) will match f(3) with(any) will match f("hello") with(any) will match f(Integer) with(any) will match f(nil) If you wish to specify a complex matching criteria, use the FlexMock.on(&block)with the logic contained in the block. Examples (assumes FlexMock::ArgumentTypeshas been included): with(on { |arg| (arg % 2) == 0 } ) will match any even integer. If you wish to match a method call where a block is given, add Procas the last argument to with. Example: m.should_receive(:foo).with(Integer,Proc).and_return(:got_block) m.should_receive(:foo).with(Integer).and_return(:no_block) will cause the mock to return the following: m.foo(1) { } => returns :got_block m.foo(1) => returns :no_block Creating Partial Mocks Sometimes it is useful to mock the behavior of one or two methods in an existing object without changing the behavior of the rest of the object. If you pass a real object to the flexmock method, it will allow you to use that real object in your test and will just mock out the one or two methods that you specify. For example, suppose that a Dog object uses a Woofer object to bark. The code for Dog looks like this (we will leave the code for Woofer to your imagination): class Dog def initialize @woofer = Woofer.new end def bark @woofer.woof end def wag :happy end end Now we want to test Dog, but using a real Woofer object in the test is a real pain (why? ... well because Woofer plays a sound file of a dog barking, and that's really annoying during testing). So, how can we create a Dog object with mocked Woofer? All we need to do is allow FlexMock to replace the bark method. Here's the test code: class TestDogBarking < Test::Unit::TestCase include FlexMock::TestCase # Setup the tests by mocking the +new+ method of # Woofer and return a mock woofer. def setup @dog = Dog.new flexmock(@dog, :bark => :grrr) end def test_dog assert_equal :grrr, @dog.bark # Mocked Method assert_equal :happy, @dog.wag # Normal Method end end The nice thing about this technique is that after the test is over, the mocked out methods are returned to their normal state. Outside the test everything is back to normal. NOTE: In previous versions of FlexMock, partial mocking was called "stubs" and the flexstub method was used to create the partial mocks. Although partial mocks were often used as stubs, the terminology was not quite correct. The current version of FlexMock uses the flexmock method to create both regular stubs and partial stubs. A version of the flexstub method is included for backwards compatibility. See Martin Fowler's article Mocks Aren't Stubs for a better understanding of the difference between mocks and stubs. This partial mocking technique was inspired by the Stuba library in the Mocha project. Spies FlexMock supports spy-like mocks as well as the traditional mocks. # In Test::Unit / MiniTest class TestDogBarking < Test::Unit::TestCase def test_dog dog = flexmock(:on, Dog) dog.bark("loud") assert_spy_called dog, :bark, "loud" end end # In RSpec describe Dog do let(:dog) { flexmock(:on, Dog) } it "barks loudly" do dog.bark("loud") dog.should have_received(:bark).with("loud") end end Since spies are verified after the code under test is run, they fit very nicely with the Given/When/Then technique of specification. Here is the above RSpec example using the rspec-given gem: require 'rspec/given' describe Dog do Given(:dog) { flexmock(:on, Dog) } context "when barking loudly" do When { dog.bark("loud") } Then { dog.should have_received(:bark).with("loud") } end end NOTE: You can only spy on methods that are mocked or stubbed. That's not a problem with regular mocks, but normal methods on partial objects will not be recorded. You can get around this limitation by stubbing the method in question on the normal mock, and then specifying pass_thru. Assuming :bark is a normal method on a Dog object, then the following allows for spying on :bark. dog = Dog.new flexmock(dog).should_receive(:bark).pass_thru # ... dog.should have_received(:bark) Asserting Spy Methods are Called (Test::Unit / MiniTest) FlexMock provied a custom assertion method for use with Test::Unit and MiniTest for asserting that mocked methods are actually called. assert_spy_called mock, options_hash, method_name, args... This will assert that the method called method_name has been called at least once on the given mock object. If arguments are given, then the method must be called with actual argument that match the given argument matchers. All the argument matchers defined in the "Argument Validation" section above are allowed in the assert_spy_calledmethod. The optionshash is optional. If omitted, all options will have their default values. See below for spy option definitions. assert_spy_not_called mock, options_hash, method_name, args... Same as assert_spy_called, except with the sense of the test reversed. Spy Options times: n Specify the number of times a matching method should have been invoked. nil(or omitted) means any number of times. with_block: true/false/nil Is a block required on the invocation? truemeans the method must be invoked with a block. falsemeans the method must have been invoked without a block. nilmeans that the presence of a block does not matter. Default is nil. and: [proc1, proc2...] Additional validations to be run on each matching method call. The list of arguments for each call is passed to the procs. This allows additional validations on supplied arguments. Default is no additional validations. on: n Only apply the additional validations on the n'th invocation of the matching method. Default is apply additional validations to all invocations. Examples: dog = flexmock(:on, Dog) dog.wag(:tail) dog.wag(:head) dog.bark(5) dog.bark(6) assert_spy_called dog, :wag, :tail assert_spy_called dog, :wag, :head assert_spy_called dog, {times: 2}, :wag assert_spy_not_called dog, :bark assert_spy_not_called dog, {times: 3}, :wag is_even = proc { |n| assert_equal 0, n%2 } assert_spy_called dog, { and: is_even, on: 2 }, :bark, Integer RSpec Matcher for Spying FlexMock also provides an RSpec matcher that can be used to specify spy behavior. mock.should have_received(method_name).modifier1.modifier2... Specifies that the method named method_name should have been received by the mock object with the given arguments. Just like should_receive, have_receivedwill accept a number of modifiers that modify its behavior. Modifiers for have_received with(args) If a withmodifier is given, only messages with matching arguments are considered. args can be any of the argument matches mentioned in the "Argument Validation" section above. If withis not given, then the arguments are not considered when finding matching calls. times(n) If a timesmodifier is given, then there must be exactly ncalls for that method name on the mock. If the timesclause is not given, then there must be at least one call matching the method name (and arguments if they are considered). neveris an alias for times(0), onceis an alias for times(1), and twiceis an alias for times(2). and { |args| code } If an andmodifier is given, then the supplied block will be run as additional validations on any matching call. Arguments to the matching call will be supplied to the block. If multiple andmodifiers are given, all the blocks will be run. The additional validations are run on all the matching calls unless an onmodifier is supplied. on(n) If an onmodifier is given, then the additional validations supplied by andwill only be run on the n'th invocation of the matching method. Examples: dog = flexmock(:on, Dog) dog.wag(:tail) dog.wag(:head) dog.should have_received(:wag).with(:tail) dog.should have_received(:wag).with(:head) dog.should have_received(:wag).twice dog.should_not have_received(:bark) dog.should_not have_received(:wag).times(3) dog.bark(3) dog.bark(6) dog.should have_received(:bark).with(Integer).and { |arg| (arg % 3).should == 0 } dog.should have_received(:bark).with(Integer).and { |arg| arg.should == 6 }.on(2) Mocking Class Object In the previous example we mocked out the bark method of a Dog object to avoid invoking the Woofer object. Perhaps a better technique would be to mock the Woofer object directly. But Dog uses Woofer explicitly so we cannot just pass in a mock object for Dog to use. But wait, we can add mock behavior to any existing object, and classes are objects in Ruby. So why don't we just mock out the Woofer class object to return mocks for us. class TestDogBarking < Test::Unit::TestCase include FlexMock::TestCase # Setup the tests by mocking the `new` method of # Woofer and return a mock woofer. def setup flexmock(Woofer).should_receive(:new). and_return(flexmock(:woof => :grrr)) @dog = Dog.new end def test_dog assert_equal :grrrr, @dog.bark # Calls woof on mock object # returned by Woofer.new end end Mocking Behavior in All Instances Created by a Class Object Sometimes returning a single mock object is not enough. Occasionally you want to mock every instance object created by a class. FlexMock makes this very easy. class TestDogBarking < Test::Unit::TestCase include FlexMock::TestCase # Setup the tests by mocking Woofer to always # return partial mocks. def setup flexmock(Woofer).new_instances.should_receive(:woof => :grrr) end def test_dog assert_equal :grrrr, Dog.new.bark # All dog objects assert_equal :grrrr, Dog.new.bark # are mocked. end end Note that FlexMock adds the mock expectations after the original new method has completed. If the original version of new yields the newly created instance to a block, that block will get an non-mocked version of the object. Note that new_instances will accept a block if you wish to mock several methods at the same time. E.g. flexmock(Woofer).new_instances do |m| m.should_receive(:woof).twice.and_return(:grrr) m.should_receive(:wag).at_least.once.and_return(:happy) end Default Expectations on Mocks Sometimes you want to setup a bunch of default expectations that are pretty much for a number of different tests. Then in the individual tests, you would like to override the default behavior on just that one method you are testing at the moment. You can do that by using the by_default modifier. In your test setup you might have: def setup @mock_dog = flexmock("Fido") @mock_dog.should_receive(:tail => :a_tail, :bark => "woof").by_default end The behaviors for :tail and :bark are good for most of the tests, but perhaps you wish to verify that :bark is called exactly once in a given test. Since :bark by default has no count expectations, you can override the default in the given test. def test_something_where_bark_must_be_called_once @mock_dog.should_receive(:bark => "woof").once # At this point, the default for :bark is ignored, # and the "woof" value will be returned. # However, the default for :tail (which returns :a_tail) # is still active. end By setting defaults, your individual tests don't have to concern themselves with details of all the default setup. But the details of the overrides are right there in the body of the test. Mocking Law of Demeter Violations The Law of Demeter says that you should only invoke methods on objects to which you have a direct connection, e.g. parameters, instance variables, and local variables. You can usually detect Law of Demeter violations by the excessive number of periods in an expression. For example: car.chassis.axle.universal_joint.cog.turn The Law of Demeter has a very big impact on mocking. If you need to mock the "turn" method on "cog", you first have to mock chassis, axle, and universal_joint. # Manually mocking a Law of Demeter violation cog = flexmock("cog") cog.should_receive(:turn).once.and_return(:ok) joint = flexmock("gear", :cog => cog) axle = flexmock("axle", :universal_joint => joint) chassis = flexmock("chassis", :axle => axle) car = flexmock("car", :chassis => chassis) Yuck! The best course of action is to avoid Law of Demeter violations. Then your mocking exploits will be very simple. However, sometimes you have to deal with code that already has a Demeter chain of method calls. So for those cases where you can't avoid it, FlexMock will allow you to easily mock Demeter method chains. Here's an example of Demeter chain mocking: # Demeter chain mocking using the short form. car = flexmock("car") car.should_receive( "chassis.axle.universal_joint.cog.turn" => :ok).once You can also use the long form: # Demeter chain mocking using the long form. car = flexmock("car") car.should_receive("chassis.axle.universal_joint.cog.turn").once. and_return(:ok) That's it. Anywhere FlexMock accepts a method name for mocking, you can use a demeter chain and FlexMock will attempt to do the right thing. But beware, there are a few limitations. The all the methods in the chain, except for the last one, will mocked to return a mock object. That mock object, in turn, will be mocked so as to respond to the next method in the chain, returning the following mock. And so on. If you try to manually mock out any of the chained methods, you could easily interfer with the mocking specified by the Demeter chain. FlexMock will attempt to catch problems when it can, but there are certainly scenarios where it cannot detect the problem beforehand. Examples Refer to the following documents for examples of using FlexMock: License Copyright 2003-2013 by Jim Weirich ([email protected]). Copyright 2014- by Sylvain Joyeux ([email protected]) Licensed under the MIT license Other stuff - Author -- Jim Weirich [email protected] and Sylvain Joyeux [email protected] - Requires -- Ruby 2.0 or later See Also If you like the spy capability of FlexMock, you should check out the rspec-given gem that allows you to use Given/When/Then statements in you specifications. Warranty This software is provided "as is" and without any express or implied warranties, including, without limitation, the implied warranties of merchantibility and fitness for a particular purpose.
http://www.rubydoc.info/gems/flexmock/frames
CC-MAIN-2017-04
refinedweb
5,783
56.86
Content-type: text/html acl_set_file - Sets the ACL on the object designated by the pathname Security Library (libpacl.a) #include <sys/acl.h> int acl_set_file( char *path_p; acl_type_t type_d; acl_t acl_d); The pathname of the object to set the ACL on. Designates the type of ACL to set: ACL_TYPE_ACCESS, ACL_TYPE_DEFAULT, or ACL_TYPE_DEFAULT_DIR. Working storage internal representation of the ACL that is being set. NOTE: This function is based on Draft 13 of the POSIX P1003.6 standard. The function may change as the P1003.6 standard is finalized. Given a pathname to a object the acl_set_file() function sets the designated ACL. The type of ACL being set is determined by the acl_tag_t parameter. If acl_d is NULL then the designated ACL will be removed from the designated object. The entry pointer used by the acl_get_entry() function becomes undefined after a call to the acl_set_file() function. Upon successful completion, the acl_set_file() function returns a value of 0 (zero). Otherwise, a value of -1 is returned and errno is set to indicate the error. If any of the following conditions occur, the acl_set_file() function sets errno to the corresponding value: The required access to the file was denied. The named object does not exist. The argument acl_d does not contain a valid ACL. Argument type_d does not contain a valid ACL type. The pathname is longer than allowed. The directory or file system that would contain the new ACL cannot be extended or the file system is out of file allocation resources. The argument type_d indicates a default ACL, and path_p does not point to a directory object. The designated object resides on a file system that does not support ACLs The process does not have the appropriate permissions to perform the operation. The setting and changing of ACLs have been disabled by the system administrator. The designated object resides on a read-only file system. acl_get_fd(3),acl_valid(3), acl_set_fd(3), acl_get_file(3) Security delim off
http://backdrift.org/man/tru64/man3/acl_set_file.3.html
CC-MAIN-2017-22
refinedweb
325
58.79
Opened 9 years ago Closed 8 years ago #3275 closed enhancement (duplicate) [patch] select_related() additions (depth=N, fields=[]) Description While I was happy with django's db api, to an extent, it did not have everything needed for the basics. So here is a quick solution to some related fields using select_related(). It adds to parameters to select_related(), depth, and fields. depth: A numerical field and represents the recursion depth for keys, by default, django recurses infinitely on any keys that are not blank=True fields: A list of field names in the base model to join with. It does not support children, ie relatedfieldfieldname, but I'd like to add this later, this will also set depth to 1. I've done several unit tests (several being all of curse-gaming.com) and we're pushing the changes live on the site now. It's helping performance out quite a bit in areas where it was too difficult to manually join with just one other table, or areas where we just wanted top level results. Attachments (3) Change History (27) Changed 9 years ago by David Cramer <dcramer@…> comment:1 Changed 9 years ago by Jeremy Dunck <jdunck@…> - Summary changed from select_related() additions (depth=N, fields=[]) to [patch] select_related() additions (depth=N, fields=[]) comment:2 Changed 9 years ago by Gary Wilson <gary.wilson@…> - Cc gary.wilson@… added comment:3 Changed 9 years ago by David Cramer <dcramer@…> It still seems to have a bug when just doing .select_related(depth=1), sometimes its filling the field w/ the wrong data, looking into it. comment:4 Changed 9 years ago by David Cramer <dcramer@…> Ok the bug is not with my code, but select_related() in general. If your field order doesn't match django's field order it will bug. Code should be good :) comment:5 Changed 9 years ago by SmileyChris - Needs documentation set - Needs tests set - Patch needs improvement set - Triage Stage changed from Unreviewed to Accepted If it can "bug" because of this new feature (supplying fields), then it needs improvement. Something this big also needs tests and probably some documentation. But thanks for the first cut, David! comment:6 Changed 9 years ago by anonymous Well the bug wasn't select_related, it was my code (I believe). As I was pulling fields in the wrong order and creating an object off it comment:7 Changed 9 years ago by Michael Radziej <mir@…> comment:8 Changed 9 years ago by Michael Radziej <mir@…> comment:9 Changed 9 years ago by David Cramer <dcramer@…> I've come to the conclusion I don't have enough time to please everyone, if someone wants to do the docs and unit tests for this it's fine, otherwise we'll just continue to merge our branch with the live branch. It works 100% for us as we are running 300k+ visits per hour using this solution to optimize SQL queries and have been for over two weeks now. comment:10 follow-up: ↓ 12 Changed 9 years ago by jacob comment:11 Changed 9 years ago by boris.erdmann@… select_related (sometimes?) produces too broad queries for *depth=1*. In these cases the resulting data set confuses django to the point where the according QuerySet has only empty attributes. Changing the declaration of fill_table_cache (db/models/query.py) from def fill_table_cache(opts, select, tables, where, old_prefix, cache_tables_seen, max_depth=0, cur_depth=0): to def fill_table_cache(opts, select, tables, where, old_prefix, cache_tables_seen, max_depth=0, cur_depth=1): Fixes this for me. So it MIGHT solve the problem generally. Boris comment:12 in reply to: ↑ 10 Changed 9 years ago by wolfram.kriesing@… (In [4645]) Added a "depth" argument to select_related() to control how many "levels" of relations select_related() is willing to follow (refs #3275). Also added unit tests for select_related(). Thanks for adding this. But why not add the entire patch? The fields parameter is the one that really makes sense. Imagine: MyModel(models.Model): field1 = ForeignKey() field2 = ForeignKey() field3 = ForeignKey() But I only want field1 to be considered in select_related(), so i would call ...select_related(fields=["field1"]).all() That would leave out two joins in the query, that might have a huge effect, and might reduce load by lots! Please add the fields parameter too! Thanks comment:13 Changed 9 years ago by David Cramer <dcramer@…> I believe the reason they didn't add it is because there could be a better way to implement it, and it's not a full-featured solution (as you can't select child tables) comment:14 Changed 9 years ago by wolfram.kriesing@… mmmh, Django is in development, so why not this feature? :-) I see it as very useful, so get it started, please! comment:15 Changed 9 years ago by marcin@… There is a bug in the way depth is handled in fill_table_cache, see test_depth_bug.diff for a test case. When you specify a non-zero depth fill_table_cache goes one relationship too deep which causes get_cached_row not to handle all the columns from select_related. This bug shows up when you use extra on that QuerySet: the extra fields to select should immediately follow columns handled by get_cached_row, but instead they get data from the relationship that fill_table_cache added and get_cached_row skipped. See fix_depth_bug.diff for a solution. Changed 9 years ago by marcin@… Test case for the bug with non-zero depth Changed 9 years ago by marcin@… Fix for the bug with non-zero depth comment:16 Changed 8 years ago by PhiR comment:17 Changed 8 years ago by Gábor Farkas <gabor@…> - Cc gabor@… added comment:18 Changed 8 years ago by Matthias Urlichs <smurf@…> - Cc matthias@… added The fix_depth_bug.diff patch is not yet in mainline. What gives? comment:19 Changed 8 years ago by russellm What gives? The same thing that 'gives' for every other contributed patch that hasn't been committed. This is that this is a volunteer project, and the core contributors are all very busy people. Unfortunately, our priorities don't correspond with yours in this case. It also doesn't help that it is currently triaged as 'requiring docs, needing tests and patch needs improvement'. A patch with all 3 of those labels isn't going to get much attention from anyone in a position to commit. Further complication comes from the queryset refactor that is currently underway, which will potentially tread on the toes of this contribution. You're free to use this patch in your own Django checkouts, and you're free to act as an advocate for including this patch in trunk by attempting to raise a productive discussion on django-developers. Complaining that your pet patch hasn't been included in trunk is neither helpful nor productive. comment:20 follow-up: ↓ 21 Changed 8 years ago by Gábor Farkas <gabor@…> unfortunately in this case i think improving the patch won't help. there was a patch in ticket 4879, that had tests and everything, and it was not committed. as Malcom described here: , it will be fixed when the queryset-refactor branch is merged. btw. my opinion is that this bug should get a little higher priority, because without this patch select_related(depth) simply returns wrong data, but probably different priorities for different people :-) p.s: please note, that the "patch needs improvement" flag is probably meant for the other patches in this ticket. the fix_depth_bug.diff patch is a one-line-patch, which adds exactly one character. it's hard to imagine how such a patch could be improved :) comment:21 in reply to: ↑ 20 Changed 8 years ago by russellm Replying to Gábor Farkas <[email protected]>: btw. my opinion is that this bug should get a little higher priority, because without this patch select_related(depth) simply returns wrong data, but probably different priorities for different people :-) Caveat - I haven't looked into this particular issue or patch, and I probably won't have a chance to in the near future (the "what gives" comment just caught my ire). However, you will find that the overwhelming philosophy of Django development is that once a serious problem has been identified, we put all of our efforts into fixing the underlying problem rather than patching a solution known to be broken. The core developers have limited resources, and every hour we spend identifying, inspecting and applying a partial fix is an hour we don't spend fixing the underlying problem. Edit-inline is one area where this is currently the case; anything touching querysets is another. p.s: please note, that the "patch needs improvement" flag is probably meant for the other patches in this ticket. the fix_depth_bug.diff patch is a one-line-patch, which adds exactly one character. it's hard to imagine how such a patch could be improved :) It's also hard to know how the core developers are supposed to know this without tracking every single comment on every single ticket. If there is some subset of this ticket that constitutes a one character fix that needs to be made, it should be a separate ticket describing the single issue that will be resolved by applying the patch. I should also point out that in reality, there isn't any such thing as a 'one character fix'. You need to convince me (or any other core developer) that the one character fix is (1) required and (2) correct. You can't do that in 1 character :-) comment:22 Changed 8 years ago by Matthias Urlichs <smurf@…> I'm sorry if you misunderstood my "what gives". It wasn't directed at you specifically, and I admittedly overlooked the fact that the last two patches (which I'm concerned about) address a problem that ends up being completely unrelated to the first one (to which the needs-* bits apply). I'll re-file these patches into a new ticket. comment:23 Changed 8 years ago by Matthias Urlichs <smurf@…> comment:24 Changed 8 years ago by mtredinnick - Resolution set to duplicate - Status changed from new to closed diffs for django/db/models/query.py
https://code.djangoproject.com/ticket/3275
CC-MAIN-2015-48
refinedweb
1,682
59.03
The String class represents character strings. A quoted string constant can be assigned to a String variable. String literals in Java are specified by enclosing a sequence of characters between a pair of double quotes. In Java strings are actually object types. The following code declares String type variable with Java String literal. public class Main{ public static void main(String[] argv){ String str = "this is a test from java2s.com"; System.out.println(str); } } The output: You can use + operator to concatenate strings together. For example, the following fragment concatenates three strings: public class Main { public static void main(String[] argv) { String age = "9"; String s = "He is " + age + " years old."; System.out.println(s); } } The following code uses string concatenation to create a very long string. public class Main { public static void main(String args[]) { //from www. j a v a2 s.c om String longStr = "A java 2s. com" + "B j a v a 2 s . c o m " + "C java 2s.com" + "D java2s.com ."; System.out.println(longStr); } } You can concatenate strings with other types of data. public class Main { public static void main(String[] argv) { int age = 1; String s = "He is " + age + " years old."; System.out.println(s); } } The output: Be careful when you mix other types of operations with string concatenation. Consider the following: public class Main { public static void main(String[] argv) { String s = "four: " + 2 + 2; System.out.println(s); } } This fragment displays rather than the To complete the integer addition first, you must use parentheses, like this: String s = "four: " + (2 + 2); Now s contains the string "four: 4". The escape sequences are used to enter impossible-to-enter-directly strings. For example, " \"" is for the double-quote character. " \n" for the newline string. For octal notation, use the backslash followed by the three-digit number. For example, " \141" is the letter "a". For hexadecimal, you enter a backslash-u ( \u), then exactly four hexadecimal digits. For example, " \u0061" is the ISO-Latin-1 " a" because the top byte is zero. " \ua432" is a Japanese Katakana character. The following table summarizes the Java String escape sequence. Examples of string literals with escape are "Hello World" "two\nlines" "\"This is in quotes\"" The following example escapes the new line string and double quotation string. public class Main { public static void main(String[] argv) { String s = "java2s.com"; System.out.println("s is " + s); //w w w.j av a 2s .com s = "two\nlines"; System.out.println("s is " + s); s = "\"quotes\""; System.out.println("s is " + s); } } The output generated by this program is shown here: Java String literials must be begin and end on the same line. If your string is across several lines, the Java compiler will complain about it. public class Main { public static void main(String[] argv){ String s = "line 1 line 2 "; } } If you try to compile this program, the compiler will generate the following error message. equals( ) method and the == operator perform two different operations. equals( ) method compares the characters inside a String object. The == operator compares two object references to see whether they refer to the same instance. The following program shows the differences: public class Main { public static void main(String args[]) { String s1 = "demo2s.com"; String s2 = new String(s1); System.out.println(s1 + " equals " + s2 + " -> " + s1.equals(s2)); System.out.println(s1 + " == " + s2 + " -> " + (s1 == s2)); } } Here is the output of the preceding example:
http://www.java2s.com/Tutorials/Java/Java_Language/2050__Java_String.htm
CC-MAIN-2017-43
refinedweb
574
59.8
Opened 14 years ago Closed 13 years ago #2915 closed defect (fixed) Need Mechanism to Clear Overviews Description GDAL needs a mechanism to clear existing overviews. It is planned to treat the case of passing an empty list of overview levels to BuildOverviews() to indicate that any existing overviews should be cleared. Attachments (1) Change History (10) comment:1 by , 14 years ago comment:2 by , 13 years ago comment:3 by , 13 years ago comment:4 by , 13 years ago BuildOverviews on a tiff (or other formats such as Grid) with an existing RRD is broken. It now just deletes the existing rrd without building any overviews. by , 13 years ago A smaller sample dataset comment:5 by , 13 years ago Gao, I tried running gdaladdo -clean, followed by gdaladdo to add an overview and this seemed to work ok. I tried the following: import gdal gdal.SetConfigOption( 'USE_RRD', 'YES' ) gdal.SetConfigOption( 'HFA_USE_RRD', 'YES' ) ds = gdal.Open('pyr93_v.jpg',gdal.GA_ReadOnly) ds.BuildOverviews( overviewlist = [] ) ds.BuildOverviews( overviewlist = [ 9 ] ) ds = None and it worked. But if I remove the config options, it ended up replacing the .aux file with a TIFF file with the name .aux. Is it possible this is the behavior you saw? What overview related config options are in effect in ArcGIS? comment:6 by , 13 years ago Frank, The 'USE_RRD'option is not set. The behavior I am looking for is: - creating new overviews as .ovr for all except HFA - updating existing overviews using the same format. comment:7 by , 13 years ago Here are overviews related config options: CPLSetConfigOption("HFA_USE_RRD", "YES"); CPLSetConfigOption("COMPRESS_OVERVIEW", "LZW"); BuildOverviews is used to directly update/rebuild overviews without removing the existing one first. comment:8 by , 13 years ago Gao, I tried the script: gdal.SetConfigOption( 'COMPRESS_OVERVIEW', 'LZW' ) gdal.SetConfigOption( 'HFA_USE_RRD', 'YES' ) ds = gdal.Open('pyr93_v.jpg',gdal.GA_ReadOnly) ds.BuildOverviews( overviewlist = [ 9 ] ) and this worked fine, in the sense that it build an _ss_9_ overview layer in the existing .rrd file. If I first clean the overviews, and then generate overviews without closing the file in between then I end up deleting the .rrd file, and .aux file, and then rebuilding the .aux file, but actually in TIFF format and this does not work afterwards. I will fix this aspect though it is not entirely clear that it directly relates to the problem you are seeing. comment:9 by , 13 years ago I have made changes in the GDALDefaultOverviews::CleanOverviews() method to reset the osOvrFilename according to the normal rules. Now a clean and build in sequence results in the .aux and .rrd being deleted and an .ovr built. The changes are in trunk (r17263) and 1.6-esri branch (r17264). I have added an initial implementation which implements BuildOverviews(0) to clear overviews in the HFA, and GTiff drivers, as well as in the gdaldefaultoverviews class and adds a -clear option to gdaladdo. This is in trunk (r16670). No regression tests yet...
https://trac.osgeo.org/gdal/ticket/2915
CC-MAIN-2022-40
refinedweb
494
66.64
This(). Follow @peterbe on Twitter A BadAssError IS A TypeError and also IS A Exception. There is no problem there. Your tests were probably using a parent to check against an exception instead of the specific type. For instance, if you change foo() to raise TypeError, the first assertRaises will fail. It directly parallels the behavior of "except". If an exception would be handled by "except Foo", then "assertRaises(Foo, ...)" will pass. If you changed your exceptions and your unit tests kept passing, consider what that will mean for *application code* already written to handle certain exceptions. Did you want the different exception type to continue to be handled by existing code? Or did you want it to start bypassing certain "except" suites? I think Python's idea of respecting inheritance hierarchies in the exception handling system itself is questionable, but given that's what we have, this behavior seems to make sense in `assertRaises`. Also, note that `assertRaises` will give you exact exception object if you want it, so you can make additional assertions about it - such as its exact type. How is this any different than a test like def testClass(self): x = ValueError(example) self.assertTrue(isinstance(x, StandardError)) Why should checking class membership have different semantics for assertRaises than it does for isinstance? If you really need to validate that the exact class is being raised, and not a subclass (which you shouldn't ever need to do, since try...except clauses will still catch the subclass), you can use the context manager behavior of assertRaises in unittest2 or Python 2.7 with self.assertRaises(BadAssError) as cm: foo() self.assertEqual(cm.exception.__class__, BadAssError) Why? In the same sense that `assertEqual` does an `==` not an `isintance`. My point is that it's a bit surprising. I'm not pointing out that it's a bug. I am not sure what you mean. If you want to make sure that the exception “really is” BadAssError, then you test whether assertRaises(BadAssError,…) and you get a real, accurate answer about whether the exception raised will match the pattern “except BadAssError…” in your users' code. It is true that the check assertRaises(BadAssError,…) will *also* succeed if the exception raised is a subclass of BadAssError — but if you were afraid of *that*, then you simply wouldn't create a child-class exception of BadAssError, right? I know it's a very "silly" issue. I raised this (no pun intended) because I refactored my code to use more explicit exception classes but got baffled that my tests continued to pass even though I was messing around in the code. I don't see what the problem is: a BadAssError is a TypeError is an Exception, so it makes perfect sense that raising a BadAssError and catching a TypeError will pass: that's exactly what a standard `except` will do. > I mean, if I want to write a test that really makes sure the exception really is BadAssError The only case where this could be an issue is if your BadAssError was subtyped (so you had a BadAsserError(BadAssError)) and that subtype was raised and you wanted the base and not the subtype. The chances of exactly this occuring are pretty low, and in that case you should feel free to use a standard try/except and asserting that `type(e) is BadAssError` (if that snipped does not make you squirm, there might be something wrong with you) I'm not surprised by this behavior, but sometimes you do want to check the exact error class. When I changed a function in PyMongo to raise a different error in the same hiearchy, I wrote assertRaisesExactly as an alternative to assertRaises: Thank you! Clearly I'm not alone in being anal about testing *exactly* which exception is raised. For a enlightening metaphor, imagine that in your example, "Exception" is "animal", "TypeError" is "dog", and "BadAssError" is "poodle". You have some function that returns a poodle, which naturally must make the answer to those questions "Yes": * Did the function return a poodle? * Did the function return a dog? * Did the function return an animal? If you want to test for dog, regardless of breed, test for dog. If you want to test for poodle, and beagle or labrador won't do, test for poodle. But why would you want to test for dog and fail if it is a poodle? If you want a test that would pass for any dog except a poodle, then do two tests: * assert it is a dog * assert it is not a poodle If you were always testing for dogs and suddenly you realize some functions should return a general "dog", but some others should return specifically "poodle" or "labrador", there is no reason why all old tests should not still pass. All functions are still returning dogs. If you want to test some of them for poodle, you can make that specific test for those specific functions.
https://www.peterbe.com/plog/assertraises-and-inheritance
CC-MAIN-2019-18
refinedweb
832
60.14
FirebirdClient version 6.0, Entity Framework Core 2.x provider, Entity Framework 6 provider Yep, it’s finally here. The big version, version 6.0, of FirebirdClient and providers for Entity Framework Core and Entity Framework 6. I think this is the biggest release ever. A lot of changes (some breaking). Let’s take it one by one. FirebirdClient 6.0 Although only about 20 items were completed in this version, some have quite a big impact. Some sweeping also happened. For example item from 2013 – yes, that’s five years – was finally resolved (it took so long because it’s a huge breaking change). You can get overview of all changes from tracker. Because some changes are (can be) breaking, all these items are marked #breaking in tracker for your convenience. You should check your code before updating. To mention few important: - Support for .NET 4.0 was dropped (DNET-774). - Only Entity Framework 6 is supported and Entity Framework provider is completely moved to separate assembly, using new namespaces (DNET-732). - GUIDs in/from .NET have the same representation as in Firebird (DNET-509). Entity Framework Core 2.x provider You can now use Entity Framework Core 2.x with Firebird. The NuGet package is FirebirdSql.EntityFrameworkCore.Firebird. Small tutorial to get you started is here. Scaffolding and Migrations are not part of this version (mostly to not block the release). Entity Framework 6 provider Only Entity Framework 6 is now supported and new namespaces are used ( EntityFramework.Firebird). Small tutorial to get you started is here. Documentation To help you started with usage, I created documents for ADO.NET, Entity Framework Core and Entity Framework 6. At the moment I went with the simplest solution, plain Markdown files in docs folder in the repository, and let’s see where that ends. Feel free to contribute. Getting the bits You can get the bits from NuGet FirebirdSql.Data.FirebirdClient, FirebirdSql.EntityFrameworkCore.Firebird, EntityFramework.Firebird.
https://www.tabsoverspaces.com/233727-firebirdclient-version-6-0-entity-framework-core-2-x-provider-entity-framework-6-provider
CC-MAIN-2021-31
refinedweb
326
61.63
Key Takeaways - Pathpida solves the challenge of validating the existence of dynamic routes in Next.js and Nuxt.js projects. - Pathpida automatically collects routes in one place. - Pathpida generates a TypeScript file to support static checking of routes. - Pathpida strives for zero-configuration. - Pathpida is easily added to existing Next.js and Nuxt.js projects. Pathpida collects dynamic routing into a single TypeScript file Pathpida is a library for TypeScript projects to collect dynamic routes in one place. It is a tedious task to do manually. This helps us check the existence of routes, which is often overseen as a project grows. Pathpida is optimized for Next.js (React) and Nuxt.js (Vue). Pathpida can be added to existing Next.js and Nuxt.js projects without configuration. What problems does Pathpida solve? Managing routes across complex applications is challenging and tedious. Consider a simple example with Next.js, there is a Link to URL /post/1: import Link from 'next/link' export default () => { const url = `/post/${1}` return <Link href={url} /> } Here, we cannot check statically the existence of /post/{pid}. If pages/post/[pid].tsx is absent, route transition would unexpectedly fail at runtime. Even with Template Literal Types shipped by TypeScript 4.1, it is a messy work to capture all routes manually. If we write all routes in one place manually, we need to do point-and-shoot check, also every time we change the routes. Pathpida watches and walks the pages directory to achieve checking existence of routes we use in components statically. This means, we can check whether all links are valid in CI with validating through TypeScript. The core concept of watching, analyzing AST, and writing a TypeScript file is derived from aspida. Consider the situation that some paths include dynamic route parameters such as article slugs or product ids in e-commerce: pages/[pid].tsx pages/blog/[...slug].tsx pages/index.tsx In this example, pathpida produces the following lib/$path.ts: export const pagesPath = { _pid: (pid: number | string) => ({ $url: () => ({ pathname: '/[pid]', query: { pid }}) }), blog: { _slug: (slug: string[]) => ({ $url: () => ({ pathname: '/blog/[...slug]', query: { slug }}) }) }, $url: () => ({ pathname: '/' }) } The properties of the generated client corresponds to routes, one by one, including dynamic routes. The value returned by .$url() gets passed to next/link and next/router. All routes are typed by inference and can get statically checked for existence. Now, we can write the links using the Pathpida generated routing client. For example, within components: // pages/index.tsx import Link from 'next/link' import { pagesPath } from '../lib/$path' export default () => { return <Link href={pagesPath.blog._slug(['a', 'b', 'c']).$url()} /> } Introducing pathpida to a Next.js project Consider an environment with Next.js and TypeScript. It is easy to use npm-run-all for convenience with npm or yarn. Run yarn add pathpida npm-run-all --dev and add the following npm scripts to package.json: { "scripts": { "dev": "run-p dev:*", "dev:next": "next dev", "dev:path": "pathpida --watch", "build": "pathpida && next build" } } Now, yarn dev starts pathpida alongside the Next.js dev server. In either the utils or lib directory, $path.ts gets generated. Each time we update, add or delete files under the pages/ (also supports configurations like using src/pages/), lib/$path.ts would get replaced automatically. Files under the directory pages/api in Next.js projects are ignored because they are reserved as an API, not pages. Then we can use the pathpida client by importing pagesPath from $path.ts. An example with the Next.js link and router: lib/ $path.ts pages/ articles/ [id].tsx users/ [...userInfo].tsx _app.tsx index.tsx // components/ActiveLink.tsx import Link from 'next/link' import { useRouter } from 'next/router' import { pagesPath } from '../lib/$path' function ActiveLink() { const router = useRouter() const handleClick = () => { router.push(pagesPath.users._userInfo(['mario', 'hello', 'world!']).$url()) } return <> <div onClick={handleClick}>Hello</div> <Link href={pagesPath.articles._id(1).$url()}> World! </Link> </> } export default ActiveLink Supplying the required query string parameter Pathpida can also add types for query string by exporting Query type from the pages component. To make the route /user?userId={number}, edit pages/user.tsx as follows: export type Query = { userId: number } export default () => <div /> With this small change we can specify the query string as pagesPath.user.$url({ query: { userId: 1 }}). Using Query, we should provide one argument to .$url if all properties are optional. To make the argument itself optional, use OptionalQuery instead of Query: export type OptionalQuery = { userId: number } export default () => <div /> This change allows us to call .$url without any arguments. import { pagesPath } from '../lib/$path' pagesPath.user.$url({ query: { userId: 1 }}) pagesPath.user.$url() Hash can also be specified using the hash property. import { pagesPath } from '../lib/$path' pagesPath.user.$url({ query: { userId: 1 }, hash: 'hoge' }) pagesPath.user.$url({ hash: 'fuga' }) Get static file paths under the public directory with type-safety By supplying the flag --enableStatic, pathpida generates the staticPath client by watching the public directory: { "scripts": { "dev": "run-p dev:*", "dev:next": "next dev", "dev:path": "pathpida --enableStatic --watch", "build": "pathpida --enableStatic && next build" } } Consider an example where the public directory consists of one JSON and one png file: public/aa.json public/bb/cc.png lib/$path.ts or utils/$path.ts Then we can see staticPath generated in $path.ts. This has all static paths as string in properties, with periods converted to underscores as follows: // pages/index.tsx import Link from 'next/link' import { staticPath } from '../lib/$path' console.log(staticPath.aa_json) // /aa.json export default () => <img src={staticPath.bb.cc_png} /> Introducing pathpida to Nuxt.js projects For projects set up with Nuxt.js and TypeScript, add pathpida client as a plugin to nuxt.config.js: { plugins: ['~/plugins/$path'] } We can access the pathpida client from Vue/Vuex instances via $pagesPath. Using --enableStatic, pathpida also provides the $staticPath. For example: <!-- pages/index.vue --> <template> <div> <nuxt-link : <div @ </dijv> </template> <script lang="ts"> import Vue from 'vue' export default Vue.extend({ methods: { onclick() { this.$router.push(this.$pagesPath.post._pid(1).$url()) } } }) </script> Pathpida treats the project as Nuxt.js when detecting nuxt.config.js or nuxt.config.ts in project root, otherwise falling back to Next.js. $path.ts could be different for Next.js and Nuxt.js. When Nuxt.js gets used, files with names starting with hyphen get ignored. With Vue files, we cannot use exported Query consisting of non-global types. For details, refer to the generated $path.ts. Pathpida is open source software available under the MIT license. Contributions and feedback are encouraged via the Pathpida GitHub project. About the Author Teppei Kawaguchi is a web developer, especially backend and TypeScript. He is contributing to open source projects whatever he likes. He is an enthusiastic competitive programmer, qualified in more than 8 contests. He experiences constructing a cloud architecture to achieve delivering efficiently. Community comments
https://www.infoq.com/articles/pathpida-dynamic-routing-nextjs-nuxtjs/?itm_source=articles_about_reactive-programming&itm_medium=link&itm_campaign=reactive-programming
CC-MAIN-2021-31
refinedweb
1,142
53.17
From: Michael Glassford (glassfordm_at_[hidden]) Date: 2004-06-29 10:39:12 Christopher Currie wrote: > Michael Glassford wrote: > > > Christopher Currie wrote: > >> TryLock: What would be the semantics of l(m, NO_LOCK, b)? In other > >> words, if you're not going to lock on construction, why specify a > >> blocking policy? > > > > > > The only reason is so that if you write l(m, LOCK, ...) you could also > > specify the blocking parameter. An expansion of Vladimir Batov's of > > using structs rather than enums could help here: > > > > struct nolock_t {}; > > nolock_t nolock; > > > > struct lock_t {}; > > lock_t lock; > > > > class TryLock > > { > > TryLock(TryMutex m, no_lock_t s) > > {...} > > > > TryLock(TryMutex m, lock_t s, blocking_t b) > > {...} > > } > > An interesting syntax, I can see how it does make explicit whether you > are locking or not. Personally, I dislike having to type an extra > argument when it's the only choice; IMO if you're specifying a blocking > parameter, the locking is implied, and therefore superfluous. That was an oversight on my part. Though in the case of read/write locks, you do need to specify both the lock type (read/write) and whether it is blocking or not. > I was thinking something like (going back to enums for a moment): > > enum { unlocked = 0, locked } lock_init; > enum { nonblocking = 0, blocking } blocking_action; > > class TryLock > { > public: > TryLock( TryMutex m, lock_init s = locked ) { ... } > > TryLock( TryMutex m, blocking_action b ) { ... } > }; > > Also, going back to the struct technique, do the struct instances need > to be in an anonymous namespace to prevent ODR violations? Just trying > to get my head around the concept. Probably they should be in a separate namespace, though possibly not anonymous. Mike Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/06/67107.php
CC-MAIN-2022-40
refinedweb
287
65.73
This part describes the Domain Name System (DNS) and how to administer it. Chapter 28, "Introduction to DNS" Chapter 29, "Administering DNS" This for information regarding initial setup and configuration of DNS. DNS, NIS+, NIS, and FNS provide similar functionality and sometimes use the same terms to define different entities. Thus, this chapter takes care to define terms like domain and name server according to their DNS functionality, a very different functionality than NIS+ and NIS domains and servers. Though. could include all machines on a vast university network that belong to the computer science department or to university administration. For example, suppose the Ajax company has two sites, one in San Francisco and one in Seattle. The Retail.Sales.Ajax.com. domain might be in Seattle and the Wholesale.Sales.Ajax.com. domain might be in San Francisco. One part of the Sales.Ajax.com. domain would be in one city, the other part in the second city. Each administrative domain must have its own unique subdomain name. Moreover, if you want your network to participate in the Internet, the network must be part of a registered administrative domain. The section "Joining the Internet" has full details about domain names and domain registration. As mentioned previously, name servers in an administrative domain maintain the DNS database. They also run the in.named daemon, which implements DNS services, most significantly, name-to-address mapping. in.named is a public domain TCP/IP program and included with the Solaris operating environment. The in.named daemon is also called the Berkeley Internet Name Domain service, or BIND, because it was developed at the University of California at Berkeley. There are three types of DNS name servers: Each domain must have one primary server and should have at least one secondary server to provide backup. "Zones" explains primary and secondary servers in detail. To. The. DNS service for a domain is managed on the set of name servers first introduced "in.named and DNS Name Servers". Name servers can manage a single domain, or multiple domains, or domains and some or all of their corresponding subdomains. The part of the namespace that a given name server controls is called a zone; thus, the name server is said to be authoritative for the zone. If you are responsible for a particular name server, you may be given the title zone administrator. The data in a name server's database are called zone files. One type of zone file stores IP addresses and host names. When someone attempts to connect to a remote host using a host name by a utility like ftp or telnet, DNS performs name-to-address mapping, by looking up the host name in the zone file and converting it into its IP address. For example, the Ajax domain shown in Figure 28-7 contains a top domain (Ajax), four subdomains, and five sub-subdomains. It is divided into four zones shown by the thick lines. Thus, the Ajax name server administers a zone composed of the Ajax, Sales, Retail, and Wholesale domains. The Manf and QA domains are zones unto themselves served by their own name servers, and the Corp name server manages a zone composed of the Corp, Actg, Finance, and Mktg domains. The DNS database also include zone files that use the IP address as a key to find the host name of the machine, enabling IP address to host name resolution. This process is called reverse resolution or more commonly, reverse mapping. Reverse mapping is used primarily to verify the identity of the machine that sent a message or to authorize remote operations on a local host. The your site is connected to the Internet, your DNS name server's boot files must point to a common cache file (usually called named.ca) that identifies the root domain name servers. A template for this file may be obtained from InterNIC registration services via: naming your DNS files according to the conventions in this manual, you need to move this file to /var/named/named.ca. If. D, Table 28-2 compares BIND file names from these three sources:Table 28-2 BIND File Name Examples BIND 8.1 adds a new configuration file, /etc/named.conf, that replaces the /etc/named.boot file. The /etc/named.conf file establishes the server as a primary, secondary, or cache-only name server. It also specifies the zones over which the server has authority and which data files it should read to get its initial data. The /etc/named.conf file contains statements that implement: Security through an Access Control List (ACL) that defines a collection of IP addresses that. This chapter describes how to administer the Domain Name System (DNS). For more detailed information, see DNS and Bind by Cricket Liu and Paul Albitz, (O'Reilly, 1992) and "Name Server Operations Guide for BIND", University of California, Berkeley. "Trailing Dots in Domain Names" "Modifying DNS Data Files" "Adding and Deleting Machines" "Adding Additional DNS Servers" "Creating DNS Subdomains" "DNS Error Messages and Problem Solving" When working with DNS-related files, follow these rules regarding the trailing dot in domain names: Use a trailing dot in domain names in hosts, hosts.rev, named.ca, and named.local data files. For example, sales.doc.com. is correct. Do not use a trailing dot in domain names in named.boot or resolv.conf files. For example, sales.doc.com is correct. Whenever. When you add or delete a machine, always make your changes in the data files stored on your primary DNS server. Do not make changes or edit the files on your secondary servers because those will be automatically updated from the primary server based on your changing the SOA serial number. To. To remove a machine from a DNS domain: Remove dns from the hosts line of the machine's nsswitch.conf file. Remove the machine's /etc/resolv.conf file. Delete the records for that machine from the primary server's hosts and hosts.rev files. If the machine has CNAME records pointing to it, those CNAME records must also be deleted from the hosts file. Set up replacements for services supported by the removed machine. If the machine is a primary server, mail host, or host for any other necessary process or service, you must take whatever steps are necessary to set up some other machine to perform those services. You can add primary and secondary servers to your network. To add a DNS server: Set the server up as a DNS client. See "Adding a Machine". Set up the server's boot file. Set up the server's named.ca file. Set up the server's hosts file. Set up the server's hosts.rev file. Set up the server's named.local file. Initialize the server. Test the server. These steps are explained in more detail in Solaris Naming Setup and Configuration Guide. As your network grows you may find it convenient to divide it into one or more DNS subdomains. (See "Introducing the DNS Namespace" for a discussion of DNS domain hierarchy and structure.) When you divide your network into a parent domain and one or more subdomains, you reduce the load on individual DNS servers by distributing responsibility across multiple domains. In this way you can improve network performance. For example, suppose there are 900 machines on your network and all of them are in one domain. In this case, one set of DNS servers composed of a primary and additional secondary and caching-only servers have to support 900 machines. If you divide this network into a parent domain and two subdomain, each with 300 machines, then you have three sets of primary and secondary servers each responsible for only 300 machines. By dividing your network into domains that match either your geographic or organizational structure (or both), the DNS domain names indicate where a given machine or email address fits into your structure. For example, [email protected] implies that the machine rigel is located at your Alameda site, and the email address [email protected] implies that the user barnum is part of your Sales organization. Dividing your network into multiple domains requires more set up work than keeping everything in one domain, and you have to maintain the delegation data that ties your domains together. On the other hand, when you have multiple domains, you can distribute domain maintenance tasks among different administrators or teams, one for each domain. Here are some points to consider before dividing your network into a parent and one or more subdomains: How many subdomains? The more subdomains your create, the more initial set up work you have to do and the more ongoing coordination work for the administrators in the parent domain. The more subdomains, the more delegation work for the servers in the parent domain. On the other hand, fewer domains mean larger domains, and the larger a domain is the more server speed and memory is required to support it. How to divide your network? You can divide your network into multiple domains any way you want. The three most common methods are by organizational structure where you have separate subdomains for each department or division (sales, research, manufacturing, etc.); by geography where you have separate subdomains for each site; or by network structure where you have separate subdomains for each major network component. The most important rule to remember is that administration and use will be easier if your domain structure follows a consistent, logical, and self-evident pattern. Consider the future. The most confusing domain structures are those that grow over time with subdomains added haphazardly as new sites and departments are created. To the degree possible, try to take future growth into account when designing your domain hierarchy. Also take into account stability. It is best to base your subdomains on what is most stable. For example, if your geographic sites are relatively stable but your departments and divisions are frequently reorganized, it is probably better to base your subdomains on geography rather than organizational function. On the other hand, if your structure is relatively stable but you frequently add or change sites, it is probably better to base your subdomains on your organizational hierarchy. Wide area network links. When a network spans multiple sites connected via modems or leased lines, performance will be better and reliability greater if your domains do not span such Wide Area Network (WAN) links. In most cases, WAN links are slower than contiguous network connections and more prone to failure. When servers have to support machines that can only be reached over a WAN link, you increase the network traffic funneling through the slower link, and if there is a power failure or other problem at one site, it could affect the machines at the other sites. (The same performance and reliability considerations apply to DNS zones. As a general rule of thumb, it is best if zones do not span WAN links.) NIS+ name service. If your enterprise-level name service is NIS+, administration will be easier if your DNS and NIS+ domain and subdomain structures match. Subdomain names. To the degree possible, it is best to establish and follow a consistent policy for naming your subdomains. When domain names are consistent, it is much easier for users to remember and correctly specify them. Keep in mind that domain names are an important element in all of your DNS data files and that changing a subdomain name requires editing every file in which the old name appears. Thus, it is best to choose subdomain names that are stable and unlikely to need changing. You can use either full words, such as manufacturing, or abbreviations, such as manf, as subdomain names, but it will confuse users if some subdomains are named with abbreviations and others with full names. If you decide to use abbreviations, use enough letters to clearly identify the name because short cryptic names are hard to use and remember. Do not use reserved top-level Internet domain names as subdomain names. This means that names like org, net, com, gov, edu, and any of the two-letter country codes such as jp, uk, ca, and it should never be used as a subdomain name.. See Appendix A, Problems and Solutions, and Appenix B, Error Messages, for DNS problem solving and error message information.
http://docs.oracle.com/cd/E19455-01/806-1387/6jam692eu/index.html
CC-MAIN-2017-39
refinedweb
2,075
54.42