text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
> On Oct. 11, 2016, 8:48 p.m., Alex Clemmer wrote: > > 3rdparty/stout/include/stout/wait.hpp, line 25 > > <> > > > > Couple of questions here: > > > > * This seems useful for POSIX platforms, but I don't really understand > > the implications of having this enabled on Windows. Windows will never > > support the signal semantics of POSIX, and in general my feeling is that > > I'd rather not define things on platforms that don't support those > > semantics, unless there is a particularly pertinent reason to. Thoughts? > > * In the 2 or 3 places we do use `W_EXITCODE`, rather than defining it > > for Windows, we simply `#ifdef` the code out. So the precedent is to just > > not use the macro. Is there a compelling reason to change this? > > > > > > Also, tiny, tiny nit question: is it style to have 2 spaces between > > `W_EXITCODE(ret, sig)` and `((ret) << 8 | (sig))` > > Kevin Klues wrote: > To answer both your bullets at once... > If you look in `windows.hpp` all of the other `W*` macros from the posix > standard are defined in there (even though they may be meaningless on > windows). This is consistent with them all being defined in `sys/wait.h` on a > posix compliant system. This is why I separated the two by a simple `#ifdef` > > Adding the code below to be added to both windows and poix systems: > > ``` > #ifndef W_EXITCODE > #define W_EXITCODE(ret, sig) ((ret) << 8 | (sig)) > #endif > ``` > > is just to cover a corner case where some libc variants don't actually > define this macro (because it's not technically posix compliant). Almost all > libc variants do define it, but a bug was filed against this because > (apparently) `musl` does not. > > Given that adding this code makes both windows and any variant of libc > now define all of the `W*` macros symetrically, I think this is likely the > right way to do it for now. In the future when we completely strip out all > `W*` macros from windows, we can revisit this. > > --- > > Regarding the formatting question. I just copy and pasted this from the > glibc header file. I don't think we have a preferred style for this, but I > tried it both ways just now, and it seems to be more readable as 2 spaces, so > I'll leave it. Advertising I'm fine with this change being submitted because the applicable domain of error is so small. But, I am hoping we can agree to not define semantically empty macros in the future on Windows codepaths in the future. So I will add some historical perspective to this thread. :) The `W*` macros you bring up that made it into `windows.hpp` all made it for extremely pragmatic reasons. We're not at all happy _per se_ with this outcome. For example, some of the macros are doing things like checking for the results of signals that do not exist on Windows; our decision calculus, essentially, was: (1) it hurt readability too much to `#ifdef` out their usage in the codebase, (2) we were too time constrained with MesosCon approaching to correctly factor them out correctly, and (3) the places where we were checking these results had a negligible effect on Windows code paths because the signal would never be triggered. If `W_EXITCODE` is similarly justified, I don't quite see it. Maybe I'm missing something. It looks like it's used twice on non-Windows codepaths. I'm definitely sympathetic to the idea that we do not want to decrease readability in the POSIX codepaths to help Windows, a minority use case, but I think that dumping POSIX stuff into the Windows codepaths is a really dangerous habit to get into, particularly when it's not needed. This has caused me, personally, probably a hundred hours of debugging, all told. Let me know if I'm missing something important here, but for now, my recommendation is that we strongly avoid defining things we don't have a strict operational justification for. - Alex ----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: ----------------------------------------------------------- On Oct. 13, 2016, 12:08 a.m., Kevin Klues wrote: > > ----------------------------------------------------------- > This is an automatically generated e-mail. To reply, visit: > > ----------------------------------------------------------- > > (Updated Oct. 13, 2016, 12:08 a.m.) > > > Review request for mesos and Jie Yu. > > > Bugs: MESOS-6310 > > > > Repository: mesos > > > Description > ------- > > This was motivated by the need for a default definition of > 'W_EXITCODE' (since it is not technically POSIX compliant). > > > Diffs > ----- > > 3rdparty/stout/include/Makefile.am b0b08d8e0d284a88bc8daa4570540659b94dc2d0 > 3rdparty/stout/include/stout/os/wait.hpp PRE-CREATION > > Diff: > > > Testing > ------- > > sudo GTEST_FILTER="*Nested*" src/mesos-tests > > > Thanks, > > Kevin Klues > >
https://www.mail-archive.com/[email protected]/msg47815.html
CC-MAIN-2016-44
refinedweb
756
62.88
smart-router a message routing system that routes messages based on their content. Want to see pretty graphs? Log in now!Want to see pretty graphs? Log in now! npm install smart-router smart-router The smart-router is a message routing system that routes messages based on their content. It is meant to be light-weight and HA. Internally, it uses RabbitMQ to handle the messages and socket.io as its transport protocol. It can be used to connect server-side services as well as client-side applications. To use it: npm install smart-router Concepts Endpoints The smart-router will listen to several endpoints or sub-endpoints as defined in its config file. One end point can be divided into sub-endpoints who will share the same route definitions, but if an endpoint has sub-endpoints, the smart-router will listen to the sub-endpoints and not the endpoint itself. Actors An Actor is a client of the smart-router. It has its own unique Id. It will connect to an endpoint or a sub-endpoint to publish and receive messages. They can be configured to receive messages sent directly to them or sent to their endpoint. Messages Messages are exchanged by the Actors through the smart-router. It will then introspect them to route them to the right actor or to the right endpoint for one actor to pick them up. A message has a type and a body which can be represented like that: { ids: { }, metadata: { }, payload: { } } ids contains the ids of the actors or endpoints concerned by the message. By looking, preferably, at the metadata, the smart-router will choose which of these actors it will route the message to. The payload contains application specific data, whereas metadata will contain data used by the routing. (The smart-router still has access to the payload and can decide using it, but it is better to have a clean separation between the two.) Routes A Route is a function that is called when the smart-router receives a message of a specific type on a specific end point. In this function, the smart-router can look at the endpoint, the message type and the message body to define what to do with it. Usually, it will publish it as-is to another and point or actor, but it can modify it, fork it and publish it to several endpoints. In the following route, when we receive a message of type business from the serviceA endpoint, we check if it is important. If it is, we route it to serviceC endpoint as an important message and log it by sending it to the logger as a log message. If not, we forward it as-is to serviceB. { endpoint: 'serviceA', messagetype: 'business', action: function (message, socket, smartrouter) { if (message.ids.serviceC && message.metadata.isImportant) { smartrouter.publish(message.ids.serviceC, 'important', message, socket); smartrouter.publish(message.ids.logger, 'log', message, socket); } else { smartrouter.publish(message.ids.serviceB, 'business', message, socket); } } } Queues and Exchanges Queues and Exchanges are an internal notion. Actors don't see the queues and don't know about them. Internally, the route functions will publish messages to some queues and when a new actor connects, it will subscribe to one or two queues. One exchange is created per (sub)endpoint. Queues exist at the (sub)endpoint or actor level, depending on the flags used in the configuration of the endpoint. endpoints: [ { name: 'endpoint', queue: QUEUEFLAG.endpoint }, { name: 'subendpoint', sub: [ 456, 457 ], queue: QUEUEFLAG.endpoint }, { name: 'actoronly', sub: [ 'subactor' ], queue: QUEUEFLAG.actor }, // QUEUEFLAG.actor is the default value { name: 'endpointandactor', queue: QUEUEFLAG.endpoint | QUEUEFLAG.actor } ] With this configuration, the smart-router will listen to: /endpoint /subendpoint/456 /subendpoint/457 /actoronly/subactor /endpointandactor and will use the following queues: endpointof exchange endpoint subendpoint/456of exchange subendpoint/456 subendpoint/457of exchange subendpoint/457 <actorid>of exchange actoronly/subactorwhere actorid is the unique id of the actors connecting to the end point endpointandactorof exchange endpointandactor <actorid>of exchange endpointandactorwhere actorid is the unique id of the actors connecting to the end point During its transit inside the smart-router, a message will: - be received on the endpoint - routed using the corresponding route function - queued on the queue selected by the routing function - dequeued and - sent to an actor. Queue cleaning The smart-router will create queues with 'x-expires' argument. By default, a queue will be deleted 15 minutes after the last actor has been disconnected from it. This value configurable in the yaml properties file. High Availability Internally, the smart-router is composed of two modules: - a socket.io server written in node.js that handles the routing of the messages - a RabbitMQ cluster that handles the persistence and the publication of the messages. Any number of the node.js application can be deployed as long as they all connect to the same RabbitMQ cluster. A single message can be queued by one instance and dequeued by another. As long as the RabbitMQ is correctly set up to mirror the queues, there is no SPoF. Usage Smart-router configuration On start, the smart-router will read a configuration object. This configuration will contain: portThe port on which the smart-router will listen. amqpThe amqp connection options. endpointsThe endpoints configuration. Will define endpoints' names and the socket's namespaces on which the smart-router will listen. Actors will connect on these endpoints. This object will be an array of objects containing the following properties: nameEndpoint's name. subList containing endpoint's sub-endpoints. This will determine on which namespaces the smart-router will listen: If no sub are present, it will listen on /name. If sub are set, it will listen on /name/id1, /name/id2, ... queueA flag to determine the queue(s) which will be created for the endpoint. Use ('./lib').const.QUEUEFLAG to set it. If there is no flag or if QUEUEFLAG.actoris set, smart-router will create a queue named with the actorId which has established a connection on the namespace. If the flag QUEUEFLAG.endpointis set, the smart-router will create a generic queue named endpointName/subendpoint. routesArray of configuration objects which will define actions to do for each type of message received on an endpoint. Each object will contains: endpointEndpoint's name (one of those defined in endpointsconfiguration). messagetypeThe name of the event that the smart-router will listen for. action: function(message, socket, smartrouter)A function which will be called once we receive the event messagetypeon the endpoint. It's here that you need to route the received message. Typically, you will do something like: smartrouter.publish(queueId, 'messagetype', message, socket)which will publish a message of type messagetypeto the queue queueid. The socketargument is the actual socket where your actor is connected. By passing it as an argument of the publish()method of the smart-router, the smart-router will be able to send back to your actor the errors which might occur while publishing the message on RabbitMQ (Typically: The queue where your actor is trying to publish does not exist). Handshake protocol If you develop your actors in JS, you only have to use the Actor class as describe in the next section. In any other language, you would need to use a socket.io client and to implement the handshake protocol: - when a new Actor connects, the smart-router emits an empty whoareyoumessage. - the Actor must respond with a iammessage whose payload will be its unique id. These ids have to be unique through out the whole platform. - the smart-router responds then with a helloempty message. - when receiving a message from an unknown Actor (unknown unique id), the smart-router will emit a whoareyoumessage containing the previous message as a payload ( payload.typebeing the message type, and payload.messagethe message body.) - it is expected that the Actor then emits a iamwith its id and re-emits the rejected message. Writing actors In JS, all actors need to extend the raw Actor class defined in lib/actor.js. var Actor = require('smart-router').Actor; MyActor = new JS.Class(Actor, { connect: function() { var socket = this.callSuper(); socket.on('myactorevent', function(data) { // do some awesome stuff socket.emit('responseevent', message); }; socket.on('otheractorevent', function(data) { // do other stuff }; }, my_actor_method : function() { } }); As you see, the only mandatory thing to do in an actor is to extends the connect() function, to get a reference on the socket by calling its parent, and to add listeners on it. Of course listeners must match the messagetype you have configured in routes. Then, you are able to instantiate your actor: new MyActor('localhost:8080', 'endpoint', 'my_actor_id'); Examples Basic An example of basic actor can be found in example/basic.js. The scenario is very simple: - Actor1 starts by sending a 'message' which will be published to the queue actor/2(subscribed by actor2). - The message is routed to actor2 which reply to the queue actor/1/my_actor_id1(subscribed by actor1) - The message is routed to actor1 which reply to the queue actor/2/actor_id2(subscribed by actor2) - ... It stops after two back and forth. Tests The test folder contains different actors used to test the behaviour of the smart-router. agentis the main actor. It will decide of the flow of the messages by adding some metadata. uisimulates a UI. It can request to talk to the external service. serviceis an external service to which some messages can get routed. Use npm test from the command line to launch.
https://www.npmjs.org/package/smart-router
CC-MAIN-2014-15
refinedweb
1,582
65.93
Basic Image Operations - pixel access To load an image, we can use imread() function. It loads an image from the specified file and returns it. Actually, it reads an image as an array of RGB values. If the image cannot be read (because of missing file, improper permissions, unsupported or invalid format), the function returns an empty matrix ( Mat::data==NULL ). Even if the image path is wrong, it won't throw any error, but print image will give us None. Here are the file formats that are currently supported: - Windows bitmaps - *.bmp, *.dib - JPEG files - *.jpeg, *.jpg, *.jpe - JPEG 2000 files - *.jp2 - Portable Network Graphics - *.png - Portable image format - *.pbm, *.pgm, *.ppm - Sun rasters - *.sr, *.ras - TIFF files - *.tiff, *.tif CloudyGoldenGate.jpg: CloudyGoldenGate_grayscale.jpg: Here is a simple python code for image loading: import cv2 import numpy as np img = cv2.imread('images/CloudyGoldenGate.jpg') The syntax for the imread() looks like this: cv2.imread(filename[, flags]) The flags is to specify the color type of a loaded image: - CV_LOAD_IMAGE_ANYDEPTH - If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit. - CV_LOAD_IMAGE_COLOR - If set, always convert image to the color one. - CV_LOAD_IMAGE_GRAYSCALE - If set, always convert image to the grayscale one. - >0 Return a 3-channel color image. - =0 Return a grayscale image. - <0 Return the loaded image as is (with alpha channel). Image properties include number of rows, columns and channels, type of image data, number of pixels etc. Shape of image is accessed by img.shape. It returns a tuple of number of rows, columns and channels. If image is grayscale, tuple returned contains only number of rows and columns. So it is a good method to check if loaded image is grayscale or color image: import cv2 import numpy as np img_file = 'images/TriColor.png' img = cv2.imread(img_file, cv2.IMREAD_COLOR) # rgb alpha_img = cv2.imread(img_file, cv2.IMREAD_UNCHANGED) # rgba gray_img = cv2.imread(img_file, cv2.IMREAD_GRAYSCALE) # grayscale print type(img) print 'RGB shape: ', img.shape # Rows, cols, channels print 'ARGB shape:', alpha_img.shape print 'Gray shape:', gray_img.shape print 'img.dtype: ', img.dtype print 'img.size: ', img.size Output: <type 'numpy.ndarray'> RGB shape: (240, 240, 3) ARGB shape: (240, 240, 4) Gray shape: (240, 240) img.dtype: uint8 img.size: 172800 - img.dtype (usually, dtype=np.uint8) is very important while debugging because a large number of errors in OpenCV-Python code is caused by invalid datatype. - If image is grayscale, tuple returned does not contain any channels. - The number of channels for ARGB = 4. We can access a pixel value by its row and column coordinates. For BGR image, it returns an array of Blue, Green, Red values. For grayscale image, corresponding intensity is returned. We get BGR value from the color image: img[45, 90] = [200 106 5] # mostly blue img[173, 25] = [ 0 111 0] # green img[145, 208] = [ 0 0 177] # red We can also access the alpha value (transparent = 0, opaque = 255) and grayscale value (intensity) as well. alpha_img[173, 25] = [ 0 111 0 255] # opaque gray_img[173, 25] = 87 # intensity for grayscale If we specify what we want to get. Here we specified 5x5 pixels: alpha_img[170:175, 25:30, 0] = [[0 0 0 0 0] # blue [1 0 0 0 0] [2 0 0 0 0] [0 1 0 0 0] [0 2 0 0 0]] alpha_img[170:175, 25:30, 1] = [[137 150 161 170 178] # green [130 143 155 165 173] [122 136 148 159 168] [111 128 140 152 162] [ 98 120 132 145 155]] alpha_img[170:175, 25:30, 2] = [[0 0 0 0 0] # red [1 0 0 0 0] [2 0 0 0 0] [0 1 0 0 0] [0 2 0 0 0]] alpha_img[170:175, 25:30, 3] = [[255 255 255 255 255] # alpha [255 255 255 255 255] [255 255 255 255 255] [255 255 255 255 255] [255 255 255 255 255]] gray_img[170:175, 25:30] = [[107 117 126 133 140] # intensity [102 111 120 129 135] [ 95 106 116 124 131] [ 87 99 109 119 127] [ 76 93 103 114 120]]
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_basic_image_operations_pixel_access_image_load.php
CC-MAIN-2017-26
refinedweb
694
74.79
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives asks if perhaps it's time we stopped exploring that space, and simply picked "The Last Programming Language". What would that language we like? What attributes should it have? And is this idea wise? No, we haven't. An attempt at counterexample : I think we have much to do about typed metaprogramming -- in the sense of "program generation". Typed staged languages like MetaML and MetaOCaml have explored the construction of typed program expressions to be run later, but this covers only a relatively small part of a programming language grammar: we could want to generate patterns (in the ML/Haskell sense), type declarations, modules, etc. in a typed fashion. I know of some attempts at rich typed metaprogramming of languages with binders (eg. Beluga), but internalizing type declarations seems a step farther. Much metaprogramming today is still done at the syntactic level, with no help from the host language to give guarantees about the target program beyond syntactical correctness. Of course, LtU abunds of other examples of "exploring unknown programming language space" such as Sean McDirmid's touch-based languages and dmbarbour's Reactive Demand Programming. PS: I'm sorry to react to the abstract of the talk only, rather than the talk content, but I can't get the video to play and wouldn't want to try to extract the interesting bits out of a one-hour video anyway. I would welcome a transcript, or at least the slides in a downloadable format. In the end there can only be one. Sorry, I couldn't resist. Have we completely explored the PL design space? Two words: "No, moron." I don't think we have fully explored the design space for languages but we might have explored the design space for language features. I don't think the idea of a "last language" is quite the heresy it once was. However, if such an animal does exist though that means you would have to have both C programmers and Smalltalk programmers agreeing that it is significantly better than their preferred language and I don't see that happening any time soon. This idea resurfaces every decade or so and Bob Martin has some interesting ideas about what such a language might look like but most of the presentation is more entertaining than informative. Here is what I understood Bob Martin's thoughts on what the "last language" might look like. The last language: Bob Martin also claimed that Static versus Dynamic typing doesn't matter (much) to real programmers writing real software so it wouldn't matter one way or the other in the last language. we might have explored the design space for language features Not even close! Unless you speak of features one at a time and in very high-level terms. But features interact, thus the feature space for N features is on the order of 2N. Consider, for example: In the case I described, the features interact synergistically. Most arbitrary combinations of features, however, are not effective. (Some - such as lazy evaluation with imperative side-effects - are even disastrous.) Some people have expressed interest in developing a contradiction matrix for software engineering. We are not anywhere near a 'last language'. I agree with your opinion that we'll be wanting a language that works well both near-the-metal and near-the-developer. But we're able to tweak the metal, too (e.g. more focus on FPGAs and other flexible computing substrates), and I expect we'll want to do so for high-efficiency computing (to improve battery life, support sensor-clouds and robotics, allow more kinetic and light-powered computation and pervasive computing). Obviously I meant one by one because if you consider them together I don't think there is a valid distinction between the feature design space and the language design space. His list of features gives an idea of the level he was thinking about so that sort of shaped how I answered. Bob Martin was making a pragmatic argument based the fact that we really haven't seen anything truly unprecedented since the 60's or 70's. Obviously languages have been improving but they haven't been changing in fundamental ways, or so his argument goes. Long on examples and generalizations but no real substantive theoretical meat to the argument. Which is fine, that wasn't his purpose. Bob Martin said the only language that had all the features he expected was Clojure and that there was "no way in Hell" that was the last language but that maybe it held the seed of the last language. Not sure I agree with that but it did pique my interest about Clojure. Another way of thinking about the issue is to do the thought experiment where we are 100 years in the future and they use the "last language". I suspect that Bob Martin would claim that any programmer from today wouldn't have any conceptual difficulty picking up the changes and in very short order become a proficient programmer. I suspect that Bob Martin would claim that any programmer from today wouldn't have any conceptual difficulty picking up the changes and in very short order become a proficient programmer. That's possible (assuming they already know Chinese). But hopefully they also think, "wow, this is so much better than the crap I used to use." This is a rather short term view of the last language. If humanity survives for a significant amount of time, then natural language processing will reach levels comparable to what humans can do. So the last language will probably be a derivative of english. Really? Only Lawyerese could possibly be unambiguous enough. I'd quit CS over that. I'm puzzled how execution on a virtual machine would count as a language feature - isn't that an implementation option, or am I missing something? Bah, you're just arguing semantics - not something a real programmer ever thinks about. How many digits of pi do you know? ;-) Bob Martin's argument for virtual machines was along lines of "because we don't want to be slaves to our hardware" and "we can spare the CPU". Virtual machines are a proven means to an end. (Rant: documentation-by-video is a huge waste of everyone's time: I can read a transcript, conservatively, ten times as fast as watch a video. I can't devote an hour to every interesting-sounding new idea, but six minutes is no problem.) Three times so far the computing community has tried to invent universal general-purpose languages: PL/I, Algol 68, and C++. The first two each had their hour, but came to a bad end. C++ is still important, but no longer looks like the "last PL". A student of history would conclude that such languages will continue to appear infrequently but regularly, and as regularly continue to be superseded.. —Preface to Samuel Johnson's dictionary (1755) Being ill at home today I actually sat through most of the thing. His analysis is about as coherent as a 90 day weather forecast for central China based on the digit sum of the air humidity measurements in a California basement. Before watching the video I did find your comment amusing and was wondering of how far you were exagerating your expression over your opinion about it. Now that I've watched the video, and no offense meant to Uncle Bob Martin whose previous works and other credentials are likely at least 20 times as much as mine, I do find he's missing the point (at least the one I've personally tried to put my deepest thoughts therein). For another analogy, it felt to me that it's as if he is looking at languages and language paradigms with a specific interest in diagnosing their diversity seemingly reaching a point of stagnation (hence his belief that "all has been explored already") ... but from the wrong symptoms. Put otherwise, it's as if after taking a snapshot of human activity on earth and looking at the recurring patterns of the ways businesses are built, along with a closed set of currencies ... you would allow yourself to deduce that all the different ways of making money have been invented already and you can now move on and devise for yourself how to get to the final equation that will make everyone happy without richness vs. poverty discrepancies anymore... but I shouldn't elaborate more on what that would mean (politically). So, yes, if one restricts oneself to considering the Von Neumann computer and Turing's model of computation only, "everything" has been said, sort of, already, because the only remaining specifics depend upon the arbitrary number of indirections and layers you decide to put between languages and processors to compute things for solving problems. But, as dmbarbour pointed out precisely: this knowledge really doesn't matter at all in the end, as you still haven't answered about the issue of how to compose and have all these languages and tools (which are just the same one thing looked at/sculpted in different angles and smoothness of curves for its general shape) get along well with each other for given, specific domain problems. That's where I believe Uncle Bob Martin is wrong: I think it's useless to try synthetize this into one last angle of view (to have everybody agree upon in the end). I seriously doubt that could ever happen, given the way people defend their tastes and preferences (and accumulated skills, their own personal "history") over this or that form vs. the common content (of problem solving approach) they can share otherwise. I surely know and agree with billions of other people of how to find food and shelter to sustain and protect myself and my family (problem is easy to solve; only two mandatory sub-tasks are: make money, respect The Law), but my brain and/or body imposes very strong limitations on the type of work activities I can commit myself to make my income honestly without unreasonable effort! (e.g., even 20 years ago, you couldn't have asked me to become a surgeon if I couldn't sustain the view of blood, as much interesting the medical field can be.) You seem to be saying this is more like the table of elements which was really the beginning of chemistry. In contrast, Bob Martin seems to be saying that we have all the pieces so maybe we are nearing the end of language design. I guess the question is whether there is one way to put the pieces together that, after the fact, is obviously correct and the-way-to-do-it to almost everyone or if there are many different ways to combine language features resulting in fundamentally different (obviously not in the Turing sense) languages. I've watched the video attentively because I was curious to hear how he could have come to such an intuition. I know next to nothing to Clojure, though I've read good things about its design decision specifics many times. I sure wish I had more time to invest to catch up on languages that receive good criticism, of course. But even assuming Bob Martin is right on seeing Clojure has a nice synthesis PL, serving as a sort of witness of "the closure of all the useful paradigms" (just my own words/rephrasing of Bob's message, here) -- though no doubt many people might find this debatable and prefer other language instead, btw -- what I meant is that I seriously doubt it's even relevant. Let's say one is able to demonstrate formally that the core set of useful programming paradigms to practice on top of today's computer technology happens to be the one already put in best synthesis through the Clojure looking glass (either as is, or as a basis for future Clojure-influenced language yet to define, implement, etc). I know humans a bit, I am one of them: I would never bet a cent on the impossibility that someone, and very shortly thereafter, comes up with a better idea that eventually attract attention and adds to what was believed as being the closed set of useful paradigms... but then is not, and we're back to square one. I have no idea if Bob Martin is actually correct whether today's Clojure is a sufficient foundation to back up his idea of attempting such a language design convergence effort. The only thing I know for sure is my basic assumption is the exact opposite of his: TM-computation model-based computer languages as I understand them today are, IMHO, doomed to diverge from each other in both syntax and semantics, completely disregarding their apparent, but very deceptive (IMHO, again) similarities. My own personal "research" interest is in the ways we could possibly investigate to limit the "damages" of language and tools designers' inventivity (damaging when not well-understood, resp. not well-documented, at programmers' side, resp. tool vendors' side) w.r.t. the interoperability constraints the importance of which is, from my own observation, ever increasing all throughout the new use cases for putting computers in human activities and having those compute and communicate/transform "passive data" (e.g., final reports, screen renderings, etc) thru these PLs. We may (I suspect) have only barely scratched the surface of what can be done in a PL to support "abstraction", by which I mean in its most general possible sense the use of facilities within the language to modify the language. (A particularly old and ubiquitous form of "abstraction" support is the ability to define new procedures.) Although, for example, OO languages have vastly more abstractive power than, say, Pascal, I'm inclined to think on an absolute scale all mainstream languages are abstractively pretty feeble. If there can be such a thing as a last language, I think it would have to be one with effectively infinite abstractive power, so that it could be modified-from-within to usefullly address any possible problem domain. I've no idea whether that's possible, but I see no evidence it isn't. Curiously, with the success of such a language, it seems the very concept of "programming languages" would fade into the background, which is the usual fate of old technologies when they pass through the curtain of a technological singularity. The PL design space is vast, subtle, and largely unexplored, we agree. I believe there are a number of properties we should prioritize above abstraction. For example, if we ever want to combine solutions from different problem domains (and we very often do!) then it is necessary that independently developed abstractions can coherently work together. This need for integration should constrain our abstractive power. With composition, we have uniform operators for constructing programs, and the ability to inductively reason about useful properties of the resulting program. This is the basis for local reasoning, since we don't need to peek inside the operands to reason about the composition. This is the basis for scalable development, since we can build large programs without keeping everything in our heads and validate independent subprograms. This is the basis for open systems development since we can reason about how our programs behave when integrating hidden elements based on a few known high-level properties. Composition is simply more valuable than abstraction. I think that the concerns for composition and integration can, and should, constrain our pursuit of abstractive power. I don't object to user-defined syntax, but I believe it must be tamed, perhaps with locality to a module. That said, we can get plenty of abstractive power without sacrificing composition and integration. A few techniques include generalized arrows, dependent typing, constructive logic, term rewrite systems, and generative grammars. We don't need infinite abstractive power. Just enough for the problems we'll actually encounter, constrained by the need to integrate solutions and reason about their composition. Makes sense to me. But don't you think that composition and integration are "simply", in essence, just yet another form of abstraction lever for languages and the processors/interpreters that give a meaning to their phrases thru the computation results? My belief is your works, ideas (and of others') are mostly acknowledging the fact that this very flavor of abstraction has precisely been neglected for too long and it's time to address it explicitly, as integral part of the languages and tools' design process, instead of implicitly or in ad hoc fashions (or even worse, not at all, when downright ignored). I tend to agree that composition should be usefully viewable as a form of [abstraction] — though I freely admit I haven't yet attempted to work out how one might do that within the framework of my mathematical theory of abstraction. (It'd be beautiful if one could also usefully view abstraction as composition; no wonder mathematics is often compared to poetry...) [edit: composition -> abstraction (!)] Composition is not a form of abstraction. With composition, you have 'elements' that you are combining. With abstraction, you don't have an element yet, just a new way to later create one. Consider function composition. We compose functions to create new functions. Be careful to avoid assumptions: I have not said whether the language supports first-class functions, nor have I indicated whether the composition is 'point-free'. We can build some useful programs just by composing functions that come with the language. But that would be inconvenient. We want the ability to take a function and give it a name so that we may use it elsewhere in the program without massive copy-paste efforts. Hopefully, that should clarify the difference. We often think of 'functions' as being abstractions due to the connotations in day-to-day usage. But functions are not abstractions. The abstraction is when we name a function for later use. John explains this as a language transform - i.e. after abstraction, we have a new language that is almost exactly like the old one but has a new term in it. (More detail on his blog.) It is worth noting that certain semantics, together, can give us abstractions. For example, if we do have first-class functions and we can trap the input to a function in a variable (lambda-calculus style), then we can model 'let' statements. We could benefit from a little syntactic sugar, but this is primarily a semantic approach to abstraction: the syntactic sugar requires only a local transform and is tightly coupled to a semantic form in a 'host' language.. Thank you for fixing my possible terminology usage issue, there. I might have confused the two notions for a long time. I'm not sure whether I've put enough thoughts in this, to even agree if only with myself in a definitive way... The abstraction is when we name a function for later use. John explains this as a language transform - i.e. after abstraction, we have a new language that is almost exactly like the old one but has a new term in it. Now this, too, will definitely be much helpful to me, as it does completely relate to this idea I had of a high level algebra basis of a language-/transform-interop (may-be-)infrastructure (granted, still in big need of clarification for what I tried to express, when time will allow...) And yes, I also have a lot to catch up with re: John's works on these matters. Thank you for the links. John Shutt would like to break the chains of semantics, e.g. using fexprs to reach under-the-hood to wrangle and mutate the vital organs of a previously meaningful subprogram. No, I wouldn't. There are two distinct issues here: There may be some misunderstandings about the general mathematics, would could in turn muddle discussion of properties of fexprs (and may even have already done so). So I'd better address the general stuff first. First thing to keep in mind: the word abstraction, in association with the general mathematical framework, is at least to some extent a legacy term. I devised this theoretical approach to study properties of programming languages because I wanted a "theory of abstraction", and I didn't think (and still don't) the most usual theoretical approaches were structurally capable of discerning the essence of abstractive power. My preliminary explorations of the approach so far have all focused on abstractive power (which is how it relates to my blog post), so the only name I have for it is "abstraction theory". But. After I finally succeeded in casting this approach as rigorous math (techreport), I realized that although the word abstraction and its kin occur often in the informal discussion, their only use in the formal definitions is well downstream in the development, as a less specific shorthand for the key relation between languages with expressive structure ("B C-expresses A for observables O (or, B is as abstractive as A)"). Second thing to keep in mind: When I do talk about "abstractive power", I'm not talking about ability to violate encapsulation, such as reading and even writing "private" data. That would be expressive power. Abstractive power, as I've said, is about ability to modify the language from within so as to get to various other languages. Modifying the language is different from modifying particular objects within a program. For example, when you define a new module, you might want to make its internal state private, or make it public. A language that forces you to make it private is abstractively weaker than a language that allows you, as the author of the module, to choose whether you want to make it private or not. A language the forces you to make it public is also abstractively weaker than one that lets you choose. The abstractive power is, essentially, the power to choose. This is a subtle thing, because the choice wouldn't be meaningful if the privacy, once chosen, weren't binding on future programmers: if you, the author of that module, choose to make its internal state private, there can't be any choice that could be made downstream that would undo that privacy; it wouldn't really be privacy if it could be undone. Lightning sketch: A programming language is viewed as, essentially, an infinite-state machine, where the states don't have identity, so really all that matters is the sequences of labels on the transitions (the labels being terms over a CFG). Because the labels aren't limited to source code, this view can model arbitrary meta-information ("behavior"); some subset of possible terms are designated observables. You can view the overall shape of this infinite machine through the lens of an expressiveness relation between its states (such as Felleisen's expressiveness), and that's a language with expressive structure. My abstractive power relation is a relation between the shapes of different languages-with-expressive-structure. Although this approach is explicitly looking only at these vast networks of transitions between languages, never attending directly to fine substructures within a term such as, say, functions being composed with each other, nevertheless the rules of interaction between fine substructures should be expected to affect the overall shape of those vast networks. So one should be able to derive formal results about the 'abstractive' consequences of fine-structure language features. Which is (in part) why I suspect it may be possible to study composition using this approach. The fexpr-related issues raised by your last paragraph are sufficiently separate that they'd be well placed in a separate post, which is what I'll do with them. Modifying the language is different from modifying particular objects within a program. I've found that, in an open system, the ideas get to be quite entangled. We cannot transform a reference, only wrap them. We can transform the language for a few particular objects in the system, but only those we are responsible for maintaining, and that is further constrained by compatibility and integration requirements. Conversely, adding services or plugins to a system might be considered to be a form of language extension and abstraction. If we have easy code distribution, we can feasibly 'abstract' new network protocols, overlays, and distributed frameworks. We can also understand language as a first-class object, especially in the context of staged programming models. can't be any choice that could be made downstream that would undo that privacy Above, you did not ask merely for "abstractive power". You asked for "infinite abstractive power". You will never see infinite abstractive power if other developers can choose to deny that freedom. Our ability to make choices will always be constrained in an open system. Compatibility, integration, reuse, et cetera will constrain us, both upstream and downstream. Developers have no need for a 'pretense' of abstractive power that they cannot effectively utilize in practice. Indeed, it is better if we remove an illusion of choice where there is none to be made. I think if you ever work out a 'bill of rights' for abstraction, to give developers freedom to abstract insofar as it does not abridge this right in other developers, you will end up favoring composition and integration above abstraction. We can achieve a lot of real, practical abstractive power if we keep our priorities in order. Well, of course we have ... to some extent, at least. But by "extent" it's all about how the question is phrased, indeed. Language space? I take it it's about PLs, right? (haven't had the time to watch the video yet) Then, if the word "programming" is used, I'd also take it "by default", that it's about the current "binary computer" technology space? Finally, then, yes the last language does certainly exist: isn't it any one that is capable to encode in one syntactical form or another the most general computation model (known as of today) we aim to express with the language phrases thereof? So I'd rather conclude it's useless to try think of getting to meet/recognize "the last language" there: for it's been known ever since Turing's works and it just so happens that this last language has many flavors and tastes in the way we write it, but is essentially the same working concept shared and used by our brains: it's much like with our 26 letter alphabet and English or French, where my hand writing and use of english words is not especially better or worse than anybody's else, but what does make a difference sometimes (though of course very rarely(*)), is whether or not the ideas I try to convey in English vs., say, French, require less effort from me in the former than in the latter, precisely because the English corpus of phrases is better prepared to receive them (e.g., with coined up words, acronym jargon, etc.) in this or that specific instance of ideas (i.e., "computations"). AFAIC, long story short, if one asks me, I'm much more interested in seeking for "The Last Interoperability between (TC-)Languages" (thus, which are unavoidably in legion of forms, but essentially the very same thing at work in machines) than in "The Last Language" that I look at as a rather trivial and unconstructive question, actually (in my answer's PoV above, that is). [Edit] (*) yes, I did write "rarely" up there, that I think is consistent with my other belief that the "language interoperability" topic is way much less of an issue for natural languages than it is for PLs, given my basic assumption that our brains' computation power is strictly greater than TMs, precisely (then it makes sense to me to try seek for some global interoperability improvement, if we can, constructively, between TC languages, as opposed to natural languages where our brain power can find mysterious ways to workaround communication ambiguities ... I suspect not unrelated to the fact we have those five senses, btw) We know, with a reasonable level of certainty, that it exists somewhere and sometime in our universe. Maybe our galaxy, even. Bob Martin offers some excellent lines (at 27m22s):. I would happily offer many suggestions about what to take away: divergent functions, message passing and events, synchronous IO, global namespaces, global state, ambient authority, implicit time, arbitrary delay and recursion, explicit iteration and for-loops, exceptions, randomness, floating point equality. Recently, I've been working out how to take away even local state, i.e. eliminating 'new' calls from the runtime behavior of a program. My motivation is to resolve the state vs. upgrade problem, as it applies to live programming in eternal systems. Rather than modeling state as living inside the application, all state belongs to an abstract machine provided by the runtime then influenced by the application. But being rid of local state is only feasible because I already eliminate most need for state: caching, memoization, buffering, and queuing updates are easy to achieve in RDP without explicit state; there is no need for iterators, temporal semantics offer me synchronization without stateful semaphores or workflows, et cetera. Also, the above solution would be little different from global state if I had not also eliminated the global namespace and ambient authority. The lesson: eliminating cruft and complexity allows me to find even more 'features' to remove. Though, I shouldn't be surprised: I should have learned this much the first time I cleaned a garage. Today, we program with these massive Rube Goldberg languages. Many features seem essential because, if you remove any one of them, the machine stops working. But don't let that deter you! Subtract, and find a new idiom or solution for the pieces that fail. Subtract. Subtract, and simplify. If we ever reach a point where we do "not have any code any more", but we can still meet user requirements, we know we're done! (Aside: One might propose that the ultimate subtractive language is subleq. But then we need to make it easier to specify the data...) subleq You've got it backwards I think. I don't believe current general purpose languages are like Rube Goldberg machines: they have enough features that you can encode what you want rather directly, even if it requires a lot of verbosity. On the other hand, when you use a stripped down language (end-user or the old Lotus Notes Script), then you really do start programming Rube Goldberg machines! My languages always tend to be in the later space where rube goldberg machines are required to do something, b/c the languages are not general purpose. So signals are great for continuous interactions, ah, but what about those discrete actions like pushing a button? Ugh. You can do it, but its not pretty. The tricky part is designing a higher-level language that is somehow flexible enough to avoid rube goldberg machines. Rube Goldberg machines do not obviously appear to be Rube Goldberg machines when we are working on one small component of the machine. Individual components are simple and direct. We must stand back to see the complicated, fragile mess. I agree with your observations on 'stripped down' languages. Suppose we took a 'general purpose' toolkit for building Rube Goldberg machines. Then we strip it down, remove the 'dangerous' tools and materials - but don't replace them with anything. Now, our engineers need to be 'clever' if they wish to get work done - more so than before. The failures of those 'stripped down' languages should teach their own lesson: It is easy to aim for simple and achieve simplistic. Effective 'negative paradigm' design is not obvious. signals are great for continuous interactions, ah, but what about those discrete actions like pushing a button? Ugh. You can do it, but its not pretty. It can be pretty to model a button with a signal. But pretty things might seem out of place if the setting isn't right. I might ask: should I attribute the ugliness to the button model? Or should I attribute the ugliness to the setting? Buttons on modern PS3 controllers are analog, e.g. press harder to jump higher. We use duration of a button behaviors in significant ways, e.g. hold button to control camera zoom. We use multi-button behaviors: ctrl+alt+delete, music board synthesizers, mouse+keyboard actions. Buttons have physical state that varies over time: up or down. I am finding it very difficult, Sean, to attribute any ugliness to the modeling of buttons with signals. The tricky part is designing a higher-level language that is somehow flexible enough to avoid rube goldberg machines. Yes. We want powerful facilities for abstraction, and we want an effective set of compositional properties. But the real trick is to look at this problem and not immediately think 'tradeoff'. Language design is not a zero-sum game. It depends on what you do with your signals. If you want something to popup when you press a button, which is the common case, then you have to come up with a way getting that thing to popup. Yes, we could imagine the buttons are analog with continuous states, but that still does not help us get that popup on the screen and stay there once we've pushed it! The real problem is that the world is messy and nice clean programming models can't deal with that messiness very well. They instead would rather reinvent the world to be as clean as they are, but the world can't be changed so easily. [edit] A general purpose language doesn't suffer from this problem (in general) since you can always express exactly what you want to do, you have complete control. A rube goldberg machine, by definition, is some sort of hack to deal with inflexibility in the language, perverting some behavior to do something else. Sort of like doing math in Kodu by creating and destroying robots and then having another robot "seeing" them to ascertain some sort of global condition (that you can't express directly in Kodu). It seems you have difficulty gathering signals into state. I vaguely recall similar troubles in Coding at the Speed of Touch, though your state model was not what interested me in reading so I felt no need to comment on it. The state model I currently use in RDP is based on accumulators: newAccum :: (Signal s, Signal s', SigTime s ~ SigTime s', HasLinkRef v a) => (st -> (SigTime s) -> (s x) -> (s' st)) -- accum function -> st -- initial state -> (a (Unit a) (s x)) -- provides signal to accumulate -> v (a (Unit a) (s' st)) -- provides state signal This is a fairly simple model with some nice properties. Foremost, it is compositional: the input is a signal source, and the output is a signal source. I also have a clear, monadic start-time and initial state for the accumulator, so I avoid the semantic concern common in FRP models with accumulating history. Since the accumulator controls its own state, there are no concurrency conflicts. External influence on state is indirect. The accumulation function itself might be an integral or something else. I allow multiple signal types to live in my model, i.e. both discrete and continuous signals can coexist in the same program. [To clarify, since you and I use 'discrete' differently: a discrete signal is one that is guaranteed to hold a constant value between updates, and to have a finite count of updates during any period of continuous time. You use 'discrete' primarily to describe events. I call events 'instantaneous'. It's a bit confusing, but I think we can learn one another's languages here.] A non-analog button would probably be modeled by a discrete signal. Now, to model the persistent popup problem (in no particular order, since I don't think sequentially): (de,mon) <- newDemandMonitor In summary: The button's state signal indirectly causes a GUI control signal, which is promptly accumulated, influencing accumulator state, which is observed by a view agent, translated into a scene-graph signal, which is pushed upon the renderer. This all happens continuously, though the rendering element might sample the model discretely. The above sketch is a few modules shy of an implementation, but I don't foresee any trouble with it. There is a bit more to be done - e.g. I recently sketched some code for redrawing only dirty rectangles. the world is messy and nice clean programming models can't deal with that messiness very well Our models of the world are messy. The world itself seems to be built on some pristine physics beyond our ken, unencumbered by human fallacy. We will face much ugliness and semantic conflict for interaction and interop between human models. So our languages should be effective at this. Interacting with diverse models in an open, federated system was among the founding requirements that got me started on language design in 2003. But I don't agree with your conclusion. We have no need to add to the mess. Garbage-in garbage-out is acceptable. Multiplying the garbage is not. First, do no harm. This means: predictable failure modes, graceful degradation, resilience, and continuous consistency management. I've already described one example of the latter: the accumulator will continuously decide what happens when there are conflicting signals for whether to raise a popup or kill it. This effectively dodges the issue of race-conditions, lost or out-of-order events, and so on. Even better if we could filter the mess and extract the useful signal. That, I leave to developers with specific domain knowledge. like doing math in Kodu by creating and destroying robots and then having another robot "seeing" them to ascertain some sort of global condition GlaDOS approves. [addendum] My mention of Rube Goldberg machines was artistic hyperbole, and perhaps that analogy has run its course. I would agree with a claim that developers need access to Turing-powerful computation facilities. Given an accumulator and real-time functions, for example, I can model Turing-complete calculations. The extra hoop for developers is rarely needed and very useful: the computation will be incremental, and it becomes easy to add explicit job control (pause, reset, etc.). In context of my interests, the job control benefits are not trivial - i.e. I'm interested in open distributed systems programming, where there is no single process to kill. Incremental computation allows us to more easily share a partial solution, which is useful in a variety of problem domains (UI, AI, control systems, etc.). I have not suggested we limit power, only that we control it. We do that by removing features. But we must sometimes replace them with a tamer alternative. First, YinYang doesn't have the problem I described, which is why like the behavior-based programming model so much. You just specify Pushed(Button), Open(Window) and you are done. Actually, it really should just be Button.Pushed = Open(Window), but I'm not there yet. Second, you have a powerful model, but it doesn't seem to be very accessible. Perhaps this is an example of the Haskell-problem: a language that requires such logical minds that only a handful of people on Earth can really take advantage of it and realize its beauty. This isn't the Rube Goldberg problem, to be sure, that is completely something else. Third, the universe might be based on a simple model of computation (to borrow from Wolfram), but the model has been running for billions of years. The state of the current universe is extremely complex, although we could apply some laws of physics to predict how this state will evolve, we have no chance (right now), of figuring out how the universe got in its state, and we don't know enough about this state to make accurate predictions. Instead, at best, we can react to what we know about the universe, and must forgo a perfect model of it. Related: Elephants don't play chess. I do not know the meanings of Button.Pushed or Open(Window), but I guess you mean here that 'Button.Pushed' is a signal and 'Open' describes a 'one-time' action from YinYang. In that case, you are transforming a signal into a stateful event. Event-based programming has its own problems, but not too bad for local tablet games. Earlier conceptions of my RDP model were similar: developers simply specified arbitrary conditions, and the agents would eventfully 'become' a new agent when those conditions are met - i.e. a reactive finite state machine. I eventually chose against this, because it hinders both anticipation and runtime upgrade. So I developed accumulators, which allow anticipation. I'm still working on the upgrade issue. My model is very simple, and I expect people will find it intuitive. The few people I've managed to sit down were able to grasp the concepts very quickly while working through some simple pseudo-code problems. Of course, even OO might seem inaccessible to a Smalltalk programmer if presented as a model in Haskell. I must eventually develop a reactive demand programming language. I'm not sure what you're objecting to on point 3. We don't need to know the state of the universe to do useful computation. We only need to know the states of our sensors, and that's a lot more accessible to us. YinYang isn't based on signals (behaviors are quite different), Superglue was. If to say, the event maps one to one with action, then there was no problem with signals; all details can be hidden in the plumbing. If you needed to open that plumbing up, that is when you got into trouble with complexity. As a result, I went to great pains to avoid opening up the plumbing, hence the Rube Goldberg machines...Ah, whenever I think of Rube Goldberg Machines, I can hear the Looney Toons' Rube Goldberg theme* in my head. YinYang is incredibly stateful, every single tile that executes can add its own state to the executing object, the state is initialized on first execution, updated as the behavior is re-executed, and cleaned up when the tile is no longer in execution. I feel liberated since I can now not feel bad about state, it is very well modularized, and state was never really the problem, it was the poor modularization of state. On point 3, you were claiming that the world was somehow elegant, which implied to me that we could have very good models of it. But now you are claiming otherwise, maybe I misunderstood your point. * Powerhouse by Raymond Scott I would say that YinYang has signals. A tile can observe state in its board. The value of that state changes over time. A signal is a value that changes over time. Therefore, observing the value of state over time implies a signal. But that is implicit. If you ever try to formalize YinYang, you would benefit from modeling signals more explicitly. Even well-modularized state can cause problems. (1) State can become inconsistent with an observed resource (especially under conditions such as disruption, delay, temporal indeterminism). (2) State often makes it difficult to modify the system in a cross-cutting way (e.g. editing the code for a particular tile) without losing important work. The world is elegant, but that doesn't imply we can have very good models of it. There's a difference between creating a model and living in one. Since you've had your 'Elephant' reference for the day, see Blind men and an Elephant. Someday, I will make the argument that behaviors/tiles are fundamentally better than signals (note that Conal calls also his continuous signals "behaviors," but mine are different, more like Brooks'). It should be more obvious in the next draft of my paper (I'll post that on Monday or Tuesday), but I still have some more deep thinking to do. There are things you can do to protect your state, basically heavily encapsulate it, but you are correct that editing the code for a tile, at least one that is atomic (implemented in C# code), requires reloading the tile. But otherwise live programming should be a bit more robust in YinYang than SuperGlue when I finally get there. Nice parable. Perhaps one day we can tell the tale of the blind language designers, each with their own philosophies based on their experience, each is right according to their own perception but all missing the complete picture. Behaviors describe continuous observation and influence over a pre-existing environment. Signals describe values over time. To observe an environment, one must observe a property of that environment. To continuously influence an environment, one must manipulate a property observed by some element in the environment. A signal is simply a value or property over time. Thus, behaviors imply signals both as input and output. Signals serve the same role in a behavior model as messages serve in an actors model. They are the basis for communication. Just as there are many roles for messages (command, query, reply), there are many roles for signals (control, live query, response). Claiming 'behaviors are better than signals' is absurd, analogous to a claim that 'actors are better than messages'. Reactive demand programming has behaviors. Indeed, agents in an RDP system are very analogous to tiles in YinYang. The main differences are in how they are structured: YinYang structures tiles in a hierarchical containment relationship, supporting a subsumption architecture. RDP agents are structured in accordance with the object capability model for secure service mashups, and duration coupling for resource control. SuperGlue was a mix of paradigms: OO, imperative, reactive. IIRC, conditional statements, state, and effects were only provided in one of the paradigms (imperative) so you were forced to 'open' signals to make useful decisions or do other interesting things. I don't believe that the signals were the problem, there. An explicit model of signals is very valuable for a behavior model, especially if you wish to control timing and consistency while allowing a high level of parallelism. You would need to 'open' the signals for certain FFI and legacy library integration. But the basic library of behaviors should mean that regular users never open signals. [edited to make it shorter] live programming should be a bit more robust in YinYang than SuperGlue Two observations I've made: it is difficult to manage a live program if many 'new' objects are being constructed at runtime (the program can get away from the developer, esp. if distributed), and we want the ability to transition state across changes in both client and server (e.g. keep documents, data, subscriptions). This has led me to resurrect techniques for 'externalizing' state rather than encapsulating it. External is not global - i.e. no need for ambient authority. It simply means that access is separate from life-cycle. If all state is external, then an application would never call 'new'. It would, instead, 'discover' state existing in an abstract machine or database, and manipulate that state. This allows us to model live programming as simply generating a new application that picks up where the last one left off. For changes that require a reformat, the IDE would raise a dialog asking developers for advice on how to transition the existing state (with a lot of common strategies available). That's my current vision of how to get 'robust' live programming. I think it will also help with orthogonal persistence. This does involve replacing the 'newAccum' I mentioned not very long ago as being how I currently model state. next draft of my paper I look forward to it. I look forward to it. Its up. The lesson: eliminating cruft and complexity allows me to find even more 'features' to remove. Though, I shouldn't be surprised: I should have learned this much the first time I cleaned a garage. Don't you think that if you find yourself with the need of removing more and more features as time passes (e.g., compared to what you have thought of removing in the language design's state-of-affairs surrounding you 30 years ago) it's precisely because: 1. the number of languages, implementations, and interop use cases of the processing tools grew totally out of your own control; 2. those bad boys in (1) also eventually decided 15 years later to go global and to be able to hop in and out of your by-that-time-not-even-floppy disk-equipped- box ? If so, my guess is that much as in the analogy for your garage (and mine), the cleanup process effort's estimate and planning is highly dependent upon the number of folks having access to your garage and what are their interests for the natures of things they toss and store into it... ;-) I agree (if I'm reading your point correctly): it is very difficult to remove a feature from a language that is already in use, and any new language should interop effectively with existing systems. There are a lot of strategies for interop, though, and many ways to support 'legacy interaction' without compromising the language. It is a non-trivial issue, but I am very satisfied with my solution to it. His reasoning is just absurd. Language evolution is about finding better and more high-level abstractions. Picking a couple of random examples where that allowed getting rid of some low-level feature, and making a sweeping generalisation from there? I could use a similar line of argument to conclude that all language evolution ultimately is about writing more and more colons (you know, machine code: no colons; assembly: some; Pascal: some more; ML: double colons for cons; Haskell: omnipresent double colons for types; Scala: triple colons!). His story about typing is equally silly, completely self-contradictory, and he doesn't even notice. His observation that progress is made by discovering liberating constraints is spot on. That is how we achieve those better abstractions - i.e. abstractions that we can reason about, compose, reuse, integrate, scale, and optimize. I am certain Bob Martin wasn't thinking of it in such positive terms. I call his lines 'excellent' because they speak of a greater philosophy than he seems to recognize. I especially like the last line I quoted: "If there's another paradigm, we may not have any code any more." If our next paradigm lets us meet general purpose requirements without any code, that would be more ideal than any paradigm I can imagine! With regards to his argument, I agree: Bob Martin's argument was classic fallacy, cherry-picking of evidence and examples to draw the conclusions he wants, combined with some ridiculous 'contagion' arguments (equivalent to: C is the last language because even Haskell uses '==' for equality, which is C syntax). To me paradigms are mostly about eliminating or limiting a very general feature so the source code better reflects the intention of the programmer. Obviously it must be replaced by something or we need to make clear how it is being limited so we are also adding something but it is the flip side of the same coin. Is there some Paradigm you don't think that applies to? Wrong place Can we pick a Last language? No. That's a stupid idea. Programming Languages, like Natural Languages, are subject to change over time. The process is less organic and defocused, of course; it's measured in terms of implementations used by many rather than in terms of idioms used by individuals. But the same principles apply, IMO. There can be no last language while technically creative people exist. There are entire syntactic paradigms we haven't explored or have barely explored -- essentially any language whose parse pattern doesn't reduce to a tree (funges, DAGs, etc) is so far outside canon that we don't even have terms to express its syntax. There are semantic paradigms we barely recognize and have never coded in terms of (Petri Nets, for example, are a nice abstraction which could be the basis of theory for parallel processing and sequential allocation of finite resources). There are others we've coded in only a little bit, where an infinite variety of possible enhancements remain unexplored (For example Unification Languages where it's possible to restrict the "universe" considered when attacking a subproblem). Here's what I think we should be working on, which, so far, we aren't. When we build abstraction on top of abstraction on top of abstraction, and do things in terms of FFI's and translation layers between modules written in different languages (and we do this more and more often as our projects get larger and larger) we often wind up with a horribly inefficient implementation of functionality -- written in terms of abstractions that aren't optimized in the specific case because they're written for a general case, or aren't optimized because they're written in a language which doesn't provide guarantees of properties which the specific program has anyway, or because there's translation code that moves data between different representations or runtime environments needed by code compiled from different languages. That, IMO, is what we need to fix. We need to find ways to efficiently optimize what's there -- treating things that the program does not do as sufficient guarantees for optimizing the things it does do (optimizing the functional case when something like assignment *does* not happen, because the programmer does not do it, as opposed to only the cases where we can prove it *cannot* happen, because the language forbids it), leaving out representation switches when they can be made unnecessary by reorganizing code in terms of the representations available, etc. In short, we are doing composition, integration, and abstraction badly (in terms of code size, speed, etc), and we need to figure out some serious theory and practice about how to do them well. I think performance issues are caused more often by bad domain models or concurrency models than by inefficient implementations. However, I do share your position:. I think your title absolutely nailed the idea I was going for; Separation of Abstraction from Performance. That's the grail. Just as a tremendously simplified example, consider someone implementing a stack of integers for some reason. He uses a library's "list" abstraction. It's implemented as a doubly linked list with individual nodes dynamically allocated, and a pointer to potentially variable-size data in each. And finally, if it can be proven (or if the code asserts after each insertion) that there are no more than 60 elements in the list (perhaps corresponding to minutes in an hour, seconds in a minute, degrees in a 1/6-of-a-circle arc, or some other feature of the problem domain), then instead of dynamically allocated individual nodes, the entire list ought to be allocated in a chunk, leaving out pointers altogether and replacing the "head" pointer with an integer indexing into the array. In short, programs should be seen as specifying desired results, or abstractions -- and the underlying system seen as free to find the most efficient method of achieving those results that it can, whether or not the method bears any resemblance to the algorithm outlined in the code. And right now that's what we're failing to do. When each feature of an abstraction adds its own cruft, the abstraction becomes "too expensive" in performance to use for a simpler task.. Apologies for being terse -- I'm in a bit of a hurry. It sounds like you're looking for serious progress on partial evaluation and/or staged compilation. I'm with you on that, but I wonder if it won't require serious uptake on dependent-types first (or perhaps it could go the other way). Any talk about a "last language" pretty much depends on programming languages being like mathematics and not natural languages. I'm not so sure about that. It's not like mathematical language is static. In fact, programming languages are (arguably) just a proper subset of all mathematically-derived languages. We got here from Frege's crazy logic diagrams, Principia Mathematica, Goedel numbers, and eventually to C++, Java, etc. Speaking of Goedel -- perhaps all of this talk of the "last language" mirrors the debate about the foundation of mathematics 100 years ago (and will meet with similar ends). It seems to me that calling the likes of C++ and Java "mathematically-derived" would make the cited mathematicians scream in agony. I have never seen anything remotely assembling a mathematical definition, especially not for the former. Nor do I think you could ever provide one that would even have the slightest chance of meeting mathematics' most basic standards. A counterpart of your remark is the opinion that a formally defined language would necessarily be "not enough" on some aspect, breaking the idea of a last "complete" language. This is formally true for languages with consistent static type systems (in the extended sense of "everything that can be said about a program without running it"), but this may or may not be extended to other aspects of "language formalisms". For example, it seems harder to characterize how dynamic semantics may be "incomplete". I think we need formal definitions for our programming languages, and I postulate that the clarity of those formal definitions will always allow us to see that there is something missing, unexplained or badly understood, and that we can go further (in the situations where it is necessary). So definitely no last "mathematically-inspired language". only applies to a formal system that can perform the arithmetic required for "Gödelization" that is used in the proof. Assume we have a formal system for defining other formal systems. Call it the meta-system and the systems defined by it are object-systems. The fact that an object-system can peform arithmetic and is thus subject to the incompleteness theorm does not imply that the meta-system is also subject to the incompleteness theorem. Arithmetic is a red herring; it's only used to encode data structures necessary to formalize a host version of the system inside the system itself. Even if the meta-system can describe itself -- and it better can, or its severely limited as a system description tool and thus unsatisfying -- it cannot give strong guarantees about its description (consistency, termination, whatever). So you need another, more powerful system to reason about it. I don't think there is a workaround. The point of a formal system is not to "prove everything"; we know it can't. It is to be a good framework to work on the interesting things we want to prove. Trying to go "as far as possible" is not necessarily the most interesting things; there are hard problems to be solved even in the systems of today. You only need to go as far as your current objects of study dictate. To take a concrete example, the "module system" question is still unsolved : most languages have module system, but some, mostly in the ML family, try to formalize and statically verify the use of the module system. This is a very difficult problem and the current "mainstream" solution (SML and OCaml in particular) are not satisfying. There has been good research in this direction, but we still don't have a compelling picture that would make you say "ok, we'll put just that in the next ML/Haskell"; because it's hard. And to study and work on that, you don't need an incredibly powerful type system; you can work at the F-omega level, possibly with singleton types (which are very, very restricted dependent types), but you probably don't need the powerful features of today "big" type/formal systems with full dependent types, universes, etc. It's about precision, not power. The property you want is not self-definition; it is self-reference: the ability of a formal system to make statements about itself from within itself. That is what is at the heart of the two incompleteness theorems. But a language and programs written in that language are generally considered distinct entities (for example most people don't consider writing a program to be the same as extending the language) Hence the emphasis on axiomatic systems in the two incompleteness theorems where what is being defined is defined within the language rather than external to the language thus providing the required reflectivity and enabling self-reference. I'm not saying mathematical language didn't evolve, I'm saying it isn't going to evolve much further; especially "core" mathematics that are used across virtually all mathematical and engineering disciplines. And the conformity is almost total at the semantic level with only minor differences at the syntactic level. Obviously out at the fringe of mathematics there are new ideas and new notations and a lot less agreement but focusing on that misses the point. We really need to distinguish between "programs as languages" and a "programming language". A modern programming language typically provides the ability to define new abstractions as well as a set of predefined abstractions. A program defines a language in terms of a set of abstractions and then says something in that language. For me the essence of a programming language is its ability to define new abstractions and not the predefined abstractions which just provide a common sub-language for programs. From this point of view a programming language is really a meta-language for defining other languages. This idea is not the least bit novel as people have been pointing it out for decades. So from my point of view if you are talking about a "last language" then this implies a focus on the meta-language role of programming languages. So a "last language" would need to answer the question of how we define abstractions and their relationships but need not say anything about what abstractions we should define. Ha! So well put, in your last paragraph. I'd only add: and I fear that ain't gonna be easy, for it seems to me very few of us bother even to mention these issues or express strong enough concerns, beside people like you, or dmbarbour, or etc. The pre-scientific evolution of human language was under no obvious pressure to select features oriented to formal languages, suitable for the investigation of abstract concepts in mathematics and computer science. To suggest that the language space in those fields has been more or less explored, also suggests that the conventional tree language is adequate as the main basis for developing formalisms and programming languages. I strongly question that view. An arboreal language is one whose syntax is tree based, in which most if not all branches of the syntax tree may be arbitrarily deep (resulting in trees with high structural variability), and in which there is no explicit notion of abstract memory. Natural languages, and those in Chomsky’s hierarchy are arboreal. There are at least four aspects of arboreal language, that render it problematic as a template for formal languages, and parallel programming languages in particular.. Graph based languages bypass SPR, but are delimited by requiring the solution of the NP-complete subgraph isomorphism problem for the application of a rewrite rule. Secondly, arboreal language historically has a serial character in that a transmission of the shortest string of linguistic entities capable of expressing a basic relationship, can only describe a single relationship between objects. Even if extensions are added to a tree language in order to describe parallelism on the syntactic level, as is the case with most parallel programming languages, SPR complicates the expression of shared structures in parallel processes. One factor obscuring the limitations of arboreal languages until recently, has been that there was never a need for the development of cues and mechanisms for many basic sentences/relationships to be semantically processed at the same time. A language system designed from scratch to be transmitted and processed non-serially, is more likely to be able to support a coherent form of simultaneous semantic processing. Thirdly, arboreal trees exhibit a high degree of variability, where any individual branch may be arbitrarily long, requiring a complex parsing phase before semantic processing. Within the context of parallelism, it is also difficult to access and process multiple parts of an irregular syntax or semantic tree at the same time. The fourth factor relates to parallel computation. In sequential environments, it is the sequence of operations and data transfers that are principal in defining a sequential algorithm. In parallel computing, the placement of operations and data transfers in the machine environment, is also important. Arboreal trees alone do not express spatial information, indicating data transfers and resource allocation within an abstract machine environment at the syntactic level. To argue that these factors can be dealt with by plumbing code or “under the hood†dynamic semantics, is questionable, because of the well attested issues that have been encountered in trying to make arboreal parallel languages work well. A spatial approach to language snd computation, which bypasses the above factors was discussed here. I think the approach has the potential to support Robb Nebbe’s list of features mentioned above (with the exception of the stipulation against gotos), and is also close to the metal, as suggested by dmbarbour.. Programming languages are not "arboreal" according to your definition, because of the name bindings. When you manipulate a program, you have to take the binding structure into account, and this is one of the reasons why most programming languages -- say, lambda-calculus with names represented as strings -- cannot be described with a CFG only. The binding structure is richer than that, and you cannot guarantee the processing, transformation and production of only well-formed programs if you're only equipped with bare CFG-manipulation tools. Arboreal-ness refers to syntax only. The lambda calculus is defined by a CFG. Name bindings exist on the semantic level, surely. I recommend reading The search for the perfect language from Umberto Eco, which is, among other things, about a centuries long search for the language of Adam, which was neither "natural" nor "mathematical" but surely well designed. Today the "last language" is usually a topic for programming newbies, idealists and kooks. It doesn't mean of course that no one shares the ennui sentiments of Bob Martin. Few people expect another "cambriam explosion" in language design or that after a period of purification and asceticism we are reborn in an incestuous paradise where everything works with everything else. BTW I wonder if the ambition of singling out a language as a "last language" is even politically correct? Users of other than the last language would inevitably be discriminated. What about diversity? Isn't this our ultimate value? | Few people expect another "cambriam explosion" in language design The idea of a last language is indeed a bit silly. But trying to make language environments less imperfect in a general sense is valid, because it could result in Black Swan moment, leading to a cambrian explosion in language design. If you go to POPL, say, and ask around, you will find a significant number of people there who are of the opinion that if you have Coq and a macro assembler, then you don't need programming languages any more. This is not a majority opinion, but it's a pretty big minority. I don't entirely agree with this position, since I think languages arise from fundamental semantic structures. But I do think it makes a lot of sense to explore building languages on the Coq+MASM substrate rather than with "compilers" or "interpreters". That is, we maintain invariants by stating them and proving we maintain them, rather than by elaborately circumscribing our language so that it is impossible to say things that break invariants. In categorical terms, the idea is to take a semantic category, and work externally with the morphisms of the category, rather than internally in the internal language of the category. Of course, any such fundamental change to the infrastructure of language design means that a lot of things we took to be fixed truths aren't, and that could shake up language design quite radically. Could you elaborate a bit more here? I find that very interesting. The way I understand your post, It makes me think of Adam Chlipala's work on Bedrock. Do you have other references? I'm also not sure what you're describing in your "categorical" remark ("working externally with the morphism ..."). Are you speaking of a "low-level" category of meaning and describing denotational semantics as a translation/compilation process, or thinking of something else? So this big minority is saying that you basically prove your program in Coq, and then spit out custom assembler code for it? Serious question: why do you need the assembler? I just said it that way because I thought it sounded punchier. :) For real no-fooling all-the-way-down verification, you really do want to verify the actual machine code. (E.g., see Magnus O. Myreen's PhD thesis.) That is, we maintain invariants by stating them and proving we maintain them, rather than by elaborately circumscribing our language so that it is impossible to say things that break invariants. Sorry, I can't parse this sentence. Do you intend to say that we will build the type system first, prove it and then go on designing the language instead of bolting it onto an untyped language? It's not exactly the converse of what you say, because it cannot really be described as starting from an "untyped language". A common trend among type-system lovers is to design type systems of increasing complexity that capture finer aspect of the language that the classic type systems. For example, one would design a type system such that only polynomial programs are well-typed, or such that only privacy-secure programs are well-typed. Once you have such a system, your language is limited to "good programs" -- but the drama is that some good programs must be left out. What neelk describes is the idea that instead of bolting those elaborate analysis in the type system, you capture the invariants they guarantee (the "do not go wrong" property) and prove that they hold for each fragment of a program in an ambient less-typed -- but not necessarily untyped -- language, possibly using analyses similar to the previous type-system (eg. witnessing that the fragment admits a type in this system), or by hand-crafted proofs. So you write programs that do not necessarily respect those elaborate well-behavior conditions, and prove them afterwards. With the advent of proof systems and proof automation, this approach of proving good properties afterwards become tractable. One nice thing with this idea is that you can hope to compose different type systems for different part of the program -- if they invariants are indeed composable. I suppose the counterpart of this vision are the people promoting that type systems can be more than a helpfully restrictive framework, and also help you write typed programs through automation and all that. See for example Conor McBride's position: "types might have an active role to play, structuring the process of program inference". To my read, he means almost the opposite of this, but I may be writing between the lines, since this is basically the approach I'm taking with my own language. At a high level the language consists of: - A powerful underlying logic (e.g. Coq) - Axioms describing machine features - A customizable "skin" specifying both syntax and what the syntax means in terms of the underlying logic I think this third bullet is important - no one actually wants to write code in Coq. And BTW, I think some language of this form will eventually become the "last language," in the sense of underlying the new languages that people build in the future. One nice thing about such a system is that you can 'bolt on' new type systems fairly easily. Type systems can be proven sound (they propagate true propositions), but they needn't be complete in any way. Even if there isn't a typing rule to get you from the type you have to the type you want to have, you can always drop down and prove the correctness of your cast in the ambient logic which defines what the types mean. Thus instead of working only with "morphisms in the internal language of a category" through type checked terms we allow ourselves to build morphisms externally. Could you detail more precisely what you and neelk mean by "externally" here ? The way I understand it, it means defining "low-level" mathematical objects, and then afterwards proving that they are indeed legitimate morphisms in the "high-level" category. Like you define a function on natural numbers, and afterwards prove that it is a monoid morphism from (N,+) to whatever. Is it what you mean, or something completely different? Something confusing with this terminology is that, in category theory, if I understand correctly (I'm no expert), "externally" is often used to mean "observationally" (defined in term of its interaction with other objects/morphisms), which is completely different and perhaps even contradictory. Yes, that probably should have been in quotes in my post as well, since I was parroting Neel. My category-fu is weak, but my understanding of what he is saying is that you have a semantic category that defines the meaning of the subject category's objects and morphisms, and then, rather than using the natural type system for the subject category, you prove in the semantic category something to the effect of "there exists this morphism in the subject category such that ...". So your example of proving a function to be a monoid rather than building it up from primitive monoid operations is probably a good example of what I'm talking about and I think it's also what Neel is talking about. I can't comment on your second point. This is exactly what I mean. "Low-level" in this sense might be literally low-level, such as inline assembly. The idea is that the really important things are the invariants that the runtime system and the rest of the program expect, and it doesn't really matter how the binary code that respects them got generated. Maybe they were written in some typesafe language and compiled, or maybe somebody just wrote some assembly and did a correctness proof. If the invariants are maintained, there's no reason to care.
http://lambda-the-ultimate.org/node/4312
CC-MAIN-2018-39
refinedweb
12,543
50.97
hi - Hibernate Native SQL Query Introduction In this section, you will learn Hibernate Native SQL SQL Query/Native Query This tutorial describes Hibernate SQL Query, which is also known as Native Query Hibernate Named Native SQL Query In this section, you will learn Named Native SQL Query in Hibern hi sir i need hibernate complete tutorial for download Hi Friend, Please visit the following link: Hibernate Tutorials how to write join quaries using hql Hi satish I am sending links to you where u can find the solution regarding your query: Hibernate Native SQL Example Hibernate Native SQL Example  ... procedures. Hibernate allows you to run Native SQL Query for all the database.... In this example we will show you how you can use Native SQL with hibernate. You will learn hi - SQL hi hi sir,i want to copy the sql prompt queries to one text file,how to achieve this ,plz tell me Thanq hibernate............... hibernate............... goodevining. I am using hibernate...: SQL Error: 0, SQLState: null 31 May, 2012 8:18:01 PM... with the following error: ORA-12505, TNS:listener does not currently know of SID given hi - SQL hi hi sir,my table is this, SQL> desc introducer; Name...) my problem is i want to remove the primary key,how to remove the primary key sir,plz tell me ThanQ Hi Friend, Run hi - SQL hi hi sir,i want to copy the mysql prompt queries to one text file,how to achieve this ,plz tell me Thanq hi - SQL hi hi sir,i want to create a database in oracle,not in my sql sir,plz tell me how to create a database. Hi Friend, Try the following...", "rose", "rose"); Statement st = conn.createStatement(); int i hi - SQL hi hi sir,i want to insert a record in 1 table for example sno sname sno1 i want to insert...,............ plz provide this query sir,plzzzzzzzzzzzz Hi Friend problem - Hibernate hibernate code problem String SQL_QUERY =" from Insurance... this lngInsuranceId='1'. but i want to search not only the value 1 but the value... thanks shakti Hi friend, Your code is : String SQL_QUERY =" from Hi Radhika, i think, you hibernate configuration...*,resultset*,(query|sql-query)*)". What should I do to rectify this? My...hibernate I have written the following program package Hibernate Named Native SQL in XML Returning Scalar In this section, you will learn to execute Hibernate named native SQL query written in XML mapping file which return scalar values(raw provides HQL for performing selective search. Hibernate also supports SQL Queries (Native Query). Hibernate provides primary and secondary level caching...why hibernate? why hibernate? Hibernate: -Hibernate JPA Native Queries, JPA Native Queries Tutorials JPA Native Queries In this section, you will know about the jpa native queries and how... query executes plain SQL queries. JPA Native queries have the following I want to learn Hibernate tutorial quickly I want to learn Hibernate tutorial quickly Hello, I want to learn Hibernate tutorial quickly. Is there any way.. I want to learn Hibernate online.. Please help HIBERNATE CODE - Hibernate HIBERNATE CODE Dear sir, I am working with MyEclipse IDE.I want to connect with MYSQL using Hibernate. What is the Driver Template and URL of MYSQL toconnect using Hibernate I downloaded the zip file given in the tutorial of Hibernate. I followed all th steps as given in the tutorial, but a build error... not be resolved. How can i rectify that error? Hi, The error comes when Please convert this SQL query to hibernate - Hibernate Please convert this SQL query to hibernate I have a SQl query, which needs to be converted to HQL query, but i am not successfull, please help me... in advance Hi Friend, Please visit the following links: http hibernate sql error - Hibernate hibernate sql error Hibernate: insert into EMPLOYE1 (firstName... to use polymorphiuc mapping in type2 using subclasses? Hi Friend, Please visit the following links: Ask Hibernate Questions Online in its own portable SQL extension (HQL), as well as in native SQL... program. If you want to learn new thing and don't know how to run... Ask Hibernate Questions Online   hibernate ;Hi Friend, Please visit the following link: Hi Good Morning Will u please send me the some of the tutorials of hibernate.Because ,i have to learn the hibernate.i am new to this What is a lazy loading in hibernate?i want one example of source code?plz reply hi friends i had one doubt how to do struts with hibernate in myeclipse ide its urgent hi - Hibernate hi hi, what is object life cycle in hibernate Hibernate Native Entity Query This section contains detail about Hibernate Native Entity Query with example code I want to build sessionfactory in hibernate 4. Need help. I want to build sessionfactory in hibernate 4. Need help. Hello, I want to build sessionfactory in hibernate 4. Need help I need hibernate session factory example. I need hibernate session factory example. Hi, I want a simple hibernate session factory example.. hello, Here is a simple Hibernate SessionFactory Example Also go through the Hibernate 4 Thanks SessionFactory Can anyone please give me an example of Hibernate SessionFactory? Hi friend,package roseindia;import... = sessionFactory.openSession(); String SQL_QUERY ="from Procedure proced" display sql query in hibernate display sql query in hibernate If you want to see the Hibernate generated SQL statements on console, what should we do sir - Java Beginners hi sir Hi,sir,i am try in netbeans for to develop the swings,plz provide details about the how to run a program in netbeans and details about... the details sir, Thanks for ur coporation sir Hi Friend hibernate - Hibernate hibernate I have written the following program package Hibernate; import org.hibernate.Session; import org.hibernate.*; import...(); session =sessionFactory.openSession(); //Create Select Clause HQL String SQL I have a problem while developing... the application I got an exception that it antlr..... Exception.Tell me the answer plz.If any application send me thank u (in advance). Hi friend Hai this is jagadhish while running a Hibernate application i got the exception like this what is the solution for this plz inform me.... Hi friend, Read for more information, hibernate configuration with eclipse 3.1 - Hibernate hibernate configuration with eclipse 3.1 Dear Sir, i got your mail... project.its not running. so i want to about the whole process. i have... by step process. i have that folder in d:/hibernate i thing - Hibernate Hibernate Hai this is jagadhish, while executing a program in Hibernate in Tomcat i got an error like this HTTP Status 500...) Give me answer for this exception. Thank u inadvance. Hi Hibernate Overview and Architecture your transactions. Hibernate Query: Hibernate provide simple SQL, native query... framework This article is providing the overview of the Hibernate framework.... Hibernate automatically generates SQL query to perform operations like select Diff Bn Uni and Bi-directional Mapping in Hibernate - Hibernate Interview Questions Diff Bn Uni and Bi-directional Mapping in Hibernate Hi Friends, I want to know d difference bn uni-directional and bidirectional mapping in hibernate. Hi I am sending links where u can find Hibernate 1 - Hibernate Hibernate 1 what is a fetchi loading in hibernate?i want source code?plz reply Joining Multiple table in Hibernate Joining Multiple table in Hibernate Hi everyone, I'm new to Hibernate (even in JAVA), and I'm having some doubt's about one thing. I created 2...='Peter' OR usOperadora='Vodafone'*** Finnaly, I need to know how a can print Hibernate Spring - Hibernate Struts Hibernate Spring HI Deepak, This is reddy.i want expamle for struts hibernate spring example. Hi Friend, Please visit the following link: How to know how many columns changed in a table when we are using hibernate Hibernate code - Hibernate Hibernate code can you show the insert example of Hibernate other than session.save(obj); Hi I am sending a link where u can find lots of example related to hibernate... Please explain Hibernate Sessionfactory. Please explain Hibernate Sessionfactory. Hi there, Please explain Hibernate session factory in detail. I have just started learning hibernate so i... Complete Hibernate 3.0 and Hibernate 4 Tutorial Let me know if the problem persists
http://roseindia.net/tutorialhelp/comment/4868
CC-MAIN-2014-42
refinedweb
1,370
55.13
5.1. Layers and Blocks¶ When we first started talking about neural networks, we introduced linear models with a single output. Here, the entire model consists of just a single neuron. By itself, a single neuron takes some set of inputs, generates a corresponding (scalar) output, and has a set of associated parameters that can be updated to optimize some objective function of interest. Then, once we started thinking about networks with multiple outputs, we leveraged vectorized arithmetic, we showed how we could use linear algebra to efficiently express an entire layer of neurons. Layers too expect some inputs, generate corresponding outputs, and are described by a set of tunable parameters. When we worked through softmax regression, a single layer was itself the model. However, when we subsequently introduced multilayer perceptrons, we developed models consisting of multiple layers. One interesting property of multilayer neural networks is that the entire model and its constituent layers share the same basic structure. The model takes the true inputs (as stated in the problem formulation), outputs predictions of the true outputs, and possesses parameters (the combined set of all parameters from all layers) Likewise any individual constituent layer in a multilayer perceptron ingests inputs (supplied by the previous layer) generates outputs (which form the inputs to the subsequent layer), and possesses a set of tunable parameters tht are updated with respect to the ultimate objective (using the signal that flows backwards through the subsequent layer). While you might think that neurons, layers, and models give us enough abstractions to go about our business, it turns out that we will often want to express our model in terms of a components that are large than an indivudal layer. For example, when designing models, like ResNet-152, which possess hundreds (152, thus the name) of layers, implementing the network one layer at a time can grow tedious. Moreover, this concern is not just hypothetical—such deep networks dominate numerous application areas, especially when training data is abundant. For example the ResNet architecture mentioned above won the 2015 ImageNet and COCO computer vision competitions for both recognition and detection [He et al., 2016a]. Deep networks with many layers arranged into components with various repeating patterns are now ubiquitous in other domains including natural language processing and speech. To facilitate the implementation of networks consisting of components of arbitrary complexity, we introduce a new flexible concept: a neural network block. A block could describe a single neuron, a high-dimensional layer, or an arbitrarily-complex component consisting of multiple layers. From a software development, a Block is a class. Any subclass of Block must define a method called forward that transforms its input into output, and must store any necessary parameters. Note that some Blocks do not require any parameters at all! Finally a Block must possess a backward method, for purposes of calculating gradients. Fortunately, due to some behind-the-scenes magic supplied by the autograd autograd package (introduced in Section 2) when defining our own Block typically requires only that we worry about parameters and the forward function. One benefit of working with the Block abstraction is that they can be combined into larger artifacts, often recursively, e.g., as illustrated in Fig. 5.1.1. By defining code to generate Blocks of arbitrary complexity on demand, we can write surprisingly compact code and still implement complex neural networks. To begin, we revisit the Blocks that played a role in our implementation of the multilayer perceptron (Section 4.3). The following code generates a network with one fully-connected hidden layer containing 256 units followed by a ReLU activation, and then another fully-connected layer consisting of 10 units (with no activation function). Because there are no more layers, this last 10-unit layer is regarded as the output layer and its outputs are also the model’s output. from mxnet import np, npx from mxnet.gluon import nn npx.set_np() x = np.random.uniform(size=(2, 20)) net = nn.Sequential() net.add(nn.Dense(256, activation='relu')) net.add(nn.Dense(10)) net.initialize() net(x) array([[ 0.06240272, -0.03268593, 0.02582653, 0.02254182, -0.03728798, -0.04253786, 0.00540613, -0.01364186, -0.09915452, -0.02272738], [ 0.02816677, -0.03341204, 0.03565666, 0.02506382, -0.04136416, -0.04941845, 0.01738528, 0.01081961, -0.09932579, -0.01176298]]) In this example, as in previous chapters, our model consists of an object returned by the nn.Sequential constructor. After instantiating a nn.Sequential and storing the net variable, we repeatedly called its add method, appending layers in the order that they should be executed. We suspect that you might have already understood more or less what was going on here the first time you saw this code. You may even have understood it well enough to modify the code and design your own networks. However, the details regarding what exactly happens inside nn.Sequential have remained mysterious so far. In short, nn.Sequential just defines a special kind of Block. Specifically, an nn.Sequential maintains a list of constituent Blocks, stored in a particular order. You might think of nnSequential as your first meta-Block. The add method simply facilitates the addition of each successive Block to the list. Note that each our layers are instances of the Dense class which is itself a subclass of Block. The forward function is also remarkably simple: it chains each Block in the list together, passing the output of each as the input to the next. Note that until now, we have been invoking our models via the construction net(X) to obtain their outputs. This is actually just shorthand for net.forward(X), a slick Python trick achieved via the Block class’s __call__ function. Before we dive in to implementing our own custom Block, we briefly summarize the basic functionality that each Block must perform the following duties: Ingest input data as arguments to its forwardfunction. Generate an output via the value returned by its forwardfunction. Note that the output may have a different shape from the input. For example, the first Dense layer in our model above ingests an input of arbitrary dimension but returns an output of dimension 256. Calculate the gradient of its output with respect to its input, which can be accessed via its backwardmethod. Typically this happens automatically. Store and provide access to those parameters necessary to execute the forwardcomputation. Initialize these parameters as needed. 5.1.1. A Custom Block¶ Perhaps the easiest way to develop intuition about how nn.Block works is to just dive right in and implement one ourselves. In the following snippet, instead of relying on nn.Sequential, we just code up a Block from scratch that implements a multilayer perceptron with one hidden layer, 256 hidden nodes, and 10 outputs. Our MLP class below inherits the Block class. While we rely on some predefined methods in the parent class, we need to supply our own __init__ and forward functions to uniquely define the behavior of our model. from mxnet.gluon import nn): return self.output(self.hidden(x)) This code may be easiest to understand by working backwards from forward. Note that the forward method takes as input x. The forward method first evaluates self.hidden(x) to produce the hidden representation, passing this output as the input to the output layer self.output( ... ). The constituent layers of each MLP must be instance-level variables. After all, if we instantiated two such models net1 and net2 and trained them on different data, we would expect them to them to represent two different learned models. The __init__ method is the most natural place to instantiate the layers that we subsequently invoke on each call to the forward method. Note that before getting on with the interesting parts, our customized __init__ method must invoke the parent class’s init method: super(MLP, self).__init__(**kwargs) to save us from reimplementing boilerplate code applicable to most Blocks. Then, all that is left is to instantiate our two Dense layers, assigning them to self.hidden and self.output, respectively. Again note that when dealing with standard functionality like this, we do not have to worry about backpropagation, since the backward method is generated for us automatically. The same goes for the initialize method. Let’s try this out: net = MLP() net.initialize() net(x) array([[-0.03989594, -0.1041471 , 0.06799038, 0.05245074, 0.02526059, -0.00640342, 0.04182098, -0.01665319, -0.02067346, -0.07863817], [-0.03612847, -0.07210436, 0.09159479, 0.07890771, 0.02494172, -0.01028665, 0.01732428, -0.02843242, 0.03772651, -0.06671704]]) As we argued earlier, the primary virtue of the Block abstraction is its versatility. We can subclass Block to create layers (such as the Dense class provided by Gluon), entire models (such as the MLP class implemented above), or various components of intermediate complexity, a pattern that we will lean on heavily throughout the next chapters on convolutinoal neural networks. 5.1.2. The Sequential Block¶ As we described earlier, the Sequential class itself is also just a subclass of Block, designed specifically for daisy-chaining other Blocks together. All we need to do to implement our own MySequential block is to define a few convenience functions: 1. An add method for appending Blocks one by one to a list. 2. A forward method to pass inputs through the chain of Blocks (in the order of addition). The following MySequential class delivers the same functionality as Gluon’s default Sequential class:. net = MySequential() net.add(nn.Dense(256, activation='relu')) net.add(nn.Dense(10)) net.initialize() net(x) array([[-0.07645682, -0.01130233, 0.04952145, -0.04651389, -0.04131573, -0.05884133, -0.0621381 , 0.01311472, -0.01379425, -0.02514282], [-0.05124625, 0.00711231, -0.00155935, -0.07555379, -0.06675334, -0.01762914, 0.00589084, 0.01447191, -0.04330775, 0.03317726]]) Indeed, it can be observed that the use of the MySequential class is no different from the use of the Sequential class described in Section 4.3. are at it, we need to introduce another concept, that of the constant parameter. These are parameters that are not used when invoking backprop. This sounds very abstract but here’s what, since Gluon does not know about this beforehand, it is worth while to give it a hand (this makes the code go faster, too, since we are not sending the Gluon engine on a wild goose chase after a parameter that does not change). get_constant is the method that can be used to accomplish this. Let’s see what this looks like in practice. class FancyMLP(nn.Block): def __init__(self, **kwargs): super(FancyMLP, self).__init__(**kwargs) # Random weight parameters created with the get_constant are not # iterated during training (i.e., constant parameters) self.rand_weight = self.params.get_constant( 'rand_weight', np.random.uniform(size=(20, 20))) self.dense = nn.Dense(20, activation='relu') def forward(self, x): x = self.dense(x) # Use the constant parameters created, as well as the relu # and dot functions x = npx.relu(np.dot(x, self.rand_weight.data()) + 1) # Reuse the fully connected layer. This is equivalent to sharing # parameters with two fully connected layers x = self.dense(x) # Here in Control flow, we need to call asscalar to return the scalar # for comparison while np.abs(x).sum() > 1: x /= 2 if np.abs(x).sum() < 0.8: x *= 10 return x.sum() In this FancyMLP model, we used constant weight Rand_weight (note that it is not a model parameter), performed a matrix multiplication operation ( np. net = FancyMLP() net.initialize() net(x) array(5.2637568) There) array(0.97720534). Gluon does this by allowing for Hybridization (Section 12.1). In it, the Python interpreter executes the block the first time it is invoked. The Gluon runtime records what is happening and the next time around it short circuits any calls to Python. This can accelerate things considerably in some cases but care needs to be taken with control flow. We suggest that the interested reader skip forward to the section covering hybridization and compilation after finishing the current chapter.. 5.1.6. Exercises.
https://d2l.ai/chapter_deep-learning-computation/model-construction.html
CC-MAIN-2019-51
refinedweb
2,021
55.74
. In this ebook, code using normal characters of the English language. The XML document is made of units called entities. These entities are spread on various lines of the document as you judge them necessary and as we will learn. XML has strict rules as to how the contents"?> By default, an XML file created using Visual Studio 2005 specifies the version as 1.0. Under the XML declaration line, you can then create the necessary tags of the XML file. an XML file.. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; namespace cpap1 { class Program { static int Main(string[] args) { XmlDocument docXML = new XmlDocument(); docXML.LoadXml(""); return 0; } } }. Probably the most common way to create an XML file in Microsoft Windows consists of using Notepad or any other text editor. After opening the text editor, you can enter the necessary lines of text. After creating the file, you must save it. When saving it, you can include the name of the file in double-quotes: You can also first set the Save As Type combo box to All Files and then enter the name of the file with the .xml extension. To assist you with creating XML Files, Microsoft Visual C# includes an XML File option in the Add New Item dialog box. After selecting this option, you can accept the suggested name of the file or replace it in the Name text box. If you don't specify the extension, the wizard would add it for you. If you.: using System; using System.Xml; namespace VideoCollection1 { class Program { static int Main(string[] args) { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load("Videos.xml"); return 0; } } } provide access to the root of an XML file, the XmlDocument class is equipped with the DocumentElement property.. Based on this, the above line can be written as follows: <dinner /> Both produce the same result or accomplish the same role. In the above à> <; } } }
http://functionx.com/csharp/xml/Lesson01.htm
CC-MAIN-2018-34
refinedweb
326
67.35
A time-honored method of detecting cheap stocks is to look for ones that sell below book value. Book value is a company’s net worth, or assets minus liabilities. Divide total book value by the number of shares outstanding, and you get the company’s book value per share, often called simply “book.” About 6 percent of U.S. stocks now trade below book. Some of these companies are going nowhere. They are cheap for good reason, illustrating the old joke, “Things are always darkest just before they go completely black.” Others are inexpensive because they are working through problems that probably are temporary. I have selected five that I think have good potential to rebound from whatever is troubling them, and to notch good capital gains over the next year or two. One is BlackRock Inc. (BLK), the world’s largest asset manager. Public since 1999, the New York-based company acquired State Street Research in 2005, absorbed Merrill Lynch Investment Management in 2006, and merged with Barclays Global Investors in 2009. It now manages more than $3 trillion. Last year it pulled in more than $8 billion in revenue. BlackRock trades right at book value, and for 17 times earnings. In the past five years -- a difficult period for financial companies -- earnings have grown at almost a 20 percent annual clip. BlackRock’s Performance In the five years through March, it has provided investors with a cumulative total return (including reinvested dividends) of 57 percent. Contrast that with about 6 percent for Goldman Sachs Group Inc. (GS), and a loss of 65 percent for Bank of America Corp. (BAC) Some people insist on looking only at tangible book value per share, which excludes items such as the value of patents, brand names, and goodwill, or the bookkeeping entry that represents the premium one company paid to buy another. They wouldn’t like BlackRock. Its tangible book value is negative, as it has lots of goodwill on its books from acquisitions. My other four picks are much smaller than BlackRock. OM Group Inc. (OMG) is a producer of cobalt and metals-based powders and specialty chemicals that I sold during the recession and bear market of 2007-2009 because its revenue and earnings were dropping precipitously. Now, I see signs that operations are recovering. The Cleveland-based company has turned a profit six quarters in a row. Though earnings are far from the record levels of early 2008, the latest quarter was the best performance in more than two years. One ‘Buy’ The stock is obscure. Only four analysts from lesser-known brokerage houses cover it, according to data compiled by Bloomberg; only one rates it a “buy.” This is the sort of situation that often gets my greed glands going: a little-followed stock, disdained by those few who know about it, selling cheaply, with improving earnings. Speedway Motorsports Inc. (TRK), based in Concord, North Carolina, owns and operates eight auto-racing tracks in eight states -- California, Georgia, Kentucky, Nevada, New Hampshire, North Carolina, Tennessee and Texas. I recommended this stock in January 2010 and it has returned about a negative 10 percent since then. But I think it is likely to rev up as the economy revives, particularly in the South, where auto racing is most popular. E.W. Scripps Co. is a recommendation I make with my heart in my throat. My ventures into newspaper stocks in recent years -- notably with New York Times Co. (NYT) and Gannett Co. -- have been unprofitable. Rising paper costs and Internet competition have punished the industry. Newspaper Man What’s more, I know I am not objective. I spent 27 years as a journalist for various papers and magazines (including Forbes and the Wall Street Journal) before becoming a money manager in 1997. I’m still fond of the nation’s rags. All that said, Scripps strikes me as a likely gainer in the next year or two. The Cincinnati-based company owns 14 newspapers (including the Commercial Appeal in Memphis, Tennessee) and 10 television stations (among them, WPTV in West Palm Beach, Florida, which the company says is the state’s highest-rated broadcaster). The pulse of newspapers and TV stations is advertising, and I believe ad spending is likely to increase in 2011 and 2012 as the economy gains steam. Internet-based advertising is gaining market share, but it isn’t the whole ball game, nor will it become so. Insurance Buy Most stocks selling for less than book are insurance stocks. That’s partly because book value for insurers is boosted by the reserves of cash and securities they hold to pay future claims. One insurance stock that looks good to me is Horace Mann Educators Corp. (HMN), a seller of property and casualty policies, annuities and life insurance. The Springfield, Illinois-based company has shown an annual profit since 1992 and earnings have risen in six of the past eight years. Horace Mann’s expense ratio (expenses divided by premiums) is too high -- 41 percent last year. If the company doesn’t cut costs, I think someone else will acquire it and do it for them. In any case, at 0.8 times book value, 0.7 times revenue, and under 10 times earnings, Horace Mann looks like a bargain to me. Disclosure note: I have no long or short positions, personally or for clients, in the.
http://www.bloomberg.com/news/2011-04-24/five-below-book-stocks-look-ready-to-rebound-commentary-by-john-dorfman.html
CC-MAIN-2014-10
refinedweb
902
64.61
ffprobe − ffprobe media prober ffprobe [options] [input_url] ffprobe gathers information from multimedia streams and prints it in human− "−nofoo" will set the boolean option with name "foo" to false.. "−codec "−b:a 128k" matches all audio streams. An empty stream specifier matches all streams. For example, "−codec copy" or "−codec: copy" would copy all the streams without reencoding. Possible forms of stream specifiers are: stream_index Matches the stream with this index. E.g. "−threads These options are shared amongst the ff* tools. −L Show license. −h, −?, −help, −−help −decoders option to get a list of all decoders. encoder=encoder_name Print detailed information about the encoder named encoder_name. Use the −encoders option to get a list of all encoders. demuxer=demuxer_name Print detailed information about the demuxer named demuxer_name. Use the −formats option to get a list of all demuxers and muxers. muxer=muxer_name Print detailed information about the muxer named muxer_name. Use the −formats option to get a list of all muxers and demuxers. filter=filter_name Print detailed information about the filter name filter_name. Use the −filters option to get a list of all filters. −version Show version. −formats Show available formats (including devices). −demuxers Show available demuxers. −muxers Show available muxers. −devices Show available devices. −codecs Show all codecs known to libavcodec. Note that the term ’codec’ is used throughout this documentation as a shortcut for what is more correctly called a media bitstream format. −decoders Show available decoders. −encoders Show all available encoders. −bsfs Show available bitstream filters. −protocols Show available protocols. −filters Show available libavfilter filters. −pix_fmts Show available pixel formats. −sample_fmts Show available sample formats. −layouts Show channel names and standard channel layouts. −colors Show recognized color names. −sources device[,opt1=val1[,opt2=val2]...] Show autodetected sources of the intput device. Some devices may provide system-dependent source names that cannot be autodetected. The returned list cannot be assumed to be always complete. ffmpeg −sources pulse,server=192.168.0.4 −sinks device[,opt1=val1[,opt2=val2]...] Show autodetected sinks of the output device. Some devices may provide system-dependent sink names that cannot be autodetected. The returned list cannot be assumed to be always complete. ffmpeg −sinks pulse,server=192.168.0.4 −loglevel [repeat+]loglevel | −v , −8. Dump full command line and console output to a file named "program−YYYYMMDD−HHMMSS.log" in the current directory. This file can be useful for bug reports. It also implies "−loglevel verbose". Setting the environment variable FFREPORT to any value has the same effect. If the value is a ’:’−separated key=value sequence, these options will affect the report; option values must be escaped if they contain special characters or the options delimiter ’:’ (see the ‘‘Quoting "−loglevel"). For example, to output a report to a file named ffreport.log using a log level of 32 (alias for log level "info"): FFREPORT=file=ffreport.log:level=32 ffmpeg −i input output Errors in parsing the environment variable are not fatal, and will not appear in the report. −hide_banner Suppress printing banner. All FFmpeg tools will normally show a copyright notice, build options and library versions. This option can be used to suppress printing this information. −cpuflags flags (global) Allows setting and clearing cpu flags. This option is intended for testing. Do not use it unless you know what you’re doing. ffmpeg −cpuflags −sse+mmx ... ffmpeg −cpuflags mmx ... ffmpeg −cpu neon PowerPC altivec Specific Processors pentium2 pentium3 pentium4 k6 k62 athlon athlonxp k8 −opencl_bench This option is used to benchmark all available OpenCL devices and print the results. This option is only available when FFmpeg has been compiled with "−−enable−opencl". When FFmpeg is configured with "−−enable−opencl", the options for the global OpenCL context are set via −open −opencl_options to obtain the best performance for the OpenCL accelerated code. Typical usage to use the fastest OpenCL device involve the following steps. Run the command: ffmpeg −opencl_bench Note down the platform ID (pidx) and device ID (didx) of the first i.e. fastest device in the list. Select the platform and device using the command: ffmpeg −opencl_options platform_idx=<pidx>:device_idx=<didx> ... −opencl_options options (global) Set OpenCL environment options. This option is only available when FFmpeg has been compiled with "−−enable−opencl". options must be a list of key=value option pairs separated by ’:’. See the ‘‘OpenCL Options’’ section in the ffmpeg-utils manual for the list of supported options. AVOptions These options are provided directly by the libavformat, libavdevice and libavcodec libraries. To see the list of available AVOptions, use the −help −i input.flac −id3v2_version 3 out.mp3 All codec AVOptions are per-stream, and thus a stream specifier should be attached to them. Note: the −nooption syntax cannot be used for boolean AVOptions, use −option 0/−option 1. Note: the old undocumented way of specifying per-stream AVOptions by prepending v/a/s to the options name is now obsolete and will be removed soon. Main options −f format Force format to use. −unit Show the unit of the displayed values. −prefix Use SI prefixes for the displayed values. Unless the "−byte_binary_prefix" option is used all the prefixes are decimal. −byte_binary_prefix Force the use of binary prefixes for byte values. −sexagesimal Use sexagesimal format HH:MM:SS.MICROSECONDS for time values. −pretty Prettify the format of the displayed values, it corresponds to the options "−unit −prefix −byte_binary_prefix −sexagesimal". −of, −print_format writer_name[=writer_options] Set the output printing format. writer_name specifies the name of the writer, and writer_options specifies the options to be passed to the writer. For example for printing the output in JSON format, specify: −print_format json For more details on the available output printing formats, see the Writers section below. −sections Print sections structure and section information, and exit. The output is not meant to be parsed by a machine. −select_streams stream_specifier Select only the streams specified by stream_specifier. This option affects only the options related to streams (e.g. "show_streams", "show_packets", etc.). For example to show only audio streams, you can use the command: ffprobe −show_streams −select_streams a INPUT To show only video packets belonging to the video stream with index 1: ffprobe −show_packets −select_streams v:1 INPUT −show_data Show payload data, as a hexadecimal and ASCII dump. Coupled with −show_packets, it will dump the packets’ data. Coupled with −show_streams, it will dump the codec extradata. The dump is printed as the "data" field. It may contain newlines. −show_data_hash algorithm Show a hash of payload data, for packets with −show_packets and for codec extradata with −show_streams. −show_error Show information about the error found when trying to probe the input. The error information is printed within a section with name " ERROR". −show_format Show information about the container format of the input multimedia stream. All the container format information is printed within a section with name " FORMAT". −show_format_entry name Like −show_format, but only prints the specified entry of the container format information, rather than all. This option may be given more than once, then all specified entries will be shown. This option is deprecated, use "show_entries" instead. −show −show_packets Show information about each packet contained in the input multimedia stream. The information for each single packet is printed within a dedicated section with name " PACKET". −show_frames Show information about each frame and subtitle contained in the input multimedia stream. The information for each single frame is printed within a dedicated section with name " FRAME" or " SUBTITLE". −show_log loglevel Show logging information from the decoder about each frame according to the value set in loglevel, (see "−loglevel"). This option requires "−show_frames". The information for each log message is printed within a dedicated section with name " LOG". −show_streams Show information about each media stream contained in the input multimedia stream. Each media stream information is printed within a dedicated section with name " STREAM". −show_programs Show information about programs and their streams contained in the input multimedia stream. Each media stream information is printed within a dedicated section with name " PROGRAM_STREAM". −show_chapters Show information about chapters stored in the format. Each chapter is printed within a dedicated section with name " CHAPTER". −count_frames Count the number of frames per stream and report it in the corresponding stream section. −count_packets Count the number of packets per stream and report it in the corresponding stream section. −read. <INTERVAL> ::= [<START>|+<START_OFFSET>][%[<END>|+<END_OFFSET>]] <INTERVALS> ::= <INTERVAL>[,<INTERVALS>] A few examples follow. • −show_private_data, −private Show private data, that is data depending on the format of the particular shown element. This option is enabled by default, but you may need to disable it for specific uses, for example when creating XSD-compliant XML output. −show_program_version Show information related to program version. Version information is printed within a section with name " PROGRAM_VERSION". −show_library_versions Show information related to library versions. Version information for each library is printed within a section with name " LIBRARY_VERSION". −show_versions Show information related to program and library versions. This is the equivalent of setting both −show_program_version and −show_library_versions options. −show_pixel_formats Show information about all pixel formats supported by FFmpeg. Pixel format information for each format is printed within a section with name " PIXEL_FORMAT". −bitexact Force bitexact output, useful to produce output which is not dependent on the specific build. −i input_url Read input_url.−8 ) sequence or code point is found in the input. This is especially useful to validate input metadata. ignore Any validation error will be ignored. This will result in possibly broken output, especially with the json or xml writer. replace The writer will substitute invalid UTF−8−like escaping. Strings containing a newline (\n), carriage return (\r), a tab (\t), a form feed (\f), the escaping character (\) or the item separator character SEP are escaped using C−like begin of each line if the value is 1, disable it with value set to 0. Default value is 1. INI format output. Print output in an INI based format. The following conventions are adopted: all key and values are UTF−8 . json JSON based format. Each section is printed using JSON notation. The description of the accepted options follows. compact, c If set to 1 enable compact output, that is each section will be printed on a single line. Default value is 0. For more information about JSON, <>. ffprobe supports Timecode extraction: MPEG1/2 timecode is extracted from the GOP, and is available in the video stream details (−show_streams, see timecode). MOV timecode is extracted from tmcd track, so is available in the tmcd stream metadata (−show_streams, see TAG:timecode). DV, GXF and AVI timecodes are available in format metadata (−show_format, see TAG:timecode). ffprobe−all(1), ffmpeg(1), ffplay(1), ffserver(1), ffmpeg−utils(1), ffmpeg−scaler(1), ffmpeg−resampler(1), ffmpeg−codecs(1), ffmpeg−bitstream−filters(1), ffmpeg−formats(1), ffmpeg−devices(1), ffmpeg−protocols(1), ffmpeg−filters(1).
http://man.sourcentral.org/MGA6/1+ffprobe
CC-MAIN-2019-30
refinedweb
1,789
51.75
TWC9: Silverlight 4, SQL Server R2 & Ent Library 5.0 all RTM, Why code comments aren't bad - Posted: Apr 24, 2010 at 11:24 AM - 47,944Ms are the ultimate leaky abstraction. You get the ability to write neat expressions in the code and have the SQL generated for you and the data returned in CLR objects but if you care about performance you either have to check everything that the ORM does by hand or you need to have abstraction-puncturing knowlege of how it works internally. @ performance implications of IQueryable I believe the reason Chris points out, is why WCF RIA services is pretty much implemented solely with IQueryable, to allow the Silverlight client to create an expression tree [or trees] that is executed on the server by the Silverlight client. This comment is in response to your comments on the article about code comments being bad for code clarity. Did you have a chance to read the entire article before the show? I ask because the author does make a comment (admittedly toward the end of the article) about exceptional cases where comments can actually add value. I think the real problem with comments is that, historically, commenting code was often presented as simply a Good Thing™ to do. Like so many other things, inexperienced programmers tend to accept this advice and apply it in the most obvious (and unhelpful) manner possible. We need to get the message out there that writing good comments is similar to writing good code in that it takes some effort and usually less is more. While it's a good thing that WCF RIA services uses IQueryable, I do agree with RHM that it's a little scary that developers could unintentionally cause a massive perf hit just by using the wrong interface, especially since you tend to find lots of LINQ samples that use IEnumerable<T>. Well the author calls out that comments are the exception, not the rule and again, this is my personal opinion, but comments are being demonized as this thing that you shouldn't do except in exceptional cases. I agree with you that succinct comments are useful, but the default answer shouldn't be in removing comments. My main example isn't commenting what the code is doing, but rather why the code is needed. One example is when you have a corner-case bug, something that only happens in a specific scenario (when an app needs to runs side-by-side, a specific version/language of an OS, etc). Having a comment that says that the method was added to resolve this particular bug (with a link to the bug #) is very useful, otherwise you're wondering why the method exists if you comment it out and "it works on my machine" and the testers machine, but breaks a customer. The other big issue I've seen is using legacy code that you can't refactor and comment out. Either way, I'd love to see someone throw time and effort to see what it would be like walking up to a code base say part of .NET Rotor, add a bug and test to see if having/removing comments makes a meaningful difference between the amount of time it takes to use/understand the code. Elegant Code = Code - Comments. This is a false dichotomy. I can clearly see the tension between comments and code. More comments obscures code, less comments obscures design insights. The problem is the IDE, not commented or commentless code, it's the ability to seamlessly switch between the two views. It's rather amazing how often people conflate presentation and representation. It happened with XML microformats, denying XML fidelity for presentational bliss, it's happening here, denying obscuring comments for code elegance - which is indeed desirable. What I dispise about making comments is commenting the trivial. It would be nice if the IDE could infer the entire documentation and XML comments for a trivial function (and by the way, showing XML comments not as XML.) Mind over matter. Good show. The stuff about IQueryable was very informative. I'm gonna check out some prime suspects first thing this monday to see exactly what kind of SQL query they're generating. Edit: disregard this entire post. Once again, my attempt at a post is foiled by this forum's amazing one line code blocks. AWESOME. Regarding the comments thing: one of the things I've always learned is to make the code self documenting via clear function names etc. This included wrapping unclear blocks of code into a function with a descriptive name, even if that function was going to get called only once. This adds a tiny bit of overhead, of course, but I always thought it was worth it because it made code so much more readable. After all: is a lot less clear (or readable) than even if you have to transmogrify only once, wrapping that stuff into a descriptive function makes it a lot more readabe than just a comment above the block of mystery code. However, since we've had regions, I've been more prone to just wrapping these complicated blocks of code into a descriptive region rather than a separate function. That way, I can just look at the collapsed region and see instantly what it does. Also, no function call overhead. What does everybody else think? Is the code better documented via regions or by wrapping complicated blocks in their own function? I would personally push things like Transmogrify into their own function in a couple of scenarios - you know you're only going to do one thing in that chunk of code (it's atomic), if it's a good candidate for reuse in other parts of your code, if the function it lives inside is big then it's a good choice for refactoring, if you know it could change (ex: switch 3rd party Transmogrify vendors) or be optimized in the future. I'm not worried about the function call overhead for performance as there's likely other things that you can do to optimize your performance before you optimize that. The one nice thing about keeping it all together is that code with a lot of functions can be "jumpy", where you're potentially going 3-4 levels deep multiple times to understand exactly what each method call is doing (and the methods that the method call can do). Plus in this example, the total lines of code is tiny so I don't think anyone would complain about having that function inline. I like to keep functions small, self-explanatory and pure. This is prescribed by LINQ which of course makes heavy use of pure extension methods. Pure and simple extension methods are composable and reasonable. As in Search(Bar).Transmogrify() - or assuming Bar is implicitly converted to SearchQuery then Bar.Search().Transmogrify() - or arbitrarily transformed as in Bar.[Tx]().Search().[Tx]().Transmogrify().[Tx](). Invariant where possible, variant when necessary. Not true, beautiful code = code + comments. In proper places it can help visualize important portions of the code easily, that would be otherwise impossible without.. I've had to deal with code where the comments are literally explaining every line of code. I'm with Dan and Sampy. Explain edge cases, TDD, and PLEASE explain what the regex should be doing. Comments in code is a flame war much like the proper place to put a curley bracket in c#.or And we all know the bottom way is correct Of course, documenting every single line is silly, but not documenting the code at all is too. And we are not talking about hello world applications or useless chunks of code used for demo purposes. Yes, the point - as written - was: comments are not evil, over-commenting or under-commenting is, but you should be able to toggle comments on and off at the function, namespace or file level. There's no point in commenting the trivial but that's also what you have to do in practice, especially if you are making a library, which can be annoying and feel like a waste of time. Other times it feels like a pleasure even if the documentation far exceeds the implementation size (albeit one that does not involve much thought - so the end result is nice but getting there is mindless work). As an example, some binary domain-specific operators (LINQ and otherwise) [nice "return"-alternative for the LINQ operator: IEnumerable<bool> where Count() = 1] The comments are extremely more verbose than the implementation but nice and useful. One more example, the implication operator, which is very useful in contracts albeit simple - but much more readable in its application than its expansion, in my oppinion I like the comments but would also like an easy way to hide them when skimming code. Well, your mileage may vary. I thought it was a shame that VS2010 didn't provide something for this. The XML is overly verbose, you basically only want to see it when editing the comment block and not when reading it. It would've been cool if the IDE turned these blocks of /// comments into a single line of intellisense-style information. Then it could expand to the text view when you click on it, or something. Come to think of it, now that the IDE is all XAML, it might be possible to change the way these blocks are presented? Yes, I'm very much looking forward to much more compact and interactive presentations of the source code made possible with the new WPF based editor in VS 2010. Appropos: I've always been on the side of Anders Hejlsberg when it comes to XML literals but one aspect of them that is a bit interesting is that you could have one underlying syntax for both <metadata>, //comments, [attribute]s and <expressions>x</expressions> but I think the people on the cutting edge of the DSL space work at Intentional Software with Charles Simonyi. Speaking of elegance, couldn't you just make a "pop-out" link for your player so we can move it to whichever monitor we want, then maximize it there? I have a feeling a function that requires a "security" prompt won't let me do that. Did you gentlemen say SQL Server R2 RTM'd? I am unable to find it on TechNet or MSDN. The CTPs are posted, but not the RTM. It's a bit confusing, but to quote Mary Jo Foley (who has the best explanation). You can download SQL Server Express R2 now from here - The trial is also available to download Hi Dan, So I have to disagree with you about my post on about eliminating comments. I think you have missed the point of the post. I fully understand that many people "don't believe in comments" and write bad horrible to understand code, because their belief stems from laziness as opposed to the desire to write good code. There is a MAJOR difference between this viewpoint or reason for not writing comments vs the reason I am advocating. I went through some examples in my post which showed how writing self-documenting code eliminates comments and make the code more clearly understandable. There are instances where comments are going to be needed. Which I also state in my post, (regex explanations for example). In general though, I am pretty confident the guidance I am giving out is 100% correct, and I am pretty convinced that I can prove this to be true to you. So, I challenge you. Give me some piece of code that is heavily commented to explain what is going on. I will refactor it to make it at least as readable, and most likely better by eliminating comments and replacing them with self-documenting code. Let your viewers decide if it is clearer or not. If they say it is not clearer, I will admit defeat. By the way you might like this post:">">">">"> Remove this comment Remove this threadclose
http://channel9.msdn.com/Shows/This+Week+On+Channel+9/TWC9-Silverlight-4-SQL-Server-R2--Ent-Library-50-all-RTM-Why-code-comments-arent-bad?format=html5
CC-MAIN-2013-48
refinedweb
2,021
58.92
Section 3.1 Blocks, Loops, and Branches The ability of a computer to perform complex tasks is built on just a few ways of combining simple commands into control structures. In Java, there are just six such structures that are used to determine the normal flow of control in a program -- and, in fact, just three of them would be enough to write programs to perform any task. The six control structures are: the block, the while loop, the do..while loop, the for loop, the if statement, and the switch statement. Each of these structures is considered to be a single "statement," but each is in fact a structured statement that can contain one or more other statements inside itself. 3.1.1 Blocks The block is the simplest type of structured statement. Its purpose is simply to group a sequence of statements into a single statement. The format of a block is: { statements } That is, it consists of a sequence of statements enclosed between a pair of braces, "{" and "}". (In fact, it is possible for a block to contain no statements at all; such a block is called an empty block, and can actually be useful at times. An empty block consists of nothing but an empty pair of braces.) Block statements usually occur inside other statements, where their purpose is to group together several statements into a unit. However, a block can be legally used wherever a statement can occur. There is one place where a block is required: As you might have already noticed in the case of the main subroutine of a program, the definition of a subroutine is a block, since it is a sequence of statements enclosed inside a pair of braces. I should probably note again; // A temporary variable for use in this block. temp = x; // Save a copy of the value of x in temp. x = y; // Copy the value of y into x. y = temp; // Copy the value of temp into y. } In the second example, a variable, temp, is declared inside the block. This is perfectly legal, and it is good style to declare a variable inside a block if that variable is used nowhere else but inside the block. A variable declared inside a block is completely inaccessible and invisible from outside that block. When the computer executes the variable declaration statement, it allocates memory to hold the value of the variable. When the block ends, that memory is discarded (that is, made available for reuse). The variable is said to be local to the block. There is a general concept called the "scope" of an identifier. The scope of an identifier is the part of the program in which that identifier is valid. The scope of a variable defined inside a block is limited to that block, and more specifically to the part of the block that comes after the declaration of the variable. 3.1.2 The Basic While Loop introduce the while loop and the if statement. I'll give the full details of these statements and of the other three control structures in later sections. A while loop is used to repeat a given statement over and over. Of course, it's not likely that you would want to keep repeating it statement or block; // The number to be printed. number = 1; // Start with 1. while ( number < 6 ) { // Keep going as long as number is < 6. System.out.println(number); number = number + 1; // Go on to the next number. } System.out.println("Done!"); The variable number is initialized with the value 1. So the first time through the while loop, when the computer evaluates the expression "number < 6", it is asking whether 1 is less than 6. Once again this is true, so the computer executes the loop again, this time printing out 2 as the value of number and then changing the value of number to 3. It continues in this way until eventually number becomes equal to 6. At that point, the expression "number < 6" evaluates to false. So, the computer jumps past the end of the loop to the next statement and prints out the message "Done!". Note that when the loop ends, the value of number is 6, but the last value that was printed was 5.. This is an improvement over examples from the previous chapter that just reported the results for one year: public class Interest3 { /* This class implements a simple program that will compute the amount of interest that is earned on an investment over a period of 5 years. The initial amount of the investment and the interest rate are input by the user. The value of the investment at the end of each year is output. */ public static void main(String[] args) { double principal; // The value of the investment. double rate; // The annual interest rate. /* Get the initial investment and interest rate from the user. */ TextIO.put("Enter the initial investment: "); principal = TextIO.getlnDouble(); TextIO.put("Enter the annual interest rate: "); rate = TextIO.getlnDouble(); /* Simulate the investment for 5 years. */ int years; // Counts the number of years that have passed. years = 0; while (years < 5) { double interest; // Interest for this year. interest = principal * rate; principal = principal + interest; // Add it to principal. years = years + 1; // Count the current year. System.out.print("The value of the investment after "); System.out.print(years); System.out.print(" years is $"); System.out.printf("%1.2f", principal); System.out.println(); } // end of while loop } // end of main() } // end of class Interest3 And here is an applet which simulates this program. (Remember that for "console applets" like this one, if the applet does not respond to your typing, you might have to click on it to activate it. In some browsers, you might also need to leave the mouse cursor inside the applet for it to recognize your typing.) You should study this program, and make sure that you understand what the computer does step-by-step as it executes the while loop. 3.1.3 The Basic If Statement. Of course, either or; // A temporary variable for use in this block. temp = x; // Save a copy of the value of x in temp. x = y; // Copy the value of y into x. y = temp; // Copy the.printf("%1.2f", principal); // this is done in any case I'll have more to say about control structures later in this chapter. But you already know the essentials. If you never learned anything more about control structures, you would already know enough to perform any possible computing task. Simple looping and branching are all you really need!
http://math.hws.edu/javanotes/c3/s1.html
crawl-001
refinedweb
1,104
72.76
Mark Hobson. It is much appreciated. for accepting my crazy ideas about open source. All of us would like to thank Lisa Malgeri. Tim O'Brien. Emmanuel Venisse and John Tolentino. Chris Berry. Napoleon Esmundo C. David Blevins. Lester Ecarma. Jerome Lacoste. Ramirez. I'd like to thank my family for their continuous support. John. Stephane Nicoll. and the teammates during my time at Softgal. Jason. Fabrice Bellingard. we would like to thank all the reviewers who greatly enhanced the content and quality of this book: Natalie Burdick. especially my parents and my brother for helping me whenever I needed. Finally.I would like to thank professor Fernando Bellas for encouraging my curiosity about the open source world. Abel Rodriguez. Also. Felipe Leme. Vincent. Brett and Carlos . Elena Renard and Joakim Erdfelt for their many contributions to the book. Bill Dudney. Allan Ramirez. Thanks also to all the people in Galicia for that delicious food I miss so much when traveling around the world. Carlos Sanchez Many thanks to Jesse McConnell for his contributions to the book. Ruel Loehr. 0 major releases.0 and 2. He enjoys cycling and raced competitively when he was younger. specializing in open source consulting. published by O'Reilly in 2005 (ISBN 0-596-00750-7). Immediately hooked. Australia. Brett Porter has been involved in the Apache Maven project since early 2003. financial. and today a large part of John's job focus is to continue the advancement of Maven as a premier software development tool. He was invited to become a Maven committer in 2004. Vincent lives and works in Paris. he is a co-author of JUnit in Action. supporting both European and American companies to deliver pragmatic solutions for a variety of business problems in areas like e-commerce. software development. Florida with his wife.About the Authors Vincent Massol has been an active participant in the Maven community as both a committer and a member of the Project Management Committee (PMC) since Maven's early days in 2002. when he began looking for something to make his job as Ant “buildmeister” simpler. Inc.. Brett is a co-founder and the Director of Engineering at Mergere. John Casey became involved in the Maven community in early 2002. as well as to various Maven plugins. He is grateful to work and live in the suburbs of Sydney. John enjoys amateur astrophotography. joining the Maven Project Management Committee (PMC) and directing traffic for both the 1. where he hopes to be able to make the lives of other developers easier. In addition to his work on Maven. Jason van Zyl focuses on improving the Software Development Infrastructure associated with medium to large scale projects. Build management and open source involvement have been common threads throughout his professional career. published by Manning in 2003 (ISBN 1-930-11099-5) and Maven: A Developer's Notebook. Brett became increasingly involved in the project's development. CSSC. a company which specializes in collaborative offshore software development using Agile methodologies. discovering Maven while searching for a simpler way to define a common build process across projects. When he's not working on Maven. and started early in the open source technology world. and working on his house. Spain. and is a Member of the Apache Software Foundation. John lives in Gainesville.. of course. Emily. Jason van Zyl: As chief architect and co-founder of Mergere. Brett has become involved in a variety of other open source projects. Carlos Sanchez received his Computer Engineering degree in the University of Coruña. where he is the technical director of Pivolis. telecommunications and. Inc. his focus in the Maven project has been the development of Maven 2. Additionally. he founded the Jakarta Cactus project-a simple testing framework for server-side Java code and the Cargo project-a J2EE container manipulation framework. Vincent has directly contributed to Maven's core. roasting coffee. He continues to work directly on Maven and serves as the Chair of the Apache Maven Project Management Committee. This is Vincent's third book. and in 2005. He created his own company. . John was elected to the Maven Project Management Committee (PMC). which has led to the founding of the Apache Maven project. Since 2004. .This page left intentionally blank. Introduction 3.3.6. Maven Overview 1.4. Creating Applications with Maven 38 39 40 42 44 46 48 49 52 53 54 55 3.6.4. Using Project Inheritance 3.1. Maven’s Principles 1.1. Coherent Organization of Dependencies Local Maven repository Locating dependency artifacts 22 22 23 24 25 26 27 27 28 28 28 31 32 34 1.7.8. Using Profiles 56 56 59 61 64 65 69 70 9 . Introducing Maven 17 21 1.2.8. Utilizing the Build Life Cycle 3. Resolving Dependency Conflicts and Using Version Ranges 3. What Does Maven Provide? 1. Using Maven Plugins 2. Handling Classpath Resources 2. Compiling Test Sources and Running Unit Tests 2.3.1.1.2. Managing Dependencies 3.3.2. Preparing to Use Maven 2. Packaging and Installation to Your Local Repository 2.6.1. Creating Your First Maven Project 2.6. What is Maven? 1.7.6.Table of Contents Preface 1. Summary 3.2. Handling Test Classpath Resources 2.2.3.1. Maven's Origins 1. Using Snapshots 3. Compiling Application Sources 2.1. Maven's Benefits 2. Preventing Filtering of Binary Resources 2.2. Reuse of Build Logic Maven's project object model (POM) 1.1.3. Setting Up an Application Directory Structure 3. Filtering Classpath Resources 2. Convention Over Configuration Standard Directory Layout for Projects One Primary Output Per Project Standard Naming Conventions 1.5.2.5.3.1. Getting Started with Maven 35 37 2.2.2. Developing Your First Mojo 5. Testing J2EE Application 4.2.14. A Note on the Examples in this Chapter 134 134 135 135 136 137 137 138 140 140 5.3.7. BuildInfo Example: Capturing Information with a Java Mojo Prerequisite: Building the buildinfo generator project Using the archetype plugin to generate a stub plugin project The mojo The plugin POM Binding to the life cycle The output 5.4. Building a Web Services Client Project 4.2.5. Deploying your Application 3. Building an EAR Project 4. Introducing the DayTrader Application 4. Building J2EE Applications 74 74 75 75 76 77 78 84 85 4.4.9. A Review of Plugin Terminology 5.2. Building an EJB Project 4. Building an EJB Module With Xdoclet 4. Introduction 5.9.9.4. Summary 5.3. Deploying with an External SSH 3.3.1.5.2.3.1.9.3.2.9. Deploying a J2EE Application 4.1.13.6.9. Deploying EJBs 4.4.3. Deploying with SFTP 3. The Plugin Framework Participation in the build life cycle Accessing build information The plugin descriptor 5. Plugin Development Tools Choose your mojo implementation language 5.4. Introduction 4. Summary 4. Bootstrapping into Plugin Development 5.11. Deploying with FTP 3. Creating a Web Site for your Application 3.12.9.8. Deploying to the File System 3.10. Improving Web Development Productivity 4. Deploying Web Applications 4.1.1. Organizing the DayTrader Directory Structure 4. Developing Custom Maven Plugins 86 86 87 91 95 100 103 105 108 114 117 122 126 132 133 5.11.3. BuildInfo Example: Notifying Other Developers with an Ant Mojo The Ant target The mojo metadata file 141 141 141 142 142 145 146 147 148 148 149 10 .3. Deploying with SSH2 3.10. Building a Web Application Project 4. 5.12.1.8. Adding Reports to the Project Web site 6. Assessing Project Health with Maven 165 167 6. Migrating to Maven 208 209 212 215 218 228 233 236 240 241 8. Where to Begin? 8.10. Introducing the Spring Framework 8.4. Team Collaboration with Maven 168 169 171 174 180 182 186 194 199 202 206 206 207 7. Viewing Overall Project Health 6.1.9. Creating Reference Material 6.3.3. Monitoring and Improving the Health of Your Releases 6. Monitoring and Improving the Health of Your Dependencies 6. Creating an Organization POM 7. Choosing Which Reports to Include 6. Accessing Project Dependencies Injecting the project dependency set Requiring dependency resolution BuildInfo example: logging dependency versions 5. Team Dependency Management Using Snapshots 7.7.3.9. The Issues Facing Teams 7.Modifying the plugin POM for Ant mojos Binding the notify mojo to the life cycle 150 152 5.6.2.5. Creating a Standard Project Archetype 7.5. Introduction 8. Separating Developer Reports From User Documentation 6.4. Creating POM files 242 242 244 250 11 .1. Advanced Mojo Development 5.5. Accessing Project Sources and Resources Adding a source directory to the build Adding a resource to the build Accessing the source-root list Accessing the resource list Note on testing source-roots and resources 5. How to Set up a Consistent Developer Environment 7.2.5. Monitoring and Improving the Health of Your Source Code 6.2.5. Monitoring and Improving the Health of Your Tests 6.1. Attaching Artifacts for Installation and Deployment 153 153 154 154 155 156 157 158 159 160 161 163 163 5.11.2. Cutting a Release 7. Summary 6.6.8. Continuous Integration with Continuum 7. Summary 8. Gaining Access to Maven APIs 5. Creating a Shared Repository 7.6.1. Configuration of Reports 6. What Does Maven Have to do With Project Health? 6.4.5. Summary 7.3.7.1. Complex Expression Roots A. Maven’s Super POM B.2.5.3. Ant Metadata Syntax Appendix B: Standard Conventions 272 272 273 273 274 274 278 278 279 279 283 B. Non-redistributable Jars 8.1.8. The default Life Cycle Life-cycle phases Bindings for the jar packaging Bindings for the maven-plugin packaging A.1.4.2.1.3.6.2.2.4. Testing 8.6. Running Tests 8. The Expression Resolution Algorithm Plugin metadata Plugin descriptor syntax A.7.2.6. Maven's Life Cycles A.1. Using Ant Tasks From Inside Maven 8.4.3. Compiling Tests 8.1.6.8.1. Other Modules 8.6. Some Special Cases 8. Building Java 5 Classes 8.6.5. The site Life Cycle Life-cycle phases Default Life Cycle Bindings 266 266 266 268 269 270 270 270 271 271 271 A.2.2.5.1.1.2.2.2. Maven’s Default Build Life Cycle Bibliography Index 284 285 286 287 289 12 . Java Mojo Metadata: Supported Javadoc Annotations Class-level annotations Field-level annotations A.2. The clean Life Cycle Life-cycle phases Default life-cycle bindings A. Mojo Parameter Expressions A.6. Avoiding Duplication 8.5.5. Simple Expressions A.6.1.3. Referring to Test Classes from Other Modules 8. Standard Directory Structure B. Compiling 8. Restructuring the Code 8. Summary Appendix A: Resources for Plugin Developers 250 254 254 256 257 257 258 258 261 263 263 264 264 265 A.. with all modules 192 195 199 201 202 219 222 223 225 226 231 234 243 244 259 259 260 15 . This page left intentionally blank. 16 . For users more familiar with Maven (including Maven 1. but Maven shines in helping teams operate more effectively by allowing team members to focus on what the stakeholders of a project require -leaving the build infrastructure to Maven! This guide is not meant to be an in-depth and comprehensive resource but rather an introduction. this guide is written to provide a quick solution for the need at hand. Maven works equally well for small and large projects. Maven 2 is a product that offers immediate value to many users and organizations. it is recommended that you step through the material in a sequential fashion. an indispensable guide to understand and use Maven 2. which provides a wide range of topics from understanding Maven's build platform to programming nuances.0. For first time users.x)..Preface Preface Welcome to Better Builds with Maven. 17 . Perhaps. it does not take long to realize these benefits. We hope that this book will be useful for Java project managers as well. As you will soon find. Finally. how to use Maven to build J2EE archives (JAR. goes through the background and philosophy behind Maven and defines what Maven is. After reading this second chapter. Chapter 7 discusses using Maven in a team development environment. shows how to create the build for a full-fledged J2EE application. focuses on the task of writing custom plugins. including a review of plugin terminology and the basic mechanics of the Maven plugin framework. Chapter 4 shows you how to build and deploy a J2EE application. You will learn how to use Maven to ensure successful team development. looks at Maven as a set of practices and tools that enable effective team communication and collaboration. At this stage you'll pretty much become an expert Maven user.Better Builds with Maven Organization The first two chapters of the book are geared toward a new user of Maven 2. explains a migration path from an existing build in Ant to Maven. you should be up and running with Maven. Chapter 5 focuses on developing plugins for Maven. Chapter 6 discusses project monitoring issues and reporting. Assessing Project Health with Maven. visualize. EJB. compile and test the code. Web Services). it discusses the various ways that a plugin can interact with the Maven build environment and explores some examples. gives detailed instructions on creating. WAR. reporting tools. Chapter 8. Getting Started with Maven. Chapter 3 builds on that and shows you how to build a real-world project. Introducing Maven. Chapter 4. create JARs. and install those JARs in your local repository using Maven. and document for reuse the artifacts that result from a software project. split it into modular components if needed. and how to use Maven to generate a Web site for your project. These tools aid the team to organize. It starts by describing fundamentals. the chapter covers the tools available to simplify the life of the plugin developer. compiling and packaging your first project. illustrates Maven's best practices and advanced uses by working on a real-world example application. and Chapter 8 shows you how to migrate Ant builds to Maven. At the same time. Chapter 1. they discuss what Maven is and get you started with your first Maven project. you will be able to take an existing Ant-based build. In this chapter. After reading this chapter. Chapter 3. From there. and learning more about the health of the project. Creating Applications with Maven. Chapter 6. EAR. Developing Custom Maven Plugins. Team Collaboration with Maven. and how to use Maven to deploy J2EE archives to a container. discusses Maven's monitoring tools. you will be revisiting the Proficio application that was developed in Chapter 3. In this chapter you will learn to set up the directory structure for a typical application and the basics of managing an application's development with Maven. Chapter 7. Chapter 2. 18 . Chapter 5. you will be able to keep your current build working. Migrating to Maven. Building J2EE Applications. mergere. We offer source code for download. However. We’ll check the information and. Once at the site.mergere. then you're ready to go. How to Download the Source Code All of the source code used in this book is available for download at. So if you have Maven 2. On this page you will be able to view all errata that have been submitted for this book and posted by Maven editors.com.0 installed.com. so occasionally something will come up that none of us caught prior to publication. and technical support from the Mergere Web site at. go to. To find the errata page for this book. we are human. post an update to the book’s errata page and fix the problem in subsequent editions of the book. 19 . click the Get Sample Code link to obtain the source code for the book. You can also click the Submit Errata link to notify us of any errors that you might have found. Simply email the information to [email protected] and locate the View Book Errata link.Preface Errata We have made every effort to ensure that there are no errors in the text or in the code. How to Contact Us We want to hear about any errors you find in this book.com.mergere. if appropriate. errata. Better Builds with Maven This page left intentionally blank. 20 . but not any simpler..Albert Einstein 21 . . Maven can be the build tool you need. So. 1 You can tell your manager: “Maven is a declarative project management tool that decreases your overall time to market by effectively leveraging cross-project intelligence. uninspiring words. It's the most obvious three-word definition of Maven the authors could come up with. Maven. it is a build tool or a scripting framework.” 22 . distribution. Revolutionary ideas are often difficult to convey with words. an artifact repository model. Too often technologists rely on abstract phrases to capture complex topics in three or four words. and many developers who have approached Maven as another build tool have come away with a finely tuned build system.” Maven is more than three boring. While you are free to use Maven as “just another build tool”. but this doesn't tell you much about Maven. When someone wants to know what Maven is. It is a combination of ideas.1. and software. and a software engine that manages and describes projects. Maven also brings with it some compelling second-order benefits. are beginning to have a transformative effect on the Java community. documentation. It simultaneously reduces your duplication effort and leads to higher code quality . It provides a framework that enables easy reuse of common build logic for all projects following Maven's standards. first-order problems such as simplifying builds. From compilation. Maven Overview Maven provides a comprehensive approach to managing software projects. to distribution. sound-bite answer. The Maven project at the Apache Software Foundation is an open source community which produces software tools that understand a common declarative Project Object Model (POM). but the term project management framework is a meaningless abstraction that doesn't do justice to the richness and complexity of Maven. In addition to solving straightforward. you can stop reading now and skip to Chapter 2. Maven 2. and deploying project artifacts. to view it in such limited terms is akin to saying that a web browser is nothing more than a tool that reads hypertext. a framework that greatly simplifies the process of managing a software project. they expect a short.1.Better Builds with Maven 1. 1. If you are reading this introduction just to find something to tell your manager1.1. Maven provides the necessary abstractions that encourage reuse and take much of the work out of project builds. it will prime you for the concepts that are to follow. what exactly is Maven? Maven encompasses a set of build standards. to team collaboration. and it is impossible to distill the definition of Maven to simply digested sound-bites. “Well. If you are interested in a fuller. to documentation. You may have been expecting a more straightforward answer. and with repetition phrases such as project management and enterprise software start to lose concrete meaning. It defines a standard life cycle for building. testing. Perhaps you picked up this book because someone told you that Maven is a build tool. and the deployment process. Don't worry. What is Maven? Maven is a project management framework. and the technologies related to the Maven project. standards. richer definition of Maven read this introduction. This book focuses on the core tool produced by the Maven project. to answer the original question: Maven is many things to many people. Many people come to Maven familiar with Ant. and deploying. developers were building yet another build system. Soon after the creation of Maven other projects. common build strategies. and the Turbine developers had a different site generation process than the Jakarta Commons developers. Using Maven has made it easier to add external dependencies and publish your own project components. your project gained a build by default. While there were some common themes across the separate builds. started focusing on component development. Developers at the ASF stopped figuring out creative ways to compile. you will wonder how you ever developed without it. and package software.Introducing Maven As more and more projects and products adopt Maven as a foundation for project management. and instead. If you followed the Maven Build Life Cycle. projects such as Jakarta Taglibs had (and continue to have) a tough time attracting developer interest because it could take an hour to configure everything in just the right way. Maven provides standards and a set of patterns in order to facilitate project management through reusable. 1. It is a set of standards and an approach to project development. So. generating documentation. this copy and paste approach to build reuse reached a critical tipping point at which the amount of work required to maintain the collection of build systems was distracting from the central task of developing high-quality software. It is the next step in the evolution of how individuals and organizations collaborate to create software systems. Whereas Ant provides a toolbox for scripting builds. Maven's Origins Maven was borne of the practical desire to make several projects at the Apache Software Foundation (ASF) work in the same. and Web site generation. the Codehaus community started to adopt Maven 1 as a foundation for project management. Maven's standards and centralized repository model offer an easy-touse naming system for projects. Maven is not just a build tool. Maven's standard formats enable a sort of "Semantic Web" for programming projects. predictable way. The same standards extended to testing.1. each community was creating its own build systems and there was no reuse of build logic across projects. distribution. The ASF was effectively a series of isolated islands of innovation. as much as it is a piece of software. they did not have to go through the process again when they moved on to the next project. every project at the ASF had a different approach to compilation. which can be described in a common format. generating metrics and reports. Maven is a way of approaching a set of software as a collection of highly-interdependent components. but Maven is an entirely different creature from Ant. This lack of a common approach to building software meant that every new project tended to copy and paste another project's build system. The build process for Tomcat was different than the build process for Struts. Maven entered the scene by way of the Turbine project. the barrier to entry was extremely high. Ultimately. 23 . test.2. for a project with a difficult build system. Once developers spent time learning how one project was built. so it's a natural association. it becomes easier to understand the relationships between projects and to establish a system that navigates and reports on these relationships. such as Jakarta Commons. Developers within the Turbine project could freely move between subcomponents. and it immediately sparked interest as a sort of Rosetta Stone for software project management. Once you get up to speed on the fundamentals of Maven. In addition. and not necessarily a replacement for Ant. Instead of focusing on creating good component libraries or MVC frameworks. knowing clearly how they all worked just by understanding how one of the components worked. Prior to Maven. the car provides a known interface. to provide a common layout for project documentation. declarative build approach tend to be more transparent. 1. existing Ant scripts (or Make files) can be complementary to Maven and used through Maven's plugin architecture. Maven takes a similar approach to software projects: if you can build one Maven project you can build them all. Maven’s ability to standardize locations for source files. in order to perform the build. The key value to developers from Maven is that it takes a declarative approach rather than requiring developers to create the build process themselves. An individual Maven project's structure and contents are declared in a Project Object Model (POM).Better Builds with Maven However. you can apply it to all projects. if you've learned how to drive a Jeep. and you gain access to expertise and best-practices of an entire industry. test. Projects and systems that use Maven's standard. install) is effectively delegated to the POM and the appropriate plugins.3. and the software tool (named Maven) is just a supporting element within this model. more maintainable. Developers can build any given project without having to understand how the individual plugins work (scripts in the Ant world). which forms the basis of the entire Maven system. if your project currently relies on an existing Ant build script that must be maintained. assemble.1. Given the highly inter-dependent nature of projects in open source. and if you can apply a testing plugin to one project. When you purchase a new car. Much of the project management and build orchestration (compile. more reusable. Plugins allow developers to call existing Ant scripts and Make files and incorporate those existing functions into the Maven build life cycle. and easier to comprehend. and to retrieve project dependencies from a shared storage area makes the building process much less time consuming. you can easily drive a Camry. What Does Maven Provide? Maven provides a useful abstraction for building software in the same way an automobile provides an abstraction for driving. documentation. You describe your project using Maven's model. Maven allows developers to declare life-cycle goals and project dependencies that rely on Maven’s default structures and plugin capabilities. The model uses a common project “language”. 24 . and much more transparent. Maven provides you with: • A comprehensive model for software projects • Tools that interact with this declarative model Maven provides a comprehensive model that can be applied to all software projects. and output. referred to as "building the build". Organizations that adopt Maven can stop “building the build”.Maven is built upon a foundation of reuse. You will see these principles in action in the following chapter. but also for software components. allowing more effective communication and freeing team members to get on with the important work of creating value at the application level. publicly-defined model. As mentioned earlier. • • • Without these advantages.Introducing Maven Organizations and projects that adopt Maven benefit from: • Coherence . When everyone is constantly searching to find all the different bits and pieces that make up a project. Maven makes it is easier to create a component and then integrate it into a multi-project build. when you create your first Maven project. Maintainability .“ Reusability . and focus on building the application. Each of the principles above enables developers to describe their projects at a higher level of abstraction. The following Maven principles were inspired by Christopher Alexander's idea of creating a shared language: • Convention over configuration • Declarative execution • Reuse of build logic • Coherent organization of dependencies Maven provides a shared language for software development projects. it is improbable that multiple individuals can work productively together on a project.2. 1. Developers can jump between different projects without the steep learning curve that accompanies custom. Maven’s Principles According to Christopher Alexander "patterns help create a shared language for communicating insight and experience about problems and their solutions". Because Maven projects adhere to a standard model they are less opaque. and aesthetically consistent relation of parts. Maven provides a structured build life cycle so that problems can be approached in terms of this structure.Maven lowers the barrier to reuse not only for build logic. Agility . This chapter will examine each of these principles in detail. logical. This is a natural effect when processes don't work the same way for everyone. there is little chance anyone is going to comprehend the project as a whole. The definition of this term from the American Heritage dictionary captures the meaning perfectly: “Marked by an orderly. Further.Maven allows organizations to standardize on a set of best practices. Without visibility it is unlikely one individual will know what another has accomplished and it is likely that useful code will not be reused. when code is not reused it is very hard to create a maintainable system. Maven projects are more maintainable because they follow a common. home-grown build systems. along with a commensurate degree of frustration among team members. 25 . When you adopt Maven you are effectively reusing the best practices of an entire industry. As a result you end up with a lack of shared knowledge. If you follow basic conventions. such as classes are singular and tables are plural (a person class relates to a people table). so stray from these defaults when absolutely necessary only. Well. The class automatically knows which table to use for persistence. 2 O'Reilly interview with DHH 26 . With Maven you slot the various pieces in where it asks and Maven will take care of almost all of the mundane aspects for you. This "convention over configuration" tenet has been popularized by the Ruby on Rails (ROR) community and specifically encouraged by R Or's creator David Refinement Hansson who summarizes the notion as follows: “Rails is opinionated software. you're rewarded by not having to configure that link. and allows you to create value in your applications faster with less effort. he probably doesn't even know what Maven is and wouldn't like it if he did because it's not written in Ruby yet!): that is that you shouldn't need to spend a lot of time getting your development infrastructure functioning Using standard conventions saves time. you gain an immense reward in terms of productivity that allows you to do more. One characteristic of opinionated software is the notion of 'convention over configuration'. Convention Over Configuration One of the central tenets of Maven is to provide sensible default strategies for the most common tasks.”2 David Heinemeier Hansson articulates very well what Maven has aimed to accomplish since its inception (note that David Heinemeier Hansson in no way endorses the use of Maven.1. It eschews placing the old ideals of software in a primary position. We have a ton of examples like that. generating documentation. Rails does. You don’t want to spend time fiddling with building. which all add up to make a huge difference in daily use. and better at the application level. or deploying. but the use of sensible default strategies is highly encouraged. and this is what Maven provides. This is not to say that you can't override Maven's defaults. All of these things should simply work. you trade flexibility at the infrastructure level to gain flexibility at the application level. sooner.2.Better Builds with Maven 1. If you are happy to work along the golden path that I've embedded in Rails. and I believe that's why it works. One of those ideals is flexibility. the notion that we should try to accommodate as many approaches as possible. that we shouldn't pass judgment on one form of development over another.. so that you don't have to think about the mundane details. With Rails. makes it easier to communicate to others. It is a very simple idea but it can save you a lot of time.Introducing Maven Standard Directory Layout for Projects The first convention used by Maven is a standard directory layout for project sources. If you have no choice in the matter due to organizational policy or integration issues with existing systems. but you can also take a look in Appendix B for a full listing of the standard conventions. One Primary Output Per Project The second convention used by Maven is the concept that a single Maven project produces only one primary output. You could produce a single JAR file which includes all the compiled classes. If you do have a choice then why not harness the collective knowledge that has built up as a result of using this convention? You will see clear examples of the standard directory structure in the next chapter. configuration files. server code. Maven encourages a common arrangement of project content so that once you are familiar with these standard. maintainability. you will be able to navigate within any Maven project you build in the future. To illustrate. If this saves you 30 minutes for each new project you look at. In this scenario. which should be identified and separated to cope with complexity and to achieve the required engineering quality factors such as adaptability. and a project for the shared utility code portion. you will be able to adapt your project to your customized layout at a cost. Follow the standard directory layout. but. and you will make it easier to communicate about your project. you might be forced to use a directory structure that diverges from Maven's defaults. makes it much easier to reuse. and shared utility code. 27 . project resources. First time users often complain about Maven forcing you to do things a certain way and the formalization of the directory structure is the source of most of the complaints. separate projects: a project for the client portion of the application. Having the utility code in a separate project (a separate JAR file). even if you only look at a few new projects a year that's time better spent on your application. you need to ask yourself if the extra configuration that comes with customization is really worth it. a project for the server portion of the application. the code contained in each project has a different concern (role to play) and they should be separated. default locations. and documentation. You will be able to look at other projects and immediately understand the project layout. These components are generally referred to as project content. The separation of concerns (SoC) principle states that a given problem involves different kinds of concerns. but Maven would encourage you to have three. increased complexity of your project's POM. generated output. when you do this.consider a set of sources for a client/server-based application that contains client code. If you have placed all the sources together in a single project. In this case. Maven pushes you to think clearly about the separation of concerns when setting up your projects because modularity leads to reuse. extendibility and reusability. You can override any of Maven's defaults to create a directory layout of your choosing. the boundaries between our three separate concerns can easily become blurred and the ability to reuse the utility code could prove to be difficult. in a lot of cases. and the POM is Maven's description of a single project.2. 28 .jar you would not really have any idea of the version of Commons Logging. Maven can be thought of as a framework that coordinates the execution of plugins in a well defined way. looking at it.jar are inherently flawed because eventually.2. The execution of Maven's plugins is coordinated by Maven's build life cycle in a declarative fashion with instructions from Maven's POM. It is the POM that drives execution in Maven and this approach can be described as model-driven or declarative execution. easily comprehensible manner. In Maven there is a plugin for compiling source code. later in this chapter. Plugins are the key building blocks for everything in Maven. because the naming convention keeps each one separate in a logical. but with Maven. 1. a set of conventions really. A simple example of a standard naming convention might be commons-logging-1. and it doesn't have to happen again.jar. It is immediately obvious that this is version 1. a plugin for creating Javadocs. Systems that cannot cope with information rich artifacts like commons-logging-1. is the use of a standard naming convention for directories and for the primary output of each project.2. This is important if there are multiple subprojects involved in a build process. Maven's project object model (POM) Maven is project-centric by design. a plugin for creating JARs. Even from this short list of examples you can see that a plugin in Maven has a very specific role to play in the grand scheme of things. you'll track it down to a ClassNotFound exception. when something is misplaced. Maven is useless . This is illustrated in the Coherent Organization of Dependencies section. If the JAR were named commonslogging.the POM is Maven's currency. Declarative Execution Everything in Maven is driven in a declarative fashion using Maven's Project Object Model (POM) and specifically. The intent behind the standard naming conventions employed by Maven is that it lets you understand exactly what you are looking at by.2 of Commons Logging. Maven puts this SoC principle into practice by encapsulating build logic into coherent modules called plugins. One important concept to keep in mind is that everything accomplished in Maven is the result of a plugin executing. and many other functions. Moreover.Better Builds with Maven Standard Naming Conventions The third convention in Maven. you would not even be able to get the information from the jar's manifest. Reuse of Build Logic As you have already learned. Without the POM. well. which results because the wrong version of a JAR file was used. Maven promotes reuse by encouraging a separation of concerns . It doesn't make much sense to exclude pertinent information when you can have it at hand to use.2. a plugin for running tests. the plugin configurations contained in the POM. It's happened to all of us. The naming conventions provide clarity and immediate comprehension. • • project . In Java.0</modelVersion> <groupId>com. myapp-1. Additional artifacts such as source bundles also use the artifactId as part of their file name. and is the analog of the Java language's java.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM will allow you to compile.plugins is the designated groupId for all Maven plugins. The version of the model itself changes very infrequently.This element indicates the unique base name of the primary artifact being generated by this project. • groupId .<extension> (for example. You. in Maven all POMs have an implicit parent in Maven's Super POM.This is the top-level element in all Maven pom.This element indicates the unique identifier of the organization or group that created the project. The answer lies in Maven's implicit use of its Super POM.lang. but still displays the key elements that every POM contains. A typical artifact produced by Maven would have the form <artifactId>-<version>.lang.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.maven. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization. The key feature to remember is the Super POM contains important default information so you don't have to repeat this information in the POMs you create. test. 29 . The POM contains every important piece of information about your project.Introducing Maven The POM below is an example of what you could use to build and test a project.Object class. but it is mandatory in order to ensure stability when Maven introduces new features or other model changes.8. Maven's Super POM carries with it all the default conventions that Maven encourages. and generate basic documentation. Likewise. being the observant reader. modelVersion . For example org. all objects have the implicit parent of java. The POM is an XML document and looks like the following (very) simplified example: <project> <modelVersion>4. so if you wish to find out more about it you can refer to Appendix B. • artifactId . The Super POM can be rather intimidating at first glance.xml files.This required element indicates the version of the object model that the POM is using.jar).0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.0.apache. The POM shown previously is a very simple POM.0.mycompany.Object. will ask “How this is possible using a 15 line file?”. When you need to add some functionality to the build life cycle you do so with a plugin.7 Using Maven Plugins and Chapter 5 Developing Custom Maven Plugins for examples and details on how to customize the Maven build. or EAR.html.). etc. generate-resources. but also indicates a specific life cycle to use as part of the build process. Maven's Build Life Cycle Software projects generally follow similar. initialize. packaging.This element indicates the display name used for the project. and Maven deals with the details behind the scenes. See Chapter 2. testing. Any time you need to customize the way your project builds you either use an existing plugin.This element indicates where the project's site can be found. For example. compilation. In Maven. just keep in mind that the selected packaging of a project plays a part in customizing the build life cycle. Maven will execute the validate. if you tell Maven to compile. and compile phases that precede it automatically. In Maven you do day-to-day work by invoking particular phases in this standard build life cycle. or test. generate-sources. For now. • • name . This not only means that the artifact produced is a JAR. It is important to note that each phase in the life cycle will be executed up to and including the phase you specify. 30 . or other projects that use it as a dependency. The actions that have to be performed are stated at a high level. The life cycle is a topic dealt with later in this chapter.This element provides a basic description of your project. WAR. etc. installation. and during the build process for your project. process-sources. the compile phase invokes a certain set of goals to compile a set of classes. Maven plugins provide reusable build logic that can be slotted into the standard build life cycle. or install. well-trodden build paths: preparation. Maven goes a long way to help you with version management and you will often see the SNAPSHOT designator in a version.This element indicates the package type to be used by this artifact (JAR. For a complete reference of the elements available for use in the POM please refer to the POM reference at. This is often used in Maven's generated documentation. • • url . version . The default value for the packaging element is jar so you do not have to specify this in most cases. or create a custom plugin for the task at hand.apache. The standard build life cycle consists of many phases and these can be thought of as extension points. So. which indicates that a project is in a state of development. EAR.This element indicates the version of the artifact generated by the project. you tell Maven that you want to compile. the build life cycle consists of a series of phases where each phase can perform one or more actions. or goals.org/maven-model/maven. related to that phase. For example.Better Builds with Maven • packaging . The path that Maven moves along to accommodate an infinite variety of projects is called the build life cycle. WAR. or package. description . In “Maven-speak” an artifact is a specific piece of software. but you may be asking yourself “Where does that dependency come from?” and “Where is the JAR?” The answers to those questions are not readily apparent without some explanation of how Maven's dependencies.jar. and it supplies these coordinates to its own internal dependency mechanisms. the most common artifact is a JAR file. or EAR file. When a dependency is declared within the context of your project.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM states that your project has a dependency on JUnit. you stop focusing on a collection of JAR files. artifacts.3. Your project doesn't require junit-3. A dependency is uniquely identified by the following identifiers: groupId.1. In Java. our example POM has a single dependency listed for Junit: <project> <modelVersion>4. Maven needs to know what repository to search as well as the dependency's coordinates. Maven takes the dependency coordinates you provide in the POM. If a matching artifact is located. In the POM you are not specifically telling Maven where the dependencies are physically located. 31 . Coherent Organization of Dependencies We are now going to delve into how Maven resolves dependencies and discuss the intimately connected concepts of dependencies. artifactId and version. In order for Maven to attempt to satisfy a dependency.1 of the junit artifact produced by the junit group. instead it depends on version 3. but a Java artifact could also be a WAR.8.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1. SAR. instead you deal with logical dependencies.8.8. A dependency is a reference to a specific artifact that resides in a repository. At a basic level. If you recall.mycompany.Introducing Maven 1.2. Maven tries to satisfy that dependency by looking in all of the remote repositories to which it has access. With Maven. and repositories. and providing this dependency to your software project.0. we can describe the process of dependency management as Maven reaching out into the world. you are simply telling Maven what a specific project expects. Dependency Management is one of the most powerful features in Maven. which is straightforward. artifacts and repositories work.0</modelVersion> <groupId>com. There is more going on behind the scenes.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. grabbing a dependency. Maven transports it from that remote repository to your local repository for project use. in order to find the artifacts that most closely match the dependency request. but the key concept is that Maven dependencies are declarative. jar: 32 . but when a declared dependency is not present in your local repository Maven searches all the remote repositories to which it has access to find what’s missing. Maven creates your local repository in ~/. Read the following sections for specific details regarding where Maven searches for these dependencies. You must have a local repository in order for Maven to work.m2/repository.Better Builds with Maven Maven has two types of repositories: local and remote. it will create your local repository and populate it with artifacts as a result of dependency requests. By default.8.1. Maven usually interacts with your local repository. Local Maven repository When you install and run Maven for the first time. The following folder structure shows the layout of a local Maven repository that has a few locally installed dependency artifacts such as junit-3. a repository is just an abstract storage mechanism. On the next page is the general pattern used to create the repository layout: 33 .Introducing Maven Figure 1-1: Artifact movement from remote to local repository So you understand how the layout works.1. In theory. We’ll stick with our JUnit example and examine the junit-3..jar artifact that are now in your local repository. Above you can see the directory structure that is created when the JUnit dependency is resolved. but in practice the repository is a directory structure in your file system.8. take a closer look at one of the artifacts that appeared in your local repository. for example.8.1. artifactId of “junit”. 34 .x then you will end up with a directory structure like the following: Figure 1-3: Sample directory structure In the first directory listing you can see that Maven artifacts are stored in a directory structure that corresponds to Maven’s groupId of org.maven.8.m2/repository/junit/junit/3. Maven attempts to locate a dependency's artifact using the following process: first.1” in ~/. If this file is not present.jar. Maven will attempt to find the artifact with a groupId of “junit”. Locating dependency artifacts When satisfying dependencies.Better Builds with Maven Figure 1-2: General pattern for the repository layout If the groupId is a fully qualified domain name (something Maven encourages) such as z.apache. and a version of “3.8.y. Maven will generate a path to the artifact in your local repository.1/junit-3. Maven will fetch it from a remote repository. Maven provides such a technology for project management. and it is a trivial process to upgrade all ten web applications to Spring 2. it is the adoption of a build life-cycle process that allows you to take your software development to the next level. Before Maven. Once the dependency is satisfied. all projects referencing this dependency share a single copy of this JAR. modular project arrangements. but it is incompatible with the concept of small.0 by changing your dependency declarations. Your local repository is one-stop-shopping for all artifacts that you need regardless of how many projects you are building. Using Maven is more than just downloading another JAR file and a set of scripts. Continuum and Archiva build platform. if ever. For more information on Maestro please see:. Maven is a framework. Declare your dependencies and let Maven take care of details like compilation and testing classpaths.8. a useful technology just works. If you were coding a web application.0 distribution based on a pre-integrated Maven. every project with a POM that references the same dependency will use this single copy installed in your local repository. you would check the 10-20 JAR files.3 If your project's POM contains more than one remote repository. upon which your project relies. Maven is a repository. While this approach works for a few projects. and. you don't have to jump through hoops trying to get it to work. there is no need to store the various spring JAR files in your project. 3 Alternatively. From this point forward. and you would add these dependencies to your classpath. the common pattern in most projects was to store JAR files in a project's subdirectory. In other words.ibiblio.jar for each project that needs it. To summarize. you don’t store a copy of junit3. it should rarely. into a lib directory. With Maven. Maven is a set of standards. active open-source community that produces software focused on project management. Dependencies are not your project's code. 1. be a part of your thought process. simplifies the process of development. Each project relies upon a specific artifact via the dependencies listed in a POM. and Maven is software. shielding you from complexity and allowing you to focus on your specific task. 35 . Maven's Benefits A successful technology takes away burden.1. Maven will attempt to download an artifact from each remote repository in the order defined in your POM. which all depend on version 1. Maestro is an Apache License 2.mergere.org/maven2. it doesn't scale easily to support an application with a great number of small components.6 of the Spring Framework.com/.Introducing Maven By default. You don't have to worry about whether or not it's going to work. artifacts can be downloaded from a secure. rather than imposing it. in the background. the artifact is downloaded and installed in your local repository. internal Maven repository. if your project has ten web applications. in doing so.2. Maven will attempt to fetch an artifact from the central Maven repository at. which can be managed by Mergere Maestro.0 JARs to every project. Storing artifacts in your SCM along with your project may seem appealing. you simply change some configurations in Maven. and they shouldn't be versioned in an SCM. Maven is also a vibrant. Instead of adding the Spring 2. Like the engine in your car or the processor in your laptop.3. 36 .Better Builds with Maven This page left intentionally blank. . The terrible temptation to tweak should be resisted unless the payoff is really noticeable. not battalions of special cases.2. then you should be all set to create your first Maven project.com</host> <port>8080</port> <username>your-username</username> <password>your-password</password> </proxy> </proxies> </settings> If Maven is already in use at your workplace.com</id> <name>My Company's Maven Proxy</name> <url>. Create a <your-home-directory>/. <settings> <mirrors> <mirror> <id>maven. If you are behind a firewall.m2/settings. To do this. create a <your-homedirectory>/.m2/settings. it may be necessary to make a few more preparations for Maven to function correctly.xml file with the following content: <settings> <proxies> <proxy> <active>true</active> <protocol>http</protocol> <host>proxy. it is assumed that you are a first time Maven user and have already set up Maven on your local system. then note the URL and let Maven know you will be using a proxy. then please refer to Maven's Download and Installation Instructions before continuing.xml file with the following content. 38 . ask your administrator if there if there is an internal Maven proxy.mycompany.xml file will be explained in more detail in the following chapter and you can refer to the Maven Web site for the complete details on the settings.Better Builds with Maven 2.mycompany. Now you can perform the following basic check to ensure Maven is working correctly: mvn -version If Maven's version is displayed.xml file. If you have not set up Maven yet. If there is an active Maven proxy running. The settings. Preparing to Use Maven In this chapter. so for now simply assume that the above settings will work. then you will have to set up Maven to understand that. Depending on where your machine is located. Maven requires network access.mycompany.1.com/maven2</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> </settings> In its optimal mode. app \ -DartifactId=my-app You will notice a few things happened when you executed this command. 39 .2. which looks like the following: <project> <modelVersion>4. and that it in fact adheres to Maven's standard directory layout discussed in Chapter 1.apache. Creating Your First Maven Project To create your first project. an archetype is a template of a project. After the archetype generation has completed.8.xml file.mycompany. you will notice that the following directory structure has been created. and this directory contains your pom.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>. This chapter will show you how the archetype mechanism works.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1. which contains a pom. please refer to the Introduction to Archetypes.Getting Started with Maven 2.0</modelVersion> <groupId>com.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. execute the following: C:\mvnbook> mvn archetype:create -DgroupId=com. To create the Quick Start Maven project.xml. Whenever you see a directory structure.mycompany.1</version> <scope>test</scope> </dependency> </dependencies> </project> At the top level of every project is your pom. you know you are dealing with a Maven project. you will notice that a directory named my-app has been created for the new project. First. which is combined with some user input to produce a fullyfunctional Maven project. you will use Maven's Archetype mechanism. In Maven.xml file.0. An archetype is defined as an original pattern or model from which all other things of the same kind are made. but if you would like more information about archetypes. note that this one simple command encompasses Maven's four foundational principles: • Convention over configuration • Reuse of build logic • Declarative execution • Coherent organization of dependencies These principles are ingrained in all aspects of Maven. Now that you have a POM. and so on). some application sources. 2. various descriptors such as assembly descriptors. you tell Maven what you need. configuration files. you are ready to build your project. but the following analysis of the simple compile command shows you the four principles in action and makes clear their fundamental importance in simplifying the development of a project. ${basedir}.Better Builds with Maven Figure 2-1: Directory structure after archetype generation The src directory contains all of the inputs required for building. but later in the chapter you will see how the standard directory layout is employed for other project content. for the my-app project. compile your application sources using the following command: C:\mvnbook\my-app> mvn compile 40 . Then. at a very high level. Change to the <my-app> directory. Compiling Application Sources As mentioned in the introduction. in one fell swoop. and deploying the project (source files. testing. in order to accomplish the desired task. in a declarative way. The <my-app> directory is the base directory. the site. Before you issue the command to compile the application sources. In this first stage you have Java source files only. and some test sources. documenting.3. along with its default configuration. What actually compiled the application sources? This is where Maven's second principle of “reusable build logic” comes into play. what Maven uses to compile the application sources. of course.plugins:maven-resources-plugin: checking for updates from central . you won't find the compiler plugin since it is not shipped with the Maven distribution. By default. there is a form of mapping and it is called Maven's default build life cycle...plugins:maven-compiler-plugin: checking for updates from central . In fact. The same holds true for the location of the compiled classes which. is the tool used to compile your application sources.. inherited from the Super POM. Although you now know that the compiler plugin was used to compile the application sources.Getting Started with Maven After executing this command you should see output similar to the following: [INFO-------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [compile] [INFO]------------------------------------------------------------------[INFO] artifact org. Instead. now you know how Maven finds application sources. The standard compiler plugin. override this default location..maven. The next question is.maven. [INFO] [resources:resources] . Maven downloads plugins as they are needed. but there is very little reason to do so. and how Maven invokes the compiler plugin. Even the simplest of POMs knows the default location for application sources. You can.apache. How did Maven know where to look for sources in order to compile them? And how did Maven know where to put the compiled classes? This is where Maven's principle of “convention over configuration” comes into play. if you poke around the standard Maven installation.. application sources are placed in src/main/java. by default. This default value (though not visible in the POM above) was.apache. in fact. 41 . [INFO] artifact org. how was Maven able to retrieve the compiler plugin? After all. So. how was Maven able to decide to use the compiler plugin. This means you don't have to state this location at all in any of your POMs. . The same build logic encapsulated in the compiler plugin will be executed consistently across any number of projects. if you use the default location for application sources.. is target/classes. in the first place? You might be guessing that there is some background process that maps a simple command to a particular plugin. Maven will execute the command much quicker. By following the standard Maven conventions you can get a lot done with very little effort! 2. because Maven already has what it needs. If you're a keen observer you'll notice that using the standard conventions makes the POM above very small. you probably have unit tests that you want to compile and execute as well (after all. and eliminates the requirement for you to explicitly tell Maven where any of your sources are. internal Maven repository. simply tell Maven you want to test your sources. 42 . the compiled classes were placed in target/classes. or where your output should go. Use the following simple command to test: C:\mvnbook\my-app> mvn test 4 Alternatively.4 The next time you execute the same command again. it took almost 4 minutes with a broadband connection). Maven will download all the plugins and related dependencies it needs to fulfill the command. Therefore.0 distribution based on a pre-integrated Maven. wink wink*).4. which is specified by the standard directory layout. Maestro is an Apache License 2.com/.mergere. which can be managed by Mergere Maestro. Compiling Test Sources and Running Unit Tests Now that you're successfully compiling your application's sources. As you can see from the output. programmers always write and execute their own unit tests *nudge nudge. Continuum and Archiva build platform.Better Builds with Maven The first time you execute this (or any other) command. Again. From a clean installation of Maven this can take quite a while (in the output above. This implies that all prerequisite phases in the life cycle will be performed to ensure that testing will be successful. artifacts can be downloaded from a secure. For more information on Maestro please see:. it won't download anything new. apache. These are the dependencies and plugins necessary for executing the tests (recall that it already has the dependencies it needs for compiling and won't download them again).mycompany. • Before compiling and executing the tests. Errors: 0 [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 15 seconds [INFO] Finished at: Thu Oct 06 08:12:17 MDT 2005 [INFO] Final Memory: 2M/8M [INFO]------------------------------------------------------------------- Some things to notice about the output: • Maven downloads more dependencies this time. you'll want to move on to the next logical step. 43 . Maven compiles the main code (all these classes are up-to-date. Errors: 0.maven. Failures: 0.Getting Started with Maven After executing this command you should see output similar to the following: [INFO]------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [test] [INFO]------------------------------------------------------------------[INFO] artifact org.app. mvn test will always run the compile and test-compile phases first. as well as all the others defined before it. you can execute the following command: C:\mvnbook\my-app> mvn test-compile However... [INFO] [surefire:test] [INFO] Setting reports dir: C:\Test\Maven2\test\my-app\target/surefire-reports ------------------------------------------------------T E S T S ------------------------------------------------------[surefire] Running com. [INFO] [resources:resources] [INFO] [compiler:compile] [INFO] Nothing to compile . Time elapsed: 0 sec Results : [surefire] Tests run: 1. Failures: 0. and execute the tests. remember that it isn't necessary to run this every time.plugins:maven-surefire-plugin: checking for updates from central . compile the tests. how to package your application. since we haven't changed anything since we compiled last). Now that you can compile the application sources..AppTest [surefire] Tests run: 1.all classes are up to date [INFO] [resources:testResources] [INFO] [compiler:testCompile] Compiling 1 source file to C:\Test\Maven2\test\my-app\target\test-classes .. If you simply want to compile your test sources (but not execute the tests). 0-SNAPSHOT..AppTest . The directory <your-homedirectory>/. you'll want to install the artifact (the JAR file) you've generated into your local repository.app. Now.Better Builds with Maven 2. It can then be used by other projects as a dependency. This is how Maven knows to produce a JAR file from the above command (you'll read more about this later).0-SNAPSHOT.0-SNAPSHOT.jar [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 5 seconds [INFO] Finished at: Tue Oct 04 13:20:32 GMT-05:00 2005 [INFO] Final Memory: 3M/8M [INFO]------------------------------------------------------------------- 44 .m2/repository is the default location of the repository. Errors: 0 [INFO] [jar:jar] [INFO] Building jar: <dir>/my-app/target/my-app-1. Time elapsed: 0.001 sec Results : [surefire] Tests run: 1.jar [INFO] [install:install] [INFO] Installing c:\mvnbook\my-app\target\my-app-1. Take a look in the the target directory and you will see the generated JAR file. you will notice the packaging element is set to jar. Errors: 0.0-SNAPSHOT\my-app-1.5.jar to <localrepository>\com\mycompany\app\my-app\1. To install.mycompany. Failures: 0. Failures: 0. By default.Getting Started with Maven Note that the Surefire plugin (which executes the test) looks for tests contained in files with a particular naming convention. alternatively you might like to generate an Eclipse descriptor: C:\mvnbook\my-app> mvn eclipse:eclipse 45 . Perhaps you'd like to generate an IntelliJ IDEA descriptor for the project: C:\mvnbook\my-app> mvn idea:idea This can be run over the top of a previous IDEA project. there is far more functionality available to you from Maven without requiring any additions to the POM. So. simply execute the following command: C:\mvnbook\my-app> mvn site There are plenty of other stand-alone goals that can be executed as well. the following tests are excluded: • • **/Abstract*Test. For projects that are built with Maven. This chapter will cover one in particular. In this case. what other functionality can you leverage. testing.java **/Test*. Or. to get any more functionality out of an Ant build script. there are a great number of Maven plugins that work out-of-the-box.java • **/*TestCase. it will update the settings rather than starting fresh. Of course. building. as it is one of the highly-prized features in Maven. as it currently stands. and installing a typical Maven project. this covers the majority of tasks users perform. if you're pressed for time and just need to create a basic Web site for your project. the following tests are included: • • **/*Test. you must keep making error-prone additions.java Conversely.java **/Abstract*TestCase.java You have now completed the process for setting up. for example: C:\mvnbook\my-app> mvn clean This will remove the target directory with the old build data before starting. Without any work on your part. this POM has enough information to generate a Web site for your project! Though you will typically want to customize your Maven site. packaging. and if you've noticed. so it is fresh. everything done up to this point has been driven by an 18-line POM. In contrast. given Maven's re-usable build logic? With even the simplest POM. 6. you can package resources within JARs. In the following example. Maven again uses the standard directory layout. This means that by adopting Maven's standard conventions. The rule employed by Maven is that all directories or files placed within the src/main/resources directory are packaged in your JAR with the exact same structure. If you unpacked the JAR that Maven created you would see the following: 46 . Handling Classpath Resources Another common use case. Figure 2-2: Directory structure after adding the resources directory You can see in the preceding example that there is a META-INF directory with an application. is the packaging of resources into a JAR file. That is where you place any resources you wish to package in the JAR. starting at the base of the JAR.Better Builds with Maven 2.properties file within that directory. you need to add the directory src/main/resources. For this common task. which requires no changes to the POM shown previously. simply by placing those resources in a standard directory structure. xml and pom. One simple use might be to retrieve the version of your application. simply create the resources and META-INF directories and create an empty file called application. Then run mvn install and examine the jar file in the target directory. as well as a pom. should the need arise. Operating on the POM file would require you to use Maven utilities.Getting Started with Maven Figure 2-3: Directory structure of the JAR file created by Maven The original contents of src/main/resources can be found starting at the base of the JAR and the application. If you would like to try this example. You can create your own manifest if you choose. 47 . but the properties can be utilized using the standard Java APIs. These come standard with the creation of a JAR in Maven.xml inside.xml and pom.properties file is there in the META-INF directory.properties files are packaged up in the JAR so that each artifact produced by Maven is self-describing and also allows you to utilize the metadata in your own application. You will also notice some other files like META-INF/MANIFEST. The pom.MF. but Maven will generate one by default if you don't.properties file. .properties" ).. you could use a simple snippet of code like the following for access to the resource required for testing: [.] // Retrieve resource InputStream is = getClass().] 48 ...Better Builds with Maven 2. follow the same pattern as you do for adding resources to the JAR. // Do something with the resource [. At this point you have a project directory structure that should look like the following: Figure 2-4: Directory structure after adding test resources In a unit test. Handling Test Classpath Resources To add resources to the classpath for your unit tests.getResourceAsStream( "/test. except place resources in the src/test/resources directory.1.6. To have Maven filter resources when copying. Filtering Classpath Resources Sometimes a resource file will need to contain a value that can be supplied at build time only.xml.maven.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.Getting Started with Maven To override the manifest file yourself.0</modelVersion> <groupId>com.xml.0.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url></groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.apache.8. The property can be either one of the values defined in your pom. a property defined in an external properties file.6.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestFile>META-INF/MANIFEST. you can use the follow configuration for the maven-jarplugin: <plugin> <groupId>org.MF</manifestFile> </archive> </configuration> </plugin> 2.apache.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> </project> 49 .mycompany. To accomplish this in Maven.2. simply set filtering to true for the resource directory in your pom. a value defined in the user's settings. or a system property.xml: <project> <modelVersion>4. you can filter your resource files dynamically by putting a reference to the property that will contain the value into your resource file using the syntax ${<property name>}. 0-SNAPSHOT To reference a property defined in an external file. All of this information was previously provided as default values and now must be added to the pom. resources.xml file: [. and resource elements . add a reference to this new file in the pom.properties my. In addition.properties file.xml to override the default value for filtering and set it to true.filter.name=${project.name} refers to the name of the project.properties application.finalName} refers to the final name of the file created. whose values will be supplied when the resource is filtered as follows: # application.value=hello! Next.properties application. So ${project.Better Builds with Maven You'll notice that the build. you can execute the following command (process-resources is the build life cycle phase where the resources are copied and filtered): mvn process-resourcesThe application.which weren't there before .version} refers to the version of the project.] 50 . the property name uses the names of the XML elements that define the value. and ${project. any element in your POM is available when filtering resources. create an external properties file and call it src/main/filters/filter.. which weren't there before.xml.properties file under target/classes..build.name} application.properties</filter> </filters> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> [.version=${project.. create an src/main/resources/application. First. the POM has to explicitly state that the resources are located in the src/main/resources directory.version} With that in place.] <build> <filters> <filter>src/main/filters/filter. ${project.properties: # filter.name=Maven Quick Start Archetype application. all you need to do is add a reference to this external file in your pom.have been added. In fact. To continue the example. which will eventually go into the JAR looks like this: # application.version=1.. To reference a property defined in your pom. when the built project is packaged.xml. name} application.value property in an external file.filter.filter.prop} Now.properties either):<project> <modelVersion>4. when you execute the following command (note the definition of the command. To continue the example.name=${project.value> </properties> </project> Filtering resources can also retrieve values from system properties.value>hello</my.mycompany.home).xml and you'd get the same effect (notice you don't need the references to src/main/filters/filter.properties application. mvn process-resources "-Dcommand.apache.version or user.properties java.line.prop=hello again" 51 .line.version=${project. add a reference to this property in the application.properties file to look like the following: # application. either the system properties built into Java (like java.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.0</modelVersion> <groupId>com.filter.Getting Started with Maven Then.prop property on the command line).0.version=${java.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.prop=${command.value} The next execution of the mvn process-resources command will put the new property value into application.version} command.properties. change the application.line.filter. you could have defined it in the properties section of your pom. the application.line.version} message=${my. As an alternative to defining the my.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> <properties> <my.properties file as follows: # application. or properties defined on the command line using the standard Java -D parameter.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url> file will contain the values from the system properties.8. but you do not want them filtered. In addition you would add another resource entry. and an inclusion of your images directory. </project> 52 . <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> <excludes> <exclude>images/**</exclude> </excludes> </resource> <resource> <directory>src/main/resources</directory> <includes> <include>images/**</include> </includes> </resource> </resources> </build> ... If you had a src/main/resources/images that you didn't want to be filtered.Better Builds with Maven 2.3. with filtering disabled.6. then you would create a resource entry to handle the filtering of resources with an exclusion for the resources you wanted unfiltered. This is most often the case with binary resources. for example image files.. The build element would look like the following: <project> .. Preventing Filtering of Binary Resources Sometimes there are classpath resources that you want to include in your JAR. plugins or the org. You can specify an additional groupId to search within your POM.5</source> <target>1. and in some ways they are. The configuration element applies the given parameters to every goal from the compiler plugin. For example. or settings.codehaus. you may want to configure the Java compiler to allow JDK 5. then Maven will default to looking for the plugin with the org. To illustrate the similarity between plugins and dependencies. or configure parameters for the plugins already included in the build.0</version> <configuration> <source>1. but if you find something has changed . this plugin will be downloaded and installed automatically in much the same way that a dependency would be handled. <build> <plugins> <plugin> <groupId>org.. For the most part.Getting Started with Maven 2.maven.mojo groupId label.5</target> </configuration> </plugin> </plugins> </build> .apache.. If you do not specify a groupId.. the compiler plugin is already used as part of the build process and this just changes the configuration. </project> You'll notice that all plugins in Maven 2 look very similar to a dependency. you must include additional Maven plugins. plugin developers take care to ensure that new versions of plugins are backward compatible so you are usually OK with the latest release. 53 . the groupId and version elements have been shown. If it is not present on your local system. to customize the build for a Maven project. This is often the most convenient way to use a plugin. If you do not specify a version then Maven will attempt to use the latest released version of the specified plugin. but in most cases these elements are not required.7. Using Maven Plugins As noted earlier in the chapter.0 sources..you can lock down a specific version. This is as simple as adding the following to your POM: <project> .maven. but you may want to specify the version of a plugin to ensure reproducibility.xml.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.apache. In the above case. you'll know how to use the basic features of Maven: creating a project. and packaging a project. you've seen how you can use Maven to build your project. Summary After reading Chapter 2. You should also have some insight into how Maven handles dependencies and provides an avenue for customization using Maven plugins.apache.8.apache. The next few chapters provide you with the how-to guidelines to customize Maven's behavior and use Maven to manage interdependent software projects. By learning how to build a Maven project. you have gained access to every single project using Maven. If you want to see the options for the maven-compiler-plugin shown previously.plugins \ -DartifactId=maven-compiler-plugin -Dfull=true You can also find out what plugin configuration is available by using the Maven Plugin Reference section at. If someone throws a Maven project at you.Better Builds with Maven If you want to find out what the plugin's configuration options are.maven. If you are interested in learning how Maven builds upon the concepts described in the Introduction and obtaining a deeper working knowledge of the tools introduced in Chapter 2. 54 . compiling a project. testing a project. use the mvn help:describe command. read on. you should be up and running with Maven. 2. You've learned a new language and you've taken Maven for a test drive.org/plugins/ and navigating to the plugin and goal you are using. you could stop reading this book now. If you were looking for just a build tool. although you might want to refer to the next chapter for more information about customizing your build to fit your project's unique needs. In eighteen pages. use the following command: mvn help:describe -DgroupId=org. . Berard 55 . • Proficio CLI: The code which provides a command line interface to Proficio.Better Builds with Maven 3. which enable code reusability. you will see that the Proficio sample application is made up of several Maven modules: • Proficio API: The application programming interface for Proficio. In doing so. and be able to easily identify what a particular module does simply by looking at its name. In this chapter. Moreover. which is Latin for “help”. • These are default naming conventions that Maven uses. The interfaces for the APIs of major components. 56 .1. 3. The application that you are going to create is called Proficio. you are going to learn about some of Maven’s best practices and advanced uses by working on a small application to manage frequently asked questions (FAQ). • Proficio Stores: The module which itself. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). using a real-world example. more manageable and comprehensible parts. Introduction In the second chapter you stepped though the basics of setting up a simple project. everyone on the team needs to clearly understand the convention. are also kept here. houses all the store modules. like the store. which consists of all the classes that will be used by Proficio as a whole. Setting Up an Application Directory Structure In setting up Proficio's directory structure. which consists of a set of interfaces. Now you will delve in a little deeper. Concerns are the primary motivation for organizing and decomposing software into smaller.2. goal. but you are free to name your modules in any fashion your team decides. • Proficio Model: The data model for the Proficio application. task. it is important to keep in mind that Maven emphasizes the practice of standardized and modular builds. As such. Proficio has a very simple memory-based store and a simple XStream-based store. each of which addresses one or more specific concerns. you will be guided through the specifics of setting up an application and managing that application's Maven structure. a key goal for every software development project. • Proficio Core: The implementation of the API. The only real criterion to which to adhere is that your team agrees to and uses a single naming convention. and operate on the pieces of software that are relevant to a particular concept. So. lets start by discussing the ideal directory structure for Proficio. The natural outcome of this practice is the generation of discrete and coherent components. or purpose. SoC refers to the ability to identify. encapsulate. In Maven 1. You should take note of the packaging element.x documentation. It is recommended that you specify the application version in the top-level POM and use that version across all the modules that make up your application.0-SNAPSHOT</version> <name>Maven Proficio</name> <url> Applications with Maven In examining the top-level POM for Proficio.apache. which you can see is 1. If you were to look at Proficio's directory structure you would see the following: 57 . which in this case has a value of pom.0-SNAPSHOT.maven.proficio</groupId> <artifactId>proficio</artifactId> <packaging>pom</packaging> <version>1. you can see in the modules element all the sub-modules that make up the Proficio application.org</url> .apache. so it makes sense that all the modules have a common application version. A module is a reference to another Maven project. Currently there is some variance on the Maven web site when referring to directory structures that contain more than one Maven project. For POMs that contain modules....0</modelVersion> <groupId>org.. but the Maven team is trying to consistently refer to these setups as multimodule builds now. For an application that has multiple modules.0. which really means a reference to another POM. This setup is typically referred to as a multi-module build and this is how it looks in the top-level Proficio POM: <project> <modelVersion>4. .x these were commonly referred to as multi-project builds and some of this vestigial terminology carried over to the Maven 2. it is very common to release all the sub-modules together. </project> An important feature to note in the POM above is the value of the version element. maven.0.Better Builds with Maven Figure 3-1: Proficio directory structure You may have noticed that the module elements in the POM match the names of the directories in the prior Proficio directory structure. Looking at the module names is how Maven steps into the right directory to process the respective POMs located there.proficio</groupId> <artifactId>proficio</artifactId> <version>1. If you take a look at the POM for the proficio-stores module you will see a set of modules contained therein: <project> <parent> <groupId>org.0-SNAPSHOT</version> </parent> <modelVersion>4..apache.0</modelVersion> <artifactId>proficio-stores</artifactId> <name>Maven Proficio Stores</name> <packaging>pom</packaging> <modules> <module>proficio-store-memory</module> <module>proficio-store-xstream</modul </modules> </project> 58 . which is the proficio-stores module. Being the observant user.3. This is the snippet in each of the POMs that lets you draw on the resources stated in the specified toplevel POM and from which you can inherit down to the level required . state your deployment information. using our top-level POM for the sample Proficio application. Using Project Inheritance One of the most powerful features in Maven is project inheritance. organizing your projects in groups according to concern.all in a single place. or state your common dependencies . 3.apache. You can nest sets of projects like this to any level.proficio</groupId> <artifactId>proficio</artifactId> <version>1.0-SNAPSHOT</version> </parent> .maven. you have probably taken a peek at all the POMs in each of the projects that make up the Proficio project and noticed the following at the top of each of the POMs: . which are all placed in one directory.. Using project inheritance allows you to do things like state your organizational information. 59 . Let's examine a case where it makes sense to put a resource in the top-level POM.. just as has been done with Proficio’s multiple storage mechanisms. <parent> <groupId>org...enabling you to add resources where it makes sense in the hierarchy of your projects. Better Builds with Maven If you look at the top-level POM for Proficio.maven. by stating the dependency in the top-level POM once. you never have to declare this dependency again.. So... The dependency is stated as following: <project> . you will see that in the dependencies section there is a declaration for JUnit version 3. </project> What specifically happens for each child POM.apache.codehaus.proficio</groupId> <artifactId>proficio-api</artifactId> </dependency> <dependency> <groupId>org.proficio</groupId> <artifactId>proficio</artifactId> <version>1.1. in any of your child POMs.8.0</modelVersion> <artifactId>proficio-core</artifactId> <packaging>jar</packaging> <name>Maven Proficio Core</name> <dependencies> <dependency> <groupId>org. if you take a look at the POM for the proficio-core module you will see the following (Note: there is no visible dependency declaration for Junit): <project> <parent> <groupId>org. <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. In this case the assumption being made is that JUnit will be used for testing in all our child projects.plexus</groupId> <artifactId>plexus-container-default</artifactId> </dependency> </dependencies> </project> 60 . is that each one inherits the dependencies section of the top-level POM.0-SNAPSHOT</version> </parent> <modelVersion>4.apache..maven. So.1</version> <scope>test</scope> </dependency> </dependencies> .8.0. So in this case. When this happens it is critical that the same version of a given dependency is used for all your projects. you will see the JUnit version 3. </project> You will have noticed that the POM that you see when using the mvn help:effective-pom is bigger than you expected... to end up with multiple versions of a dependency on the classpath when your application executes..1</version> <scope>test</scope> </dependency> . the proficio-core project inherits from the top-level Proficio project. making dependency management difficult to say the least. <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. across all of your projects are in alignment so that your testing accurately reflects what you will deploy as your final result.1 dependency: <project> .8. 3. it is likely that some of those projects will share common dependencies. so that the final application works correctly. </dependencies> . In order to manage. After you move into the proficio-core module directory and run the command. which in turn inherits from the Super POM.. take a look at the resulting POM. Managing Dependencies When you are building applications you typically have a number of dependencies to manage and that number only increases over time. But remember from Chapter 2 that the Super POM sits at the top of the inheritance hierarchy. versions of dependencies across several projects. This command will show you the final result for a target POM. When you write applications which consist of multiple. <dependencies> . 61 . as the results can be far from desirable. you will need to use the handy mvn help:effective-pom command..4.. individual projects. you use the dependency management section in the top-level POM of an application. of all your dependencies.8. You want to make sure that all the versions. Maven's strategy for dealing with this problem is to combine the power of project inheritance with specific dependency management elements in the POM. Looking at the effective POM includes everything and is useful to view when trying to figure out what is going on when you are having problems.Creating Applications with Maven In order for you to see what happens during the inheritance process. You don't want.. or align.. for example. <dependencyManagement> <dependencies> <dependency> <groupId>org.apache.0-alpha-9</version> </dependency> </dependencies> </dependencyManagement> .version} specification is the version specified by the top-level POM's version element.apache.maven. As you can see within the dependency management section..version}</version> </dependency> <dependency> <groupId>org.version}</version> </dependency> <dependency> <groupId>org..Better Builds with Maven To illustrate how this mechanism works.. There is an important distinction to be made between the dependencies element contained within the dependencyManagment element and the top-level dependencies element in the POM.apache. we have several Proficio dependencies and a dependency for the Plexus IoC container.plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1. </project> Note that the ${project.proficio</groupId> <artifactId>proficio-model</artifactId> <version>${project.proficio</groupId> <artifactId>proficio-api</artifactId> <version>${project. which is the application version.proficio</groupId> <artifactId>proficio-core</artifactId> <version>${project.maven. let's look at the dependency management section of the Proficio top-level POM: <project> .codehaus..version}</version> </dependency> <dependency> <groupId>org. 62 .maven. .apache. <dependencies> <dependency> <groupId>org.. to make it complete.Creating Applications with Maven The dependencies element contained within the dependencyManagement element is used only to state the preference for a version and by itself does not affect a project's dependency graph.proficio</groupId> <artifactId>proficio-model</artifactId> </dependency> </dependencies> </project> The version for this dependency is derived from the dependencyManagement element which is inherited from the Proficio top-level POM. 63 .maven.0-SNAPSHOT (stated as ${project. The dependencies stated in the dependencyManagement only come into play when a dependency is declared without a version. you will see a single dependency declaration and that it does not specify a version: <project> . If you take a look at the POM for the proficio-api module.version}) for proficio-model so that version is injected into the dependency above. The dependencyManagement declares a stated preference for the 1. whereas the top-level dependencies element does affect the dependency graph. maven. <version>1. so Maven will attempt to update them.codehaus. By default Maven will look for snapshots on a daily basis. If you look at the top-level POM for Proficio you will see a snapshot version specified: <project> . Snapshot dependencies are assumed to be changing.apache. Controlling how snapshots work will be explained in detail in Chapter 7. A snapshot in Maven is an artifact that has been prepared using the most recent sources available. and this is where Maven's concept of a snapshot comes into play.. but you can use the -U command line option to force the search for updates. Your APIs might be undergoing some change or your implementations are undergoing change and are being fleshed out.0-SNAPSHOT</version> <dependencyManagement> <dependencies> <dependency> <groupId>org. it is usually the case that each of the modules are in flux.5.proficio</groupId> <artifactId>proficio-model</artifactId> <version>${project..apache.plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1..Better Builds with Maven 3.version}</version> </dependency> <dependency> <groupId>org. or you may be doing some refactoring. Using Snapshots While you are developing an application with multiple modules. When you specify a non-snapshot version of a dependency Maven will download that dependency once and never attempt to retrieve it again. 64 ..0-alpha-9</version> </dependency> </dependencies> </dependencyManagement> . Your build system needs to be able to deal easily with this real-time flux.maven.proficio</groupId> <artifactId>proficio-api</artifactId> <version>${project. </project> Specifying a snapshot version for a dependency means that Maven will look for new versions of that dependency without you having to manually specify a new version.version}</version> </dependency> <dependency> <groupId>org. Resolving Dependency Conflicts and Using Version Ranges With the introduction of transitive dependencies in Maven 2. this has limitations: • The version chosen may not have all the features required by the other dependencies. the output will contain something similar to: proficio-core:1. While further dependency management features are scheduled for the next release of Maven at the time of writing. see section 6. local scope test wins) proficio-api:1. it became possible to simplify a POM by including only the dependencies you need directly. you can remove the incorrect version from the tree. as the graph grows. or you can override both with the correct version.0.1 (selected for compile) 65 . to compile.1 (selected for test) plexus-container-default:1.0-SNAPSHOT junit:3.6.Creating Applications with Maven 3.0-SNAPSHOT (selected for compile) plexus-utils:1. For example. A dependency in the POM being built will be used over anything else. In Maven. • If multiple versions are selected at the same depth.8.0. and allowing Maven to calculate the full dependency graph.that is.0-SNAPSHOT (selected for compile) proficio-model:1.9 in Chapter 6). there are ways to manually resolve these conflicts as the end user of a dependency. In this case. then the result is undefined..4 (selected for compile) classworlds:1.1 (not setting. To manually resolve conflicts. and more importantly ways to avoid it as the author of a reusable library. Maven selects the version that requires the least number of dependencies to be traversed.0-alpha-9 (selected for compile) plexus-utils:1. However..1-alpha-2 (selected for compile) junit:3. it is inevitable that two or more artifacts will require different versions of a particular dependency. However. if you run mvn -X test on the proficio-core module. Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this.8. the version selected is the one declared “nearest” to the top of the tree . Maven must choose which version to provide. in this situation. In this example.1 version is used instead.plexus</groupId> <artifactId>plexus-utils</artifactId> </exclusion> </exclusions> </dependency> .1</version> <scope>runtime</scope> </dependency> </dependencies> . You'll notice that the runtime scope is used here.xml file as follows: . not for compilation. the dependency is used only for packaging. modify the plexus-container-default dependency in the proficio-core/pom. so that the 1. which will accumulate if this project is reused as a dependency itself. The alternate way to ensure that a particular version of a dependency is used.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>1. plexus-utils occurs twice. as follows: .. for a library or framework. if the dependency were required for compilation.codehaus. and Proficio requires version 1.codehaus. The reason for this is that it distorts the true dependency graph. 66 .codehaus.0-alpha-9</version> <exclusions> <exclusion> <groupId>org.0. for stability it would always be declared in the current POM as a dependency .. that will be used widely by others.plexus</groupId> <artifactId>plexus-container-default</artifactId> <version>1. use version ranges instead. is to include it directly in the POM. Neither of these solutions is ideal. This ensures that Maven ignores the 1. but it is possible to improve the quality of your own dependencies to reduce the risk of these issues occurring with your own build artifacts. you can exclude the dependency from the graph by adding an exclusion to the dependency that introduced it... However.4 version of plexus-utils in the dependency graph. To ensure this. This is extremely important if you are publishing a build. In fact.. a WAR file). To accomplish this.. <dependencies> <dependency> <groupId>org.Better Builds with Maven Once the path to the version has been identified. This is because..regardless of whether another dependency introduces it. <dependency> <groupId>org. this approach is not recommended unless you are producing an artifact that is bundling its dependencies and is not used as a dependency itself (for example..1 be used. codehaus. the build will fail. Finally. it is possible to make the dependency mechanism more reliable for your builds and to reduce the number of exception cases that will be required. it is necessary to understand how versions are compared.0. so in the case of a conflict with another dependency.(1. this indicates that the preferred version of the dependency is 1.) (.1. However. while the nearest dependency technique will still be used in the case of a conflict. To understand how version ranges work. which is greater than or equal to 1. and table 3-2 shows some of the values that can be used. if none of them match.2.)</version> </dependency> What this means is that. the version that is used must fit the range given.1. the version you are left with is [1.1. but that other versions may be acceptable. Table 3-2: Examples of Version Ranges Range Meaning (. The notation used above is set notation. In figure 3-1.3 (inclusive) Greater than or equal to 1.0) [1.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>[1.3] [1.Creating Applications with Maven When a version is declared as 1.1. if two version ranges in a dependency graph do not intersect at all. you need to avoid being overly specific as well. If the nearest version does not match. the dependency should be specified as follows: <dependency> <groupId>org.1.2.) Less than or equal to 1. but less than 2.1. or there were no conflicts originally.0 Between 1.1 By being more specific through the use of version ranges. except 1. as shown above for plexus-utils.).0] [1.1. Figure 3-3: Version parsing 67 .0 Greater than or equal to 1. you can see how a version is partitioned by Maven. Maven assumes that all versions are valid and uses the “nearest dependency” technique described previously to determine which version to use. and so on. However. For instance. then the next nearest will be tested. This means that the latest version.1.1.1.5. Maven has no knowledge regarding which versions will work. In this case. will be retrieved from the repository.1). you may require a feature that was introduced in plexus-utils version 1.5 Any version.2 and 1.0.> <!-.Creating Applications with Maven If you take a look at the POM for the proficio-cli module you will see the following profile definitions: <project> . . <!-.. including simple file-based deployment.jar file only. 3. you’ll want to share it with as many people as possible! So. and external SSH deployment. while the XStream-based store contains the proficio-store-xstream-1. 3. </project> 74 . SSH2 deployment. Deploying your Application Now that you have an application assembly. it is now time to deploy your application assembly.9. If you wanted to create the assembly using the memory-based store. SFTP deployment. you need to correctly configure your distributionManagement element in your POM.Better Builds with Maven You can see there are two profiles: one with an id of memory and another with an id of xstream. Currently Maven supports several methods of deployment.. In each of these profiles you are configuring the assembly plugin to point at the assembly descriptor that will create a tailored assembly. you will see that the memory-based assembly contains the proficiostore-memory-1.. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>{basedir}/target/deploy</url> </repository> </distributionManagement> . you would execute the following: mvn -Dxstream clean assembly:assembly Both of the assemblies are created in the target directory and if you use the jar tvf command on the resulting assemblies. but it illustrates how you can customize the execution of the life cycle using profiles to suit any requirement you might have. FTP deployment. so it might be useful to run mvn install at the top level of the project to ensure that needed components are installed into the local repository. you would execute the following: mvn -Dmemory clean assembly:assembly If you wanted to create the assembly using the XStream-based store. You will also notice that the profiles are activated using a system property.0-SNAPSHOT. It should be noted that the examples below depend on other parts of the build having been executed beforehand. In order to deploy. Here are some examples of how to configure your POM via the various deployment mechanisms..0-SNAPSHOT.9.1.. Deploying to the File System To deploy to the file system you would use something like the following: <project> .jar file only. so that all child POMs can inherit this information. This is a very simple example. which would typically be your top-level POM. </project> 75 . Deploying with SSH2 To deploy to an SSH2 server you would use something like the following: <project> .2..Creating Applications with Maven 3...9.yourcompany.yourcompany. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>scp://sshserver..3.. Deploying with SFTP To deploy to an SFTP server you would use something like the following: <project> .com/deploy</url> </repository> </distributionManagement> .com/deploy</url> </repository> </distributionManagement> . </project> 3....9. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>s. which does the work of moving your files to the remote server. but to use an external SSH command to deploy you must configure not only the distributionManagement element..Better Builds with Maven 3. <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>scpexe://sshserver.com/deploy</url> </repository> </distributionManagement> <build> <extensions> <extension> <groupId>org.wagon</groupId> <artifactId>wagon-ssh-external</artifactId> <version>1. Wagon is the general purpose transport mechanism used throughout Maven. <project> .yourcompany. 76 . </project> The build extension specifies the use of the Wagon external SSH provider.apache.. the first three methods illustrated are included with Maven. Deploying with an External SSH Now.maven..9..0-alpha-6</version> </extension> </extensions> </build> .4. but also a build extension. so only the distributionManagement element is required. and you are ready to initiate deployment..Creating Applications with Maven 3.com/deploy</url> </repository> </distributionManagement> <build> <extensions> <extension> <groupId>org. Deploying with FTP To deploy with FTP you must also specify a build extension. </project> Once you have configured your POM accordingly.0-alpha-6</version> </extension> </extensions> </build> .9.apache.maven.wagon</groupId> <artifactId>wagon-ftp</artifactId> <version>1..5.yourcompany.. simply execute the following command: mvn deploy 77 .. To deploy to an FTP server you would use something like the following: <project> . <distributionManagement> <repository> <id>proficio-repository</id> <name>Proficio Repository</name> <url>. 78 . If you take a look. it is time for you to see how to create a standard web site for an application. Maven supports a number of different documentation formats to accommodate various needs and preferences. Creating a Web Site for your Application Now that you have walked though the process of building.Better Builds with Maven 3. there is a subdirectory for each of the supported documentation formats that you are using for your site and the very important site descriptor. testing and deploying Proficio. it is recommended that you create a source directory at the top-level of the directory structure to store the resources that are used to generate the web site. Within the src/site directory. For applications like Proficio. you will see that we have something similarly the following: Figure 3-4: The site directory structure Everything that you need to generate the Web site resides within the src/site directory.10. A simple XML format for managing FAQs. A full reference of the APT Format is available. We will look at a few of the more well-supported formats later in the chapter. • The APT format (Almost Plain Text). which is a simple XML format used widely at Apache. which is a wiki-like format that allows you to write simple. • The Confluence format. Maven also has limited support for: • The Twiki format. which is a popular Wiki markup format. the most well supported formats available are: • The XDOC format. structured documents (like this) very quickly.Creating Applications with Maven Currently. which is a less complex version of the full DocBook format. • The FML format. Simple format. which is the FAQ format. which is another popular Wiki markup format. • The DocBook format.. WAR. Web Services) Setting up in-place Web development Deploying J2EE archives to a container Automating container start/stop Keep your face to the sun and you will never see the shadows. EAR. .4.Helen Keller 85 . you’ll learn how to automate configuration and deployment of J2EE application servers. EJBs. Figure 4-1: Architecture of the DayTrader application 86 . Through this example. Its goal is to serve as both a functional example of a full-stack J2EE 1. The functional goal of the DayTrader application is to buy and sell stock. Web services. As a consequence the Maven community has developed plugins to cover every aspect of building J2EE applications. Introducing the DayTrader Application DayTrader is a real world application developed by IBM and then donated to the Apache Geronimo project. This chapter demonstrates how to use Maven on a real application to show how to address the complex issues related to automated builds. You'll learn not only how to create a J2EE build but also how to create a productive development environment (especially for Web application development) and how to deploy J2EE modules into your container.4 application and as a test bed for running performance tests.Better Builds with Maven 4. This chapter will take you through the journey of creating the build for a full-fledged J2EE application called DayTrader. 4.2.1. Whether you are using the full J2EE stack with EJBs or only using Web applications with frameworks such as Spring or Hibernate. and its architecture is shown in Figure 4-1. you’ll learn how to build EARs. it's likely that you are using J2EE in some of your projects. As importantly. and Web applications. Introduction J2EE (or Java EE as it is now called) applications are everywhere. logout. 3. The easy answer is to follow Maven’s artifact guideline: one module = one main artifact. 2. using Web services. • A module producing a JAR that will contain the Quote Streamer client application. In addition you may need another module producing an EAR which will contain the EJB and WAR produced from the other modules. Asynchronously the order that was placed on the queue is processed and the purchase completed. 4. get a stock quote. • • • A typical “buy stock” use case consists of the following steps that were shown in Figure 4-1: 1. This EAR will be used to easily deploy the server code into a J2EE container. and using the Quote Streamer. and Message-Driven Beans (MDB) to send purchase orders and get quote changes. Looking again at Figure 4-1. A new “open” order is saved in the database using the CMP Entity Beans. • A module producing a WAR which will contain the Web application. It uses container-managed persistence (CMP) entity beans for storing the business objects (Order.Building J2EE Applications There are 4 layers in the architecture: • The Client layer offers 3 ways to access the application: using a browser.Quote and AccountProfile). The EJB layer is where the business logic is. Account. This request is handled by the Trade Session bean. Once this happens the Trade Broker MDB is notified 6. The user gives a buy order (by using the Web client or the Web services client).3. 5. Organizing the DayTrader Directory Structure The first step to organizing the directory structure is deciding what build modules are required. 87 . It uses servlets and JSPs. The user is notified of the completed order on a subsequent request. The order is then queued for processing in the JMS Message Server. cancel an order. • A module producing another JAR that will contain the Web services client application. and a JMS Server for interacting with the outside world. The Trade Session is a stateless session bean that offers the business services such as login. The Quote Streamer is a Swing GUI application that monitors quote information about stocks in real-time as the price changes. buy or sell a stock. The Trade Broker calls the Trade Session bean which in turn calls the CMP entity beans to mark the order as “completed". The Data layer consists of a database used for storing the business objects and the status of each purchase. The creation of the “open” order is confirmed for the user. 4. and so on. The Web layer offers a view of the application for both the Web client and the Web services client. you can see that the following modules will be needed: • A module producing an EJB which will contain all of the server-side EJBs. Thus you simply need to figure out what artifacts you need. Holding. the module containing the Web services client application ear .the module containing the EJBs web . The next step is to give these modules names and map them to a directory structure. This file also contains the list of modules that Maven will build when executed from this directory (see the Chapter 3.. Creating Applications with Maven..] <modules> <module>ejb</module> <module>web</module> <module>streamer</module> <module>wsappclient</module> <module>ear</module> </modules> [.. it is usually easier to choose names that represent a technology instead.the module containing the Web application streamer . On the other hand. if you needed to physically locate the WARs in separate servlet containers to distribute the load. Figure 4-2 shows these modules in a flat directory structure. It is possible to come up with more. Best practices suggest to do this only when the need arises.] 88 . it is important to split the modules when it is appropriate for flexibility.Better Builds with Maven Note that this is the minimal number of modules required.. you may want to split the WAR module into 2 WAR modules: one for the browser client and one for the Web services client. It is flat because you're locating all the modules in the same directory. Figure 4-2: Module names and a simple flat directory structure The top-level daytrader/pom. For example.xml file contains the POM elements that are shared between all of the modules.the module producing the EAR which packages the EJBs and the Web application There are two possible layouts that you can use to organize these modules: a flat directory structure and a nested one. For example. for more details): [. However. For the DayTrader application the following names were chosen: • • • • • ejb . it is better to find functional names for modules. Let's discuss the pros and cons of each layout. If there isn't a strong need you may find that managing several modules can be more cumbersome than useful. As a general rule.the module containing the client side streamer application wsappclient . Figure 4-4: Nested directory structure for the EAR. For example. as shown in Figure 4-4. EJB and Web modules 89 .Building J2EE Applications This is the easiest and most flexible structure to use. Having this nested structure clearly shows how nested modules are linked to their parent. Note that in this case the modules are still separate. if you have many modules in the same directory you may consider finding commonalities between them and create subdirectories to partition them. However. not nested within each other. you might separate the client side modules from the server side modules in the way shown in Figure 4-3. In this case. Figure 4-3: Modules split according to a server-side vs client-side directory organization As before.xml file containing the shared POM elements and the list of modules underneath. and is the structure used in this chapter. The other alternative is to use a nested directory structure. the ejb and web modules are nested in the ear module. This makes sense as the EAR artifact is composed of the EJB and WAR artifacts produced by the ejb and web modules. each directory level containing several modules contains a pom. 0\daytrad er-1. The modules we will work with from here on will each be referring to the parent pom. the nested strategy doesn’t fit very well with the Assembler role as described in the J2EE specification.. it has several drawbacks: • Eclipse users will have issues with this structure as Eclipse doesn’t yet support nested projects. For example.. These examples show that there are times when there is not a clear parent for a module. A flat layout is more neutral with regard to assembly and should thus be preferred. starting with the wsappclient module after we take care of one more matter of business. but by some client-side application. you're going to create the Maven build for each module.0. the ejb or web modules might depend on a utility JAR and this JAR may be also required for some other EAR.Better Builds with Maven However. [INFO] --------------------------------------------------------------------[INFO] Building DayTrader :: Performance Benchmark Sample [INFO] task-segment: [install] [INFO] --------------------------------------------------------------------[INFO] [site:attach-descriptor] [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\pom. We are now ready to continue on with developing the sub-projects! 90 .m2\repository\org\apache\geronimo\samples\daytrader\daytrader. etc. EJB project. Now that you have decided on the directory structure for the DayTrader application. • It doesn’t allow flexible packaging. The Assembler has a pool of modules and its role is to package those modules for deployment.xml to C:\[. so before we move on to developing these sub-projects we need to install the parent POM into our local repository so it can be further built on. For example.]\.xml of the project. In those cases using a nested directory structure should be avoided. In addition. EAR project). but then you’ll be restricted in several ways.. even though the nested directory structure seems to work quite well here.pom. the three modules wouldn’t be able to have different natures (Web application project.. You’d need to consider the three modules as one project. Or the ejb module might be producing a client EJB JAR which is not used by the EAR. Depending on the target deployment environment the Assembler may package things differently: one EAR for one environment or two EARs for another environment where a different set of machines are used.. and Maven's ability to integrate toolkits can make them easier to add to the build process. and this will be used from DayTrader’s wsappclient module.org/axis/java/).apache.html#WSDL2JavaBuildingStubsSkeletonsAndDataTypesFromWSDL. the plugin uses the Axis framework (. which is the default used by the Axis Tools plugin: Figure 4-5: Directory structure of the wsappclient module 91 . As the name suggests. Figure 4-5 shows the directory structure of the wsappclient module. As you may notice.4. For example.apache.Building J2EE Applications 4. We start our building process off by visiting the Web services portion of the build since it is a dependency of later build stages. the Maven plugin called Axis Tools plugin takes WSDL files and generates the Java files needed to interact with the Web services it defines. see. the WSDL files are in src/main/wsdl. Building a Web Services Client Project Web Services are a part of many J2EE applications. While you might expect the Axis Tools plugin to define this for you.xml file must declare and configure the Axis Tools plugin: <project> [. and more importantly.] <plugin> <groupId>org.] In order to generate the Java source files from the TradeServices.. 92 .xml.] <build> <plugins> [... it is required for two reasons: it allows you to control what version of the dependency to use regardless of what the Axis Tools plugin was built against.] <plugin> <groupId>org.. This is because after the sources are generated.codehaus..Better Builds with Maven The location of WSDL source can be customized using the sourceDirectory property. the wsappclient/pom... it allows users of your project to automatically get the dependency transitively.mojo</groupId> <artifactId>axistools-maven-plugin</artifactId> <configuration> <sourceDirectory> src/main/resources/META-INF/wsdl </sourceDirectory> </configuration> [.wsdl file. For example: [.. any tools that report on the POM will be able to recognize the dependency. Similarly. it would fail. At this point if you were to execute the build. you will require a dependency on Axis and Axis JAXRPC in your pom. Building J2EE Applications As before.2</version> <scope>provided</scope> </dependency> <dependency> <groupId>org. you need to add the J2EE specifications JAR to compile the project's Java sources. Run mvn install and Maven will fail and print the installation instructions.specs</groupId> <artifactId>geronimo-j2ee_1.geronimo.apache. Thus.4_spec</artifactId> <version>1. they are not present on ibiblio and you'll need to install them manually.0</version> <scope>provided</scope> </dependency> </dependencies> The Axis JAR depends on the Mail and Activation Sun JARs which cannot be redistributed by Maven. Thus add the following three dependencies to your POM: <dependencies> <dependency> <groupId>axis</groupId> <artifactId>axis</artifactId> <version>1. 93 .2</version> <scope>provided</scope> </dependency> <dependency> <groupId>axis</groupId> <artifactId>axis-jaxrpc</artifactId> <version>1. The generated WSDL file could then be injected into the Web Services client module to generate client-side Java files.. [INFO] [compiler:testCompile] [INFO] No sources to compile [INFO] [surefire:test] [INFO] No tests to run. [INFO] [compiler:compile] Compiling 13 source files to C:\dev\m2book\code\j2ee\daytrader\wsappclient\target\classes [INFO] [resources:testResources] [INFO] Using default encoding to copy filtered resources. But that's another story...jar to C:\[. lets visit EJBs next. The Axis Tools reference documentation can be found at. Now that we have discussed and built the Web services portion.Better Builds with Maven After manually installing Mail and Activation.0.] Note that the daytrader-wsappclient JAR now includes the class files compiled from the generated source files...]\.] [INFO] [axistools:wsdl2java {execution: default}] [INFO] about to add compile source root [INFO] processing wsdl: C:\dev\m2book\code\j2ee\daytrader\wsappclient\ src\main\wsdl\TradeServices. in addition to the sources from the standard source directory.codehaus. [INFO] [jar:jar] [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\wsappclient\ target\daytrader-wsappclient-1.jar [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\wsappclient\ target\daytrader-wsappclient-1 [.jar [.. The Axis Tools plugin boasts several other goals including java2wsdl that is useful for generating the server-side WSDL file from handcrafted Java classes. 94 ..0\daytrader-wsappclient-1..0.0.org/axistoolsmaven-plugin/.wsdl [INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources. Any container-specific deployment descriptor should also be placed in this directory. Unit tests are tests that execute in isolation from the container.5.xml. More specifically. Figure 4-6: Directory structure for the DayTrader ejb module Figure 4-6 shows a canonical directory structure for EJB projects: • Runtime Java source code in src/main/java. the standard ejb-jar. • Runtime classpath resources in src/main/resources. Building an EJB Project Let’s create a build for the ejb module. 95 . • Unit tests in src/test/java and classpath resources for the unit tests in src/test/resources.Building J2EE Applications 4.xml deployment descriptor is in src/main/resources/META-INF/ejbjar. Tests that require the container to run are called integration tests and are covered at the end of this chapter. 0</version> </parent> <artifactId>daytrader-ejb</artifactId> <name>Apache Geronimo DayTrader EJB Module</name> <packaging>ejb</packaging> <description>DayTrader EJBs</description> <dependencies> <dependency> <groupId>org.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <version>1.Better Builds with Maven Now. This is because the DayTrader build is a multi-module build and you are gathering common POM elements in a parent daytrader/pom.0</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.xml file: <project> <modelVersion>4.3</version> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.4_spec</artifactId> <version>1.0</modelVersion> <parent> <groupId>org.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.geronimo.maven.specs</groupId> <artifactId>geronimo-j2ee_1. 96 .geronimo.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <configuration> <generateClient>true</generateClient> <clientExcludes> <clientExclude>**/ejb/*Bean.0.apache.apache.class</clientExclude> </clientExcludes> </configuration> </plugin> </plugins> </build> </project> As you can see. take a look at the content of this project’s pom.geronimo. you're extending a parent POM using the parent element.apache.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1. If you look through all the dependencies you should see that we are ready to continue with building and installing this portion of the build.xml file.apache.samples.0.samples. You could instead specify a dependency on Sun’s J2EE JAR. Even though this dependency is provided at runtime.class</clientExclude> </clientExcludes> </configuration> </plugin> The EJB plugin has a default set of files to exclude from the client EJB JAR: **/*Bean.Building J2EE Applications The ejb/pom. You should note that you're using a provided scope instead of the default compile scope. However. the pom.class. **/*Session. The reason is that this dependency will already be present in the environment (being the J2EE application server) where your EJB will execute. Fortunately. this JAR is not redistributable and as such cannot. 97 . You make this clear to Maven by using the provided scope. This is done by specifying: <packaging>ejb</packaging> • As you’re compiling J2EE code you need to have the J2EE specifications JAR in the project’s build classpath. it still needs to be listed in the POM so that the code can be compiled. so you must explicitly tell it to do so: <plugin> <groupId>org.maven.class and **/package. the Geronimo project has made the J2EE JAR available under an Apache license and this JAR can be found on ibiblio. The Client will be used in a later examples when building the web module. • Lastly.apache.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <configuration> <generateClient>true</generateClient> <clientExcludes> <clientExclude>**/ejb/*Bean. By default the EJB plugin does not generate the client JAR. This is achieved by specifying a dependency element on the J2EE JAR. this prevents the EAR module from including the J2EE JAR when it is packaged. **/*CMP.html.class.xml contains a configuration to tell the Maven EJB plugin to generate a Client EJB JAR file when mvn install is called. [INFO] ----------------------------------------------------------[INFO] Building DayTrader :: EJBs [INFO] task-segment: [install] [INFO] ----------------------------------------------------------[INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources.jar to C:\[.jar to C:\[.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ejb\1. ..0-client.0.geronimo.02 sec Results : [surefire] Tests run: 1. Note that it's also possible to specify a list of files to include using clientInclude elements.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ejb\1.class).0 [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.class pattern and which need to be present in the generated client EJB JAR.0-client..0.. Time elapsed: 0. Errors: 0 [INFO] [ejb:ejb] [INFO] Building ejb daytrader-ejb-1.0-client [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.jar Maven has created both the EJB JAR and the client EJB JAR and installed them in your local Maven repository. 98 . You’re now ready to execute the build. Failures: 0. Errors: 0..]\.apache..jar [INFO] Building ejb client daytrader-ejb-1. Relax and type mvn install: C:\dev\m2book\code\j2ee\daytrader\ejb>mvn install [INFO] Scanning for projects.0\daytrader-ejb-1. you need to override the defaults using a clientExclude element because it happens that there are some required non-EJB files matching the default **/*Bean. Failures: 0.jar [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.0\daytrader-ejb-1.samples..0-client. Thus you're specifying a pattern that only excludes from the generated client EJB JAR all EJB implementation classes located in the ejb package (**/ejb/*Bean.jar [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\ target\daytrader-ejb-1.]\.0.FinancialUtilsTest [surefire] Tests run: 1.daytrader.Better Builds with Maven In this example. apache. the EJB3 specification is still not final.Building J2EE Applications The EJB plugin has several other configuration elements that you can use to suit your exact needs. There is a working prototype of an EJB3 Maven plugin. Early adopters of EJB3 may be interested to know how Maven supports EJB3. however in the future it will be added to the main EJB plugin after the specification is finalized.org/plugins/maven-ejb-plugin/. Please refer to the EJB plugin documentation on. Stay tuned! 99 . At the time of writing. TradeBean" * @ejb.TradeHome" * @ejb.samples.interface-method * view-type="remote" * @ejb. Exception […] 100 . the Remote and Local interfaces. When writing EJBs it means you simply have to write your EJB implementation class and XDoclet will generate the Home interface.transaction * type="RequiresNew" *[…] */ public void queueOrderOnePhase(Integer orderID) throws javax. you can run the XDoclet processor to generate those files for you.jms.geronimo.bean * display-name="TradeEJB" * name="TradeEJB" * view-type="remote" * impl-class-name= * "org.geronimo.home * generate="remote" * remote-class= * "org.samples.geronimo. Using XDoclet is easy: by adding Javadoc annotations to your classes. Building an EJB Module With Xdoclet If you’ve been developing a lot of EJBs (version 1 and 2) you have probably used XDoclet to generate all of the EJB interfaces and deployment descriptors for you.6.xml descriptor. Note that if you’re an EJB3 user.ejb.ejb.daytrader.daytrader. the container-specific deployment descriptors.apache.ejb.JMSException.Trade" * […] */ public class TradeBean implements SessionBean { […] /** * Queue the Order identified by orderID to be processed in a * One Phase commit […] * * @ejb. and the ejb-jar.samples.apache.Better Builds with Maven 4. you can safely skip this section – you won’t need it! Here’s an extract of the TradeBean session EJB using Xdoclet: /** * Trade Session EJB manages all Trading services * * @ejb.apache.interface * generate="remote" * remote-class= * "org.daytrader. the project’s directory structure is the same as in Figure 4-6.1" destDir= "${project.build. This is achieved by using the Maven XDoclet plugin and binding it to the generate-sources life cycle phase.outputDirectory}/META-INF"/> </ejbdoclet> </tasks> </configuration> </execution> </executions> </plugin> 101 .directory}/generated-sources/xdoclet"> <fileset dir="${project. As you can see in Figure 4-7.java"></include> </fileset> <homeinterface/> <remoteinterface/> <localhomeinterface/> <localinterface/> <deploymentdescriptor destDir="${project..xml file anymore as it’s going to be generated by Xdoclet. Since XDoclet generates source files. Now you need to tell Maven to run XDoclet on your project.build.codehaus.build.java"></include> <include name="**/*MDB.Building J2EE Applications To demonstrate XDoclet. this has to be run before the compilation phase occurs.sourceDirectory}"> <include name="**/*Bean. Local and Remote interfaces as they’ll also get generated.java classes and remove all of the Home. Here’s the portion of the pom. Figure 4-7: Directory structure for the DayTrader ejb module when using Xdoclet The other difference is that you only need to keep the *Bean. create a copy of the DayTrader ejb module called ejb-xdoclet. AccountBean'. 102 .TradeBean'.geronimo.XDocletMain start INFO: Running <deploymentdescriptor/> Generating EJB deployment descriptor (ejb-jar. The plugin generates sources by default in ${project.sourceforge.0 […] You might also want to try XDoclet2. It also tells Maven that this directory contains sources that will need to be compiled when the compile phase executes. In addition. In practice you can use any XDoclet task (or more generally any Ant task) within the tasks element.apache.geronimo..ejb. 2006 16:53:50 xdoclet. It’s based on a new architecture but the tag syntax is backwardcompatible in most cases.ejb.XDocletMain start INFO: Running <homeinterface/> Generating Home interface for 'org. […] 10 janv.codehaus. but here the need is to use the ejbdoclet task to instrument the EJB class files.daytrader. it should be noted that XDoclet2 is a work in progress and is not yet fully mature. […] [INFO] [ejb:ejb] [INFO] Building ejb daytrader-ejb-1. […] 10 janv.geronimo.AccountBean'.ejb.daytrader. This is required by Maven to bind the xdoclet goal to a phase. 2006 16:53:51 xdoclet.samples. in the tasks element you use the ejbdoclet Ant task provided by the XDoclet project (for reference documentation see. […] 10 janv.org/Maven2+Plugin.directory}/generated-sources/xdoclet (you can configure this using the generatedSourcesDirectory configuration element).ejb.samples. […] INFO: Running <remoteinterface/> Generating Remote interface for 'org.daytrader.xml). nor does it boast all the plugins that XDoclet1 has. the XDoclet plugin will also trigger Maven to download the XDoclet libraries from Maven’s remote repository and add them to the execution classpath.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask.samples.build.apache.Better Builds with Maven The XDoclet plugin is configured within an execution element.apache.geronimo.TradeBean'. 2006 16:53:50 xdoclet.daytrader.XDocletMain start INFO: Running <localhomeinterface/> Generating Local Home interface for 'org. 2006 16:53:51 xdoclet.apache.html). Finally. However.XDocletMain start INFO: Running <localinterface/> Generating Local interface for 'org.samples. There’s also a Maven 2 plugin for XDoclet2 at. log</log> [. you will need to have Maven start the container automatically. Netbeans. in the Testing J2EE Applications section of this chapter. In this example.Building J2EE Applications 4.net/ sourceforge/jboss/jboss-4.x (containerId element) and that you want Cargo to download the JBoss 4. 103 .] See. you can use the log element to specify a file where Cargo logs will go and you can also use the output element to specify a file where the container's output will be dumped. Later. configuring them and deploying modules to them. Cargo is a framework for manipulating containers. It offers generic APIs (Java. you will learn how to deploy it.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>jboss4x</containerId> <zipUrlInstaller> <url>. you will also learn how to test it automatically. the JBoss container will be used.sourceforge.log</output> <log>${project.. Maven 2.build. Maven 1.. stopping. In order to build this project you need to create a Profile where you define the ${installDir} property's value. IntelliJ IDEA.codehaus.dl. Let's discover how you can automatically start a container and deploy your EJBs into it.0.build. For example: <container> <containerId>jboss4x</containerId> <output>${project.] <plugin> <groupId>org.xml file and add the following Cargo plugin configuration: <build> <plugins> [. In the container element you tell the Cargo plugin that you want to use JBoss 4. To do so you're going to use the Maven plugin for Cargo.. Deploying EJBs Now that you know how to build an EJB project.7.0.directory}/cargo.) for performing various actions on containers such as starting.2 distribution from the specified URL and install it in ${installDir}..codehaus.directory}/jboss4x. The location where Cargo should install JBoss is a user-dependent choice and this is why the ${installDir} property was introduced.zip</url> <installDir>${installDir}</installDir> </zipUrlInstaller> </container> </configuration> </plugin> </plugins> </build> If you want to debug Cargo's execution. Ant.org/Debugging for full details. Edit the ejb/pom. First. etc.2. [INFO] ----------------------------------------------------------------------[INFO] Building DayTrader :: EJBs [INFO] task-segment: [cargo:start] [INFO] ----------------------------------------------------------------------[INFO] [cargo:start] [INFO] [talledLocalContainer] Parsed JBoss version = [4. [INFO] [talledLocalContainer] JBoss 4. That's it! JBoss is running. For example: <home>c:/apps/jboss-4. you can define a profile in the POM.xml file. activated by default and in which the ${installDir} property points to c:/apps/cargo-installs. In that case replace the zipURLInstaller element with a home element.2 starting.2] [INFO] [talledLocalContainer] JBoss.2 started on port [8080] [INFO] Press Ctrl-C to stop the container. or in a settings.xml file. the EJB JAR should first be created.. 104 .xml file defines a profile named vmassol. in a settings.0.xml file. so run mvn package to generate it. The Cargo plugin does all the work: it provides a default JBoss configuration (using port 8080 for example). Thus the best place is to create a profiles.. in a profiles..Better Builds with Maven As explained in Chapter 3. It's also possible to tell Cargo that you already have JBoss installed locally.2</home> That's all you need to have a working build and to deploy the EJB JAR into JBoss. [INFO] Searching repository for plugin with prefix: 'cargo'. it detects that the Maven project is producing an EJB from the packaging element and it automatically deploys it when the container is started. Of course. In this case. Nor should the content be shared with other Maven projects at large.. as the content of the Profile is user-dependent you wouldn't want to define it in the POM. and the EJB JAR has been deployed..0.0.0. JSPs. If the container was already started and you wanted to just deploy the EJB. Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources 105 . and more. Cargo has many other configuration options such as the possibility of using an existing container installation. modifying various container parameters. you would run the cargo:deploy goal. except that there is an additional src/main/webapp directory for locating Web application resources such as HTML pages. Building a Web Application Project Now. Subsequent calls will be fast as Cargo will not download JBoss again. etc.org/Maven2+plugin.Building J2EE Applications As you have told Cargo to download and install JBoss.codehaus. let’s focus on building the DayTrader web module.8. The layout is the same as for a JAR module (see the first two chapters of this book). to stop the container call mvn cargo:stop. WEB-INF configuration files. Finally. the first time you execute cargo:start it will take some time. Check the documentation at. (see Figure 4-8). 4. especially if you are on a slow connection. deploying on a remote machine. geronimo. 106 .apache. but it’s not necessary and would increase the size of the WAR file.4_spec</artifactId> <version>1.0</version> <scope>provided</scope> </dependency> </dependencies> </project> You start by telling Maven that it’s building a project generating a WAR: <packaging>war</packaging> Next. for example to prevent coupling.Better Builds with Maven As usual everything is specified in the pom.apache.specs</groupId> <artifactId>geronimo-j2ee_1.0</version> </parent> <artifactId>daytrader-web</artifactId> <name>DayTrader :: Web Application</name> <packaging>war</packaging> <description>DayTrader Web</description> <dependencies> <dependency> <groupId>org.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1.xml file: <project> <modelVersion>4. It’s always cleaner to depend on the minimum set of required classes.geronimo. This is because the servlets are a client of the EJBs. The reason you are building this web module after the ejb module is because the web module's servlets call the EJBs. This is why you told the EJB plugin to generate a client JAR earlier on in ejb/pom.geronimo.apache.0</modelVersion> <parent> <groupId>org.geronimo.samples.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1.xml. Therefore. the servlets only need the EJB client JAR in their classpath to be able to call the EJBs.xml: <dependency> <groupId>org.samples.0</version> <type>ejb-client</type> </dependency> <dependency> <groupId>org. Therefore.apache.0. Depending on the main EJB JAR would also work. you need to add a dependency on the ejb module in web/pom.samples.0</version> <type>ejb-client</type> </dependency> Note that you’re specifying a type of ejb-client and not ejb.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1. you specify the required dependencies. security.properties.xml file will be applied first. that you have custom plugins that do all sorts of transformations to Web application resource files.jetty. isn’t it? What happened is that the Jetty6 plugin realized the page was changed and it redeployed the Web application automatically. By default the plugin uses the module’s artifactId from the POM.jetty</groupId> <artifactId>maven-jetty6-plugin</artifactId> <configuration> […] <connectors> <connector implementation= "org. you would use: <plugin> <groupId>org. For a reference of all configuration options see the Jetty6 plugin documentation at. 112 .mortbay.org/jetty6/mavenplugin/index.mortbay.html. There are various configuration parameters available for the Jetty6 plugin such as the ability to define Connectors and Security realms.xml configuration file using the jettyConfig configuration element.jetty. possibly generating some files. The Jetty container automatically recompiled the JSP when the page was refreshed. Fortunately there’s a solution.mortbay. The strategy above would not work as the Jetty6 plugin would not know about the custom actions that need to be executed to generate a valid Web application.SelectChannelConnector"> <port>9090</port> <maxIdleTime>60000</maxIdleTime> </connector> </connectors> <userRealms> <userRealm implementation= "org. and so on.properties</config> </userRealm> </userRealms> </configuration> </plugin> You can also configure the context under which your Web application is deployed by using the contextPath configuration element. It's also possible to pass in a jetty.nio. In that case anything in the jetty.Better Builds with Maven That’s nifty. Now imagine that you have an awfully complex Web application generation process.HashUserRealm"> <name>Test Realm</name> <config>etc/realm. For example if you wanted to run Jetty on port 9090 with a user realm defined in etc/realm. log. Then the plugin deploys the WAR file to the Jetty server and it performs hot redeployments whenever the WAR is rebuilt (by calling mvn package from another window. To demonstrate.log ..xml file is modified.Slf4jLog [INFO] Context path = /daytrader-web 2214 [main] INFO org. The plugin then watches the following files: WEB-INF/lib. 113 ..0.0 [INFO] Assembling webapp daytrader-web in C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.mortbay.. for example) or when the pom.log .0.slf4j. execute mvn jetty6:run-exploded goal on the web module: C:\dev\m2book\code\j2ee\daytrader\web>mvn jetty6:run-exploded [.] [INFO] [war:war] [INFO] Exploding webapp. WEB-INF/classes.mortbay.mortbay. [INFO] Scan complete at Wed Feb 15 11:59:00 CET 2006 [INFO] Starting scanner at interval of 10 seconds.0. [INFO] Copy webapp resources to C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.war [INFO] Building war: C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web1.xml.. The Jetty6 plugin also contains two goals that can be used in this situation: • jetty6:run-war: The plugin first runs the package phase which generates the WAR file..0.Started SelectChannelConnector @ 0.war [INFO] [jetty6:run-exploded] [INFO] Configuring Jetty for project: DayTrader :: Web Application [INFO] Starting Jetty Server . 0 [main] INFO org.0:8080 [INFO] Scanning .Logging to org. any change to those files results in a hot redeployment.0 [INFO] Generating war C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web1...Building J2EE Applications The WAR plugin has an exploded goal which produces an expanded Web application in the target directory.SimpleLogger@78bc3b via org. • jetty6:run-exploded: The plugin runs the package phase as with the jetty6:runwar goal.impl.. Then it deploys the unpacked Web application located in target/ (whereas the jetty6:run-war goal deploys the WAR file). Calling this goal ensures that the generated Web application is the correct one. WEB-INF/web.xml and pom. .. so now the focus will be on deploying a packaged WAR to your target container.xml file and add the Cargo configuration: <plugin> <groupId>org.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>${containerId}</containerId> <zipUrlInstaller> <url>${url}</url> <installDir>${installDir}</installDir> </zipUrlInstaller> </container> <configuration> <properties> <cargo. Deploying Web Applications You have already seen how to deploy a Web application for in-place Web development in the previous section. Restarting webapp .10.org/Containers).. Stopping webapp .. If you open another shell and run mvn package you'll see the following in the first shell's console: [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] Scan complete at Wed Feb 15 12:02:31 CET 2006 Calling scanner listeners . Scanning . Reconfiguring webapp . First...servlet.servlet. This is very useful when you're developing an application and you want to verify it works on several containers. Listeners completed. You're now ready for productive web development...codehaus. No more excuses! 4.. edit the web module's pom.Better Builds with Maven As you can see the WAR is first assembled in the target directory and the Jetty plugin is now waiting for changes to happen. This example uses the Cargo Maven plugin to deploy to any container supported by Cargo (see. Restart completed.port> </properties> </configuration> </configuration> </plugin> 114 .port>8280</cargo.codehaus.. 115 .0.apache.30. A cargo. You could add as many profiles as there are containers you want to execute your Web application on.servlet.xml file: [.port element has been introduced to show how to configure the containers to start on port 8280 instead of the default 8080 port.xml file..30/bin/ jakarta-tomcat-5. Thus.sourceforge. There are two differences though: • Two new properties have been introduced (containerId and url) in order to make this build snippet generic.0.org/dist/jakarta/tomcat-5/v5. However. add the following profiles to the web/pom. Those properties will be defined in a Profile. This is very useful if you have containers already running your machine and you don't want to interfere with them.dl.2. • As seen in the Deploying EJBs section the installDir property is user-dependent and should be defined in a profiles.0.] </build> <profiles> <profile> <id>jboss4x</id> <activation> <activeByDefault>true</activeByDefault> </activation> <properties> <containerId>jboss4x</containerId> <url> J2EE Applications As you can see this is a configuration similar to the one you have used to deploy your EJBs in the Deploying EJBs section of this chapter. the containerId and url properties should be shared for all users of the build.net/sourceforge/jboss/jboss4.zip</url> </properties> </profile> <profile> <id>tomcat5x</id> <properties> <containerId>tomcat5x</containerId> <url></url> </properties> </profile> </profiles> </project> You have defined two profiles: one for JBoss and one for Tomcat and the JBoss profile is defined as active by default (using the activation element). 30 starting.remote.0.username>${remoteUsername}</cargo..Better Builds with Maven Executing mvn install cargo:start generates the WAR... starts the JBoss container and deploys the WAR into it: C:\dev\m2book\code\j2ee\daytrader\web>mvn install cargo:start [.codehaus. once this is verified you'll want a solution to deploy your WAR into an integration platform.servlet. To deploy the DayTrader’s WAR to a running JBoss server on machine remoteserver and executing on port 80.. This is useful for development and to test that your code deploys and works.30 started on port [8280] [INFO] Press Ctrl-C to stop the container. [INFO] [talledLocalContainer] JBoss 4.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>jboss4x</containerId> <type>remote</type> </container> <configuration> <type>runtime</type> <properties> <cargo.0. [INFO] [CopyingLocalDeployer] Deploying [C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1. you would need the following Cargo plugin configuration in web/pom...port>${remotePort}</cargo.2 starting...remote..xml: <plugin> <groupId>org.war] to [C:\[.servlet.hostname> <cargo.. [.0.hostname>${remoteServer}</cargo..2 started on port [8280] [INFO] Press Ctrl-C to stop the container.password> </properties> </configuration> </configuration> </plugin> 116 .port> <cargo..remote. However.0..] [INFO] [cargo:start] [INFO] [talledLocalContainer] Parsed JBoss version = [4.username> <cargo.0. [INFO] [talledLocalContainer] Tomcat 5.0.remote. One solution is to have your container running on that integration platform and to perform a remote deployment of your WAR to it...] [INFO] [cargo:start] [INFO] [talledLocalContainer] Tomcat 5.password>${remotePassword}</cargo..]\Temp\cargo\50866\webapps].2] [INFO] [talledLocalContainer] JBoss 4. daytrader</groupId> <artifactId>daytrader</artifactId> <version>1. Check the Cargo reference documentation for all details on deployments at. It’s time to package the server module artifacts (EJB and WAR) into an EAR for convenient deployment.0</version> </parent> <artifactId>daytrader-ear</artifactId> <name>DayTrader :: Enterprise Application</name> <packaging>ear</packaging> <description>DayTrader EAR</description> 117 . Start by defining that this is an EAR project by using the packaging element: <project> <modelVersion>4.0. Building an EAR Project You have now built all the individual modules. Note that there was no need to specify a deployment URL as it is computed automatically by Cargo.xml file (see Figure 4-11).geronimo.codehaus... Figure 4-11: Directory structure of the ear module As usual the magic happens in the pom. the changes are: • A remote container and configuration type to tell Cargo that the container is remote and not under Cargo's management.xml file.xml file (or the settings. The ear module’s directory structure can't be any simpler. 4.xml file) for those user-dependent.Building J2EE Applications When compared to the configuration for a local deployment above.0</modelVersion> <parent> <groupId>org.apache.samples. All the properties introduced need to be declared inside the POM for those shared with other users and in the profiles. it solely consists of a pom.org/Deploying+to+a+running+container. • Several configuration properties (especially a user name and password allowed to deploy on the remote JBoss container) to specify all the details required to perform the remote deployment.11. 0</version> </dependency> </dependencies> Finally. you need to configure the Maven EAR plugin by giving it the information it needs to automatically generate the application.apache.samples.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <version>1. define all of the dependencies that need to be included in the generated EAR: <dependencies> <dependency> <groupId>org.geronimo. ejb-client.samples. 118 . This includes the display name to use.0</version> <type>ejb</type> </dependency> <dependency> <groupId>org.apache. It is also necessary to tell the EAR plugin which of the dependencies are Java modules.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1. Web modules. war. ejb3. rar. par.daytrader</groupId> <artifactId>daytrader-web</artifactId> <version>1. At the time of writing.apache.samples.0</version> </dependency> <dependency> <groupId>org. and EJB modules.0</version> <type>war</type> </dependency> <dependency> <groupId>org.samples. sar and wsr.geronimo. and the J2EE version to use.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <version>1.geronimo. the description to use. the EAR plugin supports the following module types: ejb.geronimo.Better Builds with Maven Next.apache.xml deployment descriptor file. jar. the contextRoot element is used for the daytrader-web module definition to tell the EAR plugin to use that context root in the generated application.geronimo.samples.xml file. By default.apache. or those with a scope of test or provided.apache.geronimo.apache. However.4</version> <modules> <javaModule> <groupId>org.samples.Building J2EE Applications By default.maven. it is often necessary to customize the inclusion of some dependencies such as shown in this example: <build> <plugins> <plugin> <groupId>org.daytrader</groupId> <artifactId>daytrader-web</artifactId> <contextRoot>/daytrader</contextRoot> </webModule> </modules> </configuration> </plugin> </plugins> </build> </project> Here.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> <configuration> <displayName>Trade</displayName> <description> DayTrader Stock Trading Performance Benchmark Sample </description> <version>1. You should also notice that you have to specify the includeInApplicationXml element in order to include the streamer and wsappclient libraries into the EAR.samples. 119 .geronimo.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <webModule> <groupId>org. only EJB client JARs are included when specified in the Java modules list. all dependencies are included. with the exception of those that are optional.apache.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <javaModule> <groupId>org. ] <defaultBundleDir>lib</defaultBundleDir> <modules> <javaModule> [: [.apache.geronimo. The streamer module's build is not described in this chapter because it's a standard build generating a JAR.org/plugins/maven-ear-plugin.apache.. 120 . However the ear module depends on it and thus you'll need to have the Streamer JAR available in your local repository before you're able to run the ear module's build..] </javaModule> [.Better Builds with Maven It is also possible to configure where the JARs' Java modules will be located inside the generated EAR.samples.samples. For example.apache.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> <bundleDir>lib</bundleDir> </javaModule> <javaModule> <groupId>org... Run mvn install in daytrader/streamer..geronimo.] There are some other configuration elements available in the EAR plugin which you can find out by checking the reference documentation on. if you wanted to put the libraries inside a lib subdirectory of the EAR you would use the bundleDir element: <javaModule> <groupId>org.. 0] to [daytrader-wsappclient-1.0] to [daytrader-streamer-1.ear to C:\[.apache.geronimo.samples. run mvn install: C:\dev\m2book\code\j2ee\daytrader\ear>mvn install […] [INFO] [ear:generate-application-xml] [INFO] Generating application.0.ear [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.geronimo.daytrader: daytrader-streamer:1.jar] [INFO] Could not find manifest file: C:\dev\m2book\code\j2ee\daytrader\ear\src\main\application\ META-INF\MANIFEST.jar] [INFO] Copying artifact [ejb-client:org.Generating one [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.geronimo..xml [INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources.0] to [daytrader-web-1.ear 121 .Building J2EE Applications To generate the EAR.]\.0. [INFO] [ear:ear] [INFO] Copying artifact [jar:org.apache.0-client.samples.0\daytrader-ear-1..geronimo.0] to [daytrader-ejb-1.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ear\1.apache.0.daytrader: daytrader-ejb:1.0.samples.daytrader: daytrader-ejb:1.0.geronimo.0] to[daytrader-ejb-1.apache.MF .daytrader: daytrader-web:1.war] [INFO] Copying artifact [ejb:org.samples.samples.jar] [INFO] Copying artifact [war:org.apache.daytrader: daytrader-wsappclient:1.0.0.jar] [INFO] Copying artifact [jar:org. 4"> <description> DayTrader Stock Trading Performance Benchmark Sample </description> <display-name>Trade</display-name> <module> <java>daytrader-streamer-1. Like any other container. etc.0. 122 . You'll need to use the JDK 1.jar</ejb> </module> </application> This looks good.sun.war</web-uri> <context-root>/daytrader</context-root> </web> </module> <module> <ejb>daytrader-ejb-1.com/xml/ns/j2ee" xmlns:xsi=". Deploying EARs follows the same principle.sun. 4. enabling the Geronimo plan to be modified to suit the deployment environment. Deploying a J2EE Application You have already learned how to deploy EJBs and WARs into a container individually.jar</java> </module> <module> <java>daytrader-wsappclient-1.0.com/xml/ns/j2ee/application_1_4.com/xml/ns/j2ee. it is recommended that you use an external plan file so that the deployment configuration is independent from the archives getting deployed.org/2001/XMLSchema-instance" xsi:schemaLocation=". you'll deploy the DayTrader EAR into Geronimo.Better Builds with Maven You should review the generated application.4 for this section and the following.w3. Geronimo is somewhat special among J2EE containers in that deploying requires calling the Deployer tool with a deployment plan.0. A plan is an XML file containing configuration information such as how to map CMP entity beans to a specific database.jar</java> </module> <module> <web> <web-uri>daytrader-web-1. In this example.xsd" version="1. Geronimo also supports having this deployment descriptor located within the J2EE archives you are deploying.0. how to map J2EE resources in the container. The DayTrader application does not deploy correctly when using the JDK 5 or newer.0" encoding="UTF-8"?> <application xmlns=". However. The next section will demonstrate how to deploy this EAR into a container.xml to prove that it has everything you need: <?xml version="1.sun. xml</plan> </properties> </deployable> </deployables> </deployer> </configuration> </plugin> 123 .0/ geronimo-tomcat-j2ee-1.codehaus.xml configuration snippet: <plugin> <groupId>org.apache.Building J2EE Applications To get started..0. You would need the following pom.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url>. as shown on Figure 4-12. Figure 4-12: Directory structure of the ear module showing the Geronimo deployment plan How do you perform the deployment with Maven? One option would be to use Cargo as demonstrated earlier in the chapter.org/dist/geronimo/1. As you've seen in the EJB and WAR deployment sections above and in previous chapters it's possible to create properties that are defined either in a properties section of the POM or in a Profile. put the following profile in a profiles. You'll use it to run the Geronimo Deployer tool to deploy your EAR into a running Geronimo container.xml or settings.build.0.build.jar</argument> <argument>--user</argument> <argument>system</argument> <argument>--password</argument> <argument>manager</argument> <argument>deploy</argument> <argument> ${project.xml to configure the Exec plugin: <plugin> <groupId>org.xml </argument> </arguments> </configuration> </plugin> You may have noticed that you're using a geronimo.ear C:\dev\m2book\code\j2ee\daytrader\ear/src/main/deployment/geronimo/plan.ear </argument> <argument> ${basedir}/src/main/deployment/geronimo/plan. learning how to use the Exec plugin is useful in situations where you want to do something slightly different.0-tomcat</geronimo.home}/bin/deployer.home property that has not been defined anywhere.home> </properties> </profile> </profiles> At execution time.home>c:/apps/geronimo-1.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <configuration> <executable>java</executable> <arguments> <argument>-jar</argument> <argument>${geronimo.Better Builds with Maven However.xml file: <profiles> <profile> <id>vmassol</id> <properties> <geronimo. This plugin can execute any process. the Exec plugin will transform the executable and arguments elements above into the following command line: java -jar c:/apps/geronimo-1.0-tomcat/bin/deployer.directory}/${project. Modify the ear/pom. Even though it's recommended to use a specific plugin like the Cargo plugin (as described in 4.finalName}. As the location where Geronimo is installed varies depending on the user. in this section you'll learn how to use the Maven Exec plugin.codehaus.xml 124 . or when Cargo doesn't support the container you want to deploy into.13 Testing J2EE Applications).jar –user system –password manager deploy C:\dev\m2book\code\j2ee\daytrader\ear\target/daytrader-ear-1. by creating a new execution of the Exec plugin or run the following: C:\apps\geronimo-1.jar [INFO] [INFO] `-> daytrader-streamer-1.0-tomcat\bin>deploy stop geronimo/daytrader-derby-tomcat/1.. start your preinstalled version of Geronimo and run mvn exec:exec: C:\dev\m2book\code\j2ee\daytrader\ear>mvn exec:exec [.0-tomcat\bin>deploy undeploy Trade 125 .. You will need to make sure that the DayTrader application is not already deployed before running the exec:exec goal or it will fail.jar [INFO] [INFO] `-> TradeDataSource [INFO] [INFO] `-> TradeJMS You can now access the DayTrader application by opening your browser to J2EE Applications First.jar [INFO] [INFO] `-> daytrader-wsappclient-1.war [INFO] [INFO] `-> daytrader-ejb-1.0 comes with the DayTrader application bundled.0-SNAPSHOT.] [INFO] [exec:exec] [INFO] Deployed Trade [INFO] [INFO] `-> daytrader-web-1. you should first stop it. Since Geronimo 1.0/car If you need to undeploy the DayTrader version that you've built above you'll use the “Trade” identifier instead: C:\apps\geronimo-1.0-SNAPSHOT.0-SNAPSHOT. Figure 4-13: The new functional-tests module amongst the other DayTrader modules You need to add this module to the list of modules in the daytrader/pom. For example. To achieve this. modify the daytrader/pom. 126 .13. At the time of writing. Functional tests can take a long time to execute.Better Builds with Maven 4. Maven only supports integration and functional testing by creating a separate module.xml so that it's built along with the others. so you can define a profile to build the functional-tests module only on demand. Testing J2EE Application In this last section you'll learn how to automate functional testing of the EAR built previously. see Chapter 7. the compiler and Surefire plugins are not triggered during the build life cycle of projects with a pom packaging. but running mvn install -Pfunctional-test will.xml. Figure 4-14: Directory structure for the functional-tests module As this module does not generate an artifact. so these need to be configured in the functional-tests/pom. the packaging should be defined as pom. • The Geronimo deployment Plan file is located in src/deployment/geronimo/plan. However.xml file: 127 .Building J2EE Applications This means that running mvn install will not build the functional-tests module. Now. • Classpath resources required for the tests are put in src/it/resources (this particular example doesn't have any resources). take a look in the functional-tests module itself. Figure 4-1 shows how it is organized: • Functional tests are put in src/it/java. .0</modelVersion> <parent> <groupId>org.0-SNAPSHOT</version> </parent> <artifactId>daytrader-tests</artifactId> <name>DayTrader :: Functional Tests</name> <packaging>pom</packaging> <description>DayTrader Functional Tests</description> <dependencies> <dependency> <groupId>org.geronimo.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <goals> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.samples...Better Builds with Maven <project> <modelVersion>4.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <executions> <execution> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> [.apache.geronimo..0.daytrader</groupId> <artifactId>daytrader-ear</artifactId> <version>1.apache.samples.] </plugins> </build> </project> 128 .apache.apache.0-SNAPSHOT</version> <type>ear</type> <scope>provided</scope> </dependency> [.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1.maven.] </dependencies> <build> <testSourceDirectory>src/it</testSourceDirectory> <plugins> <plugin> <groupId>org.maven. in the case of the DayTrader application.] <dependencies> [.] <dependency> <groupId>org.cargo</groupId> <artifactId>cargo-core-uberjar</artifactId> <version>0.cargo</groupId> <artifactId>cargo-ant</artifactId> <version>0. However. In addition. To set up your database you can use the DBUnit Java API (see. Start by adding the Cargo dependencies to the functional-tests/pom. Derby is the default database configured in the deployment plan.8</version> <scope>test</scope> </dependency> </dependencies> 129 . you will usually utilize a real database in a known state. so DBUnit is not needed to perform any database operations.codehaus. thus ensuring the proper order of execution.sourceforge.. there's a DayTrader Web page that loads test data into the database. You may be asking how to start the container and deploy the DayTrader EAR into it. This is because the EAR artifact is needed to execute the functional tests. You're going to use the Cargo plugin to start Geronimo and deploy the EAR into it..xml file: <project> [. As the Surefire plugin's test goal has been bound to the integration-test phase above.. and it is started automatically by Geronimo.Building J2EE Applications As you can see there is also a dependency on the daytrader-ear module. It also ensures that the daytrader-ear module is built before running the functional-tests build when the full DayTrader build is executed from the toplevel in daytrader/.8</version> <scope>test</scope> </dependency> <dependency> <groupId>org.codehaus. you'll bind the Cargo plugin's start and deploy goals to the preintegration-test phase and the stop goal to the postintegration-test phase.net/). For integration and functional tests.. geronimo.xml</plan> </properties> <pingURL></pingURL> </deployable> </deployables> </deployer> </configuration> </execution> [.0.0/ geronimo-tomcat-j2ee-1. thus ensuring that the EAR is ready for servicing when the tests execute.] The deployer element is used to configure the Cargo plugin's deploy goal. It is configured to deploy the EAR using the Geronimo Plan file. In addition.] <plugin> <groupId>org.apache.Better Builds with Maven Then create an execution element to bind the Cargo plugin's start and deploy goals: <build> <plugins> [. a pingURL element is specified so that Cargo will ping the specified URL till it responds.daytrader</groupId> <artifactId>daytrader-ear</artifactId> <type>ear</type> <properties> <plan>${basedir}/src/deployment/geronimo/plan..codehaus.org/dist/geronimo/1... 130 ..apache.samples.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <wait>false</wait> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url>. add an execution element to bind the Cargo plugin's stop goal to the post-integration-test phase: [. An alternative to using Cargo's Maven plugin is to use the Cargo Java API directly from your tests.net/) to call a Web page from the DayTrader application and check that it's working.. Add the JUnit and HttpUnit dependencies. as you're only using them for testing: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.] <execution> <id>stop-container</id> <phase>post-integration-test</phase> <goals> <goal>stop</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> The functional test scaffolding is now ready.sourceforge.6.1</version> <scope>test</scope> </dependency> <dependency> <groupId>httpunit</groupId> <artifactId>httpunit</artifactId> <version>1.Building J2EE Applications Last. The only thing left to do is to add the tests in src/it/java.1</version> <scope>test</scope> </dependency> 131 . by wrapping it in a JUnit TestSetup class to start the container in setUp() and stop it in tearDown(). with both defined using a test scope. You're going to use the HttpUnit testing framework (.. At this stage you've pretty much become an expert Maven user! The following chapters will show even more advanced topics such as how to write Maven plugins.framework.daytrader.daytrader. Change directory into functional-tests.Better Builds with Maven Next.geronimo.apache.14. WebResponse response = wc.getResponse(request).531 sec [INFO] [cargo:stop {execution: stop-container}] 4.geronimo.*. type mvn install and relax: C:\dev\m2book\code\j2ee\daytrader\functional-tests>mvn install [. how to gather project health information from your builds. public class FunctionalTest extends TestCase { public void testDisplayMainPage() throws Exception { WebConversation wc = new WebConversation(). response. Time elapsed: 0.httpunit. In addition you've discovered how to automate starting and stopping containers. the URL is called to verify that the returned page has a title of “DayTrader”: package org.meterware. add a JUnit test class called src/it/java/org/apache/geronimo/samples/daytrader/FunctionalTest. WebRequest request = new GetMethodWebRequest( ""). deploying J2EE archives and implementing functional tests.FunctionalTest [surefire] Tests run: 1. how to effectively set up Maven in a team.*. Errors: 0.apache..getTitle()). and more.samples. import com. 132 . Failures: 0.. In the class.] . assertEquals("DayTrader". import junit.samples. } } It's time to reap the benefits from your build.java. Summary You have learned from chapters 1 and 2 how to build any type of application and this chapter has demonstrated how to build J2EE applications. Richard Feynman 133 . and resources from a plugin Attaching an artifact to the project For a successful technology. . reality must take precedence over public relations. source directories. Developing Custom Maven Plugins Developing Custom Maven Plugins This chapter covers: • • • • • How plugins execute in the Maven life cycle Tools and languages available to aid plugin developers Implementing a basic plugin using Java and Ant Working with dependencies. for Nature cannot be fooled.5. This makes Maven's plugin framework extremely important as a means of not only building a project. 5. This ordering is called the build life cycle. and more. This chapter will focus on the task of writing custom plugins. A Review of Plugin Terminology Before delving into the details of how Maven plugins function and how they are written. resolving project dependencies. In this case. Packaging these mojos inside a single plugin provides a consistent access mechanism for users. resolving dependencies. Just like Java packages. the build process for a project is comprised of set of mojos executing in a particular. such as integration with external tools and systems. it will discuss the various ways that a plugin can interact with the Maven build environment and explore some examples. executing all the associated mojos at each phase of the build. Even if a project requires a special task to be performed. but also extending a project's build to incorporate new functionality. For example. Maven is actually a platform that executes plugins within a build life cycle. the chapter will cover the tools available to simplify the life of the plugin developer. With most projects. and is defined as a set of task categories. allowing shared configuration to be added to a single section of the POM. When Maven executes a build. let's begin by reviewing the terminology used to describe a plugin and its role in the build. well-defined order. Each mojo can leverage the rich infrastructure provided by Maven for loading projects.Better Builds with Maven 5. it may be necessary to write a custom plugin to integrate these tasks into the build life cycle. Introduction As described in Chapter 2. the plugins provided “out of the box” by Maven are enough to satisfy the needs of most build processes (see Appendix A for a list of default plugins used to build a typical project). it traverses the phases of the life cycle in order. called phases. injecting runtime parameter information. Correspondingly. of the build process are executed by the set of plugins associated with the phases of a project's build life-cycle. Such supplemental plugins can be found at the Apache Maven project. 134 . Maven's core APIs handle the “heavy lifting” associated with loading project definitions (POMs). they are packaged together into a plugin. Recall that a mojo represents a single task in the build process. the common theme for these tasks is the function of compiling code. or work. From there. or even at the Web sites of third-party tools offering Maven integration by way of their own plugins (for a list of some additional plugins available for use. plugins provide a grouping mechanism for multiple mojos that serve similar functions within the build life cycle. It executes an atomic build task that represents a single step in the build process. it is still likely that a plugin already exists to perform this task. This association of mojos to phases is called binding and is described in detail below. in order to perform the tasks necessary to build a project. Additionally. A mojo is the basic unit of work in the Maven application. the maven-compiler-plugin incorporates two mojos: compile and testCompile. the loosely affiliated CodeHaus Mojo project. It starts by describing fundamentals. including a review of plugin terminology and the basic mechanics of the the Maven plugin framework. The actual functional tasks. However. refer to the Plugin Matrix. it enables these mojos to share common code more easily.1.2. and organizing and running plugins. When a number of mojos perform related tasks. if your project requires tasks that have no corresponding plugin. Finally. 5. 135 . Think of these mojos as tangential to the the Maven build process. it may still require that certain activities have already been completed. you will also need a good understanding of how plugins are structured and how they interact with their environment. These mojos are meant to be used by way of direct invocation. which is used for the majority of build activities (the other two life cycles deal with cleaning a project's work directory and generating a project web site). Bootstrapping into Plugin Development In addition to understanding Maven's plugin terminology. However. While Maven does in fact define three different lifecycles. so be sure to check the documentation for a mojo before you re-bind it. The Plugin Framework Maven provides a rich framework for its plugins. sequencing the various build operations. Maven also provides a welldefined procedure for building a project's sources into a distributable archive. the discussion in this chapter is restricted to the default life cycle. Most mojos fall into a few general categories. As a plugin developer. Such mojos may be meant to check out a project from version control. and as such. Understanding this framework will enable you to extract the Maven build-state information that each mojo requires. Indeed. Using the life cycle. plus much more. using the plugin executions section of the project's POM. While mojos usually specify a default phase binding. or even create the directory structure for a new project.Developing Custom Maven Plugins Together with phase binding.1. Therefore. parameter injection and life-cycle binding form the cornerstone for all mojo development. which correspond to the phases of the build life cycle. Each execution can specify a separate phase binding for its declared set of mojos. they can be bound to any phase in the life cycle. including a well-defined build life cycle. Binding to a phase of the Maven life cycle allows a mojo to make assumptions based upon what has happened in the preceding phases. Using Maven's parameter injection infrastructure. As a result.3. 5. a mojo may be designed to work outside the context of the build life cycle. successive phases can make assumptions about what work has taken place in the previous phases. before a mojo can execute. Since phase bindings provide a grouping mechanism for mojos within the life cycle. it is important to provide the appropriate phase binding for your mojos.3. the ordered execution of Maven's life cycle gives coherence to the build process. or aid integration with external development tools. a given mojo can even be bound to the life cycle multiple times during a single build. and parameter resolution and injection. since they often perform tasks for the POM maintainer. mojos have a natural phase binding which determines when a task should execute within the life cycle. dependency management. In some cases. to ensure compatibility with other plugins. will not have a life-cycle phase binding at all since they don't fall into any natural category within a typical build process. in addition to determining its appropriate phase binding. you must understand the mechanics of life-cycle phase binding and parameter injection. Together. a mojo can pick and choose what elements of the build state it requires in order to execute its task. A discussion of all three build life cycles can be found in Appendix A. then two additional mojos will be triggered to handle unit testing. These mojos were always present in the life-cycle definition. Maven's plugin framework ensures that almost anything can be integrated into the build life cycle. the jar mojo from the maven-jarplugin will harvest these class files and archive them into a jar file. providing functions as varied as deployment into the repository system. but until now they had nothing to do and therefore. Instead. many more plugins can be used to augment the default life-cycle definition. is often as important as the modifications made during execution itself.Better Builds with Maven Participation in the build life cycle Most plugins consist entirely of mojos that are bound at various phases in the life cycle according to their function in the build process. generation of the project's website.. First. each of the resource-related mojos will discover this lack of non-code resources and simply opt out without modifying the build in any way. 136 . This is not a feature of the framework. at least two of the above mojos will be invoked. did not execute. and much more. Indeed. This level of extensibility is part of what makes Maven so powerful. As a specific example of how plugins work together through the life cycle. consider a very basic Maven build: a project with source code that should be compiled and archived into a jar file for redistribution. but a requirement of a well-designed mojo. Depending on the needs of a given project. In good mojo design. then the test mojo from the maven-surefire-plugin will execute those compiled tests. If this basic Maven project also includes source code for unit tests. the compile mojo from the maven-compiler-plugin will compile the source code into binary class files in the output directory. Since our hypothetical project has no “non-code” resources. Only those mojos with tasks to perform are executed during this build. Maven will execute a default life cycle for the 'jar' packaging. determining when not to execute. The testCompile mojo from the maven-compiler-plugin will compile the test sources. During this build process. validation of project content. Then. none of the mojos from the maven-resources-plugin will be executed. and what methods Maven uses to extract mojo parameters from the build state.Developing Custom Maven Plugins Accessing build information In order for mojos to execute effectively. along with any system properties that were provided when Maven was launched.xml. • To gain access to the current build state. using a language-appropriate mechanism. whether it is editable. how do you instruct Maven to inject those values into the mojo instance? Further. see Appendix A. The Maven plugin descriptor is a file that is embedded in the plugin jar archive. For example. the life-cycle phase to which the mojo should be bound. For the complete plugin descriptor syntax. and once resolved.and machinelevel Maven settings. This information comes in two categories: • Project information – which is derived from the project POM. The plugin descriptor Though you have learned about binding mojos to life-cycle phases and resolving parameter values using associated expressions. how do you associate mojo parameters with their expression counterparts. until now you have not seen exactly how a life-cycle binding occurs. and more. Using the correct parameter expressions. That is to say. whether it is required for the mojo's execution. thereby avoiding traversal of the entire build-state object graph. Maven allows mojos to specify parameters whose values are extracted from the build state using expressions. in addition to any programmatic modifications made by previous mojo executions. 137 . and the resulting value is injected into the mojo. At runtime. and the mechanism for injecting the parameter value into the mojo instance. they require information about the state of the current build. and consists of the user. This mojo would retrieve the list of source directories from the current build information using the following expression: ${project. It contains information about the mojo's implementation class (or its path within the plugin jar). each declared mojo parameter includes information about the various expressions used to resolve its value. Within this descriptor. the expression associated with a parameter is resolved against the current build state. under the path /META-INF/maven/plugin. the set of parameters the mojo declares. assuming the patch directory is specified as mojo configuration inside the POM.. The descriptor is an XML file that informs Maven about the set of mojos that are contained within the plugin. a mojo can keep its dependency list to a bare minimum. see Appendix A. a mojo that applies patches to the project source code will need to know where to find the project source and patch files. Environment information – which is more static.compileSourceRoots} Then. 138 . By abstracting many of these details away from the plugin developer. it's a simple case of providing special javadoc annotations to identify the properties and parameters of the mojo. This framework generates both plugin documentation and the coveted plugin descriptor. To accommodate the extensive variability required from the plugin descriptor. For example. and its format is specific to the mojo's implementation language.verbose}" default-value="false" */ private boolean verbose.2. adding any other plugin-level metadata through its own configuration (which can be modified in the plugin's POM). However. the clean mojo in the maven-cleanplugin provides the following class-level javadoc annotation: /** * @goal clean */ public class CleanMojo extends AbstractMojo This annotation tells the plugin-development tools the mojo's name. the maven-plugin-plugin simply augments the standard jar life cycle mentioned previously as a resource-generating step (this means the standard process of turning project sources into a distributable jar archive is modified only slightly. This is where Maven's plugin development tools come into play. except when configuring the descriptor.3. and direct invocations (as from the command line). one per supported mojo language). • Of course. POM configurations. this flexibility comes at a price. it uses a complex syntax. Maven's development tools expose only relevant specifications in a format convenient for a given plugin's implementation language. This metadata is embedded directly in the mojo's source code where possible. These plugindevelopment tools are divided into the following two categories: • The plugin extractor framework – which knows how to parse the metadata formats for every language supported by Maven. Writing a plugin descriptor by hand demands that plugin developers understand low-level details about the Maven plugin framework – details that the developer will not use. The clean mojo also defines the following: /** * Be verbose in the debug log-level? * * @parameter expression="${clean. Using Java. and orchestrates the process of extracting metadata from mojo implementations. In short. to generate the plugin descriptor). 5.Better Builds with Maven The plugin descriptor is very powerful in its ability to capture the wiring information for a wide variety of mojos. it consists of a framework library which is complemented by a set of provider libraries (generally. The maven-plugin-plugin – which uses the plugin extractor framework. Maven's plugindevelopment tools remove the burden of maintaining mojo metadata by hand. so it can be referenced from lifecycle mappings. Plugin Development Tools To simplify the creation of plugin descriptors. Maven provides plugin tools to parse mojo metadata from a variety of formats. the format used to write a mojo's metadata is dependent upon the language in which the mojo is implemented. However.File instance. see Appendix A. At first. the underlying principles remain the same. it's impossible to initialize the Java field with the value you need. The second specifies that this parameter can also be configured from the command line as follows: -Dclean. rather than in the Java field initialization code. If you choose to write mojos in another language. When the mojo is instantiated. consider the following field annotation from the resources mojo in the maven-resources-plugin: /** * Directory containing the classes.build. this value is resolved based on the POM and injected into this field.Developing Custom Maven Plugins Here. This parameter annotation also specifies two attributes. Since the plugin tools can also generate documentation about plugins based on these annotations.verbose=false Moreover. it's a good idea to consistently specify the parameter's default value in the metadata. these annotations are specific to mojos written in Java. But consider what would happen if the default value you wanted to inject contained a parameter expression. especially when you could just declare the field as follows: private boolean verbose = false.outputDirectory}" */ private File classesDirectory. it specifies that this parameter can be configured from the POM using: <configuration> <verbose>false</verbose> </configuration> You may notice that this configuration name isn't explicitly specified in the annotation. 139 . * * @parameter default-value="${project. For a complete list of javadoc annotations available for specifying mojo metadata. For instance. namely the java. like Ant. Remember. which references the output directory for the current project. it might seem counter-intuitive to initialize the default value of a Java field using a javadoc annotation. In this case. the annotation identifies this field as a mojo parameter. then the mechanism for specifying mojo metadata such as parameter definitions will be different.io. expression and default-value. it's implicit when using the @parameter annotation. The first specifies that this parameter's default value should be set to false. when translating a project build from Ant to Maven (refer to Chapter 8 for more discussion about migrating from Ant to Maven). Since it provides easy reuse of third-party APIs from within your mojo. In these cases. Ant. Ant-based plugins can consist of multiple mojos mapped to a single build script. called buildinfo.3. Whatever language you use. For example. Plugin parameters can be injected via either field reflection or setter methods. which is used to read and write build information metadata files. you risk confusing the issue at hand – namely. it's important to keep the examples clean and relatively simple. Such information might include details about the system environment. this technique also works well for Beanshell-based mojos. During the early phases of such a migration. and minimizes the number of dependencies you will have on Maven's core APIs. To make Ant scripts reusable. and because many Maven-built projects are written in Java. in certain cases you may find it easier to use Ant scripts to perform build tasks. the examples in this chapter will focus on a relatively simple problem space: gathering and publishing information about a particular build. and so on. Since Beanshell behaves in a similar way to standard Java. Maven can wrap an Ant build target and use it as if it were a mojo. mojo mappings and parameter definitions are declared via an associated metadata file. In addition. and Beanshell. A Note on the Examples in this Chapter When learning how to interact with the different aspects of Maven from within a mojo. it also provides good alignment of skill sets when developing mojos from scratch.. 5. Java is the language of choice. This project can be found in the source code that accompanies this book. it is often simpler to wrap existing Ant build targets with Maven mojos and bind them to various phases in the life cycle. the specific snapshot versions of dependencies used in the build. or any combination thereof. Maven currently supports mojos written in Java. Simple javadoc annotations give the plugin processing plugin (the maven-plugin-plugin) the instructions required to generate a descriptor for your mojo. Since Java is currently the easiest language for plugin development. You can install it using the following simple command: mvn install 140 . this chapter will focus primarily on plugin development in this language. Maven lets you select pieces of the build state to inject as mojo parameters. However. due to the migration value of Ant-based mojos when converting a build to Maven. the particular feature of the mojo framework currently under discussion. Therefore. individual mojos each mapped to separate scripts.Better Builds with Maven Choose your mojo implementation language Through its flexible plugin descriptor format and invocation framework. Maven can accommodate mojos written in virtually any language. you will need to work with an external project. this chapter will also provide an example of basic plugin development using Ant.3. Otherwise. To facilitate these examples. This is especially important during migration. For many mojo developers. 5. you will look at the development effort surrounding a sample project. if the system property os. In addition to simply capturing build-time information.Developing Custom Maven Plugins 5. As a side note. which allows the build to succeed in that environment. This development effort will have the task of maintaining information about builds that are deployed to the development repository. perform the following steps: cd buildinfo mvn install 141 . providing a thin adapter layer that allows the generator to be run from a Maven build. This information should capture relevant details about the environment used to build the Guinea Pig artifacts. this approach encapsulates an important best practice. For simplicity. you are free to write any sort of adapter or front-end code you wish. and this dependency is injected by one of the aforementioned profiles. If you have a test dependency which contains a defect. When this profile is not triggered. which will be triggered by the value of a given system property – say.4. Therefore. Here. this dependency is used only during testing. and take advantage of a single. To build the buildinfo generator library. and has no impact on transitive dependencies for users of this project. The buildinfo plugin is a simple wrapper around this generator.name is set to the value Linux (for more information on profiles.4. a default profile injects a dependency on a windows-specific library. called Guinea Pig. consider a case where the POM contains a profile. you must first install the buildinfo generator library into your Maven local repository. Capturing this information is key. BuildInfo Example: Capturing Information with a Java Mojo To begin. for the purposes of debugging. refer to Chapter 3). When triggered. then the value of the triggering system property – and the profile it triggers – could reasonably determine whether the build succeeds or fails. reusable utility in many different scenarios. eventually publishing it alongside the project's artifact in the repository for future reference (refer to Chapter 7 for more details on how teams use Maven). by separating the generator from the Maven binding code. it makes sense to publish the value of this particular system property in a build information file so that others can see the aspects of the environment that affected this build.1. the values of system properties used in the build are clearly very important. Prerequisite: Building the buildinfo generator project Before writing the buildinfo plugin. Developing Your First Mojo For the purposes of this chapter. which will be deployed to the Maven repository system. since it can have a critical effect on the build process and the composition of the resulting Guinea Pig artifacts. you will need to disseminate the build to the rest of the development team. this profile adds a new dependency on a Linux-specific library. /** * The location to write the buildinfo file.plugins \ -DartifactId=maven-buildinfo-plugin \ -DarchetypeArtifactId=maven-archetype-mojo When you run this command.. you should remove the sample mojo. This will create a project with the standard layout under a new subdirectory called mavenbuildinfo-plugin within the current working directory.build.mvnbook. simply execute the following: mvn archetype:create -DgroupId=com. The mojo You can handle this scenario using the following. Once you have the plugin's project structure in place. interacting with Maven's own plugin parameter annotations.. you will need to modify the POM as follows: • Change the name element to Maven BuildInfo Plugin. fairly simple Java-based mojo: [. You will modify the POM again later.] /** * Write environment information for the current build to file.Better Builds with Maven Using the archetype plugin to generate a stub plugin project Now that the buildinfo generator library has been installed. you're likely to see a warning message saying “${project. * @goal extract * @phase package */ public class WriteBuildInfoMojo extends AbstractMojo { /** * Determines which system properties are added to the file. writing your custom mojo is simple. This message does not indicate a problem. since this plugin doesn't currently have an associated web site. To generate a stub plugin project for the buildinfo plugin. as you know more about your mojos' dependencies. used to generate the plugin source code.mergere. it's helpful to jump-start the plugin-writing process by using Maven's archetype plugin to create a simple stub project from a standard pluginproject template. 142 . * This is a comma-delimited list. Finally. this simple version will suffice for now. Inside. For the purposes of this plugin. It can be found in the plugin's project directory.java.systemProperties}" */ private String systemProperties. since you will be creating your own mojo from scratch.directory} is not a valid reference”. This is a result of the Velocity template. * @parameter expression="${buildinfo. you'll find a basic POM and a sample mojo. However. under the following path: src\main\java\com\mergere\mvnbook\plugins\MyMojo. • Remove the url element. value ).getProperty( key. BuildInfoConstants. buildInfo. } } } While the code for this mojo is fairly straightforward. outputFile ). public void execute() throws MojoExecutionException { BuildInfo buildInfo = new BuildInfo().outputFile}" defaultvalue="${project. if ( systemProperties != null ) { String[] keys = systemProperties. } } try { BuildInfoUtils. } catch ( IOException e ) { throw new MojoExecutionException( "Error writing buildinfo XML file. i < keys. it's worthwhile to take a closer look at the javadoc annotations.xml" * @required */ private File outputFile.MISSING_INFO_PLACEHOLDER ). String value = sysprops." ). i++ ) { String key = keys[i].split( ".getProperties(). e ).build. Reason: " + e.length.addSystemProperty( key.version}buildinfo. for ( int i = 0. In the class-level javadoc comment.artifactId}-${project.outputDirectory}/${project. there are two special annotations: /** * @goal extract * @phase package */ 143 .trim().writeXml( buildInfo.getMessage(). Properties sysprops = System.Developing Custom Maven Plugins * @parameter expression="${buildinfo. @goal. you're collecting information from the environment with the intent of distributing it alongside the main project artifact in the repository.Better Builds with Maven The first annotation.user. However.systemProperties}" */ This is one of the simplest possible parameter specifications.directory}/${project.systemProperties=java. In general. using several expressions to extract project information on-demand. In this example. will allow this mojo field to be configured using the plugin configuration specified in the POM. If this parameter has no value when the mojo is configured.version. consider the parameter for the systemProperties variable: /** * @parameter expression="${buildinfo.outputFile}" defaultvalue="${project.build. as execution without an output file would be pointless. Using the @parameter annotation by itself. the mojo cannot function unless it knows where to write the build information file. which are used to specify the mojo's parameters. tells the plugin tools to treat this class as a mojo named extract. with no attributes. When you invoke this mojo.version}buildinfo. you have several field-level javadoc comments. you will use this name. the outputFile parameter presents a slightly more complex example of parameter annotation.artifactId}-${project. To ensure that this parameter has a value. In addition. Therefore. However. so they will be considered separately. you want the mojo to use a certain value – calculated from the project's information – as a default value for this parameter. it makes sense to execute this mojo in the package phase. This is where the expression attribute comes into play. the mojo uses the @required annotation. Aside from the class-level comment. First. the complexity is justified. the expression attribute allows you to specify a list of system properties on-the-fly. The second annotation tells Maven where in the build life cycle this mojo should be executed.xml" * * @required */ In this case. 144 . In this case. the build will fail with an error. Take another look: /** * The location to write the buildinfo file. attaching to the package phase also gives you the best chance of capturing all of the modifications made to the build state before the jar is produced. Using the expression attribute. * * @parameter expression="${buildinfo. you may want to allow a user to specify which system properties to include in the build information file. you can specify the name of this parameter when it's referenced from the command line. In this case. as follows: localhost $ mvn buildinfo:extract \ -Dbuildinfo. so it will be ready to attach to the project artifact. Each offers a slightly different insight into parameter specification. you can see why the normal Java field initialization is not used. The default output path is constructed directly inside the annotation.dir Finally. since you have more specific requirements for this parameter. Also.0</version> </dependency> <dependency> <groupId>com.0</modelVersion> <groupId>com.0-SNAPSHOT</version> </dependency> </dependencies> </project> This POM declares the project's identity and its two dependencies. 145 .Developing Custom Maven Plugins The plugin POM Once the mojo has been written. as follows: <project> <modelVersion>4. This mapping is a slightly modified version of the one used for the jar packaging. which simply adds plugin descriptor extraction and generation to the build process.0-SNAPSHOT</version> <packaging>maven-plugin</packaging> <dependencies> <dependency> <groupId>org. Note the dependency on the buildinfo project.shared</groupId> <artifactId>buildinfo</artifactId> <version>1.mvnbook.0.mvnbook.mergere. note the packaging – specified as maven-plugin – which means that this plugin build will follow the maven-plugin life-cycle mapping.maven</groupId> <artifactId>maven-plugin-api</artifactId> <version>2.mergere.apache. which provides the parsing and formatting utilities for the build information file.plugins</groupId> <artifactId>maven-buildinfo-plugin</artifactId> <version>1. you can construct an equally simple POM which will allow you to build the plugin. mvnbook. <plugins> <plugin> <groupId>com.. you need to ensure that every build captures this information. as follows: <build> .name system property..mergere. </plugins> . This involves modification of the standard jar life-cycle.Better Builds with Maven Binding to the life cycle Now that you have a method of capturing build-time environmental information.. </build> The above binding will execute the extract mojo from your new maven-buildinfo-plugin during the package phase of the life cycle..plugins</groupId> <artifactId>maven-buildinfo-plugin</artifactId> <executions> <execution> <id>extract</id> <configuration> <systemProperties>os. so that every build triggers it.version</systemProperties> </configuration> <goals> <goal>extract</goal> </goals> </execution> </executions> </plugin> ..java. 146 . and capture the os. which you can do by adding the configuration of the new plugin to the Guinea Pig POM..name. The easiest way to guarantee this is to bind the extract mojo to the life cycle. 0" encoding="UTF-8"?><buildinfo> <systemProperties> <os.name>Linux</os. you will find information similar to the following: <?xml version="1.xml In the file.. there should be a file named: guinea-pig-1.0-SNAPSHOT-buildinfo.. Your mojo has captured the name of operating system being used to execute the build and the version of the jvm. and both of these properties can have profound effects on binary compatibility.version> </systemProperties> </buildinfo> While the name of the OS may differ.] [buildinfo:extract {execution:extract}] ------------------------------------------------------------------BUILD SUCCESSFUL ------------------------------------------------------------------- Under the target directory. you should see output similar to the following: [..4</java.name> <java. the output of of the generated build information is clear enough.] [INFO] [INFO] [INFO] [INFO] [. you can build the plugin and try it out! First.. test the plugin by building Guinea Pig with the buildinfo plugin bound to its life cycle as follows: > C:\book-projects\guinea-pig > mvn package When the Guinea Pig build executes.Developing Custom Maven Plugins The output Now that you have a mojo and a POM. build the buildinfo plugin with the following commands: > C:\book-projects\maven-buildinfo-plugin > mvn clean install Next.version>1. 147 . therefore. The Ant target To leverage the output of the mojo from the previous example – the build information file – you can use that content as the body of the e-mail. and should look similar to the following: <project> <target name="notify-target"> <mail from="maven@localhost" replyto="${listAddr}" subject="Build Info for Deployment of ${project. However. it's simpler to use Ant. you need to share it with others in your team when the resulting project artifact is deployed. simply declare mojo parameters for them. it might be enough to send a notification e-mail to the project development mailing list. From here. you'll notice that this mojo expects several project properties.xml. given the amount of setup and code required. it's a simple matter of specifying where the email should be sent. it should be extracted directly from the POM for the project we're building. mature tasks available for build script use (including one specifically for sending e-mails).2. Information like the to: address will have to be dynamic. “deployment” is defined as injecting the project artifact into the Maven repository system. It's important to remember that in the Maven world. so that other team members have access to it.outputFile}"> <to>${listAddr}</to> </mail> </target> </project> If you're familiar with Ant.name}" mailhost="${mailHost}" mailport="${mailPort}" messagefile="${buildinfo.Better Builds with Maven 5. 148 . After writing the Ant target to send the notification e-mail. BuildInfo Example: Notifying Other Developers with an Ant Mojo Now that some important information has been captured. Of course. and the dozens of well-tested. Your new mojo will be in a file called notify.build. you just need to write a mojo definition to wire the new target into Maven's build process. and how. To ensure these project properties are in place within the Ant Project instance. such a task could be handled using a Java-based mojo and the JavaMail API from Sun. For now.4. xml and should appear as follows: <pluginMetadata> <mojos> <mojo> <call>notify-target</call> <goal>notify</goal> <phase>deploy</phase> <description><![CDATA[ Email environment information from the current build to the development mailing list when the artifact is deployed.name</name> <defaultValue>${project. metadata for an Ant mojo is stored in a separate file.directory}/${project. ]]></description> <parameters> <parameter> <name>buildinfo. the build script was called notify.mojos.build.artifactId}${project.xml.version}-buildinfo. The corresponding metadata file will be called notify.outputFile</name> <defaultValue> ${project. In this example.xml </defaultValue> <required>true</required> <readonly>false</readonly> </parameter> <parameter> <name>listAddr</name> <required>true</required> </parameter> <parameter> <name>project.build.Developing Custom Maven Plugins The mojo metadata file Unlike the prior Java examples. . you will see many similarities.0 shipped without support for Ant-based mojos (support for Ant was added later in version 2. and parameter flags such as required are still present.2). The maven-plugin-plugin ships with the Java and Beanshell provider libraries which implement the above interface. the contents of this file may appear different than the metadata used in the Java mojo. Finally. since you now have a good concept of the types of metadata used to describe a mojo. When this mojo is executed. As with the Java example. In an Antbased mojo however. a more in-depth discussion of the metadata file for Ant mojos is available in Appendix A. each with its own information like name. mojo-level metadata describes details such as phase binding and mojo name. its value is injected as a project reference. you'd have to add a <type> element alongside the <name> element. notice that this mojo is bound to the deploy phase of the life cycle. by binding the mojo to the deploy phase of life cycle. First of all. all of the mojo's parameter types are java. you will have to add support for Ant mojo extraction to the maven-plugin-plugin. parameter injection takes place either through direct field assignment.Better Builds with Maven At first glance.String. Instead. otherwise. As with the Java example. the overall structure of this file should be familiar. The rule for parameter injection in Ant is as follows: if the parameter's type is java. Maven still must resolve and inject each of these parameters into the mojo. some special configuration is required to allow the maven-plugin-plugin to recognize Ant mojos. Any build that runs must be deployed for it to affect other development team members. the notification e-mails will be sent only when a new artifact becomes available in the remote repository. however. This library defines a set of interfaces for parsing mojo descriptors from their native format and generating various output from those descriptors – including plugin descriptor files.0. or through JavaBeans-style setXXX() methods. but expressed in XML. Also. In Java. metadata specify a list of parameters for the mojo. because you're going to be sending e-mails to the development mailing list. then its value is injected as a property. default value.lang. with its use of the MojoDescriptorExtractor interface from the maven-plugin-tools-api library. This allows developers to generate descriptors for Java.or Beanshell-based mojos with no additional configuration. upon closer examination. and more. In this example. The maven-plugin-plugin is a perfect example. parameters are injected as properties and references into the Ant Project instance. the difference here is the mechanism used for this injection. to develop an Ant-based mojo.String (the default). expression. If one of the parameters were some other object type.lang. in order to capture the parameter's type in the specification. Maven allows POM-specific injection of plugin-level dependencies in order to accommodate plugins that take a framework approach to providing their functionality. However. so it's pointless to spam the mailing list with notification e-mails every time a jar is created for the project. Fortunately. The expression syntax used to extract information from the build state is exactly the same. Modifying the plugin POM for Ant mojos Since Maven 2. 150 . This is an important point in the case of this mojo. the specifications of which should appear as follows: <dependencies> [. and it is always necessary for embedding Ant scripts as mojos in the Maven build process. it requires a couple of new dependencies.2</version> </dependency> </dependencies> </plugin> </plugins> </build> [.] </project> Additionally.maven</groupId> <artifactId>maven-plugin-tools-ant</artifactId> <version>2..apache.] </dependencies> The first of these new dependencies is the mojo API wrapper for Ant build scripts.Developing Custom Maven Plugins To accomplish this. you will need to add a dependency on the maven-plugin-tools-ant library to the maven-plugin-plugin using POM configuration as follows: <project> [. quite simply. it will be quite difficult to execute an Ant-based plugin. 151 ....maven</groupId> <artifactId>maven-script-ant</artifactId> <version>2. since the plugin now contains an Ant-based mojo. a dependency on the core Ant library (whose necessity should be obvious).2</version> </dependency> <dependency> <groupId>ant</groupId> <artifactId>ant</artifactId> <version>1. The second new dependency is.6..0..] <dependency> <groupId>org..apache.] <build> <plugins> <plugin> <artifactId>maven-plugin-plugin</artifactId> <dependencies> <dependency> <groupId>org. If you don't have Ant in the plugin classpath.5</version> </dependency> [..0. you should add a configuration section to the new execution section. and these two mojos should not execute in the same phase (as mentioned previously). Again. This is because an execution section can address only one phase of the build life cycle. Even its configuration is the same.org</listAddr> </configuration> </execution> </executions> </plugin> [.] </execution> <execution> <id>notify</id> <goals> <goal>notify</goal> </goals> <configuration> <listAddr>[email protected] in this case. it behaves like any other type of mojo to Maven. execute the following command: > mvn deploy The build process executes the steps required to build and deploy a jar .. which supplies the listAddr parameter value. a new section for the notify mojo is created..codehaus.] <plugins> <plugin> <artifactId>maven-buildinfo-plugin</artifactId> <executions> <execution> <id>extract</id> [..] </plugins> </build> The existing <execution> section – the one that binds the extract mojo to the build – is not modified.Better Builds with Maven Binding the notify mojo to the life cycle Once the plugin descriptor is generated for the Ant mojo. In order to tell the notify mojo where to send this e-mail... Instead. notification happens in the deploy phase only. it will also extract the relevant environmental details during the package phase. Now. Adding a life-cycle binding for the new Ant mojo in the Guinea Pig POM should appear as follows: <build> [. 152 .. and send them to the Guinea Pig development mailing list in the deploy phase. because non-deployed builds will have no effect on other team members. project source code and resources. including the ability to work with the current project instance.1.0</version> </dependency> To enable access to information in artifacts via Maven's artifact API. in that the artifact-related interfaces are actually maintained in a separate artifact from the components used to work with them. it's important to mention that the techniques discussed in this section make use of Maven's project and artifact APIs. if you also need to work with artifacts – including actions like artifact resolution – you must also declare a dependency on maven-artifact-manager in your POM. However. Gaining Access to Maven APIs Before proceeding. Therefore. the above dependency declaration is fine.Developing Custom Maven Plugins 5. if you want to know how to develop plugins that manage dependencies.5. like this: <dependency> <groupId>org. modify your POM to define a dependency on maven-artifact by adding the following: <dependency> <groupId>org. Whenever you need direct access to the current project instance.0</version> </dependency> 153 . you must add a dependency on one or more Maven APIs to your project's POM. and artifact attachments. The next examples cover more advanced topics relating to mojo development. Advanced Mojo Development The preceding examples showed how to declare basic mojo parameters.maven</groupId> <artifactId>maven-artifact-manager</artifactId> <version>2.maven</groupId> <artifactId>maven-project</artifactId> <version>2.0</version> </dependency> It's important to realize that Maven's artifact APIs are slightly different from its project API. The following sections do not build on one another.apache. modify your POM to define a dependency on maven-project by adding the following: <dependency> <groupId>org. However. then read on! 5.5.maven</groupId> <artifactId>maven-artifact</artifactId> <version>2. if you only need to access information inside an artifact. To enable access to Maven's project API. and are not required for developing basic mojos.apache. and how to annotate the mojo with a name and a preferred phase binding. or any related components. one or more artifacts in the current build.apache. . • Second.2. the test mojo in the maven-surefire-plugin requires the project's dependency paths so it can execute the project's unit tests with a proper classpath. this is specified via a mojo parameter definition and should use the following syntax: /** * The set of dependencies required by the project * @parameter default-value="${project. Maven makes it easy to inject a project's dependencies. the mojo must tell Maven that it requires the project's dependencies be resolved (this second requirement is critical.Better Builds with Maven 5. As with all declarations. such as: -Ddependencies=[. However. “How exactly can I configure this parameter?" The answer is that the mojos parameter value is derived from the dependencies section of the POM. this declaration has another annotation..Set dependencies.5. the mojo must tell Maven that it requires the project dependency set. For example. only the following two changes are required: • First.util. 154 . if the mojo works with a project's dependencies.. Fortunately. If this parameter could be specified separately from the main dependencies section.] So. since it defines a parameter with a default value that is required to be present before the mojo can execute. the compile mojo in the maven-compiler-plugin must have a set of dependency paths in order to build the compilation classpath..dependencies}" * @required * @readonly */ private java. you may be wondering. namely it disables configuration via the POM under the following section: <configuration> <dependencies>. This annotation tells Maven not to allow the user to configure this parameter directly. Accessing Project Dependencies Many mojos perform tasks that require access to a project's dependencies. This declaration should be familiar to you. In addition. since the dependency resolution process is what populates the set of artifacts that make up the project's dependencies). To enable a mojo to work with the set of artifacts that comprise the project's dependencies. so you configure this parameter by modifying that section directly. it must tell Maven that it requires access to that set of artifacts.</dependencies> </configuration> It also disables configuration via system properties. Injecting the project dependency set As described above. users could easily break their builds – particularly if the mojo in question compiled project source code. which might not be as familiar: @readonly. Consider the case where a developer wants to clean the project directory using Maven 1. this is a direct result of the rigid dependency resolution design in Maven 1. the build process doesn't incur the added overhead of resolving them.. at which scope. Maven encounters another mojo that declares a requirement for test-scoped dependencies. To gain access to the project's dependencies. if your mojo needs to work with the project's dependencies. but being unavailable for testing. rather than configuring a specific plugin only. Maven provides a mechanism that allows a mojo to specify whether it requires the project dependencies to be resolved. any dependencies specific to the test scope will remain unresolved.] */ Now. It's important to note that your mojo can require any valid dependency scope to be resolved prior to its execution. Requiring dependency resolution Having declared a parameter that injects the projects dependencies into the mojo. Returning to the example. Therefore. Maven 2 addresses this problem by deferring dependency resolution until the project's dependencies are actually required. If you've used Maven 1. 155 . If a mojo doesn't need access to the dependency list. However.Developing Custom Maven Plugins In this case. Failure to do so will cause an empty set to be injected into the mojo's dependencies parameter. Even then. Maven 2.x. the mojo should be ready to work with the dependency set.x term 'goal'). the clean process will fail – though not because the clean goal requires the project dependencies. Maven 2 will not resolve project dependencies until a mojo requires it. your mojo must declare that it needs them. it will have to tell Maven to resolve them. you'll know that one of its major problems is that it always resolves all project dependencies before invoking the first goal in the build (for clarity. it will force all of the dependencies to be resolved (test is the widest possible scope. the @readonly annotation functions to force users to configure the POM. encapsulating all others). and if so. In other words. Maven will resolve only the dependencies that satisfy the requested scope. the mojo is missing one last important step. Rather. If the project's dependencies aren't available.. direct configuration could result in a dependency being present for compilation. if a mojo declares that it requires dependencies for the compile scope.x. if later in the build process. You can declare the requirement for the test-scoped project dependency set using the following class-level annotation: /** * @requiresDependencyResolution test [.0 uses the term 'mojo' as roughly equivalent to the Maven 1. setType( artifact. This is critical when the project depends on snapshot versions of other libraries.next(). In this case.setClassifier( artifact. rd.setArtifactId( artifact. which enumerates all the dependencies used in the build. knowing the specific set of snapshots used to compile a project can lend insights into why other builds are breaking.setOptional( artifact.Better Builds with Maven BuildInfo example: logging dependency versions Turning once again to the maven-buildinfo-plugin. you'll add the dependency-set injection code discussed previously to the extract mojo in the maven-buildinfo-plugin.setResolvedVersion( artifact. Once you have access to the project dependency set. you will need to iterate through the set. ResolvedDependency rd = new ResolvedDependency(). rd. } buildInfo. adding the information for each individual dependency to your buildinfo object. } } 156 . rd.setGroupId( artifact.getVersion() ).getClassifier() ). For example. rd.getType() ).isEmpty() ) { for ( Iterator it = dependencies. ) { Artifact artifact = (Artifact) it.isOptional() ).getGroupId() ).getClassifier() != null ) { rd.setScope( artifact. rd.getArtifactId() ). it. rd.addResolvedDependency( rd ). one of the dependency libraries may have a newer snapshot version available.hasNext().iterator(). you will want to log the versions of the dependencies used during the build. This will result in the addition of a new section in the buildinfo file. The code required is as follows: if ( dependencies != null && !dependencies. if ( artifact. so it can log the exact set of dependencies that were used to produce the project artifact. To that end. along with their versions – including those dependencies that are resolved transitively.getScope() ). it's important for mojos to be able to access and manipulate both the source directory list and the resource definition list for a project. it may be necessary to augment a project's code base with an additional source directory. Once this new source directory is in place. junit. the resolvedVersion in the output above would be 1. If this plugin adds resources like images. particularly if the newest snapshot version is different. has a static version of 3.1.8.8.guineapig</groupId> <artifactId>guinea-pig-api</artifactId> <resolvedVersion>1. This dependency is part of the example development effort.0-alpha-SNAPSHOT in the POM. If you were using a snapshot version from the local repository which has not been deployed. the extract mojo should produce the same buildinfo file.3.5. when a project is built in a JDK 1. Accessing Project Sources and Resources In certain cases.Developing Custom Maven Plugins When you re-build the plugin and re-run the Guinea Pig build.. the compile mojo will require access to it. with an additional section called resolvedDependencies that looks similar to the following: <resolvedDependencies> <resolvedDependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <resolvedVersion>3.. it can have dramatic effects on the resulting project artifact. but consider the next dependency: guinea-pigapi.mergere. For instance. Therefore. This is because snapshot time-stamping happens on deployment only.mvnbook.] </resolvedDependencies> The first dependency listed here.094434-1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>compile</scope> </resolvedDependency> [. The actual snapshot version used for this artifact in a previous build could yield tremendous insight into the reasons for a current build failure.0-20060210.1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>test</scope> </resolvedDependency> [.] <resolvedDependency> <groupId>com... it's possible that a plugin may be introduced into the build process when a profile is activated. 5. 157 . or new source code directories to the build. This won't add much insight for debuggers looking for changes from build to build. and other mojos may need to produce reports based on those same source directories. and is still listed with the version 1.4 environment.0-alpha-SNAPSHOT. 6. This can be very useful when plugins generate source code. instead.directory}/generated-sources/<plugin-prefix> While conforming with location standards like this is not required. Further. unless declared otherwise. used to add new source directory to the build. It is possible that some builds won't have a current project. The generally-accepted binding for this type of activity is in the generate-sources life-cycle phase. and no other project contains current state information for this build. Once the current project instance is available to the mojo. 158 .build. this parameter also adds the @readonly annotation. The current project instance is a great example of this. it refers to a part of the build state that should always be present (a more in-depth discussion of this annotation is available in section 3. when generating source code. This annotation tells Maven that users cannot modify this parameter. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. Maven will fail the build if it doesn't have a current project instance and it encounters a mojo that requires one. as in the case where the mavenarchetype-plugin is used to create a stub of a new project. which can be injected into a mojo using the following code: /** * Project instance. Mojos that augment the source-root list need to ensure that they execute ahead of the compile phase. mojos require a current project instance to be available. or simply need to augment the basic project code base. Maven's project API bridges this gap. which tells Maven that it's OK to execute this mojo in the absence of a P declaration identifies the project field as a required mojo parameter that will inject the current MavenProject instance into the mojo for use. So. Chapter 3 of this book). the accepted default location for the generated source is in: ${project. any normal build will have a current project. it's a simple matter of adding a new source root to it. However. Maven's concept of a project can accommodate a whole list of directories. as in the following example: project. As in the prior project dependencies discussion. allowing plugins to add new source directories as they execute. It requires access to the current MavenProject instance only. it does improve the chances that your mojo will be compatible with other plugins bound to the same life cycle.addCompileSourceRoot( sourceDirectoryPath ). so your mojo simply needs to ask for it. as in the case of Maven itself and the components. which means it's always present. Maven components can make it much simpler to interact with the build process. Right away. the process of adding a new resource directory to the current build is straightforward and requires access to the MavenProject and MavenProjectHelper: /** * Project instance. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. This could be a descriptor for binding the project artifact into an application framework. the project helper is not a build state. which will be packaged up in the same jar as the project classes. Many different mojo's package resources with their generated artifacts such as web.Developing Custom Maven Plugins Adding a resource to the build Another common practice is for a mojo to generate some sort of non-code resource. Component requirements are simple to declare. or wsdl files for web services. used to make addition of resources * simpler. * @component */ private MavenProjectHelper helper. and abstract the associated complexities away from the mojo developer. the unadorned @component annotation – like the above code snippet – is adequate. this is what Maven calls a component requirement (it's a dependency on an internal component of the running Maven application). the Maven application itself is well-hidden from the mojo developer. It provides methods for attaching artifacts and adding new resource definitions to the current project. For example. To be clear. the MavenProjectHelper component is worth mentioning here. to simplify adding resources to a project. the mojo also needs access to the MavenProjectHelper component. however. used to add new source directory to the build. However. This component is part of the Maven application. Whatever the purpose of the mojo. Normally. However. in some special cases. This declaration will inject the current project instance into the mojo.xml file found in all maven artifacts. you should notice something very different about this parameter. 159 . that it's not a parameter at all! In fact. it is a utility. Component requirements are not available for configuration by users. Namely. as discussed previously. A complete discussion of Maven's architecture – and the components available – is beyond the scope of this chapter.xml files for servlet engines. the MavenProjectHelper is provided to standardize the process of augmenting the project instance. as it is particularly useful to mojo developers. in most cases. The project helper component can be injected as follows: /** * project-helper instance. adding a new resource couldn't be easier. if it's missing. Other examples include javadoc mojo in the maven-javadoc-plugin. Again. for the sake of brevity. List includes = Collections. directory. and then call a utility method on the project helper. inclusion patterns. The most common place for such activities is in the generate-resources life-cycle phase. the entire build will fail. or else bind a mojo to the life-cycle phase that will add an additional source directory to the build. The code should look similar to the following: String directory = "relative/path/to/some/directory". List excludes = null. which actually compiles the source code contained in these root directories into classes in the project output directory. in order to perform some operation on the source code. Resources are copied to the classes directory of the build during the process-resources phase. and the jar mojo in the maven-source-plugin. In a typical case. includes.addResource(project. it will need to execute ahead of this phase.Better Builds with Maven With these two objects at your disposal. helper. instead. which may or may not be directly configurable. Again. others must read the list of active source directories. excludes). * @parameter default-value="${project. If your mojo is meant to add resources to the eventual project artifact. they have to modify the sourceDirectory element in the POM. as in the following example: /** * List of source roots containing non-test code. this parameter declaration states that Maven does not allow users to configure this parameter directly. The parameter is also required for this mojo to execute. The prior example instantiates the resource's directory.singletonList("**/*"). all you have to do is declare a single parameter to inject them. Gaining access to the list of source root directories for a project is easy. it's important to understand where resources should be added during the build life cycle. Accessing the source-root list Just as some mojos add new source directories to the build. Similar to the parameter declarations from previous sections. along with inclusion and exclusion patterns for resources within that directory.compileSourceRoots}" * @required * @readonly */ private List sourceRoots. conforming with these standards improves the compatibility of your plugin with other plugins in the build. these values would come from other mojo parameters. and exclusion patterns as local variables. 160 . Simply define the resources directory to add. The classic example is the compile mojo in the maven-compiler-plugin. Therefore. When you add this code to the extract mojo in the maven-buildinfo-plugin. In order to make this information more generally applicable. ) { String sourceRoot = (String) it. for eventual debugging purposes. then this profile would dramatically alter the resulting project artifact when activated. in order to incorporate list of source directories to the buildinfo object. } One thing to note about this code snippet is the makeRelative() method. Accessing the resource list Non-code resources complete the picture of the raw materials processed by a Maven build.. which copies all non-code resources to the output directory for inclusion in the project artifact. it's better to bind it to a later phase like package if capturing a complete picture of the project is important. it. as in the case of the extract mojo. By the time the mojo gains access to them. To be clear.next(). it can be bound to any phase in the life cycle. You've already learned that mojos can modify the list of resources included in the project artifact. If a certain profile injects a supplemental source directory into the build (most likely by way of a special mojo binding). the ${basedir} expression refers to the location of the project directory in the local file system. Returning to the buildinfo example.addSourceRoot( makeRelative( sourceRoot ) ). binding to any phase later than compile should be acceptable. 161 . any reference to the path of the project directory in the local file system should be removed. This is the mechanism used by the resources mojo in the maven-resources-plugin. buildInfo. it can iterate through them. you need to add the following code: for ( Iterator it = sourceRoots. This involves subtracting ${basedir} from the source-root paths. source roots are expressed as absolute file-system paths. let's learn about how a mojo can access the list of resources used in a build. Remember. it could be critically important to track the list of source directories used in a particular build. since compile is the phase where source files are converted into classes. now. binding this mojo to an early phase of the life cycle increases the risk of another mojo adding a new source root in a later phase.hasNext().iterator(). However. applying whatever processing is necessary.Developing Custom Maven Plugins Now that the mojo has access to the list of project source roots. In this case however. 4 environment that doesn't support Java generics. As noted before with the dependencies parameter.Resource instances. 162 . you'll notice the makeRelative() method. ) { Resource resource = (Resource) it. allowing direct configuration of this parameter could easily produce results that are inconsistent with other resource-consuming mojos. and Maven mojos must be able to execute in a JDK 1. it.addResourceRoot( makeRelative( resourceRoot ) ).hasNext(). since the ${basedir} path won't have meaning outside the context of the local file system.apache. and can be accomplished through the following code snippet: if ( resources != null && !resources. it can mean the difference between an artifact that can be deployed into a server environment and an artifact that cannot. which in fact contain information about a resource root.model. In this case. * @parameter default-value="${project.resources}" * @required * @readonly */ private List resources.isEmpty() ) { for ( Iterator it = resources. For instance. Since the resources list is an instance of java. containing * directory.getDirectory().next(). this parameter is declared as required for mojo execution and cannot be edited by the user. Just like the source-root injection parameter. by trimming the ${basedir} prefix. } } As with the prior source-root example. the user has the option of modifying the value of the list by configuring the resources section of the POM. This method converts the absolute path of the resource directory into a relative path. includes.iterator().Better Builds with Maven Much like the source-root list. if an activated profile introduces a mojo that generates some sort of supplemental framework descriptor.util.List. along with some matching rules for the resource files it contains. to avoid any ambiguity. String resourceRoot = resource. mojos must be smart enough to cast list elements as org. It's also important to note that this list consists of Resource objects. It's necessary to revert resource directories to relative locations for the purposes of the buildinfo plugin. All POM paths injected into mojos are converted to their absolute form first. and excludes. Since mojos can add new resources to the build programmatically. the resources list is easy to inject as a mojo parameter. Therefore. buildInfo. The parameter appears as follows: /** * List of Resource objects for the current build.maven. It's a simple task to add this capability. it is important that the buildinfo file capture the resource root directories used in the build for future reference. capturing the list of resources used to produce a project artifact can yield information that is vital for debugging purposes. the key differences are summarized in the table below. Like the vast majority of activities. This chapter does not discuss test-time and compile-time source roots and resources as separate topics. and even the buildinfo file produced in the examples throughout this chapter.testSourceRoots} helper.Developing Custom Maven Plugins Adding this code snippet to the extract mojo in the maven-buildinfo-plugin will result in a resourceRoots section being added to the buildinfo file. it's worthwhile to discuss the proper place for this type of activity within the build life cycle. These artifacts are typically a derivative action or side effect of the main build process. this classifier must also be specified when declaring the dependency for such an artifact. Maven treats these derivative artifacts as attachments to the main project artifact. javadoc bundles. which may be executed during the build process. due to the similarities. an artifact attachment will have a classifier.5. which sets it apart from the main project artifact in the repository. Therefore. a corresponding activity can be written to work with their test-time counterparts. any mojo seeking to catalog the resources used in the build should execute at least as late as the process-resources phase.testResources} 5.resources} project. in that they are never distributed without the project artifact being distributed. Classic examples of attached artifacts are source archives. instead. which must be processed and included in the final project artifact. it can be referenced like any other artifact. like sources or javadoc. This ensures that any resource modifications introduced by mojos in the build process have been completed. by using the classifier element for that dependency section within the POM. Note on testing source-roots and resources All of the examples in this advanced development discussion have focused on the handling of source code and resources. that for every activity examined that relates to source-root directories or resource definitions. Once an artifact attachment is deposited in the Maven repository. It's important to note however. mojos produce new artifacts that should be distributed alongside the main project artifact in the Maven repository system. That section should appear as follows: <resourceRoots> <resourceRoot>src/main/resources</resourceRoot> <resourceRoot>target/generated-resources/xdoclet</resourceRoot> </resourceRoots> Once more. 163 . Since all project resources are collected and copied to the project output directory in the processresources phase.addTestSourceRoot() ${project.4.addTestResource() ${project.addCompileSourceRoot() ${project.compileSourceRoots} helper. Usually. Table 5-2: Key differences between compile-time and test-time mojo activities Activity Change This To This Add testing source root Get testing source roots Add testing resource Get testing resources project. Attaching Artifacts for Installation and Deployment Occasionally. only the parameter expressions and method names are different. The concepts are the same.addResource() ${project. collecting the list of project resources has an appropriate place in the life cycle. for historical reference. since it provides information about how each snapshot of the project came into existence. respectively. you'll need a parameter that references the current project instance as follows: /** * Project instance. The MavenProject instance is the object with which your plugin will register the attachment with for use in later phases of the lifecycle. which is still missing from the maven-buildinfo-plugin example. 164 . "xml". See Section 5. For convenience you should also inject the following reference to MavenProjectHelper. an extra piece of code must be executed in order to attach that artifact to the project artifact. there are also two somewhat cryptic string values being passed in: “xml” and “buildinfo”. This extra step.2 for a discussion about MavenProjectHelper and component requirements. Once you include these two fields in the extract mojo within the maven-buildinfo-plugin. which will make the process of attaching the buildinfo artifact a little easier: /** * This helper class makes adding an artifact attachment simpler. However. the meaning and requirement of project and outputFile references should be clear. or set of mojos. the process of attaching the generated buildinfo file to the main project artifact can be accomplished by adding the following code snippet: helper. can provide valuable information to the development team. and only serves to describe the latest build. First. outputFile ). the distribution of the buildinfo file via Maven's repository will provide a more permanent record of the build for each snapshot in the repository.attachArtifact( project.5. While an e-mail describing the build environment is transient. * @component */ private MavenProjectHelper helper. From the prior examples. Doing so guarantees that attachment will be distributed when the install or deploy phases are run. "buildinfo". produces a derivative artifact. These values represent the artifact extension and classifier. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. to which we want to add an attached artifact.Better Builds with Maven When a mojo. Including an artifact attachment involves adding two parameters and one line of code to your mojo. a project requires special tasks in order to build successfully. Finally. It identifies the file as being produced by the the maven-buildinfoplugin. you're telling Maven that this artifact should be distinguished from other project artifacts by using this value in the classifier element of the dependency declaration.6. when the project is deployed. 165 . Maven can integrate these custom tasks into the build process through its extensible plugin framework. in certain circumstances. as follows: > > > > mvn install cd C:\Documents and Settings\jdcasey\. However. “This is an XML file”. If you build the Guinea Pig project using this modified version of the maven-buildinfo-plugin.Developing Custom Maven Plugins By specifying an extension of “xml”.m2\repository cd com\mergere\mvnbook\guineapig\guinea-pig-core\1.0-SNAPSHOT. enabling you to attach custom artifacts for installation or deployment. you can test it by re-building the plugin. Whether they be code-generation. you should see the buildinfo file appear in the local repository alongside the project jar. as opposed to another plugin in the build process which might produce another XML file with different meaning.jar guinea-pig-core-1. you're telling Maven that the file in the repository should be named using a. reporting. the mojos – that are bound to the build life cycle. Summary In its unadorned state. It can extract relevant details from a running build and generate a buildinfo file based on these details. or verification steps. you've learned that it's relatively simple to create a mojo that can extract relevant parts of the build state in order to perform a custom build-process task – even to the point of altering the set of source-code directories used to build the project. This serves to attach meaning beyond simply saying. the maven-buildinfo-plugin is ready for action.xml guinea-pig-core-1. then running Maven to the install life-cycle phase on our test project. there is a standardized way to inject new behavior into the build by binding new mojos at different life-cycle phases.0-SNAPSHOT dir guinea-pig-core-1. you've also learned how a plugin generated file can be distributed alongside the project artifact in Maven's repository system. In this chapter. and route that message to other development team members on the project development mailing list. Since the build process for a project is defined by the plugins – or more accurately. Maven represents an implementation of the 80/20 rule.xml extension. 5.0-SNAPSHOT. Finally. By specifying the “buildinfo” classifier. From there.pom Now. the maven-buildinfo-plugin can also generate an e-mail that contains the buildinfo file contents. Working with project dependencies and resources is equally as simple.0-SNAPSHOT-buildinfo. Maven can build a basic project with little or no modification – thus covering the 80% case. Using the default lifecycle mapping. Now that you've added code to distribute the buildinfo file. it can attach the buildinfo file to the main project artifact so that it's distributed whenever Maven installs or deploys the project. if you have the means. it's unlikely to be a requirement unique to your project. If your project requires special handling. However. Mojo development can be as simple or as complex (to the point of embedding nested Maven processes within the build) as you need it to be. remember that whatever problem your custom-developed plugin solves. chances are good that you can find a plugin to address this need at the Apache Maven project. 166 . Using the plugin mechanisms described in this chapter. you can integrate almost any tool into the build process. So. please consider contributing back to the Maven community by providing access to your new plugin. only a tiny fraction of which are a part of the default lifecycle mapping. the Codehaus Mojo project. If not.Better Builds with Maven Many plugins already exist for Maven use. or the project web site of the tools with which your project's build must integrate. It is in great part due to the re-usable nature of its plugins that Maven can offer such a powerful build platform. developing a custom Maven plugin is an easy next step. .Samuel Butler 167 . it is an art. Because the POM is a declarative model of the project. you will be revisiting the Proficio application that was developed in Chapter 3. The next three sections demonstrate how to set up an effective project Web site.zip for convenience as a starting point. because if the bar is set too high. it was pointed out that Maven's application of patterns provides visibility and comprehensibility. and how well it adapts to change. and then run mvn install from the proficio subdirectory to ensure everything is in place. unzip the Code_Ch06-1. there will be too many failed builds. • Maven takes all of the information you need to know about your project and brings it together under the project Web site. When referring to health. 168 . you'll learn how to use a number of these tools effectively. and using a variety of tools. Conversely. and learning more about the health of the project. In this chapter. It is important not to get carried away with setting up a fancy Web site full of reports that nobody will ever use (especially when reports contain failures they don't want to know about!).Better Builds with Maven 6. the project will meet only the lowest standard and go no further.zip file into C:\mvnbook or your selected working directory. why have a site. and what the nature of that activity is. The code that concluded Chapter 3 is also included in Code_Ch06-1. if the bar is set too low. For this reason. This is unproductive as minor changes are prioritized over more important tasks.determining how well the code works. and whether the conditions for the checks are set correctly. and display that information in a single place.1. This is important. there are two aspects to consider: • Code quality . It is these characteristics that assist you in assessing the health of your project. Through the POM. new tools that can assess its health are easily integrated. Maven can analyze. Maven has access to the information that makes up a project. many of the reports illustrated can be run as part of the regular build in the form of a “check” that will fail the build if a certain condition is not met. relate. What Does Maven Have to do With Project Health? In the introduction. In this chapter. But.finding out whether there is any activity on the project. Project vitality . to get a build to pass. which everyone can see at any time. if the build fails its checks? The Web site also provides a permanent record of a project's health. how well it is tested. It provides additional information to help determine the reasons for a failed build. To begin. 2. is the focus of the rest of this chapter. adding a new report is easy. and so on. Project Reports. you can add the Surefire report to the sample application. These reports provide a variety of insights into the quality and vitality of the project. The Project Info menu lists the standard reports Maven includes with your site by default. Figure 6-1: The reports generated by Maven You can see that the navigation on the left contains a number of reports. Adding Reports to the Project Web site This section builds on the information on project Web sites in Chapter 2 and Chapter 3. unless you choose to disable them. On a new project. having these standard reports means that those familiar with Maven Web sites will always know where to find the information they need. The second menu (shown opened in figure 6-1). issue tracker.xml: 169 . However. To start. by including the following section in proficio/pom. These reports are useful for sharing information with others. review the project Web site shown in figure 6-1. For newcomers to the project. For example.Assessing Project Health with Maven 6. and now shows how to integrate project health information. SCM. and to reference as links in your mailing lists. this menu doesn't appear as there are no reports included. You can now run the following site task in the proficio-core directory to regenerate the site.html.maven. and as a result. <reporting> <plugins> <plugin> <groupId>org.apache. </project> This adds the report to the top level project... Figure 6-2: The Surefire report 170 . C:\mvnbook\proficio\proficio-core> mvn site This can now be found in the file target/site/surefire-report. and is shown in figure 6-2. it will be inherited by all of the child modules..Better Builds with Maven ..plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> </plugin> </plugins> </reporting> . 3. Configuration of Reports Before stepping any further into using the project Web site. <build> <plugins> <plugin> <groupId>org. Configuration for a reporting plugin is very similar.xml.maven.apache.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <showSuccess>false</showSuccess> </configuration> </plugin> </plugins> </reporting> .maven. the defaults are sufficient to get started with a useful report. however it is added to the reporting section of the POM.... it is important to understand how the report configuration is handled in Maven.5</target> </configuration> </plugin> </plugins> </build> .plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.. and due to using convention over configuration. For example.apache.. 171 ..5</source> <target>1.. the report shows the test results of the project. You might recall from Chapter 2 that a plugin is configured using the configuration element inside the plugin declaration in pom. <reporting> <plugins> <plugin> <groupId>org. for example: . 6. Maven knows where the tests and test results are.. For a quicker turn around..Assessing Project Health with Maven As you may have noticed in the summary.xml: . the report can be modified to only show test failures by adding the following configuration in pom. 172 . To do this. they will all be included. even though it is not specific to the execution. and not site generation. “Executions” such as this were introduced in Chapter 3. what if the location of the Surefire XML reports that are used as input (and would be configured using the reportsDirectory parameter) were different to the default location? Initially.maven. you might think that you'd need to configure the parameter in both sections.. while the configuration can be used to modify its appearance or behavior. To continue with the Surefire report.directory}/surefire-reports </outputDirectory> </configuration> <executions> <execution> <phase>test</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> </plugins> </build> . However. However. The plugin is included in the build section to ensure that the configuration.. Any plugin configuration declared in the reporting section is also applied to those declared in the build section.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <outputDirectory> ${project. Fortunately. If a plugin contains multiple reports. or in addition to. is used only during the build. the reporting section: . Plugins and their associated configuration that are declared in the build section are not used during site generation.build.apache.. and the build. as seen in the previous section. <build> <plugins> <plugin> <groupId>org.. some reports apply to both the site. this isn't the case – adding the configuration to the reporting section is sufficient. the plugin would need to be configured in the build section instead of. consider if you wanted to create a copy of the HTML report in the directory target/surefirereports every time the build ran.Better Builds with Maven The addition of the plugin element triggers the inclusion of the report in the Web site. build. and that you had had generated its XML results to target/surefire-reports/unit and target/surefire-reports/perf respectively. once for unit tests and once for a set of performance tests.directory}/surefire-reports/unit </reportsDirectory> <outputName>surefire-report-unit</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> <reportSet> <id>perf</id> <configuration> <reportsDirectory> ${project.Assessing Project Health with Maven When you configure a reporting plugin. The configuration value is specific to the build stage When you are configuring the plugins to be used in the reporting section. you would include the following section in your pom. always place the configuration in the reporting section – unless one of the following is true: 1. The reports will not be included in the site 2. by default all reports available in the plugin are executed once. and cases where a particular report will be run more than once. each time with a different configuration. For example.. To generate two HTML reports for these results. there are cases where only some of the reports that the plugin produces will be required.directory}/surefire-reports/perf </reportsDirectory> <outputName>surefire-report-perf</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> </reportSets> </plugin> </plugins> </reporting> .apache. consider if you had run Surefire twice in your build.xml: .build.. However. and a list of reports to include. 173 .plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <reportSets> <reportSet> <id>unit</id> <configuration> <reportsDirectory> ${project. Each report set can contain configuration.. <reporting> <plugins> <plugin> <groupId>org. Both of these cases can be achieved with the reportSets element. which is the reporting equivalent of the executions element in the build section..maven. The reports element in the report set is a required element.html and target/site/surefire-report-perf.. 6. This approach to balancing these competing requirements will vary. running mvn surefire-report:report will not use either of these configurations. add the following to the reporting section of the pom. Maven will use only the configuration that is specified in the plugin element itself.. • The open source graphical application.. but quite separate to the end user documentation. The reports in this list are identified by the goal names that would be used if they were run from the command line. where much of the source code and Javadoc reference is of interest to the end user. outside of any report sets. <plugin> <groupId>org. as with executions. which are targeted at the developers.plugins</groupId> <artifactId>maven-project-info-reports-plugin</artifactId> <reportSets> <reportSet> <reports> <report>mailing-list</report> <report>license</report> </reports> </reportSet> </reportSets> </plugin> .html. and an inconvenience to the developer who doesn't want to wade through end user documentation to find out the current state of a project's test coverage.4. • The open source reusable library. On the entrance page there are usage instructions for Proficio. While the defaults are usually sufficient.maven. but in the navigation there are reports about the health of the project. and most likely doesn't use Maven to generate it. It is also possible to include only a subset of the reports in a plugin. to generate only the mailing list and license pages of the standard reports. there's something subtly wrong with the project Web site. depending on the project. who isn't interested in the state of the source code. where the developer information is available. Consider the following: • The commercial product. This may be confusing for the first time visitor. When a report is executed individually. For example. they must be enumerated in this list.. 174 . Separating Developer Reports From User Documentation After adding a report.Better Builds with Maven Running mvn site with this addition will generate two Surefire reports: target/site/surefirereport-unit. this customization will allow you to configure reports in a way that is just as flexible as your build. which are targeted at an end user. If you want all of the reports in a plugin to be generated.apache. where the end user documentation is on a completely different server than the developer information. However.xml file: . which are based on time and the current state of the project. the Updated column indicates whether the content is regularly updated. and the content's characteristics. The best compromise between not updating between releases. It refers to a particular version of the software. For a single module library. 175 . is to branch the end user documentation in the same way as source code. Table 6-1 lists the content that a project Web site may contain.Assessing Project Health with Maven To determine the correct balance. that can be updated between releases without risk of including new features. and to maintain only one set of documentation. and sometimes they are available for download separately. including the end user documentation in the normal build is reasonable as it is closely tied to the source code reference. It is important not to include documentation for features that don't exist in the last release. each section of the site needs to be considered. and a development branch where new features can be documented for when that version is released. It is good to update the documentation on the Web site between releases. While there are some exceptions. as it is confusing for those reading the site who expect it to reflect the latest release. This is documentation for the end user including usage instructions and guides. like mailing list information and the location of the issue tracker and SCM are updated also. Some standard reports. However. This is typically true for the end user documentation. These are the reports discussed in this chapter that display the current state of the project to the developers. The situation is different for end user documentation. The Distributed column in the table indicates whether that form of documentation is typically distributed with the project. in some cases down to individual reports. This is true of the news and FAQs. Table 6-1: Project Web site content types Content Description Updated Distributed Separated News. For libraries and frameworks. Features that are available only in more recent releases should be marked to say when they were introduced. regardless of releases. Sometimes these are included in the main bundle. and not introducing incorrect documentation. You can maintain a stable branch. which are continuously published and not generally of interest for a particular release. Yes No Yes Yes Yes No No Yes No Yes No No In the table. Javadoc) that in a library or framework is useful to the end user. source code references should be given a version and remain unchanged after being released. but usually not distributed or displayed in an application. FAQs and general Web site End user documentation Source code reference material Project health and vitality reports This is the content that is considered part of the Web site rather than part of the documentation. This is reference material (for example. the source code reference material and reports are usually generated from the modules that hold the source code and perform the build. the Javadoc and other reference material are usually distributed for reference as well. It is also true of the project quality and vitality reports. The Separated column indicates whether the documentation can be a separate module or project. or maybe totally independent. but make it an independent project when it forms the overall site with news and FAQs.mergere. in most cases. In the following example. This is done using the site archetype : C:\mvnbook\proficio> mvn archetype:create -DartifactId=user-guide \ -DgroupId=com. and is not distributed with the project. In Proficio. the documentation and Web site should be kept in a separate module dedicated to generating a site.Better Builds with Maven However. the site currently contains end user documentation and a simple report. which you can later add content to. you will learn how to separate the content and add an independent project for the news and information Web site. This separated documentation may be a module of the main project. a module is created since it is not related to the source code reference material.mvnbook. you are free to place content wherever it best suits your project. 176 . It is important to note that none of these are restrictions placed on a project by Maven. You would make it a module when you wanted to distribute it with the rest of the project. In this case. The resulting structure is shown in figure 6-4. This avoids including inappropriate report information and navigation elements. The current structure of the project is shown in figure 6-3. Figure 6-3: The initial setup The first step is to create a module called user-guide for the end user documentation. While these recommendations can help properly link or separate content according to how it will be used.proficio \ -DarchetypeArtifactId=maven-archetype-site-simple This archetype creates a very basic site in the user-guide subdirectory. whether to maintain history or to maintain a release and a development preview. <url> scp://mergere. Previously.xml file to change the site deployment url: . <distributionManagement> <site> .. edit the top level pom. Under the current structure... In this example.com/mvnbook/proficio.com/www/library/mvnbook/proficio/reference/${pom. and the user guide to. while optional.. the development documentation will be moved to a /reference/version subdirectory so that the top level directory is available for a user-facing web site.mergere. 177 .version} </url> </site> </distributionManagement> . the URL and deployment location were set to the root of the Web site:. is useful if you are maintaining multiple public versions. the development documentation would go to that location..Assessing Project Health with Maven Figure 6-4: The directory layout with a user guide The next step is to ensure the layout on the Web site is correct. First.mergere.. Adding the version to the development documentation.com/mvnbook/proficio/user-guide. hyper links in the content pane can be used to navigate to other classes and interfaces within the cross reference. </plugins> </reporting> . 183 ..... the links can be used to quickly find the source belonging to a particular exception.. A useful way to leverage the cross reference is to use the links given for each line number in a source file to point team mates at a particular piece of code. Including JXR as a permanent fixture of the site for the project is simple.apache.plugins</groupId> <artifactId>maven-jxr-plugin</artifactId> </plugin> . however the content pane is now replaced with a syntax-highlighted. Those familiar with Javadoc will recognize the framed navigation layout.xml: . crossreferenced Java source file for the selected class. You can now run mvn site in proficio-core and see the Source Xref item listed in the Project Reports menu of the generated site. <reporting> <plugins> <plugin> <groupId>org. if you don't have the project open in your IDE. and can be done by adding the following to proficio/pom. Or.maven.Assessing Project Health with Maven Figure 6-6: An example source code cross reference Figure 6-6 shows an example of the cross reference. com/j2se/1. the Javadoc report is quite configurable.0-alpha-9/apidocs</link> </links> </configuration> </plugin> . when added to proficio/pom. will link both the JDK 1.. <plugin> <groupId>org.apache.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <configuration> <links> <link>. browsing source code is too cumbersome for the developer if they only want to know about how the API works.4 API documentation and the Plexus container API documentation used by Proficio: . Unlike JXR. the following configuration. The end result is the familiar Javadoc output. In the online mode. A Javadoc report is only as good as your Javadoc! Make sure you document the methods you intend to display in the report.. For example. this will link to an external Javadoc reference at a given URL. you should include it in proficio/pom. Using Javadoc is very similar to the JXR report and most other reports in Maven... and if possible use Checkstyle to ensure they are documented. 184 . <plugin> <groupId>org. Again. in target/site/apidocs.xml.apache.Better Builds with Maven In most cases. you can run it on its own using the following command: C:\mvnbook\proficio\proficio-core> mvn javadoc:javadoc Since it will be included as part of the project site. One useful option to configure is links.sun. However.org/plugins/maven-jxr-plugin/.codehaus.xml as a site report to ensure it is run every time the site is regenerated: .maven. many of the other reports demonstrated in this chapter will be able to link to the actual code to highlight an issue.maven.4..org/ref/1. Now that you have a source cross reference..plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> </plugin> . so an equally important piece of reference material is the Javadoc report. see the plugin reference at</link> <link>... however if you'd like a list of available configuration options. with most of the command line options of the Javadoc tool available. the default JXR configuration is sufficient. but it results in a separate set of API documentation for each library in a multi-module build. this simple change will produce an aggregated Javadoc and ignore the Javadoc report in the individual modules. this setting is always ignored by the javadoc:jar goal. the Javadoc plugin provides a way to produce a single set of API documentation for the entire project. the next section will allow you to start monitoring and improving its health. Try running mvn clean javadoc:javadoc in the proficio directory to produce the aggregated Javadoc in target/site/apidocs/index.. but conversely to have the Javadoc closely related.String and java. you'll see that all references to the standard JDK classes such as java.html.Object. Edit the configuration of the existing Javadoc plugin in proficio/pom. of course!). <configuration> <aggregate>true</aggregate> .. 185 .Assessing Project Health with Maven If you regenerate the site in proficio-core with mvn site again.lang. as well as any references to classes in Plexus. This setting must go into the reporting section so that it is used for both reports and if the command is executed separately..lang. ensuring that the deployed Javadoc corresponds directly to the artifact with which it is deployed for use in an IDE. Since it is preferred to have discrete functional pieces separated into distinct modules. Setting up Javadoc has been very convenient.xml by adding the following line: . this is not sufficient. Now that the sample application has a complete reference for the source code. However.. are linked to API documentation on the Sun website. One option would be to introduce links to the other modules (automatically generated by Maven based on dependencies. </configuration> . When built from the top level project. but this would still limit the available classes in the navigation as you hop from module to module... Instead. sf. Figure 6-7 shows the output of a PMD report on proficio-core. copy-and-pasted code.sf. Figure 6-7: An example PMD report 186 . and this section will look at three: • PMD (. and violations of a coding standard.net/) • Tag List PMD takes a set of either predefined or user-defined rule sets and evaluates the rules across your Java source code.7. which in turn reduces the risk that its accuracy will be affected by change) Maven has reports that can help with each of these health factors.net/) • Checkstyle (..Better Builds with Maven 6. this is important for both the efficiency of other team members and also to increase the overall level of code comprehension. The result can help identify bugs. which is obtained by running mvn pmd:pmd. since the JXR report was included earlier.xml file: .. Also.apache. some source files are identified as having problems that could be addressed. unused code. The default PMD report includes the basic. <plugin> <groupId>org. if you configure these.xml</ruleset> <ruleset>/rulesets/imports.maven...xml</ruleset> </rulesets> </configuration> </plugin> .xml</ruleset> <ruleset>/rulesets/unusedcode. 187 .xml</ruleset> <ruleset>/rulesets/finalizers... and imports rule sets.Assessing Project Health with Maven As you can see. such as unused methods and variables. Adding new rule sets is easy.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>/rulesets/basic.apache.maven. the line numbers in the report are linked to the actual source code so you can browse the issues. methods. For example. you must configure all of them – including the defaults explicitly. The “basic” rule set includes checks on empty blocks.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> </plugin> . redundant or unused import declarations... add the following to the plugin configuration you declared earlier: . unnecessary statements and possible bugs – such as incorrect loop variables. by passing the rulesets configuration to the plugin. However. <plugin> <groupId>org. The “imports” rule set will detect duplicate. variables and parameters. to include the default rules.. and the finalizer rule sets. The “unused code” rule set will locate unused private fields. Adding the default PMD report to the site is just like adding any other report – you can include it in the reporting section in the proficio/pom. sf.. For PMD. see the instructions on the PMD Web site at file by adding: . try the following guidelines from the Web site at</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>${basedir}/src/main/pmd/custom. you can choose to create a custom rule set. and add more as needed. with the following content: <?xml version="1.0"?> <ruleset name="custom"> <description> Default rules.Better Builds with Maven You may find that you like some rules in a rule set. There is no point having hundreds of violations you won't fix.apache. override the configuration in the proficio-core/pom.xml" /> <rule ref="/rulesets/imports. you may use the same rule sets in a number of projects.xml. you need to make sure it stays that way. and imports are useful in most scenarios and easily fixed. select the rules that apply to your own project.html. 188 . Start small. From this starting. create a file in the proficio-core directory of the sample application called src/main/pmd/custom.html: • • Pick the rules that are right for you. basic.xml</ruleset> </rulesets> </configuration> </plugin> </plugins> </reporting> . but exclude the “unused private field” rule. but not others. It is also possible to write your own rules if you find that existing ones do not cover recurring problems in your source code. For example. In either case.sf.. To try this. unusedcode. One important question is how to select appropriate rules.net/bestpractices.maven. no unused private field warning </description> <rule ref="/rulesets/basic. If you've done all the work to select the right rules and are correcting all the issues being discovered.net/howtomakearuleset..xml"> <exclude name="UnusedPrivateField" /> </rule> </ruleset> To use this rule set.. <reporting> <plugins> <plugin> <groupId>org.xml" /> <rule ref="/rulesets/unusedcode. you could create a rule set with all the default rules. For more examples on customizing the rule sets. Or. </plugins> </build> You may have noticed that there is no configuration here.maven.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . If you need to run checks earlier. the pmd:check goal is run in the verify phase. so that it is regularly tested. You will see that the build fails.: 189 . This is done by binding the goal to the build life cycle. which occurs after the packaging phase. [INFO] --------------------------------------------------------------------------- Before correcting these errors. you could add the following to the execution block to ensure that the check runs just after all sources exist: <phase>process-sources</phase> To test this new setting. By default. To correct this. To do so.apache. try running mvn verify in the proficio-core directory.. you should include the check in the build. add the following section to the proficio/pom. fix the errors in the src/main/java/com/mergere/mvnbook/proficio/DefaultProficio.Assessing Project Health with Maven Try this now by running mvn pmd:check on proficio-core.xml file: <build> <plugins> <plugin> <groupId>org. but recall from Configuring Reports and Checks section of this chapter that the reporting configuration is applied to the build as well. the build will succeed. // Trigger PMD and checkstyle int i... but mandatory in an integration environment.. it can be slow and obtrusive during general development. private void testMethod() // NOPMD { } . If you run mvn verify again. // NOPMD ..This report is included by default when you enable the PMD plugin in your reporting section. Figure 6-8: An example CPD report 190 . adding the check to a profile. See Continuous Integration with Continuum section in the next chapter for information on using profiles and continuous integration.. there is one that is in a separate report. While the PMD report allows you to run a number of different rules.. An example report is shown in figure 6-8. and will appear as “CPD report” in the Project Reports menu. and it includes a list of duplicate code fragments discovered across your entire source base. // NOPMD . While this check is very useful. or copy/paste detection report. For that reason. which is executed only in an appropriate environment.Better Builds with Maven .. int j. can make the check optional for developers.. This is the CPD. It was originally designed to address issues of format and style. rather than identifying a possible factoring of the source code. Checkstyle is a tool that is. This may not give you enough control to effectively set a rule for the source code.net/availablechecks. • Use it to check code formatting and to detect other problems exclusively This section focuses on the first usage scenario. and rely on other tools for detecting other problems. and a commercial product called Simian (. resulting in developers attempting to avoid detection by making only slight modifications. However. If you need to learn more about the available modules in Checkstyle.Assessing Project Health with Maven In a similar way to the main check. Figure 6-9 shows the Checkstyle report obtained by running mvn checkstyle:checkstyle from the proficio-core directory. but has more recently added checks for other code issues. and still rely on other tools for greater coverage. in many ways. 191 .com. which defaults to 100. • Use it to check code formatting and selected other problems. similar to PMD. pmd:cpd-check can be used to enforce a failure if duplicate source code is found. refer to the list on the Web site at. such as Checkstyle. Simian can also be used through Checkstyle and has a larger variety of configuration options for detecting duplicate source code. you may choose to use it in one of the following ways: • Use it to check code formatting only.html. Whether to use the report only. There are other alternatives for copy and paste detection. or to enforce a check will depend on the environment in which you are working. With this setting you can fine tune the size of the copies detected. Some of the extra summary information for overall number of errors and the list of checks used has been trimmed from this display.au/products/simian/).sf.redhillconsulting. the CPD report contains only one variable to configure: minimumTokenCount. Depending on your environment. This style is also bundled with the Checkstyle plugin.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <configuration> <configLocation>config/maven_checks.maven. with a link to the corresponding source line – if the JXR report was enabled.. but Proficio is using the Maven team's code style.xml: .Better Builds with Maven Figure 6-9: An example Checkstyle report You'll see that each file with notices.. and then the errors are shown.apache. the rules used are those of the Sun Java coding conventions. add the following to the reporting section of proficio/pom. That's a lot of errors! By default. warnings or errors is listed in a summary. <plugin> <groupId>org.xml</configLocation> </configuration> </plugin> 192 . so to include the report in the site and configure it to use the Maven style. The Checkstyle plugin itself has a large number of configuration options that allow you to customize the appearance of the report.net/config. you will need to create a Checkstyle configuration.html config/maven_checks. known as “Task List” in Maven 1.Assessing Project Health with Maven Table 6-3 shows the configurations that are built into the Checkstyle plugin.xml config/turbine_checks. This report.0.apache. The configLocation parameter can be set to a file within your build.org/guides/development/guidem2-development. Table 6-3: Built-in Checkstyle configurations Configuration config/sun_checks. It is a good idea to reuse an existing Checkstyle configuration for your project if possible – if the style you use is common. However.html#Maven%20Code%20Style. a URL. While this chapter will not go into an example of how to do this. filter the results.xml No longer online – the Avalon project has closed. then it is likely to be more readable and easily learned by people joining your project. These checks are for backwards compatibility only. will look through your source code for known tags and provide a report on those it finds. and to parameterize the Checkstyle configuration for creating a baseline organizational standard that can be customized by individual projects. one or the other will be suitable for most people.org/turbine/common/codestandards. Before completing this section it is worth mentioning the Tag List plugin. It is also possible to share a Checkstyle configuration among multiple projects.html. or a resource within a special dependency also. the Checkstyle documentation provides an excellent reference at. if you have developed a standard that differs from these. and typically.com/docs/codeconv/ Description Reference Sun Java Coding Conventions Maven team's coding conventions Conventions from the Jakarta Turbine project Conventions from the Apache Avalon project config/avalon_checks. 193 . By default. The built-in Sun and Maven standards are quite different.apache. as explained at. or would like to use the additional checks introduced in Checkstyle 3.0 and above. this will identify the tags TODO and @todo in the comments of your source code. @todo. and Tag List are just three of the many tools available for assessing the health of your project's source code.codehaus. In addition to that.net) is the open source tool best integrated with Maven.. you saw that tests are run before the packaging of the library or application for distribution. Knowing whether your tests pass is an obvious and important assessment of their health. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. <plugin> <groupId>org. Another critical technique is to determine how much of your source code is covered by the test execution. it is easy to add a report to the Web site that shows the results of the tests that have been run.sf. While the default Surefire configuration fails the build if the tests fail. In the build life cycle defined in Chapter 2. such as FindBugs. 6. JavaNCSS and JDepend. Cobertura (. based on the theory that you shouldn't even try to use something before it has been tested. PMD. for assessing coverage. Some other similar tools. using this report on a regular basis can be very helpful in spotting any holes in the test plan. Setting Up the Project Web Site. Monitoring and Improving the Health of Your Tests One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code. 194 .xml: . This configuration will locate any instances of TODO. While you are writing your tests.. have beta versions of plugins available from the. and more plugins are being added every day. Checkstyle. or as part of the site).8.. add the following to the reporting section of proficio/pom. the report (run either on its own. It is actually possible to achieve this using Checkstyle or PMD rules.2. it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package. As you learned in section 6.codehaus. however this plugin is a more convenient way to get a simple report of items that need to be addressed at some point later in time. will ignore these failures when generated to show the current test state. At the time of writing.mojo</groupId> <artifactId>taglist-maven-plugin</artifactId> <configuration> <tags> <tag>TODO</tag> <tag>@todo</tag> <tag>FIXME</tag> <tag>XXX</tag> </tags> </configuration> </plugin> .org/ project at the time of this writing.. FIXME. or XXX in your source code.Better Builds with Maven To try this plugin. html. a branch is an if statement that can behave differently depending on whether the condition is true or false. The report contains both an overall summary. For example. Figure 6-10 shows the output that you can view in target/site/cobertura/index. or for which all possible branches were not executed. This includes method and class declarations. in the familiar Javadoc style framed layout.Assessing Project Health with Maven To see what Cobertura is able to report. run mvn cobertura:cobertura in the proficio-core directory of the sample application. • • Unmarked lines with a green number in the second column are those that have been completely covered by the test execution. For a source file. Each line with an executable statement has a number in the second column that indicates during the test run how many times a particular statement was run. Figure 6-10: An example Cobertura report 195 . you'll notice the following markings: • Unmarked lines are those that do not have any executable code associated with them. comments and white space. Lines in red are statements that were not executed (if the count is 0). and a line-by-line coverage analysis of each source file. .codehaus.ser file is deleted. The Cobertura report doesn't have any notable configuration. If this is a metric of interest.. you might consider having PMD monitor it... add the following to the build section of proficio/pom. and is not cleaned with the rest of the project. the database used is stored in the project directory as cobertura.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> . over 10). might indicate a method should be re-factored into simpler pieces. If you now run mvn clean in proficio-core.ser. To ensure that this happens. <build> <plugins> <plugin> <groupId>org. as well as the target directory.. 196 .. While not required. the report will be generated in target/site/cobertura/index. The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage..Better Builds with Maven The complexity indicated in the top right is the cyclomatic complexity of the methods in the class. <plugin> <groupId>org. If you now run mvn site under proficio-core. as it can be hard to visualize and test the large number of alternate code paths. High numbers (for example.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <executions> <execution> <id>clean</id> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </build> .codehaus. Add the following to the reporting section of proficio/pom.html. there is another useful setting to add to the build section. you'll see that the cobertura.xml: . so including it in the site is simple. Due to a hard-coded path in Cobertura..xml: . which measures the number of branches that occur in a particular method. . <execution> <id>check</id> <goals> <goal>check</goal> </goals> </execution> </executions> .Assessing Project Health with Maven To configure this goal for Proficio. This is because Cobertura needs to instrument your class files. However. the check will be performed.. and 100% branch coverage rate. The Surefire report may also re-run tests if they were already run – both of these are due to a limitation in the way the life cycle is constructed that will be improved in future versions of Maven.. looking through the report. If you run mvn verify again. <configuration> <check> <totalLineRate>80</totalLineRate> . This ensures that if you run mvn cobertura:check from the command line. so are not packaged in your application).... Note that the configuration element is outside of the executions. these are instrumented in a separate directory. and the tests are re-run using those class files instead of the normal ones (however. You can do this for Proficio to have the tests pass by changing the setting in proficio/pom. You would have seen in the previous examples that there were some lines not covered. you would add unit tests for the functions that are missing tests. You'll notice that your tests are run twice. the check passes.. and decide to reduce the overall average required.. add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database: . <configuration> <check> <totalLineRate>100</totalLineRate> <totalBranchRate>100</totalBranchRate> </check> </configuration> <executions> . The rules that are being used in this configuration are 100% overall line coverage rate. If you now run mvn verify under proficio-core..xml: . so running the check fails.. Normally. 197 . you may decide that only some exceptional cases are untested. This wouldn't be the case if it were associated with the life-cycle bound check execution. as in the Proficio example. the configuration will be applied. For example. For more information.org/plugins/maven-clover-plugin/. although not yet integrated with Maven directly. It behaves very similarly to Cobertura. and get integration with these other tools for free. Consider setting any package rates higher than the per-class rate. For more information. Remember. using packageLineRate and packageBranchRate. these reports work unmodified with those test types. exceptional cases – and that's certainly not something you want! The settings above are requirements for averages across the entire source tree. refer to the Cobertura plugin configuration reference at. the easiest way to increase coverage is to remove code that handles untested. Some helpful hints for determining the right code coverage settings are: • • • • • • Like all metrics. using lineRate and branchRate. If you have another tool that can operate under the Surefire framework. Set some known guidelines for what type of code can remain untested. It also won't tell you whether the results of untested input values produce the correct results. Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven.sf. there is more to assessing the health of tests than success and coverage. It is just as important to allow these exceptions.apache. only allowing a small number of lines to be untested. as it will discourage writing code to handle exceptional cases that aren't being tested. or as the average across each package. These reports won't tell you if all the features have been implemented – this requires functional or acceptance testing. it is possible for you to write a provider to use the new tool. In both cases.Better Builds with Maven These settings remain quite demanding though. Tools like Jester (. such as handling checked exceptions that are unexpected in a properly configured system and difficult to test. so that they understand and agree with the choice. and at the time of writing experimental JUnit 4. it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the tools above will work for any type of runner introduced. Don't set it too low. The best known commercial offering is Clover. It is also possible to set requirements on individual packages or classes using the regexes parameter. as it is to require that the other code be tested. Of course.0 support is also available. To conclude this section on testing. see the Clover plugin reference on the Maven Web site at. Jester mutates the code that you've already determined is covered and checks that it causes the test to fail when run a second time with the wrong code. involve the whole development team in the decision. as it will become a minimum benchmark to attain and rarely more. Cobertura is not the only solution available for assessing test coverage. Remain flexible – consider changes over time rather than hard and fast rules. Surefire supports tests written with TestNG. may be of assistance there. and you can evaluate it for 30 days when used in conjunction with Maven. Don't set it too high.net). You may want to enforce this for each file individually as well.codehaus. and setting the total rate higher than both.org/cobertura-maven-plugin. Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally. which is very well integrated with Maven as well. This will allow for some constructs to remain untested. 198 . Figure 6-11: An example dependency report 199 . but any projects that depend on your project. run mvn site in the proficio-core directory. This brought much more power to Maven's dependency mechanism. Left unchecked. where the dependencies of dependencies are included in a build. While this is only one of Maven's features.9. the full graph of a project's dependencies can quickly balloon in size and start to introduce conflicts. and a number of other features such as scoping and version selection. Maven 2.Assessing Project Health with Maven 6. used well it is a significant time saver. and browse to the file generated in target/site/dependencies. If you haven't done so already. but does introduce a drawback: poor dependency maintenance or poor scope and version selection affects not only your own project. The result is shown in figure 6-11.html.0 introduced transitive dependencies. The first step to effectively maintaining your dependencies is to review the standard report included with the Maven site. Monitoring and Improving the Health of Your Dependencies Many people use Maven primarily as a dependency manager. 0-SNAPSHOT junit:3.8. an incorrect version. It's here that you might see something that you didn't expect – an extra dependency. but that it is overridden by the test scoped dependency in proficio-core. and must be updated before the project can be released. Currently. proficio-model is introduced by proficio-api.0-SNAPSHOT (selected for compile) proficio-model:1.0-SNAPSHOT (selected for compile) Here you can see that. for example. here is the resolution process of the dependencies of proficio-core (some fields have been omitted for brevity): proficio-core:1. To see the report for the Proficio project. which indicates dependencies that are in development. as well as comments about what versions and scopes are selected. Whether there are outstanding SNAPSHOT dependencies in the build.Better Builds with Maven This report shows detailed information about your direct dependencies. Another report that is available is the “Dependency Convergence Report”.1 (selected for test) plexus-container-default:1. The file target/site/dependencyconvergence. This report is also a standard report. This helps ensure your build is consistent and reduces the probability of introducing an accidental incompatibility. local scope test wins) proficio-api:1. A dependency graphing plugin that will render a graphical representation of the information. but appears in a multi-module build only. and is shown in figure 6-12.4 (selected for compile) classworlds:1.1 (not setting scope to: compile. and that plexus-container-default attempts to introduce junit as a compile dependency.1-alpha-2 (selected for compile) junit:3. but more importantly in the second section it will list all of the transitive dependencies included through those dependencies. run mvn site from the base proficio directory. or an incorrect scope – and choose to investigate its inclusion. For example. this requires running your build with debug turned on. and why. It also includes some statistics and reports on two important factors: • Whether the versions of dependencies used for each module is in alignment.0.0-alpha-9 (selected for compile) plexus-utils:1.8. The report shows all of the dependencies included in all of the modules within the project. • 200 . such as mvn -X package.html will be created. using indentation to indicate which dependencies introduce other dependencies. This will output the dependency tree as it is calculated. so at the time of this writing there are two features in progress that are aimed at helping in this area: • • The Maven Repository Manager will allow you to navigate the dependency tree through the metadata stored in the Ibiblio repository. This can be quite difficult to read. declaring the absolute minimum supported as the lower boundary. rather than using the latest available. they can provide basic help in identifying the state of your dependencies once you know what to find. Use a range of supported dependency versions. Add exclusions to dependencies to remove poorly defined dependencies from the tree. This is particularly the case for dependencies that are optional and unused by your project. However. try the following recommendations for your dependencies: • • • • Look for dependencies in your project that are no longer used Check that the scope of your dependencies are set correctly (to test if only used for unit tests. 201 . or runtime if it is needed to bundle with or run the application but not for compiling your source code).Assessing Project Health with Maven Figure 6-12: The dependency convergence report These reports are passive – there are no associated checks for them. To improve your project's health and the ability to reuse it as a dependency itself. You can control what version is actually used by declaring the dependency version in a project that packages or runs the application. 10. An example Clirr report is shown in figure 6-13. and the information released with it. but then expected to continue working as they always have. Because existing libraries are not recompiled every time a version is changed. Monitoring and Improving the Health of Your Releases Releasing a project is one of the most important procedures you will perform. While the next chapter will go into more detail about how Maven can help automate that task and make it more reliable. but there are plans for more: • • A class analysis plugin that helps identify dependencies that are unused in your current project Improved dependency management features including different mechanisms for selecting versions that will allow you to deal with conflicting versions. Libraries will often be substituted by newer versions to obtain new features or bug fixes. Catching these before a release can eliminate problems that are quite difficult to resolve once the code is “in the wild”. Figure 6-13: An example Clirr report This is particularly important if you are building a library or framework that will be consumed by developers outside of your own project. 6. this section will focus on improving the quality of the code released. more tools are needed in Maven.Better Builds with Maven Given the importance of this task. Clirr detects whether the current version of a library has introduced any binary incompatibilities with the previous release.net/).sf. 202 . there is no verification that a library is binary-compatible – incompatibility will be discovered only when there's a failure. and more. but it is often tedious and error prone. An important tool in determining whether a project is ready to be released is Clirr (. specification dependencies that let you depend on an API and manage the implementation at runtime. Two that are in progress were listed above. This is particularly true in a Maven-based environment. This gives you an overview of all the changes since the last release.. Maven currently works best if any version of an artifact is backwards compatible.8 release... <reporting> <plugins> <plugin> <groupId>org.9 ------------------------------------------------------------BUILD SUCCESSFUL ------------------------------------------------------------- This version is determined by looking for the newest release in repository.html.Assessing Project Health with Maven But does binary compatibility apply if you are not developing a library for external consumption? While it may be of less importance. As a project grows. add the following to the reporting section of proficio-api/pom.xml: . However. even if they are binary compatible. If you run mvn site in proficio-api.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <minSeverity>info</minSeverity> </configuration> </plugin> </plugins> </reporting> . or a quick patch may need to be made and a new version deployed into an existing application. back to the first release...9 of proficio-api against which to compare (and that it is downloaded if you don't have it already): . For example. by setting the minSeverity parameter. the interactions between the project's own components will start behaving as if they were externally-linked. you can configure the plugin to show all informational messages.. You can obtain the same result by running the report on its own using mvn clirr:clirr. To see this in action. to compare the current code to the 0.codehaus.8 203 . [clirr:clirr] Comparing to version: 0. the Clirr report shows only errors and warnings. If you run either of these commands. you'll notice that Maven reports that it is using version 0. where the dependency mechanism is based on the assumption of binary compatibility between versions.. While methods of marking incompatibility are planned for future versions. [INFO] [INFO] [INFO] [INFO] [INFO] . By default. that is before the current development version. the answer here is clearly – yes.. the report will be generated in target/site/clirrreport. Different modules may use different versions. You can change the version used with the comparisonVersion parameter. run the following command: mvn clirr:clirr -DcomparisonVersion=0. You'll notice there are a more errors in the report.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . Like all of the quality metrics. however you can see the original sources by extracting the Code_Ch06-2. so that fewer people are affected.. to discuss and document the practices that will be used. 204 . To add the check to the proficio-api/pom. there is nothing in Java preventing them from being used elsewhere. as it will be used as the interface into the implementation by other applications. and later was redesigned to make sure that version 1. <build> <plugins> <plugin> <groupId>org. The Clirr plugin is also capable of automatically checking for introduced incompatibilities through the clirr:check goal.codehaus. This is the most important one to check. on the acceptable incompatibilities.Better Builds with Maven These versions of proficio-api are retrieved from the repository. it is important to agree up front. </plugins> </build> . it is a good idea to monitor as many components as possible. Even if they are designed only for use inside the project. you are monitoring the proficio-api component for binary compatibility changes only.. since this early development version had a different API. and to check them automatically. it is almost always preferable to deprecate an old API and add a new one... if the team is prepared to do so. It is best to make changes earlier in the development cycle.0 would be more stable in the long run.zip file.. rather than removing or changing the original API and breaking binary compatibility. and it can assist in making your own project more stable. the harder they are to change as adoption increases.xml file. delegating the code. Once a version has been released that is intended to remain binary-compatible going forward. In this instance. add the following to the build section: . then there is no point in checking the others – it will create noise that devalues the report's content in relation to the important components. However. The longer poor choices remain.. If it is the only one that the development team will worry about breaking. It has a functional Maven 2 plugin. it takes a very different approach. This can be useful in getting a greater level of detail than Clirr on specific class changes. which is available at. 205 . and then act accordingly.9 preview release and the final 1. it will not pinpoint potential problems for you. Hopefully a future version of Clirr will allow acceptable incompatibilities to be documented in the source code. However. taking two source trees and comparing the differences in method signatures and Javadoc annotations. <plugin> <groupId>org. and particularly so if you are designing a public API. as well as strategies for evolving an API without breaking it.codehaus. the following articles and books can be recommended: • Evolving Java-based APIs contains a description of the problem of maintaining binary compatibility. and ignored in the same way that PMD does.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <excludes> <exclude>**/Proficio</exclude> </excludes> </configuration> . it is listed only in the build configuration. This allows the results to be collected over time to form documentation about known incompatibilities for applications using the library.0 version. Effective Java describes a number of practical rules that are generally helpful to writing code in Java. you will see that the build fails due to the binary incompatibility introduced between the 0. Note that in this instance. you can create a very useful mechanism for identifying potential release disasters much earlier in the development process. A limitation of this feature is that it will eliminate a class entirely. so the report still lists the incompatibility.Assessing Project Health with Maven If you now run mvn verify. and so is most useful for browsing.codehaus..org/jdiff-maven-plugin. </plugin> This will prevent failures in the Proficio class from breaking the build in the future. • A similar tool to Clirr that can be used for analyzing changes between releases is JDiff. you can choose to exclude that from the report by adding the following configuration to the plugin: . Since this was an acceptable incompatibility due to the preview nature of the 0.. With this simple setup. not just the one acceptable failure. Built as a Javadoc doclet. While the topic of designing a strong public API and maintaining binary compatibility is beyond the scope of this book...9 release. How well this works in your own projects will depend on the development culture of your team. The additions and changes to Proficio made in this chapter can be found in the Code_Ch06-1. and run in the appropriate environment. along with techniques to ensure that the build checks are now automated. but none related information from another report to itself. Some of the reports linked to one another. 206 . they did not address all of these requirements. it requires a shift from a focus on time and deadlines.xml). then. this focus and automated monitoring will have the natural effect of improving productivity and reducing time of delivery again. each in discrete reports. Most Maven plugins allow you to integrate rules into the build that check certain constraints on that piece of information once it is well understood. In some cases. Summary The power of Maven's declarative project model is that with a very simple setup (often only 4 lines in pom.11. the Dashboard plugin). a large amount of information was presented about a project. While some attempts were made to address this in Maven 1. and incorporates the concepts learned in this chapter. 6. it is important that your project information not remain passive.0.zip source archive. enforcing good. It is important that developers are involved in the decision making process regarding build constraints. Finally. so that they feel that they are achievable. and few of the reports aggregated information across a multiple module build. and have not yet been implemented for Maven 2. regularly scheduled. Best of all. However. to a focus on quality. none of the reports presented how the information changes over time other than the release announcements. and as the report set stabilizes – summary reports will start to appear.Better Builds with Maven 6. Viewing Overall Project Health In the previous sections of this chapter. However. Once established. of the visual display is to aid in deriving the appropriate constraints to use. a new set of information about your project can be added to a shared Web site to help your team visualize the health of the project. as there is a constant background monitor that ensures the health of the project is being maintained. These are all important features to have to get an overall view of the health of a project. the model remains flexible enough to make it easy to extend and customize the information published on your project web site. In the absence of these reports.12. individual checks that fail the build when they're not met.0 (for example. and will be used as the basis for the next chapter. it should be noted that the Maven reporting API was written with these requirements in mind specifically. The purpose. will reduce the need to gather information from various sources about the health of the project. The next chapter examines team development and collaboration. .7. . in rapid. whether it is 2 people or 200 people. real-time stakeholder participation. the fact that everyone has direct access to the other team members through the CoRE framework reduces the time required to not only share information. it does encompass a set of practices and tools that enable effective team communication and collaboration. working on complex.. web-based communication channels and web-based project management tools. and asynchronous engineering. However. further contributing to the problem. This problem gets exponentially larger as the size of the team increases.Better Builds with Maven 7. Many of these challenges are out of any given technology's control – for instance finding the right people for the team. although a distributed team has a higher communication overhead than a team working in a single location. will inevitably have to spend time obtaining this localized information. resulting in shortened development cycles. which is enabled by the accessibility of consistently structured and organized information such as centralized code repositories. Even though teams may be widely distributed. and dealing with differences in opinions. A Community-oriented Real-time Engineering (CoRE) process excels with this information challenge. the key to the information issue in both situations is to reduce the amount of communication necessary to obtain the required information in the first place. As teams continue to grow. but also to incorporate feedback. iterative cycles. Using the model of a community. Even when it is not localized. one of the biggest challenges relates to the sharing and management of development information. it's just as important that they don't waste valuable time researching and reading through too many information sources simply to find what they need. The Issues Facing Teams Software development as part of a team. As each member retains project information that isn't shared or commonly accessible. An organizational and technology-based framework. and document for reuse the artifacts that result from a software project.1. While it's essential that team members receive all of the project information required to be productive. CoRE enables globally distributed development teams to cohesively contribute to high-quality software. The CoRE approach to development also means that new team members are able to become productive quickly. CoRE emphasizes the relationship between project information and project members. or forgotten. rapid development. visualize. faces a number of challenges to the success of the effort. misinterpreted. However. These tools aid the team to organize. repeating errors previously solved or duplicating efforts already made. 208 . component-based projects despite large. every other member (and particularly new members). This value is delivered to development teams by supporting project transparency. project information can still be misplaced. and that existing team members become more productive and effective. While Maven is not tied directly to the CoRE framework. CoRE is based on accumulated learnings from open source projects that have achieved successful. widely-distributed teams. multiple JDK versions. varying operating systems.m2 subdirectory of your home directory (settings in this location take precedence over those in the Maven installation directory). there are unavoidable variables that remain. In a shared development environment. it's a good idea to leverage Maven's two different settings files to separately manage shared and user-specific settings. In Maven. error-prone and full of omissions. In Chapter 2.2. and to user-specific profiles. such as proxy settings. because the environment will tend to evolve inconsistently once started that way. it will be the source of timeconsuming development problems in the future. This file can be stored in the conf directory of your Maven installation. How to Set up a Consistent Developer Environment Consistency is important when establishing a shared development environment. while an individual developer's settings are stored in their home directory. 7. The settings. In this chapter. these variables relate to the user and installation settings files. such as different installation locations for software. demonstrating how Maven provides teams with real-time information on the builds and health of a project. you learned how to create your own settings.xml file. This chapter also looks at the adoption and use of a consistent development environment. Common configuration settings are included in the installation directory. through the practice of continuous integration.xml file contains a number of settings that are user-specific. the key is to minimize the configuration required by each individual developer.Team Collaboration with Maven As described in Chapter 6. the set up process for a new developer can be slow. and to effectively define and declare them. To maintain build consistency. While one of Maven's objectives is to provide suitable conventions to reduce the introduction of inconsistencies in the build environment. Maven can gather and share the knowledge about the health of a project. Additionally. but also several that are typically common across users in a shared environment. while still allowing for this natural variability. 209 . this is taken a step further. and the use of archetypes to ensure consistency in the creation of new projects. or in the . Without it. and other discrete settings such as user names and passwords. com/internal/</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>property-overrides</activeProfile> <activeProfile>default-repositories</activeProfile> </activeProfiles> <pluginGroups> <pluginGroup>com.Better Builds with Maven The following is an example configuration file that you might use in the installation directory.mycompany.com/internal/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>internal</id> <name>Internal Plugin Repository</name> <url>: ></pluginGroup> </pluginGroups> </settings> 210 . <maven home>/conf/settings. See section 7. across users. issues with inconsistently-defined identifiers and permissions are avoided. you can easily add and consistently roll out any new server and repository settings. the local repository is defined as the repository of a single user. You'll notice that the local repository is omitted in the prior example. it would usually be set consistently across the organization or department. The mirror element can be used to specify a mirror of a repository that is closer to you. Another profile. it is important that you do not configure this setting in a way that shares a local repository. without having to worry about integrating local changes made by individual developers. By placing the common configuration in the shared settings.username>myuser</website. The previous example forms a basic template that is a good starting point for the settings file in the Maven installation.home}/maven-repo). In Maven.username> </properties> </profile> </profiles> </settings> To confirm that the settings are installed correctly.3 of this chapter for more information on creating a mirror of the central repository within your own organization. The user-specific configuration is also much simpler as shown below: <settings> <profiles> <profile> <id>property-overrides</id> <properties> <website. property-overrides is also enabled by default.Team Collaboration with Maven There are a number of reasons to include these settings in a shared configuration: • • • • • • If a proxy server is allowed. While you may define a standard location that differs from Maven's default (for example. The active profiles listed enable the profile defined previously in every environment.3 for more information on setting up an internal repository. See section 7. These repositories are independent of the central repository in this configuration. at a single physical location.username}. Using the basic template. with only specific properties such as the user name defined in the user's settings. This profile will be defined in the user's settings file to set the properties used in the shared file. ${user. internal repositories that contain a given organization's or department's released artifacts. The server settings will typically be common among a set of developers. The profile defines those common. The plugin groups are necessary only if an organization has plugins. you can view the merged result by using the following help plugin command: C:\mvnbook> mvn help:effective-settings 211 . which is typically one that has been set up within your own organization or department. which are run from the command line and not defined in the POM. such as ${website. 1. Now that each individual developer on the team has a consistent set up that can be customized as needed.xml file covers the majority of use cases for individual developer customization.1. by one of the following methods: • Using the M2_HOME environment variable to force the use of a particular installation. Each developer can check out the installation into their own machines and run it from there. but it is also important to ensure that the shared settings are easily and reliably installed with Maven. the most popular is HTTP. Configuring the settings. download the Jetty 5. and run: C:\mvnbook\repository> java -jar jetty-5. For an explanation of the different types of repositories. or other custom solution. Jetty. The following are a few methods to achieve this: • • • • Rebuild the Maven release distribution to include the shared configuration file and distribute it internally. While any of the available transport protocols can be used. or any number of other servers. easily updated. Creating a Shared Repository Most organizations will need to set up one or more shared repositories.xml file. organization's will typically want to set up what is referred to as an internal repository. the next step is to establish a repository to and from which artifacts can be published and dependencies downloaded.jar 8081 212 . You can use an existing HTTP server for this. • Adjusting the path or creating symbolic links (or shortcuts) to the desired Maven executable. However. developers must use profiles in the profiles. since not everyone can deploy to the central Maven repository. or if there are network problems. For more information on profiles. 7. A new release will be required each time the configuration is changed. see Chapter 2. Setting up an internal repository is simple. Change to that directory. This internal repository is still treated as a remote repository in Maven. however it applies to all projects that are built in the developer's environment. doing so will prevent Maven from being available off-line. but requires a manual procedure. To set up your organization's internal repository using Jetty. each execution will immediately be up-to-date. Place the Maven installation on a read-only shared or network drive from which each developer runs the application. If necessary. Subversion.10-bundle. create a new directory in which to store the files. located in the project directory. To do this. just as any other external repository would be.10 server bundle from the book's Web site and copy it to the repository directory. Retrieving an update from an SCM will easily update the configuration and/or installation. Apache Tomcat. If this infrastructure is available. so that multiple developers and teams can collaborate effectively.Better Builds with Maven Separating the shared settings from the user-specific settings is helpful.3. Use an existing desktop management solution. To set up Jetty. In some circumstances however. and when possible. in this example C:\mvnbook\repository will be used. Check the Maven installation into CVS. While it can be stored anywhere you have permissions. an individual will need to customize the build of an individual project. it is possible to maintain multiple Maven installations. or create a new server using Apache HTTPd. or other source control management (SCM) system. if M2_HOME is not set. To publish releases for use across different environments within their network. see Chapter 3. This creates an empty repository.apache. sftp and more. and reporting. you will want to set up or use an existing HTTP server that is in a shared. To populate the repository you just created. as well as friendly repository browsing. You can create a separate repository under the same server. ftp. the size of the Maven repository was 5. This will download anything that is not already present. scp. • The Maven Repository Manager (MRM) is a new addition to the Maven build platform that is designed to administer your internal repository.Team Collaboration with Maven You can now navigate to and find that there is a web server running displaying that directory. using the following command: C:\mvnbook\repository> mkdir central This repository will be available at. However. Your repository is now set up. you can store the repositories on this single server. In addition. it provides faster performance (as most downloads to individual developers come from within their own network). there are a number of methods available: • • Manually add content as desired using mvn deploy:deploy-file Set up the Maven Repository Manager as a proxy to the central repository. 213 . This chapter will assume the repositories are running from and that artifacts are deployed to the repositories using the file system. and keep a copy in your internal repository for others on your team to reuse. and is all that is needed to get started. create a subdirectory called internal that will be available at. separate repositories. it is common in many organizations as it eliminates the requirement for Internet access or proxy configuration. However. Later in this chapter you will learn that there are good reasons to run multiple. For the first repository. The repository manager can be downloaded from. by avoiding any reliance on Maven's relatively open central repository. It is deployed to your Jetty server (or any other servlet container) and provides remote repository proxies. Use rsync to take a copy of the central repository and regularly update it. and gives full control over the set of artifacts with which your software is built. While this isn't required. configured securely and monitored to ensure it remains running at all times. The server is set up on your own workstation for simplicity in this example. At the time of writing. refer to Chapter 3. but rather than set up multiple web servers. accessible location. it is possible to use a repository on another server with any combination of supported protocols including http. searching. For more information. C:\mvnbook\repository> mkdir internal It is also possible to set up another repository (or use the same one) to mirror content from the Maven central repository. hierarchy. so that a project can add repositories itself for dependencies located out of those repositories configured initially. Repositories such as the one above are configured in the POM usually. it is necessary to declare only those that contain an inherited POM. To override the central repository with your internal repository. it must retrieve the parent from the repository. and as a result. this must be defined as both a regular repository and a plugin repository to ensure all access is consistent. This makes it impossible to define the repository in the parent. On the other hand. However. you should override the central repository. unless you have mirrored the central repository using one the techniques discussed previously.Better Builds with Maven When using this repository for your projects. for a situation where a developer might not have configured their settings and instead manually installed the POM. 214 . Not only is this very inconvenient.2. It is still important to declare the repositories that will be used in the top-most POM itself. to configure the repository from the project level instead of in each user's settings (with one exception that will be discussed next). that declares shared settings within an organization and its departments. otherwise Maven will fail to download any dependencies that are not in your local repository. it would be a nightmare to change should the repository location change! The solution is to declare your internal repository (or central replacement) in the shared settings. You would use it as a mirror if it is intended to be a copy of the central repository exclusively. or had it in their source code check out. as shown in section 7. Usually. and if it's acceptable to have developers configure this in their settings as demonstrated in section 7. there are two choices: use it as a mirror. or have it override the central repository. you must define a repository in a settings file and/or POM that uses the identifier central.2. The next section discusses how to set up an “organization POM”. it would need to be declared in every POM. or the original central repository directly without consequence to the outcome of the build. or to include your own artifacts in the same repository. if you want to prevent access to the central repository for greater control. Developers may choose to use a different mirror. there is a problem – when a POM inherits from another POM that is not in the central repository. If you have multiple repositories.xml file. Any number of levels (parents) can be used. consistency is important when setting up your build infrastructure.. While project inheritance was limited by the extent of a developer's checkout in Maven 1. and then the teams within those departments. the easiest way to version a POM is through sequential numbering. its departments.. or the organization as a whole. It is important to recall. consider the POM for Maven SCM: <project> <modelVersion>4. itself. 215 . depending on the information that needs to be shared. from section 7. the current project – Maven 2 now retrieves parent projects from the repository. You may have noticed the unusual version declaration for the parent project. that if your inherited projects reside in an internal repository. Since the version of the POM usually bears no resemblance to the software.4. wherein there's the organization. <modules> <module>maven-scm-api</module> <module>maven-scm-providers</module> .. As an example. By declaring shared elements in a common parent POM. so it's possible to have one or more parents that define elements common to several projects. etc.maven. which is shared across all Maven projects through inheritance. and is a project that.apache. there are three levels to consider when working with any individual module that makes up the Maven project. you'd find that there is very little deployment or repositoryrelated information.0 – that is.maven</groupId> <artifactId>maven-parent</artifactId> <version>1</version> </parent> <groupId>org. Creating an Organization POM As previously mentioned in this chapter. consider the Maven project itself.3.0</modelVersion> <parent> <groupId>org. These parents (levels) may be used to define departments. Maven SCM.Team Collaboration with Maven 7. Future versions of Maven plan to automate the numbering of these types of parent projects to make this easier. This project structure can be related to a company structure. project inheritance can be used to assist in ensuring project consistency.org/maven-scm/</url> . As a result.apache.0.apache. </modules> </project> If you were to review the entire POM.scm</groupId> <artifactId>maven-scm</artifactId> <url>. as this is consistent information. has a number of sub-projects (Maven. It is a part of the Apache Software Foundation.).xml file in the shared installation (or in each developer's home directory). To continue the Maven example. then that repository will need to be added to the settings.. Maven Continuum. .apache.org</post> ... <mailingLists> <mailingList> <name>Maven Announcements List</name> <post>[email protected]</modelVersion> <parent> <groupId>org. </mailingList> </mailingLists> <developers> <developer> ... you'd see it looks like the following: <project> <modelVersion>4.Better Builds with Maven If you look at the Maven project's parent POM. </developer> </developers> </project> 216 .apache</groupId> <artifactId>apache</artifactId> <version>1</version> </parent> <groupId>org.org/</url> .maven</groupId> <artifactId>maven-parent</artifactId> <version>5</version> <url>. For this reason.6). and the deployment locations. when working with this type of hierarchy.org/</url> </organization> <url>.. <repositories> <repository> <id>apache. 217 .apache.. Source control management systems like CVS and SVN (with the traditional intervening trunk directory at the individual project level) do not make it easy to store and check out such a structure. In fact. and deployed with their new version as appropriate. it is best to store the parent POM files in a separate area of the source control tree. An issue that can arise. in this case the Apache Software Foundation: <project> <modelVersion>4.apache. modified.apache.apache</groupId> <artifactId>apache</artifactId> <version>1</version> <organization> <name>Apache Software Foundation</name> <url>. is regarding the storage location of the source POM files. such as the announcements mailing list and the list of developers that work across the whole project. and less frequent schedule than the projects themselves. you can retain the historical versions in the repository if it is backed up (in the future. These parent POM files are likely to be updated on a different.0. </snapshotRepository> </distributionManagement> </project> The Maven project declares the elements that are common to all of its sub-projects – the snapshot repository (which will be discussed further in section 7. </repository> <snapshotRepository> .snapshots</id> <name>Apache Snapshot Repository</name> <url></modelVersion> <groupId>org... most of the elements are inherited from the organization-wide parent project. <distributionManagement> <repository> . the Maven Repository Manager will allow POM updates from a web interface).org/maven-snapshot-repository</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories> ...org/</url> . Again. there is no best practice requirement to even store these files in your source control management system.Team Collaboration with Maven The Maven parent POM includes shared elements.. where they can be checked out.. For most installations this is all the configuration that's required. As such. and learn how to use Continuum to build this project on a regular basis. More than just nightly builds. continuous integration can enable a better development culture where team members can make smaller. In this chapter. press Ctrl-C in the window that is running Continuum). as well as the generic bin/plexus. You can verify the installation by viewing the web site at. The configuration on the screen is straight forward – all you should need to enter are the details of the administration account you'd like to use.org/.3.Better Builds with Maven 7.tigris. and you must stop the server to make the changes (to stop the server. 218 . As of Continuum 1. which you can obtain for your operating system from. The examples discussed are based on Continuum 1. Continuous Integration with Continuum If you are not already familiar with it. you will pick up the Proficio example from earlier in the book.0.sh for use on other Unix-based platforms. This is very simple – once you have downloaded it and unpacked it.3> bin\win32\run There are scripts for most major platforms.0. however. iterative changes that can more easily support concurrent development processes. continuous integration is a key element of effective collaboration. The examples also assumes you have Subversion installed. The first screen to appear will be the one-time setup page shown in figure 7-1.3. however newer versions should be similar. you can run it using the following command: C:\mvnbook\continuum-1. continuous integration enables automated builds of your project on a regular interval. these additional configuration requirements can be set only after the previous step has been completed. if you are running Continuum on your desktop and want to try the examples in this section. some additional steps are required. Starting up continuum will also start a http server and servlet engine. rather than close to a release. First. ensuring that conflicts are detected earlier in a project's release life cycle. Continuum is Maven's continuous integration and build server. you will need to install Continuum.5.0. and the company information for altering the logo in the top left of the screen. Team Collaboration with Maven Figure 7-1: The Continuum setup screen To complete the Continuum setup page. you can cut and paste field values from the following list: Field Name Value working-directory Working Directory Build Output Directory build-output-directory Base URL 219 . POM files will be read from the local hard disk where the server is running.plexus. you can start Continuum again. This requires obtaining the Code_Ch07. After these steps are completed.org/continuum/guides/mini/guide-configuration. The default is to use localhost:25. edit the file above to change the smtp-host setting. The next step is to set up the Subversion repository for the examples.formica. edit apps/continuum/conf/application.zip archive and unpacking it in your environment....html..Better Builds with Maven In the following examples. this is disabled as a security measure.xml and verify the following line isn't commented out: .validation. refer to. you will also need an SMTP server to which to send the messages. <implementation> org. By default.. If you do not have this set up on your machine. since paths can be entered from the web interface. for example if it was unzipped in C:\mvnbook\svn: C:\mvnbook> svn co \ proficio 220 .. You can then check out Proficio from that location.apache. <allowedScheme>file</allowedScheme> </allowedSchemes> </configuration> . For instructions.UrlValidator </implementation> <configuration> <allowedSchemes> . To enable this setting. To have Continuum send you e-mail notifications.codehaus. 3 for information on how to set this up. The distributionManagement setting will be used in a later example to deploy the site from your continuous integration environment... <distributionManagement> <site> <id>website</id> <url> /reference/${project..Team Collaboration with Maven The POM in this repository is not completely configured yet.xml to correct the e-mail address to which notifications will be sent. since not all of the required details were known at the time of its creation. and edit the location of the Subversion repository.version} </url> </site> </distributionManagement> . commit the file with the following command: C:\mvnbook\proficio> svn ci -m 'my settings' pom.. with the following command: C:\mvnbook\proficio> mvn install 221 . If you haven't done so already. This assumes that you are still running the repository Web server on localhost:8081.. by uncommenting and modifying the following lines: .. refer to section 7.. <ciManagement> <system>continuum</system> <url> <notifiers> <notifier> <type>mail</type> <configuration> <address>youremail@yourdomain. from the directory C:\mvnbook\repository. The ciManagement section is where the project's continuous integration is defined and in the above example has been configured to use Continuum locally on port 8080.com</address> </configuration> </notifier> </notifiers> </ciManagement> . Edit proficio/pom.xml You should build all these modules to ensure everything is in order. Once these settings have been edited to reflect your setup. <scm> <connection> scm:svn: </connection> <developerConnection> scm:svn: </developerConnection> </scm> .. you must either log in with the administrator account you created during installation.Better Builds with Maven You are now ready to start using Continuum. While uploading is a convenient way to configure from your existing check out. and each of the modules will be added to the list of projects. Figure 7-2: Add project screen shot This is all that is required to add a Maven 2 project to Continuum. Before you can add a project to the list. Instead. you will enter either a HTTP URL to a POM in the repository. the builds will be marked as New and their checkouts will be queued. When you set up your own system later. as in the Proficio example. or a Subversion HTTP server. This will present the screen shown in figure 7-2. in Continuum 1. or upload from your local drive. under the Continuum logo.0+ Project from the Add Project menu. Initially.3 this does not work when the POM contains modules. Once you have logged in. Continuum will return to the project summary page. After submitting the URL. you can now select Maven 2. 222 . or perform other tasks.0. You have two options: you can provide the URL for a POM. or with another account you have since created with appropriate permissions. you will see an empty project list. enter the file:// URL as shown. The result is shown in figure 7-3. If you return to the location that was set up previously. The login link is at the top-left of the screen. a ViewCVS installation. and send an e-mail notification if there are any problems. check the file in: C:\mvnbook\proficio\proficio-api> svn ci -m 'introduce error' \ src/main/java/com/mergere/mvnbook/proficio/Proficio. This chapter will not discuss all of the features available in Continuum. MSN and Google Talk are all supported. First. The Build History link can be used to identify the failed build and to obtain a full output log.. go to your earlier checkout and introduce an error into Proficio.] public Proficio [.] Now.. Jabber. The build in Continuum will return to the successful state.java. for example. In addition. you might want to set up a notification to your favorite instant messenger – IRC. press Build Now on the Continuum web interface next to the Proficio API module. marking the left column with an “!” to indicate a failed build (you will need to refresh the page using the Show Projects link in the navigation to see these changes). the build will show an “In progress” status. 223 .. If you want to put this to the test.Team Collaboration with Maven Figure 7-3: Summary page after projects have built Continuum will now build the project hourly.. you should receive an e-mail at the address you configured earlier. For example. restore the file above to its previous state and commit it again. but you may wish to go ahead and try them.java Finally. remove the interface keyword: [. To avoid receiving this error every hour. and then fail. separate from QA and production releases. In Chapter 6. Continuous integration is most beneficial when tests are validating that the code is working as it always has. Avoid customizing the JDK. if it isn't something already in use in other development. Consider a regular. This will make it much easier to detect the source of an error when the build does break. operating system and other variables. you learned how to create an effective site containing project information and reports about the project's health and vitality. before the developer moves on or loses focus. there are two additional topics that deserve special attention: automated updates to the developer web site. it is beneficial to test against all different versions of the JDK. When a failure occurs in the continuous integration environment. it is often ignored. the continuous integration environment should be set up for all of the active branches. it is recommend that a separate. but rather keeping changes small and well tested. but it is best to detect a failure as soon as possible. not just that the project still compiles after one or more changes occur. For these reports to be of value. iterative builds are helpful in some situations. 224 . This will be constrained by the length of the build and the available resources on the build machine. and profile usage. and a future version will allow developers to request a fresh checkout. Though it would be overkill to regenerate the site on every commit. Continuum has preliminary support for system profiles and distributed testing. Continuum can be configured to trigger a build whenever a commit occurs. While this seems obvious. and independent of the environment being used. Run builds as often as possible. This is another way continuous integration can help with project collaboration and communication. there are a few tips for getting the most out of the system: • Commit early. run a servlet container to which the application can be deployed from the continuous integration environment. This doesn’t mean committing incomplete code. if the source control repository supports postcommit hooks. Build all of a project's active branches. for example. but regular schedule is established for site generation. it is also important that failures don't occur due to old build state. it is important that it can be isolated to the change that caused it. clean build. This can be helpful for non-developers who need visibility into the state of the application. This also means that builds should be fast – long integration and performance tests should be reserved for periodic builds. If multiple branches are in development. In addition. If the application is a web application. Run a copy of the application continuously. While rapid. Continuous integration is most effective when developers commit regularly. they need to be kept up-todate. enhancements that are planned for future versions. Fix builds as soon as possible. Establish a stable environment. test and production environments. Continuous integration will be pointless if developers repetitively ignore or delete broken build notifications. or local settings. Continuum currently defaults to doing a clean build.Better Builds with Maven Regardless of which continuous integration server you use. Run clean builds. Run comprehensive tests. commit often. based on selected schedules. and your team will become desensitized to the notifications in the future. • • • • • • • In addition to the above best practices. periodically. . Figure 7-4: Schedule configuration To complete the schedule configuration.. It is not typically needed if using Subversion. since commits are not atomic and a developer might be committing midway through a update. 225 . which will be configured to run every hour during business hours (8am – 4pm weekdays). from the Administration menu on the left-hand side. The example above runs at 8:00:00. The “quiet period” is a setting that delays the build if there has been a commit in the defined number of seconds prior.. Click the Add button to add a new schedule.html. Next.Team Collaboration with Maven Verify that you are still logged into your Continuum instance. This is useful when using CVS.com/quartz/api/org/quartz/CronTrigger.opensymphony. 16:00:00 from Monday to Friday. 9:00:00. The appropriate configuration is shown in figure 7-4.. select Schedules.. You will see that currently. only the default schedule is available. return to the project list. so you will need to add the definition to each module individually. The downside to this approach is that Continuum will build any unchanged modules. but does not recurse into the modules (the -N or --non-recursive argument). 226 .xml clean site-deploy --batch-mode -DenableCiProfile=true The goals to run are clean and site-deploy. The Add Build Definition screen is shown in figure 7-5. and select the top-most project. you can cut and paste field values from the following list: Field Name Value POM filename Goals Arguments pom. and add the same build definition to all of the modules. use the non-recursive mode instead. click the Add button below the default build definition. on the business hours schedule. In addition to building the sites for each module. In this example you will add a new build definition to run the site deployment for the entirety of the multi-module build. which will be visible from. Since this is the root of the multi-module build – and it will also detect changes to any of the modules – this is the best place from which to build the site. Maven Proficio. To add a new build definition. In Continuum 1. Figure 7-5: Adding a build definition for site deployment To complete the Add Build Definition screen.3. The project information shows just one build on the default schedule that installs the parent POM.0.Better Builds with Maven Once you add this schedule. there is no way to make bulk changes to build definitions. as well – if this is a concern. The site will be deployed to the file system location you specified in the POM. it can aggregate changes into the top-level site as required. when you first set up the Subversion repository earlier in this chapter. none of the checks added in the previous chapter are executed.maven. which means that Build Now from the project summary page will not trigger this build. which can be a discouragement to using them. which is essential for all builds to ensure they don't block for user input. Profiles are a means for selectively enabling portions of the build. and that it is not the default build definition.. In the previous example. . You'll find that when you run the build from the command line (as was done in Continuum originally). to ensure these checks are run.. you'll see that these checks have now been moved to a profile.Team Collaboration with Maven The arguments provided are --batch-mode. <profiles> <profile> <id>ciProfile</id> <activation> <property> <name>enableCiProfile</name> <value>true</value> </property> </activation> <plugins> <plugin> <groupId>org. However. In Chapter 6. which sets the given system property.apache. Click this for the site generation build definition. The --non-recursive option is omitted. However. if you want to fail the build based on these checks as well. Any of these test goals should be listed after the site-deploy goal. the profile is enabled only when the enableCiProfile system property is set to true. It is rare that the site build will fail. verify or integration-test goal to the list of goals. each build definition on the project information page (to which you would have been returned after adding the build definition) has a Build Now icon. so that if the build fails because of a failed check. However.. The checks will be run when you enable the ciProfile using mvn -DenableCiProfile=true.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <executions> . and view the generated site from. If you haven't previously encountered profiles. since most reports continue under failure conditions. a number of plugins were set up to fail the build if certain project health checks failed. In this particular case. such as the percentage of code covered in the unit tests dropping below a certain value. You can see also that the schedule is set to use the site generation schedule created earlier.. a system property called enableCiProfile was set to true. and -DenableCiProfile=true. you can add the test. If you compare the example proficio/pom.xml file in your Subversion checkout to that used in Chapter 6. 227 . the generated site can be used as reference for what caused the failure. please refer to Chapter 3. The meaning of this system property will be explained shortly. these checks delayed the build for all developers. in an environment where a number of modules are undergoing concurrent development.xml: . rather than the property used to enable it. the team dynamic makes it critical. <activeProfile>ciProfile</activeProfile> </activeProfiles> . How you configure your continuous integration depends on the culture of your development team and other environmental factors such as the size of your projects and the time it takes to build and test them. The other alternative is to set this profile globally. Projects in Maven stay in the snapshot state until they are released. in some cases. if the additional checks take too much time for frequent continuous integration builds..8 of this chapter. or for the entire multi-module project to run the additional checks after the site has been generated. and clicking Edit next to the default build definition... As you saw before. As Maven 2 is still executed as normal. To enable this profile by default from these settings. and in contrast to regular dependencies.home}/. 7. Additionally.. you will learn about using snapshots more effectively in a team environment. and how to enable this within your continuous integration environment. the verify goal may need to be added to the site deployment build definition. which is discussed in section 7. as well as the settings in the Maven installation. and while dependency management is fundamental to any Maven build.. The guidelines discussed in this chapter will help point your team in the right direction. The first is to adjust the default build definition for each module.Better Builds with Maven There are two ways to ensure that all of the builds added in Continuum use this profile. indicates that the profile is always active when these settings are read. Snapshots were designed to be used in a team environment as a means for sharing development versions of artifacts that have already been built. you must build all of the modules simultaneously from a master build.. <activeProfiles> . these artifacts will be updated frequently. 228 . it is necessary to do this for each module individually. it may be necessary to schedule them separately for each module.6. for all projects in Continuum. where projects are closely related.m2/settings. In this case the identifier of the profile itself. at least in the version of Continuum current at the time of writing. Team Dependency Management Using Snapshots Chapter 3 of this book discussed how to manage your dependencies in a multi-module build. by going to the module information page. Usually. In this section. snapshots have been used to refer to the development version of an individual module. which are not changed. it reads the ${user. So far in this book. For example. as discussed previously. add the following configuration to the settings.xml file for the user under which it is running.xml file in <maven home>/conf/settings. but the timing and configuration can be changed depending upon your circumstances. The generated artifacts of the snapshot are stored in the local repository. the build involves checking out all of the dependent projects and building them yourself. .jar. such as the internal repository set up in section 7.. While this is not usually the case. 229 ... In this case.131114-1. you'll see that the repository was defined in proficio/pom.. building from source doesn't fit well with an environment that promotes continuous integration. Currently. it can lead to a number of problems: • • • • It relies on manual updates from developers. the Proficio project itself is not looking in the internal repository for dependencies.. but rather relying on the other modules to be built first.xml: .xml: .0SNAPSHOT. <distributionManagement> <repository> <id>internal</id> <url></url> </repository> .. or to lock down a stable version by declaring the dependency version to be the specific equivalent such as 1. the version used is the time that it was deployed (in the UTC timezone) and the build number.3. In Maven.0-20060211. locking the version in this way may be important if there are recent changes to the repository that need to be ignored temporarily.0-20060211. <repositories> <repository> <id>internal</id> <url></url> </repository> </repositories> . Considering that example. deploy proficio-api to the repository with the following command: C:\mvnbook\proficio\proficio-api> mvn deploy You'll see that it is treated differently than when it was installed in the local repository. though it may have been configured as part of your settings files. Instead. This technique allows you to continue using the latest version by declaring a dependency on 1. use binary snapshots that have been already built and tested.131114-1. the time stamp would change and the build number would increment to 2. add the following to proficio/pom.Team Collaboration with Maven While building all of the modules from source can work well and is handled by Maven inherently. which can be error-prone. To add the internal repository to the list of repositories used by Proficio regardless of settings.. </distributionManagement> Now. The filename that is used is similar to proficio-api-1. If you were to deploy again.. this is achieved by regularly deploying snapshots to a shared repository. . If it were omitted. Several of the problems mentioned earlier still exist – so at this point.. The -U argument in the prior command is required to force Maven to update all of the snapshots in the build. You can always force the update using the -U command. without having to manually intervene. assuming that the other developers have remembered to follow the process. to see the updated version downloaded. <repository> . proficio-api:1. all that is being saved is some time. as well as updating any version ranges. the updates will still occur only as frequently as new versions are deployed to the repository... any snapshot dependencies will be checked once an hour to determine if there are updates in the remote repository. this introduces a risk that the snapshot will not be deployed at all. and interval:minutes. to check for an update the first time that particular dependency is used after midnight local time. making it out-of-date. always. However.. by default... <snapshots> <updatePolicy>interval:60</updatePolicy> </snapshots> </repository> . daily (the default). Now. or deployed without all the updates from the SCM. However. This technique can ensure that developers get regular updates. add the following configuration to the repository configuration you defined above in proficio/pom. The settings that can be used for the update policy are never. This causes many plugins to be checked for updates. you may also want to add this as a pluginRepository element as well. you will see that some of the dependencies are checked for updates. no update would be performed. and without slowing down the build by checking on every access (as would be the case if the policy were set to always).Better Builds with Maven If you are developing plugins. In this example. similar to the example below (note that this output has been abbreviated): . It is possible to establish a policy where developers do an update from the source control management (SCM) system before committing.xml: . it updates both releases and snapshots.0-SNAPSHOT: checking for updates from internal . 230 . This is because the default policy is to update snapshots daily – that is. but you can also change the interval by changing the repository configuration... build proficio-core with the following command: C:\mvnbook\proficio\proficio-core> mvn -U install During the build. and then deploy the snapshot to share with the other team members.. Whenever you use the -U argument. To see this. deployed with uncommitted code. you can cut and paste field values from the following list: Field Name Value C:\mvnbook\continuum-1. To deploy from your server. so let's go ahead and do it now. Log in as an administrator and go to the following Configuration screen.gif Company Logo. as you saw earlier. you have not been asked to apply this setting. So far in this section. Figure 7-6: Continuum configuration To complete the Continuum configuration page.0...\apps\ continuum\build-output-directory C:\mvnbook\repository\internal Deployment Repository Directory Base URL Mergere Company Name\bin\win32\. it makes sense to have it build snapshots. this feature is enabled by default in a build definition. you must ensure that the distributionManagement section of the POM is correctly configured. shown in figure 7-6.com Company URL Working Directory 231 .com/_design/images/mergere_logo.\apps\ continuum\working-directory Build Output Directory C:\mvnbook\continuum-1. Continuum can be configured to deploy its builds to a Maven snapshot repository automatically. as well.\. If there is a repository configured to which to deploy them. How you implement this will depend on the continuous integration server that you use.\. However..3\bin\win32\.Team Collaboration with Maven A much better way to use snapshots is to automate their creation. Since the continuous integration server regularly rebuilds the code from a known state.0.mergere.. return to your console and build proficio-core again using the following command: C:\mvnbook\proficio\proficio-core> mvn -U install You'll notice that a new version of proficio-api is downloaded. <snapshotRepository> <id>internal. but still keep a full archive of releases. you can avoid all of the problems discussed previously.. To try this feature. if you had a snapshot-only repository in /www/repository/snapshots.. Another point to note about snapshots is that it is possible to store them in a separate repository from the rest of your released artifacts. Once the build completes.. follow the Show Projects link. This will deploy to that repository whenever the version contains SNAPSHOT. or build from source.. 232 . If this is not the case. with an updated time stamp and build number. you would add the following: .snapshots</id> <url></url> </snapshotRepository> </distributionManagement> . you can either lock a dependency to a particular build. If you are using the regular deployment mechanism (instead of using Continuum). and click Build Now on the Proficio API project. and deploy to the regular repository you listed earlier. you can enter a full repository URL such as scp://repositoryhost/www/repository/internal. For example. <distributionManagement> . This can be useful if you need to clean up snapshots on a regular interval.. this separation is achieved by adding an additional repository to the distributionManagement section of your POM.Better Builds with Maven The Deployment Repository Directory field entry relies on your internal repository and Continuum server being in the same location. when necessary. Better yet.. when it doesn't. while you get regular updates from published binary dependencies. With this setup. Team Collaboration with Maven Given this configuration. archetypes give you the opportunity to start a project in the right way – that is. Beyond the convenience of laying out a project structure instantly. the requirement of achieving consistency is a key issue facing teams. run the following command: C:\mvnbook\proficio> mvn archetype:create \ -DgroupId=com. To avoid this. and replacing the specific values with parameters.. The replacement repository declarations in your POM would look like this: . using an archetype. you have seen the archetypes that were introduced in Chapter 2 used to quickly lay down a project structure. by hand. 7. As you saw in this chapter.7.. There are two ways to create an archetype: one based on an existing project using mvn archetype:create-from-project.mvnbook \ -DartifactId=proficio-archetype \ -DarchetypeArtifactId=maven-archetype-archetype 233 .mergere. Creating a Standard Project Archetype Throughout this book.. in a way that is consistent with other projects in your environment. you can create one or more of your own archetypes. While this is convenient. Writing an archetype is quite like writing your own project. there is always some additional configuration required.snapshots</id> <url></url> <snapshots> <updatePolicy>interval:60</updatePolicy> </snapshots> </repository> </repositories> . either in adding or removing content from that generated by the archetypes. you can make the snapshot update process more efficient by not checking the repository that has only releases for updates. To get started with the archetype. <repositories> <repository> <id>internal</id> <url></url> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>internal.. and the other. xml at the top level. and siteResources. Figure 7-7: Archetype directory layout If you look at pom. The example descriptor looks like the following: <archetype> <id>proficio-archetype</id> <sources> <source>src/main/java/App.java</source> </sources> <testSources> <source>src/test/java/AppTest. The JAR that is built is composed only of resources. 234 . so everything else is contained under src/main/resources.java</source> </testSources> </archetype> Each tag is a list of files to process and generate in the created project. The example above shows the sources and test sources. but it is also possible to specify files for resources. and the template project in archetype-resources.Better Builds with Maven The layout of the resulting archetype is shown in figure 7-7. There are two pieces of information required: the archetype descriptor in META-INF/maven/archetype. testResources.xml. The archetype descriptor describes how to construct a new project from the archetype-resources provided. you'll see that the archetype is just a normal JAR project – there is no special build configuration required. apache. go to an empty directory and run the following command: C:\mvnbook> mvn archetype:create -DgroupId=com.0</modelVersion> <groupId>$groupId</groupId> <artifactId>$artifactId</artifactId> <version>$version</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.w3.org/2001/XMLSchema-instance" xsi: <modelVersion>4. Continuing from the example in section 7. artifactId and version elements are variables that will be substituted with the values provided by the developer running archetype:create. These files will be used to generate the template files when the archetype is run. To do so.org/maven-v4_0_0.8.3 of this chapter. Releasing a project is explained in section 7.0" xmlns:xsi=" file looks like the following: <project xmlns=" \ -DartifactId=proficio-example \ -DarchetypeGroupId=com.mergere. if omitted. Maven will build. now however. the groupId. the required version would not be known (or if this was later development. so you can run the following command: C:\mvnbook\proficio\proficio-archetype> mvn deploy The archetype is now ready to be used.mvnbook \ -DarchetypeArtifactId=proficio-archetype \ -DarchetypeVersion=1. Since the archetype inherits the Proficio parent. refer to the documentation on the Maven web site. you need to populate the template with the content that you'd like to have applied consistently to new projects. the content of the files will be populated with the values that you provided on the command line.mergere. 235 .apache. It will look very similar to the content of the archetype-resources directory you created earlier. you will use the “internal” repository. a previous release would be used instead). Once you have completed the content in the archetype. the archetypeVersion argument is not required at this point. the pom. You now have the template project laid out in the proficio-example directory. and released as 1. Maestro is an Apache License 2. For more information on Maestro please see:. run the following command: c:\mvnbook\proficio> mvn release:prepare -DdryRun=true This simulates a normal release preparation. Finally.5 The release plugin operates in two steps: prepare and perform. and to perform standard tasks. Accept the defaults in this instance (note that running Maven in “batch mode” avoids these prompts and will accept all of the defaults). and creating tags (or equivalent for your SCM). once a release has been made. The perform step could potentially be run multiple times to rebuild a release from a clean checkout of the tagged version.com/. the Proficio example will be revisited. 236 .0 distribution based on a pre-integrated Maven.0. Worse. Maven provides a release plugin that provides the basic functions of a standard release process. new release. Continuum and Archiva build platform.Better Builds with Maven 7. 5 Mergere Maestro provides an automated feature for performing releases. which often leads to omissions or short cuts. and does all of the project and source control manipulation that results in a tagged version. it is usually difficult or impossible to correct mistakes other than to make another. Once the definition for a release has been set by a team. updating the source control management system to check and commit release related changes. The release plugin takes care of a number of manual steps in updating the project POM. As the command runs. such as deployment to the remote repository. you will be prompted for values. allowing them to be highly automated. You'll notice that each of the modules in the project is considered. Cutting a Release Releasing software is difficult. The prepare step is run once for a release. or check out the following: C:\mvnbook> svn co \ \ proficio To start the release process. it happens at the end of a long period of development when all everyone on the team wants to do is get it out there. releases should be consistent every time they are built. full of manual steps that need to be completed in a particular order. To demonstrate how the release plugin works.mergere. It is usually tedious and error prone. without making any modifications to your project. You can continue using the code that you have been working on in the previous sections. and this is reverted in the next POM. and setting the version to the latest release (But only after verifying that your project builds correctly with that version!). an error will appear. and is set based on the values for which you were prompted during the release process. Describe the SCM commit operation You might like to review the POM files that are created for steps 5 and 9. Describe the SCM commit and tag operations 9.tag and pom. as they will be committed for the next development iteration 10. or that different profiles will be applied. However. other modules). if you are using a dependency that is a snapshot. named pom. not ready to be used as a part of a release. or obtained from the development repository of the Maven project) that is implied through the build life cycle.tag file written out to each module directory. all of the dependencies being used are releases. the explicit version of plugins and dependencies that were used are added any settings from settings. 5. The prepare step ensures that there are no snapshots in the build. 3. 7. that resulting version ranges will be different. This can be corrected by adding the plugin definition to your POM. This is because you are using a locally installed snapshot of a plugin (either built yourself. to verify they are correct. as they will be committed to the tag Run mvn clean integration-test to verify that the project will successfully build Describe other preparation goals (none are configured by default.xml. In this POM. these changes are not enough to guarantee a reproducible build – it is still possible that the plugin versions will vary.xml and profiles.xml. The SCM information is also updated in the tag POM to reflect where it will reside once it is tagged. but this might include updating the metadata in your issue tracker. a number of changes are made: • • • 1.Team Collaboration with Maven In this project. there is also a release-pom.xml. However. This is because the prepare step is attempting to guarantee that the build will be reproducible in the future. and snapshots are a transient build. or part of the project.xml 237 . 4. including profiles from settings.next respectively in each module directory. For that reason. even if the plugin is not declared in the POM. This contains a resolved version of the POM that Maven will use to build from if it exists. other than those that will be released as part of the process (that is. any active profiles are explicitly activated. Modify all POM files in the build. 6. 2.xml (both per-user and per-installation) are incorporated into the POM. you may encounter a plugin snapshot. In some cases. or creating and committing an announcement file) 8. the appropriate SCM settings) Check if there are any local modifications Check for snapshots in dependency tree Check for snapshots of plugins in the build Modify all POM files in the build. To review the steps taken in this release process: Check for correct version of the plugin and POM (for example. You'll notice that the version is updated in both of these files. when a build is run from this tag to ensure it matches the same circumstances as the release build.. This is used by Maven.5. the release still hasn't been generated yet – for that. However. you need to deploy the build artifacts. while locally. Recall from Chapter 6 that you learned how to configure a number of checks – so it is important to verify that they hold as part of the release. instead of the normal POM.maven. and the updated POM files are committed..apache. Once this is complete. you need to enable this profile during the verification step.1-SNAPSHOT.xml in the same directory as pom. This is run as follows: C:\mvnbook\proficio> mvn release:perform 238 . or that expressions would have been resolved. To do so. you can remove that file.] Try the dry run again: C:\mvnbook\proficio> mvn release:prepare -DdryRun=true Now that you've gone through the test run and are happy with the results.] <plugin> <groupId>org. Having run through this process you may have noticed that only the unit and integration tests were run as part of the test build. or run mvn -Dresume=false release:prepare instead.. you can go for the real thing with the following command: C:\mvnbook\proficio> mvn release:prepare You'll notice that this time the operations on the SCM are actually performed.. recall that in section 7. this file will be release-pom. To include these checks as part of the release process.Better Builds with Maven You may have expected that inheritance would have been resolved by incorporating any parent elements that are used.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <arguments>-DenableCiProfile=true</arguments> </configuration> </plugin> [. the version is now 1. When the final run is executed. you'll see in your SCM the new tag for the project (with the modified files). This is achieved with the release:perform goal. This is not the case however. the release plugin will resume a previous attempt by reading the release.xml.properties file that was created at the end of the last run. use the following plugin configuration: [. Also. you created a profile to enable those checks conditionally. You won't be prompted for values as you were the first time – since by the default. as these can be established from the other settings already populated in the POM in a reproducible fashion. If you need to start from the beginning. rather than the specific version used for the release.Team Collaboration with Maven No special arguments are required. When the release is performed... or to set certain properties. you can change the goals used with the goals parameter: C:\mvnbook\proficio> mvn release:perform -Dgoals="deploy" However. To do so.properties file still exists to tell the goal the version from which to release. check out the tag: C:\mvnbook> svn co \ file. To do this..apache. add the following goals to the POM: [. and not the release-pom. before running Maven from that location with the goals deploy site-deploy. If this is not what you want to run. the release plugin will confirm that the checked out project has the same release plugin configuration as those being used (with the exception of goals). you want to avoid such problems. you would run the following: C:\mvnbook\proficio> mvn release:perform -DconnectionUrl=\ scm:svn:. before you run the release:prepare goal. it is necessary to know what version ranges are allowed for a dependency.] <plugin> <groupId>org. and to deploy a copy of the site.org/plugins/maven-release-plugin/ for more information. you can examine the files that are placed in the SCM repository. It is important in these cases that you consider the settings you want. 239 . Refer to the plugin reference at. because the release. To release from an older version. For the same reason.xml files are included in the generated JAR file. this requires that you remember to add the parameter every time. you'll see that a clean checkout was obtained from the created tag.. though.xml file and the release-pom.0 If you follow the output above. during the process you will have noticed that Javadoc and source JAR files were produced and deployed into the repository for all the Java projects.0 You'll notice that the contents of the POM match the pom. These are configured by default in the Maven POM as part of a profile that is activated when the release is performed. Since the goal is for consistency. Also.xml file.maven.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <goals>deploy</goals> </configuration> </plugin> [. The reason for this is that the POM files in the repository are used as dependencies and the original information is more important than the release-time information – for example. This is the default for the release plugin – to deploy all of the built artifacts. or if the release. both the original pom. To ensure reproducibility.] You may also want to configure the release plugin to activate particular profiles. and the built artifacts are deployed.apache.properties file had been removed. There are also strong team-related benefits in the preceding chapters – for example. All of the features described in this chapter can be used by any development team.] <plugin> <groupId>org. removing release. And all of these features build on the essentials demonstrated in chapters 1 and 2 that facilitate consistent builds... whether your team is large or small. rather than creating silos of information around individual projects. and indeed this entire book.properties and any POM files generated as a result of the dry run..Extra plugin configuration would be inserted here --> </build> </profile> </profiles> [.Better Builds with Maven You can disable this profile by setting the useReleaseProfile parameter to false.] Instead. Summary As you've seen throughout this chapter. define a profile with the identifier release-profile. 240 ..plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <useReleaseProfile>false</useReleaseProfile> </configuration> </plugin> [. without having to declare and enable an additional profile.apache. you may want to include additional actions in the profile. and while Maven focuses on delivering consistency in your build infrastructure through patterns. as follows: [. the only step left is to clean up after the plugin. as follows: [. This in turn can lead to and facilitate best practices for developing in a community-oriented. by making information about your projects visible and organized. The site and reports you've created can help a team communicate the status of a project and their work more effectively. Maven was designed to address issues that directly affect teams of developers.] After the release process is complete. Lack of consistency is the source of many problems when working in a team. Maven provides value by standardizing and automating the build process.] <profiles> <profile> <id>release-profile</id> <build> <!-. the adoption of reusable plugins can capture and extend build knowledge throughout your entire organization... real-time engineering style. it can aid you in effectively using tools to achieve consistency in other areas of your development.. So..maven. Simply run the following command to clean up: C:\mvnbook\proficio> mvn release:clean 7. To do this.9. Migrating to Maven Migrating to Maven This chapter explains how to migrate (convert) an existing build in Ant.Morpheus.4 and Java 5 Using Ant tasks from within Maven Using Maven with your current directory structure This is your last chance.the story ends. there is no turning back. You take the blue pill . The Matrix 241 . using both Java 1. to a build in Maven: • • • • • Splitting existing sources and resources into modular Maven projects Taking advantage of Maven's inheritance and multi-project capabilities Compiling. . After this. you wake up in your bed and believe whatever you want to believe. testing and building jars with Maven.8.you stay in Wonderland and I show you how deep the rabbit-hole goes. You take the red pill . 1. which is the latest version at the time of writing. Introduction The purpose of this chapter is to show a migration path from an existing build in Ant to Maven. while still running your existing. The Maven migration example is based on the Spring Framework build. 8. we will focus only on building version 2. For the purpose of this example. which uses an Ant script. The Spring release is composed of several modules. Ant-based build system. You will learn how to start building with Maven.0-m1 of Spring. how to run Ant tasks from within Maven. You will learn how to use an existing directory structure (though you will not be following the standard.1. recommended Maven directory structure). how to split your sources into modules or components.Better Builds with Maven 8.1. you will be introduced to the concept of dependencies. while enabling you to continue with your required work. This will allow you to evaluate Maven's technology. This example will take you through the step-by-step process of migrating Spring to a modularized. Maven build. component-based. and among other things. Introducing the Spring Framework The Spring Framework is one of today's most popular Java frameworks. . 4 and 1. with the Java package structure. properties files. These modules are built with an Ant script from the following source directories: • • src and test: contain JDK 1. more or less.4 compatible source code and JUnit tests respectively tiger/src and tiger/test: contain additional JDK 1. The src and tiger/src directories are compiled to the same destination as the test and tiger/test directories. TLD files. using inclusions and exclusions that are based on the Java packages of each class. For Spring.). 243 . the Ant script compiles each of these different source directories and then creates a JAR for each module. resulting in JARs that contain both 1. and each produces a JAR.Migrating to Maven Figure 8-1: Dependency relationship between Spring modules In figure 8-1. Each of these modules corresponds. Optional dependencies are indicated by dotted lines. you can see graphically the dependencies between the modules.5 classes. etc.5 compatible source code and JUnit tests • mock: contains the source code for the spring-mock module • aspectj/src and aspectj/test: contain the source code for the spring-aspects module Each of the source directories also include classpath resources (XML files. To start. the rule of thumb to use is to produce one artifact (JAR. you will need to create a directory for each of Spring's modules. Figure 8-2: A sample spring module directory 244 . WAR.Better Builds with Maven 8.) per Maven project file. you will create a subdirectory called 'm2' to keep all the necessary Maven changes clearly separated from the current build system. Inside the 'm2' directory. Where to Begin? With Maven. that means you will need to have a Maven project (a POM) for each of the modules listed above.2. In the Spring example. etc. . <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. non-snapshot version for a short period of time.SNAPSHOT – that is.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. however. the Spring team would use org. thereby eliminating the requirement to specify the dependency repeatedly across multiple modules. in Spring. in order to tag the release in your SCM. the main source and test directories are src and test.mergere. spring-parent) • version: this setting should always represent the next release version number appended with . department.springframework • artifactId: the setting specifies the name of this module (for example. Maven will convert to the definitive. each module will inherit the following values (settings) from the parent POM. Recall from previous chapters that during the release process. as it is our 'unofficial' example version of Spring. • packaging: the jar. you will need to create a parent POM. etc. war. the version you are developing in order to release. company. You will use the parent POM to store the common configuration settings that apply to all of the modules. primary used for documentation purposes. For example.springframework.m2book. project. 245 .migrating.mergere. <groupId>com.1</version> <scope>test</scope> </dependency> </dependencies> As explained previously. For this example. and ear values should be obvious to you (a pom value means that this project is used for metadata only) The other values are not strictly required.org</url> <organization> <name>The Spring Framework Project</name> </organization> In this parent POM we can also add dependencies such as JUnit.Migrating to Maven In the m2 directory. • groupId: this setting indicates your area of influence.0-m1-SNAPSHOT</version> <name>Spring parent</name> <packaging>pom</packaging> <description>Spring Framework</description> <inceptionYear>2002</inceptionYear> <url>. respectively.m2book. and it should mimic standard package naming conventions to avoid duplicate values. you will use com. Let's begin with these directories. which will be used for testing in every module.8. /. For the debug attribute.dir}" source="1.dir}"/> <!-..maven. your build section will look like this: <build> <sourceDirectory>.classes..3</target> </configuration> </plugin> </plugins> </build> 246 . Recall from Chapter 2.3" debug="${debug}" deprecation="false" optimize="false" failonerror="true"> <src path="${src. so to specify the required debug function in Maven./src</sourceDirectory> <testSourceDirectory>. <javac destdir="${target..tempdir.3</source> <target>1. that Maven automatically manages the classpath from its list of dependencies. you will need to append -Dmaven.debug=false to the mvn command (by default this is set to true). For now.Include Commons Attributes generated Java sources --> <src path="${commons.attributes. you don't have to worry about the commons-attributes generated sources mentioned in the snippet. in the buildmain target./test</testSourceDirectory> <plugins> <plugin> <groupId>org. so there is no need for you to add the configuration parameters.3" target="1.apache./.Better Builds with Maven Using the following code snippet from Spring's Ant build script. and failonerror (true) values. These last three properties use Maven's default values. you can retrieve some of the configuration parameters for the compiler. deprecation and optimize (false).plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.src}"/> <classpath refid="all-libs"/> </javac> As you can see these include the source and target compatibility (1. as you will learn about that later in this chapter. At this point.. Spring's Ant script uses a debug parameter.compiler.3). properties files etc take precedence --> <classpath location="${target.awt. haltonfailure and haltonerror settings.Must go first to ensure any jndi.dir}"/> <classpath location="${target.testclasses. The nested element jvmarg is mapped to the configuration parameter argLine As previously noted. as Maven prints the test summary and stops for any test error or failure.mockclasses.properties file loaded from the Ant script (refer to the code snippet below for details).dir}"> <fileset dir="${target. by default. Maven uses the default value from the compiler plugin. since the concept of a batch for testing does not exist.dir}"/> <classpath refid="all-libs"/> <formatter type="plain" usefile="false"/> <formatter type="xml"/> <batchtest fork="yes" todir="${reports. this value is read from the project.testclasses. • • • • • 247 .Migrating to Maven The other configuration that will be shared is related to the JUnit tests.includes}" excludes="${test. You will not need any printsummary.excludes from the nested fileset.classes.dir}"/> <classpath location="${target.includes and test.dir}"/> <!-. • • formatter elements are not required as Maven generates both plain text and xml reports. so you will not need to locate the test classes directory (dir). classpath is automatically managed by Maven from the list of dependencies. From the tests target in the Ant script: <junit forkmode="perBatch" printsummary="yes" haltonfailure="yes" haltonerror="yes"> <jvmarg line="-Djava. by default. and this doesn't need to be changed.excludes}"/> </batchtest> </junit> You can extract some configuration information from the previous code: • forkMode=”perBatch” matches with Maven's forkMode parameter with a value of once.Need files loaded as resources --> <classpath location="${test. Maven sets the reports destination directory (todir) to target/surefire-reports.headless=true -XX:MaxPermSize=128m -Xmx128m"/> <!-.dir}" includes="${test. You will need to specify the value of the properties test. class # # Wildcards to exclude among JUnit tests. When building only on Java 5 you could remove that option and the XML parser (Xerces) and APIs (xml-apis) dependencies.4 to run you do not need to exclude hibernate3 tests.headless=true -XX:MaxPermSize=128m -Xmx128m </argLine> <includes> <include>**/*Tests.1 # being compiled with target JDK 1. Since Maven requires JDK 1.includes=**/*Tests. It makes tests run using the standard classloader delegation instead of the default Maven isolated classloader.Better Builds with Maven # Wildcards to be matched by JUnit tests.excludes=**/Abstract* #test.and generates sources from them that have to be compiled with the normal Java compiler.class</include> </includes> <excludes> <exclude>**/Abstract*</exclude> </excludes> </configuration> </plugin> The childDelegation option is required to prevent conflicts when running under Java 5 between the XML parser provided by the JDK and the one included in the dependencies in some modules.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <forkMode>once</forkMode> <childDelegation>false</childDelegation> <argLine> -Djava. which are processed prior to the compilation.awt. mandatory when building in JDK 1.5 . # Second exclude needs to be used for JDK 1.maven. due to Hibernate 3.apache. <plugin> <groupId>org. Spring's Ant build script also makes use of the commons-attributes compiler in its compileattr and compiletestattr targets.3. Note that it is possible to use another lower JVM to run tests if you wish – refer to the Surefire plugin reference documentation for more information. The commons-attributes compiler processes javadoc style annotations – it was created before Java supported annotations in the core language on JDK 1. 248 . translate directly into the include/exclude elements of the POM's plugin configuration. test. # Convention is that our JUnit test classes have XXXTests-style names. test.4.excludes=**/Abstract* org/springframework/orm/hibernate3/** The includes and excludes referenced above.4. attributes.servlet.dir}" includes="**/metadata/*. to support indexing.dir}" includes="org/springframework/aop/**/*. so you will only need to add the inclusions for the main source and test source compilation.java</include> </includes> <testIncludes> <include>org/springframework/aop/**/*.java</include> </testIncludes> </configuration> <goals> <goal>compile</goal> <goal>test-compile</goal> </goals> </execution> </executions> </plugin> Later in this chapter you will need to modify these test configurations. --> <fileset dir="${src.tempdir.java"/> </attribute-compiler> In Maven.src}"> <!-Only the PathMap attribute in the org.handler. 249 .java</include> <include>org/springframework/jmx/**/*.tempdir.attributes. this same function can be accomplished by adding the commons-attributes plugin to the build section in the POM.Compile to a temp directory: Commons Attributes will place Java Source here.web. --> <attribute-compiler <fileset dir="${test.Compile to a temp directory: Commons Attributes will place Java Source here.springframework. --> <attribute-compiler </attribute-compiler> From compiletestattr: <!-.codehaus.Migrating to Maven From compileattr: <!-.test}"> <fileset dir="${test.mojo</groupId> <artifactId>commons-attributes-maven-plugin</artifactId> <executions> <execution> <configuration> <includes> <include>**/metadata/*.metadata package currently needs to be shipped with an attribute.dir}" includes="org/springframework/jmx/**/*. Compiling In this section. setting the files you want to include (by default Maven will pick everything from the resource directory). The following is the POM for the spring-core module. Creating POM files Now that you have the basic configuration shared by all modules (project information. you will start to compile the main Spring source. you will need to create a POM that extends the parent POM. you won't need to specify the version or groupId elements of the current module. For the resources. description. 8. where spring-core JAR is created: <jar jarfile="${dist. you need to create the POM files for each of Spring's modules.Better Builds with Maven 8. you will need to exclude the *. JUnit test configuration. version. etc.jar"> <fileset dir="${target. since the sources and resources are in the same directory in the current Spring build.mergere. tests will be dealt with later in the chapter. As you saw before.dir}"> <include name="org/springframework/core/**"/> <include name="org/springframework/util/**"/> </fileset> <manifest> <attribute name="Implementation-Title" value="${spring-title}"/> <attribute name="Implementation-Version" value="${spring-version}"/> <attribute name="Spring-Version" value="${spring-version}"/> </manifest> </jar> From the previous code snippet. you will need to tell Maven to pick the correct classes and resources from the core and util packages. and organization name to the values in the POM. In each subdirectory.4. in this case the defaults are sufficient.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. which centralizes and maintains information common to the project. 250 . Maven will automatically set manifest attributes such as name. To begin. as those values are inherited from parent POM. However. compiler configuration. review the following code snippet from Spring's Ant script. <parent> <groupId>com.m2book.dir}/modules/spring-core. While manifest entries can also be customized with additional configuration to the JAR plugin.java files from the resources. you will need to add a resources element in the build section.).0-m1-SNAPSHOT</version> </parent> <artifactId>spring-core</artifactId> <name>Spring core</name> Again.classes. This module is the best to begin with because all of the other modules depend on it.3. you can determine which classes are included in the JAR and what attributes are written into the JAR's manifest. or they will get included in the JAR. apache./src</directory> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*..java</exclude> </excludes> </resource> </resources> <plugins> <plugin> <groupId>org. you will need to configure the compiler plugin to include only those in the core and util packages.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> </configuration> </plugin> </plugins> </build> 251 ./. <build> <resources> <resource> <directory>.. Maven will by default compile everything from the source directory. which is inherited from the parent POM.maven. because as with resources.Migrating to Maven For the classes. But.java:[31.logging.commons. and then choose from the search results. Typically.commons.org/maven2/commons-logging/commons-logging/. For example.34] package org. From the previous output. what groupid. you now know that you need the Apache Commons Logging library (commons-logging) to be added to the dependencies section in the POM.\. You will see a long list of compilation failures.. changing dots to slashes.24] cannot find symbol symbol : class Log location: class org.support.java:[30.Better Builds with Maven To compile your Spring build..ibiblio.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. you will find all the available versions of commons-logging under. artifactId and version should we use? For the groupId and artifactId.logging does not exist These are typical compiler messages... commons-logging groupId would become org. for historical reasons some groupId values don't follow this convention and use only the name of the project..org/maven2 commons logging.springframework. If you check the repository. In the case of commons-logging. you can search the repository using Google. beginning with the following: [INFO] -----------------------------------------------------------------------[ERROR] BUILD FAILURE [INFO] -----------------------------------------------------------------------[INFO] Compilation failure C:\dev\m2book\code\migrating\spring\m2\springcore\. Specify site:www.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. As an alternative.\src\org\springframework\util\xml\SimpleSaxErrorHandler.\. However.\.java:[107.34] package org.apache. the actual groupId is commons-logging.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. it's usually the JAR name without a version (in this case commonslogging).apache.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\.34] package org. 252 . Regarding the artifactId.commons. the convention is to use a groupId that mirrors the package name..\.PathMatchingResourcePatternResolver C:\dev\m2book\code\migrating\spring\m2\springcore\. caused by the required classes not being on the classpath. you need to check the central repository at ibiblio..java:[19.apache.core..apache.commons. you can now run mvn compile.ibiblio. located in the org/apache/commons/logging directory in the repository. the option that is closest to what is required by your project. we discovered that the commons-beanutils version stated in the documentation is wrong and that some required dependencies are missing from the documentation.jar.md5 You can see that the last directory is the version (3. While adding dependencies can be the most painful part of migrating to Maven. with the slashes changed to dots (org. component-oriented. For details on Maven Archiva (the artifact repository manager) refer to the Maven Archiva project for details)6.1/hibernate-3. For example. Continuum and Archiva build platform. Doing so will result in cleaner. using a Web interface.hibernate) An easier way to search for dependencies.MF • For advanced users. and then search in Google prepending site: distribution based on a preintegrated Maven. through inheritance.ibiblio.4</version> </dependency> </dependencies> Usually you will convert your own project. you will find that there is documentation for all of Spring's dependencies in readme. has been developed and is available as part of Maestro. So. 253 . modular projects that are easier to maintain in the long term. You can use this as a reference to determine the versions of each of the dependencies. For more information on Maestro please see:. Maestro is an Apache License 2. <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. so you will have first hand knowledge about the dependencies and versions used. 6 Maven Archiva is part of Mergere Maestro.0.mergere. during the process of migrating Spring to Maven.org/maven2 78d5c38f1415efc64f7498f828d8069a The search will return: www. When needed. all submodules would use the same dependencies).org/maven2 to the query.1.txt in the lib directory of the Spring source. although you could simply follow the same behavior used in Ant (by adding all the dependencies in the parent POM so that. you have to be careful as the documentation may contain mistakes and/or inaccuracies. you could search with: site:www. there are some other options to try to determine the appropriate versions for the dependencies included in your build: • Check if the JAR has the version in the file name • Open the JAR file and look in the manifest file META-INF/MANIFEST. for the hibernate3.jar provided with Spring under lib/hibernate. explicit dependency management is one of the biggest benefits of Maven once you have invested the effort upfront.org/maven2/org/hibernate/hibernate/3.ibiblio. search the ibiblio repository through Google by calculating the MD5 checksum of the JAR file with a program such as md5sum.1). For instance. we strongly encourage and recommend that you invest the time at the outset of your migration. to make explicit the dependencies and interrelationships of your projects.Migrating to Maven With regard the version. the previous directory is the artifactId (hibernate) and the other directories compose the groupId. However.com/. and it is just for the convenience of the users./test</directory> <includes> <include>log4j.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1. Optional dependencies are not included transitively. you will need to add the log4j. Now. Compiling Tests Setting the test resources is identical to setting the main resources.1. This is because in other projects. we will cover how to run the tests.properties</include> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*.. you will notice that you also need Apache Commons Collections (aka commons-collections) and log4j. In addition.properties file required for logging configuration.Better Builds with Maven Running again mvn compile and repeating the process previously outlined for commons-logging.5. you may decide to use another log implementation. <testResources> <testResource> <directory>. run mvn compile again .this time all of the sources for spring-core will compile.java</exclude> </excludes> </testResource> </testResources> 254 . After compiling the tests. Testing Now you're ready to compile and run the tests. 8. and setting the JUnit test sources to compile. you will repeat the previous procedure for the main classes./.5. For the first step. 8.. Using the optional tag does not affect the current project.2. with the exception of changing the location from which the element name and directory are pulled.9</version> <optional>true</optional> </dependency> Notice that log4j is marked as optional. <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3. so log4j will not be included in other projects that depend on this. setting which test resources to use. java</exclude> <exclude>org/springframework/util/ObjectUtilsTests.java class for any hard codes links to properties files and change them accordingly. the key here is to understand that some of the test classes are not actually unit tests for springcore. not tests. 255 .beans packages are missing. you will need to add the testIncludes element. Therefore.java</exclude> <exclude>org/springframework/util/SerializationTestUtils. If you run mvn test-compile again you will have a successful build.springframework.springframework. as well. but if you try to compile those other modules. As a result. as all the test classes compile correctly now. but rather require other modules to be present. if spring-core depends on spring-beans and spring-beans depends on spring-core. It may appear initially that spring-core depends on spring-mock. depend on classes from springcore. as this is not needed for the main sources. you will see the following error: package javax.java</exclude> <exclude>org/springframework/util/ClassUtilsTests. you will see that their main classes. spring-web and spring-beans modules. in order to compile the tests: <dependency> <groupId>javax. Now.mock.web and org. Inside the mavencompiler-plugin configuration.java</exclude> <exclude>org/springframework/core/io/ResourceTests. In other words. So.springframework. if you try to compile the test classes by running mvn test-compile.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2. which one do we build first? Impossible to know.4</version> <scope>test</scope> </dependency> The scope is set to test. org. as before. <testExcludes> <exclude>org/springframework/util/comparator/ComparatorTests. add the testExcludes element to the compiler configuration as follows. <testIncludes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </testIncludes> You may also want to check the Log4JConfigurerTests. but this time there is a special case where the compiler complains because some of the classes from the org.java</exclude> </testExcludes> Now.Migrating to Maven Setting the test sources for compilation follows the same procedure.java</exclude> <exclude>org/springframework/util/ReflectionUtilsTests.servlet does not exist This means that the following dependency must be added to the POM. we cannot add a dependency from spring-core without creating a circular dependency. To exclude test classes in Maven. when you run mvn test-compile. it makes sense to exclude all the test classes that reference other modules from this one and include them elsewhere. you will get compilation errors. 5.0</version> <scope>test</scope> </dependency> Now run mvn test again. Errors: 1.springframework. etc. so to resolve the problem add the following to your POM <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.springframework.2. Failures: 1. compile. The first section starts with java.io. you will find the following: [surefire] Running org. there is a section for each failed test called stacktrace.015 sec <<<<<<<< FAILURE !! This output means that this test has logged a JUnit failure and error. However.support.FileNotFoundException: class path resource [org/aopalliance/] cannot be resolved to URL because it does not exist.txt.aopalliance package is inside the aopallience JAR. for the test class that is failing org. Errors: 1 [INFO] -----------------------------------------------------------------------[ERROR] BUILD ERROR [INFO] -----------------------------------------------------------------------[INFO] There are test failures. as it will process all of the previous phases of the build life cycle (generate sources.io.core. Time elapsed: 0. The org.PathMatchingResourcePatternResolverTests [surefire] Tests run: 5. you will get the following error report: Results : [surefire] Tests run: 113.support. Failures: 1. when you run this command. You will get the following wonderful report: [INFO] -----------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ The last step in migrating this module (spring-core) from Ant to Maven. run tests. This command can be used instead most of the time.Better Builds with Maven 8. [INFO] ------------------------------------------------------------------------ Upon closer examination of the report output. To debug the problem. is to run mvn install to make the resulting JAR available to other projects in your local Maven repository. compile tests.io.core. Running Tests Running the tests in Maven.) 256 . Within this file.PathMatchingResourcePatternResolverTe sts. simply requires running mvn test. This indicates that there is something missing in the classpath that is required to run the tests. you will need to check the test logs under target/surefire-reports. move these configuration settings to the parent POM instead. and remove the versions from the individual modules (see Chapter 3 for more information). you can refer to spring-core from spring-beans with the following. In the same way. Other Modules Now that you have one module working it is time to move on to the other modules. For instance.Migrating to Maven 8. instead of repeatedly adding the same dependency version information to each module. <dependencyManagement> <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. See figure 8-1 to get the overall picture of the interdependencies between the Spring modules. since they have the same groupId and version: <dependency> <groupId>${project. That way. you will be adding the Surefire plugin configuration settings repeatedly for each module that you convert.0.groupId}</groupId> <artifactId>spring-core</artifactId> <version>${project.version}</version> </dependency> 257 . To avoid duplication.groupId}: groupId of the current POM being built For example. you will find that you are repeating yourself. 8. use the parent POM's dependencyManagement section to specify this information once. If you follow the order of the modules described at the beginning of the chapter you will be fine. Using the parent POM to centralize this information makes it possible to upgrade a dependency version across all sub-projects from a single location. Avoiding Duplication As soon as you begin migrating the second module. each of the modules will be able to inherit the required Surefire configuration.version}: version of the current POM being built ${project.6. otherwise you will find that the main classes from some of the modules reference classes from modules that have not yet been built.1.4</version> </dependency> </dependencyManagement> The following are some variables that may also be helpful to reduce duplication: • • ${project.6. Generally with Maven.3 and some compiled for Java 5 in the same JAR. they will need to run them under Java 5. particularly in light of transitive dependencies. 258 .version}</version> <type>test-jar</type> <scope>test</scope> </dependency> A final note on referring to test classes from other modules: if you have all of Spring's mock classes inside the same module. So. by specifying the test-jar type.maven.5 sources be added? To do this with Maven. with only those classes related to spring-context module. how can the Java 1. it's easier to deal with small modules. make sure that when you run mvn install. However.3 or 1. Building Java 5 Classes Some of Spring's modules include Java 5 classes from the tiger folder.2. First. attempting to use one of the Java 5 classes under Java 1.3 compatibility. you need to create a new module with only Java 5 classes instead of adding them to the same module and mixing classes with different requirements.Better Builds with Maven 8. this can cause previously-described cyclic dependencies problem.4. any users. you can split Spring's mock classes into spring-context-mock.apache. Referring to Test Classes from Other Modules If you have tests from one component that refer to tests from other modules. and spring-web-mock. would experience runtime errors. that a JAR that contains the test classes is also installed in the repository: <plugin> <groupId>org.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> Once that JAR is installed.6. As the compiler plugin was earlier configured to compile with Java 1. Consider that if you include some classes compiled for Java 1. with only those classes related to spring-web. there is a procedure you can use. users will know that if they depend on the module composed of Java 5 classes.3. To eliminate this problem. you can use it as a dependency for other components. Although it is typically not recommended. be sure to put that JAR in the test scope as follows: <dependency> <groupId>${project. you will need to create a new spring-beans-tiger module. By splitting them into different modules. 8.groupId}</groupId> <artifactId>spring-beans</artifactId> <version>${project. in this case it is necessary to avoid refactoring the test source code.6. 259 . the Java 5 modules will share a common configuration for the compiler.Migrating to Maven As with the other modules that have been covered. /../../.5</target> </configuration> </plugin> </plugins> </build> 260 .5</source> <target>1.../tiger/src</sourceDirectory> <testSourceDirectory>./.Better Builds with Maven Figure 8-5: Dependency relationship. you will need to add a module entry for each of the directories.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.../tiger/test</testSourceDirectory> <plugins> <plugin> <groupId>org. classes.rmi.5</id> <activation> <jdk>1.5</jdk> </activation> <modules> <module>tiger</module> </modules> </profile> </profiles> 8.classes.rmi. In this case. you just need a new module entry for the tiger folder. you need to use the Ant task in the spring-remoting module to use the RMI compiler. you may find that Maven does not have a plugin for a particular task or an Ant target is so small that it may not be worth creating a new plugin.remoting.5 JDK. but to still be able to build the other modules when using Java 1. with the Spring migration.6. For example.springframework.remoting.RmiInvocationWrapper"/> <rmic base="${target.RmiInvocationWrapper" iiop="true"> <classpath refid="all-libs"/> </rmic> 261 . Maven can call Ant tasks directly from a POM using the maven-antrun-plugin.4 you will add that module in a profile that will be triggered only when using 1.4.dir}" classname="org. Using Ant Tasks From Inside Maven In certain migration cases.springframework. this is: <rmic base="${target.Migrating to Maven In the parent POM.dir}" classname="org. From Ant. <profiles> <profile> <id>jdk1. 4</version> <systemPath>${java.classpath. such as ${project./lib/tools.apache.jar above. stub and tie classes from them. 262 .build.jar</systemPath> </dependency> </dependencies> </plugin> As shown in the code snippet above.directory}/classes" classname="org.directory} and maven. which is bundled with the JDK.RmiInvocationWrapper" iiop="true"> <classpath refid="maven. will take the compiled classes and generate the rmi skeleton.remoting. the most appropriate phase in which to run this Ant task is in the processclasses phase.classpath"/> </rmic> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>com. the rmic task.RmiInvocationWrapper"/> <rmic base="${project.home}/. there are some references available already.Better Builds with Maven To include this in Maven build.springframework. In this case.build.directory}/classes" classname="org.build.compile. and required by the RMI task. such as the reference to the tools. which applies to that plugin only.rmi.sun</groupId> <artifactId>tools</artifactId> <scope>system</scope> <version>1. which is a classpath reference constructed from all of the dependencies in the compile scope or lower.maven..springframework. add: <plugin> <groupId>org. To complete the configuration.compile. There are also references for anything that was added to the plugin's dependencies section. you will need to determine when Maven should run the Ant task.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <configuration> <tasks> <echo>Running rmic</echo> <rmic base="${project.remoting. So.rmi. Sun's Activation Framework and JavaMail are not redistributable from the repository due to constraints in their licenses. which uses AspectJ for weaving the classes. special cases that must be handled. to install JavaMail: mvn install:install-file -Dfile=mail.Migrating to Maven 8. as these test cases will not work in both Maven and Ant. Using classpath resources is recommended over using file system resources.5. You may need to download them yourself from the Sun site or get them from the lib directory in the example code for this chapter.2 -Dpackaging=jar You will only need to do this process once for all of your projects or you may use a corporate repository to share them across your organization. Non-redistributable Jars You will find that some of the modules in the Spring build depend on JARs that are not available in the Maven central repository.html. such as springaspects.. These issues were shared with the Spring developer community and are listed below: • Moving one test class.apache. For more information on dealing with this issue. 8.6.org/guides/mini/guide-coping-with-sun-jars. There is some additional configuration required for some modules.3.mail -DartifactId=mail -Dversion=1. These can be viewed in the example code.6. NamespaceHandlerUtilsTests. there are two additional.jar -DgroupId=javax. which used relative paths in Log4JConfigurerTests class. For example. You can then install them in your local repository with the following command. Some Special Cases In addition to the procedures outlined previously for migrating Spring to Maven. see. mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging> For instance. 263 . just remember not to move the excluded tests (ComparatorTests. you would move all Java files under org/springframework/core and org/springframework/util from the original src folder to the module's folder src/main/java. Now that you have seen how to do this for Spring. it it highly recommended that you go through the restructuring process to take advantage of the many timesaving and simplifying conventions within Maven. for the spring-core module. split it into modular components (if needed). By adopting Maven's standard directory structure. as Maven downloads everything it needs and shares it across all your Maven projects automatically . The same for tests. Finally. you will be able to take an existing Ant-based build. By doing this. In the case of the Spring example. you can realize Maven' other benefits . Once you have spent this initial setup time Maven. All of the other files under those two packages would go to src/main/resources. you can simplify the POM significantly. 264 .8.Better Builds with Maven 8. you will be able to take advantage of the benefits of adopting Maven's standard directory structure.advantages such as built-in project documentation generation. SerializationTestUtils and ResourceTests).7.you can delete that 80 MB lib folder. Summary By following and completing this chapter. you can apply similar concepts to your own Ant based build. Once you decide to switch completely to Maven. ReflectionUtilsTests. At the same time. ObjectUtilsTests. For example. you would eliminate the need to include and exclude sources and resources “by hand” in the POM files as shown in this chapter. you will be able to keep your current build working. reducing its size by two-thirds! 8. and quality metrics. ClassUtilsTests. and install those JARs in your local repository using Maven. Restructuring the Code If you do decide to use Maven for your project. in addition to the improvements to your build life cycle. create JARs. these would move from the original test folder to src/test/java and src/test/resources respectively for Java sources and other files . Maven can eliminate the requirement of storing jars in a source code management system. reports. compile and test the code. I'll try not to take that personally. Mr. sir. A chimpanzee and two trainees could run her! Kirk: Thank you. Scott.Star Trek 265 . All systems automated and ready. .Appendix A: Resources for Plugin Developers Appendix A: Resources for Plugin Developers In this appendix you will find: • Maven's Life Cycles • Mojo Parameter Expressions • Plugin Metadata Scotty: She's all yours. compile – compile source code into binary form.Better Builds with Maven A. Maven's Life Cycles Below is a discussion of Maven's three life cycles and their default mappings. Life-cycle phases The default life cycle is executed in order to perform a traditional build.1. initialize – perform any initialization steps required before the main part of the build can start. 8. and distributing it into the Maven repository system. A. this section will describe the mojos bound by default to the clean and site life cycles. it takes care of compiling the project's code. and the content of the current set of POMs to be built is valid. along with a short description for the mojos which should be bound to each. in the target output location. This section contains a listing of the phases in the default life cycle. performing any associated tests. 6. generate-test-sources – generate compilable unit test code from other source formats. such as instrumentation or offline code-weaving. generate-sources – generate compilable code from other source formats. It begins by listing the phases in each life cycle. process-resources – perform any modification of non-code resources necessary. It contains the following phases: 1. The default Life Cycle Maven provides three life cycles. and generating a project web site. archiving it into a jar. process-sources – perform any source modification processes necessary to prepare the code for compilation. along with a summary of bindings for the jar and maven-plugin packagings. 2. validate – verify that the configuration of Maven. corresponding to the three major activities performed by Maven: building a project from source.) from other source formats. cleaning a project of the files generated by a build. For the default life cycle. as when using Aspect-Oriented Programming techniques. For example. process-classes – perform any post-processing of the binaries produced in the preceding step. Finally. In other words. 3. a mojo may apply source code patches here.1. This is necessary to accommodate the inevitable variability of requirements for building different types of projects. 4. mojo-binding defaults are specified in a packaging-specific manner. 5.1. 9. etc. generate-resources – generate non-code resources (such as configuration files. 266 . It continues by describing the mojos bound to the default life cycle for both the jar and maven-plugin packagings. 7. This may include copying these resources into the target classpath directory in a Java build. 17. process-test-sources – perform any source modification processes necessary to prepare the unit test code for compilation. 21. post-integration-test – return the environment to its baseline form after executing the integration tests in the preceding step. This may involve installing the archive from the preceding step into some sort of application server. preintegration-test – setup the integration testing environment for this project. 16. verify – verify the contents of the distributable archive. 15. using the environment configured in the preceding step. 14. 11. deploy – deploy the distributable archive into the remote Maven repository configured in the distributionManagement section of the POM. This may include copying these resources into the testing target classpath location in a Java build. package – assemble the tested application code and resources into a distributable archive. before it is available for installation or deployment. 13. 267 . etc. 18. 19. install – install the distributable archive into the local Maven repository. This could involve removing the archive produced in step 15 from the application server used to test it. 20. test-compile – compile unit test source code into binary form. a mojo may apply source code patches here. 12. in the testing target output location. test – execute unit tests on the application compiled and assembled up to step 8 above. generate-test-resources – generate non-code testing resources (such as configuration files. For example. integration-test – execute any integration tests defined for this project.Appendix A: Resources for Plugin Developers 10.) from other source formats. process-test-resources – perform any modification of non-code testing resources necessary. Compile project source code to the staging directory for jar creation.. Compile unit-test source code to the test output directory. you will find a short description of what that mojo does. Execute project unit tests.Better Builds with Maven Bindings for the jar packaging Below are the default life-cycle bindings for the jar packaging. Deploy the jar archive to a remote Maven repository. Copy non-source-code test resources to the test output directory for unit-test compilation. Create a jar archive from the staging directory. Alongside each. Filter variables if necessary. process-test. Indeed. addPluginArtifact Metadata maven-plugin-plugin Integrate current plugin information with package plugin search metadata. the maven-plugin packaging also introduces a few new mojo bindings. testing. maven-plugin-plugin Update the plugin registry.Appendix A: Resources for Plugin Developers Bindings for the maven-plugin packaging The maven-plugin project packaging behaves in almost the same way as the more common jar packaging. and generate a plugin descriptor. if one exists. and metadata references to latest plugin version. maven-plugin artifacts are in fact jar files. and the rest. compiling source code. to install updateRegistry reflect the new plugin installed in the local repository. to extract and format the metadata for the mojos within. for example).. they undergo the same basic processes of marshaling non-source-code resources. However. 269 . As such. packaging. 1. along with any additional directories configured in the POM. Alongside each. Life-cycle phases The clean life-cycle phase contains the following phases: 1. you will find a short description of what that mojo does. the state of the project before it was built. pre-clean – execute any setup or initialization procedures to prepare the project for cleaning 2. along with a summary of the default bindings. effective for all POM packagings. Maven provides a set of default mojo bindings for this life cycle. Below is a listing of the phases in the clean life cycle. Default life-cycle bindings Below are the clean life-cycle bindings for the jar packaging. The clean Life Cycle This life cycle is executed in order to restore a project back to some baseline state – usually. clean – remove all files that were generated during another build process 3. Table A-3: The clean life-cycle bindings for the jar packaging Phase Mojo Plugin Description clean clean maven-clean-plugin Remove the project build directory.2.Better Builds with Maven A. 270 . which perform the most common tasks involved in cleaning a project. post-clean – finalize the cleaning process. Table A-4: The site life-cycle bindings for the jar packaging Phase Mojo Plugin Description site site maven-site-plugin maven-site-plugin Generate all configured project reports. post-site – execute any actions required to finalize the site generation process.1. pre-site – execute any setup or initialization steps to prepare the project for site generation 2.Appendix A: Resources for Plugin Developers A.3. and render documentation into HTML 3. Deploy the generated web site to the web server path specified in the POM distribution Management section. and render documentation source files into HTML. which perform the most common tasks involved in generating the web site for a project. Alongside each. and prepare the generated web site for potential deployment 4. effective for all POM packagings. Below is a listing of the phases in the site life cycle. Life-cycle phases The site life cycle contains the following phases: 1. Default Life Cycle Bindings Below are the site life-cycle bindings for the jar packaging. It will run any reports that are associated with your project. along with a summary of the default bindings. site-deploy deploy 271 . and even deploy the resulting web site to your server. site-deploy – use the distributionManagement configuration in the project's POM to deploy the generated web site files to the web server. site – run all associated project reports. Maven provides a set of default mojo bindings for this life cycle. you will find a short description of what that mojo does. render your documentation source files into HTML. The site Life Cycle This life cycle is executed in order to generate a web site for your project. java.Mav This is a cloned instance of the project enProject instance currently being built. Using the discussion below. which act as a shorthand for referencing commonly used build state objects.maven.MavenProject> processed as part of the current build. it will describe the algorithm used to resolve complex parameter expressions.ArtifactRepository used to cache artifacts during a Maven build.apache. along with the published Maven API documentation.project.apache. This section discusses the expression language used by Maven to inject build state and plugin configuration into mojos.maven.execution.project. It will summarize the root objects of the build state which are available for mojo expressions.util.apache. Simple Expressions Maven's plugin parameter injector supports several primitive expressions. Mojo Parameter Expressions Mojo parameter values are resolved by way of parameter expressions when a mojo is initialized.apache.Better Builds with Maven A. A.ma List of reports to be generated when the site ven. They are summarized below: Table A-5: Primitive expressions supported by Maven's plugin parameter Expression Type Description ${localRepository} ${session} org.2. 272 .M The current build session. This contains avenSession methods for accessing information about how Maven was called.re This is a reference to the local repository pository.artifact.MavenReport> life cycle executes. org.1.maven. org.util.2.List<org. Finally.reporting. This reduces the complexity of the code contained in the mojo. and extract only the information it requires. mojo developers should have everything they need to extract the build state they require. and often eliminates dependencies on Maven itself beyond the plugin API. ${reactorProjects} ${reports} ${executedProject} java.apache.List<org. These expressions allow a mojo to traverse complex build state. in addition to providing a mechanism for looking up Maven components on-demand. It is used for bridging results from forked life cycles back to the main line of execution.ma List of project instances which will be ven. the expression is split at each '. Project org.PluginDescriptor including its dependency artifacts.Appendix A: Resources for Plugin Developers A. ptor. merged from ings conf/settings. When there are no more expression parts. then the value mapped to that expression is returned. successive expression parts will extract values from deeper and deeper inside the build state.Maven Project instance which is currently being built. During this process.File The current project's root directory. If at some point the referenced object doesn't contain a property that matches the next expression part. this reflective lookup process is aborted. Otherwise. First.apache. if there is one.' character. The first is the root object. org.settings. Maven supports more complex expressions that traverse the object graph starting at some root object that contains build state.project.maven. rendering an array of navigational directions.io.apache.3. the value that was resolved last will be returned as the expression's value. The Expression Resolution Algorithm Plugin parameter expressions are resolved using a straightforward algorithm. ${plugin} org.apache.m2/settings. an expression part named 'child' translates into a call to the getChild() method on that object.xml in the user's home directory. unless specified otherwise. No advanced navigation can take place using is such expressions. The valid root objects for plugin parameter expressions are summarized below: Table A-6: A summary of the valid root objects for plugin parameter expressions Expression Root Type Description ${basedir} ${project} ${settings} java. and must correspond to one of the roots mentioned above. following standard JavaBeans naming conventions. A. Complex Expression Roots In addition to the simple expressions above. Repeating this. the next expression part is used as a basis for reflectively traversing that object' state. This root object is retrieved from the running application using a hard-wired mapping.maven. much like a primitive expression would.2. From there.2.xml in the maven application directory and from .2.plugin. if the expression matches one of the primitive expressions (mentioned above) exactly.Sett The Maven settings. The resulting value then becomes the new 'root' object for the next round of traversal.maven.descri The descriptor instance for the current plugin. 273 . 2. or an active profile. array index references. it will attempt to find a value in one of two remaining places. an ancestor POM. Plugin descriptor syntax The following is a sample plugin descriptor. --> <description>Sample Maven Plugin</description> <!-.The description element of the plugin's POM. |-> <goal>do-something</goal> 274 .plugins</groupId> <artifactId>maven-myplugin-plugin</artifactId> <version>2.This is a list of the mojos contained within this plugin. | this name allows the user to invoke this mojo from the command line | using 'myplugin:do-something'. For | instance. The POM properties. as well as the metadata formats which are translated into plugin descriptors from Java.Whether the configuration for this mojo should be inherted from | parent to child POMs by default. it will be resolved as the parameter value at this point. this plugin could be referred to from the command line using | the 'myplugin:' prefix. Currently. The system properties. |-> <goalPrefix>myplugin</goalPrefix> <!-. Plugin metadata Below is a review of the mechanisms used to specify metadata for plugins. Combined with the 'goalPrefix' element above.0-SNAPSHOT</version> <!-.The name of the mojo. It includes summaries of the essential plugin descriptor. This includes properties specified on the command line using the -D commandline option. Its syntax has been annotated to provide descriptions of the elements.This element provides the shorthand reference for this plugin. or method invocations that don't conform to standard JavaBean naming conventions.and Ant-specific mojo source files. Maven plugin parameter expressions do not support collection lookups. |-> <inheritedByDefault>true</inheritedByDefault> <!-. If the parameter is still empty after these two lookups. then the string literal of the expression itself is used as the resolved value.apache. --> <mojos> <mojo> <!-.These are the identity elements (groupId/artifactId/version) | from the plugin POM. |-> <groupId>org. resolved in this order: 1. If the value is still empty.Better Builds with Maven If at this point Maven still has not been able to resolve a value for the parameter expression. <plugin> <!-.maven. Maven will consult the current system properties. If a user has specified a property mapping this expression to a specific value in the current POM. |-> <requiresProject>true</requiresProject> <!-. |-> <requiresReports>false</requiresReports> <!-.This is optionally used in conjunction with the executePhase element. This is | useful to inject specialized behavior in cases where the main life | cycle should remain unchanged. without | also having to specify which phase is appropriate for the mojo's | execution.Tells Maven that this mojo can ONLY be invoked directly. |-> <phase>compile</phase> <!-.Some mojos cannot execute if they don't have access to a network | connection. | This is useful when the user will be invoking this mojo directly from | the command line.Which phase of the life cycle this mojo will bind to by default. |-> <executePhase>process-resources</executePhase> <!-.</description> <!-. but the mojo itself has certain life-cycle | prerequisites. |-> <aggregator>false</aggregator> <!-. It's restricted to this plugin to avoid creating inter-plugin | dependencies. If Maven is operating in offline mode.Tells Maven that a valid list of reports for the current project are | required before this plugin can execute. | and specifies a custom life-cycle overlay that should be added to the | cloned life cycle before the specified phase is executed. | This allows the user to specify that this mojo be executed (via the | <execution> section of the plugin configuration in the POM). |-> <requiresDirectInvocation>false</requiresDirectInvocation> <!-. |-> <executeLifecycle>myLifecycle</executeLifecycle> <!-. it will only | execute once.Description of what this mojo does. Mojos that are marked as aggregators should use the | ${reactorProjects} expression to retrieve a list of the project | instances in the current build.Determines how Maven will execute this mojo in the context of a | multimodule build. If a mojo is marked as an aggregator.Ensure that this other mojo within the same plugin executes before | this one. to give users a hint | at where this task should run.Appendix A: Resources for Plugin Developers <!-. |-> <executeGoal>do-something-first</executeGoal> <!-. then execute that life cycle up to the specified phase.This tells Maven to create a clone of the current project and | life cycle. This flag controls whether the mojo requires 275 . via the | command line.Tells Maven that a valid project instance must be present for this | mojo to execute. such mojos will | cause the build to fail. it will be executed once for each project instance in the | current build. It is a good idea to provide this. --> <description>Do something cool. If the mojo is not marked as an | aggregator. regardless of the number of project instances in the | current build. |-> <inheritedByDefault>true</inheritedByDefault> <!-.maven.site.Better Builds with Maven | Maven to be online. the | mojo (and the build) will fail when this parameter doesn't have a | value. |-> <editable>true</editable> <!-.apache. --> <language>java</language> <!-. as in the case of the list of project dependencies.The class or script path (within the plugin's jar) for this mojo's | implementation.This is a list of the parameters used by this mojo. |-> <requiresOnline>false</requiresOnline> <!-.Description for this parameter. unless the user specifies | <inherit>false</inherit>. specified in the javadoc comment | for the parameter field in Java mojo implementations. --> <type>java.The Java type for this parameter. this will often reflect the | parameter field name in the mojo class.This is an optional alternate parameter name for this parameter.The implementation language for this mojo. In Java mojos. |-> <implementation>org. either via command-line or POM configuration. |-> <name>inputDirectory</name> <!-. | It will be used as a backup for retrieving the parameter value. If set to | false.SiteDeployMojo</implementation> <!-.io.plugins.The parameter's name.Tells Maven that the this plugin's configuration should be inherted | from a parent POM by default. |-> <alias>outputDirectory</alias> <!-.Whether this parameter is required to have a value. |-> <description>This parameter does something important.File</type> <!-. --> <parameters> <parameter> <!-. If true. |-> <required>true</required> <!-.Whether this parameter's value can be directly specified by the | user. this parameter must be configured via some other section of | the POM.</description> </parameter> </parameters> 276 . manager. Each parameter must | have an entry here that describes the parameter name.artifact. | and the primary expression used to extract the parameter's value.maven.outputDirectory}. |-> <field-name>wagonManager</field-name> </requirement> </requirements> </mojo> </mojos> </plugin> 277 . the requirement specification tells | Maven which mojo-field should receive the component instance.maven. Finally.io. The expression used to extract the | parameter value is ${project.apache. |-> <requirements> <requirement> <!-.Appendix A: Resources for Plugin Developers <!-.Use a component of type: org.This is the operational specification of this mojo's parameters.WagonManager |-> <role>org. and it | expects a type of java.artifact. parameter type. |-> <inputDirectory implementation="java. | along with an optional classifier for the specific component instance | to be used (role-hint).manager.For example. | | The general form is: | <param-nameparam-expr</param-name> | |-> <configuration> <!-.apache.File">${project.reporting.WagonManager</role> <!-.io.outputDirectory}</inputDirectory> </configuration> <!-.reporting. this parameter is named "inputDirectory".Inject the component instance into the "wagonManager" field of | this mojo.This is the list of non-parameter component references used by this | mojo. Components are specified by their interface class name (role). as | compared to the descriptive specification above.File. with dash ('-') Any valid phase name true or false (default is false) true or false (default is true) true or false (default is false) true or false (default is false) Yes No No No No No 278 . Table A-7: A summary of class-level javadoc annotations Descriptor Element Javadoc Annotation Values Required? aggregator description executePhase. life cycle name. phase. Alphanumeric.Better Builds with Maven A. Classlevel annotations correspond to mojo-level metadata elements.2. executeLifecycle.4.. Java Mojo Metadata: Supported Javadoc Annotations The Javadoc annotations used to supply metadata about a particular mojo come in two types. and requirements sections of a mojo's specification in the plugin descriptor. parameterconfiguration section @parameter expression=”${expr}” alias=”alias” default-value=”val” @component roleHint=”someHint” @required @readonly N/A (field comment) @deprecated Anything roleHint is optional.Contains the list of mojos described by this metadata file.Whether this mojo requires access to project reports --> <requiresReports>true</requiresReports> 279 .2. Ant Metadata Syntax The following is a sample Ant-based mojo metadata file. corresponding to the ability to map | multiple mojos into a single build script.The default life-cycle phase binding for this mojo --> <phase>compile</phase> <!-.5. and Yes Requirements section required editable description deprecated usually left blank None None Anything Alternative parameter No No No No (recommended) No A. configuration. <pluginMetadata> <!-. These metadata translate into elements within the parameter. Table A-8: Field-level annotations Descriptor Element Javadoc Annotation Values Required? alias.Whether this mojo requires a current project instance --> <requiresProject>true</requiresProject> <!-. |-> <requiresDependencyResolution>compile</requiresDependencyResolution> <!-. NOTE: | multiple mojos are allowed here.The dependency scope required for this mojo. |-> <mojos> <mojo> <!-.The name for this mojo --> <goal>myGoal</goal> <!-.Appendix A: Resources for Plugin Developers Field-level annotations The table below summarizes the field-level annotations which supply metadata about mojo parameters. Its syntax has been annotated to provide descriptions of the elements. Maven will resolve | the dependencies in this scope before this mojo executes. The property name used by Ant tasks to reference this parameter | value.Another mojo within this plugin to execute before this mojo | executes.artifact.The phase of the forked life cycle to execute --> <phase>initialize</phase> <!-. |-> <requiresDirectInvocation>true</requiresDirectInvocation> <!-.The parameter name.maven. --> <role>org.This describes the mechanism for forking a new life cycle to be | executed prior to this mojo executing.List of non-parameter application components used in this mojo --> <components> <component> <!-.ArtifactResolver</role> <!-.Better Builds with Maven <!-.This is an optional classifier for which instance of a particular | component type should be used. |-> <execute> <!-.Whether this mojo must be invoked directly from the command | line. |-> <property>prop</property> <!-. --> <name>nom</name> <!-. |-> <hint>custom</hint> </component> </components> <!-.The list of parameters this mojo uses --> <parameters> <parameter> <!-.This is the type for the component to be injected.resolver. |-> <inheritByDefault>true</inheritByDefault> <!-.Whether the configuration for this mojo should be inherited | from parent to child POMs by default.Whether this mojo operates as an aggregator --> <aggregator>true</aggregator> <!-.apache.A named overlay to augment the cloned life cycle for this fork | only |-> <lifecycle>mine</lifecycle> <!-.Whether this mojo requires Maven to execute in online mode --> <requiresOnline>true</requiresOnline> <!-.Whether this parameter is required for mojo execution --> <required>true</required> 280 . |-> <goal>goal</goal> </execute> <!-. maven.Whether the user can edit this parameter directly in the POM | configuration or the command line |-> <readonly>true</readonly> <!-.An alternative configuration name for this parameter --> <alias>otherProp</alias> <!-.project. this element will provide advice for an | alternative parameter to use instead.The Java type of this mojo parameter --> <type>org.Appendix A: Resources for Plugin Developers <!-. it provides advice on which alternative mojo | to use.property}</expression> <!-.The expression used to extract this parameter's value --> <expression>${my.When this is specified.artifactId}</defaultValue> <!-. |-> <deprecated>Use something else</deprecated> </parameter> </parameters> <!-.The description of this parameter --> <description>Test parameter</description> <!-.If this is specified.The default value provided when the expression won't resolve --> <defaultValue>${project. </description> <!-. |-> <deprecated>Use another mojo</deprecated> </mojo> </mojos> </pluginMetadata> 281 .MavenProject</type> <!-.apache.The description of what the mojo is meant to accomplish --> <description> This is a test. . you src/main/java/ src/main/resources/ src/main/filters/ src/main/assembly/ src/main/config/ src/test/java/ src/test/resources/ src/test/filters/ Standard location for application sources.xml LICENSE. A license file is encouraged for easy identification by users and is optional. Standard location for application configuration filters. For example. may generate some sources from a JavaCC grammar. Standard location for test sources.txt target/ Maven’s POM. Standard location for test resources. Directory for all generated output. which is always at the top-level of a project. A simple note which might help first time users and is optional. Standard location for resource filters. This would include compiled classes. Standard location for test resource filters.txt README. target/generated-sources/<plugin-id> Standard location for generated sources. generated sources that may be compiled. Standard Directory Structure Table B-1: Standard directory layout for maven project content Standard Location Description pom. 284 . the generated site or anything else that might be generated as part of your build. Standard location for assembly filters.Better Builds with Maven B.1. Standard location for application resources. </project> 285 .Reporting Conventions --> <reporting> <outputDirectory>target/site</outputDirectory> </reporting> .Repository Conventions --> <repositories> <repository> <id>central</id> <name>Maven Repository Switchboard</name> <layout>default</layout> > <!-.maven.0. Maven’s Super POM <project> <modelVersion>4.org/maven2</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <!-.maven.2.0</modelVersion> <name>Maven Default Project</name> <!-.Appendix B: Standard Conventions B.Plugin Repository Conventions --> <pluginRepositories> <pluginRepository> <id>central</id> <name>Maven Plugin Repository</name> <url>. such as a JAR. for example to filter any values.3. Process the test source code. ready for packaging. for use as a dependency in other projects locally. 286 . Perform actions required after integration tests have been executed. Take the compiled code and package it in its distributable format. Generate any test source code for inclusion in compilation. Generate any source code for inclusion in compilation. Process and deploy the package if necessary into an environment where integration tests can be run. copies the final package to the remote repository for sharing with other developers and projects. Compile the test source code into the test destination directory Run tests using a suitable unit testing framework. Perform actions required before integration tests are executed. These tests should not require the code be packaged or deployed. for example to do byte code enhancement on Java classes. Compile the source code of the project. Description Validate the project is correct and all necessary information is available. Process the source code. for example to filter any values. Generate resources for inclusion in the package. This may involve things such as setting up the required environment. Install the package into the local repository. Create resources for testing. Copy and process the resources into the test destination directory.Better Builds with Maven B.. Post-process the generated files from compilation. Run any checks to verify the package is valid and meets quality criteria. Copy and process the resources into the destination directory. Done in an integration or release environment. This may including cleaning up the environment. Cargo Merging War Files Plugin .org/Merging+WAR+files Cargo Reference Documentation .net/config.org/eclipse/development/java-api-evolution. Bloch.codehaus.org/axis/java/ AxisTools Reference Documentation . June 8.codehaus.apache.com/docs/books/effective/ Web Sites Axis Building Java Classes from WSDL. Effective Java.codehaus. Bibliography Online Books des Rivieres.org/Containers Cargo Container Deployments .codehaus.. Sun Developer Network .sf.eclipse.org/ Checkstyle .codehaus. 287 .org/axistools-maven-plugin/ Cargo Containers Reference . Jim.org/Deploying+to+a+running+container Cargo Plugin Configuration Options . 2001. Joshua.sun. Evolving Java-based APIs.html#WSDL2JavaBuildingStubsSkeletonsAndDataTypesFromWSDL Axis Tool Plugin . sf.org/xdoclet-maven-plugin/ XDoclet Reference Documentation Tomcat Manager Web Application . 288 .org Maven Downloads .html Jester .apache. Mojo .ibiblio.net/bestpractices. EJB Plugin Documentation . J2EE Specification .html Cobertura .org/jetty6/maven-plugin/index. ibiblio.org/plugins/maven-clover-plugin/ DBUnit Java API . Introduction to the Build Life Cycle – Maven . Simian . Maven 2 Wiki .codehaus.html Ruby on Rails .sf.html PMD Rulesets .apache.net/ Clover Plugin .com. Maven Plugins . Jetty 6 Plugin Documentation .html Xdoclet2 .rubyonrails. PMD Best Practices .codehaus. Introduction to Archetypes .org XDoclet2 Maven Plugin .html POM Reference .sun.net/howtomakearuleset.Better Builds with Maven Checkstyle Available Checks . XDoclet EjbDocletTask .codehaus. Jdiff .sourceforge.codehaus.html XDoclet Maven Plugin .sf.mortbay.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask. Clirr .html Xdoclet .sf.org/maven-model/maven. 117. 126-132 creating 55-59. 193 Clancy. 255 Confluence format 79 container 62. 62. 234 definition of 39 artifactId 29. 131. 144. 129. 84 DayTrader 86-88. 59 APT format 79 archetypes creating standard project 233. 101-110. 97 HTTPd 212 Maven project 134. 152. 217 Tomcat 212 application building J2EE 85-88. 271 build life cycle 30. 269. 130 Changes report 182 Checkstyle report 181. 77. 135. 84 Separation of Concerns (SoC) 56 setting up the directory structure 56. 165. 235 conventions about 26 default 29 default build life cycle 286 Maven’s super POM 285 naming 56 single primary output per project 27 standard directory layout for projects 27 standard directory structure 284 standard naming conventions 28 copy/paste detection report 190 CPD report 181. 194-198 code improving quality of 202-205 restructuring 264 code restructuring to migrate to Maven 264 Codehaus Mojo project 134.Index A Alexander. 69. 95. 112. 129. 252 ASF 23 aspectj/src directory 243 aspectj/test directory 243 C Cargo 103-105. 122. 63. Jon 37 Berard. 124. 220. 222-233 creating standard project 234. 83 creating a Web site for 78. 57. 134-136. 74. 286 Butler. 190 B Bentley. 166 Software Foundation 22. Tom 207 classpath adding resources for unit tests 48 filtering resources 49-51 handling resources 46. 48. 224 Continuum continuous integration with 218. 61. 114. 50-52 preventing filtering of resources 52 testing 35 clean life cycle 270 Clirr report 182. Edward V. 63-65. 103-105. 80-82. 288 binding 134. 55 bibliography 287. 245. 84 modules 56 preparing a release 236-240 project inheritance 55. 202-205 Cobertura 181. 114-122. 160. 163. 90 deploying 55. 209 setting up shared development environments 209-212 Community-oriented Real-time Engineering (CoRE) 208 compiling application sources 40. 112. 116. 124. 59. 86. 107. 76. 215. 55. 23. 84. 186. 111. 90-99. 268. 84 managing dependencies 61. 100. Christopher 25 Ant metadata syntax 279-281 migrating from 241-264 Apache Avalon project 193 Commons Collections 254 Commons Logging library 252 Geronimo project 86. 43 tests 254. 112. 61. 87. Samuel 167 289 . 213. 191. 166 collaborating with teams introduction to 207 issues facing teams 208. 41. 184. 41 main Spring source 250-254 test sources 42. xml 39. 245. 124 deploying applications methods of 74. 205 212 J J2EE building applications 85-88. 124. 90 Quote Streamer 87 default build life cycle 41. 259. 261 tiger/src 243 tiger/test 243 directory structures building a Web services client project 94 flat 88 nested 89 DocBook format 79 DocBook Simple format 79 E Einstein. 88. 49-51 structures 24 dependencies determining versions for 253 locating dependency artifacts 34 maintaining 199-201 organization of 31 relationship between Spring modules 243 resolving conflicts 65-68 specifying snapshot versions for 64 using version ranges to resolve conflicts 65-68 Dependency Convergence report 181 Deployer tool 122. 245 standard structure 284 test 243. 114122. 101-110. 77 to the file system 74 with an external SSH 76 with FTP 77 with SFTP 75 with SSH2 75 development environment 209-212 directories aspectj/src 243 aspectj/test 243 m2 244. 245 tiger 258. 125 Geronimo specifications JAR 107 testing applications 126. 112. 185 JDK 248 290 . Albert EJB building a project canonical directory structure for deploying plugin documentation Xdoclet external SSH 21 95-99 95 103-105 99 100. 124. 126-132 deploying applications 122. 76. 47. David Heinemeier hibernate3 test 26 248 I IBM improving quality of code internal repository 86 202. 286 conventions 29 location of local repository 44 naming conventions 56 pom.Better Builds with Maven D DayTrader architecture 86. 127.lang. 184. 252 H Hansson. 91-99. 243.Object 29 mojo metadata 278-281 Spring Framework 242-246. Richard filtering classpath resources preventing on classpath resources FindBugs report FML format FTP 133 49-51 52 194 79 77 G groupId 29. 101 76 F Feynman. 245 mock 243 my-app 39 src 40. 182. 87 building a Web module 105-108 organizing the directory structure 87. 69. 204. 250 url 30 Java EE 86 Javadoc class-level annotations 278 field-level annotations 279 report 181. 129-132 Java description 30 java. 34. 40 default build life cycle 69. 144-163. 150-152 capturing information with Java 141-147 definition of 134 implementation language 140 parameter expressions 272-275. 156-163 basic development 141. 166 artifact guideline 87 build life cycle 30.Index Jester JSP JXR report 198 105 181-183 McIlroy. 32-35 26 P packaging parameter injection phase binding plugin descriptor 30. 41 configuration of reports 171-174 creating your first project 39.. 142. 136 developing custom plugins 133-140. 223-240 compiling application sources 40. 53 groupId 34 integrating with Cobertura 194-198 JDK requirement 248 life-cycle phases 266. 286 developing custom 135. 30. 138 291 . 144. 277-281 phase binding 134-136 requiring dependency resolution 155 writing Ant mojos to send e-mail 149-152 my-app directory 39 K Keller. 41 collaborating with 207-221. 142. 245 135 134-136 137. 154. 79 getting started with 37-46. 165 documentation formats for Web sites 78. 48-51. 146-148.. 267 migrating to 241-254 naming conventions 56 origins of 23 plugin descriptor 137 plugin descriptor 138 preparing to use 38 Repository Manager (MRM) 213 standard conventions 283-286 super POM 285 using Ant tasks from inside 261. 45 32 35 35 M m2 directory 244. 245 Maven Apache Maven project 134. 267 136 44 44. Helen 85 L life cycle default for jar packaging local repository default location of installing to requirement for Maven storing artifacts in locating dependency artifacts 266. 252 55. 129. 170. 96. 70. 39. 92. 182-184. 191. 242-246. 214. 198 Tag List 181. 64. 188-191. 276. 186. 186. 226. 61. 243. 204. 211. 185 JavaNCSS 194 JXR 181-183 PMD 181. 197. 227. 181. 142. 203. 68.xml 29. 35 manager 213 types of 32 restructuring code 264 Ruby on Rails (ROR) 288 running tests 256 S SCM SFTP site descriptor site life cycle snapshot Spring Framework src directory SSH2 Surefire report 35. 124. 237-239. 101. 181 separating from user documentation 174-179 standard project information reports 81 292 V version version ranges 30. 221. 170. 97. 190 selecting 180. 193-206 inheritance 55. 193. 171. 197. 194. 278-281 developing custom 133. 103. 106-109. 173. 215. 194 repository creating a shared 212-214 internal 212 local 32. 115. 285 tiger 260 pom. 43 254. 117-122 deploying Web applications 114. 88-90. 113-117. 198 T Tag List report test directory testing sources tests compiling hibernate3 JUnit monitoring running tiger/src directory tiger/test directory Twiki format 181. 245 42. 177179. 126. 134. 229.Better Builds with Maven plugins definition of 28. 65-68 W Web development building a Web services client project 91-93. 197. 184 Dependency Convergence 181 FindBugs 194 Javadoc 181. 206. 123. 190 POM 22. 186-188. 181. 155-163. 186-188. 192. 250 40. 193. 156. 193 Clirr 182. 284 preparing to use Maven 38 profiles 55. 212. 228 35. 84 monitoring overall health of 206 project management framework 22 Project Object Model 22 Surefire 169. 194 243. 255 248 243 194-198 256 243 243 79 Q Quote Streamer 87 R releasing projects 236-240 reports adding to project Web site 169-171 Changes 182 Checkstyle 181. 215 creating an organization 215-217 creating files 250 key elements 29 super 29. 59. 127. 34. 172-174. 135 using 53. 136 Plugin Matrix 134 terminology 134. 134 developer resources 265-274. 183-185. 245 75 169. 144-153. 187-189. 245. 54 PMD report 181. 230. 194. 169. 235. 110-112. 114 X XDOC format Xdoclet XDoclet2 79 100. 182. 174. 196. 194. 184. 230. 186. 101 102 . 190 creating source code reference 182. 202-205 configuration of 171-174 copy/paste detection 190 CPD 181. 137-140. 66. 63. 165 development tools 138-140 framework for 135. 234. 72-74 project assessing health of 167-180. 181. 172-174. 186. 117 improving productivity 108. 236-239 75 81 271 55.
https://pt.scribd.com/doc/49912382/BetterBuildsWithMaven
CC-MAIN-2017-09
refinedweb
72,772
52.36
I have been working on some AR training and found a lack of usable VuMark scripts. While I am not a programmer, I put something together that works for a proof of concept I am working on, and figured I should post it here for anyone to use and hopefully save you some time. The script goes onto an empty game object which I will call the "PickerObject" that is a child of the VuMark. Empty game objects are made with the exact VuMark Id names (I made this using numeric VuMark Ids) These objects are parented to the PickerObject . Anything you then parent to the empty game objects named with the VuMark Ids will become active when the VuMark Id is found. using System.Collections; using System.Collections.Generic; using UnityEngine; using Vuforia; /* *******Trigger a scene with Matching Vumark Id*********** Place script on a empty game object that is a child of the VuMark. Create empty game objects with matching VuMark Instance Ids. inside of those empty game objects put in what you want to be active when the VuMark Id id tracked. 1. Vumark a. Empty game object with this script (child of VuMark) i. Empty game object- named same as VuMark Id (child of empty game object with script) 1. Stuff you want to show up whe the Vumark ID is found (Child of empty game object with VuMark Id name) i. Empty game object- named same as VuMark Id (child of empty game object with script) 1. Stuff you want to show up whe the Vumark ID is found (Child of empty game object with VuMark Id name) You can make as many as you want. The script will look down the list and if it finds a matching VuMarkId and a child of the object the script is on, it will make that object active, if not all objects stay inactive If you find any errors please email me Rich */ public class VuMarkIdToScene : MonoBehaviour { public VuMarkTarget vumark; private VuMarkManager mVuMarkManager; void Start () { mVuMarkManager = TrackerManager.Instance.GetStateManager().GetVuMarkManager(); } void Update () { foreach (var bhvr in mVuMarkManager.GetActiveBehaviours()) { vumark = bhvr.VuMarkTarget; var VuId = vumark.InstanceId; print ("Found ID number " + VuId); foreach (Transform child in transform) { if (child.name == VuId.ToString()) { child.gameObject.SetActive (true); } else { child.gameObject.SetActive(false); } } } } } Thanks a lot for this script ! It works much better than the one I have. I'm also trying to track 2 vumarks simultaneously but it shows the same vumark ID. But works fine when trying to detect just one. Would you have a script that enables this ? Thanks again
https://developer.vuforia.com/forum/vumarks/working-vumark-unity-script
CC-MAIN-2020-45
refinedweb
431
63.09
New Java 9 features at a glance © Shutterstock / koya979 We’ve been waiting for this moment for the past 3+ years but now it’s finally here — Java 9 has been released. In this article, Henning Schwentner presents the new version’s features. New modular system: Project Jigsaw in Java 9 With version 8, Java did get a lot of new features, of which the support of Lambdas had certainly the biggest impact. Also, the enabled bulk operations on collections and the new date-time API improved the daily life of the Java developer. However, a feature that has been wanted for a long time has not made it into Java 8 and became the trademark of the freshly released Java 9 version — the modularization with Project Jigsaw. Project Jigsaw addresses two problems that have hitherto affected Java, namely the “JAR hell” and the lack of a strong encapsulation mechanism above classes. From the beginning, Java had a package construct. A class can have one of two visibility levels within a package. Either it is public, in this case, the class can be accessed from anywhere. If it is not public, it can only be accessed from within the package. But packages cannot be nested. As a result, you either have unstructured „big ball of mud“— packages or those that consist only of public classes. JARs (Java Archives) are just a set of public class files plus data. They are not components and do not offer encapsulation. Therefore, they do not have an interface, or rather, the interface of a JAR is all that the JAR contains because it can’t hide anything from outside access due to a lack of encapsulation. Version 9 gives Java the possibility to define modules. A module is a named, self-describing program component that consists of one or more packages (and data). Modules can be defined as in Listing 1. module de.module.a { exports de.module.a.paket.x; } module de.module.b { exports de.module.a.paket.y; exports de.module.a.paket.z; } module de.module.c { requires de.module.a; requires de.module.b; } This interface definition indicates which packages a module offers to the outside world (with the keyword exports), and which modules it requires from the outside (with the keyword requires). Attention: This is not a typo in the previous sentence; a module exports packages, but requires modules. This can be confusing at first glance since packages and modules have the same or very similar names by convention. All packages of a module that are not explicitly exported can only be used within the module. If you try to access them from outside the module, a compiler error occurs. Using Modular JARs as Modules Now that we have seen how to declare a module, let’s answer another question: where do we write the module declaration? The convention says that you declare it in a source code file called module-info.java, and place it at the root of the file hierarchy of the module. The compiler then translates this into the file module-info.class. The name “module-info” contains the hyphen on purpose because it is an invalid class name. This way, existing code will not be damaged. The Java file is then called module declaration and the class file module descriptor. If you have a module declared in this way, it is possible to create a modular JAR from it. It is structured like a conventional JAR file, with the difference that it has a module-info.class file in its root directory. Such a modular JAR can be used as a module. For reasons of downward compatibility, it can also be used as a classic JAR file and in a classpath. Then the module-info.class is simply ignored. Speaking of classpath: With the introduction of the module concept, it is replaced by a modulepath. In the modulepath you can then specify where specific modules can be found in the file system. In the past, there was a classpath with a bunch of JARs in disorder, which could use each other uncontrollably. What’s more, everything within the JARs could be accessed. Now we can use the module mechanism to clearly define which module should and can use which other modules. This makes it possible to use several versions of the same library parallel. For example, module A can use the library in version 1, module B in version 2 and finally, module C can use the two modules A and B. Domain-driven design with Java 9 With the module concept, the architecture of software can be expressed much better. For example, layers can be represented as modules and their interfaces can be defined clearly. The compiler can at least partially detect and prevent architectural violations. Let’s take an example of a banking application, designed with domain-driven design (Listing 2 and Fig. 1). module de.wps.bankprogramm.domainLayer { exports de.wps.bankprogramm.domainLayer.valueObject; exports de.wps.bankprogramm.domainLayer.entity; } module de.wps.bankprogramm.infrastructurelayer { exports de.wps.bankprogramm.infrastructureLayer.database; } module de.wps.bankprogramm.applicationLayer { requires de.wps.bankprogramm.infrastructureLayer; requires de.wps.bankprogramm.domainLayer; exports de.wps.bankprogramm.applicationLayer.repositories; } module de.wps.bankprogramm.uiLayer { requires de.wps.bankprogramm.domainLayer; requires de.wps.bankprogramm.applicationLayer; } The four layers of the system are implemented as modules. The module of the specialized logic layer (i. e. the module domainLayer) is declared in such a way that it has no dependencies to other modules. We don’t want to pollute our business code with dependencies on technical code. It contains a package for the entities of our system and one for its value objects. The repositories, in turn, can access the infrastructure layer (module infrastructureLayer). Therefore, in this design, they are plugged into the application layer module (applicationLayer). According to the above declaration, it may access the infrastructure and business logic layer. The user interface layer (uiLayer module) can then access the user logic and application layer. Using the package with the database access code would result in a compiler error because it is in the infrastructure package and it was not specified in the requires of uiLayer. The assignment of repositories to the application layer is architecturally not completely clean but was done here in order to avoid making the example too complicated. Cutting the JDK into pieces The module mechanism is interesting for many projects, but especially for the JDK itself. This is where the name of the project, Jigsaw, comes from. And with this jigsaw Java should be divided into modules. Up to now, the entire JRE must always be delivered, even if only small programs are to run that do not have a GUI or do not access a database. With Java 9, JRE and JDK are broken down into modules themselves. This allows each program to define what it needs, reducing memory usage and improving performance. Java standard modules include java.base, java.sql, java.desktop and java.xml. The basic module java.base is always implicitly included — just as the package java.lang does not need to be imported separately. The module java.base will contain the packages java.lang, java.math and java.io. For the modules of the JDK itself, JAR files are not sufficient, because they must also contain native code, for example. Therefore, the so-called JMOD files were introduced here. A direct quote of Mark Reinhold, chief architect of Java: “JMOD files are JAR files on steroids”. Project Jigsaw is certainly the big change that comes with Java 9, and it is also its key feature. But there are also a number of other features that will make the developer’s life easier. What else happens in Java 9 Many programming languages have a read-eval-print loop (REPL), i. e. a kind of command line that directly executes code in this language and outputs the result. Java didn’t have something like this in the standard JDK. There are third-party products like BeanShell or Java REPL and a plug-in for IntelliJ IDEA. The Kulla project introduces the JShell to the JDK — the official REPL for Java. This offers hope that Java will be easier to learn. An interactive mode can give the programmer much faster feedback than the classic write/compile/execute cycle. With Java 9, there is now the CLI program jshell for the command line. An API is also provided so that other applications can use this functionality. This is especially interesting for the IDE manufacturers, who want to use the JShell in Eclipse, NetBeans and Co. can be installed. Unicode is supported Unicode exists to encode the characters of different languages. The standard is constantly being extended, and Java is not yet supported for the last two releases 7.0 and 8.0. Unicode 7.0 contains improvements for bidirectional texts, i. e. those that contain sections in both Latin and non-Latin characters. With version 8.0, for example, the emojis are extended by smileys in different skin colors and new faces like that of Mother Christmas. Furthermore, a separate JEP (No. 226) allows you to save property files in UTF-8. Only ISO 8859-1 was previously supported as encoding. The ResourceBundle API is extended for this purpose. Creating Collections with ease It is not too difficult to define several objects at once, using arrays. String[] firstnames = { "Joe", "Bob", "Bill" }; Unfortunately, this is not so easy with Collections, yet. To create a small, unchangeable Collection, one must construct and assign it, subsequently add elements and finally build a surrounding wrapper. List firsnamesList = new ArrayList<>(); firstnamesList.add("Joe"); firstnamesList.add("Bob"); firstnamesList.add("Bill"); firstnamesList = Collections.unmodifiableList(firstnamesList); Instead of having one line of code, we have five lines at once. Furthermore, it cannot be expressed as a single expression. There are several alternatives, e.g. Arrays.asList(), but if you are to define plenty of values, it is going to take a long time: Set firstnamesQuantity = Collections.unmodifiableSet( new HashSet(Arrays.asList("Joe", "Bob", "Bill"))); Java 9 thus introduces convenience methods, which make similar things easier to express. List firsnamesList = List.of("Joe", "Bob", "Bill"); With varargs, it will be possible to transfer a different number of parameters to these factory methods. This functionality is offered for Set and List, or in a comparable form also for Map. Due to method implementations to interfaces in Java 8, the so-called default methods, it is possible to define those convenience methods directly within the interfaces of List, Set and Map. HTTP/2 support in Java 9 HTTP, the protocol for transferring web pages, was adopted in its current version 1.1 as early as 1997. It was not until 2015 that the new version 2 became standard. The new versions aim is to reduce latency, to thus allow for faster loading of web pages. This is achieved through various techniques: - Header compression - Server-Push - Pipelining - Multiplexing of multiple HTTP requests across a TCP connection At the same time, compatibility with HTTP 1.1 remains maintained. Large parts of the syntax even remain unchanged; for example, the methods (GET, PUT, POST and so on), the URI, status codes, and header fields. Java will have out-of-the-box support for HTTP/2 with the implementation of JEP 110. In addition, the outdated HttpURLConnection-API is being replaced. It was created during the days of HTTP 1.0 and used a protocol-agnostic approach. This suited the nineties, as it was not yet certain how successful HTTP was going to be. Nowadays, however, support for e.g. Gopher, is less important. ALPN is also to be supported. In the New World, you can then use a contemporary Fluent-API. HttpResponse response = HttpRequest .create(new URI("") .body(noBody()) .GET() .send(); The resulting HTTP response can then be used to query status code and content: int statusCode = response.responseCode(); String body = response.body(asString()); Reduced memory consumption with compact strings As of version 1, strings in Java are displayed using the class java.lang.String. From the beginning, this class contained an array of char. This data type occupies two bytes in Java. This makes it easy to display characters in UTF-16 and not just support the Latin alphabet. However, many applications only use characters from the Latin-1 encoding, which only requires one byte. In this case, every second byte is empty and wastes memory space. JEP 254, therefore, introduces an implementation of the String class, which contains a byte array plus an encoding field instead of a char array. The encoding field specifies whether the string contains either a classical sequence of UTF-16 characters occupying two bytes each or a sequence of Latin-1 characters occupying only one byte each. The encoding in which the respective string is created should be recognized automatically by the strings content. The pleasant thing with this optimization is that you automatically benefit from it. It is, in fact, a mere implementation detail, which maintains 100% compatibility with old Java versions. Applications which use many strings will thus significantly reduce their memory requirements – by simply installing the latest version of Java. In addition to the String class, related classes such as StringBuilder and StringBuffer, as well as the HotSpot VM, are adapted. JavaDoc enhancements in Java 9 Until now, JavaDoc was only able to produce HTML in the outdated version of 4.01. With Java 9, it will be possible to create HTML5. Therefore, the command javadoc is to specify which version of HTML code to generate. The explicit non-goal of the associated JEP 224 is to abolish the three frames structure. Hopefully, this will be done with a future version. Furthermore, the generated HTML pages are to be provided with a search mechanism for searching for certain Java elements. The results are then categorized according to “Modules”, “Package” or “Types”. Automatic scaling of HiDPI graphics On the Mac, the JDK already supports retina displays, but on Linux and Windows, it does not. There, Java programs may look so small on current high-resolution screens, that they cannot be used. This is because pixels are used for size calculation on these systems – regardless of how large a pixel actually is. And the fun part of high-resolution displays is after all, that pixels are very small. JEP 263 extends the JDK in such a way, that the size of pixels is also taken into account for Windows and Linux. For this purpose, more modern APIs are used than hitherto: Direct2D for Windows and GTK+ instead of Xlib for Linux. Graphics, windows, and text are thereby scaled automatically. JEP 251 also provides the ability to process multi-resolution images, i.e. files which contain the same image in different resolutions. Depending on the DPI metrics of the respective screen, the image is then used in the appropriate resolution. What else comes in Java 9? Like any other Java release, Java 9 contains a number of minor details and updates. These include: - The new ARM architecture AArch64, which raises ARM processors up into the 64-bit tier, is now being supported. - As of version 1.2, Java uses its own proprietary format for storing cryptographic keys: JKS. JEP 229 new introduces the standard file format PKCS12 in Java. - Update 40 of Java 8 introduced the Garbage Collector G1 (Garbage First). Java 9 now elevates G1 to standard garbage collector status. - Up to Java 8, the Image I/O Framework does not support the image format TIFF. This will be changed with JEP 262 and javax.imageio will be extended accordingly. - With Rhino, Java has a JavaScript execution environment. For IDEs and similar tools, JEP 236 releases the previously only internally available API of the parser publically, for accessing AST. Prospects: What comes after Java 9? Originally, this article was supposed to start with: “In September 2016, the new Java version 9 will be released”. The fact that it is now a whole year later is surely not too bad because many of the changes fall into the “housekeeping” category. So, the current versions of the Unicode and HTTP standards are finally supported. There are also minor changes which make developer life easier, such as a more convenient creation of Collections and the implementation of compact strings. The most important feature is clearly the module concept of Project Jigsaw. Major projects and those which are in need of a small memory footprint are a key factor, will benefit from this development. Major projects benefit because the problem of JAR-hell is on one hand solved with the ModulePath, and on the other hand because the architecture of these systems can be expressed more clearly with modules. Memory-sensitive projects benefit because the JDK itself is divided into modules, and no longer needs to be loaded in its entirety. Another exciting topic is the prospect of what will come after Java 9. It seems as if Java 9 is the last version in the traditional version scheme. As for the future, Mark Reinhold has proposed to release a new release every six months and to name it the year and month of the release. The next step would be Java 18.3. In terms of content, the Valhalla project with value types is announced as the “Next Big Thing”. But for now, we are happy about Java 9!
https://jaxenter.com/new-features-in-java-9-137344.html
CC-MAIN-2021-10
refinedweb
2,907
58.18
I'm working on a program for class, and it specifies that we write code that will generate 10,000 random numbers 1-6, but then we have to display a dialog that shows how many 1's appeared, how many 2's appeared, and so on. this is what i have so far: public class ARG { public static void main(String[] args) { int[] A = new int[10000]; for(int i=0; i<A.length; i++) A[i] = 1 + (int) (6*Math.random()); for (int i=0; i<A.length; i++) System.out.println(A[i]); } } I just need some hints at how to display the number of 1's appeared, etc. I'm a beginner java programmer so if it's possible please use detail and not too many complicated concepts because I'm only in an intro to java class. Any help will be appreciated! thank you!
http://www.dreamincode.net/forums/topic/257902-java-dice-roll-homework-help/
CC-MAIN-2016-22
refinedweb
149
72.66
I was able to solve my problem with the following include file: #pragma once #ifndef __GCCXML__ #include <boost/python.hpp> typedef boost::python::object pyobject; #else class pyobject; #endif pyobject gaussian_diffs(pyobject imarray, double sigma); Thanks for your help =) Best regards, Boris. 26 февраля 2009 г. 11:40 пользователь Pertti Kelloma"ki < pertti.kellomaki at tut.fi> написал: > Roman Yakovenko kirjoitti: > >> 2009/2/26 Борис Казаков <boris.kazakov at gmail.com>: >> >>> May be there is a way to instruct gccxml not to parse some include file? >>> >> >> No, GCCXML is almost a complete C++ compiler, with "xml" instead of >> assembler as the backend. >> > > What might be possible is to provide an alternative include file > where the problematic parts are replaced by some dummy code, > and use ifdefs to select which file to include. Suppose > an include file declares X, Y, and Z, and Z is the problematic > part. The alternative file would then declare X and Y in the > same way as previously, and provide a dummy declaration of Z, > and the py++ code would omit Z from the generated bindings. > > This involves a lot of manual work and tracking of changes to > the original include file, but I think it would work as a last > resort. > -- > Pertti > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/cplusplus-sig/2009-February/014289.html
CC-MAIN-2017-30
refinedweb
225
65.83
On Thu, Nov 10, 2011 at 1:03 PM, "Martin v. Löwis" <martin at v.loewis.de>wrote: > > Actually, scratch that part of my response. *Existing* namespace > > packages that work properly already have a single owner > > How so? The zope package certainly doesn't have a single owner. Instead, > it's spread over a large number of subpackages. > In distro packages (i.e. "system packages") there may be a namespace-defining package that provides an __init__.py. For example, I believe Debian (system) packages peak.util this way, even though there are many separately distributed peak.util.* (python) packages. >". > Nick is speaking again about system packages released by OS distributors. A naive system package built with setuptools of a namespace package will not contain an __init__.py, but only a .nspkg.pth file used to make the __init__.py unnecessary. (In this sense, the existing setuptools namespace package implementation for system-installed packages is actually a primitive partial implementation of PEP 402.) In summary: some system packages are built with an owning package, some aren't. Those with an owning package will need to drop the __init__.py (from that one package), and the others do not, because they don't have an __init__.py. In either case, PEP 402 leaves the directory layout alone. A version of setuptools intended for PEP 402 support would drop the nspkg.pth inclusion, and a version of "packaging" intended for PEP 402 would simply not add one. -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/import-sig/2011-November/000370.html
CC-MAIN-2018-26
refinedweb
252
69.28
Some time ago I wrote up a short article here on how to use P/Invoke to use one of the Kerne32 APIs to get the Volume Serial Number off the specified hard drive. It turns out that there has been some feedback on this and related subjects and so I've decided to revisit it. The whole concept of "per machine" or "per seat" software licensing has tended to revolve around unique machine keys, dongles and other ways of tying a licensing scheme to a particular machine (or machines). Microsoft uses this in its "per seat" or "per processor" licensing scheme as do many other vendors. The Activation key used for Windows XP and Windows Server is also based on a composite of hardware - based "IDs". Most all of these can be easily obtained through the use of the Windows Management Instrumentation classes in the System.Managment namespace, through instances of the ManagementObject class. I'm not giving a tutorial on how to use these classes here; the MSDN documentation is fairly complete. We get lots of questions (usually but not always from newbies) from people who haven't yet learned Rule Number One of software development: RTFM. And of course, our advice, taking the meaning of the great Chinese proverb about teaching men to fish, is usually "RTFM!" (sometimes with a handy link to the actual place where the solution to the question or problem can be found). I say this seriously because it surprises me how many people have not invested the time to learn how to search on their own for answers. You have your Visual Studio.NET help built in where all you have to do is highlight the item and hit F1 to search the index, and you have a built-in search window too (which by the way accepts boolean modifiers). For those without Studio, you have the Framework SDK help which works the same way. Then, you have MSDN online with a huge searchable archive of Knowlege Base articles, help documentation and articles. Finally, you have Google, arguably the best web search engine ever invented, and all you need to do is learn the syntax for complex searches. Switch to the "Groups" tab and you can repeat your search on their 20 year history of all newsgroups. I think you get the picture. Probably the most useful of all of the methods I've created below is the one to retrieve the CPU ID, since this is the one piece of hardware that almost never changes. If you reformat your hard drive, your Volume Serial Number will be changed. As well, there are a number of program utilities that allow you to change the HD volume serial without reformatting the drive. Same with NICs- the network card in my machine now is not the same one that I had in there a year ago. However, the CPU is the same. I've put together a class library with what you see below, along with a nice Winforms - based test harness for it, and you can download the VS.NET 2003 solution for it at the bottom of this article. Enjoy! using System; using System.Text; using System.Runtime.InteropServices; using System.Management; namespace MachineInfo { public class GetInfo { /// <summary> /// return Volume Serial Number from hard drive /// </summary> /// <param name="strDriveLetter">[optional] Drive letter</param> /// <returns>[string] VolumeSerialNumber</returns> public string GetVolumeSerial(string strDriveLetter) { if( strDriveLetter=="" || strDriveLetter==null) strDriveLetter="C"; ManagementObject disk = new ManagementObject("win32_logicaldisk.deviceid=\"" + strDriveLetter +":\""); disk.Get(); return disk["VolumeSerialNumber"].ToString(); } /// <summary> /// Returns MAC Address from first Network Card in Computer /// </summary> /// <returns>[string] MAC Address</returns> public string GetMACAddress() { ManagementClass mc = new ManagementClass("Win32_NetworkAdapterConfiguration"); ManagementObjectCollection moc = mc.GetInstances(); string MACAddress=String.Empty; foreach(ManagementObject mo in moc) { if(MACAddress==String.Empty) // only return MAC Address from first card { if((bool)mo["IPEnabled"] == true) MACAddress= mo["MacAddress"].ToString() ; } mo.Dispose(); } MACAddress=MACAddress.Replace(":",""); return MACAddress; } /// <summary> /// Return processorId from first CPU in machine /// </summary> /// <returns>[string] ProcessorId</returns> public string GetCPUId() { string cpuInfo = String.Empty; string temp=String.Empty; ManagementClass mc = new ManagementClass("Win32_Processor"); ManagementObjectCollection moc = mc.GetInstances(); foreach(ManagementObject mo in moc) { if(cpuInfo==String.Empty) {// only return cpuInfo from first CPU cpuInfo = mo.Properties["ProcessorId"].Value.ToString(); } } return cpuInfo; } } } I want to add before closing that you can gain access to these objects through plain old VBScript in a VBS file: Function CpuID() Dim oWMI, oCpu Set oWMI = GetObject("winmgmts:") For Each oCpu in oWMI.InstancesOf("Win32_Processor") wscript.echo "CPU: " & oCpu.ProcessorID End Function Cpu
http://www.eggheadcafe.com/articles/20030511.asp
crawl-002
refinedweb
755
55.54
In this tutorial you will deploy a Consul datacenter to the Elastic Kubernetes Services (EKS) on Amazon Web Services (AWS) with HashiCorp’s official Helm chart or the Consul K8S CLI. You do not need to override any values in the Helm chart for a basic installation, however, in this guide you will be creating a config file with custom values to allow access to the Consul UI. Security Warning This tutorial is not for production use. By default, the chart will install an insecure configuration of Consul. Please refer to the Kubernetes deployment guide to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended to use a properly secured Kubernetes cluster or make sure that you understand and enable the recommended security features. »Prerequisites »Installing aws-cli, kubectl, and helm CLI tools To follow this tutorial, you will need the aws-cli binary installed, as well as kubectl and helm. Reference the following instruction for setting up aws-cli as well as general documentation: Reference the following instructions to download kubectl and helm: »Installing helm and kubectl with Homebrew Homebrew allows you to quickly install both Helm and kubectl on MacOS & Linux. Install kubectl with Homebrew. $ brew install kubernetes-cli Install helm with Homebrew. $ brew install kubernetes-helm »VPC and security group creation The AWS documentation for creating an EKS cluster assumes that you have a VPC and a dedicated security group created. The instructions on how to create these are here: You will need the SecurityGroups, VpcId, and SubnetId values for the EKS cluster creation step. »Create an EKS cluster At least a three node EKS cluster is required to deploy Consul using the official Consul Helm chart. Create a three node cluster on EKS by following the the EKS AWS documentation. Note: If using eksctl, you can use this command to create a three-node cluster: eksctl create cluster --name=<YOUR CLUSTER NAME> --region=<YOUR REGION> --nodes=3 »Configure kubectl to talk to your cluster Setting up kubectl to talk to your EKS cluster should be as simple as running the following: $ aws eks update-kubeconfig --region <region where you deployed your cluster> --name <your cluster name> You can then run the command kubectl cluster-info to verify you are connected to your Kubernetes cluster: $ kubectl cluster-info Kubernetes master is running at https://<your K8s master location>.eks.amazonaws.com CoreDNS is running at https://<your CoreDNS location>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy You can also review the documentation for configuring kubectl and EKS here: »Deploy Consul You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers as well as one client per Kubernetes node into your EKS cluster. You can review the Consul Kubernetes installation documentation to learn more about these installation options. »Create a values file To customize your deployment, you can pass a yaml file to be used during the deployment; it will override the Helm chart's default values. The following values change your datacenter name and enable the Consul UI via a service. global: name: consul datacenter: hashidc1 ui: enabled: true service: type: LoadBalancer . Run the command kubectl get pods to verify three servers and three clients were successfully created. $ kubectl get pods --namespace consul NAME READY STATUS RESTARTS AGE consul-5fkt7 1/1 Running 0 69s consul-8zkjc 1/1 Running 0 69s consul-lnr74 1/1 Running 0 69s consul-server-0 1/1 Running 0 69s consul-server-1 1/1 Running 0 69s consul-server-2 1/1 Running 0 69s »Accessing the Consul UI Since you enabled the Consul UI in your values file, you can run the command kubectl get services to find the load balancer DNS name or external IP of your UI service. $ kubectl get services --namespace consul NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE consul-dns ClusterIP 172.20.39.92 <none> 53/TCP,53/UDP 8m17s consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 8m17s consul-ui LoadBalancer 172.20.223.228 aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com 80:32026/TCP 8m17s kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 21m You can verify that, in this case, the UI is exposed at over port 80. Navigate to the load balancer DNS name or external IP in your browser to interact with the Consul UI. Click the Nodes tab and you can observe several Consul servers and agents running. »Accessing Consul with the CLI and API In addition to accessing Consul with the UI, you can manage Consul by directly connecting to the pod with kubectl. You can also use the Consul HTTP API by communicating to the local agent running on the Kubernetes node. Feel free to explore the Consul API documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes. »Kubectl To access the pod and data directory, you can remote execute into the pod with the command kubectl to start a shell session. $ kubectl exec --stdin --tty consul-server-0 --namespace consul -- /bin/sh This will allow you to navigate the file system and run Consul CLI commands on the pod. For example you can view the Consul members. $ consul members Node Address Status Type Build Protocol DC Segment consul-server-0 10.0.3.70:8301 alive server 1.10.3 2 hashidc1 <all> consul-server-1 10.0.2.253:8301 alive server 1.10.3 2 hashidc1 <all> consul-server-2 10.0.1.39:8301 alive server 1.10.3 2 hashidc1 <all> ip-10-0-1-139.ec2.internal 10.0.1.148:8301 alive client 1.10.3 2 hashidc1 <default> ip-10-0-2-47.ec2.internal 10.0.2.59:8301 alive client 1.10.3 2 hashidc1 <default> ip-10-0-3-94.ec2.internal 10.0.3.225:8301 alive client 1.10.3 2 hashidc1 <default> When you have finished interacting with the pod, exit the shell. $ exit »Using Consul environment variables You can also access the Consul datacenter with your local Consul binary by enabling environment variables. You can read more about Consul environment variables documented here. In this case, since you are exposing HTTP via the load balancer/UI service, you can export the CONSUL_HTTP_ADDR variable to point to the load balancer DNS name (or external IP) of your Consul UI service: $ export CONSUL_HTTP_ADDR= You can now use your local installation of the Consul binary to run Consul commands: $ consul members Node Address Status Type Build Protocol DC Partition Segment consul-server-0 10.0.3.70:8301 alive server 1.10.3 2 hashidc1 default <all> consul-server-1 10.0.2.253:8301 alive server 1.10.3 2 hashidc1 default <all> consul-server-2 10.0.1.39:8301 alive server 1.10.3 2 hashidc1 default <all> ip-10-0-1-139.ec2.internal 10.0.1.148:8301 alive client 1.10.3 2 hashidc1 default <default> ip-10-0-2-47.ec2.internal 10.0.2.59:8301 alive client 1.10.3 2 hashidc1 default <default> ip-10-0-3-94.ec2.internal 10.0.3.225:8301 alive client 1.10.3 2 hashidc1 default <default> »Next steps In this tutorial, you deployed a Consul datacenter to AWS Elastic Kubernetes Service using the official Helm chart or Consul K8S CLI. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture tutorial.
https://learn.hashicorp.com/tutorials/consul/kubernetes-eks-aws?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=eks
CC-MAIN-2022-40
refinedweb
1,292
54.93
Introduction In this article, we will look at how to build a simple web service with gRPC in .NET. We will keep our changes to minimal and leverage the same Protocol Buffer IDL we used in my previous post. We will also go through some common problems that you might face when building a gRPC server in .NET. Motivation For this article also we will be using the Online Bookshop example and leveraging the same Protobufs as we saw before. For those who aren’t familiar with or missed this series, you can find them from here. Introduction to gRPC Building a gRPC server with Go Building a gRPC server with .NET (You are here) Building a gRPC client with Go Building a gRPC client with .NET We will be covering steps 1 and 2 in the following diagram. Plan So this is what we are trying to achieve. Generate the .proto IDL stubs. Write the business logic for our service methods. Spin up a gRPC server on a given port. In a nutshell, we will be covering the following items on our initial diagram. the the .NET’s tooling to generate a sample gRPC project. Run the following command in at the root of your workspace. Once you run the above command, you will see the following structure. We also need to configure the SSL trust: As you might have guessed, this is like a default template and it already has a lot of things wired up for us like the Protos folder. Generating the server stubs Usually, we would have to invoke the protocol buffer compiler to generate the code for the target language (as we saw in my previous article). However, for .NET they have streamlined the code generation process. They use the Grpc.Tools NuGet package with MSBuild to provide automatic code generation, which is pretty neat! 👏 If you open up the Bookshop.csproj file you will find the following lines: <ItemGroup> <Protobuf Include=“Protosgreet.proto“ GrpcServices=“Server“ /> </ItemGroup> … We are going to replace greet.proto with our Bookshop.proto file. We will also update our csproj file like so: <Protobuf Include=“../proto/bookshop.proto“ GrpcServices=“Server“ /> </ItemGroup> Implementing the Server The implementation part is easy! Let’s clean up the GreeterService that comes default and add a new file called InventoryService.cs code BookshopServer/Services/InventoryService.cs This is what our service is going to look like. Let’s go through the code step by step. Inventory.InventoryBase is an abstract class that got auto-generated (in your obj/debug folder) from our protobuf file. GetBookList method’s stub is already generated for us in the InventoryBase class and that’s why we are overriding it. Again, this is the RPC call we defined in our protobuf definition. This method takes in a GetBookListRequest which defines what the request looks like and a ServerCallContext param which contains the headers, auth context etc. Rest of the code is pretty easy – we prepare the response and return it back to the caller/client. It’s worth noting that we never defined the GetBookListRequest GetBookListResponse types ourselves, manually. The gRPC tooling for .NET has already created these for us under the Bookshop namespace. Make sure to update the Program.cs to reflect the new service as well. app.MapGrpcService<InventoryService>(); // … And then we can run the server with the following command. We are almost there! Remember we can’t access the service yet through the browser since browsers don’t understand binary protocols. In the next step, we will to test our service 🎉 Common Errors A common error you’d find on macOS systems with .NET is HTTP/2 and TLS issue shown below. gRPC template uses TLS by default and Kestrel doesn’t support HTTP/2 with TLS on macOS systems. We need to turn off TLS (ouch!) in order for our demo to work. 💡 Please don’t do this in production! This is intended for local development purposes only. On local development builder.WebHost.ConfigureKestrel(options => { // Setup a HTTP/2 endpoint without TLS. options.ListenLocalhost(5000, o => o.Protocols = HttpProtocols.Http2); }); Testing the service. 💡 Note: gRPC defaults to TLS for transport. However, to keep things simple, I will be using the `-plaintext` flag with `grpcurl` so that we can see a human-readable response. How do we figure out the endpoints of the service? There are two ways to do this. One is by providing a path to the proto files, while the other option enables reflection through the code. Using proto files If you don’t want to enable reflection,. Now, let’s say we didn’t have reflection enabled and try to call a method on the server. We can expect that it will error out. Cool! Enabling reflection While in the BookshopServer folder run the following command to install the reflection package. Add the following to the Program.cs file. Note that we are using the new Minimal API approach to configure these services builder.Services.AddGrpcReflection(); // Enable reflection in Debug mode. if (app.Environment.IsDevelopment()) { app.MapGrpcReflectionService(); } Conclusion As we have seen, similar to the Go implementation, we can use the same Protocol buffer files to generate the server implementation in .NET. In my opinion .NET’s new tooling makes it easier to generate the server stubs when a change happens in your Protobufs. However, setting up the local developer environment could be a bit challenging especially for macOS. Feel free to let me know if you have any questions or feedback. Until next time! 👋 References
https://online-code-generator.com/building-a-grpc-server-in-net/
CC-MAIN-2022-40
refinedweb
924
59.4
1.1 ! mspo 1: **This article is a stub. You can help by editing it.** ! 2: ! 3: The file **mk.conf** is the central configuration file for everything that has to do with building software. It is used by the BSD-style *Makefiles* in */usr/share/mk* and especially by [[pkgsrc/pkgsrc]]. Usually, it is found in the */etc* directory. If it doesn't exist there, feel free to create it. ! 4: ! 5: Because all configuration takes place in a single file, there are some variables so the user can choose different configurations based on whether he is building the base system or packages from pkgsrc. These variables are: ! 6: ! 7: * BSD_PKG_MK: Defined when a pkgsrc package is built. ! 8: * BUILDING_HTDOCS: Defined when the NetBSD web site is built. ! 9: * None of the above: When the base system is built. The file /usr/share/mk/bsd.README is a good place to start in this case. ! 10: ! 11: A typical **mk.conf** file would look like this: ! 12: <pre><code> ! 13: # This is /etc/mk.conf ! 14: # ! 15: ! 16: .if defined(BSD_PKG_MK) || defined(BUILDING_HTDOCS) ! 17: # The following lines apply to both pkgsrc and htdocs. ! 18: ! 19: #... ! 20: LOCALBASE= /usr/pkg ! 21: #... ! 22: ! 23: .else ! 24: # The following lines apply to the base system. ! 25: ! 26: WARNS= 4 ! 27: ! 28: .endif ! 29: </code></pre>
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/pkgsrc/mk.conf.mdwn?annotate=1.1;sortby=author
CC-MAIN-2021-39
refinedweb
223
78.96
Using XML Style Name Space Attributes In ColdFusion I was just reading over on Truths and Lies a post by Christ Scott about using Meta Data in ColdFusion. I have used meta data before and it is pretty sweet-ass. But what was very interesting to me about this post was a comment by Scott Arbeitman who states that you could use XML style name space attribute notation (as in "cs:" for Cold Spring) in ColdFusion. This is pretty interesting. To test this out, I did a little demo for myself: <!--- Define a standard ColdFusion user defined function but add some "kinky" name space attributes to the declaration. ---> <cffunction name="Test" access="public" returntype="string" output="false" hint="Returns a string" <!--- Add custon attributes for kinky name space. ---> kinky: <cfreturn "This is a test for XML style attributes" /> </cffunction> Notice that the Author and DateCreated attributes are for the "kinky" name space only. Now, if I dump out the function object: <!--- Dump out the UDF. ---> <cfdump var="#VARIABLES.Test#" label="Test() UDF" /> ... you will see that nothing has changed: However, if you dump out the meta data for the UDF: <!--- Dump out meta data. ---> <cfdump var="#GetMetaData( VARIABLES.Test )#" label="Test() UDF Meta Data" /> ... you will see that the XML style attributes are keys in the meta data structure: Very interesting. Once it is in the meta data object, it is very easy to access as a structure key: <!--- Get the UDF meta data. ---> <cfset objMetaData = GetMetaData( VARIABLES.Test ) /> <!--- Get the author. ---> <cfset strAuthor = objMetaData[ "kinky:author" ] /> Now, I know ZERO about name spaces (and not much more about XML) and in fact, I just wrote a blog entry about stripping them out, so I don't know what this would be used for. But I do like the idea of being able to have attributes that don't get confused with the inherent ColdFusion attributes (which I guess might be the whole idea behind name spaces). Reader Comments Excellent use of the 'kinky' namespace! I was discussing this strategy a while back with Peter Farrell and suggested using the 'titties' namespace. You can be 100% positive adobe's never ever ever gonna use that!'ve seen a framework (Transfer, I believe) that has two attributes for return types of functions. One is always "any", probably for performance reasons, and the other is the real CFC type, probably for documentation. @Chris, Ha ha ha ha ha ha ha. Excellent :) @Scott, That makes sense to me. I guess my confusion has always been in that things that are "name-spaced" are things that don't conflict as-is. However, the future-proofing makes much more sense. You don't happen to know why name-spaces complicated things like XPath searches do you?? find data that don't care about. Searching for nodes with namespace prefixes is as simple as using XPath with the prefix included in your search string. For example: "//myprefix:mynode". ColdFusion's XPath implementation has a "feature" where empty namespaces must be prefixed with the empty namespace prefix. For example: ":/mynode". You'll know you're dealing with an empty namespace as compared to no namespace at all if you see the attribute xmlns="URI" in your document. Scott, I have taken a look at your comments and the comments made on my previous post... and I still cannot get it to work. If you take a look at this: .. and the example in it, can you please help me out. I cannot figure out why this expression is not working: is not finding the non-namespaced xml node. I get this error: An error occured while Transforming an XML document. Prefix must resolve to a namespace: Any suggestions? Awesome. I have a question that I'm hoping someone can help me with. I'm reading in an RSS feed and creating a fresh one using CF8, compliant with iTunes namespace for podcasting. I need to add in ns details like "itunes:author" but CF isn't liking the way I'm doing it. The feed compiles perfectly until I start placing these in, but I hope there is a way around it. A sample code is below. Thank you very much in advance for any help you can provide. <cfset tempxml.rss.XmlChildren[1].XmlChildren[i].enclosure = XmlElemNew(tempXml, "enclosure")> <cfset tempxml.rss.XmlChildren[1].XmlChildren[i].enclosure.XmlAttributes["url"] = "#blogPosts.linkhref[i]#"> <cfset tempxml.rss.XmlChildren[1].XmlChildren[i].enclosure.XmlAttributes["length"] = "#blogPosts.linklength[i]#"> <cfset tempxml.rss.XmlChildren[1].XmlChildren[i].enclosure.XmlAttributes["type"] = "#blogPosts.linktype[i]#"> ])#")#"> Many thanks, Matt @Matt, While I love XML more than chocolate, I am not very good at all with namespaces. What if you tried this: ])#")#"> Notice that I am putting "itunes:duration" in a quoted struct-key. Not sure if that will work, but it should be a step in the more correct direction.
https://www.bennadel.com/blog/499-using-xml-style-name-space-attributes-in-coldfusion.htm
CC-MAIN-2021-31
refinedweb
816
66.94
Since Python is not specifically designed for web development, a number of technologies created by Python users exist that aim to provide a web development environment. While the exact approach to the situation varies among each framework, a few of these frameworks really stand out in the crowd. One such framework is Karrigell. Read on to learn more. The first two methods Karrigell presents to developers are scripts and services. A script is simply a Python script that uses print to output to the user's browser. If you haven't done so already, create a testarea directory, and we can begin our first script. Create the file test.py: print "<center>"print "Hello!"print "<br /><br />"print "Karrigell is configured and working."print "</center>" Point your browser to the file, and if you have Karrigell set up as described above, you should see the message described above. Form data is fairly easy to handle with Python scripts. Let's create a simple script whose output differs depending on whether the user has specified his or her name in a form. Name it askName.py: if QUERY.has_key ( "name" ): print "Your name is", _name + "."else: print "What is your name?<br />" print "<form>" print "<input type='text' name='name' /><br />" print "<input type='submit' value='Proceed' />" print "</form>" Services are written like Python scripts, too. However, they are designed to map requests to functions defined by the user. The desired function is passed along in the URL after the name of the service. For example, the following URL would call the test function of the service test.ks: Let's actually create the test.ks service: def index(): print "Index function."def test(): print "Test function." If you call the script without passing a function name, then you will be redirected to the index function. If you call the script passing the test function name, then the test function will be executed. Attempting to call a function not defined will produce an error. Configuring services to accept form data is quite easy. Let's recreate askName.py as the service askName.ks: def index(): print "What is your name?<br />" print "<form action='nameSubmit'>" print "<input type='text' name='name' /><br />" print "<input type='submit' value='Proceed' />" print "</form>"def nameSubmit ( name ): print "Your name is", name + "." Of course, making every single one of your service's functions accessible to the outside world could be a security hazard. To prevent users from accessing certain functions, simply prefix them with an underscore: def _private(): pass Attempting to access the _private function will result in an error message.
http://www.devshed.com/c/a/Python/Karrigell-for-Python/1/
CC-MAIN-2013-20
refinedweb
436
67.76
Slashback: Beetle, Reading, Streams 57 Can you read me? Over. With both feet in the stream of continuing evolution and convergence of distributed voting, online metaknowledge and probably a few other things, Johnathan Nightingale has created a site called Canonical Tomes, lately featured on Kuro5hin. It's a really cool way to approach the "top picks" in a given subject, and fun to browse especially in the fields you're not very familiar with: the trick is a community voting system -- visit it and pick your favorites. It also raises the question, though, of how to avoid an early lead from remaining permanent; how do new but excellent books gain a foothold? And what about situations where the popular books aren't the best ones? Kudos to Johnathan for putting this together, now it's your turn to point out the best books in your field to others. Gee, Wally, I can colorize you from this "Linux" machine! starlady writes "Linux.com has an interview up with the developers of GStreamer. GStreamer, as mentioned here before, is a full featured multimedia framework with functionality for everything from mp3 playback to audio and video editing." An excerpt, quoting developer Wim Taymans: ." And though everyone is excited about video, things like this will make Linux a lot more capable as an audio capturing and manipulation platform, too. The real question is, did you get in trouble? Regarding the dangling beetle which caused the city fathers of San Francisco some small consternation, Ms Golden Gate 2001 writes: "In case you're still fretting, or wondering, here are a few first-hand pieces of info about the stunt (I hope you guys weren't really believing what you read in the papers, now were you? ;-) - the Bug was hung by cable and nylon webbing from a two-point suspension system (check the math -- that's not so easy: you try figuring out how to sling cable from *both* sides of the bridge to hang something nicely centred!) - the Bug was never in sight of any commmuter after the initial 1-minute deployment (*under* the bridge!) - the first to be informed were the traffic helicopters - the Ironworkers who cut it down (in minutes) thought the job was well done ("They could probably get a job as ironworkers") - the Bug was stripped of nasties, and as the Ironworkers said, it's a new habitat (just like when they sink a ship to create an artificial reef, only smaller, MUCH smaller) All that technology, and it's still nigh impossible to get the facts heard over the Brownian noise :-P At least this is a good forum for venting without swords! P.S. It's National Engineering Week in Canada! (Look out below!)" Goths, Vandals, and Slashdotters. (Score:2) I created an account and got one book added to an empty category, but now it's choking on my second try, and the home page won't even come up anymore. Barbarians, every one of us! Let's go rent a movie! -- Re:the goldenbeetle hack was cool (Score:2) However as a Canadian living in SF, I thought it was great. Re:About your sig (Score:1) Cheers, Karma Sink word of mouth, Usenet (Score:2) Another approach is posting to Usenet - when I was after a book on meteorology, I posted to sci.geo.meteorology [sci.geo.meteorology] (explaining my background and what I was looking for) and three different people from different universities recommended Wallace and Hobbs' Atmospheric Science [dannyreviews.com], which turned out to be just what I was after. Danny. Re:Gstreamer needs: (Score:1) I'm not saying that it would be impossible, but it'd require a lot of collaboration with the hardware makers as well as some tweaking on the software end. No, they haven't gotten in trouble... (Score:5) I've met some of the people involved in the beetle project and had the whole hanging procedure described to me. Let's just say, it wasn't easy and some people involved must have had balls made of steel. As a former UBC engineer, I really couldn't wipe the smile off my face for days, not just because of the stunt but of the publicity that it caused and how it will help to increase spirit in the engineering faculty. Since i've graduated i've heard and seen less and less people come out to events sponsored by the Engineering Student Society. Something like this will (hopefully) show people that there really is more to school than just going to class and doing your homework... I was very proud to be able to meet some of hte people involved and personally congradulate them for a job well done. Of course, I have no idea what their names are or what they look like anymore :-) Re: "All your bridge are belong to us." (Score:1) Other than the stealth, logistics, and "balls of steel" this wasn't that hard - they have had many years to perfect the engineering part of the stunt on other bridges. Hopefully it motivates some young kids to go into Engineering... Mash: Another Multimedia toolkit (Score:1) Re:OK, I give up. How? How??!?!? (Score:2) Worldcom [worldcom.com] - Generation Duh! Another Prank (Score:1) O'Toole's Commentary on Murphy's Law: Re:Beetle stunt (Score:1) Re:Beetle stunt (Score:2) Re:Beetle stunt (Score:1) Actually, I know a guy who works for the Coast Guard in the Bay. Apparently the fact that the bug was hanging from the bridge was preventing shipping traffic from entering and leaving the Bay. Time is money, and someone decided that it would take to long to remove the thing nicely. It's not as if they had all the time in the world to deal with it. And fair enough, in my view. It's like putting a bug on the runway at SFO. Auto traffic isn't the only, or even the most important, traffic. Claim your namespace. Re:Goths, Vandals, and Slashdotters. (Score:2) <halfserious>Or we could start our own.</halfserious> -- Re:OK, I give up. How? How??!?!? (Score:2) the goldenbeetle hack was cool (Score:2) Personally I thought they should have left it there, given it's apparent well-designed linkage. It'd make an interesting monument to human ingenuity as well as a slightly subversive statement regarding people being too uptight to see the humor in a VW bridge-dingleberry ("dingleberry- n. southern US slang, the little bits of fecal matter that stick to the fur/feathers of an animal's nether regions post-evacuation"). And hey, the beetle is also a nod to the counter-culture mecca SF was in the 60s. -- News for geeks in Austin: [geekaustin.org] Re:Beetle stunt (Score:1) Re:Beetle stunt (Score:1) ... unless it landed on them... Sorry. ----- Re:OK, I give up. How? How??!?!? (Score:2) The hope is to shoot the string and weight out until it is taut, then let gravity (coupled with centrifugal force to keep the string taut) swing it down and around. I really don't know how wide the bridge is, nor how high above the water it is - but I doubt it is very wide - doesn't it only have 2 or 3 lanes of traffic in each direction? - so maybe only 50 or 60 feet wide or so? Make it about 20 feet thick, and you are looking at perhaps 100-130 feet of monofilament, tops. Worldcom [worldcom.com] - Generation Duh! Re:Another Prank (Score:1) Re:No, they haven't gotten in trouble... (Score:1) Sorry if my lack of english skills make my comment any less believable. Re:Gstreamer needs: (Score:3) Re:the goldenbeetle hack was cool (Score:1) Since when is dingleberry a buzzword? -- News for geeks in Austin: [geekaustin.org] Beetle stunt (Score:5) Couple of comments about the VW beetle hanging from the GG bridge: the Bug was never in sight of any commmuter after the initial 1-minute deployment (*under* the bridge!) This isn't quite true. On the northbound approach to the bridge, coming from SF, there is a stretch of road (Marina Blvd I believe) that has a full sideview of the bridge, from maybe a mile away. By the peak of commute time, news of the event was all over the radio, so people were slowing down along this stretch of road to have a look. So yes, you couldn't see the car from the bridge itself, but to imply there was no impact on the commute is very wrong. the Bug was stripped of nasties, and as the Ironworkers said, it's a new habitat (just like when they sink a ship to create an artificial reef, only smaller, MUCH smaller) Like Neal Stephenson says in Zodiac, ANYTHING you drop in the ocean will become a habitat, because that's where the fish live! Just because you dropped garbage down there and the fish start swimming around it, that doesn't make it a good, environmental thing to do. That iron and steel will be down there, rusting, for decades. For no good reason. In fact, I'm surprised they chose to snip the cables instead of pull it up, or instead of lowering it onto a barge. -- Sometimes nothing is a real cool hand.-- Cool Hand Luke Righting a wrong (Score:1) Re:wow! (Score:2) Community Book Ratings (Score:2) It seems to me that an awful lot of this is dependant on things like attitude of the community. For example, take a look at the content [slashdot.org] of freenet. As described, the typical member could be "a crypto-anarchist Perl hacker with a taste for the classics of literature, political screeds, 1980s pop music, Adobe software, and lots of porn" Somehow I think that the books recommended by that cultural cross section would be different than that of the Reader's Digest [readersdigest.com] (which has a circulation [readersdigest.com] approaching 100 million) which is exactly why gstreamer gets developed (Score:4) News (Score:1) Newsflash: Ocean not consisting of tap water! (Score:2) As you know, sea water is about 3% "salt". That does not only mean regular table salt, NaCl, but pretty much any element or compound that exists on the planet and is soluble in water. I haven't checked the iron content of the ocean, but I'd be surprised if it's not in the megazillions of tonnes. insufferable preaching section: Iron is a perfectly natural element in nature, and to put it back in nature is not in general a bad thing. More controversially, the same can be said of, among many other things, uranium, which is quite common in ocean water. end of preaching. phew! PS. Could the bug actually be seen from Marina Blvd? It was reported to be a very foggy day. automated kvetching (Score:1) All in one media, or as I like to call it, gouloshware, which is disrespectful to goulosh everywhere, is M$-style thinking, and worse, can be a very bad thing in the occassionally laggy x-windows system environment. Have you tried playing "Tuxedo T. Penguin: A Quest for Herring"? Let's look at M$ for just a moment. Eons ago, technologically speaking, even Windows(tm) had to call on lots of little programmes that did one task at a time. This saved on processor space. This is still not such a bad idea. Today, you only use three applications to run a programme: Internet Explorer, Windows Media Player and M$ Office. Depending on what modules you load in IE, it's not so bad; of course, it's also the least original programme from M$. The other two can take as long as Star Office trying to load in a C environment, and stability during that time isn't garunteed by the M$ Bill of Gates. My theory is: In 2000 XP, all applications will be open by M$ new Internet Media Office, which will require 2Ghz processor, 4.4 terrabytes of disk space and 9GB RAM. Clippit will nolonger work alone, and many fun creatures will hop and play on the screen. You will also receive the following easter eggs: five flight simulators, Age of Empires: A Time of Conquest, MacOs X with DVD, Atari's top 100, Commodore's top 100, video Bill Gates being Janet Reno's Dominatrix, a complete list of everything M$ has ever stolen*, "War and Peace", M$ Encarta: The World of Bill, "Blue Screen of Life: M$ Stress Re-Organization" plus free stress game, a copy of M$ Bob, a complete collection of every paper by "The Onion" and finally, "Bill's Top 100 In & Out Burger fast-food franchises of North America" Now, it's one thing if you have a programme for the sheer purpose of working all the media into one comprehensive pro-gartum something or other, but in the WSOGMM, it's just not worth it for standard use. No, I did not miss the article; I've just been dying to whine about this for awhile, and gstream was about as close as the subject may come in the next month. As usual, I'm now going to steal something just to make my article longer: Microsoft Windows 2000 is based on technology produced by Xerox, Apple, IBM, Bell Labs, Valence Software, ZoomIt Co.,Red Hat, Symantec, Spyglass, Sun Microsystems, Santa Cruz Operation, Corel, VisiCorp, Cooper Software, LinkAge Software, Caldera, General Magic, Dynamic Systems, Citrix, AT&T, the GNU Project, Sendmail Inc., Novell, Borland/Inprise, Digital Research, NeXT, Informix, Netscape, and the following universities: Yale, Dartmout, MIT, Berkeley and Stanford. BlueScreen technology is an original Microsoft innovation created by the BlueScreen Development Team, headed by Steve Ballmer and Ed Muth. This paragraph continues to comply with the Department of Injustice's Vigilante Kangaroo Court Consent Decree (TM). Thank you for marking me as off-topic and dropping my Karma into a double-digit negative number. Now, I must throw bejana beans at my guests. Re:word of mouth, Usenet (Score:2) I'm glad the site exists, but I'm a bit skeptical that it's going to work. Think what's going to happen when all the Slashdot Trolls see it and run over to screw it up just for the fun of it. And there are probably problems anyway. I did not see any mechanism for cross-listing books between multiple categories, nor for correcting erroneous entries. Also, the field for price was somewhat surprising, assuming they plan to be around for more than a couple of years. Meanwhile, there are no fields for indicating the year the book came out, nor what editions exist. To be useful, it will probably need a full-time board of editors to maintain the site. They'll have the troll crap to shovel out, they'll have haphazard category and subcategory definitions, they'll have books posted in the wrong categories, and they'll have books will erroneous entries, grammar and spelling errors in the titles and/or descriptions, and political asides that shouldn't be there. Also, I rather suspect that most of the submissions will be from authors or publishers ("Buy this book!") or from individuals with a political axe to grind. The voting system naively limits you to three votes, as if the site's creators were unaware how easy it is to get disposible voting accounts, so I expect to see ridiculous voting outcomes within a few days. Finally, I notice that when you have entered a book's info you are politely prompted to "submit query". It looks like the code may need some cleanup. All in all, I suspect that it's another noble effort doomed to failure due to naivety and all the problems associated with internet voting. Also, I wonder if it isn't just a site that hopes to make a big splash in the news and then get bought out by the next p0rtal-wannabe within a few months, so that the creators don't really need to worry about all the very obvious problems with such a site. -- You're quite wrong... (Score:1) Certainly, staticism alone is not a valid mode of epistemological advancement, but can any method which wholly abandons static principles and knowledge lead beyond self-referential factualism? Although this has been heavily debated, general consensus holds that it is in fact not possible. While the implications of this will be fleshed out and argued over for decades, the immediate applications are both obvious and non-trivial. Commonly held traditions in literature are critical for progression of social normalizing factors. Although it could certainly be argued that an individual's canon must necessarily supersede one founded on principles of democratic advancement, it is worth noting that not a solitary instance of corresponding phenomena has been observed. Admittedly, non-observation does not entail non-existence, but broader sociological and literary analysis continues this account accross multiple social strata. In conclusion, while I believe your model is fundamentally flawed, you are correct in your assessment of the shortcomings of common-archetypal aesthetic representationalism. Re:Beetle stunt (Score:1) -- Re:Gstreamer needs: (Score:2) Re:Newsflash: Ocean not consisting of tap water! (Score:2) >More controversially, the same can be said of, among many other things, uranium, which is quite common in ocean water. Uranium, in its natural state, is rather harmless (not completely though). This is not true of all elements though. A few elements that are quite lethal in their natural states: Re:the goldenbeetle hack was cool (Score:1) -antipop Re:No, they haven't gotten in trouble... (Score:1) Not in trouble??? Wanker Bush can't even drive in the wrong lane without getting thrown in the pokey. How can these guys get away with hanging under a bridge? -- Re:OK, I give up. How? How??!?!? (Score:2) Re:Newsflash: Ocean not consisting of tap water! (Score:2) Re:Newsflash: Ocean not consisting of tap water! (Score:1) Earlier this year i saw an item in a newspaper that said of all the poison deaths in the united states. (for children) iron overdose was the most common. The method of overdose was from swallowing their parents vitamen tablets. It takes a couple days for a child to die from iron overdose, i have heard it isn't very pretty, the body just starts shutting down. There is no way to remove it from the system once its there. Anyway I wouldn't go drinking water that is brown/red from iron content. Adults can overdose too, just takes more of it. Re:wow! (Score:1) And hey, what's this "even a bug"? That car is one of the finest vehicles ever made (and the only car with running boards I've ever been able to afford). Re:automated kvetching (Score:1) Re:the goldenbeetle hack was cool (Score:2) ---- Re:Goths, Vandals, and Slashdotters. (Score:1) Re: "All your bridge are belong to us." (Score:1) Most of these bug-realted comments have had a "I wonder how they did it?" subtext to them. Universities don't teach engineers how to be criminals, they teach them how to use principles of physics to aid and improve our lives. We need more curious, well-educated youth to make the next big advances in our world, and new engineers will help to make that happen. Sorry my comments got your shorts in a knot. Re:Beetle stunt (Score:1) [geocities.com] Re:Gstreamer needs: (Score:1) Gstreamer needs: (Score:4) Re:Community Book Ratings (Score:1) Is circulation the same as readership? Reader's Digest says it "reaches almost 100 million readers" Doesn't that assume one circulated copy can reach more than one reader? Re:OK, I give up. How? How??!?!? (Score:1) Re:OK, I give up. How? How??!?!? (Score:2) Heh - all of this is sounding to complicated to be practical, though - there is probably a simpler solution. Worldcom [worldcom.com] - Generation Duh! Re:OK, I give up. How? How??!?!? (Score:1) Re:Beetle stunt (Score:5) I understand what you're saying, but seriously the impact of a rusty beetle is neutral. It probably doesn't help the fish, but it won't hurt them either. Re:Beetle stunt (Score:5) What really tweaked the noses of Americans was this: UBC owned joO! All your bridge are belong to us! Hey, s'alright. We Canucks are a humble bunch. We know how hard it is to admit that we r0oL you. Shucks. -- Cannonical Tomes (Score:3) I mean, much of what people feel *must* be read is actually just stuff that everyone else has - what linguists call creating common language rather than actually expanding knowledge. As such, to rely on common knowlege to create a list of common knowledge might create stagnancy rather than a dynamic work. Not that I'm saying having a set of liturature people are expected to read is a bad thing - rather, that cannonizing that liturature via plebian masses might stifle the ability for others to truly create. As such, though I hate sounding so incredibly elitist, creating the sight for "everyman" to decide the cannonical works is less meaningful than just letting the college professors do it - at least they are going out there to find the new stuff, and include works that challenge traditional thought - even if they personally find those works "wrong." What's a class on government without facsim, for instance? But, who'se going to be the gutsy one to add "My Struggle" to the list of political works? Certainly not me! At the same time, however, this does open the "cannonical" list up to works that would not otherwise see play - things like "stomp" as a cannonical play, as opposed to "le mis," or something. It's certainly a project that I'll watch, if not participate in! -- Re:Beetle stunt (Score:2) Why is this such a bad thing? Last I checked, iron is a pure element, and steel is mostly iron. When steel and iron rust, they create iron oxide. There's nothing particularly bad about iron oxide. It already exists in plenty of places in nature. Ever seen redish-brown rocks? Many of them (probably not all, but many) are that color because of naturally occurring iron oxide, aka Rust. If the fish wanna live in a rusty car, so be it. let them. It's hardly an environmental crime.
https://slashdot.org/story/01/03/05/1553248/slashback-beetle-reading-streams
CC-MAIN-2017-34
refinedweb
3,775
61.97
Hi, all. I am having a problem using the getpass() function, there are no problems with the raw_input(), though. I'm trying to check the user input is valid (meaning that the Username is nav and the password is 39429432) Really, I DON'T want to use raw_input() for the password field. def login(): import getpass userpass = "39429432" = "nav"; usr = raw_input('Username: ') psswd = getpass.getpass(["prompt"["stream"]]) if usr==usernm and psswd==userpass: main() else: SystemExit() This code is....is....failing D: Here's the error: Traceback (most recent call last): File "C:\Users\Navid_2\Desktop\NavS.py", line 5, in login psswd = getpass.getpass(["prompt"["stream"]]) TypeError: string indices must be integers, not str
https://www.daniweb.com/programming/software-development/threads/428015/using-getpass-and-having-no-luck
CC-MAIN-2018-13
refinedweb
115
57.57
I normally use txt files but i need to use a csv i based this off how i do txt files and i am not sure what i am doing wrong can anyone help me please. Home = "Road" House = 5 def Save(Home,House): Saved=open('Saved.csv', 'a') Saved.write(Home+House+"/n") Saved.close() Save(Home,House) File "F:/Pygame/Test12.py", line 74, in Save Saved.write(Home+House+"/n") TypeError: cannot concatenate 'str' and 'int' objects 1) that's not a .csv file. 2) in python, you cannot concatenate integers with strings without prior conversion. 3) doing this: Home+str(House) would be legal, but when you want to read back your file you have to separate both fields (you provided no way of separating them) Here's a code which would create a real csv file: import csv def Save(Home,House): with open('Saved.csv', 'a') as Saved: cw = csv.writer(Saved) cw.writerow([Home,House]) when you compose your row, you can put any data you want, the csv module will convert it to string if needed. BTW to read it back, use a csv.reader and iterate through the rows. Since you know the datatype, you can convert 2nd column to int directly. with open('Saved.csv', 'r') as Saved: cr = csv.reader(Saved) for row in cr: Home = row[0] House = int(row[1]) # now you have to do something with those variables :)
https://codedump.io/share/YV9xLaW6R00N/1/python-saving-data-to-a-csv-file
CC-MAIN-2017-09
refinedweb
242
74.69
21528) Python doesn't have a native array data ...READ MORE A scripting language is a programming language ...READ MORE We cannot. Dictionaries aren't meant to be ...READ MORE The %s specifier converts the object using ...READ MORE you can check the subprocess module in ...READ MORE Please check the below-mentioned syntax and commands: To ...READ MORE From your current directory run pig -x local Then ...READ MORE In your log4j.properties file you need to ...READ MORE Maybe this would be more robust? 1) save ...READ MORE In windows: Use winsound.SND_ASYNC to play them asynchronously import winsound winsound.PlaySound("filename", winsound.SND_ASYNC ...READ MORE OR
https://www.edureka.co/community/21528/restart-python-script-automatically-even-when-crashes-linux
CC-MAIN-2019-22
refinedweb
109
72.63
- printing backslash - How do I automatically redirect stdout and stderr when using os.popen2? - creating and naming objects - Very nice python IDE (windows only) - THE IMPORTANCE OF MAKING THE GOOGLE INDEX DOWNLOADABLE - help to install MySQL-python module - wddx problem with entities - curses event handling - cgi and popen - A WAD(-like) C-level exception catcher for Windows? - CENSORSHIP - Django Project (Schema Evolution Support) - Python language problem - secure xmlrpc server? - Use of Python in .NET - Bug in list comprehensions? - pyqt show wizard - assign operator as variable ? - IPython 0.7.2 is out. - pysqlite error: Database locked? - Dr. Dobb's Python-URL! - weekly Python news and links (Jun 7) - Function Verification - fsolve() from scipy crashes python on windows - tempfile Question - subprocesses, stdin/out, ttys, and beating insubordinate processesinto the ground - tkinter: making widgets instance or not? - Need pixie dust for building Python 2.4 curses module on Solaris 8 - Using Komodo 3.5 - Setting breakpoints in multiple *.py files - what are you using python language for? - calling functions style question - Vancouver Python Workshop: New Keynoter - retaining newline characters when writing to file - Namespace problems - Interpretation of UnhandledException.rpt - capture video from camera - cxFreeze executable linked to /usr/lib/libpython2.3.so - Vectorization - Distutils: setup script for binary files - 10GB XML Blows out Memory, Suggestions? - Checking var is a number? - python socket proxy - Writing to a certain line? - Python.h - the most efficient method of adding elements to the list - Get EXE (made with py2exe) path directory name - Is there a way to pass a python function ptr to a c++ method from a python script? - Again, Downloading and Displaying an Image from the Internet in Tkinter - GUI Program Error - ConfigParser, no attribute - Storing nothing in a dictionary and passing it to a function - Expanding Search to Subfolders - xml.sax problem: getting parse() to read a string - [twisted] PyOpenSSL and PyCrypto are outdated! - follow-up to FieldStorage - Little question about Tkiner: window focus - embedding Python in COM server loaded with win32com - strategy pattern and non-public virtual functions - Simple question - is it possible to find which process dumped core - How to add few pictures into one - finding file - Python to C converter - How to search for substrings of a string in a list? - Concatenating dictionary values and keys, and further operations - Pmw ScrolledCanvas: How to scroll to specific item? - attribute error using fnmatch - Freezing a static executable - Lots of orphaned PyCon wiki pages... - Where is the ucs-32 codec? - Max function question: How do I return the index of the maximum value of a list? - how not to run out of memory in cursor.execute - re beginner - Python netstring module - Installation Problem - mutable member, bug or ... - in python , could I accomplish the purpose that "a=Console.read()" used in C? - Hostmask matching - Proposed new PEP: print to expand generators - reordering elements of a list - FreeImagePy and PIL - So what would Python be? - Python & ncurses - Python less error-prone than Java - wxpython wxgrid question - Pyrex list/array - Python + WinCE + serial port - __builtins__.loglog - logging more pythonic, decent & scalable ? - elementtree and inclusion of special characters - Leo 4.4.1 beta 1 released - carshing the interpreter in two lines - can python be a "shell" of c++ program? - Missing unicode data? - Missing unicode data? - Missing unicode data? - how to erase a variable - check for dictionary keys - do you have a local copy of Lython? - Seg fault in python extension module - beginner code problem - Initializing an attribute that needs the object - Making a second window with Tkinter - Reversible replacement of whitespace characters with visible characters - Open Source Charting Tool - Returned mail: see transcript for details - Using pysqlite2 - can you iterate over a FieldStorage object? - announce: DaVinci Rendering Engine - wxPython problems with Fedora Core 5 - PyExcelerator - os.chdir doesn't accept variables sometimes - Package - Sampling a population - Selection in Tkinter Text widget. - import confused by contents of working directory - how to define a static field of a given class - after del list , when I use it again, prompt 'not defined'.how could i delete its element,but not itself? - tp_richcompare - Import Issue - Are there something like "Effective Python"? - C# equivalent to range() - Inheritance structure less important in dynamic languages? - execfile then import back - Recommendations for CD/DVD-based or on-line Python classes? - Can Python format long integer 123456789 to 12,3456,789 ? - Conditional Expressions in Python 2.4 - grouping a flat list of number by range - Replace one element of a tuple - if not CGI: - XML-RPC server with xmlrpclib and mod_python - pyreadline 1.3 release (was UNC readline) - Using print instead of file.write(str) - integer to binary... - win32com: how to connect to a specific instance of a running object? - How to format datetime values - numpy bug - Member index in toples - Python for Visual Basic or C# programmers - Finding web host headers - Zope / Plone Groups - Tkinter: select multiple entries in Listbox widget? - how to create a cgi folder??? - An oddity in list comparison and element assignment - os.walk trouble - New to Python: Do we have the concept of Hash in Python? - image lib & Qt4 - Starting New Process - argmax - default argument values qns - PythonDoc Ant Task - Tkinter - changing existing Dialog? - DB-API: how can I find the column names in a cursor? - py2exe & qt4/qimage - Function mistaken for a method - what is the reasonable (best?) Exception handling strategy? - How do you practice programming? - How do you practice Python? - Best way to do data source abstraction - Is device Connected Windows? - How to access the content of notepad with Python? - Downloading and Displaying an Image from the Internet in Tkinter - struct: type registration? - beginner: using parameter in functions - ctypes pointers and SendMessage - is a wiki engine based on a cvs/svn a good idea? - using import * with GUIs? - Save data to a file thru a http connection - python to NQC converter? - TSV to HTML - Oracle Data Access in Python - Are ActivePython scripts compatible with Linux? - Find the context of importer - wx: PyNoAppError - Variable name has a typo, but code still works. Why? - os.popen3() - how to close cmd window automatically? - Best Python Editor - problem with google api / xml - Add file to zip, or replace file in zip - An algorithm problem - ideas for programs? - Tktable, WinXP and ActiveState Python 2.4.3,x - shuffling elements of a list - genexp performance problem? - Try/Except for ADSI GetObject - how to print newline in xml? - interactive programme (voice) - "initializer element is not constant" - Strange behavior with iterables - is this a bug? - Dr. Dobb's Python-URL! - weekly Python news and links (May 30) - Multiple Polynomial Quadratic Sieve - Way to get an array of latitude/longitude points (tuples) from a trip - os.time() - TypeCheck vs IsInstance in C API - Weekly Python Patch/Bug Summary - wait() on Popen4 object from thread? - Watching serial port activity. - TIming - How to use tk.call ? - create a text file - Is anybody knows about a linkable, quick MD5/SHA1 calculator library ? - How to calc easier the "long" filesize from nFileSizeLow andnFileSizeHigh - omniorbpy: problems sending float values - summarize text - Ricerca Programmatore Python - Any other config parsing modules besides ConfigParser ? - Last Call - proposals for talks in the business and application track at EP 2006 - saving settings - Need C# Coding for MD5 Algorithm... - deleting item from ListCtrl by pop-up menu - dynamically loaded libraries - pyswt SWT.NULL - pygame and wxpython - HTMLParser chokes on bad end tag in comment - q - including manpages in setup.py - why not in python 2.4.3 - Fancy GUI with Python - html 2 plain text - itertools.count() as built-in - Finding a lost PYTHONPATH with find - propose extension of mimetypes - Beginner Python OpenGL difficulties - using FFTW3 with Numeric on Windows - iteration over non-sequence ,how can I resolve it? - unexpected behaviour for python regexp: caret symbol almost useless? - generating random passwords ... for a csv file with user details - Best way to check that a process is running on a Unix system? - run time linked attributes - (mostly-)POSIX regular expressions - Array? Please help. - How to control color of contour lines? - Serializing / Unserializing datetime - dynamic type changing - Using a package like PyInstaller - TreeCtrl to TreeListCtrl - Pyrex speed - CMFBoard - matplotlib and numpy installation - starting some Python script from C# - Running External Commands + Seeing when they are Finished - iterator? way of generating all possible combinations? - need a date look here - Safe eval critique (homework done) - stupid perl question - Looking for triangulator/interpolator - PUDGE - Project Status, Alternative Solutions - Thread vs. generator problem - access serial port in python - Linking onClick event to other controls on the Frame - write() vs. writelines() - sort a dictionary by keys in specific order - chop() and empty() functions - Tkinter canvas zooming (sortof) - PIL problem with biprocessor hardware - Running Python scripts under a different user - hide python window, con'td - hide python window - Trying to get FreeImagePy to work. - "Learning Python" 2nd ed. p479 error? - Python for my mum - WinPops - urllib2 and HTTP 302 - Parsing python dictionary in Java using JPython - hi,everyone. a problem with shelve Module - Can any body help me - genexp surprise (wart?) - monkeypatching NamedTemporaryFile - OLAP and pivot tables - Array? - Speed up this code? - Creating instances of untrusted new-style classes - Your message to slug awaits moderator approval - Survey on open-source software for the desktop - __getattr__ and functions that don't exist - how to open password protected PDF's in Python - Multi-dimensional list initialization trouble - Problem with itertools.groupby. - Secure Pickle-like module - Distutils -- specifying compiled output name - sybase open client 15_0 - Anyone compiling Python 2.3 on an SCO OpenServer 5 box? - list comprehensions put non-names into namespaces! - script vs inneractive - "No handlers could be found for logger xxx" ? - regex/lambda black magic - tkFileDialog.Open to select a large number of files - how to clear up a List in python? - webbrowser module bug on os x? - a good explanation - Two idle questions - Final_Call_SummerSchool2006 - access to TimesTen using python - regex in python - wincerapi - Modify one character in a string - os listdir access denied when run as a service - python __import__ question - deploying big python applications - how to "normalize" indentation sources - wxPython: changing entries of a HtmlListBox - Unexpected extension module behaviour - Web frameworks and credit cards - Why can't timedeltas be divided? - Request for comment: programmer starting page (micro knowledge base) - Compiling Python from Sources - how to use matplotlib contour()? - Python Version Testing Tool? - Finding Upper-case characters in regexps, unicode friendly. - Finding Upper-case characters in regexps, unicode friendly. - simple print is not working.. - Bind an instance of a base to a subclass - can this be done? - IronPython 1.0 Beta 7 Released - Conversion of perl based regex to python method - linking errors with debug build of Python2.4.3 - Scipy: vectorized function does not take scalars as arguments - How to find out a date/time difference - Python keywords vs. English grammar - Python Programming Books? - PEP 3102 for review and comment - referrers - hi,every body. a problem with PyQt. - ftputil.py - problem select object in pyode - Access C++, Java APIs from Python.. - NEWB: how to convert a string to dict (dictionary) - Announcing WERD (1.0), the Phonetic Transliterator to Indic scripts - pickling multiple dictionaries - Looking for help with Regular Expression - Problem with installing MySQL-python-1.2.1_p2 - determining available space for Float32, for instance - Best way to handle exceptions with try/finally - real time info to web browser from apache side ? - how to work with tab-delimited files? - documentation for win32com? - Guide to using python for bash-style scripting - graphs and charts - What's with the @ sign - "Thinking like CS" problem I can't solve - Virtual Collaboratory Announcement - dict literals vs dict(**kwds) - question about shadowing built-in names - Canvas items into widgets? - GUI viewer for profiler output? - Use of lambda functions in OOP, any alternative? - can't figure out error: module has no attribute... - How to open https Site and pass request? - Valid SQL? - how to change sys.path? - Don't wish to give up on a Tkinter GUI Builder :( - New beginner to python for advice - logging - groupby - No math module?? - module confict? gmpy and operator - NEWB: reverse traversal of xml file - Too big of a list? and other problems - global name not defined - global name not defined - File attributes - problem with my regex? - system(...) and unicode - freeze tool like perl2exe? - Class probkem - getting msg that self not defined - MAC Laptop right click/drag mouse button TKinter - COM Server crashing when returning large arrays - Dr. Dobb's Python-URL! - weekly Python news and links (May 22) - Vancouver Python Workshop - registration open - How does a generator object refer to itself? - XML/HTML Encoding problem - enumerate() question - Testing for file type - Win32: Detecting when system is locked or sleeping - problem with writing a simple module - Delivery failure notification@ - Running script in __main__ shows no output in IDLE - grabbing portions of a file to output files - string.count issue (i'm stupid?) - Includeing Python in text files - Menu's problem - Dumb-as-rocks WSGI serving using standard library - Doubt with wx.CallAfter - Index counting from the last char - multithreading - managing transactions and sequence of processing - Web based application in python - Tk.iconname still there? - Hola, queres jugar? - Python source sensitive to PyObject_HEAD layout? - performance difference between OSx and Windows - dynamic drawing in web page - getattr for modules not classes - Iterators: Would "rewind" be a good idea? - proposal: disambiguating type - escapes in regular expressions - PEP-xxx: Unification of for statement and list-comp syntax - dict!ident as equivalent of dict["ident"] - Embedding end extending /Carl - Problem with odbc and Sql Server - Does anybody know how to install PythonMagick? - Software Needs Philosophers - Python update trouble (2.3 to 2.4): x<<y - Python update trouble (2.3 to 2.4): x<<y - WeakrefValueDictionary of Callables? - CRLF handling in the rfc822 module - PHP's openssl_sign() using M2Crypto? - buffers readlines and general popen2 confusion... - File encoding strategy question - 'error reading datastream' -- loading file only when transfer is complete? - LocaWapp 0a - localhost web application - Name conflict in class hierarchy - mod_python, COM, on win2k3 server - misleading prefix ++ - Generating Cutter numbers - PEP 3102 for review and comment - performance problem of streaming data over TCP - FAQ for XML with Python - hidden file detection - Question about Python on Mac - Using metaclasses to inherit class variables - sock2 - Modifying a variable in a non-global outer scope? - altering an object as you iterate over it? - bitstream - Request for comments on python distributed technologies - Why does the _winreg module start with an underscore - combining a C# GUI with Python code? - Daily python url archives - Decimal and Exponentiation - how to suppress the "source code echo" output by warnings.warn("x")? - The use of PyW32_BEGIN_ALLOW_THREADS and PyW32_END_ALLOW_THREADS - Strange Memory Leaks - SIGILL importing random - open file with whitespaces - released: RPyC 2.60 - about py2exe, I installed it, but can't find py2exe.exe in my computer. - noob import question - Python sqlite and regex. - problem with import autotest ... - How to append to a dictionary - memory error with zipfile module - Encode exception for chinese text - how to read a list from python in C? - calling python functions using variables - Specific performance question - Python vs. Java - ftplib.ftpcp(), undocumented function? - Segmenting a pickle stream without unpickling - Getting URL's - who can give me the detailed introduction of re modle? - import woe - how could I get all email address in a html page? - Programming language productivity - Script to make Windows XP-readable ZIP file - Opensource vs Microsoft, Wat do you think about opensource? - realization: no assignments inside expressions - newb: comapring two strings - Subprocess or Process or OMG!! - Reminder: call for proposals "Python Language and Libraries Track"for Europython 2006 - number of different lines in a file - Python trig precision problem - WTF? Printing unicode strings - Python, Mysql, insert NULL - Sorting of list containing tuples - Feature request: sorting a list slice - SPE output - Complex evaluation bug - Windows Registry Dump - Python Install - Strange error - module webbrowser - open link in same window - osx - If you were given a mall would you take it? - Conversion of perl unpack code to python - something odd - Which is More Efficient? - Tkinter Dialog Management problems: - Europython 2006 call for proposals - Best active community website - Python - Web Display Technology - Reference Counts - how to make the program notify me explicitly - getting the value of an attribute from pdb - MySQLdb - parameterised SQL - how to see resulting SQL ? - excel centering columns - Pyparsing: Specify grammar at run time - Proposal for new operators to python that add syntactic sugar for hierarcical data. - How to tell if function was passed a list or a string? - I'm just not cut out for web programming, I guess :) - gettext errors with wxPython in linux - creating a new database with mysqldb - Any pointers/advice to help learn CPython source? - How to customize getattr(obj, prop) function ? - \t not working - How to add columns to python arrays - Point-feature labeling in matplotlib - Question about exausted iterators - Process forking on Windows - How to couple pyunit with GUI? - Pyparsing: Grammar Suggestion - List behaviour - using wxPython events inside a loop - SWIG: name 'new_doubleArray' is not defined - index in for loops - Strange IO Error when extracting zips to a network location - how to traverse network devices in our system? - questions on python script compiling for embedding - CFLAGS are not taken into account properly - formEncode and validation depended on forms field - cross-compile PIL - assignment in a for loop - A better way of making subsclassing of built-in types stick for attributes? - help with a function - python vs perl lines of code - help with this simple DB script - Is the only way to connect Python and Lua through a C interface? - Python and GLSL - arrays, even, roundup, odd round down ? - Beautiful parse joy - Oh what fun - still don't get unicode and xml - help! - calling upper() on a string, not working? - Subclassing types in C - round numbers in an array without importing Numeric or Math? - How to log only one level to a FileHandler using python logging module. - Did anyone get audio/video from PyCon 2006? - Option parser question - reading options from file as well as commandline - Wrong args and issuing a SIGUSR1 signal - spawnv throws exception under Windows XP - Help System For Python Applications - simultaneous reading and writing a textfile - Unable to extract Python source code using Windows - Unable to extract Python source code using Windows - regex help - Windows & Apache 1.3 & mod_python.dll - what is the difference between tuple and list? - constucting a lookup table - build now requires Python exist before the build starts - Questions about the event loop - which one? - Python script for remotely shutting down Windows PC from Linux ? - Unicode digit to unicode string - Google-API Bad-Gateway-Error - Python script windows servcie - [silly] Does the python mascot have a name ? - problem with namespaces using eval and exec - IDLE confusion - Python using http proxies - Python and Combinatorics - Using python for a CAD program - Far from complete - Argument Decorators Enhancement? - Multiple inheritance : waht does this error mean ? - C API: getting sys.argv - Why does stack.inspect take so long? - using target words from arrays in regex, pythons version of perls'map' - How to guess the language of a given textstring? - HTTPServer and ThreadingMixIn - regular expression error ?
https://bytes.com/sitemap/f-292-p-47.html
CC-MAIN-2019-43
refinedweb
3,147
56.76
Windows Runtime Components - Windows Runtime Components in a .NET World By Jeremy Likness | 2012 The. Windows Store apps run on a new set of APIs called the Windows Runtime (WinRT). The Windows Runtime exposes components that are built as part of the Windows 8 OS along with third-party components you can develop yourself. Although some core Windows Runtime Components are accessible from desktop apps, third-party Windows Runtime Components are only available from within the Windows 8 environment. WinRT types are described using WinRT metadata files that have the .winmd extension. These files are encoded using the same standard the Microsoft .NET Framework uses for providing metadata definitions and semantics for classes, ECMA-335 (see bit.ly/sLILI). You can quickly navigate to the type definitions on a Windows 8 machine by changing to the directory that contains system files for Windows (usually c:\windows\system32). A folder within that directory called WinMetadata contains all of the type definitions. You can use the ILDasm.exe utility to explore the types. Open a Visual Studio 2012 command prompt, navigate to the c:\windows\system32\WinMetadata folder and type the following in the command line: Ildasm.exe windows.web.winmd You should see a result similar to Figure 1. You can use the ILDasm.exe utility to inspect all of the namespaces and types defined for that particular Windows Runtime Component. Figure 1 What’s interesting to note is that there’s no code contained within the file; only metadata information is available. The component is part of the underlying OS. It’s most likely written using native code. A unique feature called language projection allows Windows Runtime Components (both native and managed) to be accessed from any language that supports Windows Store app development. Projection and Mapping Many languages, including C#, Visual Basic, C++ and JavaScript, have been updated with Windows 8 to support language projection. This allows Windows Runtime Components to be accessed in a natural way using multiple languages. Projection handles exposing a WinRT type as an object or class that’s native to the language being used to develop the Windows Store app. The following code accesses a native Windows Runtime Component directly from a Windows Store app built using C#: The CameraCaptureUI is a Windows Runtime Component. The component isn’t a managed C# type, but it can be easily accessed and referenced from within C# code as if it were. This is because the CLR automatically generates a Runtime Callable Wrapper (RCW) for the Windows Runtime Component using its metadata and causes it to appear as a native CLR type to managed code. For more on this, see the MSDN Library article, “Runtime Callable Wrapper,” at bit.ly/PTiAly. The RCW makes it easy and straightforward to interact with these components. The reverse is also true. Projection enables a Windows Runtime Component created with managed code to be referenced like a C++ type from native code and as a JavaScript object from within HTML/JavaScript projects. Fundamental types appear automatically as C# types. The Windows Runtime has an ELEMENT_TYPE_STRING type that appears in .NET code as a String object. The ELEMENT_TYPE_I4 scalar appears as an Int32. The CLR will also take certain WinRT types and map them to appear in code as their .NET equivalents. For example, the WinRT type for a fixed-sized collection is IVector<T>, but this type will automatically appear as an IList<T> in .NET code. A WinRT HRESULT appears in the .NET Framework as an Exception type. The CLR will automatically marshal these types between the managed and native representations. Some types, such as streams, can be converted explicitly using a set of extension methods provided by the CLR. For a full list of types that are mapped in this fashion, refer to the MSDN Dev Center topic, “.NET Framework Mappings of WinRT Types,” at bit.ly/PECJ1W. These built-in features enable developers to create their own Windows Runtime Components using managed code with C# and Visual Basic. Visual Studio 2012 provides a template for creating Windows Runtime Components from Visual Basic, C# and C++. These components can be consumed and called from any other language that supports the Windows Runtime, including JavaScript. For this reason, you must follow some specific rules to create a Windows Runtime Component in C#. Playing by the Rules In general, the rules for creating WinRT types in C# relate to any publicly visible types and members your component provides. The restrictions exist because the Windows Runtime Component must be bound by the WinRT type system. The full set of rules is listed in the MSDN Dev Center topic, “Creating Windows Runtime Components in C# and Visual Basic,” at bit.ly/OWDe2A. The fields, parameters and return values that you expose must all be WinRT types (it’s fine to expose .NET types that are automatically mapped to WinRT types). You can create your own WinRT types to expose provided those types, in turn, follow the same set of rules. Another restriction is that any public classes or interfaces you expose can’t be generic or implement any non-WinRT interface. They must not derive from non-WinRT types. The root namespace for Windows Runtime Components must match the assembly name, which in turn can’t start with “Windows.” Public structures are also restricted to only have public fields that are value types. Polymorphism isn’t available to WinRT types, and the closest you can come is implementing WinRT interfaces; you must declare as sealed any classes that are publicly exposed by your Windows Runtime Component. These restrictions might be reason to consider an alternative approach to integrating components within your apps, especially if you’re dealing with legacy code that would require significant refactoring. I’ll discuss possible alternative approaches later. The restrictions are important to ensure the Windows Runtime Components can function appropriately within the WinRT environment and can be referenced and called from all language environments, including C++ and JavaScript. The Thumbnail Generator A simple app will demonstrate how to create a managed Windows Runtime Component with C# and consume it from a Windows Store app built with C#, JavaScript or C++. The component accepts a reference to an image file passed by the WinRT IStorageFile interface. It then creates a 100x100 pixel thumbnail of the image and saves it to the Windows Store app’s local storage. It finally returns a URI that points to the thumbnail. The steps involved include: - Create the solution in Visual Studio 2012. - Build the Windows Runtime Component. - Create the language-specific project in C#, JavaScript or C++. - Reference the Windows Runtime Component. - Set up the UI for each project to enable the user to pick an image. - Display the image. - Call the Windows Runtime Component. - Display the thumbnail. Create the Project and Solution From within Visual Studio 2012, you begin by specifying your language of choice (in this case, C#) and choosing the Windows Store app templates. A template exists specifically for generating Windows Runtime Components. I selected this template and created a component called ThumbnailLibrary with a solution of the same name, as shown in Figure 2. Figure 2 Creating the Windows Runtime Component Project For this example, I created a single class called ThumbnailMaker. A private method returns a Task to asynchronously generate the thumbnail: The first step within the method is to open the file from storage and use the WinRT BitmapDecoder to decode the image stream: Next, a file is created in the local storage for the app to hold the thumbnail. It will be named “thumbnail,” with the same extension as the source file. The option to generate a unique name will ensure that multiple thumbnails can be generated without overwriting previous operations: An encoder is created from the decoded stream. It simply scales the bitmap to 100x100 pixels and then writes it to the file system: The last step is to build a URL that points to the file. The special ms-appdata prefix is used to reference the files in local storage. To learn more about how to reference content using URIs, read the MSDN Dev Center topic, “How to Reference Content,” at bit.ly/SS711o. Although the topic is for HTML and JavaScript, the convention used to access resources is the same regardless of what language option you’re using: Windows Runtime Components written in C# can use any .NET functionality that’s allowed for the Windows Store app profile. As mentioned earlier, however, public types and interfaces must only expose WinRT types. Because Task isn’t a valid WinRT type, the public method for the component must expose the WinRT IAsyncOperation<T> type instead. Fortunately, an extension method exists to easily convert the .NET Task type to the WinRT IAsyncOperation type, as shown here: With the component now complete, you can compile it to make it ready for consumption from Windows Store apps. Under the Hood: Metadata Build the Windows Runtime Component and then navigate to the output directory by right-clicking on the project name in the Solution Explorer and choosing “Open folder in Windows Explorer.” When you navigate to the bin/Debug subdirectory, you’ll find a metadata file has been generated for the component named ThumbnailLibary.winmd. If you open the file with ILDasm.exe, you’ll see an interface has been generated for the component with a return type of: Windows.Foundation.IAsyncOperation<Windows.Foundation.Uri> Those are the WinRT types that were mapped for the component. It’s possible to also inspect the metadata and see how the CLR projects WinRT types. Open the same metadata file with the special /project extension like this: Ildasm.exe /project ThumbnailLibrary.winmd The return type now appears as: Windows.Foundation.IAsyncOperation<System.Uri> Notice that the WinRT version of the URI is projected to the .NET equivalent. The method signature exposes all valid WinRT types for Windows Store apps to consume, but from managed code the types will appear as .NET classes. You can use the /project extension to inspect how projection will affect the signature of managed and unmanaged Windows Runtime Components. Consume from C# Consuming the component from C# should seem familiar because it’s no different than referencing a class library. Note that there’s no reason to build a Windows Runtime Component if your only target is other managed code. You simply reference the WinRT project and then consume the classes as you would from an ordinary C# class library. In the sample code, the CSharpThumbnails project has a reference to the ThumbnailLibrary. The XAML for the main page defines a button for the user to pick a photograph and contains two images to host the original image and the thumbnail version. Figure 3 shows the basic XAML markup. <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <TextBlock Text="Tap the button below to choose an image to generate a thumbnail." Style="{StaticResource BodyTextStyle}" Margin="12"/> <Button Content="Pick Image" Grid. <TextBlock Text="Thumbnail:" Style="{StaticResource BodyTextStyle}" Grid. <Image x: <TextBlock Text="Source Image:" Style="{StaticResource BodyTextStyle}" Grid. <Image x: </Grid> The codebehind creates an instance of the WinRT FileOpenPicker component and configures it to browse images: The picker is called and a simple dialog is displayed if no valid files are found: The source image is then wired for display. The file is passed to the Windows Runtime Component to generate a thumbnail, and the URI passed back is used to set the thumbnail image for display: using (var fileStream = await file.OpenReadAsync()) { SourceImage.Source = LoadBitmap(fileStream); var maker = new ThumbnailMaker(); var stream = RandomAccessStreamReference .CreateFromUri(await maker.GenerateThumbnailAsync(file)); var bitmapImage = new BitmapImage(); bitmapImage.SetSource(await stream.OpenReadAsync()); ThumbnailImage.Source = bitmapImage; } Figure 4shows the result of running this against a photo I took of a pumpkin I carved. Figure 4 Windows Store Thumbnail App Built Using C# Consume from JavaScript Unlike regular C# class libraries, Windows Runtime Components can be called from any language that’s supported for creating Windows Store apps (core Windows Runtime Components that are part of the OS can be called from desktop apps as well). To see this in action, you can create the thumbnail app using HTML and JavaScript. This project is called JavaScriptThumbnails in the accompanying sample code download. The first step is to create an empty app using the blank Windows Store template for apps built using JavaScript. Once the template has been created, you can use simple HTML markup to define the page using the existing default.html file: Next, add a reference to the WinRT project (ThumbnailLibrary) just as you would to a regular C# project. Build the project so that you can use IntelliSense for the newly referenced component. You can reference the source code for the project to see the JavaScript equivalent for opening the file picker and selecting the image. To create an instance of the managed Windows Runtime Component, generate the thumbnail and display it to the user, use the following JavaScript: As you can see, the API call is almost identical to the one used in the C# project. Projection automatically changed the method signature from Pascal case to camel case (the call to generate the thumbnail begins with a lowercase character, as is the common convention in JavaScript code), and a special library called “promises” is used to handle the asynchronous nature of the code using a then or done statement. You can learn more about promises by reading the MSDN Dev Center topic, “Quickstart: Using Promises,” at bit.ly/OeWQCQ. The image tag supports URLs out of the box, so the URL passed back from the Windows Runtime Component is simply set directly to the src attribute on the image. One important caveat for using managed components in JavaScript code is that you can’t debug JavaScript and managed code at the same time. If you need to debug your component, you must right-click the project and choose the Debugging tab, and then select a debugging option that includes managed code. This is shown in Figure 5. Figure 5 Setting the Debug Options for a JavaScript Project Consume from C++ You can also consume managed Windows Runtime Components from native projects. C++ shares the same rendering engine as C#, so the CPlusPlusThumbnails project has the same XAML as the CSharpThumbnails project. The codebehind is different because the project uses the native C++ language option. C++ uses a special concurrency library to handle asynchronous operations. You can learn more about this library by reading the MSDN Dev Center topic, “Asynchronous Programming in C++,” at bit.ly/MUEqnR. The resulting code looks similar to the promises you saw in the JavaScript versions: ThumbnailMaker^ maker = ref new ThumbnailMaker(); create_task(maker->GenerateThumbnailAsync(file)).then([this](Uri^ uri) { RandomAccessStreamReference^ thumbnailStream = RandomAccessStreamReference::CreateFromUri(uri); create_task(thumbnailStream->OpenReadAsync()).then([this]( IRandomAccessStream^ imageStream) { auto image = ref new BitmapImage(); image->SetSource((IRandomAccessStream^)imageStream); ThumbnailImage->Source = image; }); }); When you run the app, you’ll find it looks and behaves identically to the C# version. Understand the Cost Creating Windows Runtime Components using managed languages is a powerful feature. This feature does come at a cost, however, and it’s important to understand the cost when you’re using it in projects. Windows Store apps built using native code don’t require the CLR to run. These apps may run directly in the Windows 8 environment. Similarly, apps developed using JavaScript also don’t require a dependency on the CLR. They rely on the Trident rendering engine and Chakra JavaScript engine (the same engines that drive Internet Explorer 10) to render HTML and CSS and interpret JavaScript code. Windows Store apps built with JavaScript may call native Windows Runtime Components directly, but when they call managed Windows Runtime Components, they take on a dependency to the CLR. The code written for the managed Windows Runtime Component will be compiled just-in-time (JIT) when it’s first accessed by the CLR’s JIT compiler. This might cause some delay the first time it’s accessed. A precompilation service called NGen handles compiling modules installed on the device, but it can take up to a full day to eventually compile all of the modules in a package once it has been installed. The CLR also manages memory by performing garbage collection. The garbage collector (GC) divides the heap into three generations and collects only portions of the heap using an algorithm designed to optimize performance. The GC might pause your app while it’s performing work. This often only introduces a slight delay that isn’t recognizable to the end user, and more intense garbage collection operations can often run in the background. If you have a large enough heap (when the managed portion of your code references hundreds of megabytes or more in memory objects), garbage collection might pause the app long enough for the user to perceive the lack of responsiveness. Most of these considerations are already in place when you’re building a managed Windows Store app. Managed code does add new concerns when you’re adding it to a Windows Store app built with C++ or JavaScript. It’s important to recognize that your app will consume additional CPU and memory when you introduce managed components. It might also take a recognizable performance hit, depending on the component (although many apps take on managed references without any noticeable impact). The benefit is that you don’t have to worry about managing memory yourself and, of course, you can leverage legacy code and skills. Alternatives for Managed Projects If you’re building Windows Store apps using managed code (C# or Visual Basic), you have several alternatives to create Windows Runtime Components that don’t have the same restrictions. You can easily create reusable components using a simple C# class library. If the class library is built for Windows Store apps, you can reference the project from your own Windows Store app. The creation of a class library also removes the restrictions of having to expose only WinRT types and not being able to use features that aren’t part of the WinRT type system, such as generics. Another alternative to consider is the Portable Class Library (PCL). This is a special type of class library that can be referenced from a variety of platforms without recompiling. Use this option if you have code you wish to share between other platforms—such as Windows Presentation Foundation, Silverlight and Windows Phone—and your Windows Store app. You can learn more about the PCL by reading my three-part blog series, “Understanding the Portable Library by Chasing ICommand,” at bit.ly/pclasslib. When your component includes more than just code, you might consider creating an Extension SDK. This is a special form of SDK that Visual Studio 2012 can treat as a single item. The package might include source code, assets, files and even binaries, including Windows Runtime Components. You can also create design-time extensions to make it easier to consume and use your component from within Visual Studio 2012. Extension SDKs can’t be posted to the Windows Store because they’re not self-contained apps. You can learn more about Extension SDKs by reading the MSDN Library article, “How to: Create a Software Development Kit,” at bit.ly/L9Ognt. When to Create Managed Windows Runtime Components With so many possible alternatives, does it ever make sense to create Windows Runtime Components using managed code? Yes—but consider the following questions. The first question to ask is whether you need your components to be referenced from Windows Store apps written using JavaScript or native code using C++. If this isn’t the case, there’s no reason to use a Windows Runtime Component when class libraries and other options will work instead. If this is the case, you must create a Windows Runtime Component to be consumable from all of the available language options. The next question is whether you should create your component in C++ or use managed code. There are a number of reasons to use managed code. One reason might be that your team is more experienced in C# or Visual Basic than in C++ and can leverage existing skills to build the components. Another reason might be that you have existing algorithms written in a managed language that will be easier to port if you keep the same language selection. There are some tasks that might be easier to write and maintain using managed languages and class libraries instead of using native code, and teams that are used to developing in managed languages will be far more productive. Wrapping up, in this article you’ve learned you can create reusable Windows Runtime Components using managed C# and Visual Basic code. These components can be easily referenced and consumed from Windows Store apps written in any language, including JavaScript and C++. While it’s important to understand the rules for creating Windows Runtime Components and the impact of choosing to use managed code, this option provides a unique opportunity to use the language of your choice and leverage existing code to create components that can be consumed by all Windows Store apps. Jeremy Likness is a principal consultant for Wintellect LLC in Atlanta. He’s a three-year Microsoft Silverlight MVP and the author of several books, including the upcoming “Building Windows 8 Applications with C# and XAML” (Addison-Wesley Professional, 2012). Learn more online at bit.ly/win8design and follow his blog at csharperimage.jeremylikness.com. Thanks to the following technical experts for reviewing this article: Layla Driscoll, Shawn Farkas, John Garland, Jeff Prosise and Jeffrey Richter Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/magazine/jj651570.aspx
CC-MAIN-2019-18
refinedweb
3,652
53
Here's my code: #include <iostream> using namespace std; const int SENIOR_PRICE = 9; const int ADULT_PRICE = 12; const float CHILD_PRICE = 6.95; const float TAX_RATE = .06; int main() { string name; string address; int numSeniorTickets; int numAdultTickets; int numChildTickets; cout << "Enter customer name:" << endl; cin >> name; cout << "Enter customer address:" << endl; cin >> address; cout << "How many senior season tickets?" << endl; cin >> numSeniorTickets; cout << "How many adult tickets?" << endl; cin >> numAdultTickets; cout << "How many child tickets?" << endl; cin >> numChildTickets; return 0; } Enter customer name: Daniel Benson Enter customer address: How many senior season tickets? 5 How many adult tickets? 5 How many child tickets? 5 cin >> name; Takes the input only upto space. Therefore if you print the variable name you will get Daniel. And if you print the variable address you will get Benson. So as far as the program is concerned it has taken in two strings as input. A proof of the above statements by printing the variables after input. You might want to use cin.getline() for taking space separated strings as input.
https://codedump.io/share/1SzD26h0Z52X/1/in-c-my-program-automatically-goes-to-the-next-cin-prompt-without-letting-the-user-add-input
CC-MAIN-2016-44
refinedweb
175
76.72
I am working on Ubuntu 18.04 with Python 2.7 and OpenCV 3.2. My application is the front-end of a video pipeline and entails extracting video frames from a webcam, possibly cropping and/or rotating them (90, 180, 270 deg), and then distributing them to one or more other pieces of code for further processing. The overall system tries to maximize efficiency at every step to e.g., improve options for adding functionality later on (compute power and bandwidth wise). Functionally, I have the front-end working, but I want to improve its efficiency by processing JPEG frames extracted from the camera's MJPEG stream. This would allow efficient, lossless cropping and rotation in the JPEG domain, e.g. using jpegtran-cffi, and distribution of compressed frames that are smaller than the corresponding decoded ones. JPEG decoding will take place if/when/where necessary, with an overall expected gain. As an extra benefit, this approach allows efficient saving of the webcam video without loss of image quality due to decoding + re-coding. The problem I run into is that OpenCV's VideoCapture class does not seem to allow access to the MJPEG stream: import cv2 cam = cv2.VideoCapture() cam.open(0) if not cam.isOpened(): print("Cannot open camera") else: enabled = True while enabled: enabled, frame = cam.read() # do stuff Here, frame is always in component (i.e., decoded) format. I looked at using cam.grab() + cam.retrieve() instead of cam.read() with the same result (in line with the OpenCV documentation). I also tried cam.set(cv2.CAP_PROP_CONVERT_RGB, False) but that only converts the decoded video to RGB (if it is in another component format) and does not prevent decoding. BTW I verified that the camera uses the MJPEG codec (via cam.get(cv2.CAP_PROP_FOURCC)). So my questions are: am I missing something or will this approach not work? If the latter, is there an alternative? A final point: the application has to be able to control the webcam within its capabilities; e.g., frame size, frame rate, exposure, gain, ... This is nicely supported by cv2.VideoCapture. Thanks! === Follow-up: in absence of the solution I was looking for, I added explicit JPEG encoding: jpeg_frame = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), _JPEG_QUALITY])[1] with _JPEG_QUALITY set to 90 (out of 100). While this adds computation and reduces image quality, both in principle redundant, it allows me to experiment with trade-offs. --KvZ
https://techqa.club/v/q/extracting-jpg-frames-from-webcam-mjpg-stream-using-opencv-c3RhY2tvdmVyZmxvd3w1NTU2MDkwNg==
CC-MAIN-2021-10
refinedweb
407
58.99
For other versions, see the Versioned plugin docs. For plugins not bundled by default, it is easy to install by running bin/logstash-plugin install logstash-output-statsd. See Working with plugins for more details. For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in Github. For the list of Elastic supported plugins, please consult the Elastic Support Matrix. statsd is a network daemon for aggregating statistics, such as counters and timers, and shipping over UDP to backend services, such as Graphite or Datadog. The general idea is that you send metrics to statsd and every few seconds it will emit the aggregated values to the backend. Example aggregates are sums, average and maximum values, their standard deviation, etc. This plugin makes it easy to send such metrics based on data in Logstash events. You can learn about statsd here: Typical examples of how this can be used with Logstash include counting HTTP hits by response code, summing the total number of bytes of traffic served, and tracking the 50th and 95th percentile of the processing time of requests. Each metric emitted to statsd has a dot-separated path, a type, and a value. The metric path is built from the namespace and sender options together with the metric name that’s picked up depending on the type of metric. All in all, the metric path will follow this pattern: namespace.sender.metric With regards to this plugin, the default namespace is "logstash", the default sender is the host field, and the metric name depends on what is set as the metric name in the increment, decrement, timing, count, set or gauge options. In metric paths, colons (":"), pipes ("|") and at signs ("@") are reserved and will be replaced by underscores ("_"). Example: output { statsd { host => "statsd.example.org" count => { " => "%{bytes}" } } } If run on a host named hal9000 the configuration above will send the following metric to statsd if the current event has 123 in its bytes field: logstash.hal9000. This plugin supports the following configuration options plus the Common Options described later. Also see Common Options for a list of options supported by all output plugins. A count metric. metric_name => count as hash. %{fieldname} substitutions are allowed in the metric names. A decrement metric. Metric names as array. %{fieldname} substitutions are allowed in the metric names. A gauge metric. metric_name => gauge as hash. %{fieldname} substitutions are allowed in the metric names. The hostname or IP address of the statsd server. An increment metric. Metric names as array. %{fieldname} substitutions are allowed in the metric names. The statsd namespace to use for this metric. %{fieldname} substitutions are allowed. The protocol to connect to on your statsd server. The name of the sender. Dots will be replaced with underscores. %{fieldname} substitutions are allowed. A set metric. metric_name => "string" to append as hash. %{fieldname} substitutions are allowed in the metric names. statsd outputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs. output { statsd { id => "my_plugin_id" } } Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.
https://www.elastic.co/guide/en/logstash/7.13/plugins-outputs-statsd.html
CC-MAIN-2022-21
refinedweb
533
67.86
In the last project I did I used Reach Router and I think it’s the simplest way to have routing in a React app. I think it’s much easier than React Router, which is another router I used in the past. Here’s a 5 minutes tutorial to get the basics of it. Installation First, install it using npm install @reach/router If the @syntax is new to you, it’s an npm feature to allow a scoped package. A namespace, in other words. Next, import it in your project. import { Router } from '@reach/router' Basic usage I use it in the top-level React file, index.js in a create-react-app installation, wrapping all components that I want to appear: ReactDOM.render( <Router> <Form path="/" /> <PrivateArea path="/private-area" /> </Router>, document.getElementById('root') ) The path attribute I add to the components allows me to set the path for them. In other words, when I type that path in the browser URL bar, Reach Router shows that specific component to me. The / path is the index route, and shows up when you don’t set a URL / path beside the app domain. The “home page”, in other words. The default route When a user visits an URL that does not match any route, by default Reach Router redirects to the / route. You can add a default route to handle this case and display a nice “404” message instead: <Router> <Form path="/" /> <PrivateArea path="/private-area" /> <NotFound default /> </Router> Programmatically change the route Use the navigate function to programmatically change the route in your app: import { navigate } from '@reach/router' navigate('/private-area') Link to routes in JSX Use the Link component to link to your routes using JSX: import { Link } from '@reach/router' <Link to="/">Home</Link> <Link to="/private-area">Private Area</Link> URL parameters Add parameters using the :param syntax: <Router> <User path="users/:userId" /> </Router> Now in this hypothetical User component we can get the userId as a prop: const User = ({ userId }) => ( <p>User {userId}</p> ) Nested routes I showed you how routes can be defined in this way in your top level React file: <Router> <Form path="/" /> <PrivateArea path="/private-area" /> </Router> You can define nested routes: <Router> <Form path="/" /> <PrivateArea path="/private-area"> <User path=":userId" /> </PrivateArea> </Router> So now you can have your /private-area/23232 link point to User component, passing the userId 23232. You can also choose to allow a component to define its own routes inside it. You use the /* wildcard after the route: <Router> <Form path="/" /> <PrivateArea path="/private-area/*" /> </Router> then inside the component you can import Router again, and define its own set of sub-routes: //component PrivateArea <Router> <User path="/:userId" /> </router> Any route using /private-area/something will be handled by the User component, and the part after the route will be sent as its userId prop. To display something in the /private-area route now you also need to add a / handler in the PrivateArea component: //component PrivateArea <Router> <User path="/:userId" /> <PrivateAreaDashboard path="/" /> </router> Download my free React Handbook! Check out my Web Development Bootcamp. Next cohort is in April 2022, join the waiting list!
https://flaviocopes.com/react-reach-router/
CC-MAIN-2021-49
refinedweb
534
63.63
32162/how-to-delete-a-vpc-using-boto3 You can view this answer here : Before deleting the VPC successfully you must delete all the dependencies related to the VPC. eg: Internet Gateway, Subnets, Route Tables, etc The code I used is as follows: import boto3 ec2 = boto3.resource('ec2') ec2client = ec2.meta.client ec2client.delete_vpc(VpcId = 'vpc-01fca2f1bae08f4be') Once you are done with this, you can run the above code and it works fine You can delete the file from S3 ...READ MORE Here is the simple way of implementing ...READ MORE .terminate is used for instances and not ...READ MORE To create the subnet in VPC: subnet = ...READ MORE The error clearly says the error you ...READ MORE Here is the code to attach a ...READ MORE Here is a simple implementation. You need ...READ MORE This is the code to delete the ...READ MORE You can refer to this question here: You ...READ MORE You can delete the folder by using ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/32162/how-to-delete-a-vpc-using-boto3
CC-MAIN-2020-34
refinedweb
172
85.39
When you play a music file in your favourite music player, or in your portable media player the track name, album, artist, lyrics gets displayed. You can search the songs with artists, album names. Even some of the tracks come with album art too, but there is no image file anywhere. The question generally arises, where does these information come from? The answer is straight forward; this metadata about the audio track is stored inside the audio file itself. The different audio files need different codecs. Different audio format files also have different such metadata systems. For example The Vorbis comments, APE tag, ID3 tags etc. The most common and popular audio media (although not the best) is the mp3 . Mp3 audio format stores this metadata inside the music file, either at the beginning or at the end or at both locations. The music metadata system used with mp3 is called an ID3 Tag. We will rip off the ID3 tag and check out what’s inside it in this article. We will discuss about ID3v1.x and ID3v2.x tags. Article Index - Introduction - Article Index - History - ID3v1 tag - ID3v1 Extended - ID3v2.x Tag Structure - Synchronization Safe Integers and Unsynchronization Scheme - ID3v2.x Practical Lab - Taglib - At Last - Links and References History In the beginning mp3 music files did not have any feature to carry text data with the audio file, what it could do is to make use of a couple of unused bits on the header of the compressed audio blocks, as ‘copyright’ and ‘private’ bits. Which was not very useful. A man named Eric Kemp came forward and developed a fixed sized tagging system and placed it at the end of the audio file, where some text information could be stored without conflict. This was the first ID3 tag the ID3v1 tag, at that time known as “IDentify an mp3” . After this Michael Mutschler made a minor modification to this and introduced one new field, this version was the ID3v1.1 . Now we straight dig into the tags. ID3v1 tag ID3v1 tags supported text only data and also with a very limited number of fields and field sizes. This contains a fixed 128-byte sized tag and was placed at the last of the media file. The tag contained 6 fixed length fields. Which are described in the below table: The stored information takes up 125 bytes of the 128 bytes. The first three bytes of the are always “TAG” (without the quotes). The genre of the track is represented by 1 byte. The value of this byte is looked up in a table of genres, which is fixed and it is determined. For example 52 is Electronic genre, 32 is classical. ID3v1 defines only the first 79 genre types. Genre 80 to 128 is defined by Winamp. (ID3v2.3 8.Appendix ) The next improvement was done by Michael Mutschler, which was nothing but simply splitting the comment field into two fields. The 30 bytes of the comment field in ID3v1 was split into one 28byte new comment field and the remaining 2 bytes were reserved for Album track. Which describes which the track number. The first byte of the Album track is always zero, and the next byte describes the track number. Although this tagging scheme is simple and easy to make a coder-decoder, but very limited. The Track name , Album name are limited to 30 chars, not all albums and tracks are below 30 chars. The comment field is too short to store something useful. Another problem is that, because the ID3 tag is located at the end of the media file, it is the last data to reach at the user’s end in case of streaming. So the streaming media player would be able to show the track details after the track has ended. There was also no scope for adding more fields, except extending the tag length. I have posted a small ID3v1 tag parsing library in this post : An ID3v1 Tag Parsing Library . With this you can read write and rip the ID3v1 tags from an mp3 file. ID3v1 Extended An unofficial extension of ID3v1.x was the ID3 Extended tag. This was to overcome the field length and it introduced 4 new fields. The extended tag has 60bytes reserved space for track name, artist and album, each of these field contains the next 60 characters of the ID3v1.x tags. So if the name of a track has 53 characters, then the first 30 characters will go into the standard ID3v1.x tag section, and the next 23 characters will go into the extended tag’s title field. The new fields introduced are, speed, free-text genre, start-time and end-time. The speed in represented by 1 byte and could have values 0=unset, 1=slow, 2=medium, 3=fast, 4=hardcore. From this a smart play-list according to a certain mood could be created. The free-text genre allows to store a customized genre upto 30 characters, instead of the fixed table values in ID3v1.x. The start and the end time takes 6bytes each and has a mmm:ss format, which is used to make fade-in and fade-outs in media players. The extended tag has a total size of 227 bytes. This 227 bytes is placed just before the ID3v1.x tag. The extended tag has a header “TAG+“. So first we check if the file has an ID3v1.x tag, if yes then we check if it has an extended tag, then we read the values from both the tags, and concatenate the strings as needed. ID3v2.x Tag Structure On 1998 ID3v2 came to the rescue. Although it may seem a version upgrade to the ID3v1 tag, but it actually is totally different internally. This tag does not have fixed length fields anymore. The information are stored in self describing dynamic length blocks of data called frames. The ID3v2 tag contains the frames within it, thus acts as a container of the frames. This is similar like say you have a cover file, and you have written on it how many papers are there in it, and each paper has written on it how many lines of text is written on that paper. The cover file could be thought as the tag, a container, and the files can be thought as frames, which are self described by the header, and contains data. This eliminates the field length restriction, as now each tag can store any amount of data in a frame. Although the limitation is that the tag could be at max 256MB long, and each frame cannot be more than 16MB. Now a 16MB long frame could hold pdf books, and in reality this could be done. ID3v2.x lets the programmer to define personal frames also. The tag, is placed at the beginning of the file, so the streaming problem was solved, as now the streaming media players received the track information first. Below we discuss the internals of ID3v2.x tags. The ID3v2.x description will follow the ID3v2.3 version mainly; but also we will talk about ID3v2.4 also. Read the id3v2.4.0-changes documents at the ID3 tag official site (see the Links and References section at the end of this article) for the changes. Tag Header The tag, placed at the beginning of the media file, begins with the ID3 tag identifier – the string “ID3” (3 bytes), followed by 2 bytes representing the version number and the revision number. The version number is very important to know for decoding the tag frames, since different tag versions (v2.0, v2.3, v2.4) have different specs, and newer versions are not backward-compatible. The next byte contains a set of flags, which has different interpretations in different versions of the tag. The next 4 bytes, the size byte, store the length of the tag. This header describes the whole tag, the ‘container’, telling about its length, including all the frames and padding (explained later) within it, type and properties. The size bytes describe the span of the tag in the file, and the value is used to stop scanning for frames, but it is not stored in normal bit sequence – involving a tricky part with the size calculation. The size is described by 4 bytes. Each byte’s most significant bit (MSB – that is, the 7th bit) is set to zero and is ignored. The remaining 28 bits represent the actual value. To decode the actual value, convert the 4 byte value to the binary number system, then rewrite the number, removing the 7th bit from each byte, acquiring a 28-bit number defining the length of the tag. The reverse is done when encoding: write a 28-bit number, and insert a 0 in the MSB of each byte. This is not too difficult, but needs a lot of bit-shifting and pasting. The 256 MB limitation on tag size comes from this 28-bit limit (228). An integer encoded in this way is known as a synchronization safe integer. Read section Synchronization Safe Integers and Unsynchronization Scheme to know more An extended header could also exist to describe additional header information. An extended header is not critical to decode the tag information. The extended header follows the header, and is 6 to 10bytes long. To indicate that the tag has an extended header the 6th bit of the ID3 header flag byte is set, which means that an extended header is following the header. (ID3v2.3 Sec 3.2) Frame After retrieving the whole tag, we look inside it for the ‘chunks of data’ (the frames) and decode them. There is no strict ordering of the frames in the tag. Again, each frame has a frame header, describing the frame’s type, length, and other properties. Each frame has an identifier, which identifies what data does that frame contain. The identifier is 4bytes in length in ID3v2.3 and ID3v2.4 (3bytes in ID3v2.0). For example the frame identifier containing the Album name is “TALB” for ID3v2.3 and ID3v2.4 (“TAL” for ID3v2.0). The frame identifier is followed by 4bytes in ID3v2.3 (3bytes in ID3v2.0) which determines the size of the frame. These size bytes are stored as normal integers (big-endian) in ID3v2.3 and lesser version, and as synchronization safe integers in ID3v2.4 as described in the section Synchronization Safe Integers and Unsynchronization Scheme and needs to be decoded to get its actual value. ID3v2.3 has two more bytes following the size bytes, containing frame specific flags representing the properties of that frame. Thus the ID3v2.3 frame header is 10bytes long (ID3v2.0 6bytes long). The frame size, described by the ‘size’ bytes, is the size of the frame without the header length. This completes the frame header. The frame header information describes about the frames and is always 10 bytes long (ID3v2.3). Some frames has some extra bytes to describe more information in addition to the header. These bytes follows the header but is not a part of it, instead these bytes are a part of the frame contents and the frame size is calculated including these special bytes. For example, text frames have one byte just after the header, that represents the text encoding; the actual text follows this byte. (ID3v2.3 Sec 4.2.) Similarly with the “COMM”(comment) frame – the 4 bytes following the header identify the text encoding (1 byte) and text language (3 bytes). Then comes an optional short content description terminated by a NULL byte, after which the actual comment text is stored. (ID3v2.3 Sec 4.11.) The frame size integer in this case will include the 4 bytes, the short description text, if present, and the NULL terminator. This has to be handled by the ID3 reader as per the frame specification. Other than text tags there are picture tags for storing album arts. The “APIC” frame in ID3v2.3 stores the picture values (‘PIC’ in ID3v2.0). This would solve the mystery picture which is displayed as the album art in the media player, but there was no picture. The tag could hold more than one pictures. The frame size could be at max 16MB so hi-res pictures is not a trouble. There are other types of tags as well , like URL tags, license tags, and a lot more. The frame identifiers and all the details are listed in the ID3 official website’s developers documentations. Synchronization Safe Integers and Unsynchronization Scheme But why would anyone store a number is such a manner, and not in a more normal fashion? The quick answer is that this method, could never have a combination starting with 0xFF and hence, numbers represented in such a manner would never cause a false sync. Such integers are called synchronization safe integers. But false synchronization can occur in other data parts of the tag too. For this an unsynchronization scheme is used. A flag in the tag header tells that if unsynchronization is used or not. The unsynchronization scheme replaces all the 0xFFF patterns with 0xFF0F and 0xFFE patterns with 0xFF0E pattern, and thus false signals are avoided. When reading the ID3v2 tag with this first it has to be checked that if the unsynchronization is used by inspecting the flag indicating if the scheme is used, and the reverse process has to be used to get the original data. The parts where this unsynchronization scheme is not used, and can have a 0xFF pattern, are stored in synchronization safe integers, like the tag size byte of the tag, and the frame size bytes in ID3v2.4 It is important to note the difference between the unsynchronization scheme and the synchronization safe integers. In ID3v2.2 and ID3v2.3 tags the unsynchronization scheme is used at the tag level, that is the whole body of the tag (except the header) is subjected to the unsynchronization scheme. And for that the size byte of the tag header is encoded as synchronization safe integers. So these together avoid any false synchronization signals. In ID3v2.4 the unsynchronization scheme is used at the tag level, that is body of each of the tags are unsynchronized separately, depending on the frame’s unsynchronization flag indicator. So to avoid false synchronization due to the frame’s size bytes they each are coded as synchronization safe integers. Thus in ID3v2.2 and ID3v2.3 the tag header’s size bytes are encoded as synchronization safe integers, and the whole body is subjected to the unsynchronization scheme, and in ID3v2.4 the tag header size byte as well as all the frame’s size bytes are stored as synchronization safe integers and each of the tag body are individually subjected to the unsynchronization scheme. This bit shifting can be implemented with the below C Language code snippet. The below code snippet converts an input integer in “x” into or from synchronization safe integers. /* Encoding:*/ /* Convert normal integer into synchronization safe integer */ x_final = 0x00;); /* Decoding:*/ /* Convert synchronization safe integer into normal integer */ x_final = 0x00; a = x & 0xff; b = (x >> 8) & 0xff; c = (x >> 16) & 0xff; d = (x >> 24) & 0xff; x_final = x_final | a; x_final = x_final | (b << 7); x_final = x_final | (c << 14); x_final = x_final | (d << 21); You can get a full source code here in this post : Synchronization Safe Integer ID3v2.x Practical Lab Now let’s hack into the ID3 tags and do something really geeky. The following section describes how to read an ID3 tag manually (and also write it), without using any tag-parsing library. You will need a hex editor like hexdump or Okteta; an MP3 file with an ID3v2 tag; and a synchronization-safe-byte encoder/decoder program, like the one you will find here in this post : Synchronization Safe Integer which will take the bit-shifting load off you. I have used Okteta to do my dumping – it makes it easy to track the hex values and the text. The following is a hex dump of the first 256 bytes of an MP3 file: 00000000 49 44 33 03 00 00 00 00 01 18 54 49 54 32 00 00 |ID3.......TIT2..| 00000010 00 0c 00 00 00 42 69 6c 6c 69 65 20 4a 65 61 6e |.....Billie Jean| 00000020 54 41 4c 42 00 00 00 09 00 00 00 54 68 72 69 6c |TALB.......Thril| 00000030 6c 65 72 54 50 45 31 00 00 00 10 00 00 00 4d 69 |lerTPE1.......Mi| 00000040 63 68 61 65 6c 20 4a 61 63 6b 73 6f 6e 54 59 45 |chael JacksonTYE| 00000050 52 00 00 00 05 00 00 00 32 30 30 35 54 43 4f 4d |R.......2005TCOM| 00000060 00 00 00 10 00 00 00 4d 69 63 68 61 65 6c 20 4a |.......Michael J| 00000070 61 63 6b 73 6f 6e 43 4f 4d 4d 00 00 00 19 00 00 |acksonCOMM......| 00000080 00 00 00 00 00 32 35 79 65 61 72 20 73 70 65 63 |.....25year spec| 00000090 69 61 6c 20 61 6c 62 75 6d 00 00 00 00 ff fb 90 |ial album.......| 000000a0 64 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |d...............| 000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000000c0 00 49 6e 66 6f 00 00 00 0f 00 00 2b f2 00 47 c2 |.Info......+..G.| 000000d0 63 00 02 05 08 0a 0d 10 12 15 17 1a 1c 1f 21 24 |c.............!$| 000000e0 26 29 2b 2e 30 33 35 38 3a 3d 40 42 45 48 4a 4d |&)+.0358:=@BEHJM| 000000f0 4f 52 54 57 59 5c 5e 61 63 66 68 6b 6d 70 73 75 |ORTWY\^acfhkmpsu| A glance is enough to get the major information, but we would look the bytes from the computer’s eye-view. Now switch yourself from human mode to computer mode and scan the above data. Lets now start scanning from the beginning. If you scan the above data (look at the hex values too), you can verify the following: - The first three bytes of the file contain the string “ID3” – that means the file has an ID3v2 tag. - The next byte, which represents the tag version number, is 0x03 – that means this tag’s version is ID3v2.3. - The next byte, the revision number, is 0x00. So after reading the first 5 bytes we know that the file has an ID3v2.3.0 . The version number is very important, we find it to be 3 and open the ID3v2.3 specification document and will use it to decode the tag. If the version was 0 then we would use the ID3v2.0 specification which is different from v2.3 specification and decode according to that. - Then we continue to the sixth byte, the Flags byte, which has a value of 0x00. Bits 7,6 and 5 represent Unsynchronization, Extended header, and Experimental indicator flags respectively for ID3v2.3. Since this byte is zero, this file’s tag does not need the unsynchronization, the tag has no extended header, and no experimental indicator. (See ID3v2.3 Sec 3.1.) - The next 4 bytes are the synchronization-safe integer expressing the size of the file: 0x00 00 01 18. Let’s use the bit-manipulation described earlier to decode the integer. (Later, you could just use the supplied sourcecode provided here : Synchronization Safe Integer) The ignored bit positions are replaced by underscores in our visualization below. The caret signs on the second line of each visualization each represent the end of a 7-bit block. _0000000 _0000000 _0000001 _0011000 -^------ -^------ -^------ -^------ To fill the ignored gaps, shift the higher bytes onto the empty space of the lower byte which results in below 28bit number. _ _ _ _0000 00000000 00000000 10011000 -------^--- ---^---- --^----- -^------ Which interprets into 0x98 or 152 in decimal. So these 4 bytes are trying to say that the tag spans 152bytes from the start of the file. If you look at the 10th line with 0x00 00 00 90 address label and advance 8 bytes you would find the tag ending (Highlighted in red). After we figured out the end, we now have the whole tag, and would only consider data inside the 0x98 boundary (inclusive), and would not see anything beyond that address. So we just read those bytes in a buffer and close the file. Now we are done with reading the 10 byte header and known all about the tag. Now the scanning will proceed reading the frames within the tag. Reading the next 4 bytes in search of a frame yields “TIT2”. Look that up in the ID3v2.3 frame database and we find that’s the Title/song name/content description. We read the next 4 bytes (the frame size), which is 0x00 00 00 0C which is equivalent to 12 in decimal. The frame size bytes are stored as normal integers in ID3v2.3 tags, so it does not need any bit manipulation. If the tag you are reading is ID3v2.4, then you need to perform the bit-shifting on this value, as ID3v2.4 stores the frame size as synch-safe integer. Note that even a number needs to be decoded from synchronization safe integer, a number with the rightmost byte less than 0x80 practically needs no shifting. We remember this information, and read the next 2 bytes, containing the frame flags, which are 0x00 00, meaning all the flags are unset. We will ignore these bytes for now, to keep the interpretation simple. (Check specification sheet) The frame header is complete. The frame size says the data is stored in the next 12 bytes after the frame header. Since this is a text frame, the first byte of the stored data will contain the text encoding information, as per ID3v2.3 Sec. 4.2 – the stored value is 0x00, indicating the text after it is encoded in the ISO-8859-1 character set. If it was 0x01, then it would have been Unicode (ID3v2.3 Sec. 3.3). Next, we read the remaining 11 bytes to get the title of the track: “Billie Jean”. We have just read the Title of the song!! I will brisk on the next tags. We need to follow the same procedure for the subsequent tags, though I’ll speed up a bit here. After the Title text, read the next 4 bytes – “TALB”. Look up the ID3v2.3 database – it’s the Album name. The next 4 bytes return the frame size, 9 bytes. For now, we ignore the next 2 bytes (flags), read the 9 bytes (including the text encoding byte) and thus obtain the Album name – “Thriller”. You can similarly apply this procedure to the “TPE1” (lead performer), “TYER” (year) and “TCOM” (composer) frames. Let’s slow down again for the “COMM” frame, since the 5 extra bytes before the comment might be confusing. Section 4.11 in the ID3v2.3 specification tells us that the “COMM” frame includes three language bytes and one text-encoding byte at the beginning of the data, followed by an optional ‘short content description’, terminated by a NULL. This frame’s size shows as 0x00 00 00 19 – 25 bytes. That includes language (3 bytes), encoding ( 1 byte), NULL terminator (1 byte). So the actual comment string is (25-5) = 20 bytes long: “25year special album”. Now we’ve reached the 0x98 tag boundary, so let’s stop before we get into the music data itself. This particular MP3 file did not have any padding in the tag. Open and check other files, you might see a lot of 0x00 bytes before the tag’s end. To add new frames, first determine what data you want to write. Calculate its length including any frame-specific bytes (like encoding for text frames). If you are writing to ID3v2.4 then encode the byte-size by reversing the bit-shifting, and make it synch-safe. Build up the frame header with the required components for the ID3 specification used in the media file, including the ‘size’ value, flags as needed (or just zero). If the file’s tag has sufficient padding to accommodate your new frame, write the frame header and data into the padded area. If the file doesn’t have enough padding, or has no padding at all, you will need to shift the audio data – possibly by building up a new tag with the new frame plus padding, writing it to a new file, then copying the music data from the old file in after the tag. Keep in mind that when you alter/add frames, you also need to update the tag header’s size value to reflect the change in length of the tag (if any). If you are moving the data into a new file, once done, you can delete (unlink) the original file, and rename the temporary file with the original media file’s name. You can verify the results of your tag manipulation in a media player (e.g. Amarok) or tag-manager (e.g. EasyTag). Note:It might happen that you change the ID3v2.x tag, but the tag editor shows the same old data. This might be because the ID3v2.x tag is corrupted or not in proper format, and so the tag reader is reading the ID3v1.x tag at the end of the file. Strip out the ID3v1.x tag, or simply zero it out, to avoid it being displayed. Also, don’t forget to refresh the file – force the tag editor to re-read the tags. Taglib We won’t write a ID3v2.x decoder, like we wrote for ID3v1.x . Instead we would check out the taglib API . Heard this name somewhere? Might be when updating amarok, or in a missing dependency message. This is a music tag editing library, which not only supports ID3vX.Y tags but also APE, Xiph, and FLAC tagging formats. And this is available for also Mac, and Windows platform. This is used in different applications which you currently use, like, Amarok, JuK, Last.fm, Songbird and a lot more. Without talking much, lets get started. Download taglib from your distribution repository. yum install taglib taglib-devel taglib-doc Or, download the source from the official site , and install by executing: ./configure make make install #as superuser Now we are ready to code. I will implement the C Language binding to demonstrate. /* Program: A minimal example of the taglib library */ #include <stdio.h> #include <taglib/tag_c.h> #ifndef FALSE #define FALSE 0 #endif int main (int argc, char *argv[]) { TagLib_File *file; TagLib_Tag *tag; const TagLib_AudioProperties *audio_properties; taglib_set_strings_unicode (FALSE); if (argc == 1) { printf ("Usage: %s <filename.mp3>\n",argv[0]); return 0; } file = taglib_file_new (argv[1]); if (file == NULL) { printf ("Error Opening File \"%s\"\n",argv[1]); return 1; } tag = taglib_file_tag (file); audio_properties = taglib_file_audioproperties (file); printf ("\n[TAG]"); printf ("\n\tTitle: %s", taglib_tag_title (tag)); printf ("\n\tArtist: %s", taglib_tag_artist (tag)); printf ("\n\tAlbum: %s", taglib_tag_album (tag)); printf ("\n\tYear: %d", taglib_tag_year (tag)); printf ("\n\tComment: %s", taglib_tag_comment (tag)); printf ("\n\tTrack: %d", taglib_tag_track (tag)); printf ("\n\tGenre: %s", taglib_tag_genre (tag)); printf ("\n[AUDIO]"); printf ("\n\tBitrate: %d kbps", taglib_audioproperties_bitrate (audio_properties)); printf ("\n\tSample Rate: %d Hz", taglib_audioproperties_samplerate (audio_properties)); printf ("\n\tChannels: %d", taglib_audioproperties_channels (audio_properties)); printf ("\n\tTrack Length: %d min %d sec", taglib_audioproperties_length (audio_properties) / 60, taglib_audioproperties_length (audio_properties) % 60); printf ("\n\n"); taglib_tag_free_strings (); taglib_file_free (file); return 0; } The functions are self describing. To get the documentation, check the tag_c.h header file. Which should be located in /usr/local/include/taglib/tag_c.h . It is well commented and easy to understand. That is all about taglib i will tell now. At Last After this journey through ID3 tags, the version-specific documentation is your best friend if you wish to start coding your own tag parser. Then there is taglib, which you can use with SQLite to store song metadata and enhance a music player, or make a tag editor. So read the documentation, hack some tags, and enjoy coding! Links and References - ID3 tag official site: - ID3tag Wikipedia: - Taglib Official Site: - EasyTag Official site: First Publish Information : This article was first published on Linux For You (LFY), April 2010 Issue, under Creative Commons Licence. Author: Arjun Pakrashi 21 thoughts on “What are ID3 Tags all about?” What a great article, well organized and written! =) FANTASTIC ARTICLE! It helped me out A LOT! :D Thank you very much! :D Compact, informative and helpful. Thank you for the article. Thank you for the feedback. Best article about İD3 tag I’ve ever read Thank You very much! I have a questions.Whats happens if bytes after flag byte are(ı mean 4 bytes) are not 0x00 00 01 18.what happens if they are 00000000,00000111,00011101,00010100.İf first bit erased then xxxx0000000000011100111010010100.So which part should I count?This doesnt make like 0x98.What do you say about this? It won’t be 0x98. You can just take the ‘x’ s as zeros and find the hex (or decimal) equivalent. In your case 0000 0000 = 0x00, 0000 0111 = 0x07, 0001 1101 =0x1d, 0001 0100 = 0x14 which makes 0x00 07 1d 14 And if you follow what i have explained in the synchsafe integer section, or simply follow the ID3v2 standards then you can decode it as: _000 0000, _000 0111, _001 1101, _001 0100 then simply forget the dashes, which is done by shifting each LSB into the next byte’s MSB (the dash) ____ 0000 = 0x00, 0000 0001 = 0x01, 1100 1110 = 0xce. 1001 0100 = 0x94 That is your number is decoded into 0x1ce94 that is in decimal 118420, that is your ID3 tag is of 118420 bytes. Your calculations are right. Basically you ignore the MSB of each byte, that means you calculate the value without it, therefore you can consider the ‘x’ s in your solution as zeros. Maybe you would also have a look at the code to encode/decode the syncsafe integers here: I couldnt understand in the syncsafe but I understand now.Thanx for quick reply. 118420 is very big for tag right?Maybe there is picture in it.İs it possible? Not that big, may be it contains the lyrics or some long textual information, may be also it contains an image. I have another question.I know I asked too much but .I will be appreciated if you aswer this.I have a mp3 file and ı find “APIC” bytes but ı couldn understand bytes after this bytes.İt continue like this: 00000000(00) 00000001(01) 10100110(A6) 00010111(17) 00000000(00) 00000000(00) 00000000(00) 01101001(69)(i) 01101101(6D)(m) 01100001(61)(a) 01100111(67)(g) 01100101(65)(e) 00101111(2F)(/) 01110000(70)(p) 01101110(6E)(n) 01100111(67)(g) what is the meanin of the bytes before (69)(i).I look from the idtag site but I couldn understand.It says first bit after APIC byte is text encoding.this is ok it means ıso8859-1.But after that byte it says MIME byte.Mine is not $00.Mine is 01.Whats does it mean? What should I undertand till (69)(i) bytes?Thx. hey are you there Sorry for the delay, I am travelling these days a lot, and the time is totally occupied by my project work. I cannot immediately answer your question without referring to the manual. The best thing you can do to get a good answer is to put up the question in . I this time if I can get through the manual then I will definitely comment here about it. Your explanation on ID3 information is excellent. Nice to know that you liked it. Thanks for stopping by! i spend days on the web trying to understand this. you saved me bro. u r really clever. thumbs up Thank you very much. It took a lot of time for me to understand and write this article :D . Thanks very much for writing this detailed explanation, you really helped me to understand and the structure of ID3 tags, I have been struggling how to read and edit these tags. thax a lot.
https://phoxis.org/2010/05/08/what-are-id3-tags-all-about/
CC-MAIN-2017-26
refinedweb
5,416
73.17
Bare-metal mbed; KLBasic is up and running! (Last updated 23 Jan 2012) 23 Jan 2012 -- You can now download hex files for external code modules, such as support libraries or standalone programs, into KLB via the console. This means you can write dedicated modules in C or assembly, download them into KLBasic, then run them from the interpreter or from a stored Basic program on autostart. See below for details on using the hex download feature. See below for details on writing custom external control modules (ECMs). I now have my KLBasic modified to run on the mbed (LPC1768) device. KLBasic is a tokenized interpreter built upon work done earlier by Gordon Doughman of Motorola for the 68hc11. Gordon released the code (in 68hc11 assembler) and I recoded it in C so it is (mostly) platform independent. For more info on KLB in general, go back to my main page and check out the link for KLBasic on the Atmel devices. In the mbed implementation, KLB is a 32-bit integer Basic that reduces your source lines into tokens, then runs the tokenized program through an interpreter. The source is fully tokenized, including any whitespace. When you list out your program, the KLB runtime simply re-expands the tokenized file. Your original source is not stored as ASCII text; it is always rederived from this list of tokens. Here is a list of KLBasic features that are derived from Gordon's original design: 32-bit integer math Variable names up to 14 chars long (Gordon originally used single-letter variables) Single-dimension arrays using DIM statement Ability to save up to three programs locally in LPC1768 on-chip flash Ability to autostart any program saved in flash upon reset (Gordon originally saved to EEPROM) Trace support (TRON and TROFF) Direct access to select LPC1768 registers (GPIO, A/D, RTC, etc.) Ability to execute C or assembly language executables via CALL statement Here is a list of additional features that I've added to KLBasic: 8-, 16-, and 32-bit indirection operators (replace PEEK and POKE) Support for four down-counting timers with 1 msec resolution Ability to download hex files into upper flash for later execution via CALL statement Time/date support via LPC1678's real-time clock subsystem (requires 3V battery at pin VB) Direct printing of time and date in different formats Here is a list of features on my to-do list: Add string support (but you can print literal strings from a PRINT statement) Add multidimension arrays Add SD card file support KLBasic is intended for controller-type embedded applications. Think home automation, greenhouse control, alarm systems, temperature control, model railroading, hobby racing monitors, that kind of thing. Known issues The per-line internal tokenizing logic is not robust; it is possible for the tokenizer to walk off the edge of the token buffer. I need to fix the logic, but for now the buffer will hold 128 tokens, which should be plenty for typical source lines. Some examples Since examples are always the best way to show off stuff, here is a KLBasic program for blinking the mbed LEDs: 100 ' Program to blink a few LEDs 110 ' 200 dim leds(4) 210 dim states(4) 220 dim timers(4) 300 leds(1) = 2^18 310 states(1) = 0 320 timers(1) = addr(timer0) 350 leds(2) = 2^20 360 states(2) = 0 370 timers(2) = addr(timer1) 400 leds(3) = 2^21 410 states(3) = 0 420 timers(3) = addr(timer2) 450 leds(4) = 2^23 460 states(4) = 0 470 timers(4) = addr(timer3) 800 while 1 = 1 820 for n = 1 to 4 840 if @32 timers(n) = 0 then gosub 2000 860 next n 880 endwh Lines 200 - 220 create arrays for holding information on the LEDs. Each LED is assigned a state (1 or 0), a timer (timer0 through timer3), and a mask for setting/clearing the LED output bit. The masks all correspond to bits associated with LED1 through LED4 on the mbed board. Line 300, for example, creates a mask for LED1 by setting bit 18 in the array element leds(1). Lines 300 - 470 fill the arrays with information about the LEDs. In particular, lines 320, 370, 420, and 470 each write the address of a timer into an array element. These elements will later serve as pointers to a down-counting timer. Lines 800 - 880 are the main loop. This loop cycles through all four LEDs, checking to see if the associated timer has reached 0. When a down-counting timer hits 0, it stops decrementing, which means the original delay has elapsed. Line 840 shows how to use the 32-bit indirection operator to access the value pointed to by the contents of timers(n). The @32 operator tells KLB that the value in timers(n) is not the value to test, but a pointer to the value to test. Similar operators exist for 8-bit and 16-bit pointers. Lines 2000 - 2190 are the subroutine that changes the state of an LED and rearms the associated timer with a random number. Line 2120 shows how to use the @32 operator in the assignment side of an expression. Again, variable P is used as a pointer to the value to be changed. Since P holds the address of a down-counting timer, line 2120 assigns a random number to that timer, not to P. To load this program, hook the mbed's UART0 up to a terminal program (38400, 8N1). Type in the above program. Type in the command RUN to see the show. (KLBasic commands and variable names are case-insensitive; foo, FOO, and FoO are all the same). To save this program to a flash file, type in the command SAVE FL0. This will write the program to flash file 0; there are a total of three flash files (fl0 through fl2). To load a program from a flash file, type in the command LOAD FLn, where n is 0 - 2. After the above save, you would reload the program by entering LOAD FL0. To run the program automatically on reset, type in the command AUTOST FL0. On the next reset, the mbed will copy the program from flash file fl0 to RAM and begin execution. KLBasic is an interpreter, so it isn't going to match the speed of compiled C. However, it runs fast enough to do a lot of general-purpose programs. For an example of its speed as an interpreter, the following program: 100 timer0 = 1000 110 n = 0 200 while timer0 <> 0 210 n = n + 1 300 endwh 400 ?n reports a value of N of 20970, which indicates 21K loops per second. Having a Basic interpreter on-board comes in really handy when you are adding hardware to your mbed. For example, the following program will monitor the value of AD0 in real-time, so you can tweak your voltage settings or whatever: 100 while 1 = 1 110 timer0 = 250 : while timer0 > 0 : endwh 120 print chr$(13); (ad0 / 16) and $fff; 140 endwh This program is an endless loop. To end the program after it starts running, just enter Ctrl-C in your terminal program. KLBasic does not support QuickBasic's EXPLICIT command. If you type a name that might be construed as a variable, that variable is created for you. This can lead to problems if you make a mistake when typing a variable's name; you will get two different variables with similar names. To help catch these situations, KLBasic supports variants of the LIST command. Here are some examples, using the blinky program above: >list 2000 2000 ' >list 2000 - >list vars List of variables -- leds() states() timers() n p >list ports List of ports -- timer0 timer1 timer2 timer3 pconp pclksel0 pclksel1 pinsel10 pinsel0 pinsel1 pinsel2 pinsel3 pinsel4 pinsel5 pinsel6 pinsel7 pinsel8 pinsel9 pinmode0 pinmode1 pinmode2 pinmode3 pinmode4 pinmode5 pinmode6 pinmode7 pinmode8 pinmode9 pwm1ir pwm1tcr pwm1tc pwm1tpr pwm1tpc pwm1mcr pwm1mr0 pwm1mr1 pwm1mr2 pwm1mr3 pwm1mr4 pwm1mr5 pwm1mr6 pwm1ccr pwm1cr0 pwm1cr1 pwm1cr2 pwm1cr3 pwm1pcr pwm1ler pwm1ctcr gpio0_dir gpio1_dir gpio2_dir gpio3_dir gpio0_set gpio1_set gpio2_set gpio3_set gpio0_clr gpio1_clr gpio2_clr gpio3_clr gpio0_pin gpio1_pin gpio2_pin gpio3_pin adcr adgdr adstat ad0 ad1 ad2 ad3 ad4 ad5 ad6 ad7 uptime breakflag vectortable As you can see, the list of ports known to KLBasic is pretty limited right now. That will change in the future as I expand the device table. The UPTIME port is not really an LPC1768 port. It is the number of milliseconds that the mbed has been running since the latest reset. This value is updated each tic in the background. The BREAKFLAG port is a U8 variable that is FALSE until the user presses ctrl-C to break a program during execution. The VECTORTABLE port is the address of the RAM vector table used by KLBasic. The original vectors at the start of flash are copied to this address before KLBasic starts up. Other programs, either KLB itself or an ECM, can modify the contents of this vector table and thus take control of selected interrupts. Obviously, use this carefully; if you autosave a program that corrupts the vector table, you will have to reload KLBasic to recover. KLBasic is an interactive program. If you suddenly have to know what is 3 raised to a random power between 4 and 8, type: ? 3^(4+rnd(4)) KLBasic does not provide a WYSIWYG editor. Instead, it uses the original line-number based model. If you need to change a line, reenter that line. This becomes less of a hassle with a good terminal program. TeraTerm, for example, has an excellent copy-and-paste facility that makes it easy to edit code and to save full programs to a local text file for later retransmission to the mbed. Using KLBasic on your mbed You need to have a serial connection between the mbed's UART0 and your PC. This is usually done through the USB port. For Windows boxes, open your terminal program setup menu and select the COM port associated with your mbed's USB connection. Set the terminal program on the PC to 38400, 8N1. The LPC1768 does not have any byte-addressable on-chip non-volatile storage, such as EEPROM. It does, however, have several battery-backed registers in the RTC subsystem, so I borrowed one of those to hold the non-volatile autostart flag. This means that if you want to use the autostart feature, you will have to connect a battery, typically a CR2032 or other small 3V lithium cell, to your mbed device. Hook the positive terminal of the battery to pin 3 (VB) on the mbed, and hook the negative terminal of the battery to pin 1 (GND). You can run KLBasic without having the battery connected, but you won't be able to autostart a program following power-cycle. Note that installing a binary file in the mbed always erases the original flash contents. This means that if you save some KLB programs to flash, then decide to load a new binary onto your mbed, you will lose all of your saved flash files. I have plans to add support for a serial EEPROM for saving files, but for now just be aware of this issue. Note that there is currently no mechanism for defeating the autostart outside of KLBasic. This means that if you have tagged a program for autostart and there is a serious flaw in that program, you will have to use a Ctrl-C from a terminal in order to break the program after it starts, then turn off the autostart feature. This is not usually a problem during development, since you usually have the mbed hooked to a terminal. Flash files are saved to upper flash, one 32 KB sector per file. FL0 is written to sector 24 (0x50000), FL1 is written to sector 25 (0x58000), and FL2 is written to sector 26 (0x60000). To install KLBasic, just drop the basicmbed.bin file into the folder of your mbed device on your computer's desktop. KLBasic is written in ANSI C and weighs in at about 55 KB (version 0.2). Once installed, hook up your terminal program (I use TeraTerm Pro but any good term program should work) and reset your mbed. You should see the KLBasic signon: KLBasic for mbed (LPC1768) v0.2 KLBasic core v0.7 Core written by Karl Lunt, based on Gordon Doughman's BASIC11 READY > You can download the mbed version of KLBasic here . KLBasic is a work in progress. I have really enjoyed getting this running on a 100 MHz mbed. I will continue adding features and fixing bugs, to make this program even better. Please drop me an email if you have comments or features you would like to see. Using hex files with KLBasic KLB lets you download hex files from the console and store them in flash sector 28 (0x70000 - 0x77fff). Because this address range is covered by a single sector, each time you download a hex file the previous contents of that sector are erased and lost. The hex download must be started from the command prompt; it cannot be started from within a KLB program. Here is a typical download sequence: KLBasic for mbed (LPC1768) v0.2 KLBasic core v0.7 Core written by Karl Lunt, based on Gordon Doughman's BASIC11 READY >load hex Ready to begin loading hex file from console. Hex file must exist entirely in sector 28 ($70000 to $77FFF). If you need to abort the hex load, type Cntrl-C on console. Begin transfer now... :0200000270008C :1000000004E0000000F034B800F046B82DE91F000D :100010004FF000000C490D4A914206D0A2F10102B6 :100020009142086001F10401FAD3094809490A4ADA :10003000002A05D010F8014B01F8014B013AF9D123 :100040001FBC00F00BB800000000082004000820CE :100050008C010700000008200000000080B483B07D :1000600000AF7860396007F10C07BD4680BC70476F :1000700080B483B000AF786039603B68012B04D155 :100080007B681B681A46034B1A6007F10C07BD46D4 :1000900080BC70470000082080B487B000AF786053 :1000A0003960384B1B68002B64D0374B4FF4340257 :1000B0001A60354B4FF43402DA613B68032B5BD195 :1000C0007B681B683B614FF000037B617B6803F139 :1000D00004031B6803F00103DBB2002B03D07B6930 :1000E00043F480237B617B6803F104031B6803F006 :1000F0000203002B03D07B6943F480137B617B6890 :1001000003F104031B6803F00403002B03D07B6995 :1001100043F400137B617B6803F104031B6803F065 :100120000803002B03D07B6943F400037B617B68E9 :1001300003F108031B68FB6000E000BF124B7A6903 :100140009A613B69FA681A6000BF3B691B68002B23 :10015000FBD10D4B7A69DA613B69FA681A6000BF1E :100160003B691B68002BFBD1064B1B681B78002BDF :10017000E3D002E000BF00E000BF07F11C07BD466E :0C01800080BC70470000082020C009204F :040000037000000089 :00000001FF Wrote total of 396 bytes. > After the 'load hex' command, KLB prompts you to start the file load. At this point, use your terminal program's ability to send a file as ASCII text to send the selected .hex file. I usually use a delay at the end of each line of 25 msecs to get reliable transfers, YMMV. The above download leaves the hex file installed in flash, starting at 0x70000. The program will remain until overwritten, either by another hex load or by reflashing the device through another USB download.. You can use the CALL statement to execute a program in this flash area. For example, assume your program has a starting address of 0x70000. To invoke this program, enter: call $70000 Notice that KLB uses '$' instead of '0x' to denote hexadecimal constants. Using External Code Modules (ECM)s "External Code Module" means an executable that was installed on the mbed external to KLBasic, usually through a hex load operation from the console. Creating such a module is straightforward and can be done in C or assembly language, or any other language that supports the GCC C-routine model. An ECM consists of these elements: One or more C source files comprising the executable, a custom startup file (usually assembly), a custom linker script, and a makefile. The following example blinks the four LEDs on the mbed, and is about the most convoluted way you could blink LEDs. :-) First up is the C source file: #include <stdio.h> #include "LPC17xx.h" static uint8_t *breakflag; /* * Initialize low-level module initialization * * Note that you MUST call this routine before invoking any other * elements of the module! Even though there is no code in this * function, calling this routine (the first entry in the module's * jump table) forces the initialization of the module's .data sections. * If you don't invoke this function, variables won't be zeroed or * initialized! */ void Initialize(uint32_t *args, uint32_t nargs) { } /* * GetBreakflag register a pointer to the breakflag * * This routine takes a single argument. args[0] must hold * a pointer to the KLBasic break flag. This module will * later test the contents of this break flag to determine * if the user has entered a ctrl-C at the console to halt * the program. */ void GetBreakflag(uint32_t *args, uint32_t nargs) { if (nargs == 1) breakflag = (uint8_t *)args[0]; } /* * Blinky blink an LED the convoluted way * * Upon entry, nargs must be 3. * args[0] must hold the address of a down-counting timer * for use by this routine. * args[1] must hold a mask of LEDs to blink; bit 0 set * means blink LED1, etc. * args[2] holds the delay, in msecs, for each LED state * change. * * This routine does not return until the user hits a ctrl-C * on the console. * * If you didn't invoke GetBreakflag() to register the address * of the break flag, this routine exits immediately. */ void Blinky(uint32_t *args, uint32_t nargs) { uint32_t delay; uint32_t mask; uint32_t *timer; /* * If the breakflag pointer is empty, the user has no way * to break out, short of a reset. Let's be nice and just * refuse to run until the breakflag pointer is at least * not NULL. */ if (breakflag == 0) return; /* * Turn off all mbed LEDs so we know we got this far. */ LPC_GPIO1->FIODIR = ((1<<18) | (1<<20) | (1<<21) | (1<<23)); LPC_GPIO1->FIOCLR = ((1<<18) | (1<<20) | (1<<21) | (1<<23)); if (nargs != 3) return; /* * Record the timer we get to use. */ timer = (uint32_t *)args[0]; /* * Translate the mask arg into a mask suitable for controlling * the mbed LEDs. */ mask = 0; if (args[1] & (1<<0)) mask = mask | (1<<18); if (args[1] & (1<<1)) mask = mask | (1<<20); if (args[1] & (1<<2)) mask = mask | (1<<21); if (args[1] & (1<<3)) mask = mask | (1<<23); /* * Record the delay. */ delay = args[2]; /* * Start the loop that blinks the LEDs. Use the timer * to pace the LED changes. */ while (1) { LPC_GPIO1->FIOSET = mask; *timer = delay; while (*timer) ; LPC_GPIO1->FIOCLR = mask; *timer = delay; while (*timer) ; /* * If the break flag is not zero, the user wants to quit * the blinky program. */ if (*breakflag) return; } } This file provides three routines. Initialize() actually does nothing. It exists as a jump target for the startup module (see below). I could have collapsed all three of these routines into a single function, but left them this way so you can see the mechanics of how ECMs are accessed by KLBasic. The GetBreakflag() routine gives KLB a hook to pass in the address of the KLBasic breakflag variable. This is a system variable that is normally FALSE but will go TRUE if the user enters a ctrl-C from the console. Passing the address of this variable to the ECM allows a function in the ECM to test the value of the breakflag and take action if the user wants to break out. The Blinky() routine accepts three arguments when invoked. The first argument is the address of one of the four KLBasic down-counting timers. Blinky will use this timer to pace the blink-rate of the LEDs. The second argument is a mask (bits 0-3) that define the LEDs to blink. The third argument is the amount of time (in msecs) between state changes on the LEDs. Notice how arguments are passed into each of these functions when invoked. All three functions, and all functions in any ECM, are defined to accept two arguments. Much like the traditional definition of main(), these functions expect a pointer to an array of 32-bit values (args) and the number of elements in the args[] array (nargs). The actual contents of the args array is up to you; you control how the args list is processed in your ECM function. Next up is the ECM startup file. Here is the startup file for the blinky ECM: /* * ECM_startup.s generic startup file for use with ECMs * (External Code Modules) * * This is a generic LPC17xx startup script, suitable for use with * the CodeSourcery Lite gcc toolset. * * However, this startup code is not intended to act as a true * starting point following a cold boot. Instead, this code is * intended to act as gateway to funtions that can be invoked * by other programs. These function can be invoked by * executing jumps into a collection of jump vectors at the * start of this module. * * This code is based on several examples I found on the web, mashed * together to do what I want it to do. My thanks to the various * authors for their contributions. */ .syntax unified .thumb .section ".jump_vector_table" .global __jump_vector_table /* * Unlike a traditional startup file, this file is NOT intended to * occupy the vector table area, and in fact has no traditional * vectors or stack pointer initialization value. * * Note that ECMs usually use the stack pointer that is passed to them * when a function is invoked from outside; there isn't really any need for * a stack for ECMs. However, if you decide you want to have an ECM- * specific stack, use a linker script that includes support for it. Note that * your ECM module will have to modify the stack pointer itself; there * are no provisions for setting up the stack as part of an ECM's make. */ /* * This jump vector table provides target jump points for other * programs to "call" modules within this project. * * For every entry in this jump vector table, you must provide an * identically named C function in your ECM module. You do not need * to have a function named main(), though you are free to use that * name if you like. */ .balign 4 __jump_vector_table: b _Private_Initialize @ special case, need to set up C vars first! .balign 4 @ important! this forces a 4-byte entry for the first label b GetBreakflag b Blinky /* * Actual code. */ .thumb_func .global _Private_Initialize @ make it visible in the map file .global GetBreakflag .global Blinky /* * Control jumps to here when the first entry in the jump table * is invoked. Unlike other jump table entries, the first entry * performs the housekeeping functions normally done by the C * startup code following reset. * * If your calling program does not first invoke this vector, * your variables will NOT be initialized! */ _Private_Initialize: stmdb sp!, {r0-r4} @ save the critical regs /* * Clear the BSS section */ mov r0, #0 ldr r1, = _start_bss ldr r2, = _end_bss cmp r1, r2 beq _done_clear sub r2, #1 _clear: cmp r1, r2 str r0, [r1, #0] add r1, #4 blo _clear _done_clear: /* * Copy data from flash initialization area to RAM * * The three values seen here are supplied by the linker script */ ldr r0, =_start_data_flash /* initial values, found in flash */ ldr r1, =_start_data /* target locations in RAM to write */ ldr r2, =_data_size /* number of bytes to write */ /* * Perform the copy. * Handle the special case where _data_size == 0 */ cmp r2, #0 beq done_copy copy: ldrb r4, [r0], #1 strb r4, [r1], #1 subs r2, r2, #1 bne copy done_copy: /* * Done with copy, restore critical regs and jump to initializer in C. */ ldmia sp!, {r0-r4} b Initialize .end At first glance, this looks very much like a traditional bare-metal startup file. However, the vector table at the beginning of the file has been modified slightly to act as a jump-vector table, not an interrupt vector table. In this case, the only entries in the table are pointers to the functions within your ECM that you want to provide access to in KLBasic. Here, the second and third entries provide access to GetBreakflag() and Blinky(), respectively. The first entry in the vector table, however, is different. It is a short jump to initialization code in the file, which in turn initializes variables used by the ECM. Note that only the first entry in the table provides this initialization. Therefore, it is important that when you use this ECM, you invoke the first vector before invoking any others. Otherwise, the other functions will not have their variables initialized and will surely fail. After the first vector finishes initializing the ECM's variables, it passes control to the function named in the branch at the bottom of the RAM copy loop (below label done_copy), which in this case means invoking function Initialize(). In the example C file above, the function Initialize() didn't do anything, but it serves as a jump target for this first vector. Once invoked, Initialize() simply returns back to KLBasic. The next piece of the puzzle is the linker script. Here is the script for the blinky ECM: /* Adapted for CortexM3 LPC1768, originally based on LPC21xx and LPC22xx User * Manual UM10144, pg. 15. */ OUTPUT_FORMAT("elf32-littlearm") OUTPUT_ARCH(arm) /* * The ECM module uses a small section of flash in the upper range of the * mbed's memory space. * * The ECM module uses the static RAM reserved for APB1 peripherals (16K). */ MEMORY { flash (rx) : ORIGIN = 0x00070000, LENGTH = 32K sram (rwx) : ORIGIN = 0x20080000, LENGTH = 16K } /* * Define the top our stack at the end of SRAM * * Note that ECMs usually use the stack pointer that is passed to them * when a function is invoked from outside; there isn't really any need for * a stack for ECMs. However, if you decide you want to have an ECM- * specific stack, this linker script includes support for it. Note that * your ECM program will have to modify the stack pointer itself; there * are no provisions for setting up the stack as part of an ECM's make. */ TOTAL_RESERVED_STACK = 8196; /* note that printf() and other stdio routines use 4K+ from stack! */ _end_stack = 0x20084000; EXTERN(__jump_vector_table); SECTIONS { .text : { CREATE_OBJECT_SYMBOLS /* Insert the jump vector table first */ __jump_vector_table = .; *(.jump_vector_table) /* Rest of the code (C) */ *(.text) *(.text.*) *(.glue_7) *(.glue_7t) /* Added following section for holding initializers for variables */ /* found in RAM. /* The _data_size value will be used in the startup code to step through the image of data in flash and copy it to RAM. */ . = ALIGN(4); /* _start_data_flash = .; */ *(.rodata) *(.rodata*) *(.init) /* added */ *(.fini) /* added */ . = ALIGN(4); _end_data_flash = .; } >flash /* From generic.ld, supplied by CodeSourcery */ /* .ARM.exidx is sorted, so has to go in its own output section. */ PROVIDE_HIDDEN (__exidx_start = .); .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } >sram PROVIDE_HIDDEN (__exidx_end = .); /* .data : AT (_end_data_flash) */ .data : { _start_data_flash = LOADADDR(.data); _start_data = .; *(.data) *(.data.*) *(.shdata) _end_data = .; } >sram AT>flash . = ALIGN(4); _data_size = _end_data - _start_data; .noinit : { *(.noinit) *(.noinit.*) } _start_bss = .; .bss : { *(.bss) *(.bss.*) *(COMMON) } >sram . = ALIGN(4); _end_bss = .; bss_size = _end_bss - _start_bss; /* Stack can grow down to here, right after data and bss sections in * SRAM */ _start_stack = _end_stack - TOTAL_RESERVED_STACK; _end_stack = _end_stack; /* just to make the map file easier to read */ /* Linker wants .eh_frame section defined because of gcc 4.4.X bug, * just discard it here. */ /DISCARD/ : { *(.eh_*) } } _end = .; PROVIDE(end = .); This linker script is pretty simple. It defines the memory available for the ECM, ensures that the jump vector table is placed at the beginning of available flash, then locates the rest of the program's sections. The combination of the linker script and the startup program above results in a block of jump vectors, each four bytes long, that starts at address $70000. How many vectors you provide depends solely on the needs of your ECM, though you must provide at least the first vector to ensure your ECM's variables are initialized. To build the ECM, run the associated makefile, shown here: # # Makefile for ECM_blinky.o # # You can run this makefile from the command line with: # # cs-make -f ECM_blinky.mak or # cs-make -f ECM_blinky.mak clean # # Make sure the CodeSourcery cs-make.exe is in your # execution path. # # Project Name PROJECT = ECM_blinky # List of the objects files to be compiled/assembled OBJECTS = $(PROJECT).o # This project does not create an executable, so it does # not use a linker script. LSCRIPT = # List of directories to be included during compilation INCDIRS = ..\include # List of additional object modules to link in. # Use this variable to point to prebuilt object modules # that exist outside of a library (such as the startup # code). ADDOBJS = OPTIMIZATION = 0 DEBUG = -g ASLISTING = -alhs LIBDIRS = LIBS = # Compiler Options GCFLAGS = -Wall -fno-common -mcpu=cortex-m3 -mthumb -O$(OPTIMIZATION) $(DEBUG) GCFLAGS += -I$(INCDIRS) #GCFLAGS += -Wcast-align -Wcast-qual -Wimplicit -Wpointer-arith -Wswitch #GCFLAGS += -Wredundant-decls -Wreturn-type -Wshadow -Wunused LDFLAGS = -mcpu=cortex-m3 -mthumb -O$(OPTIMIZATION) -nostartfiles -Wl,-Map=$(PROJECT).map -T$(LSCRIPT) LDFLAGS += -L$(LIBDIRS) LDFLAGS += -l$(LIBS) ASFLAGS = $(ASLISTING) -mcpu=cortex-m3 # Compiler/Assembler/Linker Paths GCC = arm-none-eabi-gcc AS = arm-none-eabi-as LD = arm-none-eabi-ld OBJCOPY = arm-none-eabi-objcopy REMOVE = rm -f SIZE = arm-none-eabi-size ######################################################################### all:: $(PROJECT).o clean: $(REMOVE) $(OBJECTS) $(REMOVE) *.lst ######################################################################### # Default rules to compile .c and .cpp file to .o # and assemble .s files to .o .c.o : $(GCC) $(GCFLAGS) -c $< .cpp.o : $(GCC) $(GCFLAGS) -c $< .s.o : $(AS) $(ASFLAGS) -o $@ $< > $(basename $@).lst ######################################################################### When this makefile completes, you will be left with a file named ECM_blinky.hex, which you can download into the mbed using KLBasic's 'load hex' command. Invoking the functions in this ECM consists of three calls into the vector table. Here is a KLB program for running this convoluted blinky program: 100 call $70000 120 call $70004, addr(breakflag) 140 call $70008, addr(timer0), 10, 250 Line 100 initializes the ECM variables, line 120 passes in the address of the breakflag, and line 140 defines the associated timer, the mask, and the delay, then starts blinking the LEDs. Inside KLBasic, the code for the CALL statement places the arguments (maximum of four) into an args[] array, then passes the address of this array and the number of elements in it (nargs) to the target vector. To halt the program and return to KLBasic's prompt, hit ctrl-C on the terminal's keyboard. Here is a .zip archive of the full ECM_blinky project.
http://www.seanet.com/~karllunt/mbedklbasic.html
CC-MAIN-2017-13
refinedweb
4,913
69.82
Consider the following code: public class MyAttribute : Attribute { public MyAttribute(UnmanagedType foo) { } public int Bar { get; set; } } [StructLayout(LayoutKind.Sequential)] public struct Test { [CLSCompliant(false)] [MyAttribute(UnmanagedType.ByValArray, Bar = 4)] [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] public ushort[] ArrayShorts; } class Program { static void Main(string[] args) { FieldInfo field_info = typeof(Test).GetField(“ArrayShorts”); object[] custom_attributes = field_info.GetCustomAttributes(typeof(MarshalAsAttribute), false); Debug.WriteLine(“Attributes: “ + custom_attributes.Length.ToString()); custom_attributes = field_info.GetCustomAttributes(typeof(MyAttribute), false); Debug.WriteLine(“Attributes: “ + custom_attributes.Length.ToString()); custom_attributes = field_info.GetCustomAttributes(typeof(CLSCompliantAttribute), false); Debug.WriteLine(“Attributes: “ + custom_attributes.Length.ToString()); } } The code defines a custom attribute, then defices a struct that uses that attribute, along with some BCL-provided attributes. It then uses reflection to get back those attributes. What would you expect the output to be? Probably this: Attributes: 1 Attributes: 1 Attributes: 1 And if you run it against the full framework, that’s exactly what you get. But if you run it against the Compact framework, you instead get this: Attributes: 0 Attributes: 1 Attributes: 1 Yes, you’re seeing it correctly, the MarshalAsAttribute doesn’t show up. So immediately I call this a bug since the frameworks differ and there is no documentation that says they should (even if it were documented I’d call it a bug). So I did a little asking around and a little research. It turns out that there is a difference between MarshalAs and the other attributes. According to the ECMA-335 spec for the Common Language Interface, section 21.2 genuine custom attributes, and pseudo custom attributes.genuine custom attributes, and pseudo custom attributes. There are two kinds of custom attributes, called Custom attributes and pseudo custom attributes are treated differently, at the time they are defined, as follows: A custom attribute is stored directly into the metadata; the‘blob’ which holds its defining data is stored as-is. That ‘blob’ can be retrieved later. A pseudo custom attribute is recognized because its name is one of a short list. Rather than store its ‘blob’ directly in metadata, that ‘blob’ is parsed, and the information it contains is used to set bits and/or fields within metadata tables. The ‘blob’ is then discarded; it cannot be retrieved later. The spec goes on to say that the MarshalAsAttribute is a pseudo custom attribute, so it would fall into the second bullet above. If you re-read the last sentence in that bullet you’ll see that “it cannot be retrieved later.” So, in fact, it seems like the full framework is the one in error here, at least per the spec! This attribute should not be readable at all at run time. Now why the authors of the spec would have made such a strange exception is beyond me. THe method in the language is clearly called GetCustomattributes, not “GetOnlySomeCustomAttributes” and this is the first I’ve even heard of pseudo cutom attributes, which tells me it’s not well documented and likely not well known. So while the Compact Framework follows the spec to the letter, if you go with reasonable expectations as a guideline, I’m going to have to say that it’s behavior is wrong and that the spec is incorrect and needs revising. Gotta love stuff like this. It’s interesting how roads like this lead to finding out really odd stuff about your platform. I remember a while back having an issue with Vista, DateTimePickers and Toolstrips, that only taught me how much of a hack punching the datetimepicker overlay onto the screen appears to be. Thanks for the write-up on this one, cool read.
http://blog.opennetcf.com/2009/08/14/not-all-custom-attributes-are-created-equal/
CC-MAIN-2016-18
refinedweb
600
56.45
Extracting Data Subsets and Design By Composition Extracting Data Subsets and Design By Composition What's essential here is design by composition, and decomposition to make that possible. And changing the features is a matter of changing the combination of functions. Join the DZone community and get the full member experience.Join For Free The request was murky. It evolved over time to this: Create a function file_record_selection(train.csv, 2, 100, train_2_100.csv) First parameter: input file name (train.csv) Second parameter: first record to include (2) Third parameter: last record to include (100) Fourth parameter: output file name (train_2_100.csv) Fundamentally, this is a bad way to think about things. I want to cover some superficial problems first, though. First superficial dig: It evolved to this. In fairness to people without a technical background, getting to tight, implementable requirements are is difficult. Sadly, the first hand-waving garbage was from a DBA. It evolved to this. The early drafts made no sense. Second superficial whining: The specification — as written — is extraordinarily shabby. This seems to be written by someone who's never read a function definition in the Python documentation before. Something I know is not the case. How can someone who is marginally able to code also unable to write a description of a function? In this case, the "marginally able to code" may be a hint that some folks struggle with abstraction: the world is a lot of unique details; patterns don't emerge from related details. Third: Starting from record 2, it seems to show that they don't get the idea that indexes start with zero. They've seen Python. They've written code. They've posted code to the web for comments. And they are still baffled by the start value of indices. Let's move on to the more interesting topic, functional composition. Functional Composition The actual data file is a .GZ archive. So there's a tiny problem with looking at .CSV extracts from the GZIP. Specifically, we're exploding a file all over the hard drive for no real benefit. It's often faster to read the zipped file: It may involve fewer physical I/O operations. The .GZ is small; the computation overhead to decompress may be less than the time waiting for I/O. To get to functional composition we have to start by decomposing the problem. Then we can build the solution from the pieces. To do this, we'll borrow the interface segregation (ISP) design principle from OO programming. Here's an application of ISP: avoid persistence. It's easier to add persistence than to remove it. This leads peeling off three further tiers of file processing: physical format, logical layout, and essential entities. We shouldn't write a .CSV file unless it's somehow required — for example, if there are multiple clients for a subset. In this case, the problem domain is exploratory data analysis (EDA) and saving .CSV subsets is unlikely to be helpful. The principle still applies: Don't start with persistence in mind. What are the essential entities? This leads away from trying to work with filenames, also. It's better to work with files. And we shouldn't work with file names as strings — we should use pathlib.Path. All consequences of peeling off layers from the interfaces. Replacing names with files means the overall function is really this: a composition. file_record_selection = (lambda source, start, stop, target: file_write(target, file_read_selection(source, start, stop)) ) We applied the ISP again to avoid opening a named .CSV file. We can work with open file-like objects instead of file names. This doesn't change the overall form of the functions, but it changes the types. Here are the two functions that are part of the composition: from typing import * import typing Record = Any def file_write(target: typing.TextIO, records: Iterable[Record]): pass def file_read_selection(source: csv.DictReader, start: int, stop: int) -> Iterable[Record]: pass We've left the record type unspecified, mostly because we don't know what it just yet. The definition of "record" reflects the essential entities, and we'll defer that decision until later. CSV readers can produce either dictionaries or lists, so it's not a complex decision, but we can defer it. The .GZ processing defines the physical format. The content which was zipped was a .CSV file, which defines the logical layout. Separating physical format, logical layout, and essential entity, gets us code like the following: with gzip.open('file.gz') as source: reader = csv.DictReader(source) # Iterator[Record] for line in file_read_selection(reader, start, stop): print(line) We've opened the .GZ for reading, wrapped a CSV parser around that, and wrapped our selection filter around that. We didn't write the CSV output because, actually, that's not required. The core requirement was to examine the input. We can, if we want, provide two variations of the file_write() function and use a composition like the file_record_selection() function with the write-to-a-file and print-to-the-console variants. Pragmatically, the print-to-the-console is all we really need. In the above example, the record type can be formalized as List[Text]. If we want to use csv.DictReader instead, then the record type becomes Dict[Text, Text]. Further Decomposition There's a further level of decomposition: the essential design pattern is pagination. In Python parlance, it's a slice operation. We could use itertools to replace the entirety of file_read_selection() with itertools.takewhile() and itertools.dropwhile(). The problem with these methods is they don't short-circuit — they read the entire file. In this instance, it's helpful to have something like this for paginating an iterable with a start and stop value. for n, r in enumerate(reader): if n < start: continue if n = stop: break yield r This covers the bases with a short-circuit design that saves a little bit of time when looking at the first few records of a file. It's not great for looking at the last few records, however. Currently, the "tail" use case doesn't seem to be relevant. If it was, we might want to create an index of the line offsets to allow arbitrary access or use a simple buffer of the required size. If we were really ambitious, we'd use the slice class definition to make it easy to specify start, stop, and step values. This would allow us to pick every eighth item from the file without too much trouble. The slice class doesn't, however, support the selection of a randomized subset. What we really want is a paginator like this: def paginator(iterable, start: int, stop: int, selection: Callable[[int], bool]): for n, r in enumerate(iterable): if n < start: continue if n == stop: break if selection(n): yield r file_read_selection = lambda source, start, stop: paginator(source, start, stop, lambda n: True) file_read_slice = lambda source, start, stop, step: paginator(source, start, stop, lambda n: n%step == 0) The required file_read_selection() is built from smaller pieces. This function, in turn, is used to build file_read_selection() via functional composition. We can use this for randomized selection, also. Here are functions with type hints instead of lambdas. def file_read_selection(source: csv.DictReader, start: int, stop: int) -> Iterable[Record]: return paginator(source, start, stop, lambda n: True) def file_read_slice(source: csv.DictReader, start: int, stop: int, step: int) -> Iterable[Record]: return paginator(source, start, stop, lambda n: n%step == 0) Specifying the type for a generic iterable and the matching result iterable seems to require a type variable like this: T = TypeVar('T') def paginator(iterable: Iterable[T], ...) -> Iterable[T]: This type of hint suggests we can make wide reuse of this function. That's a pleasant side-effect of functional composition. Reuse can stem from stripping away the various interface details to decompose the problem to essential elements. Conclusion What's essential here is design by composition, and decomposition to make that possible. We got there by stepping away from file names to file objects. We segregated physical format and logical layout, also. Each application of the Interface Segregation Principle leads to further decomposition. We unbundled the pagination from the file I/O. We have a number of smaller functions. The original feature is built from a composition of functions. Each function can be comfortably tested as a separate unit. Each function can be reused. Changing the features is a matter of changing the combination of functions. This can mean adding new functions and creating new combinations. Published at DZone with permission of Steven Lott , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/extracting-data-subsets-and-design-by-composition
CC-MAIN-2020-24
refinedweb
1,461
58.79
In part 2 of this article, we are going to create a data driven web service that will return JSON and XML to the client, and then use jQuery to add a new item to the database and display it in our page. In part 1, we looked at creating simple web services and now we’re going to look at making something more practical and interesting. We’ll start from where we left off with the source code as it was at the end of the part 1 which you can download from here (restwebdemo_pt1) if you want to follow along. If not, the final source can be downloaded from here (restwebdemo_pt2). - First off, we’re going to change the entity manager that is available for injection to be request scoped. To do this, open up DataRepositoryProducer.javaand change the @ConversationScopedannotation on the getEntityManager()method to be @RequestScoped. The reason for this is documented here in A Little Less Conversation. - Next we are going to create a simple dao for Course objects, and the only reason to do this is to demonstrate the integration of CDI and the ability to layer your code. Create a new class called CourseDaowith the following code. package org.fluttercode.restwebdemo.bean; @Stateless @LocalBean public class CourseDao { @Inject @DataRepository private EntityManager entityManager; public void save(Course course) { entityManager.persist(course); } public Course update(Course course) { return entityManager.merge(course); } public Course find(Long id) { return entityManager.find(Course.class, id); } } This just injects an entityManager and uses it to locate, save and update Course objects. - Now create a new CourseServicebean that will handle the web services. To start with we want to make it a stateless EJB and inject the course Dao. We are going start by re-implementing the method to return the course name for the given id. @Path("courseName/{id}") @GET public String getCourseName(@PathParam("id") Long id) { Course course = courseDao.find(id); if (course == null) { return "Course not found"; } else { return course.getTitle(); } } To see this method in action, deploy the application and go to. Now we know everything is working and hooked up together, we can look at adding some new functionality. Let’s start by returning a course with a given id from the service. This is fairly simple given what we already know. The only thing to determine now is what format to return the object as and to convert it to that type. Luckily, Java EE already provides JAXB which can take an object graph and convert it to XML for us as long as we annotate the classes with the annotations to let the JAXB implementation know how to convert it. The same annotations can be used by the body writer that handles JSON. First we’ll annotate the Course class and make a couple of changes that we need to. Next we’ll create methods to return a Cource object from the service in XML or JSON format. - Open the Courseclass and add the following annotations to the class. @Entity @XmlRootElement @XmlAccessorType(XmlAccessType.FIELD) public class Course extends BaseEntity { ... ... } This tells the JAXB processor that this class can be serialized and to access the values using fields in the class. This also means that any other annotations we want to add to control the serialization needs to be applied to the fields. - Now in our CourseServiceclass we will create methods to return the course entity and we will create one for JSON and one for XML. @Path("find/{id}/xml") @GET @Produces(MediaType.APPLICATION_XML) public Course getCourseAsXml(@PathParam("id") Long id) { return courseDao.find(id); } @Path("find/{id}/json") @GET @Produces(MediaType.APPLICATION_JSON) public Course getCourseAsJson(@PathParam("id") Long id) { return courseDao.find(id); } This is a simple example, so we don’t want to serialize the teacher or enrolled properties which we can do by marking them with the @XmlTransient attributes. Also, remove the @NotNull annotation from the teacher attribute as we will need it blank later. The following code shows the fields with both the JAXB and JPA annotations. JAXB (like JPA) uses default conventions for fields that don’t have annotations : @Column(length = 32, nullable = false) @Size(max = 32) @NotEmpty(message = "title is required") private String title; @Column(length = 8, nullable = false) @Size(max = 8 ) @NotEmpty(message = "code is required") private String code; @ManyToOne(fetch = FetchType.LAZY) @XmlTransient private Teacher teacher; @ManyToMany(mappedBy = "enrolled") @XmlTransient private List<Student> students = new ArrayList<Student>(); We are just using a simple JAXB model for the sake of the example which is why we aren’t including the Teacher and Student classes. (note : At this point, I had to switch to using Hibernate as the JPA provider since JAXB didn’t like the interface EclipseLink used for proxying the properties. You can do this using the Glassfish update tool). If you redeploy the application and browse to you should be prompted to save a file, or it will display the text, but the content should be something like : {"createdOn":"2010-08-27T16:36:57.015-04:00", "id":"1", "modifiedOn":"2010-08-27T16:36:57.015-04:00", "title":"Computing for Beginners", "code":"CS101"} or if you go to you will get an XML version : <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <course> <createdOn>2010-08-27T16:36:57.015-04:00</createdOn> <id>1</id> <modifiedOn>2010-08-27T16:36:57.015-04:00</modifiedOn> <title>Computing for Beginners</title> <code>CS101</code> </course> Now we can grab objects from our web service, we should look at creating objects from the service. We add a new method that takes the title and code values, creates a new Course with those values and saves it using the courseDao. @Path("create") @PUT @Produces(MediaType.APPLICATION_JSON) public Course createCourse(@FormParam("title") String title,@FormParam("code") String code) { Course course = new Course(); course.setTitle(title); course.setCode(code); courseDao.save(course); return course; } Here I’ve used the FormParam annotations to plug form values into the method call. You’ll notice that using REST conventions, the method to create a course uses the PUT type of request. Now let’s create a page to enter a title and code and create the course. Notice that our method returns the created course so we can return the course back to the user. This is probably not ideal, but suits for the purposes of demonstration. Now lets create a new HTML page to allow for data entry and calling the web service to create the Course. <html> <head> <title>Insert title here</title> <script type="text/javascript" src=""> </script> </head> <body> <div id="message" style="display: none; background: #d0d0f0; padding: 12px">Message Div</div> <form action="rest/course/create" method="POST"> <fieldset> <legend>Create Course</legend> <p> Title<br /> <input id="title" /><br /> </p> <p> Code<br /> <input id="code" /><br /> </p> <input type="submit" id="submit" /> </fieldset> </form> </body> <script type="text/javascript"> //jquery pieces $(document).ready(function() { //change the submit button behaviouus. $('#submit').click(function () { var title = $("input#title").val(); var code = $("input#code").val(); params = "title="+title+"&code="+code; //alert("posting form : "+data); $.ajax({ type: "PUT", url: "rest/course/create", data: params, success: function(result) { showMessage("Created Course "+result.title+" with id "+result.id+" on "+result.createdOn); } }); return false; }); }); function showMessage(msg) { $('#message').html(msg); $('#message').fadeIn('fast'); $('#message').delay(3000).fadeOut('slow'); } </script> </html> This looks a lot code, but not really. We import jquery to help us post our form, and we create our form with the two fields. We use JQuery to add an event handler so when you click submit, it packages up the form, calls our web service with a PUT type of request and grabs the returned object as a JSON object, and displays a message using the values from the new instance obtained from the server. To verify that your course has been created, go to the front page and you should see it listed. That about wraps it up for this post, the source code can be downloaded from (restwebdemo_pt2), just unzip it, use mvn clean package and deploy the war to glassfish and use the URLs mentioned in the article.
http://www.andygibson.net/blog/tutorial/simple-restful-services-in-glassfish-pt-2/
CC-MAIN-2019-13
refinedweb
1,364
55.24
How to identify whether the publish topic publish (@client) def publish_it @client.publish('test/hai', 'message') # Ack the publish end @client.subscribe('test/#') @client.get do |topic,message| puts "#{topic}: #{message}" end There is no end to end (publisher to subscriber) delivery notification in MQTT. This is because as a pub/sub protocol there is no way for a publisher to know how many subscribers there are to a given topic, there could be anything from 0 to n. The QOS levels built into the spec ensure that messages are delivered from the publisher to the broker (and then from the broker to the subscribers). If you want to ensure a message is delivered then use either QOS level 1 or 2. QOS 1 will ensure a message is delivered at least once (possibly more if there are network problems) QOS 2 will ensure a message is delivered only once. In most of MQTT client libraries there is also the deliveryComplete callback which should be called once all the QOS handshake for a publish has been completed, if you add one of these you can be reasonably confident that the message has made it from the publisher as far as the broker. Unfortunately I can't see this implemented in the Ruby client
https://codedump.io/share/sg663dnjQAIL/1/acknowledgement-on-publish---mqtt
CC-MAIN-2018-17
refinedweb
213
56.89
I'm trying to time a while loop within a while loop, total time it takes to execute, and record the time it takes to do so, every time it loops. I need a way to achieve this using my code if possible, or open to different concepts I may not know of yet. import random import time import sys def main(): looperCPU = 500 start = time.time() while (looperCPU != 0): #start = time.time() #this is the computation for 3 secs time.sleep(3) random_number = random.randint(0,1000) #Send to printer for processing #.75 secs to 4.75 secs to generate a pause before printing secondsPause = random.uniform(.75,4.75) #This is the printer function printerLooper = True while printerLooper == True : print("Sleeping for ", secondsPause, " seconds") print(random_number) printerLooper = False # print("total time taken this loop: ", time.time() - start) looperCPU -= 1 main() When you set start outside your initial loop you are guaranteeing that you are getting the incorrect time it takes for the while loop to execute. It would be like saying: program_starts = time.time() while(True): now = time.time() print("It has been {0} seconds since the loop started".format(now - program_starts)) This is because start stays static for the entire execution time while your end time is steadily increasing with the passing of time. By placing start time within the loop you ensure that your start time is also increasing as the program runs. while (looperCPU != 0): start_time = time.time() # Do some stuff while printerLooper == True : print("Sleeping for ", secondsPause, " seconds") print(random_number) printerLooper = False end_time = time.time() print("total time taken this loop: ", end_time - start_time)
https://codedump.io/share/aJFrWQhQ02gl/1/time-a-while-loop-python
CC-MAIN-2016-50
refinedweb
269
67.65
Google's script editor giving run error when entering negative column/row offset values "The coordinates or dimensions of the range are invalid. (line 10, file "Code")" I am trying to set up a simple x,y style table lookup script in Google's script editor. The inline function couldn't quite give me the tools I needed to make this script work. I am in the very early steps of conceptualising my function, so it is extremely barebones while I figure out how to use this language. I just need a simple functionality to read the cells to the left of the active cell, it seems the offset function is my best bet. function getDistance(){ var ss = SpreadsheetApp.getActive(); var main = ss.getActiveSheet(); var distances = ss.getRange('Distances!A1:Z100'); var home = main.getRange(3, 3); //my attempt to initialize the variable to //get rid of error message var home = main.getActiveCell(); var cell1 = home.offset(0, -2); //this is where error occurs var cell2 = home.offset(0, -1); //if line above is set to positive, an error occurs here return distances.getValues(); } From what I could find in Google's API Library, the Range.offset() function does take negative values (other forum posts suggest this wasn't always the case). The offset is working, as I tested it in a spreadsheet. It correctly reads the cells to the left of the active cell. I'm just worried that, since this error won't go away, it won't detect any errors in the code past that point as the function is more developed. Is this a glitch or can I initialize the var home in a way that removes this error? See also questions close to this topic - Iterator Fault C++ when calling an auxiliary function I'm having a strange iterator fault when calling the function distance(Point, Point) from Main. Every point value is correctly stored though, I'm having issues calling distance function from main. What can be wrong in my code? Maybe is because distance, once called, is refering to something out of its scope? I attach main.cpp, triangle.h, triangle.cpp main.cpp #include <iostream> #include <cstdlib> #include "triangle.h" using namespace std; /* * */ int main(int argc, char** argv) { cout << "Hello!" << endl; Point P1(2,3); Point P2(5,8); Point P3(1,1); float d = distance(P1, P2); Triangle T1(P1, P2, P3); cout << "perimeter is " << T1.perimeter() << "\n" << endl; cout << "perimeter is " << T1.v1.get_x() << "\n" << endl; cout << "perimeter is " << T1.v1.get_y() << "\n" << endl; cout << "perimeter is " << T1.v2.get_x() << "\n" << endl; cout << "perimeter is " << T1.v2.get_y() << "\n" << endl; cout << "perimeter is " << T1.v3.get_x() << "\n" << endl; cout << "perimeter is " << T1.v3.get_y() << "\n" << endl; cout << "perimeter is " << d << "\n" << endl; return 0; } triangle.cpp #include <math.h> #include "triangle.h" float distance(Point v, Point w){ int p1x = v.get_x(); int p1y = v.get_y(); int p2x = w.get_x(); int p2y = w.get_y(); return sqrt((p2y-p1y)*(p2y-p1y) + (p2x-p1x)*(p2x-p1x)); } Point::Point(int a, int b){ x = a; y = b; } Point::Point(){ x = x; y = y; } void Point::set_values(int a, int b){ x = a; y = b; } int Point::get_x(){ return x; } int Point::get_y(){ return y; } Triangle::Triangle(Point a, Point b, Point c){ v1 = a; v2 = b; v3 = c; } float Triangle::area(){ return 1.0; } float Triangle::perimeter(){ return distance(v1, v2) + distance(v1, v3) + distance(v2, v3); } triangle.h #ifndef TRIANGLE_H #define TRIANGLE_H class Point { int x,y; public: void set_values(int, int); int get_x(); int get_y(); Point(int, int); Point(); }; class Triangle{ public: Point v1,v2,v3; Triangle(Point, Point, Point); float area(); float perimeter(); }; float distance(Point, Point); #endif /* TRIANGLE_H */ - How to add object created to a vector with the class constructor? I'm making a game for the first time with SDL2.0 in C++, and I've encountered a problem while trying to make a vector with all class instances of the class GameObject, this class includes all enemies and also the player character, and in the constructor of the class what I've done is it adds the instance being created to the vector but it doesn't work as it gives me an error "Severity Code Description Project File Line Suppression State Error LNK2001 unresolved external symbol "public: static class std::vector > GameObject::allEntities" (?allEntities@GameObject@@2V?$vector@VGameObject@@V?$allocator@VGameObject@@@std@@@std@@A) memory22 C:\Users\Marse\source\repos\memory22\memory22\gameObject.obj 1 " class GameObject { public: GameObject(const char* texturesheet, SDL_Renderer* ren, int x, int y, int _maxHp, int _currentHp, int _strength); ~GameObject(); static std::vector<GameObject> allEntities; ... } in another file: GameObject::GameObject(const char* texturesheet, SDL_Renderer* ren, int xx, int yy, int _maxHp, int _currentHp, int _strength) { allEntities.push_back(*this); //If I delete this line it works fine, but i need it to add the class instance being created to the vector renderer = ren; objectTexture = textureCreator::loadTexture(texturesheet, ren); GameObject::maxHp = _maxHp; GameObject::currentHp = _currentHp; x = xx; y = yy; } - Why am I receiving a "segmentation fault" when calling a stored function pointer? I have a class method that takes a function as a parameter and stores it in the class. This compiles, but when I try to call the stored function pointer, I receive a segmentation fault. // In GuiButton class private: void(*onClickFunction)(); // Method to store function void GuiButton::onClick(void(*onClickFunction)()) { this->onClickFunction = onClickFunction; } // Called when the button is clicked void GuiButton::buttonClick() { std::cout << "clicked" << std::endl; // This is printed this->onClickFunction(); // This is not called, aka "segmentation fault" } // The callback void hello() { std::cout << "hello" << std::endl; } // Set callback GuiButton myButton; myButton.onClick(&hello); I expect "hello" to print to the console, but I receive a segmentation fault instead. If I call onClickFunction()in onClickinstead of setting it, "hello" prints with no fault. I believe something goes out of scope once it's set in the class. - WebApp Content-Length Response Header GAS web app responses use chunked encoding (they seem to set the header Transfer-Encoding →chunked). But in my case I would need the Content-Length header instead. I know that from GAS we cannot change or set response headers, but anyone knows if through some setting, content type or deployment change this can be affected to use Content-Length header? - Embed google form within an Email programmatically I can use FormsApp to create a google form. Is there a way to embed that form in an e-mail automatically? Currently I can click on include in an e-mail through user interface and click on send. Is there a way to do this through Google Scripts? - Copying data between spreadsheets from sheets with the same name I want to copy the cell with the daily score from the sheets of all of my students in a spreadsheet where they are calculated and collected to another spreadsheet where they are used as currency to buy rewards. Both spreadsheets contain a sheet for every student that is named after that student, e.g. "John Smith" The original google script that I created worked, but it was poor coding because I had to repeat the coding for every single name, and therefore add a new paragraph of code every time we get a new student. I would like to create a new google script that is more elegant and powerful and works without specifying the students' names so that it never needs to be amended. I can't quite get it and keep hitting a "Syntax error" with the last line. function ImportDailyScore() { var dailyinput = "J27"; // Mon=J3, Tue=J9, Wed=J15, Thu=J21, Fri=J27 var dollaroutput = "B2"; // Today=B2, Yesterday=B3, etc. var dollarspreadsheet = SpreadsheetApp.getActiveSpreadsheet(); var checkinspreadsheet = SpreadsheetApp.openById('some id'); var checkinsheets = checkinspreadsheet.getSheets(); // get all the sheets from check in doc var dollarsheets = dollarspreadsheet.getSheets(); // get all the sheets from dollar doc for (var i=0; i<checkinsheets.length; i++){ // loop across all the checkin sheets var checkindata = checkinsheets[i].getRange(dailyinput).getValue(); var namedcheckin = checkinsheets[i].getSheetName() for (var j=0; j<dollarsheets.length; j++){ var nameddollar = dollarsheets[j].getSheetName(); if (namedcheckin = nameddollar, dollarsheets[j].getRange(dollaroutput).setValue(checkindata)) } } } For reference, the original code (which works just as I would like it to) but needs to specify the name of every single student is: function ImportDailyScore() { var dollarspreadsheet = SpreadsheetApp.getActiveSpreadsheet(); var checkinspreadsheet = SpreadsheetApp.openById('1Y9Ys1jcm1xMaLSqmyl_pFnvIzbf-omSeIcaI2FgjFIs'); var dailyinput = "J3"; // Mon=J3, Tue=J9, Wed=J15, Thu=J21, Fri=J27 var dollaroutput = "B4"; // Today=B2, Yesterday=B3, etc. var JohnCHECKIN = checkinspreadsheet.getSheetByName('John Smith'); var JohnCHECKINData = JohnCHECKIN.getRange(dailyinput).getValue(); var JohnDOLLAR = dollarspreadsheet.getSheetByName('John Smith'); JohnDOLLAR.getRange(dollaroutput).setValue(JohnCHECKINData); var JenniferCHECKIN = checkinspreadsheet.getSheetByName('Jennifer Scott'); var JenniferCHECKINData = JenniferCHECKIN.getRange(dailyinput).getValue(); var JenniferDOLLAR = dollarspreadsheet.getSheetByName('Jennifer Scott'); JenniferDOLLAR.getRange(dollaroutput).setValue(JenniferCHECKINData); etc. - Displaying/Formatting and Working/Calculating with Time in decimal notation (e.g., mm:ss.s) I have a series of time values that are in mm:ss.s format: 1:50.4, 1:51.3, 1:49.2, 2:00.1 These are split times in a spreadsheet tracking rowing performance where, for example, 1:50.4 is 1 minute and 50.4 seconds. I would like to: - Perform operations on a series of them (average, max, min, etc.); and - Format the field so that I can input the entries faster (for the above example, I'm hoping I can enter "110.4" and have the cell display "1:50.4"). I'm presuming that if I can just input them as seconds with 1 decimal point, then part 1 would be easy. Then it's just how to make the cell show the formatting I want in part 2. I've played around with the "Format" menu, and first tried "Duration" formatting, specifically, but no dice. I then tried using the "Custom date and time formats", but then I have to enter the time as "hh:mm:ss.sss", which is pretty cumbersome (although the calculations are easier, even though it treats the field as a time after 12AM if I enter 00:01:50.4--e.g., "12:01:50 AM"<-this is what shows in the entry field). Any suggestions would be appreciated. - Google Script - Trigger when a new value that meets a criteria is inserted to a specific sheet & column in Google Sheets Google Apps Script and Spreadsheet gurus, I need your help please. I have this spreadsheet: Background: This spreadsheet is being used to collect batch performance records. This sheet will be updated by a manager who will enter each new production batch into the next available row. If a new production batch is entered and has a variance greater than 10%, I will to get an email alert that a new batch has failed variance standards. Main Problem: I would like to trigger an email that will be sent when a new value, that meets the criteria "greater than 10%", appears in Range(K3:K52). In the past I have only been able to manually add triggers to my spreadsheets. I have tried but have not been able to figure out how to program triggers with code. Which I suspect is the way I will have to trigger my script. I also don't know how to get the script to recognize if values were previously there or if a value is a new entry. So far when I manually run my code, it sends me an alert email with the last batch who's variance is greater than 10%, rather than getting triggered when a new batch entry has a variance greater than 10%. I really appreciate any help I have been working on the script for a few days and this is the closest I could get. - Using kind of range in Java Enum I can define enum values with specific int values but I also want to represent some specific range in a Java enum. What I mean is the following: public enum SampleEnum { A(1), B(2), C(3), D(4), //E([5-100]); private SampleEnum(int value){ this.value = value; } private final int value; } Here, for example, is it possible to represent the range between 5 and 100 with a single value(here, it is "E") or is there a better way? - How do I create a list of time ranges? I'm trying to set up a way to be able to step backward and forward through sets of time ranges. I'm creating a simple display for my work's production floor. Every time a user scans a barcode a serial number is generated in our ERP system. In order to track progress throughout the day they want a display that will show how many serials (and because of that, how many widgets) were processed that day. The display would also show the previous shift's widgets. So at any given time you look at the display you'll see the current shift's progress, and then the previous shift's totals. We run 3 normal shifts throughout the week. 1st shift 6am to 2pm, 2nd shift 2pm to 10pm, 3rd shift 10pm to 6am. Then on both weekend days we have a weekend shift that runs 6am to 6pm. I currently have a spaghetti noodle mess of if/else statements that make this actually work, but I really want to clean it up and make it easier to maintain long-term. So let's say it's Monday 1st shift. I want to grab Monday 1st shift's totals, but I also want to step backwards by 1 shift to the weekend shift and grab those totals to display. The weekend shifts are really what threw this for a loop because now I have to be aware of the day of the week. 3rd shift is a little leery as well because it crosses over into multiple dates. Is there a clean way to set this up? A library out there that lets me define these things easily? Hope this made sense. Happy to clarify anything I'm missing. Thanks in advance for any guidance! - Using Index to build a range from a closed external workbook Ideally we would like to avoid using VBA for this (if it's possible). I have provided two sample spreadsheets to illustrate the issue as it's probably better with an example rather than giving confusing explanations. - Main sheet: - Suppliers sheet: Basically, we need to pull a range of an unknown length starting from an unknown offset from a closed workbook, then use MATCHto find the row index of a value we're looking for and return the value. We tried three different methods. All formulas works when the workbook is open but we need a solution to use without the workbook open. I didn't look into using INDIRECTsince MS strictly states that INDIRECTis not compatible with closed workbook. Here are the formulas with indentation for better visual: Method #1: Using index + count(if()) (array)( IF('Z:\...\[Suppliers.xlsx]suppliers'!$A:$A=$L$9,1) )-1 ) , MATCH( K11, INDEX('Z:\...\[Suppliers.xlsx]suppliers'!$D:$D, MATCH($L$9,'Z:\...\[Suppliers.xlsx]suppliers'!$A:$A,0) ) : INDEX('Z:\...\[Suppliers.xlsx]suppliers'!$D:$D, MATCH($L$9,'Z:\...\[Suppliers.xlsx]suppliers'!$A:$A,0) + COUNT( IF('Z:\...\[Suppliers.xlsx]suppliers'!$A:$A=$L$9,1) )-1 ) ,0) ,0) Method #2: Using index + countif() This solution returns #VALUE) + COUNTIF('Z:\...\[Suppliers.xlsx]suppliers'!$A:$A,$L$9))-1 ) ,0) ,0) Method #3: Using index + sumproduct()) + SUMPRODU) + SUMPRODUCT( -- ('Z:\...\[Suppliers.xlsx]suppliers'!$A:$A=$L$9))-1 ) ,0) ,0) If I understand correctly, the tricky bit is building a dynamic range using INDEX(ref,row,col):INDEX(ref,row,col)since every bits works on their own but when used in this fashion it returns an error. Note: You might need to update the absolute path to the spreadsheet suppliers.xlsxin main.xlsxif you download the provided demo All help is gladly appreciated! - OFFSET | FETCH Order By column - SQL Server Can anyone explain to me on a very basic level what the performance difference would be for these 2 queries. ORDER BY (select null) OFFSET @Offset ROWS FETCH NEXT @EntriesPerPage ROWS ONLY; ORDER BY (ItemID) OFFSET @Offset ROWS FETCH NEXT @EntriesPerPage ROWS ONLY; I would expect the first query to perform better because it is not ordering by a column but every time i test this the second query always performs better. Can anyone explain to me why the second query would run better for OFFSET / FETCH even though it is ordering by a column. I run the queries separately. I wipe the cache plan and buffers between each query so that it doesn't use the previous execution plan. Thanks - Find Fragflag, Offset and Length with given values of MTU. How to find them? Thanks - OFFSET sample id excel Im currently working on an excel sheet for my company, the model im being tasked to work in lookes like this: Im trying to make a match index from using col E and col G to fill out col H, my idea is to use F as a hidden helper col so im typing in one sample ID in col E and then 7 rows in col F will fill out with the same sample ID and then the match index would be kinda straight forward, I have a vague idea that using OFFSET is the way to go but i cannot get it working... Does anyone have an idea of how to get the OFFSET formular to work or maybe another idea as to how i can get this working?
http://quabr.com/52525677/googles-script-editor-giving-run-error-when-entering-negative-column-row-offset
CC-MAIN-2019-09
refinedweb
2,949
63.59
>>>>> "Rob" == Robert Collins <address@hidden> writes: >> Recently gcc added precompiled header support. This is mostly useful >> for C++, but C might benefit in some cases too. Rob> Are you planning on doing this, or just sketching the design and hoping Rob> for volunteer contributions? I'm hoping someone else will do it :-) Rob> What might be a useful starting point is some manual test cases or Rob> sample rules, to aim for. No problem. libstdc++ is already using it. I've appended some snippets from their Makefile.am. We could probably already get most of this by abusing _PROGRAMS. That's ugly though. I've also appended the section of the gcc manual explaining precompiled headers. Tom pch_input = ${host_builddir}/stdc++.h pch_output_builddir = ${host_builddir}/stdc++.h.gch pch_source = ${glibcxx_srcdir}/include/stdc++.h PCHFLAGS=-Winvalid-pch -Wno-deprecated -x c++-header $(CXXFLAGS) if GLIBCXX_BUILD_PCH pch_build = ${pch_input} pch_install = install-pch else pch_build = pch_install = endif # Build a precompiled C++ include, stdc++.h.gch. ${pch_input}: ${allstamped} ${host_builddir}/c++config.h ${pch_source} touch ${pch_input}; \ if [ ! -d "${pch_output_builddir}" ]; then \ mkdir -p ${pch_output_builddir}; \ fi; \ $(CXX) $(PCHFLAGS) $(AM_CPPFLAGS) ${pch_source} -O0 -g -o ${pch_output_builddir}/O0g; \ $(CXX) $(PCHFLAGS) $(AM_CPPFLAGS) ${pch_source} -O2 -g -o ${pch_output_builddir}/O2g; @node Precompiled Headers @section Using Precompiled Headers @cindex precompiled headers @cindex speed of compilation @option{-x} option to make the driver treat it as a C or C++ header file. You will probably want to use a tool like @command{make} to keep the precompiled header up-to-date when the headers it contains change. A precompiled header file will be searched for when @code{#include} is seen in the compilation. As it searches for the included file (@pxref{Search Path,,Search Path,cpp.info,The C Preprocessor}) the compiler looks for a precompiled header in each directory just before it looks for the include file in that directory. The name searched for is the name specified in the @code{#include} with @samp{.gch} appended. If the precompiled header file can't be used, it is ignored. For instance, if you have @code{#include "all.h"}, and you have @file{all.h.gch} in the same directory as @file{all.h}, then the precompiled header file will be used if possible, and the original header will be used otherwise. Alternatively, you might decide to put the precompiled header file in a directory and use @option{ @code{#error} command. This also works with @option{ @option{ @emph{directory} named like @file: @itemize @item Only one precompiled header can be used in a particular compilation. @item @code{#include}. @item The precompiled header file must be produced for the same language as the current compilation. You can't use a C precompiled header for a C++ compilation. @item The precompiled header file must be produced by the same compiler version and configuration as the current compilation is using. The easiest way to guarantee this is to use the same compiler binary for creating and using precompiled headers. @item Any macros defined before the precompiled header (including with @option{-D}) must either be defined in the same way as when the precompiled header was generated, or must not affect the precompiled header, which usually means that the they don't appear in the precompiled header at all. @item Certain command-line options. @end itemize @ref{Bugs}.
http://lists.gnu.org/archive/html/automake/2003-10/msg00008.html
CC-MAIN-2014-52
refinedweb
545
57.27
Collaboration Policy. For this problem set, you may either work alone and turn in a problem set with just your name on it, or work with one other student in your section. Liskov's Chapter 7 and Meyer's Static Typing and Other Mysteries of Life describe very different rules for subtypes. Liskov's substitution principle requires that the subtype specification supports reasoning based on the supertype specification. When we reasoned about a call to a supertype method, we reasoned that if a callsite satisfies the preconditions in the requires clause, it can assume the state after the call satisfies the postconditions in the effects clause. This means the subtype replacement for the method cannot make the precondition stronger since then our reasoning about the callsite may no longer hold (that is, the callsite may not satisfy the stronger precondition). Hence, the type of the return value of the subtype method must be a subtype of the type of the return value for the supertype method; the types of the parameters of the subtype method must be supertypes of the types of the parameters of the supertype method. This is known as contravariant typing. Bertrand Meyer prefers covariant typing: the subtype replacement method parameter types must be subtypes of the types of the parameters of the supertype method. We will generalize his rules to apply to preconditions and postconditions also: the subtype method preconditions must be stronger than the supertype method precondition (presub => presuper) and the subtype postconditions must be stronger than the supertype postconditions (postsub => postsuper). Note that unlike the corresponding Liskov substitution rule, (presuper && postsub) => postsuper, there is no need for presuper in the covariant rule since postsub => postsuper. The signature rule in Java is stricter: subtype methods must have the exact same return and parameter types of the method they override, although they may throw fewer exception types. Java does not place constraints on the behavior of methods, however, since the compiler is not able to check this. Consider the minimal Tree class and its BinaryTree subtype, both specified on the next page. The Java compiler will not allow the getChild method of BinaryTree. Here is the error message: BinaryTree.java:29: getChild(int) in BinaryTree cannot override getChild(int) in Tree; attempting to use incompatible return type found : BinaryTree required: Tree public class Tree // OVERVIEW: A Tree is a mutable tree where the nodes are int values. // A typical Tree is < value, [ children ] > // where value is the int value of the root of the tree // and children is a sequence of zero or more Tree objects // that are the children of this tree node. // A Tree may not contain cycles, and may not contain the same // Tree object as a sub-tree in more than one place. public Tree (int val) // EFFECTS: Creates a tree with value val and no children: // < value, [] > public void addChild (Tree t) // REQUIRES: t is not contained in this. // MODIFIES: this // EFFECTS: Adds t to the children of this, as the rightmost child: // this_post = < this_pre.value, children > // where children = [ this_pre.children[0], this_pre.children[1], ..., // this_pre.children[this_pre.children.length - 1], t] // NOTE: the rep is exposed! public Tree getChild (int n) // REQUIRES: 0 <= n < children.length // EFFECTS: Returns the Tree that is the nth leftmost child // of this. // NOTE: the rep is exposed! public class BinaryTree extends Tree // OVERVIEW: A BinaryTree is a mutable tree where the nodes are int values // and each node has zero, one or two children. // // A typical BinaryTree is < value, [ children ] > // where value is the int value of the root of the tree // and children is a sequence of zero, one or two BinaryTree objects // that are the children of this tree node. // A BinaryTree may not contain cycles, and may not contain the same // BinaryTree object as a sub-tree in more than one place. public BinaryTree (int val) // EFFECTS: Creates a tree with value val and no children: // < value, null, null > public void addChild (BinaryTree t) // REQUIRES: t is not contained in this and this has zero or one children. // MODIFIES: this // EFFECTS (same as supertype): // Adds t to the children of this, as the rightmost child: // this_post = < this_pre.value, children > // where children = [this_pre.children[0], this_pre.children[1], ..., // this_pre.children[this_pre.children.length - 1], t] public BinaryTree getChild (int n) // REQUIRES: 0 <= n < 2 // EFFECTS: If this has at least n children, returns a copy of the BinaryTree // that is the nth leftmost child of this. Otherwise, returns null. 2. (10) Does the addChild method in BinaryTree satisfy the Liskov substitution principle? Explain why or why not. 3. (10) Does the getChild method in BinaryTree satisfy the Eiffel subtyping rules? Explain why or why not. 4. (10) Does the addChild method in BinaryTree satisfy the Eiffel subtyping rules? Explain why or why not. Note that the Java compiler will allow the addChild method, but it overloads instead of overrides the supertype addChild method. That is, according to the Java rules the BinaryTree class now has two addChild methods — one is the addChild (Tree) method inherited from Tree, and the other is the addChild (BinaryTree) method implemented by BinaryTree. This can be quite dangerous since the overloaded methods are resolve based on apparent types, not actual types. For example, try this program: static public void main (String args[]) { Tree t = new BinaryTree (3); BinaryTree bt = new BinaryTree (4); t.addChild (bt); // Calls the addChild(Tree) method bt.addChild (new BinaryTree (5)); // Calls the addChild (BinaryTree) method bt.addChild (new Tree (12)); // Calls the addChild (Tree) method }Note that the first call uses the inherited addChild(Tree) because the apparent type of t is Tree, even though its actual type is BinaryTree. -Xmx512m -eaThe first argument sets the maximum size of the VM heap to 512 megabytes. The second argument turns on assertion checking. Rhocasa provides a graphical user interface (GUI) for manipulating a set of images by applying filters to generate new images. Every filter must be a subtype of the Filter datatype. The filters provided are shown in the class hierarchy below:.java.lang.Object ps4.Filterps4.Filter ps4.PointFilterps4.PointFilter ps4.BrightenFilterps4.BrightenFilter ps4.GreyscaleFilterps4.GreyscaleFilter ps4.BlurFilterps4.BlurFilter ps4.FlipFilterps4.FlipFilter ps4.MultiFilterps4.MultiFilter ps4.AddFilterps4.AddFilter ps4.TileFilterps4.TileFilter We provide two abstract subtypes of Filter: For the next two questions, you are to implement new filters. The behavior of these filters is not specified precisely; you can determine in your implemention a good way to provide the effect described. 7. (10) Develop a new filter that produces an image that is the "average" of two or more images. 10. (10) Modify the application to suppoed the parameterized filters. In addition to modifying the filter classes, you will need to modify the GUI to allow a user to enter the parameter for a parameterized filter. This will involve modifying the GUIHandler.actionPerformed method defined in GUI.java. Hint: look at how the MultiFilter is handled. Turn-in Checklist: You should turn in your answers to questions 1-10 on paper at the beginning of class on Friday, 6 October. including your code (but not unnecessary printouts of the provided code). Also, submit the image you produced for question 11 as a JPG and a zip file containing all of your code by email to [email protected]. There may be a token prize for the best image created.
http://www.cs.virginia.edu/~evans/cs205/ps/ps4/
CC-MAIN-2018-43
refinedweb
1,230
52.6
- ASP.NET and the .NET Framework (inactive) - Introducing ASP.NET Controls - Adding Application Logic to an ASP.NET Page - The Structure of an ASP.NET Page - Summary In this article, you learn how to build basic ASP.NET Web Forms Pages. Don't let the Forms part of the name mislead you. Web Forms Pages can do much more than display standard HTML forms. Most, if not all, the pages of your ASP.NET application will be Web Forms Pages, which enable you to create pages with interactive, dynamic, or database-driven content. Web Forms Pages are pieced together out of two building blocks. First, you assemble the dynamic portion of the user interface by using ASP.NET controls. ASP.NET controls enable you to display "smart" HTML forms, for example, and present interactive grids of database data. The first part of this article provides an overview of all the ASP.NET controls. The second building block of a Web Forms Page is the application logic, which includes the code that executes when you click a form button or the code that retrieves the database data displayed within a control. In the second part of this article, you learn how to add application logic to your Web Forms Pages to handle both control and page events. Finally, in the last part of this article, you are formally introduced to the structure of an ASP.NET Web Forms Page. You learn about the different sections of a Web Forms Page and the type of content appropriate to each section. After reading this article, you can start creating dynamic Web sites by building interactive, dynamic pages using ASP.NET controls and handling control and page events with application logic. ASP.NET and the .NET Framework ASP.NET is part of Microsoft's overall .NET framework, which contains a vast set of programming classes designed to satisfy any conceivable programming need. In the following two sections, you learn how ASP.NET fits within the .NET framework, and you learn about the languages you can use in your ASP.NET pages. The .NET Framework Class Library Imagine that you are Microsoft. Imagine that you have to support multiple programming languagessuch as Visual Basic, JScript, and C++. A great deal of the functionality of these programming languages overlaps. For example, for each language, you would have to include methods for accessing the file system, working with databases, and manipulating strings. Furthermore, these languages contain similar programming constructs. Every language, for example, can represent loops and conditionals. Even though the syntax of a conditional written in Visual Basic differs from the syntax of a conditional written in C++, the programming function is the same. Finally, most programming languages have similar variable data types. In most languages, you have some means of representing strings and integers, for example. The maximum and minimum size of an integer might depend on the language, but the basic data type is the same. Maintaining all this functionality for multiple languages requires a lot of work. Why keep reinventing the wheel? Wouldn't it be easier to create all this functionality once and use it for every language? The .NET Framework Class Library does exactly that. It consists of a vast set of classes designed to satisfy any conceivable programming need. For example, the .NET framework contains classes for handling database access, working with the file system, manipulating text, and generating graphics. In addition, it contains more specialized classes for performing tasks such as working with regular expressions and handling network protocols. The .NET framework, furthermore, contains classes that represent all the basic variable data types such as strings, integers, bytes, characters, and arrays. Most importantly, for purposes of this book, the .NET Framework Class Library contains classes for building ASP.NET pages. You need to understand, however, that you can access any of the .NET framework classes when you are building your ASP.NET pages. Understanding Namespaces As you might guess, the .NET framework is huge. It contains thousands of classes (over 3,400). Fortunately, the classes are not simply jumbled together. The classes of the .NET framework are organized into a hierarchy of namespaces. NOTE In previous versions of Active Server Pages, you had access to only five standard classes (the Response, Request, Session, Application, and Server objects). ASP.NET, in contrast, provides you with access to over 3,400 classes! A namespace is a logical grouping of classes. For example, all the classes that relate to working with the file system are gathered together into the System.IO namespace. The namespaces are organized into a hierarchy (a logical tree). At the root of the tree is the System namespace. This namespace contains all the classes for the base data types, such as strings and arrays. It also contains classes for working with random numbers and dates and times. You can uniquely identify any class in the .NET framework by using the full namespace of the class. For example, to uniquely refer to the class that represents a file system file (the File class), you would use the following: System.IO.File System.IO refers to the namespace, and File refers to the particular class. NOTE You can view all the namespaces of the standard classes in the .NET Framework Class Library by viewing the Reference Documentation for the .NET Framework. Standard ASP.NET Namespaces The classes contained in a select number of namespaces are available in your ASP.NET pages by default. (You must explicitly import other namespaces.) These default namespaces contain classes that you use most often in your ASP.NET applications: SystemContains all the base data types and other useful classes such as those related to generating random numbers and working with dates and times System.CollectionsContains classes for working with standard collection types such as hash tables, and array lists System.Collections.SpecializedContains classes that represent specialized collections such as linked lists and string collections System.ConfigurationContains classes for working with configuration files (Web.config files) System.TextContains classes for encoding, decoding, and manipulating the contents of strings System.Text.RegularExpressionsContains classes for performing regular expression match and replace operations. System.WebContains the basic classes for working with the World Wide Web, including classes for representing browser requests and server responses System.Web.CachingContains classes used for caching the content of pages and classes for performing custom caching operations System.Web.SecurityContains classes for implementing authentication and authorization such as Forms and Passport authentication System.Web.SessionStateContains classes for implementing session state System.Web.UIContains the basic classes used in building the user interface of ASP.NET pages System.Web.UI.HTMLControlsContains the classes for the HTML controls System.Web.UI.WebControlsContains the classes for the Web controls .NET Framework-Compatible Languages For purposes of this book, you will write the application logic for your ASP.NET pages using Visual Basic as your programming language. It is the default language for ASP.NET pages (and the most popular programming language in the world). Although you stick to Visual Basic in this book, you also need to understand that you can create ASP.NET pages by using any language that supports the .NET Common Language Runtime. Out of the box, this includes C# (pronounced See Sharp),JScript.NET (the .NET version of JavaScript), and the Managed Extensions to C++. Dozens of other languages created by companies other than Microsoft have been developed to work with the .NET Framework. Some examples of these other languages include Python, SmallTalk, Eiffel, and COBOL. This means that you could, if you really wanted to, write ASP.NET pages using COBOL. Regardless of the language that you use to develop your ASP.NET pages, you need to understand that ASP.NET pages are compiled before they are executed. This means that ASP.NET pages can execute very fast. The first time you request an ASP.NET page, the page is compiled into a .NET class, and the resulting class file is saved beneath a special directory on your server named Temporary ASP.NET Files. For each and every ASP.NET page, a corresponding class file appears in the Temporary ASP.NET Files directory. Whenever you request the same ASP.NET page in the future, the corresponding class file is executed. When an ASP.NET page is compiled, it is not compiled directly into machine code. Instead, it is compiled into an intermediate-level language called Microsoft Intermediate Language (MSIL). All .NET-compatible languages are compiled into this intermediate language. An ASP.NET page isn't compiled into native machine code until it is actually requested by a browser. At that point, the class file contained in the Temporary ASP.NET Files directory is compiled with the .NET framework Just in Time (JIT) compiler and executed. The magical aspect of this whole process is that it happens automatically in the background. All you have to do is create a text file with the source code for your ASP.NET page, and the .NET framework handles all the hard work of converting it into compiled code for you. ASP Classic What about VBScript? Before ASP.NET, VBScript was the most popular language for developing Active Server Pages. ASP.NET does not support VBScript, and this is good news. Visual Basic is a superset of VBScript, which means that Visual Basic has all the functionality of VBScript and more. So, you have a richer set of functions and statements with Visual Basic. Furthermore, unlike VBScript, Visual Basic is a compiled language. This means that if you use Visual Basic to rewrite the same code that you wrote with VBScript, you can get better performance. If you have worked only with VBScript and not Visual Basic in the past, don't worry. Since VBScript is so closely related to Visual Basic, you'll find it easy to transition between the two languages. NOTE Microsoft includes an interesting tool named the IL Disassembler (ILDASM) with the .NET framework. You can use this tool to view the disassembled code for any of the ASP.NET classes in the Temporary ASP.NET Files directory. It lists all the methods and properties of the class and enables you to view the intermediate-level code. This tool also works with all the ASP.NET controls discussed in this article. For example, you can use the IL Disassembler to view the intermediate-level code for the TextBox control (located in a file named System.Web.dll).
http://www.informit.com/articles/article.aspx?p=25467&seqNum=2
CC-MAIN-2018-13
refinedweb
1,744
59.9
I have a collection of python scripts that could benefit from the use of Authentication Aliases like testing a datasource before adding to WAS, running remote commands through wsadmin, or setting up a DB Repository without using a plaintext password in the command, but I'm having difficulties figuring out how to use them. I am using sample code found in How to access authentication alias from EJB deployed to Websphere 6.1 and Q & A: Frequently asked questions about WebSphere Application Server security as a guide. Here is the code I am using: ./wsadmin.sh -lang jython from com.ibm.wsspi.security.auth.callback import WSMappingCallbackHandlerFactory from com.ibm.wsspi.security.auth.callback import WSMappingCallbackHandler from javax.security.auth.login import LoginContext; from javax.security.auth Subject; from java.util import HashMap from com.ibm.wsspi.security.auth.callback import Constants map = HashMap() map.put(Constants.MAPPING_ALIAS, 'TestAuthAlias') subject = new Subject(); #cb = WSMappingCallbackHandlerFactory.getInstance().getCallbackHandler(map, null); cb = WSMappingCallbackHandler(map, None) loginContext = LoginContext("DefaultPrincipalMapping", subject, cb); loginContext.login(); I am hitting 2 roadblocks: Don't know if I can bypass it with the next line or not. loginContext = LoginContext("DefaultPrincipalMapping", subject, cb); fails with javax.security.auth.login.LoginException: javax.security.auth.login.LoginException: No LoginModules configured for DefaultPrincipalMapping. I am running this on WAS 8.5 that is packaged with IBM BPM 8.5.0.1. I was able to get the Java code working in a BPM Java Component so I know that the code is usable. Do I need to do something different with Subject, LoginContext, or javax.security.auth.login.Configuration? Answer by Henning Burgmann (3254) | Jan 23, 2015 at 04:39 AM In order to get access to a J2C Authentication Alias you have to be authenticated to WebSphere Application Server (WAS). Therefore, you can not use the approach that you have chosen. I suggest that you set the properties com.ibm.SOAP.loginUserid and com.ibm.SOAP.loginPassword in the file soap.client.props located in the directory profiles//properties, if you use SOAP to connect to WAS. With the utility PropFilePasswordEncoder you can encode the password value. For details about that utility see: I was running the above command in {WAS_INSTALL_ROOT}/bin/wsadmin.sh. The first thing I do in all of my websphere environments is set soap.client.props with loginUserid and password, so I have no idea how your response is beneficial. I will rephrase my question so it is easier to interpret. Answer by Henning Burgmann (3254) | Feb 04, 2015 at 06:23 AM I have misunderstood your use case. I thought that you wanted to use the J2C authentication alias for the initial login at the WAS server. As far as I understand your sample you run the provided code in a client application. In the default configuration the JAAS login configuration "DefaultPrincipalMapping" is not available for clients. You can use that login configuration only in an J2EE application that runs on the server. I suggest that you use the general "WSLogin", which is available for client and server applications. You can check your JAAS login configurations in the files: <profile_dir>/properties/wsjaas_client.conf <profile_dir>/properties/wsjaas.conf You can find details regarding developing and configuring programmatic logins with JAAS on the following page in KnowledgeCenter WAS ND v8.5.5: 15 people are following this question. Can you change the umask value of a WebSphere Application Server JVM using the wsadmin scripting tool? 1 Answer How to configure Single sign on ( Cross-Cell SSO) using Snoop Servlet in Websphere Application Server? 2 Answers How to check for potential impact of setting jaxws.share.dynamic.ports.enable = true 1 Answer Remote management of WebSphere 8.5 ND 2 Answers What is causing the "CWPMI0010W: Cannot find the file: /wbeModule.xml messages" in systemout.log 1 Answer
https://developer.ibm.com/answers/questions/171178/wsadmin-use-authentication-alias.html
CC-MAIN-2019-35
refinedweb
638
51.04
ExtendedAttributes QML Type The ExtendedAttributes type holds additional data about a Place. More... Signals - void valueChanged(string key, variant value) Methods Detailed Description The ExtendedAttributes type is a map of PlaceAttributes. To access attributes in the map use the keys() method to get the list of keys stored in the map and use the [] operator to access the PlaceAttribute items. The following are standard keys that are defined by the API. Plugin implementations are free to define additional keys. Custom keys should be qualified by a unique prefix to avoid clashes. Some plugins may not support attributes at all, others may only support a certain set, others still may support a dynamically changing set of attributes over time or even allow attributes to be arbitrarily defined by the client application. The attributes could also vary on a place by place basis, for example one place may have opening hours while another does not. Consult the plugin references for details. Some attributes may not be intended to be readable by end users, the label field of such attributes is empty to indicate this fact. Note: ExtendedAttributes instances are only ever used in the context of Places. It is not possible to create an ExtendedAttributes instance directly or re-assign a Place's ExtendedAttributes property. Modification of ExtendedAttributes can only be accomplished via Javascript. The following example shows how to access all PlaceAttributes and print them to the console: import QtPositioning 5.5 import QtLocation 5.6 function printExtendedAttributes(extendedAttributes) { var keys = extendedAttributes.keys(); for (var i = 0; i < keys.length; ++i) { var key = keys[i]; if (extendedAttributes[key].label !== "") console.log(extendedAttributes[key].label + ": " + extendedAttributes[key].text); } } The following example shows how to assign and modify an attribute: //assign a new attribute to a place var smokingAttrib = Qt.createQmlObject('import QtLocation 5.3; PlaceAttribute {}', place); smokingAttrib.label = "Smoking Allowed" smokingAttrib.text = "No" place.extendedAttributes.smoking = smokingAttrib; //modify an existing attribute place.extendedAttributes.smoking.text = "Yes" See also PlaceAttribute and QQmlPropertyMap. Signal Documentation This signal is emitted when the set of attributes changes. key is the key corresponding to the value that was changed. The corresponding handler is onValueChanged. Note: The corresponding handler is onValueChanged. Method Documentation Returns an array of place attribute.
https://doc-snapshots.qt.io/qt5-5.15/qml-qtlocation-extendedattributes.html
CC-MAIN-2022-27
refinedweb
371
51.34
LDGETNAME(3X) LDGETNAME(3X) NAME ldgetname - retrieve symbol name for COFF file symbol table entry SYNOPSIS #include <<stdio.h>> #include <<filehdr.h>> #include <<syms.h>> #include <<ldfcn.h>> char *ldgetname (ldptr, symbol) LDFILE *ldptr; SYMENT *symbol; AVAILABILITY Available only on Sun 386i systems running a SunOS 4.0.x release or earlier. Not a SunOS 4.1 release feature. DESCRIPTION ldgetname() returns a pointer to the name associated with symbol as a string. The string is contained in a static buffer local to ldget- name() that is overwritten by each call to ldgetname(), and therefore must be copied by the caller if the name is to be saved. ldgetname() can be used to retrieve names from object files without any backward compatibility problems. ldgetname() will return NULL (defined in stdio.h) for an object file if the name cannot be retrieved. This situation can occur: o if the ``string table'' cannot be found, o if not enough memory can be allocated for the string table, o if the string table appears not to be a string table (for example, if an auxiliary entry is handed to ldgetname() that looks like a reference to a name in a nonexistent string table), or o if the name's offset into the string table is past the end of the string table. Typically, ldgetname() will be called immediately after a successful call to ldtbread() to retrieve the name associated with the symbol ta- ble entry filled by ldtbread(). The program must be loaded with the object file access routine library libld.a. SEE ALSO ldclose(3X), ldfcn(3), ldopen(3X), ldtbread(3X), ldtbseek(3X) 19 February 1988 LDGETNAME(3X)
http://modman.unixdev.net/?sektion=3&page=ldgetname&manpath=SunOS-4.1.3
CC-MAIN-2017-30
refinedweb
276
56.05
February 16, 2022 • ☕️ 3 min read Assume you have a Node.js script that validates your content for problems. It is nice to have some output to indicate these errors. For example as follows: To identify that it is a problem (error or warning) it is nice to do some coloring of the problem type, for example with the chalk library. I use a few simple functions to create these messages which I write to the output using console.log(). createWarningOrErrorString.ts: (note, this is TypeScript) import * as path from "path"; import chalk from "chalk"; // use version 4 for TypeScript, until TypeScript 4.6 is available export function createErrorString(message: string, errorType: string, filepath = "", line = 1, column = 1): string { const filepathString = filepath === "" ? "<nofile>" : path.join(process.cwd(), filepath); const errorMessage = `${chalk.bgRed("ERROR")}: ${filepathString}(${line},${column}): ${errorType} - ${message}`; return errorMessage; } export function createWarningString(message: string, errorType: string, filepath = "", line = 1, column = 1): string { const filepathString = filepath === "" ? "<nofile>" : path.join(process.cwd(), filepath); const warningMessage = `${chalk.bgRed("WARNING")}: ${filepathString}(${line},${column}): ${errorType} - ${message}`; return warningMessage; } Assume we have an npm script validate as follows (where in my case validate.js is transpiled from validate.ts): "scripts": { "validate": "node validate.js" } If we run npm run validate in a terminal window within VSCode we get the output including the error and warning messages, but they will not end up in the “Problems” panel is Visual Studio Code. There are two reasons for that: The solution to both problems are VSCode tasks. A VSCode task is executed in a separate terminal task tab, named after the executing task: The nice thing is that VSCode parses the output generated in this tab for problems. But first we need to define a VSCode task for this: .vscode/tasks.json: { "version": "2.0.0", "tasks": [ { "label": "Validate", "detail": "Validate all content and parse errors from output", "type": "npm", "script": "validate --silent", "problemMatcher": [ { "owner": "content-linter", "fileLocation": ["autoDetect", "${workspaceFolder}"], "pattern": { "regexp": "^(ERROR|WARNING):\\s*(.*)\\((\\d+),(\\d+)\\):\\s+(.*)$", "file": 2, "line": 3, "column": 4, "severity": 1, "message": 5 } } ], "options": { "statusbar": { "label": "$(check-all) Validate", "color": "#00FF00" } } } ] } Note that the above task configuration parses the errors/warning in the output, to show them in the “Problems” panel. So the line: C:\P\competenceframework\packages\content\src\competenceframework-settings.json(1,1): SchemaValidationError - rings is not allowed. Is parsed using the regular expression ^(ERROR|WARNING):\\s*(.*)\\((\\d+),(\\d+)\\):\\s+(.*)$. Resulting in the following information in the “Problems” pane: To run this task execute the task, press F1, select Tasks: Run Task, and next select the Validate task. Note that the above task configuration contains some addition information in options. This drives the configuration of a VSCode extension Tasks to add tasks in the VSCode status bar: I the above example I created two tasks in the status: Validate and Build. Now you can start your tasks with a single click, parse the output, and show the results in the “Problems” pane. Normally the “Problems” pane only shows problems in open files, but using tasks you can report on all problems that occured during the execution of the task. VSCode has great documentation on tasks. Check it out!
https://www.sergevandenoever.nl/vscode-parse-output-for-problems-using-tasks/
CC-MAIN-2022-33
refinedweb
527
55.34
Screen QML Type The Screen attached object provides information about the Screen an Item or Window is displayed on. More... : int Attached Methods - int angleBetween(Qt::ScreenOrientation a, Qt::ScreenOrientation b) Detailed Description The Screen attached object is valid inside Item or Item derived types, after component completion. Inside these items it refers to the screen that the item is currently being displayed on. The attached object is also valid inside Window or Window derived types, after component completion. In that case it refers to the screen where the Window was created. It is generally better to access the Screen from the relevant Item instead, because on a multi-screen desktop computer, the user can drag a Window into a position where it spans across multiple screens. In that case some Items will be on one screen, and others on a different screen. To use this type, you will need to import the module with the following line: import QtQuick.Window 2.2 It is a separate import in order to allow you to have a QML environment without access to window system features. Note that the Screen type is not valid at Component.onCompleted, because the Item or Window has not been displayed on a screen by this time. Attached Property Documentation This contains the available height of the collection of screens which make up the virtual desktop, in pixels, excluding window manager reserved areas such as task bars and system menus. If you want to position a Window at the bottom of the desktop, you can bind to it like this: y: Screen.desktopAvailableHeight - height This QML property was introduced in Qt 5.1. This contains the available width of the collection of screens which make up the virtual desktop, in pixels, excluding window manager reserved areas such as task bars and system menus. If you want to position a Window at the right of the desktop, you can bind to it like this: x: Screen.desktopAvailableWidth - width This property was introduced in Qt 5.2. This contains the primary orientation of the screen. If the screen's height is greater than its width, then the orientation is Qt.PortraitOrientation; otherwise it is Qt.LandscapeOrientation. If you are designing an application which changes its layout depending on device orientation, you probably want to use primaryOrientation to determine the layout. That is because on a desktop computer, you can expect primaryOrientation to change when the user rotates the screen via the operating system's control panel, even if the computer does not contain an accelerometer. Likewise on most handheld computers which do have accelerometers, the operating system will rotate the whole screen automatically, so again you will see the primaryOrientation change. This contains the width of the screen in pixels. Attached Method Documentation Returns the rotation angle, in degrees, between the two specified.
http://doc.qt.io/qt-5/qml-qtquick-window-screen.html
CC-MAIN-2016-44
refinedweb
474
52.8
In this tutorial, we will learn threading, multithreading, threadpool, thread join in C#. In this post I will cover following aspects of threading, which I feel is basic you must know, threading is huge, many aspects to play with! To create thread in .Net Framework, we need to use System.Threading namespace, all thread-associated classes are under System.Threading namespace Just think, when we need threading in development! When we have some long time taking task to be completed, and at the same time user should be able to perform other task without being interrupted, in that situation we use threading. If you are not familiar with using Task then I would say please look at C# Task tutorial with example, that will help you to understand Threading better. Here is an example of how to create a new Thread. using System; using System.Threading; Thread t1 = new Thread(); Here we write a function for some long time taking task, that will Fetch 100000 records from database and send customized email to each user. class ThreadingSample { public static void MyLongTask1() { List<object> objCollection = null; foreach (object o in objCollection) { // here we can do any database task or sending email. // fetching 100000 records from database and send them customized email System.Console.WriteLine("Processing... my thread running"); } } } We need to put the above log task in a thread, so our GUI remains free for user to perform other activities. void ProcessNow() { Thread MyThread = new Thread(new ThreadStart(MyLongTask1)); MyThread.Start(); } A collection of configured Threads ready to serve incoming asynchronous task is called "ThreadPool". ThreadPool manages a group of threads, ThreadPool technique improves the responsiveness of the application. Once a thread completes its task, it sent to the pool to a queue of waiting threads, where it can be reused. Reusability of same thread avoids any application to create more threads and helps in less memory consumption. using System.Threading; using System.Diagnostics; ThreadPool.QueueUserWorkItem(new WaitCallback(Process)); Here is an example of how you can create ThreadPool using ThreadPool.QueueUserWorkItem. // call (1) ThreadPool.QueueUserWorkItem(a => Method1(4, "my param 1")); // call (2) ThreadPool.QueueUserWorkItem(new WaitCallback(delegate (object state) { Method1(6, "my param 10"); }), null); In above thread pool example I have called a void Method1 with two parameters, you can call your void method from there, ideally that should be any long task method that you want to add in thread pool. static void Method1(int number, string message) { for (int i = 0; i <= number; i++) { TotalScore = TotalScore + i; } Console.WriteLine($"{message}, TotalScore {TotalScore}"); } After adding in thread pool you may be interested to know how many thread are running at that moment, here is how you can get current thread count. int max, max2; ThreadPool.GetMaxThreads(out max, out max2); int available, available2; ThreadPool.GetAvailableThreads(out available, out available2); int runningThread = max - available; Console.WriteLine($"running therads count: {runningThread}"); If you want to know the total thread count of the current process, use following code. int threadCount = Process.GetCurrentProcess().Threads.Count; Console.WriteLine($"threadCount-{threadCount}"); thread join() method is used blocks the current thread, and makes it wait until its all child threads to finish their task. Join method also has overload where we can specify the timeout in milliseconds. If we don’t use join() method in main() thread, then the main thread may die before child thread class MethodUtil { public void LongFunction1() { Console.WriteLine("LongFunction1 started"); Thread.Sleep(5000); Console.WriteLine("LongFunction1 complete"); } public void LongFunction2() { Console.WriteLine("LongFunction2 started"); Thread.Sleep(5000); Console.WriteLine("LongFunction2 complete"); } } Here is a quick small piece of code to understand how thread joins works. MethodUtil util = new MethodUtil(); Thread t1 = new Thread(new ThreadStart(util.LongFunction1)); t1.Start(); Thread t2 = new Thread(new ThreadStart(util.LongFunction2)); t2.Start(); t1.Join(); t2.Join(); for (int i = 1; i <= 10; i++) { if (t1.IsAlive) { Console.WriteLine("LongFunction1 working"); Thread.Sleep(500); } else { Console.WriteLine("LongFunction1 Completed"); break; } } Console.WriteLine("Main Thread Completed"); Console.ReadLine(); Multithreading is basically creating multiple threads; where all above threading principal remain same. However, while creating multiple thread, we need to be more cautious about each thread process and result, should consider all possibilities like what if thread takes more than expected time, what are the dependency factors, what if tread is stuck, what would be the impact on other threads etc. MethodUtil util = new MethodUtil(); Thread t1 = new Thread(new ThreadStart(util.LongFunction1)); t1.Start(); Thread t2 = new Thread(new ThreadStart(util.LongFunction2)); t2.Start(); Here are few useful thread handling functions. Stop the execution of the thread permanently. Pauses the execution of the thread temporarily, if already suspended then nothing will happen. Resumes the execution of a suspended thread. IsAlive is a Boolean property, which indicates if the thread is still running. Please feel free to ask more threading question, we will try to update this post with answer of your query. You should also look at following tutorials
https://www.webtrainingroom.com/csharp/threading-example
CC-MAIN-2021-49
refinedweb
825
57.98
psiginfo, psignal - write signal information to standard error [CX] #include <signal.h>#include <signal.h> void psiginfo(const siginfo_t *pinfo, const char *message); void psignal(int signum, const char *message); The psiginfo() and psignal() functions shall write a language-dependent message associated with a signal number to the standard error stream as follows: - First, if message is not a null pointer and is not the empty string, the string pointed to by the message argument shall be written, followed by a <colon> and a <space>. - Then the signal description string associated with signum or with the signal indicated by pinfo shall be written, followed by a <newline>. For psiginfo(), the application shall ensure that-2017,. POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0260 [629] is applied. return to top of pagereturn to top of page
https://pubs.opengroup.org/onlinepubs/9699919799/functions/psiginfo.html
CC-MAIN-2019-47
refinedweb
137
50.36
We can load a CCDA xml document into SDA3 object. Once parsing SDA3 object, how do we determine from which XPATH from CCDA the specific SDA3 elements were mapped to. Is there any way? We can load a CCDA xml document into SDA3 object. Once parsing SDA3 object, how do we determine from which XPATH from CCDA the specific SDA3 elements were mapped to. Is there any way?". () Some of our interfaces use globals for lookups, but we are currently looking at putting a groups of (document) interfaces in a separate production with a shared ‘Default Database for Routines’ to reduce code duplication. I Hi Hi people, I am migration my web application of Cache 2013 to Cache 2016, in Cache 2013 I have a integration with a Java aplication using Java Gateway mapping proxy classes and consuming a method that param is a object, and it works perfectly. But in Cache 2016 this integration don't work, I send the param as object but Cache send as String with the ref of object... I: I am inserting rows in a table. This table is appearing in all namespace as I did global mapping. So once I run insert command from a method, it insert the rows. When I run the same insert command from other namespace, it replace the existing data in table. Insert command is same in all namespace but the data I m inserting is different. What is Purpose of DeepSee and how to we mapping the Deepsee concepts with in Class file.Please give a simple program for how to we learn on Deep see. Is there any good book where is described process of mapping globals to classes? And book about Storage? I didn't find much information about these questions in documentation :-( I created a DTL to do HL7 mapping. The test function in the tools works the DTL perfectly but when used by the rule in my business process, the OBX segments are stripped and the MRN is gone. The assigning authority and ID type are added into the PID but the actual patient MRN is blank (3.1 value). Here is the source code. Currently,. Please will correct the code: a is a perse Unicode to Unicode mapping value.Str=other status. It is how to convert English to french . Hi. Hi, is there a way to copy namespace mappings from another namespace to a new namespace programmatically with a command? When using management portal there is an option to choose an existing namespace to copy the mappings from, I'm basically looking for the equivalent from command line / COS. Thanks Martin? Is there an out-of-the-box or accepted standard method for loading up mappings between different code sets and then referencing these mappings (both directions) from DTL? First thought was the built in Lookup() and corresponding data tables but these only work in one direction (key -> value) and not the reverse. Obviously I can build my own classes to support a two way mapping but am wondering if there's a standard way of achieving this. The mapping should contain the code and display name from each of the code sets and allow mapping based on either code or display name. Thanks I want to override the Get and Set methods of a class property. The class maps to a pre-existing global. The property is defined like so: Property Invalid As %Library.Boolean; with the property mapping to a node like ^GLOBAL(Code,"INVALID")=1 Code is a property in the same class. The value can be 0 or 1 or the node might not exist. When it doesn't exist I want the value of the SQL field to come out as 0 (false). I was planning to fully map out all the transforms that incoming CCDs go through before they are displayed in the Clinical Viewer or sent back to another facility. When I opened the XSL files, I see some very good comments like the following:?
https://community.intersystems.com/tags/mapping?filter=unanswered
CC-MAIN-2019-30
refinedweb
671
70.94
XUL Based Web Applications. Why Not? XUL Based Web Applications. Why Not? Join the DZone community and get the full member experience.Join For Free I have just done some support for remote XUL applications in ItsNat (not public yet). Maybe you do not know XUL - XUL is the web component system based on web technologies included in Gecko based browsers (for instance FireFox). It is not new, as the first release of Mozilla (v1.0) almost 10 years ago, included this technology. The chrome of FireFox uses XUL, that is, menus, buttons, toolbars, dialogs and so on. In some way XUL is similar to the component system based on tags of JSF or ZK (in fact ZK is strongly inspired in XUL). The difference is those tags are directly understood by FireFox, there is no translation to HTML. Using JavaScript, CSS and W3C DOM Events you can fully customize XUL native components in the clients, furthermore, XUL supports XHTML code and/or SVG code embedded. You can find two online examples using XUL here and here. XUL support in ItsNat is easy because the approach “The Browser Is The Server” fits very well with management in server of any namespace natively supported by the browser, in fact, ItsNat already supports pure SVG pages. XUL would allow development of remote (client/server) web applications that are very similar to desktop applications. Using XUL components could be added/removed/updated from server and sending client events to the server using AJAX (if some listener was registered) with no reload following the Single Page Interface pattern (in fact XUL was not designed for page based applications). The “free” components of ItsNat can work with no problem with XUL markup, as it does with SVG. The problem arises around interactive XUL components like checkboxes or listboxes, components similar to the HTML counterparts (input checkbox select). In these kind of elements we can use the low level event system of ItsNat to synchronize the server DOM when the control changes in client. Nevertheless, custom components would be interesting providing data models, selection listeners etc, much like in HTML. The se new components are not done, this make me think is there enough demand? As said before, XUL is not new. To add some example to the manual I have been searching XUL code and applications based on XUL, and the findings have been very disappointing. Most of the examples, tutorials, and articles are very old. Is XUL interesting and popular enough to spend any resources on it? Will be someday? Are there XUL applications beyond the FireFox chrome and fancy add-ons? Why is not XUL popular? I have found some answers: 1) It is a technology only working in a single family of browsers (Gecko). 2) In spite of the fact that it can work remotely (XUL code served by web servers), the main purpose is to provide the desktop experience to the user. The focus is not page navigation, in fact XUL does not have a form tag (the HTML form can be used). 3) Web frameworks have largely ignored XUL. You can find some basic support in action frameworks and template technology allowing free design. I can't find (I haven’t) a server centric framework with support of XUL and AJAX. Item 1 does not have a clean solution, in ItsNat automatic HTML and JavaScript generation would be the path to follow in browsers with no XUL support, however this is not an easy task, because the same behaviour is expected in FireFox (native XUL) and non-XUL browsers and because client events in HTML markup must be converted to DOM events received by the correct XUL element in server including simulation of bubbling and capturing. Item 2 is not longer a problem because current trend is to avoid so much as possible the ugly page reload per request, XUL focus for desktop is currently an advantage for web applications, page reload can be fully avoided using AJAX. Item 3 can change, may be ItsNat is the first framework server centric with XUL and AJAX support. In summary, the first paragraph is the main unsolved problem, true XUL applications only work in Gecko browsers. Fex and AIR are proprietary solutions trying to conquer the world of application development outside the web (Flex runs on top of the web), why can not XUL, a really web based technology, be a first class technology? Flex is executed on top of Flash plug-in, what about an Active X embedding FireFox in Internet Explorer? Why is this Active X abandonware? In desktop Prism is trying to push FireFox as a platform for desktop applications much like AIR is doing, why not these applications in XUL? Can XUL have a rebirth as a web application platform? Can the XUL component system be a future standard for web development adopted by other browsers? Or is it too late? Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/xul-based-web-applications-why-0
CC-MAIN-2019-30
refinedweb
843
61.36
For openstack large scale deployment, which one is more efficient DVR or VRRP? shameem farahth shameem farahth Asked: 2016-04-19 06:53:42 -0600 Seen: 92 times Last updated: Apr 19 '16 Why is ODL adding port eth1 to br-int by default Provider network with ovs & vlan not work Problem with namespace in DVR HA mirroring traffic between vm [closed] OVS Offloading with Bonding neutron services not starting up ImportError: cannot import name decorate Openstack REST API Call Returns Forbidden Openstack juno how to configure dvr in network-node and compute-node COA exam - openstack version OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license. Sorry, your question sounds like "Oranges or Apples which ones are better ?"
https://ask.openstack.org/en/question/91266/for-openstack-large-scale-deployment-which-one-is-more-efficient-dvr-or-vrrp/
CC-MAIN-2020-10
refinedweb
140
57.81
Fabulous Adventures In Coding Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric. Here is yet another question I got from a C# user recently: would say to absolutely go for a property in this case, for four reasons. First, properties are highly discoverable. Consumers of this class can use IntelliSense to see that there is a property on the class called "Description" much more easily than they can see that there is an attribute. Second, properties are much easier to use than attributes. You don't want to muck around with the code to extract strings from metadata attributes unless you really have to. Third, data such as names and descriptions is highly likely to be localized in the future. Making it a property means that the property getter code can read the string out of a resource, a resource which you can hand off to your localization experts when it comes time to ship the Japanese version. And fourth, let's go back to basic object-oriented design principles. You are attempting to model something -- in this case, the class is modeling a "business rule". Note that a business rule is not a class. Nor is a rule an interface. A rule is neither a property nor a method. A rule isn't any programming language construct. A rule is a rule; classes and structs and interfaces and whatnot are mechanisms that we use to implement model elements that represent the desired semantics in a manner that we as software developers find amenable to our tools. But let's be careful to not confuse the thing being modeled with the mechanisms we use to model it. Properties and fields and interfaces and classes and whatnot are part of the model; each one of those things should represent something in the model world. If a property of a "rule" is its "description" then there should be something in the model that you're implementing which represents this. We have invented properties specifically to model the "an x has the property y" relationship, so use them. That's not at all what attributes are for. Think about a typical usage of attributes: [Obsolete][Serializable]public class Giraffe : Animal { ... [Obsolete][Serializable]public class Giraffe : Animal { ... Attributes typically do not have anything to do with the semantics of the thing being modeled. Attributes are facts about the mechanisms - the classes and fields and formal parameters and whatnot. Clearly this does not mean "a giraffe has an obsolete and a serializable." This also does not mean that giraffes are obsolete or serializable. That doesn't make any sense. This says that the class named Giraffe is obsolete and the class named Giraffe is serializable. In short: use attributes to describe your mechanisms, use properties to model the domain. You made a great point. I think most - if not all - C# developers have thought about this decision from time to time. The last sentence is the best guidance I have seen so far. (BTW, this would be a great addition to the .NET Framework Design Guidelines). However, I think you did not mention another aspect of attributes. The object oriented programming paradigm in general does not allow you to model "aspects". To me, serialization, security, logging etc. are all so-called cross-cutting concerns. Attributes allow you to declaratively embedd them into your code. Although the way they are interpreted varies greatly (compiler, runtime, post-tranformation tool, add-in manager etc.) but I would say that all these "mechanics" (as you called it) achieve the same result: modeling an aspect of your domain. Since C# has attributes one could argue that C# allows you to do do a bit of aspect oriented programming (AOP). Eric, I use attributes for defining metadata a lot and they served me great so far (of course, I do property- and method-based metadata as well). In my experience, there are cases where the attributes should be preferred, and there are cases when properties have an advantage; I wouldn't go as far as choosing the "cosher" approach. From the top of my head, I'll give you a few reasons for using attributes: 1) Metadata belongs to a class, not to an instance of a class. You don't need to create an instance of a class only to find out that it shouldn't be used in the current context (yes I am aware that the 'attributes' approach is much slower because of reflection, that's not the point I'm trying to make). 2) An attribute is a single line of code, which is conveniently located at the top of the class definition. That makes it much easier to understand as compared to looking for a property which might or might not be defined somewhere in the middle of the file. 3) Ability to specify more than one attribute of the same type could be handy sometimes. Well, it can be emulated with the properties returning arrays or something like it, but it's ugly and besides, you'll have to change every class in the hierarchy when making this change. 4) Run-time attribute-based code generation - that's a big one! The points you mentioned are all good, but I don't think they qualify for being a universal rule as to whether attributes should be used for defining metadata. Let's revisit them: 1) Properties are highly discoverable - true, but oftentimes you don't need them. And even when you do need them, the solution could be to create a (possibly sealed) property in the base class that would return a value based on the attribute(s). 2) Properties are much easier to use than attributes - well, to me, the attributes-based approach produces code that is easier to read than the property-based one, and that makes a big difference. Regarding the ease of use, just write the generic "TA FindAttribute<TA, T>() where TA : Attribute" method and never worry again. 3) Localization - I don't see the issue at all, attribute property's getter can read the localized string from the resources as well, what's the difference? 4) Object-oriented design principles... what makes you think it's not "object-oriented"? Actually, as I mentioned in the first point, I find the attribute-based approach more solid. Finally, let's compare the approaches using the class from the project I'm working on. It has hundreds the actions like the following one (I apologize if the code loses the indentation in the browser, I don't know what the formatting tags are and there is no "preview" button) - First, the attribute-based approach. The reader usually groks the whole picture in a matter of seconds. [Category("Data")] [PopupMenu("Data|Extract Selected Rows")] [MainMenu("Data|Extract Selected Rows")] [EnabledCondition(NeedsDocument = true, NeedsSelection = true)] [Role(AppRole.Editor)] [Log(LogUser = true, LogTime = true)] public class ExtractSelectedRowsAction : Action { public override void Run() { DataUtils.ExtractSelectedRows(Document.Selection); } } Now, the property-based one. public string Category get { return "Data"; } public PopupMenuInfo PopupMenuInfo get { return new PopupMenuInfo(new [] { "Data", "Extract Selected Rows" }); } public MainMenuInfo MainMenuInfo return new MainMenuInfo(new [] public EnabledCondition EnabledCondition { return new EnabledCondition { NeedsDocument = true, NeedsSelection = true } } public AppRole AppRole get { return AppRole.Editor; } public LogInfo LogInfo return new LogInfo { LogUser = true, LogTime = true See what I'm talking about? @Andrew: I assumed we were talking about static properties. One of my favorite bloggers, Eric Lippert , has a great post on the " properties vs. attributes @Andrew PopupMenu, MainMenu, EnabledCondition ? Is that a home-grown app framework? (I ask because it sounds interesting) @CMC Yes, that's part of our framework. Actually, since in that case metadata does not belong to an instance of a class, we use the following technique often - UI-related attributes are serialized in the XML files which are read at application start-up (those are plugins) and the UI is populated without loading the actual plugin dlls (the corresponding plugin dll is only loaded when you invoke its action for the first time); that helps a lot since we have 50+ plugins. Of course, you could still achieve the same result with property-based metadata, but attributes make our life much easier in that case. Thank you for submitting this cool story - Trackback from DotNetShoutout Eric, how do you feel about using a [Description] attribute on enum values, to convert them to friendly strings? enum MyEnum { [Description("First value")] Value1, [Description("Second value")] Value2 @Weeble: But aren't static things (properties or methods) among the most un-object oriented features in the language. Consider things like: how they combine with inheritance, there is no polymorphic way of treating all static properties defined in the inheritance chain. You cannot "override" a static property in the base class, you just reintroduce the name. I believe Eric did mean instance properties. I also don't find much disagreemente between Andrew and Eric. I find that attributes are the (almost) ideal replacement of the role that metaclass properties play in dynamic languages. Such roles are almost always metalevel, so it agrees with Eric's suggestion of attributes for implementation mechanism. I believe that attributes should past two questions to be warrant their introduction: "Do I need to conceptually have an instance of the class to access this data?", "Will the data contained be used by the level-0 model code?". There are four combinations of answers that, I believe, should help clarify where to put the abstractions. If you need the data independent of an instance and that data won't be accessed by the domain model, then it is a metaclass property and attributes are ideal (most of Andrew's attributes fall in here). I also think instance properties are no good here because they force on the metamodel the knowledge of how to construct an instance, something which doesn't belong in there most of the time. But if the domain code does read and takes action depending on this data, then you should consider adding more abstractions to the domain model, as it seems that this data should be instance properties of some new type (representing some model level typification, as opposed to the metalevel typification of Type and such). If the data is instance specific, then if it affects the flow of domain code, then it is clearly an instance property, but if it doesn't and it is read only by "mechanisms", then it probably doesn't belong to the domain level protocol (although it may be there for complexity reduction purposes, as is the case for operations like == or GetHashCode or even ToString()) and you should weight the complexity of introducing metaobjects in the solution (objects that represent this instance but with a protocol specific for the metamodel). I think these questions are basically a refraiming of the Eric's distinction between mechanisms (metalevel models) and domain (level-0 models). As the last bit says, to me it generally comes down to 'is a' vs. 'has a'. The Attributes of a dog include being a mammal, being trainable... The Properties of a dog include it's fur color, tricks it knows... Property vs. Attribute is a much easier decision than Subclass vs. Interface Implementation vs. Attribute. for example the Serializable attribute vs. the ISerializable interface.
http://blogs.msdn.com/b/ericlippert/archive/2009/02/02/properties-vs-attributes.aspx
CC-MAIN-2015-32
refinedweb
1,892
52.9
Hello, I have been working on a speech synthesis program using JSAPI and FreeTTS. I now have the code with NO errors (!!!:)) However, when I run the code, it doesn't actually have any audio to it. If I change the text spoken to "This is text that is spoken." then it says "This is (then is sounds like) 'STOP'" really, really quickly. It also shows a few errors when I change the text. The error I've come across is: ClusterUnitDatabase: can't find tree for pau_z ClusterUnitDatabase Error: getUnitIndex: can't find unit type pau_z This appears to mean, to me, that a few of the sound 'specifiers' (for lack of a better word) are not imported. However, I've imported all of the files I know of that I have. I'm not sure where the problem is, and I would be grateful for any help and/or advise any one can give me on this. Here is the code: package helloworldsynthesis; import javax.speech.*; import javax.speech.synthesis.*; import java.util.Locale; public class Main { public static void main(String args[]) { try { // Create a synthesizer for English Synthesizer synth = Central.createSynthesizer(new SynthesizerModeDesc(Locale.ENGLISH)); System.out.println(synth); // Get it ready to speak synth.allocate(); synth.resume(); // Speak the "Hello world" string synth.speakPlainText("Hello World!", null); // Wait till speaking is done synth.waitEngineState(Synthesizer.QUEUE_EMPTY); // Clean up synth.deallocate(); } catch (Exception e) { e.printStackTrace(); } } } Thanks in advance, -WolfShield
https://www.daniweb.com/programming/software-development/threads/351304/java-speech-synthesis-is-mute
CC-MAIN-2018-30
refinedweb
244
67.15
getservbyname() Get a service entry, given a name Synopsis: #include <netdb.h> struct servent * getservbyname( const char * name, const char * proto ); Since: BlackBerry 10.0.0 Arguments: - name - The name of the service whose entry you want to find. - proto - NULL, or the protocol for the service. Description:().. Returns: A valid pointer to a servent structure, or NULL if an error occurs. Files: - /etc/services - Network services database file. Classification: Caveats: This function uses static data; if you need the data for future use, copy it before any subsequent calls overwrite it. Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getservbyname.html
CC-MAIN-2019-22
refinedweb
113
60.72
[Previous: Chapter 10] [Qt Tutorial] [Next: Chapter 12] Files: In this example we introduce a timer to implement animated shooting. The CannonField now &painter); This private function paints the shot. QRect shotRect() const; This private function returns the shot's enclosing rectangle if one is in the air; otherwise the returned rectangle is undefined. int timerCount; QTimer *autoShootTimer; float shootAngle; float shootForce; }; These private variables contain information that describes the shot. The timerCount keeps track of the time passed since the shot was fired. The shootAngle is the cannon angle and shootForce is the cannon force when the shot was fired. #include <math.h> We include <math.h> because we need the sin() and cos() functions. (An alternative would be to include the more modern <cmath> header file. Unfortunately, some Unix platforms still don't support these properly.) CannonField::CannonField(QWidget *parent) : QWidget(parent) { currentAngle = 45; currentForce = 0; timerCount = 0; autoShootTimer = new QTimer(this); connect(autoShootTimer, SIGNAL(timeout()), this, SLOT(moveShot())); shootAngle = 0; shootForce = 0; setPalette(QPalette(QColor(250, 250, 200))); setAutoFillBackground(true); } We initialize our new private variables and connect the QTimer::timeout() signal to our moveShot() slot. We'll move the shot every time the timer times out. void CannonField::shoot() { if (autoShootTimer->isActive()) return; timerCount = 0; shootAngle = currentAngle; shootForce = currentForce; autoShootTimer->start(5); } This function shoots a shot unless a shot is in the air. The timerCount is reset to zero. The shootAngle and shootForce variables are set to the current cannon angle and force. Finally, we start the timer. void CannonField::moveShot() { QRegion region = shotRect(); ++timerCount; QRect shotR = shotRect(); if (shotR.x() > width() || shotR.y() > height()) { autoShootTimer->stop(); } else { region = region.unite(shotR); } update(region); } moveShot() is the slot that moves the shot, called every 5 milliseconds when the QTimer fires. Its tasks are to compute the new position, update the screen with the shot in the new position, and if necessary, stop the timer. First we make a QRegion that holds the old shotRect(). A QRegionRegion. Finally, we repaint the QRegion. This will send a single paint event for just the one or two rectangles that need updating. void CannonField::paintEvent(QPaintEvent * /* event */) { QPainter painter(this); paintCannon(painter); if (autoShootTimer->isActive()) paintShot(painter); } The paint event function has been simplified since the previous chapter. Most of the logic has been moved to the new paintShot() and paintCannon() functions. void CannonField::paintShot(QPainter &painter) { painter.setPen(Qt::NoPen); painter.setBrush(Qt::black); painter.drawRect(shotRect()); } This private function paints the shot by drawing a black filled rectangle. We leave out the implementation of paintCannon(); it is the same as the QWidget::paintEvent() reimplementation from the previous chapter. QRect CannonField::shotRect() const { const double gravity = 4; double time = timerCount / 20.0; double velocity = shootForce; double radians = shootAngle * result(0, 0, 6, 6); result.moveCenter(QPoint(qRound(x), height() - 1 - qRound(y))); return result; }Rect with size 6 x 6 and move its center point to the point calculated above. In the same operation we convert the point into the widget's coordinate system (see The Coordinate System). The qRound() function is an inline function defined in <QtGlobal> (included by all other Qt header files). qRound() rounds a double to the closest integer. class MyWidget : public QWidget { public: MyWidget(QWidget *parent = 0); }; The only addition is the Shoot button. QPushButton *shoot = new QPushButton(tr("&Shoot")); shoot->setFont(QFont("Times", 18, QPainter::drawEllipse() may help.] Change the color of the cannon when a shot is in the air. [Previous: Chapter 10] [Qt Tutorial] [Next: Chapter 12]
https://doc.qt.io/archives/qtopia4.3/tutorial-t11.html
CC-MAIN-2021-43
refinedweb
587
57.77
Windows Phone Toolkit PhoneTextBox in depthpublished on: 8/30/2011 | Tags: WP7Toolkit Mango windows-phone by WindowsPhoneGeek Recently a new version of the Windows Phone Toolkit was released: Windows Phone Toolkit - August 2011 (7.1 SDK) with some pretty interesting new components. Previously we covered all toolkit components in our 21 WP7 Toolkit in Depth articles covering all controls so it is time to continue this series with a few more posts. In this post I am going to talk about the new "PhoneTextBox" control in details. Basically, PhoneTextBox is an advanced TextBox control with ActionIcon support, Hints and more. It also exposes a set of properties for rich customization. Getting Started To begin using PhoneTextBox first add a reference to the Microsoft.Phone.Controls.Toolkit.dll assembly in your Windows Phone application project. UPDATE: If you have installed the toolkit via the .msi then you will not see the PhoneTextBox in the list with available controls. However this is a known issue and hopefully it will be fixed soon. For now here is the official discussion: .The workaround is to download and rebuild the source code. You can select Microsoft.Phone.Controls.Toolkit.dll directly from the "...\Silverlight for Windows Phone Toolkit Source & Sample - Aug 2011\Bin\" if you have downloaded the "Silverlight for Windows Phone Toolkit Source & Sample - Aug 2011.zip" instead. You can create an instance of the PhoneTextBox control either in XAML or with C#. - Define an instance of the PhoneTextBox control in XAML: you have to add the following namespace declaration: xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit" <toolkit:PhoneTextBox NOTE: Make sure that your page declaration includes the "toolkit" namespace! You can do this by using VisualStudio Toolbox, Expression Blend Designer or just include it on your own. - ·Define an instance of the PhoneTextBox control in C#: just add the following using directive: using Microsoft.Phone.Controls; PhoneTextBox textBox = new PhoneTextBox(); textBox.Hint = "UserName"; Key Properties PhoneTextBox derives from TextBox so it has all properties of the TextBox plus the following new ones: - Hint Hint is a dependency property of type string. It gets or sets the Hint of the PhoneTextBox control. - HintStyle HintStyle is a dependency property of type Style. It gets or sets the Hint Style of the PhoneTextBox control. Example: This example demonstrates how to add a simple HintStyle that changes only the FonFamily, FontSize and Foreground properties without changing the whole ControlTemplate. <phone:PhoneApplicationPage.Resources> <Style TargetType="ContentControl" x: <Setter Property="FontFamily" Value="Calibri"/> <Setter Property="Foreground" Value="Aqua"/> <Setter Property="FontSize" Value="30"/> </Style> </phone:PhoneApplicationPage.Resources> <!--...--> <toolkit:PhoneTextBox - ActualHintVisibility ActualHintVisibility is a dependency property of type Visibility. It gets or sets whether the hint is actually visible. - LengthIndicatorVisible LengthIndicatorVisible is a dependency property of type Boolean. It determines whether the length indicator of the PhoneTextBox should be visible. - LengthIndicatorTheshold LengthIndicatorTheshold is a dependency property of type int. It determines the threshold after which the length indicator will appear. Example: <toolkit:PhoneTextBox - DisplayedMaxLength DisplayedMaxLength is a dependency property of type int. It represents the displayed maximum length of text that can be entered. This value takes priority over the MaxLength property in the Length Indicator display. Example: <toolkit:PhoneTextBox NOTE :In the given example you can enter more than 4 characters, but the length indicator displays the number of characters entered out of 4, even if the length of the text entered is greater than 4. - ActionIcon ActionIcon is a dependency property of type ImageSource. It gets or sets the ActionIcon image of the PhoneTextBox control. Example1: <toolkit:PhoneTextBox private void Search_ActionIconTapped(object sender, EventArgs e) { //do some action here. I.e open MessageBox, etc. MessageBox.Show("The action icon was tapped!"); } Example2: <toolkit:PhoneTextBox Key Events - ActionIconTapped Occurs when the ActionIcon is tapped. Example: phoneTextBox.ActionIconTapped += new EventHandler(phoneTextBox_ActionIconTapped); //... void phoneTextBox_ActionIconTapped(object sender, EventArgs e) { //... } That was all about the new "PhoneTextBox" from the Windows Phone Toolkit Aug 2011 in depth. Here is the full source code: I hope that the post was helpful. You can also follow us on Twitter: @winphonegeek for Windows Phone; @winrtgeek for Windows 8 / WinRT posted by: RoguePlanetoid on 8/30/2011 2:52:15 PM Great demo - this one can catch you out as in the DLL installed by the August Toolkit installer doesn't seem to have this as a component you can use seperately - I had to extract the code for the PhoneTextBox and add this as it's own component - unless this has changed, which I hope it has, as this is a very useful control! posted by: Rich on 8/30/2011 7:18:49 PM Thanks for the article. One question. How would you handle the ActionIconTapped event using MVVM? This control will be great, but I'm using the MVVM Light framework in my app and am trying to be consistent. ActionIconTapped in MVVM Light posted by: winphonegeek on 8/30/2011 7:31:11 PM Basically, you have two options: Option1: To use EventToCommand from MVVM Light Option2: To use a custom behavior (that handles the event and calls a command) @RoguePlanetoid posted by: winphonegeek on 8/30/2011 7:52:34 PM Thank you for pointing that out, we have updated the post. The problem with the .msi is a known issue, hopefully it will be fixed soon: Good Control posted by: Carlos Peres on 8/30/2011 11:16:09 PM Amazing post guys. Pretty good example as always. I was about to miss this control. I installed the toolkit several days ago but did not notice any PhoneTextBox. Now I downloaded the source and it is in there :) I was looking for such component for a long time. I mean there are similar payed controls but the Toolkit`s PhoneTextBox is the only FREE one. RE:ActionIconTapped in MVVM Light posted by: winphonegeek on 8/31/2011 3:28:38 PM We posted an new article that explains: How to bind a Windows Phone control Event to a Command using MVVM Light The article uses PhoneTextBox and ActionIconTapped as an example. Password box? posted by: Aaron on 1/23/2012 12:01:39 AM Great article, thanks! I was wondering if anyone has come up with a way to use this for a password box, i.e. when the characters are masked. Hint at the center posted by: Aditya on 2/10/2012 11:38:02 AM How to align hint at the center of phonetextbox? I have set horizontal alignment to center but its not working. Thanks posted by: Thanks on 4/13/2012 11:35:08 AM Thanks posted by: waz on 5/11/2012 10:09:28 AM how to change the border color when focus Cannot change alignment of Hint posted by: Phien Le on 8/21/2014 2:14:04 AM I have multi-row phoneTextBox, so I updated for HintStyle by: <Setter Property="VerticalAlignment" Value="Top" /> but it doesn't work. Is there any wrong? text changing back to grey after watermark is displayed posted by: PHenry on 11/4/2014 6:17:01 AM I'm trying to use this control, however, when I start typing, the foreground color is black, then I hit backspace a few times so the hint message appears, and its grey\gray, that's fine, but after I start typing...the new text isn't in black, it's in grey\gray too. How do I show that in Black? In love with toolkits posted by: Asadujzaman Shamim on 11/23/2014 1:20:34 PM You made my life a lot more easier. Thank you guys for great work. multiline is possible ??? posted by: Nirmal on 2/3/2015 10:18:01 AM hey ur toolkit was so useful for adding hint in textboxes but does it allow multiline functionality of textbox ???? plz help me wid dat
http://www.geekchamp.com/articles/windows-phone-toolkit-phonetextbox-in-depth
CC-MAIN-2018-43
refinedweb
1,313
55.64
hi I am getting problem with Composite key .How can i create class for it. I read following code from this forum few days back. public class ItemKey implements java.io.Serializable { public String productId; public String vendorId; public ItemKey() { }; public ItemKey(String productId, String vendorId) { this.productId = productId; this.vendorId = vendorId; } public String getProductId() { return productId; } public String getVendorId() { return vendorId; } public boolean equals(Object other) { if (other instanceof ItemKey) { return (productId.equals( ((ItemKey)other).productId) && vendorId.equals( ((ItemKey)other).vendorId)); } return false; } public int hashCode() { return productId.hashCode(); } } When using BMP and using following code to remove EJB Object there is problem :: publiv void ejbRemove() throws RemoteException { ItemKey pk=(ItemKey)ctx.getPrimaryKey(); String id1=pk.productId; //Problem Here String id2=pk.vendorID; //Problem Here ........ ........ delete from ItemTable where id=id1 and idd=id2 ..... } Can anybody clarify me how to remove,... in case of composite key. What ctx.getPrimarykey() return in this case and what the value of pk will be. What in case of CMP Can anybody help me out in this case.Please send me code and other details of it. Thanks somal_sood at yahoo dot com composite key problem (2 messages) - Posted by: somal sood - Posted on: December 13 2000 03:00 EST Threaded Messages (2) - composite key problem by Uday Natra on December 14 2000 19:01 EST - composite key problem by Uday Natra on December 14 2000 19:03 EST composite key problem[ Go to top ] What is the error U r getting?? I couldn't see any wrong with Ur code. I did the same and it works fine. - Posted by: Uday Natra - Posted on: December 14 2000 19:01 EST - in response to somal sood composite key problem[ Go to top ] In case of CMP the Container will read the contents of the primary and will issue a delete statement accordingly. U need not do anything. - Posted by: Uday Natra - Posted on: December 14 2000 19:03 EST - in response to Uday Natra
http://www.theserverside.com/discussions/thread.tss?thread_id=2650
CC-MAIN-2017-22
refinedweb
331
58.79
STRCASECMP(3) Linux Programmer's Manual STRCASECMP(3) strcasecmp, strncasecmp - compare two strings ignoring case #include <strings.h> int strcasecmp(const char *s1, const char *s2); int strncasecmp(const char *s1, const char *s2, size_t n); The. The strcasecmp() and strncasecmp() functions return an integer less than, equal to, or greater than zero if s1 is, after ignoring case, found to be less than, to match, or be greater than s2, respectively.strcasecmp(), strncasecmp() │ Thread safety │ MT-Safe locale │ └────────────────────────────┴───────────────┴────────────────┘ 4.4BSD, POSIX.1-2001, POSIX.1-2008.. bcmp(3), memcmp(3), strcmp(3), strcoll(3), string(3), strncmp(3), wcscasecmp(3), wcsncasecmp(3) This page is part of release 4.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2016-07-17 STRCASECMP(3)
http://man7.org/linux/man-pages/man3/strcasecmp.3.html
CC-MAIN-2017-17
refinedweb
142
54.63
C, Objective-C, C++... D! Future Or failure? 791) It was essentially BCPL stripped of anything Thompson felt he could do without, in order to fit it on very small computers, and with some changes to suit Thomson's tastes (mostly along the lines of reducing the number of non-whitespace characters in a typical program). According to Ken, B was greatly influenced by BCPL, but the name B had nothing to do with BCPL. B was in fact a revision of an earlier language, bon, named after Ken Thompson's wife, Bonnie.:Microsoft will come out with it's own version (Score:5, Funny) Re:Microsoft will come out with it's own version (Score:4, Funny) Or instead of sharp, use the hash orientation and you could have D-bong... mind you'd have to be on drugs to be able to use it though ;-) full C compatability? (Score:4, Interesting) In "small program" languages like perl, giving people lots of ways to do things is a feature. In a "large program" language, providing both C compatability and garbage collection is a maintainability nightmare. You'll have people who use both, or worse yet, who only understand one, so to understand the mixed code that results from a hybrid langage like this, you'll have to be utterly proficient with -both- languages. Re:full C compatability? (Score:4, Insightful) I wish the article actually compared to objective-c, as the story's poster seemed to imply... Re:full C compatability? (Score:5, Informative) Re:full C compatability? (Score:5, Insightful) Because it's not handled by D's garbage collection, it still needs to be freed. I'm sure this will make those developers who love to leak memory even worse. Re:full C compatability? (Score:5, Insightful) It happens a lot. I've seen a few expensive C-based libraries that clearly show their designers struggled with the classic caller-callee allocation dillema and lost. Debugging memory leaks in programs that use these libraries is typically hopeless and requires high effort-versus-progress computationally-expensive run-time checks to find them. I like C quite a bit, but it is disheartening to see such a simple malloc() function cause so much pain. Re:full C compatability? (Score:5, Interesting) One thing I'd really like to see in Java is Object.free(), which would tell the garbage collector that the object is garbage and can be freed, and cause any further use of the object to throw a FreedMemoryException. This would be useful both as a hint to the system (letting it get rid of things the object references early) and as a debugging aid (so you can find cases where stuff remains in use after you don't think it will). Of course, it violates Java's sandbox design to have a C-style free() which recycles the address space. Re:full C compatability? (Score:5, Insightful) Just about anything you need, there's a C library for it. Think nice things like opengl,pam,openssl,GUI librairies, database libraries, and heaps more. Having access to those is very nice, and you don't have to wait anyone to port those to a new language(which probably won't happen anyway.) I'd imagine how far C++ had gotten if it couldn't use C libraries.. Don't need to imagine -- it was the whole point (Score:5, Informative) Stroustrup is on record many times over as saying that link compatibility with some existing language was a design criterion for him. If not C, then something else. It is an axiom in the C++ community that compatibility with C is both C++'s greatest strength and its greatest weakness. Re:full C compatability? (Score:5, Insightful) For those of us who don't like unpredictable... pauses... in our programs while the garbage collector does its work, will we be able to turn off garbage collection entirely or run the garbage collector only at specified times? I'll answer my own question: even if this is possible, if D ever becomes a serious language, we will be using libraries written by other people, libraries that do rely on garbage collection. So, no, we won't (realistically) be able to turn off the garbage collector, which means that we won't be able to write real-time programs, and it'll even be touchy writing programs, such as, oh, audio or video players, that require near real-time performance. (Not to mention the disappointment we all felt with the various java window-widget APIs (AWT, Swing) that looked great but couldn't run fast enough to respond to the mouse.) Look folks, taking care of your own garbage wasn't possible in C for a library writer (even ones returning opaque pointers to structs that allocated their own memory) because you had to rely on the library user to call your cleanup function(s). But the library user could clean-up. The problem was essentially that some programs didn't care enough to be careful -- pointers actually had to be tracked. Now, it's fine if a library user wants to add on a garbage collector by re-writing malloc to track allocations. But libraries, which are intended to be used by lots of programmers, to write code, and by lots and lots of end users who run code should not use garbage collectors themselves -- because that forces the library user to use garbage collection too. But in C++, library writers can write libraries that take care of their own garbage even when used by careless users, because the compiler will automatically call class destructors which can do clean-up. (Yes, except in the case of derived classes -- the writer of the derived class has to explicitly write a dtor to ensure the parent class dtor is called.) And in C++, with the Standard Template Library, there's little need for non-library writers to do explicit allocation at all -- std::vector and std::string and std::auto_ptr, just by themselves, take care of most of the problems of memory leaks and buffer overruns. If you're using C++ and you feel that you're a good enough programmer that there's real need for you to be calling So why complicate things with garbage collector and tracking down circular references and unpredictable pauses? Garbage collection is a bad answer for non-trivial programs, and pretty much necessary for trivial programs. Re:full C compatability? (Score:3, Informative) Yeah, nice FUD. Java is slow because it's bytecode, not because it's garbage collect Re:full C compatability? (Score:5, Insightful) Re:full C compatability? (Score:4, Interesting) Re:full C compatability? (Score:5, Informative) My point was that to avoid the problem of circular references, the garbage collector does have to be more sophisticated, and sophistication takes time and (memory) space -- time and space that a program may or may not be able to spare. "for very many applications modern garbage collectors provide pause times that are completely compatible with human interaction. Pause times below 1/10th of a second are often the case," Pause times below 1/10th of a second? Hmm, how much below? TV-quality video is 24 frames per second, so a one-tenth second pause means dropping two or three frames. Acceptable? Perhaps, but not desirable. "Does garbage collection cause my program's execution to pause? Not necessarily." Yes, if you read my post carefully -- perhaps you missed a word or two when the garbage collector in your head did some clean-up -- I didn't say that pauses were inevitable. My complaint -- and not just mine, it's no revelation that garbage collection has may detractors -- is that the pauses are not predictable by writer of the program. With non-garbage collected language, I know that memory allocation will either succeed ort fail, and I know (or a library writer knew) when allocation happens, because I'm explicitly coding it. So I know, at this particular point in my program, either allocation succeeded or failed. But garbage collection can happen at any time, and cause a pause at any point in my program -- even when I'm needing to re-fill under-run buffers or read volatile memory or make time-critical choices. With garbage collection, I no longer have an algorithmic program, in which I can say what it's doing at any particular point in the code. Then come back and make some informed comments, instead of spouting nonsense. Thank you. That overly hostile arrogance suggests you're either a zealot or a fourteen year-old. That sort of blustering generally indicates someone who isn't that confident in himself or his argument, and so wishes to preempt questioning by being a posturing like a "tough guy"; it's particularly prevalent on the net -- though I'll grant that you didn't hide behind an Anonymous Coward post. Adults can disagree and discuss things without resorting to insults and attitude -- and I think you'll be able to do that too, with a little more experience. Re:full C compatability? (Score:5, Informative) No it doesn't. Handling of circuler references falls naturally out of most GC algorithms. One of the simplest possible memory algorithms is the mark & compact GC, which handles circuler references naturally. Pause times below 1/10th of a second? Hmm, how much below? TV-quality video is 24 frames per second, so a one-tenth second pause means dropping two or three frames. You disable the GC in those cases. A good GC will give you the option to manually manage memory in certain cases (say, through a pool allocator), so in any time-sensitive paths, you can disable the GC and rely on those other options. There are also real-time GC's that have absolutely bounded pause times. But garbage collection can happen at any time, and cause a pause at any point in my program -- even when I'm needing to re-fill under-run buffers or read volatile memory or make time-critical choices. You do realize that you have this issue with any modern OS? A malloc() can take tens of thousands of clock-cycles if it decides to mmap() to get more backing memory, and the kernel decides to block the app. Re:full C compatability? (Score:4, Informative) Then again, malloc needs sophistication as well, and can be every bit as slow as a good garbage collector. Indeed, even garbage collectors for C (try google with "garbage collector") can outperform the regular glibc malloc sometimes, even when there is NO reference counting involved. Which btw is another issue, reference counting + malloc pretty much combining the bad (and slow) things from both worlds. Pause times below 1/10th of a second? Hmm, how much below? TV-quality video is 24 frames per second, so a one-tenth second pause means dropping two or three frames. Acceptable? Perhaps, but not desirable. Such pausetime on a machine capable of playing TV-quality video in the first place indicate an awful garbage-collector. Even a stop-and-copy shouldn't take that much time, and these days we have generational collectors which only bother with the "youngest" stuff, that is, stuff most likely garbage. And you can make that incremental, it's not even very hard, and you can then slice the "pause" into almost as small parts as you want. There are collectors which provide real-time guarantees. Mallocs usually don't. With non-garbage collected language, I know that memory allocation will either succeed ort fail, and I know (or a library writer knew) when allocation happens, because I'm explicitly coding it. So I know, at this particular point in my program, either allocation succeeded or failed. Except this isn't necessarily true either. One example is Linux, which doesn't guarantee that there is memory left, because memory isn't allocated when you map pages, but when you touch them first time. If you allocate memory, and there's not enough free virtual memory to fill in the pages when you actually need them first time, then OOMkiller is called. Speaking of which, unless you lock (all) pages into memory, you won't know whether there'll be pauses anyway, since that memory of yours might just as well be a block of hard-drive space. Welcome to the world of virtual memory. Guess already which pause takes longer, a call to an incremental collector or the swap-in? Oh, and do you have a deterministic thread scheduler in your OS? Finally, if you have an incremental collector (designed for this) you could run it with priority lower than your "real-time" tasks, and you could then collect only when the processor would be idle otherwise. Dijkstra's classical tri-coloring was actually developed for a scenario where there is one processor for running the task (mutator) and another for collecting the heap (collector). That you didn't think of this pretty much proves you've got no idea about garbage collectors. Just because there are bad collectors doesn't mean they all are bad. And even stock solutions, over are the days when Lisp machines hanged for hours to collect their memory. Unless you are running the CPU at 100% all the time, you'll have plenty of time to collect.) 2. Java and 3. If a new popular language does come on the scene, you won't notice it until it has nearly taken over the world. Oh, and developers will love it so much they'll drop everything else (like what happened with Java). Re:Old news (Score:4, Interesting) Java is successfull because: Obligatory java response... (Score:5, Informative) Re:Obligatory java response... (Score:5, Informative) That's because 1.4 is the CURRENT version (Score:4, Informative) Well, since 1.5 is still in beta [sun.com] , I don't see how this is an invalid comparison. Re:That's because 1.4 is the CURRENT version (Score:4, Insightful) Re:Obligatory java response... (Score:4, Insightful)) Computers > Programming > Languages > D [google.com] New programming languages are interesting, and sometimes I wonder what the next "big thing" will be. Will we have another big, revolutionizing, new concept like "object-oriented programming" that you simply must know in a near future? Re: Fads (Score:5, Insightful) There are fads in programming just as there are in clothing and management methodologies. And there are always people telling you to adopt the flavor of the month, I mean wave of the future if you don't want to become obsolete. And you can usually ignore them. I sat out PL/1, which, well, gee, it had BIG BLUE behind it (in a day when IBM's domination was far more complete than Microsoft's is now). And it doesn't seem to have done me much harm. True, you can score big by being the person who actually has the "two years experience in" (language-that's-only-existed-for-two-years) that the recruitment ads want, but if you go this route remember that it's easy to be knowledgeable in the latest language if you've just spent some unpaid years in college learning it. If you want to make a career out of always having the skill that's in demand, keep in mind that the only reason the skill is in demand is because it is rare--and you'll need to be quite clever at guessing the next fad, and dedicated about finding out how to educate yourself in it while keeping your day job. Re: Fads (Score:4, Insightful) However I just look as this forum and I can't fail to notice that most of the mainstream languages are so because of what they can offer to a certain target of people. For instance you can see how C / C++ remain unbeaten in the low-level programming field. A friend of mine told me perl is used a lot in science (and web programming as well). Something like Java is quite useful for multi-platform development. Visual Basic makes fast development for Windows true. And of course other languages have their purposes too. So to put it simple, I get the feeling that the future will divide programmers into different fields of programming. Much more than we are split now, that is. So I am not sure that the "wave of the future" will be just one winner, like it's been in the past. I already can see that there are several winners for several different reasons. My 2 cents, Diego Rey try, catch, finally (Score:3, Interesting) In theory this would be an ideal solution. It forces programmers to think about what they're doing. In practice, it doesn't. Coders are too busy thinking about the actual problem. Error checking gets in the way. They end up implementing the quickest way of ignoring the problem. The result is that we're no better off than if we just checked return values. The application should be doing what the user wants. Not the other way round. Re:try, catch, finally (Score:5, Insightful) Exceptions provide an obvious answer to the problem of how to handle different types of problems. If a file doesn't exist and someone tries to open it, a FileNotFoundException is thrown. If a file exists but the permissions don't allow access, an IOException is thrown. Exceptions also provide a MUCH cleaner way of propagating errors. If one method calls another method to open a file, and the file can't be opened, how do you tell the original caller that there was a problem? With exceptions, you simply declare that your method throws IOException, and then (typically) skip the try-catch-finally block. actually, the more important reason for exceptions (Score:5, Interesting) Exceptions let you throw the error where it happens and catch it where it makes the most sense, however far down the stack that may be. Re:actually, the more important reason for excepti :Dropping multiple inheritance ? (Score:4, Insightful) How about StreamSocket [slamb.org]. Okay, multiple inheritance isn't required in the strictest sense, but object-oriented programming isn't, either. MI makes this class make much more sense - it is both a stream and a socket. In a language providing only support for multiple interfaces, you'd have to reimplement at least one of those in the derived class. You'd probably end up just dispatching all of the calls in the derived class to a shared implementation elsewhere. Not nearly as clean. Or you could pull a Java and have a getStream() method on the StreamSocket. (Make the caller do the dispatching to the shared implementation.) I don't like it either. Plus, if you were gonna copy multiple inheritance from c++ you'd need to copy all those nasty casting operators. I don't see how eliminating MI makes any of them unnecessary: Re:The preprocessor is archaic? (Score:3, Informative) Looking forward to job ads (Score:5, Funny) Looking forward to job ads saying : Duh !! Re:Looking forward to job ads (Score:3, Interesting) D already exists? (Score:3, Interesting) Can anybody confirm this in any way? p.s. If I'm not mistaken there's also an "F", based on Fortran if I'm not mistaken. Unneeded history (Score:4, Insightful) D is designed to address the shortcomings of C++. While a powerful language, years of history and unneeded complexity has bogged down that language. They want to overcome C++'s "history" while still maintaining C compatibility. Suddenly, I'm confused. A, B, C, D, ... R! (Score:5, Funny) What about C++0x? (Score:3, Interesting) Last I heard about that was in this Slashdot story [slashdot.org] from 2001...exactly 3 years ago, nearly to date. But that was supposed to be the next official holy grail, no? C! (Score:5, Funny) with apologies to eminem... to the tune of 'without me' Two GUI classes go on the inside; on the inside, on the inside Two GUI classes go on the inside; on the inside, on the inside Guess who's back Back again C is back Tell a friend Guess who's back, guess who's back, guess who's back, guess who's back guess who's back, guess who's back, guess who's back.. Sun's created a monster, cause nobody wants to code Java no more or basic, but something quicker Well if you want speed, this is what I'll give ya A language called C that won't let you do "is a" Some "has a" that makes me feel sicker than the bugs when I build patch that's critical using make to compile and be building with a language that allows object orientating Your var name's too long, now stop line breaking Cause I'm back, I'm a new var and instantiating I know that you got a job Bill and Steve but your company's trust problem's complicating So GCC won't follow ANSI or copy memory, so let me see They try to recompile with visual C But it feels so bloated, without C So, connect with SLIP, or create a RIP Fuck that, write a function, and shift some bits And get ready, and use a pattern like proxy MS just settled their lawsuits, expect a levy! Little Hallions, MS feelin litigious Embarrassed that users still listen to RMS They start feelin like ellen feiss 'til someone comes on the television and yells SWITCH!!! A visionary, beard's lookin' scary Could start a revolution, lives in a bear cave A rebel, although emacs ain't real fast and there's the fact that I only got one class And it's a disaster, such a castastrophe for you can see so damn much of my class; meant to use C. Well I'm back, i-j-k-x-y-z-out-ta-var-names Fix your damn indentifier tune your code and I'm gonna open it, under vim, maybe pico and variables, no such thing as a member I'm interesting, the best thing since assembly but not Polluting the namespace with inherits We're Testing, your functions please Feel the tension, soon as someone commits some C Here's my webpage, my code is free who'll pay the rent? What, You code with vi? An object in AT&T, you can get your ass kicked worse than those little C++ bastards And Ruby? just like a static property not even used with KDE and QT You're not like C, you're too slow, let go It's over, nobody'll code in OO! Now let's go, -9's the signal I'll be there with a whole list of XM and L I use SOAP, XPATH with XSL And you know perl's just like coding in symbols everybody only just codes C so this must mean, some com-pile-ing but it's just me i'm obfuscating And though I'm not the first king of controversy And i'm not the worst thing since assembly but I am the worst thing since 86 XFree do use BASIC and JSP and used it to get myself wealthy Here's a concept that works twenty million new coders emerge but no matter how many fish in the sea half of them can't even code C Re:C! (Score:4, Funny) (sorry for the OoO, but else it wouldn't post cause I had to few chars per line) OoOoOoOoO WRITE IN C OoOoOoOoO OoOoOoOoO (sung to The Beatles "Let it Be") OoOoOoOoO OoOoOoOoO When I find my code in tons of trouble, OoOoOoOoO Friends and colleagues come to me, OoOoOoOoO Speaking words of wisdom: OoOoOoOoO "Write in C." OoOoOoOoO OoOoOoOoO As the deadline fast approaches, OoOoOoOoO And bugs are all that I can see, OoOoOoOoO Somewhere, someone whispers" OoOoOoOoO "Write in C." OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write in C, write in C. OoOoOoOoO LISP is dead and buried, OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO I used to write a lot of FORTRAN, OoOoOoOoO for science it worked flawlessly. OoOoOoOoO Try using it for graphics! OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO If you've just spent nearly 30 hours OoOoOoOoO Debugging some assembly, OoOoOoOoO Soon you will be glad to OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write In C, yeah, write in C. OoOoOoOoO Only wimps use BASIC. OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write in C, oh, write in C. OoOoOoOoO Pascal won't quite cut it. OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO Guitar Solo OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write in C, yeah, write in C. OoOoOoOoO Don't even mention COBOL. OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO And when the screen is fuzzy, OoOoOoOoO And the editor is bugging me. OoOoOoOoO I'm sick of ones and zeroes. OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO A thousand people people swear that T.P. OoOoOoOoO Seven is the one for me. OoOoOoOoO I hate the word PROCEDURE, OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write in C, yeah, write in C. OoOoOoOoO PL1 is 80's, OoOoOoOoO Write in C. OoOoOoOoO OoOoOoOoO Write in C, write in C, OoOoOoOoO Write in C, yeah, write in C. OoOoOoOoO The government loves ADA, OoOoOoOoO Write in C. Re:C! (Score:4, Funny) You fucking geek. You must be new h... No, wait... Ooops (Score:3, Funny) Libraries (Score:5, Insightful) Re:Libraries (Score:4, Informative) Nice to see a system language (Score:5, Insightful) Really, kudos to Walter Bright for this little piece. It needn't become popular, if it stays good it's plenty more than enough. High Hopes (Score:5, Insightful) The only thing about D that bothers me is the inclusion of the Garbage Collector and several other runtime components that occur in the background of your program. I'm not sure I really like that; it sounds a little *too* close to Java, if you get my drift. What I'd really love to see, and what I hope D inspires if not actually implements, is a language with the power of C/C++, but the easier syntax of Java. D *seems* to be the first step in that direction. I hope it goes further. Hmmm.... (Score:4, Insightful) Walter Bright's D is free as in beer but not speech, and there's only the one compiler. Do we really need another language that's a bit like C++? It's not entirely closed source (Score:4, Informative) David Friedman has already had success connecting this frontend to the GCC code generator, in fact: Success is Elusive (Score:5, Insightful) D is certainly a very interesting language; However, there are many interesting languages. Over the years, I've explored Prolog, Modula-2/3, Oberon, Haskell, Ocaml, and others. All of those embody some very interesting concepts; in some cases. they may be "better" than mainstream languages. But the fact remains that no one has ever paid me (or anyone else I know) to write code in Ocaml, Haskel, Oberon, Prolog, or D. For the most part, it is C, C++, and Java that feed my family; upon occasion, clients need Python and Fortran 95. I'd love to be paid for a project in D or Ocaml; I'm not going to bet the farm on that happening. I wish the world of languages (both human and computer) was more diverse -- but reality suggests a hard road to popularity for original concepts like D. I respect and appreciate Walter Bright's abilities; his Zortech compiler paved the way for C++, and provided excellent optimization. I wish him luck in promoting his vision. Wishing for this yesterday (Score:4, Interesting) Being able to use pointers if need be is also something really nice about this language that I have found that i really long for in Java at times (not so often to actually use, but oh how much easier it would be to explain the way some things work if pointer wasn't a dirty word in java). I have not really looked at C# much, but it seems to be freed of many of the complaints about Java (lack of pointers for example), but still has the problem of being a bytecode compiled language running in a VM, and adds the problem of being owned by the company that everyone loves to hate (or at least not trust). AFAIK C# also is not C compatible. I think these facts leave at least a niche for D, and if it's well done it could soon become one of the DeFacto languages of the future. It seems like development has been going on for quite a while on this, I'm honestly suprised that i've never run across it before, since I have been, mostly out of curiosity, looking for just this. I'm not sure how it will pan out, but I am definitly going to give this new language a shot. D as Delphi? (Score:3, Funny) All the C++ programmers are laughing at you... (Score:3, Funny) Re:All the C++ programmers are laughing at you... (Score:5, Interesting) So if somebody with those credentials thinks there are things we could do better, maybe we should at least take the time to listen to him.... Obsession with C-like syntax (Score:5, Insightful) There are far better syntax models for an object-oriented programming language than C. I wish people who feel a need to create new languages were willing to base their efforts on a framework more suited to their goals. Bander (in curmudgeon mode) D does not have dynamic classloader (Score:5, Insightful) P, not D. (Score:3, Interesting) Buncha wetbacks. Re:P, not D. (Score:5, Interesting) See the Wikipedia [wikipedia.org] Full C compatibility sucks (Score:5, Insightful) In sum: C++'s biggest problem is its C legacy. Tear it away, add real type-safety, and you have a language much more powerful and safer than Java. D WON'T compile C code (Score:5, Informative). Personally, I've been praying for years for a language like this to get adopted. Why is it that I can only use full object oriented programming for web/network applications?! Sure.. I know you can do more than this with Java or C#, but is it really practical?? Usually it's just a massive drain on resources. If you need high performance, then you can't do better than C++. Unfortunately, C++ is a transitional language (just look at it's name..). A pure object oriented, fully compilable language that has no VM is desperately needed. I can't believe it's 2004, and such a thing still hasn't been adopted. I hope D (or something like it catches on.. As much as I loved it when it first came out, I'm sick of wrestling with C++ code. I was hoping for more C (Score:5, Interesting) I'm not sick of C at all. I was hoping for more like ANSI C 04 or something (like ansi c99), more low-level, more control, less objects, less behind-the-scenes crap like garbage collection. The quality of code is always higher with C than C++, unless VERY well programmed with C++, and for that reason alone, C code is reused more despite being less reusable. C++ allows for more cheap right-out-of-college employees, while C gives us quality code that lingers for decades. Think UNIX for a second, and give me an example of something in C++ that has lived so long and so well. I hate fatter higher-level languages, and we all seem to hate backwards compatibility. If a language has 100 keywords, and you make the next version backwards compatible with 100 more keywords, any sample code can have 200 different keywords in it. Thats making it all tough. C is like RISC, fewer instructions that can be used more creatively, so a smaller amount of code can give you more functionality. Its all a conspiracy by computer manufacturers. Say you come up with a language that produces binaries slower than Java, all of a sudden a Pentium 3.0GHz with HT is too slow for it, the market keeps pushing for faster and capitalism works. doesnt matter at all that you can run a file/print/mail/application/web server on a 386sx or an ARM MCU 2mm^2 in size running some operating system made in C. Shenanigans (Score:4, Interesting) Now before anyone goes on and on about the existence of GC [hp.com] for C/C++, my definition of "real" GC is that it has to be a NON-conservative, compacting, ephemeral collector. Collectors outside this definition have their place: they help you clean up leaks. But they cannot guarantee two features which are crucial to collectors in any modern language: safety and speed. Safety. You just can't tell the difference between a pointer and an int in C. Thus there are all sorts of ways of hiding pointers as ints in the language, causing memory leaks. Conversely, if you've encoded a pointer in some way, or have allocated hanging off the edge of a struct (a *very* common occurrance -- Objective-C uses this as its basic objejct storage procedure underneath, or used to) the collector may reap your memory before you're done using it. Ungood. Speed. One of the things that makes HotSpot kewl is that it moves around memory as it does collection; as a result long-lived objects get compacted together, taking advantage of cache loads. This can't be easily done in GC if it's not allowed to fool with your pointers safely. The point of garbage collection is to be ubiquitous and invisible. This isn't possible in C/C++. Google incompatible (Score:4, Funny) Exactly how are we expected to Google for such things? Please give your projects distinctive names with more than one character, thanks. OBJECTIVE-C: APPLE VERSION? (Score:5, Interesting) Nice to see once more another myriad of articles that espouse all sorts of wonderful capabilities while either due to ignorance or purposeful deception leaves Apple's Objective-C compiler out of the comparison list. No matter. All in due course. Limbo is the only legitimate successor of C (Score:5, Interesting) Limbo was developed in Bell Labs by the the same people that created Unix, C, and Plan 9 [bell-labs.com], and someone once described it as "the language the creators of C would have come up with if they had been give and enormous amount of time to fix and improve C", that is exactly what Limbo is. Dennis M. Ritchie: The Limbo Programming Language [vitanuova.com] Brian W. Kernighan: A Descent into Limbo [vitanuova.com] Together with Inferno [vitanuova.com], Limbo forms the best platform for distributed applications. Inferno and Limbo were recently released under an open source license and you can download them here: Inferno/Limbo are the only hope for some sanity in the software industry! Best wishes uriel Screw More Languages (Score:4, Insightful) After Java and C#, why do we need yet another rendition of "what C should have been"? As far as I'm concerned the standards for C (89,99) are a reasonable place for this language given all its history. More than anything else we need more standardization of libraries, in the same vein as libc or the STL, but updated to include almost 20 years worth of experience with all kinds of drivers. Programmers don't need a new language as much as they need powerful, open, standardized, updated libraries. Nice Comparison (Score:4, Interesting) Nice feature comparison, except for the fact that it's wrong. Perhaps the authors of D would do better if they actually learned C++ first? Designing a new language when you're clueless is the first sign of disaster. Look at Java. Resizeable arrays: D Yes, C++ No BZZT. Arrays of bits: D Yes, C++ No BZZT. Built-in strings: D Yes, C++ No BZZT. Array slicing: D Yes, C++ No BZZT. Array bounds checking: D Yes, C++ No BZZT. Associative arrays: D Yes, C++ No It's called a map. Inner classes: D Yes, C++ No BZZT (perhaps they meant specifically the automatic parent resolution?) typeof: D Yes, C++ No BZZT. foreach: D Yes, C++ No BZZT. Complex and Imaginary: D Yes, C++ No BZZT. Struct member alignment control: D Yes, C++ No Give me a break, every C++ compiler supports this. It's just implementation defined. Now go look at D's page on Design By Contract in C++: here [digitalmars.com] Notice that any C++ programmer can come up with a far better implementation than theirs using child class destructors and inlining. In fact, Stroustrup even put one in his book in case you're having trouble getting the brain in gear. The comparison list combines cluelessness and sophistry ("C++ doesn't have this feature! It's in the STL, not the language" - oh please) to try to promote their own half-baked language. Conclusion: Yet another half-baked useless language. The problem with C++ (Score:5, Informative) C++ itself is undergoing a revision. But the plans for it aren't that good. The big problem with the C++ committee is that most of the members don't want to admit the language has major problems. Neither does Strostrup, who has written that only minor corrections are needed. If that was really true, we wouldn't need all those variants on C++ (Java, D, C#, Objective-C, Managed C++, etc.) The committee is dominated by people who like doing cool things with templates. Most of the attention is focused on new features for extending the language via templates. It's possible to coerce the C++ template system into running programs at compile time (see Blitz [oonumerics.org]). Painfully. LISP went down this dead end, where the language was taken over by people who wanted to extend the language with cool macros. (See the MIT Loop Macro. [cmu.edu]) We all know what happened to LISP. What isn't happening is any serious attempt to make C++ a safer language. C++ is the the only major language that provides abstraction without memory safety. That's why it causes so much trouble. C++ objects must be handled very carefully, or they break the memory model. This usually results in bad pointers or buffer overflows. Java, etc. are protected against that. This is the basic reason that writing C++ is hard. It's not fundamentally necessary to give up performance for memory safety. I've written a note on "strict mode" for C++ [animats.com], an attempt to deal with the problem. I'm proposing reference counts with compile-time optimization, rather than garbage collection. The model is close to that of Perl's runtime, which handles this well. Garbage collection doesn't really fit well to a language with destructors, because the destructors are called at more or less random times. Microsoft's Managed C++ does that, and the semantics of destructors are painful. With reference counts, destructor behavior is repeatable and predictable, so you can allocate resources (open files, windows) in constructors and have things work. The main problem with reference counts is overhead, but with compiler optimization support and a way to take a safe non-reference-counted pointer from a reference counted object, you can get the overhead way down and reference count updates out of almost all inner loops. C++ itself isn't that bad. The language could be fixed. But I don't see it happening. Microsoft has gone off in a different direction with C#. SGI, HP, DEC, Bell Labs, SCO, and Sun are defunct or in no position to drive standards any more. What C++ needs is some hardass in a position to slam a fist on the table and say "Fix it so our software doesn't crash all the time". It doesn't have one. Re:Toss out C. (Score:4, Insightful) Re:Toss out C. (Score:4, Insightful) However, I'm not sure I agree for software where security is little or no concern but speed is the main issue. One example of that kind of software is games. I'm author of a 3D engine in C++ and I also program in Java for a living. I think that for things like these low-level languages like C and C++ are still the way to go. You can argue that computers are getting faster and faster. But user expectations about what those games can do are also constantly rising. Greetings, Re:Toss out C. (Score:3, Insightful) Re:Toss out C. (Score:5, Insightful) Leads to cleaner code, in my opinion. C++ doesn't require you to do it, but I still do. declare all your functions before using them What's the big deal about this? If your tired of typing, you either need to learn to copy/paste, use an IDE that will generate code for you, or find a new industry. C takes much more time to compoile than Java/C# because all the stupid headers take forever to parse. Ever hear of "Make" and "Makefiles"? You don't need to keep recompiling things than havn't changed. Pointers are a problem because they allow unsafe code that forces the hardware to make up for lack of security in the software. Repeat after me, security is a software problem. Pointers, in some capacity, are needed for low level programming. If you don't need access to hardware, then you might have a reason to consider something besides C. Re:the most interesting part of that table (Score:4, Informative) Troll away, maybe people will start paying attention when the dozens of useful Java libraries are available from C#. *shrug* What I really need right now though is a nice, clean looking language with access to well-thought-out libraries for image loading/saving and virtual filesystem access. I can't find any of this stuff in a portable form other than on Java, unfortunately. :-( Re:What about Objective C? (Score:4, Insightful) I agree that the world doesn't need yet another extended C. If you're going to build a modern buzzword-compliant language, build it from scratch! Considering the era C came from, it's a fundamentally good procedural language. Not perfect. Probably not even great. Just good. In particular, its terse syntax and heavy reliance on operators instead of keywords makes C code dense and hard to read. You can write readable C code, but it takes a conscious effort, some documenting, and some discipline *not* to use every clever coding trick that pops into your mind. (I've read one opinion that C really stands for Clever, because it encourages you to do all sorts of excessively clever things that you'll later regret.) The reason why the whole software industry seems hell-bent on created mutated versions of C, several decades later, is beyond my understanding.
http://developers.slashdot.org/story/04/04/19/1124204/c-objective-c-c-d-future-or-failure?sdsrc=rel
CC-MAIN-2013-48
refinedweb
7,274
71.24
[vi simulator] how to highlight a word I am trying to enhance my visimulator plugin, when I am searching the text with “/abc” or “?abc”, I want to highlight the matched words. i want to the effect to be exactly same as highlighting when double clicking on the word. Currently I only noticed SCI_COLOURISE. any advice? Regards, SCI_COLOURISEis to have the lexer (e.g. C++, HTML, JavaScript, etc) re-parse and determine styles. What you would want are indicators. This is what N++ uses for its smart highlighting. They are fairly easy to use and give you a wide range of options. Keep in mind Notepad++ uses a handful of indicators itself so make sure not to accidentally reuse ones already in use. I just find a way to highlight a word, but I met another issue. After I highlight a word, and I notice it did not automatically clear when I double click on another word. There’s nothing “automatic” about it, you have to manually handle everything. See: SCI_INDICATORCLEARRANGE I thought it will be cleaned automatically if I use the same indicator as the one used when double clicked I guess in theory if you use the same indicator number then it should be cleared but I can’t say for certain. What indicator number are you using? Maybe provide an example/code of what you are trying to do? SendMessage(nppData._scintillaMainHandle, SCI_INDICSETFORE,INDICATOR_CURRENT, 0x00FF00); // green #00d000 SendMessage(nppData._scintillaMainHandle, SCI_INDICSETALPHA, INDICATOR_CURRENT, 100); SendMessage(nppData._scintillaMainHandle, SCI_INDICSETSTYLE,INDICATOR_CURRENT, INDIC_ROUNDBOX); SendMessage(nppData._scintillaMainHandle, SCI_SETINDICATORCURRENT, INDICATOR_CURRENT, 0); SendMessage(nppData._scintillaMainHandle, SCI_INDICATORFILLRANGE,5, 7); and I tried to set the value of indicator_currentto 31 and 27. and hoping it will be automatically cleaned when double clicked on an word. Notepad++ uses 29 for its smart highlighting. Is it possible to invoke the npp’s methods? I was thinking why I need to reinvent the wheel. e.g. How can I call the find function? I did not find any sample about it, is it possible? Is it possible to invoke the npp’s methods? In some cases yes but in this case no. I was thinking why I need to reinvent the wheel. Luckily it is a small wheel. Here is the Notepad++ code that does the smarthighlighting. Unfortunately it isn’t as simple as copy/paste. If you want to see a “leaner” version of how to do something similar to smarthighlighting take at this Lua code. This isn’t perfect but works under alot of circumstances. You can even incorporate some of the things Notepad++ is doing as well. I understand you are writing it in C but most of the lines of Lua translates into a single line of C. For example something like: editor.FirstVisibleLine Is equivalent to: SendMessage(nppData._scintillaMainHandle, SCI_GETFIRSTVISIBLELINE, 0, 0); The one caveat being editor:findtext()which you can take a look at SCI_FINDTEXT Anyway! I will just use the Ctrl+F3. Don’t have much time to work on a editor plugin. I use command line for most of time. smarthighlighting take at this Lua code Hi dail, I took a look at the code found at the link you provided, and although I don’t use Luascript with Notepad++ I think I can follow the logic well-enough. Knowing Pythonscript and how the callback mechanism works there, it appears your intent in this code is to fire it off every time an “update UI” event is generated. The problem is (and again I’m thinking of my Pythonscript familiarity) I think that the code itself will cause more of those “update UI” events to be generated, thus spiraling somewhat out of control (at least while the caret is on a “word”), calling your callback many many times. Am I missing something here? That is actually a really good question. I’ve always gone under the assumption that it doesn’t fire off another “updateUI” event. Looking at the Scintilla documentation for SCN_UPDATEUIstates: Either the text or styling of the document has changed or the selection range or scroll position has changed. Note: “style” is separate from “indicators” There are separate notifications when indicators have changes (i.e. SCN_MODIFIED). If you take a look at the Notepad++ source code you will also see that it does the “highlighting” during the SCN_UPDATEUIevent. See this line of code So it appears that doing this is safe. So for some fun I ported the code to Pythonscript on my lunch break. Due to limited time, I left out the callback stuff, and am just putting the caret on a word and running the script and seeing all occurrences in the currently visible window of the editor tab become highlighted. However, I have another script that tells me which callbacks are happening (I run that first). When I invoke the “highlight” script, I see MODIFIED callbacks occur with SC_PERFORMED_USER | SC_MOD_CHANGEINDICATOR flags set (which seems correct), but I also see UPDATEUI callbacks occurring as well, after the MODIFIED ones. Thus this leads me back to suspecting that it would just generate something of a big callback loop if actually set up as part of a UPDATEUI callback; I’ll try that out soon. - Claudia Frank last edited by Hi Scott, dail is correct, as long as you take care that your callback function doesn’t generate ui updates it is save to use. I’m using it in my updated regextester script for some weeks now without a problem. Cheers Claudia So last night I actually tried it out with it installed as an update-ui callback. After installation, putting the caret inside a word results in the FIRST occurrence of that word flashing rapidly. It seems like what I feared would happen is actually happening–something in itself is triggering multiple re-calling of the callback, and it is clearing and setting the indicator over and over. Maybe there is something wrong with my Pythonscript port of this code; it is short enough so I have included it below. Perhaps someone can see a deficiency? I see dail used “return False” a few places in the original code; I never heard of a callback returning a boolean; however I included it in the port (along with some “else” placements which would permit removing the return statements without affecting functionality). Anyway, here’s the code:) def callback_sci_UPDATEUI(args): def getRangeOnScreen(): firstLine = editor.getFirstVisibleLine() lastLine = firstLine + editor.linesOnScreen() startPos = editor.positionFromLine(firstLine) endPos = editor.getLineEndPosition(lastLine) return (startPos, endPos) def) clearIndicatorOnScreen() (startPos, endPos) = getRangeOnScreen() temp = editor.findText(FINDOPTION.WHOLEWORD | FINDOPTION.MATCHCASE, startPos, endPos, word) while temp != None: (s, e) = temp editor.indicatorFillRange(s, e - s) temp = editor.findText(FINDOPTION.WHOLEWORD | FINDOPTION.MATCHCASE, e, endPos, word) editor.callback(callback_sci_UPDATEUI, [SCINTILLANOTIFICATION.UPDATEUI]) This is some interesting results. If I get some time I will play around with it as well with Lua. It is odd you are getting this behavior. The example Lua I posted I’ve been using for months just fine. It might be worth looking a bit more into the Notepad++ “smarthilighter” since it does pretty much the same thing but works fine. I see dail used “return False” a few places in the original code This is purely a LuaScript thing. (It doesn’t actually do anything but I recommend returning false from a LuaScript callback for forwards-compatibility reasons). There is one other caveat to keep in mind. PythonScript callbacks are asynchronous (whereas LuaScript callbacks are purely synchronous). Not sure if this is affecting anything but definitely worth keeping in mind. The Pythonscript docs say that I can use editor.callbackSync() to make it synchronous, but then it goes on to say that if I do that, I can’t call editor.findText(), which this script uses. :( - Claudia Frank last edited by Claudia Frank Hello Scott, you are right, using asynchronous callback leads to the update flickering. I’m not quite sure why this happens. I’m using the synchronous callback together with the research function and this seems to work well. Your code would look like) import re def callback_sci_UPDATEUI(args): print 'callback_sci_UPDATEUI' def match_found(m): editor.setIndicatorCurrent(INDICATOR_TO_USE) editor.indicatorFillRange(m.span(0)[0], m.span(0)[1] - m.span(0)[0]) def getRangeOnScreen(): print 'getRangeOnScreen' firstLine = editor.getFirstVisibleLine() lastLine = firstLine + editor.linesOnScreen() startPos = editor.positionFromLine(firstLine) endPos = editor.getLineEndPosition(lastLine) return (startPos, endPos) def clearIndicatorOnScreen(): print ) print 'word:{}'.format(word) clearIndicatorOnScreen() (startPos, endPos) = getRangeOnScreen() # temp = editor.findText(FINDOPTION.WHOLEWORD | FINDOPTION.MATCHCASE, startPos, endPos, word) editor.research(word, match_found, re.IGNORECASE) # while temp != None: # (s, e) = temp # editor.indicatorFillRange(s, e - s) # temp = editor.findText(FINDOPTION.WHOLEWORD | FINDOPTION.MATCHCASE, e, endPos, word) editor.callbackSync(callback_sci_UPDATEUI, [SCINTILLANOTIFICATION.UPDATEUI]) I know you could have done this yourself but thought … Cheers Claudia I’m not familiar enough with the internal Scintilla code but I think it is making sure it doesn’t get stuck in one of these types of loops. I added a bit of extra code to my plugin to log exactly when notifications are getting received. It shows when it enters and leaves the notifications. As shown below, it enters the SCN_UPDATEUI, at which point it calls my Lua callback which adds 3 indicators, thus the 3 SCN_MODIFIED pairs, then leaves the SCN_UPDATEUI notification. ->SCN_UPDATEUI ->SCN_MODIFIED <-SCN_MODIFIED ->SCN_MODIFIED <-SCN_MODIFIED ->SCN_MODIFIED <-SCN_MODIFIED <-SCN_UPDATEUI I would say between this and the fact that @Claudia-Frank successfully used the synchronous callbacks, means that your code was receiving these notifications out of order due to the asynchronous callbacks (which is a problem I ran into with the PythonScript a while ago).
https://community.notepad-plus-plus.org/topic/12360/vi-simulator-how-to-highlight-a-word
CC-MAIN-2019-47
refinedweb
1,595
57.67
hey guys, i have some work to do in C on a windows platform, windows 7 actually. now i've look into the suggested IDE compiler combo you guys suggested and i've pickup Dev c++ i just type a test program, just to see how the compiler works #include <stdio.h> main () { printf("Test"); system("pause"); } now i get errors like: linker error undefined reference to `_dyn_tls_init_callback` linker error undefined reference to `_cpu_features_init` this as nothing to do with the code, is it because i'm on win7 ? should i try another IDE or i'll get the same error ? what would be nice is to be able to work in vim, i already use vim to code java. now that's a combo question, if someone have a quick vim setup for C, that can compile directly in cmd mode but, just a quick hand on the 1st part of this thread would be nice, so i can get this thing going. thx guys Dark
https://www.daniweb.com/programming/software-development/threads/407291/compiler-c
CC-MAIN-2018-30
refinedweb
167
73.81
I am not very familiar with any of raspberry pi product or the scripts used to program them. The result i am trying to accomplish: - I am trying to create a camera with a trigger switch for recording. (when trigger pressed the camera records, when released the camera stops recording). - Trigger switch input is 3v when not pressed & 0v when pressed. - The script should run every time the raspberry pi is turned on. The script that i used: ((sample.py) - saved file name) The script after 'while True' should be on a continues loop. Code: Select all import time import datetime import RPi.GPIO as GPIO GPIO.setmode(GPIO.BOARD) GPIO.setup(29,GPIO.IN,pull_up_down=GPIO.PUD.UP) def startrecording() raspivid -t 600000 -ex auto -b 17000000 -o /home/pi/t1_'date+%d%m%Y_%H%M-%S'.h264 def stoprecording() pkill raspivid while True: GPIO.wait_for_edge(29,GPIO.FALLING) startrecording() GPIO.wait_for_edge(29,GPIO.RISING) stoprecording() To make the script run on startup i: sudo nano /etc/rc.local (edited the script) sudo python /home/pi/sample.py (line above 'Exit 0') Once i rebooted the raspberry pi i got a 'Invalid Syntax' Can anyone help me as i really don't know what i am doing?
https://www.raspberrypi.org/forums/viewtopic.php?p=1354043
CC-MAIN-2021-10
refinedweb
209
68.26
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Dears, for request we need to add a "due date". (by when they want something to be done) Depending on the request we can already say that the earliest can be today + X days. How can this be done? Kind regards, Matt @Matthew Van Kuyk, let me restate what I think you are asking for to be sure I am following. In the end I think you will need something like Scriptrunner or maybe Automation for JIRA to make this work. You want to create a custom 'due date' field and you want to automatically set the default to "today + x days" based upon Issue Type. example: Task = today +5d Bug = today +2d Hi Jack, Thanks you doe your reply. It is not the field itself that should be x + 5days. It should be the date picker which should make all days available from x+5days on. Additionaly a script should be ran to check if the value entered is indeed in the correct range (that I can do with script runner). Kind regards, Matt Ok so you want to restrict the first date of a date picker? So if i create an issue on 9/1/2017 the first date I can pick for this field should be 9/6/2017? If so, I’m unsure how to accomplish. I’m unsure if there is an addon that could do this. It might mean changing the code, creating a specialize date picker. Hopefully someone w/ more knowledge could chime in here. @Nic Brough, any thoughts on this one? I don't think even Behaviours (part of Scriptrunner) can get into the display at that depth. You could default the value and do the validation on create/transition with Behaviours and Scripts, but limiting the calender would require dedicated code I think. Thanks for chiming in Nic. this was what I was expecting but unsure. Nic, Jack, thank you for your valuable input. As a workaround I have created a validation script which will check if the selected date is in the correct range. import com.opensymphony.workflow.InvalidInputException; import com.atlassian.jira.component.ComponentAccessor; try{ //set the value of the customer request type of Request Internal Move def requestInternalMove= "sd/d28a8b4a-343c-453c-9e87-68a1f7b2710d" def issueManager = ComponentAccessor.getIssueManager() def customFieldManager = ComponentAccessor.getCustomFieldManager() def cf = customFieldManager.getCustomFieldObjectByName("Customer Request Type") def cFieldValue = issue.getCustomFieldValue(cf) log.error "Customer Request Type: " + cFieldValue; if(cFieldValue.toString().equals(requestInternalMove)){ //Set de amount of dates needed int noOfDays = 14; //i.e two weeks Calendar calendar = Calendar.getInstance(); calendar.add(Calendar.DAY_OF_YEAR, noOfDays); Date date = calendar.getTime(); //Check if the due date is later than the 2 weeks if (issue.dueDate?.before(date)) { invalidInputException = new InvalidInputException("2 weeks are required to provide this service, please provide a date after " + date) } } }catch(Exception ex){ invalidInputException = new InvalidInputException("An error ocured please contact the helpdesk") log.error ex.getMessage() } Kind regards, Matt.
https://community.atlassian.com/t5/Jira-Software-questions/How-to-specify-the-range-in-a-Date-Range-Picker/qaq-p/643272
CC-MAIN-2018-09
refinedweb
508
50.02
Introduction Machine Learning is a branch of Artificial Intelligence. It contains many algorithms to solve various real-world problems. Building a Machine learning model is not only the Goal of any data scientist but deploying a more generalized model is a target of every Machine learning engineer. Regression is also one type of supervised Machine learning and in this tutorial, we will discuss various metrics for evaluating regression Models and How to implement them using the sci-kit-learn library. Table of Contents - Regression - Why we require Evaluation Metrics - Mean Absolute Error(MAE) - Mean Squared Error(MSE) - RMSE - RMSLE - R squared - Adjusted R Squares - EndNote Regression Regression is a type of Machine learning which helps in finding the relationship between independent and dependent variable. In simple words, Regression can be defined as a Machine learning problem where we have to predict discrete values like price, Rating, Fees, etc. Why We require Evaluation Metrics? Most beginners and practitioners most of the time do not bother about the model performance. The talk is about building a well-generalized model, Machine learning model cannot have 100 per cent efficiency otherwise the model is known as a biased model. which further includes the concept of overfitting and underfitting. It is necessary to obtain the accuracy on training data, But it is also important to get a genuine and approximate result on unseen data otherwise Model is of no use. So to build and deploy a generalized model we require to Evaluate the model on different metrics which helps us to better optimize the performance, fine-tune it, and obtain a better result. If one metric is perfect, there is no need for multiple metrics. To understand the benefits and disadvantages of Evaluation metrics because different evaluation metric fits on a different set of a dataset. Now, I hope you get the importance of Evaluation metrics. let’s start understanding various evaluation metrics used for regression tasks. Dataset For demonstrating each evaluation metric using the sci-kit-learn library we will use the placement dataset which is a simple linear dataset that looks something like this. Now I am applying linear regression on the particular dataset and after that, we will study each evaluation metric and check it on our Linear Regression model. from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=2) from sklearn.linear_model import LinearRegression lr = LinearRegression() lr.fit(X_train,y_train) y_pred = lr.predict(X_test) let’s start Exploring various Evaluation metrics. 1) Mean Absolute Error(MAE) MAE is a very simple metric which calculates the absolute difference between actual and predicted values. To better understand, let’s take an example you have input data and output data and use Linear Regression, which draws a best-fit line. Now you have to find the MAE of your model which is basically a mistake made by the model known as an error. Now find the difference between the actual value and predicted value that is an absolute error but we have to find the mean absolute of the complete dataset. so, sum all the errors and divide them by a total number of observations And this is MAE. And we aim to get a minimum MAE because this is a loss. Advantages of MAE - The MAE you get is in the same unit as the output variable. - It is most Robust to outliers. Disadvantages of MAE - The graph of MAE is not differentiable so we have to apply various optimizers like Gradient descent which can be differentiable. from sklearn.metrics import mean_absolute_error print("MAE",mean_absolute_error(y_test,y_pred)) Now to overcome the disadvantage of MAE next metric came as MSE. 2) Mean Squared Error(MSE) MSE is a most used and very simple metric with a little bit of change in mean absolute error. Mean squared error states that finding the squared difference between actual and predicted value. So, above we are finding the absolute difference and here we are finding the squared difference. What actually the MSE represents? It represents the squared distance between actual and predicted values. we perform squared to avoid the cancellation of negative terms and it is the benefit of MSE. Advantages of MSE The graph of MSE is differentiable, so you can easily use it as a loss function. Disadvantages of MSE - The value you get after calculating MSE is a squared unit of output. for example, the output variable is in meter(m) then after calculating MSE the output we get is in meter squared. - If you have outliers in the dataset then it penalizes the outliers most and the calculated MSE is bigger. So, in short, It is not Robust to outliers which were an advantage in MAE. from sklearn.metrics import mean_squared_error print("MSE",mean_squared_error(y_test,y_pred)) 3) Root Mean Squared Error(RMSE) As RMSE is clear by the name itself, that it is a simple square root of mean squared error. Advantages of RMSE - The output value you get is in the same unit as the required output variable which makes interpretation of loss easy. Disadvantages of RMSE - It is not that robust to outliers as compared to MAE. for performing RMSE we have to NumPy NumPy square root function over MSE. print("RMSE",np.sqrt(mean_squared_error(y_test,y_pred))) Most of the time people use RMSE as an evaluation metric and mostly when you are working with deep learning techniques the most preferred metric is RMSE. 4) Root Mean Squared Log Error(RMSLE) Taking the log of the RMSE metric slows down the scale of error. The metric is very helpful when you are developing a model without calling the inputs. In that case, the output will vary on a large scale. To control this situation of RMSE we take the log of calculated RMSE error and resultant we get as RMSLE. To perform RMSLE we have to use the NumPy log function over RMSE. print("RMSE",np.log(np.sqrt(mean_squared_error(y_test,y_pred)))) It is a very simple metric that is used by most of the datasets hosted for Machine Learning competitions. 5) R Squared (R2) R2 score is a metric that tells the performance of your model, not the loss in an absolute sense that how many wells did your model perform. In contrast, MAE and MSE depend on the context as we have seen whereas the R2 score is independent of context. So, with help of R squared we have a baseline model to compare a model which none of the other metrics provides. The same we have in classification problems which we call a threshold which is fixed at 0.5. So basically R2 squared calculates how must regression line is better than a mean line. Hence, R2 squared is also known as Coefficient of Determination or sometimes also known as Goodness of fit. Now, how will you interpret the R2 score? suppose If the R2 score is zero then the above regression line by mean line is equal means 1 so 1-1 is zero. So, in this case, both lines are overlapping means model performance is worst, It is not capable to take advantage of the output column. Now the second case is when the R2 score is 1, it means when the division term is zero and it will happen when the regression line does not make any mistake, it is perfect. In the real world, it is not possible. So we can conclude that as our regression line moves towards perfection, R2 score move towards one. And the model performance improves. The normal case is when the R2 score is between zero and one like 0.8 which means your model is capable to explain 80 per cent of the variance of data. from sklearn.metrics import r2_score r2 = r2_score(y_test,y_pred) print(r2) 6) Adjusted R Squared The disadvantage of the R2 score is while adding new features in data the R2 score starts increasing or remains constant but it never decreases because It assumes that while adding more data variance of data increases. But the problem is when we add an irrelevant feature in the dataset then at that time R2 sometimes starts increasing which is incorrect. Hence, To control this situation Adjusted R Squared came into existence. Now as K increases by adding some features so the denominator will decrease, n-1 will remain constant. R2 score will remain constant or will increase slightly so the complete answer will increase and when we subtract this from one then the resultant score will decrease. so this is the case when we add an irrelevant feature in the dataset. And if we add a relevant feature then the R2 score will increase and 1-R2 will decrease heavily and the denominator will also decrease so the complete term decreases, and on subtracting from one the score increases. n=40 k=2 adj_r2_score = 1 - ((1-r2)*(n-1)/(n-k-1)) print(adj_r2_score) Hence, this metric becomes one of the most important metrics to use during the evaluation of the model. EndNote I hope it was easy to catch all the important 6 metrics we have discussed. There is not anyone metric that always performs well and helps to build the generalized model. There can be situations where you have to use different evaluation metrics and if you are a beginner then you should try all these metrics which will help you to get a better understanding of each to evaluate when you can use which metric. I would encourage you to pick any dataset, apply a Machine learning algorithm and try to evaluate a model on different evaluation metrics. The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion.You can also read this article on our Mobile APP
https://www.analyticsvidhya.com/blog/2021/05/know-the-best-evaluation-metrics-for-your-regression-model/
CC-MAIN-2021-25
refinedweb
1,641
61.06
Discussions XML & Web services: Returing Objects using JAX-WS Returing Objects using JAX-WS (1 messages) Hi, I need some help in returning objects using JAX-WS on serverside. I could write simple JAX-WS programs returning basic data type. However, I am unable to return the objects. The problem is that the object elements are not shown on WSDL. Hence, the client is unable to understand the object. Simple Sample Program: public class X { private Integer i; public X() { this.i = 0; } public getI() { return this.i } } @WebService public class Main() throws RemoteException { @WebMethod public X api() { X m = new X(); return m; } } The problem is I am unable to use X.getI() at the client side (compiler error). If I replace X with String, I am able to read String. On checking the WSDL, any information related to Class X is not included. Any help is appreciated. Sophie - Posted by: Sophie Herbert - Posted on: July 11 2008 02:05 EDT Threaded Messages (1) - Re: Returing Objects using JAX-WS by manuel passerini on July 16 2008 02:40 EDT Re: Returing Objects using JAX-WS[ Go to top ] try to implement java.io.Serializable and overwrite the hashCode() and equals() method in X. compiler error: it seems you have your stubs generated? try this: javax.xml.ws.Service aService = Service.create(new URL("url to ws wsdl"), new QName("", "MyWebServiceName")); MyWSInterface serviceProxy = aService.getPort(MyWSInterface.class); ... you maybe have to split projects (one spi, one for client and one for the implementation on the appl server). this way, you can directly call the service proxy, without creating/generating any client stubs. slowfly - Posted by: manuel passerini - Posted on: July 16 2008 02:40 EDT - in response to Sophie Herbert
http://www.theserverside.com/discussions/thread.tss?thread_id=50017
CC-MAIN-2014-15
refinedweb
291
58.58
Enforcing Style in a Python Project Want to share your content on python-bloggers? click here. A linter and a styler can help you to write cleaner and more consistent code. In this post we’ll look at how to set up both for a Python project. What are Pre-Commit Hooks? Git has the ability to execute specific actions when certain events occur. The connections between the actions and the events are known as hooks. These hooks are configured via files in the .git/hooks folder. Git hooks provide the perfect mechanism for insuring that the code committed to a repository is clean and consistent. We’ll be setting up pre-commit hooks that will run actions immediately before each commit to the Git repository. A commit will only succeed if all of the associated actions are succesful. The Pre-Commit Framework Despite the relative simplicity of their implementation, Git hooks can be somewhat fiddly. The pre-commit framework makes it easier to manage and maintain pre-commit hooks. It eliminates a lot of the fiddliness. The way that the pre-commit framework replaces a collection of distinct hooks with a single hook (the pre-commit hook) and a configuration file. At commit time the pre-commit hook is triggered and it runs all of the actions specified in the configuration file. Install Installing the pre-commit framework is simple. You’ll probably want to do this in a virtual environment. pip install pre-commit At this point you should also add pre-commit to your project requirements.txt. Now add pre-commit as a hook. pre-commit install This will create a hook file at .git/hooks/pre-commit. Configure The actions run by pre-commit are configured via the .pre-commit-config.yaml file. Run the following to generate a simple default configuration. pre-commit sample-config >.pre-commit-config.yaml The contents of the configuration file should look something like this: repos: - repo: rev: v3.2.0 hooks: - id: trailing-whitespace - id: end-of-file-fixer - id: check-yaml - id: check-added-large-files This configuration will run four distinct actions ( trailing-whitespace, end-of-file-fixer, check-yaml and check-added-large-files) against each file in the repository. Test You can test the configuration by manually running the hooks against all files. pre-commit run --all-files If you then run git status you’ll likely find that one or more of the files in your repository has been modified. In my case this generally involves adding empty lines at the end of various files. Take a look at the changes and if you are happy, stage and commit the changes. Lint The Flake8 linter is used to check for syntatic problems in Python code. To enable Flake8 add the following to the .pre-commit-config.yaml file. - repo: rev: 5.0.4 hooks: - id: flake8 You might want to check the Flake8 repository to see if there are more recent releases and update the rev field accordingly. Configuration You can tweak some Flake8 options by creating a .flake8 file. Its contents might look something like this: [flake8] max-line-length = 120 exclude = database/__init__.py A complete list of available options can be found here. Ignoring Code You can tell Flake8 to ignore a specific line of code by adding a noqa hint as a comment at the end of the line. from .database import * # noqa You can be more specific by telling Flake8 which errors it should ignore. from .database import * # noqa: F403 from .database import * # noqa: F401, F403 Style The Black code formatter will enforce a consistent formatting style on Python code. Add the following to the .pre-commit-config.yaml file. - repo: rev: 22.8.0 hooks: - id: black You might want to check the Black repository to see if there are more recent releases and update the rev field accordingly. Prosper With this setup in place your code will be checked every time you commit. Many issues will be automatically fixed. Others will be highlighted and you’ll have to manually intervene. This works particularly well if you are part of a team because it means that everybody on the team will be committing and pushing code without any syntactic issues and with consistent formatting. Want to share your content on python-bloggers? click here.
https://python-bloggers.com/2022/09/enforcing-style-in-a-python-project/
CC-MAIN-2022-40
refinedweb
726
57.27
The doctors and technicians called it at 1:00pm on June 23rd 2006. The critically acclaimed, technological marvel known as WinFS passed on. It was truly a sad day. I posted this link to an email forum that I’m on and it generated the obvious question ‘From a business perspective, why will a company want to upgrade from XP? What business advantage does Vista offer?’ I posted the response below and it generated an off list request for me to blog it. So let’s talk about the blogging thing. I just can’t get into it. Sorry. I tried. With the email lists, I can get back instant feedback on a topic and real, meaningful discussion will often occur. I find these discussions enlightening. Over a good number of years now, I have learned copious amounts of technical information, as well as political, cultural and various other ‘als’, in this format and I value it. Blogs and Forums just don’t deliver the same way email delivers. So any posts here (I took down my personal blog) will be informative stuff like this, and I suspect they will be rare occurrences. So that said, the answer I posted (with some minor edits). Business will upgrade because they are locked into software assurance licensing agreements. OEM will support it because they are obligated under contract to ship current Windows versions. There is no compelling reason to purchase outright a Windows Vista upgrade for almost all businesses and the majority of consumers. For the first time ever a company is releasing an operating system where the primary design goals benefit someone other than the computers owner. The only new technology in Windows Vista that survived the toppling of the three pillars is the secure kernel architecture designed to enable DRM. Windows Vista will ship with technologies called the 'protected process' and the 'trusted installer'. These two technologies are designed for the media industry. They create a segment of your computer that you cannot tamper with. The architectural changes left in Vista are the only 'features' of the Longhorn era to see the light of day. Vista, to be as blunt as possible, is not being built for consumers or businesses. It's being built so Microsoft and Verizon can make money selling you movies. For almost 2 years, I've asked every Microsoft employee that I can find involved with Vista the 'my mom' question. It goes like this, "In one simple statement tell me why my mom should run Windows Vista over Windows XP". I have never gotten a honest answer, which is fair; it's a loaded question. Anyone familiar with Vista knows that outside of some new eye candy and the DRM kernel mods there is no beef on the Vista bone. On a personal note, IMO the eye candy is pathetic compared to the now aging visuals included with Mac OS 10.x. Don't get me wrong. The work they did on the new video API and audio API is amazing stuff that really revolutionizes some things in OS architectures. But these changes are completely transparent to the end user and don't provide any immediate compelling benefit. Again, the only thing the changes enable is DRM - you need application specific audio and video channels if certain applications are going to be encrypted and will be required to run under a special process model. The 'benefit' to the end user that Microsoft is selling is "per-app mixing", the ability to set one applications volume different than another applications volume. Per-app mixing is weak. In fact it is an absurd reason to perform a major reimplementation of critical subsystems in an operating system of any scope, much less one as large as Windows. Now call me crazy, but I don't get the feeling the 'my mom' crowd is out there clamoring and begging to get per-app mixing onto their computer. The stock answers I get from Microsoft to the ‘my mom’ question are "it's more secure" and "it's a platform for future technologies". These are half truths. What Microsoft really means is Windows Vista is secure from the threats presented by the consumer to content owners and that it provides a platform for digital distribution. Microsoft really missed the mark. They had a innovated vision that would propel the next generation of personal computing technology into the mainstream. The three tier stack of Avalon, WinFS and Indigo were compelling and forward thinking. The watered down version of Indigo, the poorly performing and *unused by Vista* Avalon stack and the complete evisceration of the relational file system are a complete about face from that original vision. Why they 86'd that plan and decided to get into the movie distribution business is beyond me. So that's my nickels worth. IMO if you don't care about having licensed high definition audio and video running on your computer, then Vista doesn't provide you with any benefit; business or consumer. That was my post. Now for something else that is getting under my craw… It’s that friggen date. It just seems way to coincidental for a post like this to go live at 1:00pm without foresight and planning. Now go read the previous post. So as early as a few days prior Microsoft was hyping WinFS with sessions and classes at TechEd. They were energizing a base of hard core developers about a technology they knew would never see the light of day. It’s this behavior that has pitted me against myself when it comes to Microsoft. Our relationship is now love-hate. I love Office, Visual Studio, ASP.NET and Windows XP. I hate the company, the decisions they make and the way the conduct business. I hate the way they treat me as a developer and an enthusiast about their products. It is simply misleading and wrong, and it flies in the face of transparency (remember that MS buzzword) to do things like they have done. It’s sad really. That arrogance and disrespect for ethics and honesty will cripple a great company. I have no options. There are too many benefits to sticking with tools like Visual Studio and the CLR to do something radical like switch to Linux and start programming Java. But I would if they presented a viable option and I never thought I would make a statement like that. WinFS was the nail in the coffin for Microsoft and I. They are no longer a company I respect and revere. They are now someone I have to watch out for and be weary of. They have lied to me one too many times. Watch out Microsoft, Google understands the consumer whom you so arrogantly abuse. This blog post is an asp.net server control that wraps the syntax highlighter javascripts files found here: made some more notes on it here where i started to post it (couldn't 10,000 character limit *ouch*) ------- /// using namespace { [ : { } _language = } ViewState[_vskLanguage] = _noGutter = ViewState[_vskNoGutter] = _noControls = ViewState[_vskNoControls] = _collapse = ViewState[_vskCollapse] = _firstLine = ViewState[_vskFirstLine] = _rows = ViewState[_vskRows] = _columns = ViewState[_vskColumns] = _name = ClientID; _name = ViewState[_vskName] = _scriptPath = ViewState[_vskScriptPath] = _controlBody = (( cssLink.ID = pageHead.Controls.Add(cssLink); scripts.RegisterClientScriptInclude( name); scripts.RegisterStartupScript( { writer.WriteBeginTag( writer.WriteAttribute( writer.Write( writer.WriteLine(); writer.Write(_controlBody); writer.WriteEndTag( builder.Append( exitEarly = builder.AppendFormat(_cvFirstLine, FirstLine); #region csharp, vbnet, delphi, javascript, php, python, sql, xml, Unknown } #endregion Please... Please... Please.... Microsoft PLEASE give us a way to This *release* version of VS2005 is dog and I just can't take it anymore. Here is the 3rd bug I've found (I never found any at all in Everett)... using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using System.IO; using Net=System.Net; using System.Net.Sockets; using Microsoft.SqlServer.Server; [Serializable] [SqlUserDefinedType(Format.UserDefined, MaxByteSize=sizeof(Int64))] public struct IPAddress : INullable, IBinarySerialize { #region Private private Net.IPAddress _ipAddress; #endregion #region Construction private IPAddress(Net.IPAddress ipAddress) { _ipAddress = ipAddress; } #region INullable and SQL UDT Required Members public override String ToString() return _ipAddress.ToString(); } public Boolean IsNull get { if (null == _ipAddress) return true; else return false; } public static IPAddress Null IPAddress result = new IPAddress(); return result; public static IPAddress Parse(SqlString s) if (s.IsNull) return Null; // create our result Net.IPAddress newIPAddress = null; // attempt the parse if (Net.IPAddress.TryParse(s.ToString(), out newIPAddress)) { IPAddress result = new IPAddress(newIPAddress); else { throw new ArgumentException("Invalid IPAddress String", "s"); } #region IBinarySerialize Members (Required for user defined serialization) public void Read(BinaryReader reader) Int32 byteLength = (Int32)reader.BaseStream.Length; Byte[] bytes = new Byte[byteLength]; bytes = reader.ReadBytes(byteLength); _ipAddress = new System.Net.IPAddress(bytes); public void Write(BinaryWriter writer) writer.Write(_ipAddress.GetAddressBytes()); #region System.Net.IPAddress Public Member Wrappers public override Boolean Equals(object obj) if (obj is IPAddress) IPAddress ipAddress = (IPAddress)obj; return _ipAddress.Equals(ipAddress); return false; public override Int32 GetHashCode() return _ipAddress.GetHashCode(); public AddressFamily AddressFamily get { return _ipAddress.AddressFamily; } public Boolean IsIPv6LinkLocal get { return _ipAddress.IsIPv6LinkLocal; } public Boolean IsIPv6Multicast get { return _ipAddress.IsIPv6Multicast; } public Boolean IsIPv6SiteLocal get { return _ipAddress.IsIPv6SiteLocal; } public Int64 ScopeId get { return _ipAddress.ScopeId; } set { _ipAddress.ScopeId = value; } public Boolean IsLoopback get { return Net.IPAddress.IsLoopback(_ipAddress); } #endregion #region TODO - Need to talk to some network nerds about the best way to do this public Boolean IsPrivateNetwork throw new NotImplementedException(); public Boolean IsReservedAddress get { throw new NotImplementedException(); } public Boolean IsPublicNetwork get { return !IsPrivateNetwork && !IsReservedAddress; } } It seems like the personal drama just won't end. I've been back in crisis mode over the last couple of days trying to put out some fires between a landlord and a friend. Long story short, I'm either gonna have to sue somone or lose a bunch of money I really can't afford to lose. Ugh... I've written a bit of code and a good chunk of the post about the Dollarville traffic light. It jumps right into threading code. I'm trying to present it in the context of dealing with the time line of the stop light from the time line of something else, but I'm not getting that far with the analogy. I was originally going to go with traffic, you know change the light to a regular light instead of a flasher and then incorporate traffic. But that's just friggen complicated. I'd have to write alot of code (which sucks); but even more importantly a traffic emulator is a really complex thing. I got a nifty stoplight built, but the analogy breaks down because there really are no interactions between the light and... exactly. It's just a light that hangs and runs. The only real interaction is has is with a driver or a pedestrian. haha. I've written myself into a corner. Whatever, I'm sure I'll find a balance when I sit back down to do the rest of the work. l8r... :) I’d like to introduce you to Dollarville. Dollarville is a quant little town in the middle of nowhere. Like many old, small towns Dollarville has two main roads. There is Main Street and the state highway 9. These two roads intersect in the middle of town. There is a stop light that always flashes yellow for the state road and red for Main Street. Downtown Dollarville has a general store, a gas station, a fire station and a post office. Janet Dollar runs the general store. Her nephew Tommy Dollar owns the gas station. Chief Dollar works at the fire station and Aunt Dollar takes up shop at the post office. Life in Dollarville is serene and most days pass without anything exciting happening. This suits the Dollars just fine because every month on the 15th all hell breaks loose. On the 15th a caravan of travelers and merchants travel up the state highway 9 on their way to the big government auctions. The problem is that since Dollarville wasn’t designed to accommodate this much vehicle and foot traffic, on the 15th Dollarville turns to gridlock. The single stop light that always flashes becomes a crippling point for people trying to use Main Street to cross the state highway. The parking lot of the gas station becomes another large bottleneck. But these problems pale in comparison to the personal problems dumped upon the four members of the Dollar family this year. Mama Dollar is *really* sick and every hour one of the Dollars will need to close up their own shop and take 5 or 10 minutes to check up on mom. This is not a problem on every day except for the 15th. In order to solve this problem the Dollars have decided to hire a person who will move from store to store and help with the basic services for the 5 or 10 minutes while the proprietor is out looking in on Mama. Over the course of the next dozen or so entries we are going to use Dollarville, it’s people and their problems to look at common threading problems, techniques but more importantly concepts. In the last post I said that managing the intersection of timelines was the secret to threading and I can’t stress that enough. I’m off to write some code. It’s time to bring the Dollars to life. :) Writing multithreaded code is just as easy as writing data access code or writing file IO code with .net. I believe that so long as you have a solid understanding of the threading object model, how that model is suppose to work and some common implementation techniques of the model it will be easy. I think this way about all object oriented code. Object oriented thread programming is no different than object oriented database programming. It’s just code. The difference is that threading presents a unique set of challenges that are a little more complicated to grasp up in the brain. Threading adds an additional dimension to your programming. It fundamentally changes the pallet from a linear 3D surface to a rich, non-linear 4D surface. Hmm… 3D, 4D… I might have just confused some people (we aren’t talking graphics :p ), so let’s get into the dimensions of programming. · Polymorphic Class · Interface · Type These dimensions are surfaces in object oriented programming that we use in conjunction with Boolean logical and flow control to make the computer do things. Most people have no trouble wrapping their mind around these baseline 3 dimension. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!. I really like 2 games. World of Warcraft and Unreal Tournament 2004. Because these games are so large and take so long to install and because I seem to blow away my workstation every couple of weeks. I have gotten into the habit of keeping them installed on a seperate drive and simply importing the registry setting to 'install' them when I reinstall my OS. This model has worked fine for a long time. However I recently moved to the release version of x64 XP and this didn't work. Unreal would not run at all and WoW would run but the performance was total crap. When I started investigating the performance I noticed that Wow.exe was showing in the task manager as wow.exe as opposied to wow.exe*32 like it should. This was perplexing. My first attempt at a fix was to move the game into the Program Files(x86) directory thinking this would tell Windows to load me up as x86. This wasn't the case. Second I tried to put the application into 'Windows XP' compatibility mode. No luck thier either. After spelunking around the registry, I noticed that there was a new subtree titled Wow6432node inside of HKLM\Software. Moving the registry keys into here fixed it. Interesting problem, so I figured I'd share. UPDATE:: I sat down to play WoW tonight and it loaded it up and ran it as x64 again. I'm not sure what is going on. If anyone knows how to force an app to run as x86, I'd love to know how. UPDATE:: I take that back. It must have had something to do with the 1.40 patch installer launching the game. Now it is working fine again. Disregard that last update. I got team system installed today. It was a breeze. There are couple of things I'd like to point out about the install that might matter if you are running on x64 hardware. While the installation docs do claim you can install on x64, it does not mention an important fact about running x64 ASP.NET. IIS can only load one version of the framework at a time. When you install .net 2.0 it changes IIS to run .net 2.0. This will break an app tier install because the app tier, which is WSS based requires the 1.1 version of the framework. This means that you will need to tweak some registry keys to force the correct version of ASP.NET to load. I personally haven't experienced the problem first hand as I installed on 32bit Windows, but having some experience with the x64 ASP.NET modules leads me to believe that the foundation servers CAN NOT run on 64 hardware even though the docs claim that it can (see reporting services would need the 2.0 runtime, while WSS would need 1.1.I got a buddy on the Team System team who is looking into this, so expect something official from Microsoft at some point in the near future. My advice at this point in time would be to either use a 2 tier configuration with the data tier running on x64 and the app tier running on x86, or just use a single tier x86 configuration.Again, this is just me talking here and I haven't really tested it myself. It just seems like there isn't any logical way for it to work. Stay tuned for more information. UPDATE: 04/20/05 Yeah. this is confirmed. x64 is not currently supported for TFS for the reason outlined above. The docs are incorrect with regards to Opteron support.
http://aspadvice.com/blogs/pmurphy/default.aspx
CC-MAIN-2013-48
refinedweb
3,055
65.73
From. PROVIDENCE, R.I. [Brown University] — Scientists are using ever more complex models running on ever more powerful computers to simulate the) The paper is available for download at Arxiv.org PDF Dr. Judith Curry also has a discussion of this at Climate Etc. 51 thoughts on “Statistical physics applied to climate modeling” Physics is exactly what’s required to put climate science on a firm footing. The relevant quote is from NZ’s own Ernest Rutherford: “all science is either physics or stamp collecting”, although at that time (1902) there was no climate science — else he might have added a less complimentary third adjunct. All physicists are fully qualified to evaluate climate models — mind you, the climateers will hotly dispute that because physicists are by and large a very skeptical group! Sure statistical modeling is a much simpler way to go. It takes a whole lot less CPU cycles that direct simulation, it’s simpler to develop & code, it is easier to debug, it is much less subject to programming errors, etc., etc. However, you can’t model a system you don’t thoroughly understand, either with statistical modeling or direct simulation. Computers don’t think for them selves… they’re nothing but big fast adding machines. I don’t think anyone currently knows enough about the oceans, atmosphere and climate to make accurate models using either of the above approaches work. A climate model based on physics? Words fail me. About ****ing time. hoffer. Its been happening for a long time spend some time watching. best hour you will spend Replacing horsepower with mind power? This looks promising from the basic premesis. …Brad Marston, professor of physics at Brown University. Hmmm, gotta admire this man: Speaking of Climate Models. If you can’t model ENSO with any precision you cannot predict the future state of our atmosphere. The feeling I get from this article is that we have a great understanding of Climate; it’s only a lack of a decent computer system that holds us back. If it was only that easy. Just when you think you have ENSO figured out, she pulls a fast one. There are certain things that can be calculated easily using physics principles instead of doing really complicated things. For example, if we have a bag of sand on a frictionless rail and fire a bullet into it, we can find the resultant velocity of the bag if we know the mass of the bullet and its speed before entry as well as the mass of the bag of sand. We do not need to analyze billions of collisions between the bullet and a huge number of sand particles. Of course weather is much more complex! It will be interesting to see if things can be simplified and made more accurate in the process. Sure. And the physics of carbon dioxide is “well known” and can be calculated into statistical modeling (and we have heard that before). There are many examples of garbage physics in our history. This is no different. I have no positive things to say about this research. It appears to be same old same old on the inside, just new wrapping. Call me a cynic but the basic problem will still exist – garbage in, garbage out. Applying physics to climate ‘science’ eh? A novel idea. I doubt it will prove very popular amongst people who, for example, did biology to avoid doing physics. Or people who struggle with maths and don’t know how to get Excel to make a graph… they are mind-bogglingly complex, take a long time run, and require the world’s most powerful computers 42 (sorry, it’s Friday night) :0 This all seems something of a waste of time at least until we actually understand all of the cycles at work in the process — including the external factors like the solar cycles. physics….. We have chemical biology….so why not biological physics…. earth’s climate….climate change…..climate statistics……climate systems They’ve used so many different words…what are they really doing here? …are they modeling the weather….then they don’t need biology if they are modeling man made climate change…then they do GIGO is far less of a problem with physicists because they are trained to watch the signal-to-noise, and to know the difference. It’s an integral part of being a physicist. davidmhoffer says: All climate models are based on physics. The point here is that they are applying some techniques used by physicists to deal with the system on a less-detailed level, thus allowing for the possibility of more rapid calculations that still get the statistical properties of the climate right without getting bogged down in the weather “details”.. NZ Willy says: (1) As a physicist, I don’t think that I would say that “all physicists are fully qualified to evaluate climate models”. We certainly have a good background on which to understand climate modeling and become knowledgeable about it…but there is still a lot to learn to actually become conversant in the subject. Also, in regards to the first part of your statement, I think that the notion of reductionism, i.e., that the only interesting science is physics (or, even more specifically, particle physics, is pretty well dead: Interesting phenomena emerge as the result of the collective macroscopic behavior that occurs as one looks at larger scales. (2) Physicists may be a “skeptical” bunch in the true sense of the word, but if you mean that most physicists are generally not worried about AGW, I don’t think that is correct. The APS has issued a statement on the issue ( ), major introductory physics textbooks (such as the ones that we use for both algebra- and calculus-based courses at RIT) talk briefly about it, … By the way, I should mention that I know Brad Marston, having overlapped with him when I was a grad student and he was a postdoc…and I have the utmost respect for him. He has also played a major role in the formation of the new Topical Group on the Physics of Climate within the American Physical Society. I hope there is something useful here . Time will tell, at least the man is open source and challenging others to come play with this idea. I keep hoping a real science will arise from the ruin that is climatology, but will remain cynical until the ology turns to science. Mosher, Shore, I did not know that. Do I need to add sarc tags? GIGOQ&E GIGO quicker & easier. “The new approach focuses on fundamental forces that drive climate…” =================== Don’t be coy, just tell us exactly what they are. I am puzzled as to why so much discussion goes on about the effect of CO2 on radiated heat energy from the Earth’s surface and middle atmosphere. Reference is also made to the Glass house effect. In a glass house, the air that has been warmed by incoming radiation from the sun convects upward, this warm air is trapped mechanically by the glass roof. The glass structure prevents winds from sweeping the heated air out of the building, replacing it with cool air. The glass isn’t there to prevent heat radiating out of the building, but is there to keep the air in the building trapped. Any heat loss from the glass house is by conduction through the glass and some radiation. To my knowledge a large proportion of the heat transferred to the upper atmosphere is by convection. If you live in the tropical regions you can observe the clear skies in the morning and watch the huge clouds forming during the day followed by massive thunder storms and heavy rain late in the day. CO2 and its overall effect on heat transfer to space is minimal and I think can be almost ignored Water vapour is transferred to high altitude by convection, where it condenses loosing its latent heat of which a good percentage gets radiated out into space. Water in this case is a natural refrigerant, and in my opinion forms an important component of the Earth’s temperature control system. I think CO2 plays a very small role in the Earth’s temperature control. I know that some climate scientist rely on “models”. The programs in the computers can only reflect the opinion of the programmer/s, as computers cannot think, but merely perform very rapid calculations using algorithms programmed into them by people with opinions. I feel that very often these opinions are wrong as born out by the divergence of model predicted and observed temperatures. At a minimum, this could be useful as a double-check. No doubt this ‘model’ is logically equivalent to the standard Climate Schmience algorithm: def run_climate_model(required_answer=”It’s worse than we thought”): do_complicated_looking_stuff() return required_answer Theoretically, it’s a good idea. Practically, a lot of work is required. The approach is widely used for finite element modeling for solid mechanics and circuit simulation for electrical engineering, and no doubt many other areas that I have no knowledge of. However, it’s successful in those areas because the physics of the simulated bodies is well understood and predictable. From what I have seen, climate physics is very much not in that club. IMHO, statistical simulation is not likely to make much progress until the underlying physics of the climate is a whole lot better understood than it is now. And, that’s just not a simulation problem. Strictly as a layman on this, but how well does statistical physics deal with boundary conditions and external inputs? That is things like transition form ocean to land, mountain ranges, or changes in solar output? I am really surprised at all the negative comments, I think this approach has merit. Yes the climate models are based on physics, but they have two problems IMO. The first is that they are trying to model the physics at a level of detail that is impossible to model. There isn’t enough compute horsepower on earth to model every molecule of the earth surface, oceans and atmosphere, and there never will be. So the models fill in the gaps with statistics and trends which is their second problem. That’s just a fancy name for curve fitting and curve fitting fails over and over when there is an underlying flaw. That’s exactly the problem we see with the models today. No matter how well they match known observations, they fail to match new observations. Which is why they were “right” in the early days but now have been wrong for…depending on the data set… as much as 23 years. As time goes on they will get more off track as the fundamental errors overtake the curve fitting. This model takes a different approach. It is in part curve fitting too. But instead of being mostly curve fitting on top of physics full of gaps, it is mostly known laws of physics with a manageable amount of curve fitting. Like Werner said, one doesn’t have to know the interaction between a bullet and every grain of sand in the bag (traditional climate models) to figure out how fast the bag will be going after it absorbs the bullet (what these guys are doing). The fact that they modeled something and got the same result as the climate models is very interesting. When you get the same answer two completely different ways, you are probably on the right track. They still need to verify against observations (and I wish they had done that) but the fact is they got the same answer, and here is the important part of that: They did it with a fraction of the horsepower of the climate models, in fact, they did it with an app you can download and run yourself. Let’s consider the merits of that should their technique be proven out against observations. What else could they model with the same approach? I’m betting a lot of things. And the more things they can model with this technique, the more multiple models can be tied together to become a larger model. Sure, that’s what the traditional models do too, but you could do what they are doing with a couple of hundred desk tops instead of thousands upon thousands of compute cores. Isn’t that worth exploring? They’ve made their code public for gosh sakes, let’s wait until a few other people have had a look and let’s see what comes of it. Either it will be debunked or it will be proven of value, but I think there’s enough merit to give it a shot rather than just throwing it under the bus as yet another failed computer model. Would like to see rgbatduke weigh in on this… Regardless of whether the method has merit or not, applications in climate & solar science will have only one physical driver, only one statistical driver, and more generally one and only one driver: Politics. Cool, in the future we will be able to get the incorrect answer even faster than today. This is an interesting approach, but why bother with the “higher cumulant” (yet) instead of perturbing the simulation thermodynamically i.e add incident energy on an incline hemisphere (simulate energy flux on a tilted spinning “Earth”). Could you just use the Gibbs free energy differentials based on the ideal (or “real” even) gas equation or statistically…then add complexity incrementally like two fluids (water and air) then phase changes and see if ice forms at the poles for instance. my two cents. John Kaye (March 8, 2013 at 8:18 pm) wrote: = def run_climate_model(required_answer=”It’s worse than we thought”): do_complicated_looking_stuff() return required_answer = You sir have been awarded academic life on funding easy street. All science is simplification. John of Okham, etc. I worked on big complex software for 30 years, and it’s my observation that people who work on such systems are chronically addicted to complexity. The proposal that simple is better is an anathema to them. I am certain climate modelers have fallen into this trap. This idea fascinates me. We have a climate modeling tool released into the wild where we can all work with it. It may not be much, after all it’s just an app and I didn’t see any mention of the source code, but it could really be the start of something. The approach could be very fruitful. I don’t know if my programing skills or knowledge of fluid flow physics would be enough to do very much, but surely some of us have enough of both to do a lot better. The possibility of developing some true open source models of how the Earth works is just tantalizing. A few years back I read some Isaac Asimov articles on how ice ages worked and how the configuration of the continents might determine when they could happen. I tried to do my own crude little simulation to check out some of the ideas. I took a big chicken cooker, put a little water in it and set it on a small burner. Then I set some aluminum scraps in it to simulate mountains and such, sprinkled in a little sawdust, and turned on the heat. Sometimes the sawdust would move around in patterns that looked a lot like weather maps. I couldn’t make it work well enough to see what I was really after, but I still had the feeling it was showing me some important things about how weather worked. This might be a whole lot better. I’ll have to see how much hardware it needs. I’m more of a penguin breeder than apple maggot or window shade, but I might have a mac in my cyber junk pike good enough to run it. then it’ll be time to dig out my old physics books and see what they tell me. After all it seems likely that I already know more physics and stats then the average “climate scientist” ;) ;) ;) This still does not get over the reality that climate is more complex that just atmospheric physics. We will not get any sense from climate studies until for a start the true geological outputs of CO2 gases and the biological creation and usage of these gases are fully understood. Why is is just accepted that man’s emissions are compared to the net of natural used and created gases i.e. the residual atmospheric CO2 rather than nature’s emissions. Where else outside climate studies is the comparison of net and gross quantities considered the norm? Let’s be honest the so called science has always been just tenth rate statistical analysis or we could have accurate forecasts for any place or period in the future up to the claimed climate forecast date. What climate statistics generated by such statistical models are useful? Global average temperature isn’t experienced by anyone, so that’s out… Average number of hurricanes per year? Well, maybe the ones that actually hit land… Average rainfall per year? Well, that could be a single day flood, or a nice evenly wet year… I mean, the real problem here is that what *matters* is the weather at a time/space resolution that is very small – I want to know what the weather is going to be for a given 100 square miles on a given day if I’m going to make some sort of grand plan. I’m not sure if this proposed exercise is going to give anyone any sort of actionable intelligence. I guess as a first pass at useful info, they should build a model that accurately predicts just one statistical portion of climate – ENSO. Predict ENSO out say, 10 years the way you can do with tide charts, and maybe they’re onto something. This approach will not work in any useful way. The time evolution of a chaotic system is deterministic but unpredictable. Apart from being able to put some upper and lower limits on average global average temperature based on energy flux from the sun, Boltzman and heat produced in the earth, which can be done anyway, there is not a snowball’s chance in hell of predicting a regional climate change natural or otherwise with any useful degree of confidence. Try it with the 3-body problem. Apart from the constraints of total Gravitation potential energy and kinetic energy being constant and limiting how far the bodies can move apart there is no chance of predicting a behaviour after a specific time. Only an APPLE app. Need an ANDROID version. Replace Mann with an app. , sounds good to me ! davidmhoffer says: March 8, 2013 at 8:47 pm “I am really surprised at all the negative comments, I think this approach has merit.” I would be surprised by anyone who claims that this approach is in any way better or of a higher quality than the existing broken models. “The fact that they modeled something and got the same result as the climate models is very interesting.” It is not surprising at all. They arrive at the same bogus conclusion faster. So in the past we saw these predictions for the year 2100. Now as computers become cheaper and faster they let their grad students write papers that predict the climate in the year 3000 (they have no other use for the CPU horsepower). In the future we will see, with this great new technique of outputting nonsense faster, simulations to the year 100,000 C.E. (or whatever atheists use instead of A.D.). And I forecast that these papers will tell you that you should be worried. Very worried. DirkH; I would be surprised by anyone who claims that this approach is in any way better or of a higher quality than the existing broken models. >>>>>>>>>>>>>>>>>> Your logic is that existing models don’t work, so it is impossible to build models that do. Why bother to even try. Mandelbrot already looked into this. He said it could not be done and spawned chaos theory in trying to explain why. davidmhoffer says: March 9, 2013 at 8:04 am “Your logic is that existing models don’t work, so it is impossible to build models that do. Why bother to even try.” As long as the prevailing theory is that the Earth’s climate is NOT controlled by solar cycles but is a freely oscillating chaotic system it does make indeed not one wit of sense to try to model it a hundred years into the future and expect any predictive skill. See also the definition of insanity according to I think Einstein. DirkH, Dinostratus, son of mulder: You guys are confused about what chaos theory says. It does not say that any sort of prediction is impossible. It says that predicting details that are sensitive to the initial conditions is impossible. Take Brad Marston’s example of the ideal gas equation: The motions of the individual molecules are subject to chaos theory (besides being simply way too numerous to follow!) and yet we can still make conclusions about the macroscopic behavior of the gas.. DirkH As long as the prevailing theory is that the Earth’s climate is NOT controlled by solar cycles but is a freely oscillating chaotic system >>>>>>>>>>>> That’s not what they are doing. So long as it is not necessary to spend $$$ as a result of its predictions without thoroughly verifying its output, then it sounds reasonable. “joeldshore says: March 9, 2013 at 5:46 pm You guys are confused about what chaos theory says. It does not say that any sort of prediction is impossible. It says that predicting details that are sensitive to the initial conditions is impossible.” I think you’ll be able to read in the example of the 3-body problem that I appreciate the point you are making but no amount of statistical methods will predict how the jetstream will react say over Europe and hence whether in 10 years or 20 years or 50 years or 100 years time Europe will be wetter or drier than now or warmer or colder than now, or windier or less windy than now ie something useful that we don’t know. Impress me with a type of prediction that will be useful from this method. joeldshore says: March 9, 2013 at 5:46 pm “DirkH, Dinostratus, son of mulder: You guys are confused about what chaos theory says. It does not say that any sort of prediction is impossible. It says that predicting details that are sensitive to the initial conditions is impossible.” So you run your model 20 times, average it, call it a multimodel ensemble mean and print it in the IPCC report as if that would improve anything. Excuse me while I laugh; tell me again how big the state space is. What you are doing is pseudoscience, plain and simple. “Take Brad Marston’s example of the ideal gas equation: The motions of the individual molecules are subject to chaos theory (besides being simply way too numerous to follow!) and yet we can still make conclusions about the macroscopic behavior of the gas.” I have yet to see any such result from climate science. Meanwhile climate science has the audacity, the chuzpe to tell us how to rebuild our energy sector. Climate scientists should be held liable for the damage it has already caused. .” Oh noes! The Horrors! I hear that any increase above 2 deg C is violating Schellnhuber’s planetary guidelines! Thermal runaway awaits us all! Joel Shore, how is the global average temperature time series that a GCM computes (assuming for the moment that average had any statistical meaning or were defined) not affected by the chaos in the system? What separates the low frequency part of the power spectrum (where we find the “climate” signal ) from the high frequency part (where we find unpredictable and chaotic weather)? How much dampening of high frequency fluctuation between model runs do you achieve in decibel when you average 32 model runs (assuming for the moment that the temperatures were normally distributed which they aren’t)? ‘.”’ This so-called “direct statistical simulation” is what I have been asking climate scientists to undertake for years now. I fervently hope that these scientists succeed. If they do, they will establish beyond the shadow of a doubt that our understanding of climate is inadequate for prediction or even for reasonable forecasts of climate. That is because climate scientists do not have reasonably well confirmed physical hypotheses, something analogous to the gas equation, for most forcings or feedbacks. We simply have not done the empirical research necessary to learn what water vapor does or what clouds do under conditions of rising CO2 concentrations. We need research on water vapor and cloud formation employing technologies along the lines of the Argo project but far more extensive than Argo. “Conceptually, the technique focuses attention on fundamental forces driving climate, instead of “following every little swirl,” Marston said. A practical advantage would be the ability to model climate conditions from millions of years ago without having to reconstruct the world’s entire weather history in the process.” Finally, there would be physics in the models. At present, apologists for CAGW tell us that the models are based on the best physics but fail to tell us that the physics does not actually appear in the models as rigorously formulated hypotheses along the lines of the gas equation. One great advantage of using such hypotheses in the models is that on occasion such hypotheses can be falsified. Introducing falsification into models runs would be a step toward bringing them into the arena of science. .” I am encouraged that Marston is sophisticated enough to address the mathematics and the computational problems. To talk with climate scientists, you could come away believing that the model is transparent to the scientist. That is far from the truth. Any model worth serious attention is based on heuristics that are not scientific principles and are not akin to them. A serious discussion of computer heuristics in climate models is long overdue. DirkH says: No…What climate scientists are doing is doing science. And, then it is up to us as a society to decide whether we want our public policy to be based on the best available science or whether we want it instead to be based on the opinions of people with ideological blinders on. What chaos theory tells you is that the exact trajectory that you follow is very sensitive to initial conditions. So, for example, if you run a numerical weather prediction model for 4 weeks and see what weather pattern you get, it will bear little resemblance to the actual weather pattern that occurs that day. (And, in fact, it will bear little resemblance to the weather pattern predicted by the very same model with slightly perturbed initial conditions.) However, the weather pattern it predicts will still be a reasonable one. Likewise, when a GCM is run over 100 years of rising greenhouse gases with different sets of initial conditions, the various ups-and-downs of the global average temperature look different in the different simulations but they all predict roughly the same climate 100 years hence. It is really not that complicated: If I do many different trials of flipping a fair coin 1 million times, then the specific pattern of heads and tails that I get will be different in the different trials, but the statistical behavior of the patterns will be basically the same. And, if I repeat the experiment with a coin that is now biased so that it lands heads more than tails, again the specific pattern of heads and tails that I get will be different in different trials, but the changes in the statistical behavior of the patterns from what I saw with the fair coin will be similar. “joeldshore says: March 10, 2013 at 4:40 pm ………What chaos theory tells you is that the exact trajectory that you follow is very sensitive to initial conditions.” It tells more than that, every iteration in a computer model contains numerical roundings so with chaotic systems at every point in say a 100 year iteration, the model’s “system state” does not reflect the initial conditions. By the time you’ve done 100 years it will reflect nothing like the initial state that was set. There is no way of knowing if the “initial state” the model reflects at any point is realistic let alone after 100 years. Hence, although the result may look “reasonble”, I contend you can draw no conclusion that will be useful. son of mulder: You may contend that, but you are wrong. We are not trying to predict the weather on some particular day 100 years from now (or even one particular year in the sense of whether one particular year will be a relatively warm El Nino year or a relatively cool La Nina year). We are trying to predict how the climate changes in response to changing greenhouse gases. The fact that the exact trajectory won’t be correct is irrelevant just like the exact trajectory is irrelevant to the question of how the climate differs between winter and summer here in Rochester.
https://wattsupwiththat.com/2013/03/08/statistical-physics-applied-to-climate-modeling/
CC-MAIN-2017-47
refinedweb
4,926
59.64
The C# 2.0 specification says The null literal evaluates to the null value, which is used to denote a reference not pointing at any object or array, or the absence of a value. The null type has a single value, which is the null value. But every version of the specification since then does not contain this language. So what then is the type of the null literal expression? It doesn't have one; the specification never says what the type of a null literal. It says that a null literal can be converted to any reference type, pointer type, or nullable value type, but on its own, considered outside of the context which performs that conversion, it has no type. When Mads and I were sorting out the exact wording of various parts of the specification for C# 3.0 we realized that the null type was bizarre. It is a type with only one value -- or is it? Is the value of a null nullable int really the same as the value of a null string? And don't values of nullable value type already have a type, namely, the nullable value type? 1 So already this is very confusing. Worse, the null type is a type that Reflection knows nothing about; there's a Type object associated with void which has no values at all, but none associated with the null type. It is a type that doesn't have a proper name, is in no namespace, that GetType() never returns, that you can't specify as the type of a local variable or field or method return type or anything. In short, it really is a type that is there for completionists: it ensures that every compile-time expression can be said to have a type. Except that C# already had expressions that had no type: method groups in C# 1.0, anonymous methods in C# 2.0 and lambdas in C# 3.0 all also have no type. If all those things can have no type, clearly the null literal need not have a type either. Therefore we removed references to the useless "null type" in the C# 3.0 specification. As an implementation detail, the Microsoft implementations of C# 1.0 through 5.0 all do have an internal object to represent the "null type". They also have objects to represent the non-existing types of lambdas, anonymous methods and method groups. This implementation choice has a number of pros and cons. On the pro side, the compiler can ask for the type of any expression and get an answer. On the con side, it means that sometimes bugs in the type analysis that really ought to have crashed the compiler, and hence been found by testing early, instead cause semantic changes in programs! My favourite example of that is that it is possible in C# 2.0 to use the illegal expression null ?? null. A careful reading of the specification shows that this expression should fail to compile. But due to a bug, the compiler fails to flag it as an erroneous usage of the ?? operator, and goes on to infer that the type of this expression is the null type, even though that expression is not a null literal. That error then goes on to cause many other downstream bugs as the type analyzer tries to make sense of the expression. In Roslyn we debated what to do about this; if I recall correctly the final decision was to make two APIs, one which asks "what is the type of this expression?", and one which asks "what is the type of this expression given a certain context?". In the first case, the null literal expression has no type and so null is returned; in the second, the type that the null literal is being converted to can be returned. - The reader who critically notes that it is question-begging to ask whether values of a given type have a type ought to instead applaud my consistency. Tautologies are by definition consistent. ↩ This seems somewhat like something you blogged about before, with regard to "Literal Zero", which is regarded differently from an integral-type constant whose value happens to be zero. Would I be correct in guessing that a Literal Zero also has no Type object associated with it, even though one can do things with a Literal Zero that cannot be done with any integral-type constant? Good question. The situations are similar but not identical. The literal zero clearly has a type; it's an int. However there are special rules that say that this particular expression has different conversion rules than the rules that normally apply to ints. The difference between conversions that are justified because of the type of an expression and conversion thats are justified because of some special lexical format of the expression is tricky and the spec has historically done a poor job of carefully noting the distinction. Mads and I made a lot of improvements in this area in the C# 3 and 4 specifications and my spies tell me that there are similar tweaks to the wording in the works for the C# 5 specification, which has yet to be released. I'm not sure what you mean that the C# 5 spec hasn't been released. Did you mean this () C# 5 spec is not official yet? When last I checked it hadn't been released, but there it is. Thanks for the link! If C# had defined a family of types CompilerIntLiteralZero (along with perhaps CompilerUnsignedLiteralZero, CompilerLongLiteralZero, etc.) which had an implicit widening conversions integer types, but also sported a few overloaded operators (e.g. CILZ+CILZ yields CILZ, etc.) how would the semantics of those compare with the present rules? I guess one would have to do something about "var foo = 0;" (lest "foo" become a CILZ), but that might be handled by defining an "InferType()" attribute and having CILZ marked as inferring type "int" [such an attribute could also be useful in things like Fluent interfaces, where it should be possible to access any single member of a returned object, but not persist it]. Would there be any other semantic differences? As for the null literal, not only is it universally convertible to any reference type; it also is regarded by the compiler as a legitimate value for any nullable type. Personally, I'm not sure I like the way that nullable types pretend to be reference-comparable to null [I think it's more confusing than "HasValue"], but it is what it is. I wonder if there were ever plans for a "Nullable" constraint that would have forced something to be either an object reference or a nullable struct. That would have been really sweet. I was always wondering why is Void type needed in the reflection, cannot (Type)null be used for the same purpose? What about those wondering why we have a null value in the first place? Was it considered too exotic to leave out? C# was designed to be familiar to users of C and C++, and to interoperate cleanly with existing COM and Win32 API code; all of that suggests that null references are a reasonable feature. Strange enough, my VS 2012 has just successfully compiled the following code: class Program { static void Main(string[] args) { string s = null ?? null; } } Well you can always target older versions of C# using new IDEs, so check what version the application is targeting, although my app is currently targeting 4.5 and I'm still getting this compiled. Roslyn however does not compile this statement. This is not true. You cannot use older C# compilers in newer Visual Studios. You can only target different versions of **.NET**, which is a completely separate matter and has nothing to do with the C# language. This is not strange; I did not say that I fixed the bug! Since the behavior of the code is essentially harmless we considered it not worth the bother to fix it. Well, you said "it is possible in C# 2.0 to use the illegal expression null ?? null", which provoked the implicit "... but not in C# 3.0 and higher": exception proves the rule. By the way, I wonder what was the logic behind specs not allowing `null ?? null`? Is it a part of some more general consideration? PS: I wish C# had non-nullable [class] references out of the box. Oh, I see why the sentence is confusing now. I'll try to re-word that. The problem is precisely that it is hard to figure out what the type of the expression should be. Suppose you have "string x = (null??null)" -- ok, is that legal or not? Since the right hand side is not the literal null, we cannot use the rule that says that the literal null is convertible to string. So what is its type? If it is object then this assignment should be illegal because you can't assign object to string. It's a mess, and the expression is useless, so it should be illegal. Why should the expression be illegal? Regard "null ?? anything" as equivalent to "anything", and the expression becomes "string x = null;" which is perfectly legal. Not sure such a thing would be produced in hand-typed code, but it's not implausible that it might be easier for a code generator to output such a thing in some scenarios than add code to replace it the expression with "null". Well... I don't like null, this "type" have some weirds behaviours, for example: string s = null; char[] c = null object f = s; object o = c; Console.WriteLine(f == o);//prints true Let the theory alone we (can I say we?) know this happens because in normal implementation null is represented by the zero address and so, 0 is equal to 0, but what if each type had it's own null (or default so empty strings and arrays are respective default) and null compared equal only to nulls of the same type? From the implementation perspective, you'd have to have a special "null reference" value that isn't 0 for each type, which means that initialization of reference fields and locals becomes more tricky and less efficient (but this is probably irrelevant in practice). However, I think it is the way it is because C# was trying to have semantics reminiscent of C++, and having different null values for different types would be a big surprise for everyone who has worked with nullable references before. Plus, there's always the question of how beneficial this is. Is it really that useful to have null references of the same type compare equal, but null references of different types to compare different? Is this really something that comes up in real code? Or is this just for pure aesthetics? Two of the *fundamental* rules of the type system in .NET (and also Java, btw) are that the default value for *any* type which can be stored in an array or structure field (including structures of any complexity) will be all-bytes zero, and for any pair of references types T and U such that T:U, a cast from T to U is *always* representation-preserving. Since all reference types are derived from Object, this implies that even if class types T and V are unrelated, (Object)default(T) will be indistinguishable from (Object)default(V). Those two rules considerably simplify some aspects of the .NET runtime (and the JVM), since they mean that the only information necessary to initialize a type at runtime is a single number (its size), and compilers don't have to generate any code for up-casts of reference types. Consequently, I expect them to apply to all future versions of both .NET and Java from now until the end of the universe. BTW, it might have been possible to change the behavior of String if that had been defined as a structure which contained a single field of type StringObject [which would behave as String does now] or perhaps Char[]. Had that been done, the default value of String could have behaved like an empty string (as it did in COM) rather than a null reference. The only 'problem' with such a scheme would be that passing strings as "object" would have added another layer of boxing unless the runtime special-cased such conversions (as it started doing in .NET 2.0 with Nullable[T]). If String were a value type as described, casting a default-valued instance it to Object could yield a reference to a zero-character string. You just did what I have done several times in the past and were bite so much that I fear it now, you take implementation in account when discussing the logic of a feature, solves a lot of problems, none of them the "root problem". What do you mean by "representation-preserving"? The value of the pointer doesn't change? IIRC this inst true if U is an interface or if T is a value type, now, if by "representation-preserving" you means that the default value of T must always compare equal to V then I disagree (overload the operador if this is required for a particular case). Now, that there are interfaces that complicates what that "feature" would simplify, what's the problem remaining? In the case of String, yes, it could be a struct and the runtime special case it (as it does for every value type), but the double indirection problem could also be solved in a different way, also, why structs exists to start with? As well as Eric is always telling us that mutable value types are a bad thing, on the other hand, any IEquatable non-nullable immutable reference type can be copied without chnaging the result, so why not let the run time choose when to handle today structs as value types or as reference types? BTW, I understand he argument that C# was designed as it was to be compatible with other languages, wich doesn't means it didn't inherit some bad features. I specifically limited my discussion of representation-preserving casts to *reference* types, since every value-type definition actually describes two related but different kinds of things: a type of storage location and a type of heap object. A value-type heap object is an instance of System.Object, but a value-type storage location holds neither an instance of System.Object nor a reference to one. A widening conversion exists from the storage location type to a heap reference, and narrowing conversions exist from some heap reference types to the storage-locations type, but the heap type and storage location type effectively exist in different universes. As for the significance of "representation preserving", the runtime assumes that if class type T derives from U, a variable of type T may be copied into one of type U simply by copying all the bits. This means that because for any class type T, default(T) must be all-bits zero and (Object)default(T) must also be all-bits zero, (Object)default(T) must equal default(Object). Thanks for clarifying "representation preserving", but how about interfaces? Or (If ever implemented in C#, but I guess some of the same reasons prevent it) multiple inheritance? While convenient (altough already there are exceptions) for the CLR representation preserving doesn't stop from implementating many other features? A major part of the reason C# does not support multiple inheritance of anything but interfaces is that doing so would either require that upcasts not be identity-preserving or that virtual methods behave oddly. Assume X:W, Y:W, and Z:X,Y. W has a virtual method Foo which is overridden by X and Y but not Z, and zz is of type Z. It would be odd for (W)(X)zz.Foo to invoke Y's override, or for (W)(Y)zz.Foo to invoke X's override, but one of those things would have to happen if (W)(X)zz and (W)(Y)zz are identical. Interfaces avoid this problem by requiring that every class which implements an interface provide its own implementation of all methods and properties defined therein (as well as its own backing fields for holding any state implied thereby). Note that if Z had provided its own override of Foo, there would have been no problem, since zz.Foo, (X)zz.Foo, (Y)zz.Foo, (W)(X)zz.Foo, (W)(Y)zz.Foo, (W)zz,Foo, and even (W)(Object)Foo, and (Z)(Object)Foo would all refer to the method defined in Z. Let the identity-preserving problem rest for a while, the other reason for not supporting multiple-inheritance is the diamond problem? Ok, let's say interfaces comes with a "default implementation" being like classes except they have no state, yes, an interface A may define a method X, interfaces B and C inherits from A and provides a default implementation for X, the interface D inherits from B and C, yes the diamond problem may happen, the C++ solution may result in some weird behaviour, but is this problem that common in real life? Because I know of a problem wich also causes weird behavior and is pretty common in current C# version, for example, in the Enumerable class there is the Skip extension method taking an IEnumerable as parameter wich works on a pretty inefficient way, if there was an efficient implementation of Skip in IList (or any custom list) and a method wich take an IEnumerable as parameter and calls the Skip method would call the inefficient version since no dynamic dispatch is possible in this case. I guess this spefici problem can be solved by other means other than interfaces with implementations or multiple inheritance but those other means are even worse. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1407 Hi Eric. While reading this entry I initially violently disagreed with you, because in my thinking it wasn’t possible to have a value without a type. Then I discussed this with a friend and he opened up a new way of looking at it to me: namely, the compiler turns the expression (whether it be a null literal or a lambda expression or anything else that you say has “no type”) into a value that exists in the universe of the program that is being compiled. Before this step, there is no such value; the running program has no concept of the lambda expression before this conversion. Therefore, I think I now understand what you’re saying and I think the term “conversion” is misleading. The “conversion” that you refer to here is no different from turning the parse-tree node “5” into a value of type “int” (which is not called a “conversion”), and it is very different from the operation denoted by the code “(object) value” (which *is* called a “conversion”, and it does not matter whether it’s a boxing conversion, a reference conversion, or any other conversion). Therefore, I think calling this a “conversion” is a category error. You are not converting from anything that exists in the program’s universe. I take your point, but the compile time analysis is by its nature often about things that have no existence at runtime. When you say class C<T> { T t; }the type of t at compile time is T, but it surely will not be "T" at runtime; there is no such beast. The compile-time type system is a proof system; it's a bunch of logical deductions that are made according to formal rules. The run-time type system is a tag on each object. Obviously they are designed to be strongly related to one another, but they are not identical by any means. Eric, you say there are downstream bugs resulting from the null ?? null one, and yet the null ?? null still works. Does this mean that the downstream bugs are also still there? If so, what are they? Would be interesting to know. The bugs we knew about we fixed. I don't recall the exact details but it was something like "var x = null ?? null;" would infer x to have the null type, and then crash during code generation. That wasn't the bug, but it was something like that, where type inference would infer "the null type" as the type of something, and then things would go bad from there. What would you think of the idea of allowing types to indicate (via attribute) that the compiler should exclude them from type inference (perhaps substituting some other type)? It would seem like a simple thing to implement, with a pretty big payoff both for internal situations you describe, but also many other situations as well. There are a number of situations in which "Foo.This.Bar()" would make sense, but either the return value of "Foo.This" shouldn't be used for any purpose except for one member-access operation which should be performed before anything else, or else the Bar() method could be simplified if it knew that it was acting upon an object to which no other reference existed anywhere in the universe (a situation which would apply in "Foo.This.Bar()" but would not apply if the value of "Foo.This" were persisted). Why is this case different from var x = null? Pingback: F# Weekly #30 2013 | Sergey Tihon's Blog You didn't explicitly say it, but Roslyn seems to disallow var a = null ?? null from compiling (without a cast on one of the operands). Tested using the C# Interactive tool. That's correct; we took the breaking change in Roslyn. I also wrote same In VS 2012 IDE as var x = null; and var x = null ?? null; . Both shown red underline below x. An error message was "Can not assign value to implicitly-typed local variable. Is it causing to create type of by assigning null value to x? Or is there any other thing which is happening? Since neither of those expressions have types, the compiler is unable to infer what type you intended to replace the "var". Pingback: NULL: Direnmenin Otopsisi | Senselogi©
http://ericlippert.com/2013/07/25/what-is-the-type-of-the-null-literal/
CC-MAIN-2013-48
refinedweb
3,736
60.65
Readers, how are you doing today? I’ve been looking through websites for the past months for a video editing software to make my Youtube videos. And then I found this package that I will introduce you to today. Hopefully, it will blow your mind by showcasing the power of the Python moviepy library. Table of Contents - 1 Challenges in video editing - 2 What is Python moviepy? - 3 Python MoviePy Tutorial – Automating Video Editing - 4 Ending Note Challenges in video editing There are many objectives to video editing. Some of the popular processes that can be solved by automation include: - composition many videos in a complicated but similar manner - automate video or GIF creation on a web server (Django, Flask, etc.) - tedious tasks, such as tracking objects by title insertions, cutting scenes, making end credits, subtitles, etc. - code your own video effect templates. - create animations from images created by another library (Matplotlib, seaborn, etc.) What is Python moviepy? MoviePy is a video editing library in Python: cutting, concatenating, adding names, video compositing (a.k.a. non-linear editing), video encoding, and custom effect development. For some examples of its use, see the MoviePy gallery. All the most popular audio and video formats, including GIF, can be read and written by MoviePy and it runs on Windows/Mac/Linux. Features of Python moviepy: - Plain and intuitive. On one side, simple operations can be performed. For beginners, the code is easy to read and easy to understand. - Flexible. You have complete control over the video and audio clips, and it is as simple as Py to make your own effects. - Portable. The code uses software that is very popular (Numpy and FFMPEG) and can run on almost any computer with almost any Python version. What it can’t do: - read from a webcam - render a video live on a distant machine Python MoviePy Tutorial – Automating Video Editing Let’s get right to the meat of the tutorial now! How to automate your video editing process with the use of Python moviepy. 1. Join Video Clips with Python Two easy ways to bring clips together are to concatenate them (to play them in a single long frame, one after the other) or stack them (to them side by side in a single larger clip). The last clip is a clip that runs one after the other in clips 1, 2, and 3: from moviepy.editor import * clip1 = VideoFileClip("myvideo.mp4") clip2 = VideoFileClip("myvideo2.mp4").subclip(50,60) clip3 = VideoFileClip("myvideo3.mp4") final_clip = concatenate_videoclips([clip1,clip2,clip3]) final_clip.write_videofile("concat.mp4") The subclip function here lets you cut a part of the video. The syntax of subclip is subclip(self, t_start=0, t_end=None). The time can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t_end is not provided, it is assumed to be the duration of the clip. Notice that there is no need for the clips to be the same height. If they aren’t, they will all appear centred in a clip wide enough to accommodate the largest of them, optionally filling the borders with a colour of your choice. 2. Stacking Video Clips You can split your screen into multiple videos playing on each too. It’s called stacking, and is done with clip_array function: from moviepy.editor import VideoFileClip, clips_array, vfx clip1 = VideoFileClip("myvideo.mp4").margin(10) clip2 = clip1.fx( vfx.mirror_x) clip3 = clip1.fx( vfx.mirror_y) clip4 = clip1.resize(0.50) final_clip = clips_array([[clip1, clip2], [clip3, clip4]]) final_clip.resize(width=480).write_videofile("stacked_vid.mp4") 3. Floating Video Windows If you want to float video windows on top of each other, you can use the CompositeVideoClip method in the moviepy library. video = CompositeVideoClip([clip1,clip2,clip3]) Think of this as layering videos on top of each other. If all videos are of different sizes, you’ll see all the videos playing behind each other. But, if clip2 and clip3 are the same size as clip1, then only clip3 at the top of the video is visible… unless clip3 and clip2 have masks that conceal parts of them. Notice that the composition has the scale of its first clip by chance (as it is generally a background). But often you want to make your clips float in a larger composition, so you can specify the final composition scale. 4. Set Start Time for Video Clips You can also set the time for the floating window to appear: clip1 = clip1.set_start(5) # start after 5 seconds So we can compose a clip like: video = CompositeVideoClip([clip1, # starts at t=0 clip2.set_start(10), # start at t=10s clip3.set_start(15)]) # start at t=15s 5. Adding Transitions to Video Clips using Python MoviePy In the example above, maybe clip2 will start before clip1 is over. In this case you can make clip2 appear with a cross-fade transition of one second: video = CompositeVideoClip([clip1, # starts at t=0 clip2.set_start(10).crossfadein(1), clip3.set_start(15).crossfadein(1.5)]) The crossfade is the most popular transition ever, but there are others in the library. Ending Note This is going to be the most interesting library for you to work on if you are into media and video editing. Play around with the MoviePy library enough and you might even be able to automate most of your repetitive workflows. Hope you learned something interesting here! Let us know your thoughts in the comments below.
https://www.journaldev.com/46531/python-moviepy-video-editing
CC-MAIN-2021-17
refinedweb
916
73.98
Versalino Uno Quickstart About the Versalino Uno electronics & microcontroller drivers, but to also be used for the development of cost effective end user devices. The Versalino Uno, just like the Arduino Uno is based on the Atmel Atmega328P-PU, and is fully compatible with the Arduino IDE (integrated development environment) and just about any code that works on the Arduino Uno. However what the Versalino Uno does differently is it takes a minimalistic and configurable approach to design. It’s pinout, and design reduce costs and size even though they are made in the USA. Versalino Uno Pinout Have you ever had to re-solder pins on an Arduino Shield just so that you could get it working with another shield? Have you ever had an Arduino Shield that didn’t bother to let you stack another shield on-top of it? Well with the Versalino Uno those are problems of the past. Listed above are the Arduino equivalent pins to each of the Buses in case you wanted to forego the use of the Versalino library. The Versalino Uno is not just another Arduino clone. The Versalino Uno was completely re-engineered from the ground up to address real design problems that folks face every day on other systems. The key advantage of the Versalino line over other Arduino and Arduino compatible clones is that we developed a standardized BUS system that makes it possible to design a load board to take less than half of the available pins without sacrificing the ability to do tons of stuff with it. The Versalino BUS structure also allows the shape and size of your Versalino board to change without losing the compatibility with your loadboards. That is why every board that has been designed for the Versalino Uno is 100% compatible with the Versalino Nano (even though the Versalino Nano is half it’s size). Finally the COM port on the Versalino is designed to provide you with plug and play Bluetooth compatibility with the Virtuabotix BT2S Slave and Virtuabotix BT2S Master, this is an extremely useful, and cost effective way to convert your project from a wired to a wireless solution. This additional port structure allows for the design of many serial communication devices that can be added to your system without interfering with other boards. Getting started guide: You will need the following things to follow this guide all the way through: A computer with a USB port, the Versalino Uno, the Versalino FTDI, and an LED. What is an IDE?If you are new to programming you may be wondering what an IDE is. An IDE is an Integrated Development Environment, which aside from being a mouthful, is an extremely useful tool that is used to allow an individual to use high level code like C++ to develop programs more quickly. In the case of the Arduino IDE you are provided with a simplified C++ based approach to developing code that can be quickly developed, and uploaded to your Versalino projects. As you learn how to use this system you can even go as far as writing your own libraries, and taking advantage of direct machine coding techniques, and native C++ structures and classes. Install the Arduino IDE on your computerThe first step before you can do anything with the Versalino, or any other Arduino compatible system is to install the Arduino IDE. Download and Install the Arduino IDE with the link belowThle Arduino IDE can be found for Windows, Mac OS or Linux at the following link -->Click to go to download page <--. Follow the instructions for your specific operating system as provided by Arduino in the link above. Note: Though there is a windows installer available now, most windows users have reported that they had a better experience downloading and extracting the zip archive version. This is as of version 1.0.5 and the windows installer may become more stable over time. What is an FTDI?FTDI itself is actually stands for Future Technology Devices International, a company whose initials (FTDI) have become synonomous with USB to Serial converters. The Versalino FTDI is named so because of the fact that it uses an FTDI chip as it's main processor. The reason you need a USB to Serial converter (like the Versalino FTDI) is to load programs onto, and communicate with your Versalino, and other Serial UART enabled devices. The key advantages of the Versalino FTDI over alternative versions is the fact that it has a built in Virtuabotix BT2S Com port for instant Bluetooth to Bluetooth connectivity between your Versalino and PC, and the fact that it has easily selectable voltage levels. Install the FTDI (USB to Serial) Drivers on your PCThough we recommend and assume you are using the Versalino FTDI, the following devices can also be used to program the Versalino: The Arduino USB Serial Light Sparkfun or Adafruit versions of the FTDI programmer And for the adventurous the Atmel AVRISP programmer Note: The Arduino USB Serial Light uses different drivers than those we will be discussing in this section, those drivers can be found in the driver folder of the Arduino IDE directory that you installed in the last step. Installing FTDI drivers on Windows 7 or laterIf you are using Windows 7 or later, chances are that all you have to do is plug in the Versalino FTDI and your system will identify and install the drivers for you automatically, but if you are on Windows XP or earlier, or your system did not auto-detect the device drivers then you will want to install the drivers manually below. Installing FTDI drivers on older systems, and Mac OS/LinuxThe FTDI Drivers can be found and installed from the following location -->Click to go to download page and select drivers from the appropriate operating system to install <--. Check if your drivers installed properlyNow that you have installed your drivers you can plug the Versalino FTDI into the computer using the USB Mini B cable (make sure you have the 5V setting selected with the switch on the back). If everything is installed correctly you should now have a new COM port available on your computer. What is a Library?A library (in the Arduino IDE) is a specially structured piece of code that is intended to add or improve the existing functionality of your C++ environment. The libraries often handle complex problems, and device communication so that you do not have to handle low level, complex, or repetitive tasks directly in your code. Libraries can do anything that you can do in your sketches, or in C++ in general (as long as there is room on your device), but are generally used to store classes and functions to be used in your projects. Why should I install the Versalino library?First of all, you do not have to install the Versalino library for you to be able to use the Versalino. For all intents and purposes you can program the Versalino as if it were an Arduino Uno, but you would lose the ease of use of the pin-out, and may have to dedicate more time to selecting pin numbers, especially if you would like to take advantage of the Versalino's unique bus structure. Because of this we think it is much easier to use the Versalino library to take advantage of the simplified bus structure, and the direct use of standard Versalino pin names in your code. Download and extract the library-->Click to go to the Versalino product page and download the latest Versalino library <--. Extract the zip folders contents and makes sure that you have just the library name as your folder. In example if your extracted contents folder was named VersalinoV1S2B then you will want to remove the V1S2B from the folder name, and then check inside that folder. If that folder contained a folder named versalino then you would use the subfolder as your library folder. If you don't use the folder that actually contains the .h and .cpp files, or you use a folder that is not named exactly the same as the main .h/.cpp file in that directory then the library will not work after it has been installed (because the IDE will not be able to find the appropriate files for installation. Installing libraries on WindowsIf you are using the Arduino IDE on windows you simply need to navigate to the folder where the Arduino.exe is and drop the Versalino libraries folder into the "Libraries" folder. So drop the library folder into the following directory on windows: arduino-1.x.x-windows (where the .x.x is the IDE version you installed) -> arduino-1.x.x -> libraries Installing libraries on Mac OS/LinuxInstalling a library in Mac OS or Linux can be a little more tricky, especially if this is the first time you are using the Arduino IDE. Unlike on Windows, your IDE does not have an accessible folder structure, so you will have to run the IDE before you can proceed. Once you have opened the Arduino IDE it will create a Sketchbook folder on your profile, you can easily find the location of this folder by using the top menu File -> Preferences and looking at the address in the sketchbook location at the top of the Preferences window. Once you have navigated to your Sketchbook directory you will have to create a subdirectory named "libraries" if one hasn't already been created for you. Now you can simply drop the Versalino (or other) library folder into that directory. Check if your library is installed properlyRegardless of the operating system you used once the library folder has been placed in the "libraries" directory you will have to ensure that the Arduino IDE has been closed, and then re-open it before the library can be used. If you have installed the library properly you should now see a Versalino subsection in the File -> Examples submenu. If your install did not work, then you likely need to check the folder name, and make sure that you did not copy extra extracted folders into the libraries directory. Adjust and retry until you see the Versalino subcategory on the Examples menu. Congratulations!With all the boring stuff out of the way, it is finally time to get things rolling. Keep following the steps below to start doing something with your Versalino. Thanks again for choosing the Versalino, and best of luck with your nerdly adventures! Now for your very first Versalino project! Make sure you have your Versalino Uno, Versalino FTDI, and an LED before proceeding. Connect the Versalino FTDI First connnect the Versalino FTDI to your computer with your USB Mini B cable and make sure that a new COM port is available on your computer like you did in earlier setup steps. Once you are satisfied with the setup you can plug it into the Versalino Uno/Versalino Nano. Make sure that the pins of the Versalino FTDI match up with the pins on the Versalino Uno/Nano PGM port (I.E. match G pin to G pin, V pin to V pin and so forth). If you have it lined up correctly the voltage selector will be facing toward the outside of the Versalino board. WARNING: DO NOT CONNECT THE VERSALINO FTDI TO THE VERSALINO BACKWARDS, OR SADNESS AND MAGIC BLUE SMOKE MAY RESULT. IF YOU CONNECTED IT THE WRONG WAY QUICKLY DISCONNECT THE USB AND TRY AGAIN THE CORRECT WAY. Open the Arduino IDEWhen you open the Arduino IDE it should conveniently create an empty sketch for you. A sketch is just the name Arduino gave to their .ino files which are the files you save your simplified C++ code for the Arduino and Versalino platforms with. Your code should look something like this once you have it copied properly into the Arduino IDE. Also note the menu at the top of the window which you will be using to configure and upload in the next part of this step. Copy the following lines of code into the empty sketch: #include <Versalino.h> //this loads the Versalino library // we will be connecting the LED to BUSA pin D1 // so we declare it as a variable to use later // just in case we decide to change the pin or BUS int led = BUSA.D1; // } Check compiler settingsIf you are programming the Versalino Uno ensure that "Arduino Uno" is selected from the "Tools -> Boards" menu. If you are programming the Versalino Nano ensure that "Arduino Nano W/ ATmega328" is selected from the "Tools -> Boards" menu. Also check to make sure that the COM port of your Versalino FTDI is selected from the "Tools -> Serial Port" menu. Upload the Arduino Sketch to your VersalinoNow you should be ready to program your Versalino with your newly created sketch. Select "File -> Upload" and wait while your sketch is compiled, and uploaded to the Versalino. If you have set up everything correctly up to this point you should see a short series of blink red and green lights on the Versalino FTDI indicating that the program is being loading through the transmit and receive lines of the serial port. Once you see "Successfully Uploaded" at the bottom of the screen you are good to move onto the next step. Before we connect the LED to the VersalinoNOTE: First disconnect the USB from the Versalino FTDI before plugging anything into the Versalino BUS. An easy way to determine which side of the LED is supposed to connect to Ground, and which is supposed to connect to Vdd or your IO pin you can actually look at the length of the LED leads (which means the pins that come off of the LED). The shorter of the two pins is always ground, and the longer goes to the IO pin or VDD. If your leads have been clipped you may have to check with your manufacturer on what the notch indicator on the LED body means, some LEDs have reverse notch indicators, you shouldn't assume that your LED is the same as another. If you can see inside your LED however the larger of the two elements is always ground so you may be able to tell them apart that way. (Alternatively you can try connecting the LED in both directions and pick the one that works 😀 ). Note: If you are using a high powered LED, or more than one LED you should put a resistor in series to prevent too much current from being drawn from your Versalino, but with a single LED that draws 20 mA or less you can safely connect it directly to your IO pin (IO pin means input output pin). Connect the LED Watch in awe as your first project comes to lifeNow that everything is plugged in you can reconnect the USB cable, and watch as your LED begins to blink just as you told it to. Feel free to bask in the mysterious glow as long as you see fit, and congratulations on finishing your first project! Congratulations, now you know how to load a sketch onto the Versalino Uno. You will be amazed by the world of devices and projects that you have just opened up. We can’t wait to see what you can achieve with the Versalino in your toolbox! Below are just a few of the projects that other nerdly heroes like yourself have already done with the Versalino, be sure to share your projects with us if you want them added to the list. Carlos said on December 10, 2013 Hello Adrian,This is my firs Arduino and I also want to make exactly the same thing as you did but i have stbemlud
https://versalino.com/versalino-uno-quickstart/
CC-MAIN-2018-13
refinedweb
2,635
55.07
Write code that declares, constructs and initializes arrays of any base type using any of the permitted forms, both for declaration and for initialization. Arrays in Java are similar in syntax to arrays in other languages such as C/C++ and Visual Basic. However, Java removes the feature of C/C++ whereby you can bypass the [] style accessing of elements and get under the hood using pointers. This capability in C/C++ , although powerful, makes it easy to write buggy software. Because Java does not support this direct manipulation of pointers, this source of bugs is removed. An array is a type of object that contains values called elements. This gives you a convenient bag or holder for a group of values that can be moved around a program, and allows you to access and change values as you need them. To give a trivial example you could create an array of Strings, each one containing the names of members in a sports team. The array can be passed into methods that need to access the names of each team member. If a new member joins the team, one of the old names can be modified to become that of the new member. This is much more convenient than having an arbitrary number of individual variables such as player1, player2, player3 etc Unlike variables which are accessed by a name, elements are accessed by numbers starting from zero. Because of this you can "walk" through an array, accessing each element in turn. Arrays are very much like objects, they are created with the new keyword, and have the methods of the great grandparent Object class. Arrays may store primitives or references to objects. Every element of an array must be of the same type The type of the elements of an array is decided when the array is declared. If you need a way of storing a group of elements of different types, you can use the collection classes which are a new feature in the Java2 exam, and are discussed in section 10. You can store an array of object references, which you can access, extract and use like any other object reference. The declaration of an array does not allocate any storage, it just announces the intention of creating an array. A significant difference to the way C/C++ declares an array is that no size is specified with the identifier. Thus the following will cause a compile time error int num[5]; The size of an array is given when it is actually created with the new operator thus int num[]; num = new int[5]; You can think of the use of the word new as similar to the use of the word new when initialising a reference to an instance of a class. The name num in the examples is effectively saying that num can hold a reference to any size array of int values. This can be compressed into one line as int num[] = new int[5]; Also the square brackets can be placed either after the data type or after the name of the array. Thus both of the following are legal int[] num; int num[]; You can read these as either An integer array named num An integer type in an array called num. You might also regard it as enough choice to cause confusion This is particularly handy if you are from a Visual Basic background and are not used to constantly counting from 0. It also helps to avoid one of the more insidious bugs in C/C++ programs where you walk off the end of an array and are pointing to some arbitrary area of memory. Thus the following will cause a run time error, ArrayIndexOutOfBoundsException int[] num= new int[5]; for(int i =0; i<6; i++){ num[i]=i*2; } The standard idiom for walking through a Java array is to use the length member of the array thus int[] num= new int[5]; for(int i =0; i<num.length; i++){ num[i]=i*2; } Just in case you skipped the C/C++ comparison, arrays in Java always know how big they are, and this is represented in the length field. Thus you can dynamically populate an array with the following code int myarray[]=new int[10]; for(int j=0; j<myarray.length;j++){ myarray[j]=j; } Note that arrays have a length field not a length() method. When you start to use Strings you will use the string, length method, as in s.length(); With an array the length is a field (or property) not a method. Arrays in Java always start from zero. Visual Basic arrays may start from 1 if the Option base statement is used. There is no Java equivalent of the Visual Basic redim preserve command whereby you change the size of an array without deleting the contents. You can of course create a new array with a new size and copy the current elements to that array. An array declaration can have multiple sets of square brackets. Java does not formally support multi dimensional arrays, however it does support arrays of arrays, also known as nested arrays. The important difference between multi dimensional arrays, as in C/C++ and nested arrays, is that each array does not have to be of the same length. If you think of an array as a matrix, the matrix does not have to be a rectangle. According to the Java Language Specification () "The number of bracket pairs indicates the depth of array nesting." In other languages this would correspond to the dimensions of an array. Thus you could set up the squares on a map with an array of 2 dimensions thus int i[][]; The first dimension could be X and second Y coordinates. Instead of looping through an array to perform initialisation, an array can be created and initialised all in one statement. This is particularly suitable for small arrays. The following will create an array of integers and populate it with the numbers 0 through 4 int k[]=new int[] {0,1,2,3,4}; Note that at no point do you need to specify the number of elements in the array. You might get exam questions that ask if the following is correct. int k=new int[5] {0,1,2,3,4} //Wrong, will not compile! You can populate and create arrays simultaneously with any data type, thus you can create an array of strings thus String s[]=new String[] {"Zero","One","Two","Three","Four"}; The elements of an array can be addressed just as you would in C/C++ thus String s[]=new String[] {"Zero","One","Two","Three","Four"}; System.out.println(s[0]); This will output the string Zero. Unlike other variables that act differently between class level creation and local method level creation, Java arrays are always set to default values. Thus an array of integers will all be set to zero, an array of boolean values will always be set to false. Thus the following code will compile without error and at runtime will output 0. public class ArrayInit{ public static void main(String argv[]){ int[] ai = new int[10]; System.out.println(ai[0]); } } By contrast with primitive variables, the following code will throw a compile time error with a message something like “variable i might not have been initialized” ublic class PrimInit{ public static void main(String argv[]){ int i; System.out.println(i); } } How can you re-size an array in a single statement whilst keeping the original contents? 1) Use the setSize method of the Array class 2) Use Util.setSize(int iNewSize) 3) use the size() operator 4) None of the above You want to find out the value of the last element of an array. You write the following code. What will happen when you compile and run it? public class MyAr{ public static void main(String argv[]){ int[] i = new int[5]; System.out.println(i[5]); } } 1) Compilation and output of 0 2) Compilation and output of null 3) Compilation and runtime Exception 4) Compile time error You want to loop through an array and stop when you come to the last element. Being a good Java programmer, and forgetting everything you ever knew about C/C++ you know that arrays contain information about their size. Which of the following can you use? 1)myarray.length(); 2)myarray.length; 3)myarray.size 4)myarray.size(); Your boss is so pleased that you have written HelloWorld he she has given you a raise. She now puts you on an assignment to create a game of TicTacToe (or noughts and crosses as it was when I were a wee boy). You decide you need a multi dimensioned array to do this. Which of the following will do the job? 1) int i =new int[3][3]; 2) int[] i =new int[3][3]; 3) int[][] i =new int[3][3]; 4) int i[3][3]=new int[][]; You want to find a more elegant way to populate your array than looping through with a for statement. Which of the following will do this? 1) myArray{ [1]="One"; [2]="Two"; [3]="Three"; end with 2)String s[5]=new String[] {"Zero","One","Two","Three","Four"}; 3)String s[]=new String[] {"Zero","One","Two","Three","Four"}; 4)String s[]=new String[]={"Zero","One","Two","Three","Four"}; What will happen when you attempt to compile and run the following code? public class Ardec{ public static void main(String argv[]){ Ardec ad = new Ardec(); ad.amethod(); } public void amethod(){ int ia1[]= {1,2,3}; int[] ia2 = {1,2,3}; int ia3[] = new int[] {1,2,3}; System.out.print(ia3.length); } } 1) Compile time error, ia3 is not created correctly 2) Compile time error, arrays do not have a length field 3) Compilation but no output 4) Compilation and output of 3 4) None of the above You cannot "resize" and array. You need to create a new temporary array of a different size and populate it with the contents of the original. Java provides resizable containers with classes such as Vector or one of the members of the collection classes. 3) Compilation and runtime Exception You will get a runtime error as you attempt to walk off the end of the array. Because arrays are indexed from 0 the final element will be i[4], not i[5] 2) myarray.length; 3) int[][] i=new int[3][3]; 3)String s[]=new String[] {"Zero","One","Two","Three","Four"}; 4) Compilation and output of 3 All of the array declarations are correct, if you find that unlikely, try compiling the code yourself The Java Language Specification This topic is covered in the Sun tutorial at Jyothi Krishnan on this topic at Bruce Eckel Thinking In Java Connecticut State University
http://www.jchq.net/certkey/0101certkey.htm
crawl-002
refinedweb
1,815
58.21
i don't think you can declare your loop variables inside the loop, try doing it this way int i = 0; for(i; i<8; i++) i don't think you can declare your loop variables inside the loop, try doing it this way int i = 0; for(i; i<8; i++) malloc will return a NULL pointer if the size of the block to be allocated is zero, or if there was insufficient memory #include <iostream> using namespace std; int mult(int a, int b); int main(){ int selection; int num1; it has both ftp and http sections, the data can be easily moved to either I have written a file processing script and now i'd like to have the program retrieve the data files from the server itself, instead of me having to pull the updated files myself at the end of the... again READ THE INFORMATION HERE!!!!! try looking up the return type of the length function......I think you'll be able to figure out what you are doing wrong from there. rather, you should look at your loop, specifically at your two if conditions.......your loop only executes once, no matter what number you enter descriptions of various C header files try renaming the variable int gcf ..... its the same as the name of your function I have some flexibility with my choice of parameters, If i pass the array directly, than i can use sizeof() to determine the number of elements By error I mean something went wrong, whether it was that system couldn't open your command interpreter or some other reason....the way to tell what went wrong is to check errno (read through the... love simple answers.....i'm oblivious sometimes! Thanks a bunch! System() has two possible returns it looks like either 0 (or whatever your compiler evaluates to true) or -1 where -1 indicates an error so if... Say i have a function that is being passed two pointers to two separate arrays of doubles and that these arrays occupy the same ammount of memory. how can I: Initialize a third pointer to the... preprocessor directives? #ifndef MAZE_H #define MAZE_H class Maze{ }; Please post your compiler errors typedef struct{ int wordCounter; string theWord; }Wrd; [NOTE] Your original version was correct on the definition of the struct, just my bias to declare structs in the manner... thanks for the clarification. been trying to teach myself windows programming..still have a long ways to go. Firstly: Why is that not valid? strcmp returns an interger value of 0 if the char* are the same. [NOTE] woops, sorry i was reading your corrected code instead of his original, sorry about that.... Try using the conversion function A2T() instead of just trying to force a TCHAR the factorial (!) is a mathematical operator that carries out degenerative multiplication. I.E. N!= N*(N-1)*(N-2)*(N-3)*....*(N-(N-1)). ahh thanks daved.....you actually just solved a huge mystery in a processing program i was working on, i was wondering why i always had to subtract 2 from the return of size() instead of one just 1... I see, this can be done with just one data structure to store your input, but as with most programming problems there are a million different ways to do the same thing! :) You should feel free to do...
https://cboard.cprogramming.com/search.php?s=e9ed1947006ce946b6c26962da8206cd&searchid=2967109
CC-MAIN-2020-10
refinedweb
564
68.2
Jane Medium : By designing a simple test circuit , Verified MicroPython stay MM32F3273 Running on . It is preliminarily confirmed that it can run the transplanted MicroPython. key word: MM32F3273,MicroPython,STM32,Bootloader,ISP stay In the morning Design with SD Card MM32F3277 MicroPython Experiment board , Now prepare to test based on MM32F3273(LQFP-48) Run on a pin encapsulated circuit MicroPython System . this 5 slice MM32F32773273 The application was sent with the help of smart Su Yong . ▲ chart 1.1.1 Schematic diagram of test version The following is to realize the design of rapid plate making PCB chart . ▲ chart 1.1.2 A single chip used for rapid plate making PCB The design ▲ chart 1.1.3 After one minute plate making , Then weld for testing Use MM32-LINK, The future comes from smart MicroPython Download to MM32F3272 in . Because it comes from MindMotion Of MicroPython Need to use an external high-frequency crystal oscillator , Therefore, the crystal oscillator signal should be measured after power on . ▲ chart 1.2.1 Crystal oscillator 8MHz Clock signal stay UART1 Of TX It should be possible to measure REPL Prompt signal given after power on . The following can be seen through the oscilloscope after power on UART1-TX Transmitted waveform . Prove it MicroPython Yes MM32F3272 It's up and running . ▲ chart 1.2.2 Measure after power on UART1-TX Transmitted waveform MM32 Also has the UART-ISP function , So whether it can use STM32 Of UART-ISP Corresponding BootLoader Download the program ? So let's test that out . ** USBBT Link error 1. ** USBBT erase pages error ! 1 And that proves that ,MM32 Of UART-ISP And STM32 Of UART-ISP Not compatible . system Make a transfer interface , Test in MM32F3272G6P Running on MicroPython. from machine import Pin,UART import utime led = Pin('PB2', mode=Pin.OUT_PUSHPULL) btn = Pin('PB8', mode=Pin.IN_PULLUP) print("Test Pin In/Out.") while True: utime.sleep_ms(100) led.low() utime.sleep_ms(100) led.high() ▲ chart 2.1 Running results through Designed a simple test circuit , Verified MicroPython stay MM32F3273 Running on . It is preliminarily confirmed that it can run the transplanted MicroPython. stay be based on MM32F3273 Of MicroPython Experimental circuit board - Work is not normal , The specific reason is not clear . ■ Links to related literature : ● Related chart Links :
https://pythonmana.com/2021/11/20211125110148574H.html
CC-MAIN-2022-21
refinedweb
382
57.67
32136/sending-keys-in-windows-10-iot So, I developed a capacitive I2C keyboard for a Raspberry Pi 2 with Windows 10 IoT, so when my I2C controller detects a keystroke I need to send a key to the current page. In windows forms I have used: SendKeys.Send("{ENTER}"); How can I send keys? Sorry, but it is not allowed in UWP due to some APIs restriction to be called only by user interaction. Instead, you can programmatically write text to the textboxes like: // To simulate key 'A' and 'B' Textbox1.Text += 'A'; Textbox1.Text += 'B'; // To simulate backspace if Textbox contains any character if (Textbox1.Text.Length > 0) { Textbox1.Text = Textbox1.Text.Remove(Textbox1.Text.Length - 1); } Problem with this snippet is, you can't simulate special key like ALT, CTRL, F1-F12, Shift and WinKey. On Windows IoT you have to use Windows.Devices.SerialCommunication namespace ...READ MORE It is possible, but you should understand ...READ MORE Of course, you, can! That is, in ...READ MORE You can take a look at this ...READ MORE Since it uses SPI, there shouldn't be ...READ MORE I was looking for a solution too, ...READ MORE It can be done by making changes ...READ MORE You'll be glad to know that C# ...READ MORE You could follow this link here to ...READ MORE All you have to do is stop and ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/32136/sending-keys-in-windows-10-iot?show=32138
CC-MAIN-2020-45
refinedweb
242
77.84
Glad to be on this very helpful website. I have a problem with my Java program that will probably either be an easy fix, or impossible to fix. You know how when you run a program that's open in NetBeans, it shows the output within the NetBeans application? I am trying to create a program that allows anybody who puts it on their computer to execute it, even if they have not installed an IDE like NetBeans or Eclipse. And when somebody executes my program, I want it to show the same thing as when I run it in NetBeans, with the same output and everything. The program doesn't use a GUI or anything like that. I managed to create an executable .jar file with the "Clean and build project" option, and I made a .bat file that successfully executes the program. This should achieve my goal of allowing anyone to run it. When I start up the .bat file, it works, and shows a white-text-black-background screen that runs the program exactly as it ran while in NetBeans. The problem is that when I run the program (with the .bat file), the text is too small... I've tried looking everywhere for a solution to this, but I could only find discussion about how to make things work with GUIs, or other more complicated things than what my program needs. I am willing to work with GUI stuff if it is necessary, but I don't think it will help, due to what a GUI is. From my understanding, a GUI is not one big thing, but is a user interface composed of smaller parts (such as pop-up input prompts and scroll bars) that are each made by the programmer. I don't need any fancy scroll bars etc., I just need my program to execute like it does when ran in NetBeans (pretty sure this is called the console), and I need to change the text size of the program text when it executes. I greatly appreciate any help, even if you aren't sure if it will work or not. If the answer requires a lengthy explanation and you don't feel like explaining, that's okay; just tell me what I'd have to learn to figure this out and I can research it if necessary. I just created one. Try using this one and tell us if it helped or not. EDIT Added a JTextField to read data. It is more advanced code than the previous one, since it uses concurrency. I tried to make it simple, these are the functions you can use: MyConsole (): Constructor. Create and show the console print (String s): Print the sString println (String s)Print the sString and add a new line read (): Makes you wait untill the user types and presses Enter closeConsole (): Closes the console Here is the code: public class MyConsole implements ActionListener { private JFrame frame; private JTextArea myText; private JTextField userText; private String readText; private Object sync; /* * Main and only constructor */ public MyConsole() { // Synchronization object sync = new Object(); // Create a window to display the console frame = new JFrame("My Console"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setSize(400, 200); frame.setLocationRelativeTo(null); frame.setResizable(true); frame.setContentPane(createUI()); frame.setVisible(true); } /* * Creates user interface */ private Container createUI() { // Create a Panel to add all objects JPanel panel = new JPanel (new BorderLayout()); // Create and set the console myText = new JTextArea(); myText.setEditable(false); myText.setAutoscrolls(true); myText.setBackground(Color.LIGHT_GRAY); // This will auto scroll the right bar when text is added DefaultCaret caret = (DefaultCaret) myText.getCaret(); caret.setUpdatePolicy(DefaultCaret.ALWAYS_UPDATE); // Create the input for the user userText = new JTextField(); userText.addActionListener(this); panel.add(new JScrollPane(myText), BorderLayout.CENTER); panel.add(userText, BorderLayout.SOUTH); return panel; } /* * Waits until a value is typed in the console and returns it */ public String read(){ print("==>"); synchronized (sync) { try { sync.wait(); } catch (InterruptedException e) { return readText = ""; } } return readText; } /* * Prints s */ public synchronized void print(String s){ // Add the "s" string to the console and myText.append(s); } /* * Prints s and a new line */ public synchronized void println(String s){ this.print(s + "\r\n"); } /* * Close the console */ public void closeConsole(){ frame.dispose(); } @Override public void actionPerformed(ActionEvent e) { // Check if the input is empty if ( !userText.getText().equals("") ){ readText = userText.getText(); println(" " + readText); userText.setText(""); synchronized (sync) { sync.notify(); } } } } Here is how to use it (an example). It just asks your age and writes something depending on your input: public static void main(String[] args) { MyConsole console = new MyConsole(); console.println("Hello! (Type \"0\" to exit)"); int age = 1; do{ console.println("How old are you ?"); String read = console.read(); try { age = Integer.valueOf(read); if ( age >= 18){ console.println("Wow! " + age + " ? You are an adult already!"); }else if ( age > 0 ){ console.println("Oh! " + age + " ? You are such a young boy!"); }else if (age == 0){ console.println("Bye bye!"); }else{ console.println("You can't be " + age + " years old!"); } }catch (Exception e) { console.println("Did you write any number there ?"); } } while ( age != 0 ); console.closeConsole(); } And here is a image:
https://codedump.io/share/wVCFuQjjfEdz/1/changing-text-size-of-the-text-someone-using-my-program-sees
CC-MAIN-2018-09
refinedweb
853
65.73
Yes. You can use PostSharp for free to build commercial software as long as your software does not require the library PostSharp.Core.dll. Most of the time, your software is only linked to PostSharp.Public.dll and PostSharp.Laos.dll. However, if you require professional level of support, you could be interested by commercial licenses. You have to pay for PostSharp when your software requires PostSharp.Core.dll. So if you distribute developer tools that uses PostSharp to perform analysis and transformation of assemblies of your customers (for instance an O-R framework or an application server), and do not release your product exclusively under an OSI-recognized license, you need to buy a commercial license. No, since you don't distribute the plug-in outside your company. As soon as you distribute your product outside your company (for any reason: testing, general distribution or whatever else). If your product remains inside your company, it is not 'contaminated' by GPL. You can release your plug-in under any open-source license recognized by the Open-Source Institute. However, code linked to your plug-in will have to be released under OSS as well, because the PostSharp license is contagious even indirectly. Yes. You can acquire commercial licenses today. Yes. Coding Glove, the company behind PostSharp, provides two levels of technical support. Not that we know. The project started in September 2004 as a hobby project. It took 2 years to build PostSharp Core, 1 year to build PostSharp Laos and finalize the product, and 1 year to stabilize it and market it. So the project is not exactly young, even if it has become popular quite recently. Well, we don't make a lot. There are two sources of revenue: direct and indirect. Direct revenues come from commercial licenses, author/speaker's fees, donations and sponsorship. Indirect revenues come from consulting: PostSharp opens doors we would not even know about otherwise, and allows us to invoice more than what is the norm in the country we operate from. Gael Fraiteur is the sole copyright owner of PostSharp. You are welcome to develop plug-ins based on PostSharp. If some functionalities or extension points are missing, we will gladly add them to PostSharp. Because we don't want to loose code ownership because of small contributions. Holding the whole copyright, we are able to sell commercial licenses. We don't want to loose this ability because of a few percents of contributed code. Contributors are welcome to write plug-ins that can eventually be packaged together with PostSharp. However, we want to keep a clean separation between code owned by different contributors. Note that most successful open-source projects rely on private ownership of source code. Collective ownership is rare, even if there are some famous exceptions. Yes, but you have to make it clear that your plug-in is not a part of PostSharp itself, and is not endorsed by the trademark holders. So don't name your product "PostSharp Logging Framework" but "Logging Framework for PostSharp". The same with the namespace: you cannot start of your product with "PostSharp", because it would be misleading.
https://www.postsharp.net/blog/post/frequently-asked-questions
CC-MAIN-2019-04
refinedweb
526
56.55
How Bad Is Your Colormap? (Or, Why People Hate Jet – and You Should Too) I made a little code snippet that I find helpful, and you might too: def grayify_cmap(cmap): """Return a grayscale version of the colormap""" cmap = plt.cm.get_cmap(cmap) colors = cmap(np.arange(cmap.N)) # convert RGBA to perceived greyscale luminance # cf. RGB_weight = [0.299, 0.587, 0.114] luminance = np.sqrt(np.dot(colors[:, :3] ** 2, RGB_weight)) colors[:, :3] = luminance[:, np.newaxis] return cmap.from_list(cmap.name + "_grayscale", colors, cmap.N) What this function does is to give you a lumninance-correct jet. If you want to take a step toward joining the in-crowd of chromatically-sensitive data viz geeks, your best bet is to start by bashing jet. Even if you don't know it by name, I can guarantee that if you've read many scientific papers, you've seen jet before. For example, here's a snapshot of a plot from neuroscience journal which is skewered by an appropriately ranty blogpost on the subject: Jet is the default colorbar originally used by matlab, and this default was inherited in the early days of Python's matplotlib package. The reasons not to use jet are numerous, and you can find good arguments against it across the web. For some more subdued and nuanced arguments, I'd start with the paper Rainbow Color Map (Still) Considered Harmful and, for more general visualization tips, Ten Simple Rules for Better Figures. So what do I have to add to this discussion that hasn't been already said? Well, nothing really, except the code snippet I shared above. Let me show you what it does. %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 6) y = np.linspace(0, 3)[:, np.newaxis] z = 10 * np.cos(x ** 2) * np.exp(-y) We'll use matplotlib's imshow command to visualize this. By default, it will use the "jet" colormap: plt.imshow(z) plt.colorbar(); At first glance this might look OK. But upon closer examination, you might notice that jet's Luminance profile is incredibly complicated. Because your eye has different levels of sensitivity to light of different color, the luminance is not simply the sum of the RGB values as you might naively expect, but some weighted Euclidean sum of the individual values. You can find more information than you'd ever need to know on imagemagick's website. When you take the jet colormap used above and convert it to luminance using the code snippet above, you get this: plt.imshow(z, cmap=grayify_cmap('jet')) plt.colorbar(); It's a mess! The greyscale-only version of this colormap has strange luminance spikes in the middle, and makes it incredibly difficult to figure out what's going on in a plot with a modicum of complexity. Much better is to use a colormap with a uniform luminance gradient, such as the built-in grayscale colormap. Let's plot this beside the previous two: cmaps = [plt.cm.jet, grayify_cmap('jet'), plt.cm.gray] fig, axes = plt.subplots(1, 3, figsize=(12, 3)) fig.subplots_adjust(wspace=0) for cmap, ax in zip(cmaps, axes): im = ax.imshow(z, cmap=cmap) ax.set_title(cmap.name) fig.colorbar(im, ax=ax) In particular, notice that in the left panel, your eye is drawn to the yellow and cyan regions, because the luminance is higher. This can have the unfortunate side-effect of highlighting "features" in your data which may not actually exist! We can see this Luminance spike more clearly if we look at the color profile of jet up close: def show_colormap(cmap): im = np.outer(np.ones(10), np.arange(100)) fig, ax = plt.subplots(2, figsize=(6, 1.5), subplot_kw=dict(xticks=[], yticks=[])) fig.subplots_adjust(hspace=0.1) ax[0].imshow(im, cmap=cmap) ax[1].imshow(im, cmap=grayify_cmap(cmap)) show_colormap('jet') Once you have the grayscale lined-up with the color version, it's easy to point out these luminance spikes in the jet spectrum. By comparison, take a look at the Cube Helix colormap: show_colormap('cubehelix') This is a rainbow-like colormap which – by design – has a uniform luminance gradient across its progression of colors. It's certainly not the best choice in all situations, but you could easily argue that it's always a better choice than jet. fig, axes = plt.subplots(36, 6, figsize=(10, 7)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.1, wspace=0.1) im = np.outer(np.ones(10), np.arange(100)) cmaps = [m for m in plt.cm.datad if not m.endswith("_r")] cmaps.sort() axes = axes.T.ravel() for ax in axes: ax.axis('off') for cmap, color_ax, gray_ax, null_ax in zip(cmaps, axes[1::3], axes[2::3], axes[::3]): del null_ax color_ax.set_title(cmap, fontsize=10) color_ax.imshow(im, cmap=cmap) gray_ax.imshow(im, cmap=grayify_cmap(cmap)) There are some colormaps in here that have very nice, linear luminance gradients, and this is something you should keep in mind when choosing your color map. Much more could be written about choosing an appropriate color map for any given data; for a more in-depth discussion of matplotlib's maps (and some interesting luminance illustrations), you can refer to matplotlib's choosing a colormap documentation. If you're interested in streamlined statistical plotting in Python with well thought-out default color choices, I'd suggest taking a look at Michael Waskom's seaborn project, and especially the associated Color Palette Tutorial. I hope you find this grayify_cmap snippet helpful, and thanks for reading!
https://jakevdp.github.io/blog/2014/10/16/how-bad-is-your-colormap/
CC-MAIN-2018-39
refinedweb
941
59.3
Python 2.7.12 and Python 3.5.2 releases Now contains final candidate releases of Python 2.7.12 and Python 3.5.2. Final versions are expected to be released in two weeks? Pythonista implements 2.7.5 and 3.5.1. Will Pythonista 3 get release in the AppStore in time to be current? import platform print(platform.python_version()) EDIT: @omz did it... Today he shipped Pythonista 3(.5.1) while CPython 3.5.1 is still the current version. ;-) @ccc don't know if you missed it, because ole posted on Twitter and Slack (not sure about here). He submitted version 3 and 2.1 to the App Store yesterday(?). I have a question mark because I'm in Japan time lol. So...it's possible! CPython 3.5.2 is now available... @ccc , lol. I think you need a holiday, come to Thailand and have a few drinks with me 😬👍👍👍 Edit: you can help me to stop making a fool out of myself CPython 2.7.12 is now available...
https://forum.omz-software.com/topic/3215/python-2-7-12-and-python-3-5-2-releases/5
CC-MAIN-2021-21
refinedweb
175
88.94
Variable References and Mutability of Ruby Objects This article discusses references and variables in Ruby along with the mutability or immutablility of objects. It serves as a lead-in to separate articles that discuss mutating and non-mutating methods, and pass by reference/pass by value in Ruby. Note: This article was originally published on the Launch School blog on 2016–07–23 This: >> greeting = 'Hello' => "Hello" This tells Ruby to associate the greeting with the String object whose value is 'Hello'. In Ruby, greeting is said to reference (or point to) the String object. We can also talk of the variable as being bound to the String object, or binding the: >> greeting => "Hello">> greeting.object_id => 70101471431160 We use `#object_id` frequently in this article. Every object in Ruby has a unique object id, and that object id can be retrieved simply by calling `#object_id` on the object in question. Even literals, such as numbers, booleans, `nil`, and Strings have object ids: >> 5.object_id => 11>> true.object_id => 20>> n nil.object_id => 8>> "abc".object_id => 70101471581080 Get comfortable with using #object_id, both while reading this article, and whenever you have trouble understanding why an object has an unexpected value. Let’s assign greeting to a new variable: >> whazzup = greeting => "Hello">> greeting => "Hello">> whazzup => "Hello">> greeting.object_id => 70101471431160>> whazzup.object_id => 70101471431160 mutate the object (to change its state or value): >> greeting.upcase! => "HELLO">> greeting => "HELLO">> whazzup => "HELLO">> whazzup.concat('!') => "HELLO!">> greeting => "HELLO!">> whazzup => "HELLO!">> greeting.object_id => 70101471431160>> whazzup.object_id => 70101471431160 Since both variables are associated with the same object, using either variable to mutate the object is reflected in the other variable. We can also see that the object id does not change. Internally, we now have this: Reassignment Let’s assign a new object to one of these variables: >> greeting = 'Dude!' => "Dude!">> puts greeting => "Dude!">> puts whazzup => "HELLO!">> greeting.object_id => 70101479528400>> whazzup.object_id => 70101471431160 Here, we see that greeting and whazzup no longer refer to the same object; they have different values and different object ids. Crazy, right? Internally, we now have: What this shows is that reassignment to a variable doesn’t mutate the object referenced by that variable; instead, the variable is bound to a different mutated — that is, their values can be altered; immutable objects cannot be mutated — they can only be reassigned. Some objects can be mutated, others can’t. Again, other languages may do something different. In C++ and Perl, for instance, string objects are mutable, but in Java and Python, string objects are immutable. Understanding mutability of an object is necessary to understanding how your language deals with those objects. Immutable Objects In Ruby, numbers and boolean values are immutable. Once we create an immutable object, we cannot change it. “But,” we hear you ask, “What about this code?” >> number = 3 => 3>> number => 3>> number = 2 * number => 6>> number => 6 “Doesn’t this show that the object 3 was changed to 6?" Nope. As we saw above, this is reassignment which, as we learned, doesn’t mutate the object. Instead, it binds a different object to the variable. In this case, we create a new Integer with a value of 6 and assign it to number. There are, in fact, no methods available that let you mutate the value of any immutable object. All you can do is reassign the variable so it references a different object. This disconnects the original object from the variable. Internally, the reassignment looks like this: Lets demonstrate this in irb: >> a = 5.2 => 5.2>> b = 7.3 => 7.3>> a => 5.2>> b => 7.3>> a.object_id => 46837436124653162>> b.object_id => 65752554559609242: >> a = b => 7.3>> a => 7.3>> b => 7.3>> a.object_id => 65752554559609242>> b.object_id => 65752554559609242 irb now displays the same value for each variable. More interestingly, it shows that the object ids for both a and b are the same. The object that originally held the value 5.2 is no longer available through either a or b. Let’s try to alter the object now: >> b += 1.1 => 8.4>> a => 7.3>> b => 8.4>> a.object_id => 65752554559609242>> b.object_id => 32425917317067566 On the first line, we try to alter the object referenced by b by incrementing b by 1.1. This yields 8.4 and, as we can see, b is also set to 8.4. a has not been changed, and still references the 7.3 object. But, b now references a completely new object. Though we changed the value associated with b, we didn't mutate the object -- the object is immutable. Instead, += created a brand-new Float object and bound b to the new object. Simple assignment never mutates an immutable object: >> a = 5.5 => 5.5>> a.object_id => 49539595901075458 Instead of mutating changes to the object’s state in some way. Whether mutation is permitted by setter methods or by calling methods that perform more complex operations is unimportant; as long as you can mutate an object, it is mutable. A setter method (or simply, a setter) is a method defined by a Ruby object that allows a programmer to explicitly change the value of part of an object. Setters always use a name like something=. For our purposes in this series of articles, we're mostly interested in array element setters, e.g., the Array#[]= method, which is called like this: >> a = [1, 2, 3, 4, 5] >> a[3] = 0 # calls setter method >> a # => [1, 2, 3, 0, 5] Other setters show up in conjunction with classes, a topic we discuss in RB120. Here’s a simple example: class Dog def name=(new_name) @name = new_name end enddog = Dog.new dog.name = "Fido" # calls setter method for `name` attribute Consider Ruby Array objects; you can use index assignment to alter what object is referenced by an element: >> a = %w(a b c) => ["a", "b", "c"]>> a.object_id => 70227178642840>> a[1] = '-' # calls `Array#[]=` setter method => "-">> a => ["a", "-", "c"]>> a.object_id => 70227178642840 This demonstrates that we can mutate the array that a refers to. However, it doesn't create a new array since the object id remains the same. We can see why this is by looking at how a is stored in memory: We can see that a is a reference to an Array, and, in this case, that Array contains three elements; each element is a reference to a String object. When we assign - to a[1], we are binding a[1] to a new String. We're mutating the array given by a by assigning a new string to the element at index 1 ( a mutate the object or leave it unchanged. It’s easy enough to see that any method can avoid mutating its arguments. However, whether or not the method can mutate an argument is less clear; the ability to mutate mutated. Objects passed to methods in this way are said to be passed by value, and the language is said to be using a pass by value object passing strategy. Other languages pass references to the method instead — a reference can be used to mutate appears to mutate mutated. Since immutable objects cannot be changed, they act like Ruby passes them around by value. This isn’t a completely accurate interpretation of how Ruby passes immutable objects, but it helps us determine why the following code works as it does: def increment(a) a = a + 1 endb = 3 puts increment(b) # prints 4 puts b # prints 3 Here, the numeric object 3 is immutable. You can reasonably say that b's value is not mutated by #increment since 3 is passed by value to #increment where it is bound to variable a. Even though a is assigned to 4 inside the method and returned to the caller, the original object referenced by b is untouched. Mutable objects, on the other hand, can always be mutated simply by calling one of their mutating methods. They act like Ruby passes them around by reference; it isn’t necessary for a method to mutate an object that is passed by reference, only that it can mutate the object. As you’ll recall, pass by reference means that only a reference to an object is passed around; the variables used inside a method are bound to the original objects. This means that the method is free to mutate those objects. Once again, this isn’t completely accurate, but it is helpful. For instance: def append(s) s << '*' endt = 'abc' puts append(t) # prints abc* puts t # prints abc* Here, the String object abc is mutable. You can reasonably say that the object referenced by t is mutated by #append since t's value is passed by reference to #append where it is bound to variable s. When we apply the << operator to s, the change is reflected through t as well. Upon return from the method, the value of t has been mutated. However, t still points to the same object in memory; it merely has a different value. Conclusion In this article, we’ve seen that Ruby variables are merely references to objects in memory; that is, a variable is merely a name for some object. Multiple variables can reference the same object, so mutating an object using a given variable name will be reflected in every other variable that is bound to that object. We’ve also learned that assignment to a variable merely changes the binding; the object the variable originally referenced is not mutated. mutated mutated. We’re now equipped with the tools we need to explore the differences between mutating and non-mutating methods. Continue reading at Ruby’s Mutating and Non-Mutating Methods.
https://launchschool.medium.com/variable-references-and-mutability-of-ruby-objects-4046bd5b6717?readmore=1&source=user_profile---------2----------------------------
CC-MAIN-2021-43
refinedweb
1,608
64.61
Content-type: text/html #include <sys/stream.h> int putctl1(queue_t *q, int type, int p); Architecture independent level 1 (DDI/DKI). q Queue to which the message is to be sent. type Type of message. p One-byte parameter. The putctl1() function, like putctl(9F), tests the type argument to make sure a data type has not been specified, and attempts to allocate a message block. The p parameter can be used, for example, to specify how long the delay will be when an M_DELAY message is being sent. putctl1() fails if type is M_DATA, M_PROTO, or M_PCPROTO, or if a message block cannot be allocated. If successful, putctl1() calls the put(9E) routine of the queue pointed to by q with the newly allocated and initialized message. On success, 1 is returned. 0 is returned if type is a data type, or if a message block cannot be allocated. The putctl1() function can be called from user, interrupt, or kernel context. See the putctl(9F) function page for an example of putctl1(). put(9E), allocb(9F), datamsg(9F), putctl(9F), putnextctl1(9F) Writing Device Drivers STREAMS Programming Guide
http://backdrift.org/man/SunOS-5.10/man9f/putctl1.9f.html
CC-MAIN-2016-44
refinedweb
190
67.25
Prologue:My previous article was about debugging data bindings in XAML. In this article we shall discuss Data Binding in Style Setter - one of the new features in Silverlight 5 Beta.Short intro to Style Setter Binding:How did you do the style setter binding in earlier versions of Silverlight? The following figure clearly answers that quesion. We simply bound the value to the Property. So what is new in Silverlight 5 Beta? How will you bind using Binding? These are the questions we shall discuss about some new features of Silverlight 5 Beta.Quick Note:In this demo application we will create a class called Person with the Property Names which is a collection of strings. We are going to bind the ItemsSource property of Listbox using Style Setter. Other than this there won't be any bindings for the Listbox. For more understanding, in another example we are going to bind the Text property of TextBlock using a hard coded value.Preparing the Solution:Just fire up VS 2010; create a Silverlight project with the name "StyleSetterBindingInSL5Beta" as shown in the figure. Follow the [No: #].Designing the UI:It's pretty cool! Just Place a list box and text block control inside the stack panel.Here is the complete XAML. The Code-Behind:Before explaining the XAML, let us have a look at the code behind file. As we discussed earlier, we are going to bind the ItemsSource property of Listbox and Text, Margin and Foreground property of TextBlock.We need to create a collection to bind with the ItemsSource in the style setter. So here is the class we have created in this project. We are going to bind the collection of names to ItemsSource property. The Names property is here. Well, now the Names property is ready. Since a Listbox is a collection control we need a collection of names collection, right? Just create a collection like the following one. So the name collection is also ready. But we need to fire this method to create a collection. Simply call the LoadNames () in the Person class constructor so that the Names collection property will be assigned.The Code-Behind is done. Now back to the XAML.To bind the ItemsSource property with the Names property of the Person class we need to import a namespace into the XAML as in the following figure. Give the alias name for the namespace added. We are adding this namespaces to access the Names property which is in class Person.Quick Note:In the Style Setter Binding we should bind the class which has the collection property as StaticResource. As we know the Styles should be in the Usercontrol resources section.Adding Static Resources to XAML: As shown in the figure, we need to add the Person class as a static resource in the Usercontrol resources section. Here in this figure we are setting the static resources for TextBlock for the Margin property [Bottom, Top, and Left], Text property and Color [Foreground].The static resource is ready for the TextBlock; it needs to be bound with the TextBlock.Binding the Style Setter Value: We have added the Person class as a static resource in XAML. In previous versions of Silverlight we bound the value with the data directly. In Silverlight 5 we have a Binding option. Here the SampleSource is the key of the Person static resource and the Names is the property which is in the Person class to be bound to the Listbox.We have a text block too, right? Let us see the binding for the text block. We have bound the static resources with the Value of the TextBlock style setter using the Binding option with Key.The style setter binding is done. Now simply place a text block and list box like the figure given below. Application in Action:Hit the play button to see the application in action. That's it. We are done. Summary:We have seen about the new feature of Silverlight 5 Beta called Data Binding in XAML style setter.Thanks for spending your precious time here. Please provide your valuable feedbacks and comments, which enable me to give a better article the next time.Thanks. View All
http://www.c-sharpcorner.com/uploadfile/dbd951/data-binding-in-xaml-style-setter-silverlight-5-beta/
CC-MAIN-2017-39
refinedweb
707
75.4
In this part of my Java Video Tutorial, I cover how to draw 2D graphics in Java. This tutorial will serve as an introduction before I make a paint application with Java. I cover drawing lines, curves, ellipses, rectangles and numerous other shapes. Then we look at strokes, fills and gradients. All of the code follows the video and can serve as a cheat sheet, or an additional teaching tool. If you like videos like this, please tell Google [googleplusone] Sharing is always appreciated Code from the Video import javax.swing.JComponent; import javax.swing.JFrame; import java.awt.*; import java.awt.geom.*; @SuppressWarnings("serial") // By extending JFrame we have our applet public class Lesson47 extends JFrame{ public static void main(String[] args){ new Lesson47(); } public Lesson47(){ this.setSize(500, 500); this.setTitle("Drawing Shapes"); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); this.add(new DrawStuff(), BorderLayout.CENTER); this.setVisible(true); } // Creating my own component by extending JComponent // JComponent is the base class for all swing components. Even custom ones private class DrawStuff extends JComponent{ // Graphics is the base class that allows for drawing on components public void paint(Graphics g){ // Extends graphics so you can draw dimensional shapes and images Graphics2D graph2 = (Graphics2D)g; // Sets preferences for rendering // KEY_ANTIALIASING reduces artifacts on shapes // VALUE_ANTIALIAS_ON will clean up the edges graph2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); // The Shape interface knows how to draw many different shapes /* Arc2D, Arc2D.Double, Arc2D.Float, Area, BasicTextUI.BasicCaret, * CubicCurve2D, CubicCurve2D.Double, CubicCurve2D.Float, DefaultCaret, * Ellipse2D, Ellipse2D.Double, Ellipse2D.Float, GeneralPath, Line2D, * Line2D.Double, Line2D.Float, Path2D, Path2D.Double, Path2D.Float, * Polygon, QuadCurve2D, QuadCurve2D.Double, QuadCurve2D.Float, * Rectangle, Rectangle2D, Rectangle2D.Double, Rectangle2D.Float, * RectangularShape, RoundRectangle2D, RoundRectangle2D.Double, * RoundRectangle2D.Float */ // A line that goes from x1, y1 to x2, y2 Shape drawLine = new Line2D.Float(20, 90, 55, 250); // Start x, start y, width, height, start angle degrees, angular extent, OPEN, CHORD, PIE // Angular extent refers to how many degrees the arc continues from the start angle Shape drawArc2D = new Arc2D.Double(5, 150, 100, 100, 45, 180, Arc2D.OPEN); Shape drawArc2D2 = new Arc2D.Double(5, 200, 100, 100, 45, 45, Arc2D.CHORD); Shape drawArc2D3 = new Arc2D.Double(5, 250, 100, 100, 45, 45, Arc2D.PIE); // Draw ellipse in a rectangle defined x1, y1, x2, y2 Shape drawEllipse = new Ellipse2D.Float(10, 10, 100, 100); // Round off the rectangle be defining arc height then arc width Shape drawRoundRec = new RoundRectangle2D.Double(25, 25, 50, 50, 45, 45); // Draw a curve with 4 points CubicCurve2D cubicCurve = new CubicCurve2D.Double(); // You can also set the curve outside of the definition // x1, y1, ctrlx1, ctrly1, ctrlx2, ctrly2, x2, y2 cubicCurve.setCurve(110, 50, 300, 200, 200, 200, 90, 263); // Draw rectangle by defining upper left x, y and width then height Shape drawRect = new Rectangle2D.Float(300, 300, 150, 100); // // Draw a curve with 3 points // x1, y1, ctrlx1, ctrly1, x2, y2 Shape drawQuadCurve = new QuadCurve2D.Float(300, 100, 400, 200, 150, 300); Shape drawTransRect = new Rectangle2D.Double(300, 300, 75, 50); // Paint object defines the color used for rendering graph2.setPaint(Color.BLACK); // Draws a shape based on the preferences that have been set graph2.draw(drawLine); graph2.draw(drawArc2D); graph2.draw(drawArc2D2); graph2.draw(drawArc2D3); graph2.draw(drawEllipse); // Set the fill color graph2.setColor(Color.GREEN); // Draw a shape with a fill graph2.fill(drawRoundRec); graph2.fill(drawRect); graph2.setPaint(Color.BLACK); graph2.draw(cubicCurve); graph2.draw(drawRect); graph2.draw(drawQuadCurve); // This makes everything drawn after to be 60% transparent graph2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.40F)); // This eliminates transparency graph2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 1.0F)); // starting x point, starting y point, start color, end x, end y, end color // You can use hex color codes 0x66ffff equals color.CYAN // VERTICAL GRADIENT GradientPaint theGradient = new GradientPaint(0,0, Color.BLUE, 0,60, new Color(0x66ffff)); // HORIZONTAL GRADIENT // GradientPaint theGradient = new GradientPaint(0,0, Color.BLUE, 75,0, new Color(0x66ffff)); // To make the last color start in the center // GradientPaint theGradient = new GradientPaint(0,0, Color.BLUE, 0,60, new Color(0x66ffff), true); graph2.setPaint(theGradient); graph2.fill(new Rectangle2D.Float(10, 10, 150, 100)); graph2.fill(drawTransRect); } } } This is really a great tut from you! Thank you 🙂 Hi Derek, Thanks a lot again for your great tutorials. I have two questions : 1. How can I draw an arrow instead of a line ? I have graphical objects and I want to show their relations by drawing a directed arrow connecting them. 2. In your tutorial we add the drawing by initializing an object from DrawStuff class , and in the paint method you create the Shape objects and draw them. Suppose I have a list of Shape objects , how can I send it to this method to draw them? Thanks in advance for your help. Your very welcome 🙂 You could store polygons like a line and then draw them on the screen. I get into manipulating polygons as the tutorial continues. Sorry I did not get you fully So is there a way to draw an arrow instead of a line by giving starting and ending coordinates ? Do we have an arrow as a Shape object like a Line2D? You’ll have to create a polygon in the shape of an arrow. Then you can draw that polygon like any other.
http://www.newthinktank.com/2012/06/java-video-tutorial-47/?replytocom=38939
CC-MAIN-2020-16
refinedweb
883
50.94
Hi Nick,I promised you and James to get back to a throughout review for morethan just the backends. It's still in progress, but here is what Ithink is most important: - Sort of the the namepspace for both the file names and function names. I think you reluctantly agreed to do that a while ago anyway, but I think it's time to bite the bullet now. Please agree on a common prefix for both function names and modules. I think the target name in the directory structure is the best one, but I really don't care too much. The transport_ prefix used in some code is really misleading, and the se_ in other isn't too helpful either. - make sure backends, frontends and core/ code under drivers/target/ are properly separated - clean up the exported - both as in EXPORT_SYMBOL and simply global functions. There's a lot of things that should be static or not exported to modules but is right now. The scripts/namespace.pl script in the kernel tree is a great helper for that. - Similarly the headers could use some re-arrangement. I've been trying to make sense of what each header does but couldn't find it. In the optimal world you'd have one header for the front-end API, one of the back-end API and one or more for common structures and defintions. All with a comment explaining what they are there for.
http://lkml.org/lkml/2010/11/8/152
CC-MAIN-2013-48
refinedweb
244
78.99
Variadic Templates The problem statement is simple: write a function that takes an arbitrary number of values of arbitrary types, and print those values out one per line in a manner appropriate to the type. For example, the code: print(7, 'a', 6.8); should output: 7 'a' 6.8 We'll explore how this can be done in standard C++, followed by doing it using the proposed variadic template C++ extension. Then, we'll do it the various ways the D programming language makes possible. C++ Solutions The Standard C++ Solution The straightforward way to do this in standard C++ is to use a series of function templates, one for each number of arguments: #include <iostream> using namespace::std; void print() { } template<class T1> void print(T1 a1) { cout << a1 << endl; } template<class T1, class T2> void print(T1 a1, T2 a2) { cout << a1 << endl; cout << a2 << endl; } template<class T1, class T2, class T3> void print(T1 a1, T2 a2, T3 a3) { cout << a1 << endl; cout << a2 << endl; cout << a3 << endl; } ... etc ... This poses significant problems: One, the function implementor must decide in advance what the maximum number of arguments the function will have. The implementor will usually err on the side of excess, and ten, or even twenty overloads of print() will be written. Yet inevitably, some user somewhere will require just one more argument. So this solution is never quite thoroughly right. Two, the logic of the function template body must be cut and paste duplicated, then carefully modified, for every one of those function templates. If the logic needs to be adjusted, all of those function templates must receive the same adjustment, which is tedious and error prone. Three, as is typical for function overloads, there is no obvious visual connection between them, they stand independently. This makes it more difficult to understand the code, especially if the implementor isn't careful to place them and format them in a consistent style. Four, it leads to source code bloat which slows down compilation. The C++ Extension Solution Douglas Gregor has proposed a variadic template scheme [1] for C++ that solves these problems. The result looks like: void print() { } template<class T, class... U> void print(T a1, U... an) { cout << a1 << newline; print(an...); } It uses recursive function template instantiation to pick off the arguments one by one. A specialization with no arguments ends the recursion. It's a neat and tidy solution, but with one glaring problem: it's a proposed extension, which means it isn't part of the C++ standard, may not get into the C++ standard in its current form, may not get into the standard in any form, and even if it does, it may be many, many years before the feature is commonly implemented. D Programming Language Solutions The D Look Ma No Templates Solution It is not practical to solve this problem in C++ without using templates. In D, one can because D supports typesafe variadic function parameters. import std.stdio; void print(...) { foreach (arg; _arguments) { writefx(stdout, (&arg)[0 .. 1], _argptr, 1); auto size = arg.tsize(); _argptr += ((size + size_t.sizeof - 1) & ~(size_t.sizeof - 1)); } } It isn't elegant or the most efficient, but it does work, and it is neatly encapsulated into a single function. (It relies on the predefined parameters _argptr and _arguments which give a pointer to the values and their types, respectively.) Translating the Variadic C++ Solution into D Variadic templates in D enable a straightforward translation of the proposed C++ variadic syntax: void print()() { } void print(T, A...)(T t, A a) { writefln(t); print(a); } There are two function templates. The first provides the degenerate case of no arguments, and a terminus for the recursion of the second. The second has two arguments: t for the first value and a for the rest of the values. A... says the parameter is a tuple, and implicit function template instantiation will fill in A with the list of all the types following t. So, print(7, 'a', 6.8) will fill in int for T, and a tuple (char, double) for A. The parameter a becomes an expression tuple of the arguments. The function works by printing the first parameter t, and then recursively calling itself with the remaining arguments a. The recursion terminates when there are no longer any arguments by calling print()(). The Static If Solution It would be nice to encapsulate all the logic into a single function. One way to do that is by using static if's, which provide for conditional compilation: void print(A...)(A a) { static if (a.length) { writefln(a[0]); static if (a.length > 1) print(a[1 .. length]); } } Tuples can be manipulated much like arrays. So a.length resolves to the number of expressions in the tuple a. a[0] gives the first expression in the tuple. a[1 .. length] creates a new tuple by slicing the original tuple. The Foreach Solution But since tuples can be manipulated like arrays, we can use a foreach statement to 'loop' over the tuple's expressions: void print(A...)(A a) { foreach(t; a) writefln(t); } The end result is remarkably simple, self-contained, compact and efficient. Acknowledgments - Thanks to Andrei Alexandrescu for explaining to me how variadic templates need to work and why they are so important. - Thanks to Douglas Gregor, Jaakko Jaervi, and Gary Powell for their inspirational work on C++ variadic templates.
http://www.digitalmars.com/d/2.0/variadic-function-templates.html
crawl-001
refinedweb
908
63.8
WL#4005: checkpoint and backup to Amazon s3 from Cluster Affects: Server-5.2 — Status: In-Progress — Priority: Medium Amazon s3 is a fault-tolerant distributed storage service. When applications are deployed on the Amazon ec2 Clustered environment, s3 is the only persistent storage. However, s3 operates as a web service, not as a filesystem. Because of this, normal database usage on ec2 is a fraught with peril. Most databases expect to be able to write to a local file, and subsequently expect that once that file is written, their work is done. An enterprising admin could take frequent dumps and inject them into s3, but there is a long lag inherent to this that might be unacceptable. There is also a FUSE implementation that can mount s3 as a filesystem, but here the latency associated with a disk write would likely also be unacceptable. NDB divorces individual transaction latency from disk latency, and itself has a concept of Asynchronous writes to disk. Adding the capability to the Ndbfs implementation of writing directly to and reading from s3 could allow for interesting deployments on ec2, and perhaps elsewhere as well. The current implementation will focus on adding behavior to the AsyncFile object, so that based on file path information, it will either write files to s3 or to the local filesystem. s3 specifies storage in terms of buckets and objects. Information about S3 itself can be found at A bucket is similar to a directory or namespace. An individual s3 user may have up to 100 buckets, which may have names up to 255 bytes in length. The overall bucket namespace is global, so care must taken to create a unique bucket name for each cluster. The bucket will be chosen by the user and added to the cluster configuration file. If the bucket does not exist during an initial restart, it will be created. If it does not exist during a normal restart, it will result in node failure. Within buckets, objects are placed. The object namespace per bucket is flat, but the naming keys can contain any UTF-8 character. So while there cannot be "subdirectories" there is nothing preventing an object from being named "ndb_1_fs/D10/DBLQH/S22.FragLog". An object can be up to 5GB in size. The objects only support GET, PUT and DELETE. A bucket has no limit on the number of objects stored. Objects do not support file-like seeking or appending. The data stored in an object may only be read or written in total. The reading uses HTTP GET and the writing uses HTTP PUT. To support the buffered reading and writing and seeking that occurs, individual ndb files will be split into multiple objects per block to be stored of the form: "ndb_1_fs/D10/DBLQH/S22.FragLog.1" with an addition object "ndb_1_fs/D10/DBLQH/S22.FragLog" stored which contains information about how many blocks have been stored for that file. File storage locations will be extended to accept URI form locations, this way, one could choose to store backups on S3 and data files locally, or any combination thereof. Local file storage would look like: DataDir=/var/lib/mysql-cluster or DataDir= while S3 storage would be: DataDir=s3://mybucketname Authentication to S3 is via shared secret in the form of a Secret Key and a Key ID. Configuration options will be added to contain the AWS Secret Access Key and the AWS Access Key ID. Copyright (c) 2000, 2017, Oracle Corporation and/or its affiliates. All rights reserved.
https://dev.mysql.com/worklog/task/?id=4005
CC-MAIN-2017-43
refinedweb
590
55.13
I had a case recently where a customer was using the Microsoft Personal Storage Provider (MSPST) in a service. Everything was working fine until they moved to Windows Vista, the kernel objects MSPST uses (mutants, file mappings, etc) to allow two processes to synchronize access to the same physical .pst file were'nt working. Why? Because of Session 0 Isolation in Windows Vista. When the service application was moved into its own Windows session it could not longer access those objects. The result was that the MSPST created new objects in the local namespace for Session 0. MSPST then continued processing assuming that it was responsible for intialization of those objects. In the end MSPST tried to a delete a file Outlook was also using in Session 1, which caused a sharing violation. MSPST not knowing how to handle this returned MAPI_E_DISK_ERROR.
http://blogs.msdn.com/b/dvespa/archive/2007/06/07/windows-vista-mspst-and-service-code-don-t-mix.aspx
CC-MAIN-2014-23
refinedweb
142
64
Google & Facebook Shopping Adsby AdAmplify Corp. AdAmplify Your Google and Facebook Ad Revenue All reviews Acuva Unlike most of the Shopify stores, we have complex configuration options in our products like AC/DC Adapter and Filter Type so it was really challenging to feed our products to Google Merchant Center. This well-designed App solved our problem. They have prompt support and a very good eye for details. They follow up with you until the issue is resolved. Started using this App last month and so far, very impressed. Developer reply Thanks for taking the time to comment. We’re happy to hear our custom variant mapping solved your problem of linking your unique options to Google’s standard attributes. The app was designed to make it easy for Shopify merchants to map these options to Google’s (and more recently Facebook's) attributes by doing it automatically. But we recognized there are cases where a merchant must also be able to customize their options for their particular needs (like yours for AC or DC options), so we designed a Mapping Dictionary to make this easy. With our recent addition of our Facebook catalog feed, we now make sure that all 3 major platforms " play nice” with each other :) We’ve enjoyed working with you and your team and look forward to seeing your Return on Ad Spend increase substantially! Zip.Dog I wanted to use Google Shopping Actions for Zip.dog, so I checked out the options, Tried a couple of others. After issues with those, I switched to AdAmplify. I’m now running Actions, Shopping Ads, and Surfaces using their app. It was a breeze to use. I was able to fix some issues Google flagged based on the messages on their feed management page. I had trouble getting my variants grouped for Actions, though, and asked for help. Their team went way beyond what I expected, and, with everything looking good on my side, they escalated the issue to Google. They kept on it, giving me regular updates until Google resolved things on their side. Their app is a “must use,” and the service from AdAmplify is first class. Developer reply Thank you for your review, we have enjoyed working with Zip.dog as well. Google is very stringent lately but because we work with them on a continuous basis we have gotten to know how to best work with them and how to work towards solving the issues that stores are running into. We look forward to seeing your sales increase with the Google campaigns! 2nd Wind We had a disappointing previous experience with Google Shopping ads (no revenue generated) but decided to give it another try using AdAmplify and their Google Shopping Feed. This is a very new app but it is nicely designed with each step clearly laid out. It’s obvious they have a lot of tech and business savvy in the ecommerce space (helps that they are located here in North America). Our products were uploaded without any issues; really liked the fact that we can manage the products we include in the feed right within the app itself. AdAmplify helped us set up our first campaign and in the end we asked them to manage it for us as we don’t have the time and expertise to stay on top of tuning our campaigns for the results we’re looking for. They did a really solid job and as a result our first month saw a solid return (380% on our ad spend). I recommend you consider them for your Google Shopping ad campaigns. Developer reply Thanks for taking the time to write this review. We've had a number of clients come to us with poor experiences. We've enjoyed working with you and you've taken the right approach by identifying the products that had the best return on ad spend and focussing on those in your campaign. One feature of our app allows you to easily suspend products from the feed that aren't performing. We'll continue to keep an eye on your results. GIA Minerals I’m a newbie to Google Shopping but I used AdAmplify for another program they run and wanted to give their Google Shopping app feed a try. The app was easy to install and not knowing Google Shopping at all, they helped me through the setup and answered my first time user questions. Once I understood the Google side of things, everything made sense. I liked that I could use my second images instead of the primary ones and could customize my product titles to include my brand. Also, being able to control the products right inside the app saved me from having to learn the Google Merchant Center. We had a few products disapproved by Google however the app gave us the reasons and suspended them from the feed until we fixed them. Nice! I highly recommend this app. Developer reply Thank you for the kind review! Our experience running a multi-vendor e-commerce site (with Gia products) was helpful in designing an app that would meet the requirements of vendors who are looking for more sales with Google Shopping. The Google Merchant Center can be overwhelming for people, so using the app to manage your products and Google messages is better than having to spend your time in the Merchant Center. 72hours.ca We have recently downloaded this app to test and see how it compares to the previous Google shopping feed apps that we have tried before. I have to say we made the right decision about moving our store to AdAmplify. The App was seamless and we used to have 200+ disapproved products and now 98% of our products are approved. The support team has been a great help with assisting us with the disapproved products and with the app in general. It's very easy to use and this Google shopping feed app for Shopify is by far the best one comparing to the other 5 we tried before. I definitely recommend them if you are having tons of problems with your content API, products feed, disapproved products or even just Google shopping in general. Developer reply Thank you for taking the time to write this review. We enjoyed working with you and your team and are happy to implement a couple of great features that you suggested. Nice to hear your comments on the improvements you saw in disapprovals. Having spent most of our working lives developing enterprise-level software for large companies allowed us to build a lot of robustness into the app to resolve these types of issues that frustrate so many merchants. We are pleased that it worked out so well, feel free to reach out if you have any further questions or features you’d like to see. Essential Rose Life We recently purchased this app from AdAmplify. Syncing our Shopify products to the Google Merchant Center was quite straightforward and easy. We had our account and a few products disapproved by Google but the app gave us directions on how to fix these issues. We also had a few products we didn’t want to include and the app made it easy to exclude these on its feed management page. We’re now running our first Google Shopping ads and having our products appear on Surfaces across Google. And hey, a big shoutout to Kevin who's got a lot of first hand experience running Google shopping campaigns for merchants and gave us great advice and support. Developer reply Thanks so much. Great working with you and helping give your products more exposure. We know from the experience running our own stores how difficult it can be to improve sales in today’s ultra-competitive environment. Google Shopping ads is a great tool and happy we’re able to help get you a positive return on your ad spend in your first month. Also appreciate that you called out some features we built into the app to make life easier for our customers as well.
https://apps.shopify.com/adamplify-google-shopping-app/reviews?rating=5
CC-MAIN-2020-40
refinedweb
1,357
68.91
Tabletests This article is an introduction and justification for tabletest and tabletest3. These are small Python and Python3 (respectively) packages I’ve written. They are used when writing so-called “tabletests” or data-driven tests. I’ll cause no controversy by saying that functions which are small and concise are a “good thing”. Such functions are easy to work with, easy to understand and easy to test. For the sake of argument, suppose we have a small function for converting strings of binary digits into the integers they represent. We’ll call it parse_bin. It operates on strings such as "1010" and outputs numbers. Basic stuff really. If we were to code it in Python, it might look something like this: def parse_bin(bin_str): """Parse a string of binary digits and produce the integer value.""" result = 0 pow = 1 for digit in reversed(bin_str): assert digit == '0' or digit == '1' digit_dec = 1 if digit == '1' else 0 result += digit_dec * pow pow = pow << 1 return result The function scores pretty well on being easy to work with and, hopefully, it’s pretty easy to understand as well. However, it is, I claim, quite hard to test. For example, using Python’s standard unittest library, the test suite might look something like this: import unittest class ParseBin(unittest.TestCase): def test_parse_bin_empty_string(self): self.assertEqual(parse_bin(''), 0) def test_parse_bin_zero(self): self.assertEqual(parse_bin('0'), 0) def test_parse_bin_one(self): self.assertEqual(parse_bin('1'), 1) def test_parse_bin_leading_zero(self): self.assertEqual(parse_bin('00'), 0) def test_parse_bin_leading_zero2(self): self.assertEqual(parse_bin('01'), 1) def test_parse_bin_two(self): self.assertEqual(parse_bin('10'), 2) def test_parse_bin_three(self): self.assertEqual(parse_bin('11'), 3) This looks clunky. It has too much boilerplate and too little action. The worst part is that adding another test is quite involved. We need to define a new function and write a small amount of very repetitive code for it. Hence, we’ll want to skip on testing and do the minimum required, rather than write a more comprehensive battery of tests. For example, we haven’t tested very large integers and the overflow patterns, or invalid inputs etc. While this example is certainly contrived, one could easily imagine things escalating for more complex functions. A natural second version factors out the common testing code and makes just a single, data-driven test. It might look something like this: import unittest class ParseBin(unittest.TestCase): TEST_CASES = [ ('', 0), ('0', 0), ('1', 1) ('00', 0), ('01', 1), ('10', 2), ('11', 3), ('000', 0), ('001', 1), ('010', 2), ('011', 3), ('100', 4), ('1000000000000000', 2**15), ] def test_parse_bin(self): for (input, expected) in TEST_CASES: self.assertEqual(parse_bin(input), expected) This approach is an improvement since it makes it easy to add new test cases. In fact, we only need to add an (input, expected) pair to add a new case, which is the minimum we could expect. This even opens the door for automatically generated cases, rather than hand coded ones. The approach comes with its own limitations, however. For example, we’ve been made responsible for the boilerplate of iterating over each test case. This is a little bit like being responsible for calling the setUp and tearDown methods ourselves. Sure, they’re separated into methods, and reusable, but the situation looks like one which should be handled by the testing framework, rather than by us. Furthermore, testing is coupled. If one case fails, the whole suite fails. For this simple example, it is pretty straightforward to figure out where the failure occurs. For more complicated setups, this might not be the case. The coupling itself is troubling regardless of other concerns, since it is a good principle to have tests be independent. Linked to the last issue, sophisticated test runners might run tests in parallel. Since we’ve combined all the previous separate tests into a single one, we’ve lost that capability. The test might become too big and require additional resources or its execution time might become unwieldy. Finally, there is a nice feeling to adding a new test and seeing a new entry in the test runner output for it. We definitely loose this treat by writing things this way. At this point, one might argue that the cure is worse than the illness. Certainly, there are a lot of drawbacks. We need not resign ourselves to clunky XOR unwieldy tests however. We can have the best of both worlds. All of this is a long way to introduce the tabletest library, which is a unittest extension which allows one to have data driven tests, but with all the advantages of separate and independent unit tests. At this point, it would be better to let the code speak for itself. The third and final version of the test suite looks something like this: import tabletest class ParseBin(tabletest.TableTestCase): TEST_CASES = [ ('', 0), ('0', 0), ('1', 1) ('00', 0), ('01', 1), ('10', 2), ('11', 3), ('000', 0), ('001', 1), ('010', 2), ('011', 3), ('100', 4), ('101', 5), ('110', 6), ('111', 7), ('1000000000000000', 2**15), ('10000', 16), ('100001', 33), ('0110001', 49), ] @tabletest.tabletest(TEST_CASES) def test_parse_bin(self, test_case): (input, expected) = test_case self.assertEqual(parse_bin(input), expected) The difference between the two versions is that we’ve replaced the manually iterating version of test_parse_bin with a new one, which is annotated with the tabletest annotation. Hopefully the way to use it is clear. Under the hood, the library generates a version of test_parse_bin for each test case. Therefore the code that gets executed looks like the first version rather than the second. We still get all the goodness of independent tests and boilerplate-free development, without having to develop it ourselves. Finally, the test runner is going to show one entry for each test case, which will keep us hooked on writing them. The usage should be straightforward and surprise free. For more info, tune in to the next post in the series. Anyhow, this is it for now.
https://horia141.com/tabletests.html
CC-MAIN-2021-04
refinedweb
989
65.52
The next problem that cropped up during the implementation of the AST code optimizer is related to branch elimination and the elimination of any code after a return. Within a FunctionDef node, we would (ideally) like to blow away If nodes with a constant - but false - test expression. e.g.: def foo(): if False: # ... stuff ... For most functions, this will cause no problems and the code will behave as expected. However, if the eliminated branch contains a "yield" expression, the function is actually a generator function - even if the yield expression can never be reached: def foo(): if False: yield 5 In addition to this, the following should also be treated as a generator even though we'd like to be able to get rid of all the code following the "return" statement: def foo(): return yield 5 Again, blowing away the yield results in a normal function instead of a generator. Not what we want: we need to preserve the generator semantics. Upon revisiting this, it's actually made me reconsider the use of a Const node for the earlier problem relating to arbitrary constants. We may be better off with annotations after all ... then we could mark FunctionDef nodes as being generators at the AST level to force the compiler to produce code for a generator, but eliminate the branches anyway. The other alternative I can think of is injecting a yield node somewhere unreachable and ensuring it doesn't get optimized away, but this seems pretty hacky in comparison. Any other ideas? Cheers, Tom
http://grokbase.com/p/python/python-dev/0853rf4s1a/ast-optimization-branch-elimination-in-generator-functions
CC-MAIN-2015-22
refinedweb
257
56.59
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab. Support Forum » Cannot Make Music with the MicroController Just received the nerdkit and went through the manual. Now I'm trying the "Make Music with the MicroController" project. Copied the musicbox1.c source code and created a makefile. Tried to compile the C code from the command window and the output is: make -C ../libnerdkits make[1]: Entering directory `C:/clients/nerdkits/Code/libnerdkits' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `C:/clients/nerdkits/Code/libnerdkits' avr-gcc -g -Os -Wall -mmcu=atmega168 -Wl,-u,vfprintf -lprintf_flt -Wl,-u,vfscan f -lscanf_flt -lm -o musicbox1.o musicbox1.c ../libnerdkits/delay.o ../libnerdki ts/lcd.o musicbox1.c: In function 'play_tone': musicbox1.c:54: error: 'PORTA' undeclared (first use in this function) musicbox1.c:54: error: (Each undeclared identifier is reported only once musicbox1.c:54: error: for each function it appears in.) musicbox1.c:54: error: 'PA1' undeclared (first use in this function) musicbox1.c: In function 'main': musicbox1.c:82: error: 'DDRA' undeclared (first use in this function) musicbox1.c:82: error: 'PA1' undeclared (first use in this function) musicbox1.c:85: error: 'PORTA' undeclared (first use in this function) musicbox1.c:85: error: 'PA7' undeclared (first use in this function) musicbox1.c:97: error: 'PINA' undeclared (first use in this function) make: *** [musicbox1.hex] Error 1 There appears to be some undefined variables like PORTA, PA7, PINA, etc. Where are these variables being defined? I don't see them in the header files (I didn't look at all of them though). That program was written for a different microcontroller. It needs some modifications to work with the Atmega168, which doesn't have an "A" port. Is it a simple change? Suppose I change the code so it all references PORTC? Has anyone gotten this to work? Hi nerdguy, You're exactly right. Yes, you can change all of the references to pins PA1 (the buzzer) and PA7 (the button), as well as registers PORTA, PINA, DDRA, to reference some other pins that do actually exist on the ATmega168. You would also want to remove the "OSCCAL = 176;" line from the main function -- this simply isn't relevant to the ATmega168 when we're using an external crystal. Finally, this project was designed when we were shipping kits with a 2-row, 24-character wide LCD. With the current 4-row, 20-character-wide LCD, you may want to adjust how and where the text is displayed. I can also point you to another forum post where another member posted their code regarding this project. I haven't personally looked at it but perhaps it will be useful in combination with the above information. Hope that helps! Mike mrobbins, Thanks for you reply and for confirming my guesses. I replaced all references for "A" to "C", changed the CPU definition and got rid of the push button code (haven't gotten to that project yet) to start the music. This got the LCD working correctly. Had to shorten the lines from 25 chars to 20 chars. I got the speaker working by switching the leads. Evidently, the speaker can be connected only one way for it to work correctly. This was not documented anywhere. (This is probably obvious to you but not to an inexperienced hardware person like me.) Anyway, like I said before, I just got the nerdkit and went through the "nerdkits guide". Now I'm trying to go through the projects on your website. No doubt, I'll have other questions. It's so nice to find my question already asked :) For me the remedy for the code was to redirect the lcd.h include and the delay.h include to ../libnerdkits/lcd.h and ../libnerdkits/delay.h (Maybe the reason for the "nothing to be done for 'all'" error?) #include "../libnerdkits/delay.h" #include "../libnerdkits/lcd.h" Instead of: #include "delay.h" #include "lcd.h" Then as mentioned above, I did away with PORTA/PINA references and used B instead. Still working out what is going on in the play_tone function and getting the button to start the thing but hey at least I know it works lol. I compiled my program initialload.c and i got "Make: nothing to be done for intialload.c" What did i do wrong? hariharan: if you're still having trouble, please show us what's in your makefile. i tried to modify the code for the music box, but it does not work, why? the code: // musicbox1.c // for NerdKits with ATtiny26L // [email protected] // F_CPU defined for delay.c #define F_CPU 14745600 #include <avr/io.h> #include <avr/interrupt.h> #include <avr/pgmspace.h> #include <util/delay.h> #include <inttypes.h> #include <stdlib.h> #include "../libnerdkits/delay.h" // PIN DEFINITIONS: // // PA0 -- temperature sensor analog input // PA1 -- piezo buzzer // PA4 -- LCD RS (pin 4) // PA5 -- LCD E (pin 6) // PA7 -- button (pullup high) // PB3-6 -- LCD DB4-7 (pins 11-14) void play_tone(uint16_t delay, uint8_t duration) { // delay is half-period in microseconds // duration is in 10ms increments // example: 440Hz --> delay=1136 // duration = 2*delay * cycles (all in same units) // cycles = 10000 * duration / delay / 2 // cycles = 100 * duration / (delay/50) uint16_t tmp = 100 * duration; uint16_t delaysm = delay / 50; uint16_t cycles = tmp / delaysm; while(cycles > 0) { PORTC |= (1<<PC4); delay_us(delay); PORTC &= ~(1<<PC4); delay_us(delay); cycles--; } } // define some notes // Frequencies from // converted to half-periods (us) by calculating // 1000000/2/frequency // where frequency is in Hz #define D5 851 #define E5 758 #define Fsh5 675 #define G5 637 #define A5 568 #define B5 506 #define C6 477 #define D6 425 #define DUR 40 int main() { // internal RC oscillator calibration for 8MHz. // enable the piezo as output DDRC |= (1<<PC4); // enable internal pullup on PA7 (the button) PORTC |= (1<<PC5); // loop forever! while(1) { // wait for button press... while(PINC & (1<<PC5)) { // do nothing } play_tone(D5, DUR); play_tone(E5, DUR); play_tone(D5, DUR); play_tone(G5, DUR); play_tone(Fsh5, 2*DUR); play_tone(D5, DUR); play_tone(E5, DUR); play_tone(D5, DUR); play_tone(A5, DUR); play_tone(G5, 2*DUR); play_tone(D5, DUR); play_tone(D6, DUR); play_tone(B5, DUR); play_tone(G5, DUR); play_tone(Fsh5, DUR); play_tone(E5, DUR); play_tone(C6, DUR); play_tone(B5, DUR); play_tone(G5, DUR); play_tone(A5, DUR); play_tone(G5, 2*DUR); // delay a bit delay_ms(500); } return 0; } Hi Hari, If you let us know what errors you are getting we would be glad to help you figure out why you are getting those errors, but more importantly we can start helping you figure out how to interpret the errors you are getting so you can start learning how to troubleshoot these problems for yourself. Humberto Please log in to post a reply.
http://www.nerdkits.com/forum/thread/769/
CC-MAIN-2019-18
refinedweb
1,135
66.33
figaro_elixir alternatives and similar packages Based on the "Configuration" category conform8.9 0.0 figaro_elixir VS conformEasy release configuration for Elixir apps. Vapor8.7 7.5 figaro_elixir VS VaporRuntime configuration system for Elixir confex8.3 1.5 figaro_elixir VS confexHelper module that provides a nice way to read environment configuration at runtime. dotenv7.5 0.0 figaro_elixir VS dotenvA port of dotenv to Elixir. Skogsrå6.2 5.6 figaro_elixir VS SkogsråLibrary to manage OS environment variables and application configuration options with ease weave6.0 4.0 figaro_elixir VS weaveJIT configuration loader that works with Kubernetes and Docker Swarm. ex_conf5.0 0.0 figaro_elixir VS ex_confSimple Elixir Configuration Management. Flasked4.7 0.0 figaro_elixir VS FlaskedInjecting ENV vars into application configuration at runtime (12factor app for Elixir) figaro2.6 0.0 figaro_elixir VS figaroSimple Elixir project configuration. configparser_ex2.4 0.0 figaro_elixir VS configparser_exA simple Elixir parser for the same kind of files that Python's configparser library handles. sweetconfig1.4 0.0 figaro_elixir VS sweetconfigRead YAML configuration files from any point at your app. CFEnv0.1 0.0 figaro_elixir VS CFEnvEnvironmental helpers for cloudfoundry, parsing and returning values off VCAP_SERVICES and VCAP_APPLICATON Scout APM: Application Performance Monitoring Do you think we are missing an alternative of figaro_elixir or a related project? README Figaro Elixir This project is based on figaro gem for Rails written by Steve Richert. It's was created to manage ENV configuration for Elixir applications. How does it work? Figaro parses a git-ignored YAML file in your application and loads its values into environmental variables. This is very handy for production environments when you don't want to store some of credentials in your repository. Installation Add Figaro Elixir as a dependency in your mix.exs file. defp deps do [ # ... {:figaro_elixir, "~> 1.0.0"} ] end You should also update your applications list to include Figaro: def application do [ applications: [ # ... :figaro_elixir ] ] end Once you've done that, run mix deps.get in your command line to fetch the dependency. Usage The basic requirement is to have application.yml file in your project config directory. Figaro will read it, parse it and use it to store environmental variables. Please note that ENV is a simple key/value store with the following features: - all values are converted to strings - deeply nested configuration structures are not possible Simple example You can very easily start using Figaro for Elixir. Just create an appropriate file: # config/application.yml foo: bar baz: qux And run iex -S mix in your terminal. You will have an access to configuration values via FigaroElixir.env or System environmental variables: iex(1)> FigaroElixir.env %{"baz" => "qux", "foo" => "bar"} iex(2)> FigaroElixir.env["baz"] "qux" iex(3)> System.get_env("foo") nil iex(4)> System.get_env("FOO") "bar" Keep in mind that system environmental variables keys are uppercased. Environment-specific configuration The power of Figaro elixir comes from distinguishing environments based on Mix.env property. You may have a file defined like this: a: a b: ~ test: c: 1 d: ~ And then after running MIX_ENV=test iex -S mix you will see: iex(1)> FigaroElixir.env %{"a" => "a", "b" => "~", "c" => "1", "d" => "~"} iex(2)> FigaroElixir.env["c"] "1" iex(3)> System.get_env("C") "1" That's it. You don't have to do anything more. Caveats If you are using escript build tool, you need to have :mix among your apps in mix.exs file and copy application.yml file to your rel/project_name/config directory. About the author My name is Kamil Lelonek, I'm a full-stack developer and polyglot programmer. I love playing with different languages, technologies and tools. You can visit my website read my blog or follow me on twitter. In case of any problems or suggestions do not hesitate and create a pull request.
https://elixir.libhunt.com/figaro-elixir-alternatives
CC-MAIN-2020-45
refinedweb
631
51.14
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).() thanks a lot and happy to learn your issue is solved. About the multi-path separated with semicolon, I can confirm it is bugged at the moment but will be fixed in the next release.. import c4d from c4d import gui #(): # I'm not sure to understand what's the goal here. Could you give us more information ? Why you need to be sure the order is the same? hi, hoo thanks a lot @mikeudin I somehow didn't though about this one xD Spline Segment are explained in the Cinema 4D documentation About the initialisation, if you just move the point that should work but i would recommend to init again SplineLengthData. if you add a point, you need to initialize the SplineLengthData again. There's no direct way of doing that. I see two possibilities: The problem is, as you can see on this thread, that some parameters of the xref are not accessible with python. The problem with links can be handle with AliasTrans. Unfortunately, this is not exposed. As you can test yourself, you can have multiple object manager with different filter and/or path search. This is usually not a good sign. But i can create an entry to see if this is possible for the futur. sorry for the late reply. Did you try using GetHDirty on the cache of the voronoi object? If you use the Active Object plugin (from our sdk example) you can see that the voronoi object doesn't change its dirty state while every object on the cache does. I tried it and it seem to work as expected. Using GetHDirty hallow you to store only one value for the whole hierarchy. Could you provide a way to reproduce it? I've been trying this with a simple spline i draw with the spline pen. The length are not the same but none return 0.0. (One is the spline length, the other the lineObject representing it) Are you using multiple segments?
https://plugincafe.maxon.net/user/m_magalhaes
CC-MAIN-2021-31
refinedweb
376
74.19
tracks. This is a very human problem. So let's ignore all the human aspects and run an impartial1 and unbiased2 algorithm on the issue! The Splitline Algorithm is, conceptually, very simple: - Divide the entire map in half based on population. - Repeat (1) for each half. - Once you have reached the target number of voters in each segment, stop. There's an excellent paper about how to apply this to the USA's districts. But, I couldn't find anything which applied the algorithm to the UK. So I thought I'd give it a go for fun3. First, start with the Office for National Statistics' Population Density Map Before we even begin, there are a few obvious issues. Firstly, the map isn't contiguous. So what happens to the Shetland Isles (population 22,000) and the Isle of Wight (population 140,000)? There are, on average, 100,000 people to every MP4. Do Shetland Islanders want to be lumped together with 78k other people from the mainland who might not necessarily share their values? Do the people from the Isle of Wight want to be split in half with one half being tied to non-islanders? The second (related) issue is NI. I'm not going to get into the long history there5. But I think it is fair to say that an algorithmic segmentation might cause a few raised eyebrows. So I'm going to concentrate on England, Scotland, and Wales for this section. Here's a really naïve (and inaccurate) split based on eyeballing it. Applying the algorithm again, and we can split the area into four equal population parts: Repeat a few hundred times and you have equal population constituencies. You could slice the country into long horizontal strips. That would be equal - but not necessarily practical. OK, that's enough mucking around, time to try applying it for real. Getting the data The first question is what resolution of population do you want? Using country-level population density isn't fine-grained enough. Using existing constituency data is just going to replicate the existing boundaries. Like this: So... street level? ONS have local authority level data - which isn't quite granular enough for my purposes. Instead, I downloaded a 1.2GB CSV from Data for Good at Meta (previously Facebook). The data looks like this: "Lat","Lon","Population" "50.62069444448494","-2.023750000001619","0.785894169690091" "54.91486111115504","-1.378472222223325","3.3208914089403367" "52.725416666708846","0.11152777777786699","1.116925979478443" "52.72736111115329","0.12402777777787699","1.116925979478443" "52.779583333375555","0.100694444444525","1.3609065999360417" They have a GeoTIFF which renders the whole of the UK like this: Zooming in to Edinburgh, shows the city is well-populated but not the countryside around it. London, however, is dense. With occasional pockets of emptiness. (Notes to self, to make a GeoTIFF into a browseable web map, run: gdal_translate -of VRT -ot Byte -scale population_gbr_2019-07-01.tif temp.vrt Then: gdal2tiles.py temp.vrt Finally, change to the newly generate directory and run python3 -m http.server 9000 and - hey presto - web maps!) Python and Pandas Oh My! import pandas as pd df = pd.read_csv("population_gbr_2019-07-01.csv") total_population = df['Population'].sum() Gets us a total population of 66,336,531. Which looks right to me! Let's say we want 100,000 people (not voters) per constituency. That'd give us 663 areas - which is about what the UK has in the House of Commons6. OK, which way do we want to split these data? A proposal by Brian Langstraat suggests splitting only in the horizontal and vertical directions. First, let's sort the data South to North. df = df.sort_values(by = 'Lat', ignore_index = True) Which gives us Lat Lon Population 0 49.864861 -6.399306 0.573312 1 49.868194 -6.393472 0.573312 2 49.874306 -6.369583 0.573312 3 49.884306 -6.342083 0.573312 4 49.886528 -6.341806 0.573312 ... ... ... ... 19232801 60.855417 -0.886250 0.109079 19232802 60.855417 -0.885694 0.109079 19232803 60.855417 -0.885417 0.109079 19232804 60.855694 -0.884861 0.109079 19232805 60.855972 -0.884028 0.109079 Now we need to add up the Population until we reach total_population / 2 - that will tell us where to make the first cut. half_population = total_population / 2 index = 0 cumulative_total = 0 for x in df["Population"] : if (cumulative_total >= half_population): print(str(index)) break else : cumulative_total += x index += 1 That tells us that the row which is halfway through the population is 8,399,921. df.iloc[8399921] Gives us a Latitude of 52.415417 - which is Huntington. So a properly bisected map of the UK's population looks like: 50% of the population live above the black line, 50% live below it. Let's take the top half and split it vertically. df = df[8399921:] df = df.sort_values(by = 'Lon', ignore_index = True) total_population = df['Population'].sum() half_population = total_population / 2 index = 0 cumulative_total = 0 for x in df["Population"] : if (cumulative_total >= half_population): df.iloc[index] break else : cumulative_total += x index += 1 Which gives us 52.675972,-2.082917 - an industrial estate in Wolverhampton. In this map, 25% of the total population live to the East of the black line, and 25% to the West: And this is where we start to see one of the problems with the naïve splitting algorithm. A chunk of Aberdeen has been sliced off from its neighbours. We can see that there will be a likely constituency of Shetlands, a bit of Aberdeen, and a slice of North-East England. These may not share common needs! Straight-line slicing bisects otherwise "natural" groupings of people. Sure, gerrymandering is bad - but this sort of divvying up makes for the strangest bedfellows. Shortest Splitline The Shortest Splitline Algorithm doesn't is similar to the above but, rather than restricting itself to vertical and horizontal lines, looks for the line with the shortest distance which contains 50% of the population. A Different Approach - South Up Algorithm Let's just start at the bottom left of the map and work our way up. Here's the South West (Scilly Isles not shown7.): Let's plot everything, just to make sure the data are all there: import matplotlib.pyplot as plt df.plot(x="Lon", y="Lat", kind="scatter", c="black") plt.show() OK! Let's grab the first 100,000 people. df = df.sort_values(by = ['Lat', 'Lon'], ignore_index=True) target = 100000 index = 0 cumulative_total = 0 for x in df["Population"] : if (cumulative_total >= target): df.iloc[index] break else : cumulative_total += x index += 1 area = df[:index] area.plot(x="Lon", y="Lat", kind="scatter", c="black") plt.show() Which results in: Hurrah! Scillies and South West England! Exactly 100,000 people live in that area. Let's do the next few in different colours df = df.sort_values(by = ['Lat', 'Lon'], ignore_index=True) df['Colour'] = pd.Series(dtype="U") # Add a Unicode string column for colour target = 100000 index = 0 cumulative_total = 0 # There is probably a much more efficient way to do this loop for x in df["Population"] : if (cumulative_total <= target): df.Colour.iloc[index] = "mistyrose" elif (cumulative_total > target and cumulative_total <= target * 2): df.Colour.iloc[index] = "peru" elif (cumulative_total > target * 2 and cumulative_total <= target * 3): df.Colour.iloc[index] = "mediumpurple" elif (cumulative_total > target * 3 and cumulative_total <= target * 4): df.Colour.iloc[index] = "olivedrab" elif (cumulative_total > target * 4): break cumulative_total += x index += 1 area = df[:index] area.plot(x="Lon", y="Lat", kind="scatter", c="Colour") plt.show() That's (mostly) going South to North, so we get those unnatural looking stripes which have weird incongruent chunks. Bubble Split Algorithm Rather than drawing lines, let's use a "Bubble Split" approach. Starting in, for example, the most South Westerly point in the dataset and then growing to its neighbours until it hits a population of 100,000. This will use's SciPy's KDTree Algorithm from scipy import spatial import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.read_csv("population_gbr_2019-07-01.csv") df = df.sort_values(by = ['Lat', 'Lon'], ignore_index=True) points = df[["Lat", "Lon"]].to_numpy() kdtree = spatial.KDTree(points) # To find the nearest neighbour of a specific point: kdtree.query( [59.1,-6.2] )[1] counter = 1 population = 0 target = 100 while (population <= target): nearest_index = kdtree.query( [59.1,-6.2], [counter] )[1] population += df.loc[nearest_index, "Population"].values[0] counter += 1 population Looping through is very slow and crawls to a halt after a few thousand iterations. So let's cheat. This grabs the nearest million points and finds their total population. nearest_million = kdtree.query( [59.1,-6.2], 1000000 )[1] df["Population"].iloc[ nearest_million ].sum() There's no way to iterate through the results, so its easiest to grab a bunch and iterate through that instead. counter = 0 population = 0 target = 100000 while (population <= target): end = (counter + 1) * 10000 start = counter * 10000 population += df["Population"].iloc[ nearest_million[start:end] ].sum() print("On " + str(end) + " Pop: " + str(population)) counter += 1 These can now be plotted using: indices = kdtree.query( [59.1,-6.2], end )[1] to_plot = df.iloc[ indices ] KDTrees are not designed to be altered - so deleting nodes from them is impossible. Instead, the nodes have to be deleted from the data, and then a new KDTree constructed. index_to_delete = kdtree.query( [59.1,-6.2], end )[1] df = df.drop(index = index_to_delete) points = df[["Lat", "Lon"]].to_numpy() kdtree = spatial.KDTree(points) Bounding Boxes Drawing a box around some points is useful. It provides a geographic border and also means we don't need to worry about map colouring algorithms. For this, we'll use SciPy's ConvexHull algorithm: import matplotlib.pyplot as plt from scipy.spatial import ConvexHull, convex_hull_plot_2d import numpy as np indices = kdtree.query( [x,y], end )[1] area = df.iloc[ indices ] s_array = area[["Lat", "Lon"]].to_numpy() hull = ConvexHull(s_array) plt.plot(s_array[:,0], s_array[:,1], 'o') # Remove this to only display the hull for simplex in hull.simplices: plt.plot(s_array[simplex, 0], s_array[simplex, 1], 'k-') plt.show() Here's the result - can you spot what I did wrong? Putting it all together This scrap of code reads the data, sorts it, constructs a KDTree, starts at the South West tip, finds the 100,000 people nearest to that point, and draws a bounding box around them: # Import the libraries import pandas as pd import matplotlib.pyplot as plt from scipy import spatial from scipy.spatial import ConvexHull, convex_hull_plot_2d import numpy as np # Read the data df = pd.read_csv("population_gbr_2019-07-01.csv") # Sort the data df = df.sort_values(by = ['Lat', 'Lon'], ignore_index=True) # Create KDTree points = df[["Lat", "Lon"]].to_numpy() kdtree = spatial.KDTree(points) # Most South Westerly Point sw_lat, sw_lon = points[0] # Get first 100,000 people counter = 0 population = 0 increment = 5000 target = 100000 while (population <= target): end = (counter + 1) * increment start = counter * increment population += df["Population"].iloc[ nearest_million[start:end] ].sum() print("On " + str(end) + " Pop: " + str(population)) counter += 1 # Get the index numbers of the points with 100,000 people indices = kdtree.query( [sw_lat, sw_lon], end )[1] #') # Remove this to only display the hull # Draw the hull for simplex in hull.simplices: plt.plot(plot_array[simplex, 0], plot_array[simplex, 1], 'k-') # Display the plot plt.show() Which produces: Running that a few more times gives this (sorry for chopping off the Scilly Isles): Can you see why I call this "Bubble Split"? Already we can see the limits to this approach. The orange-coloured subdivision has a little incongruent bit across the estuary of the River Fal. Here's the (hugely inefficient and slow) code to generate 40 areas of roughly 100,000 people: # Import the libraries import pandas as pd import matplotlib.pyplot as plt from scipy import spatial from scipy.spatial import ConvexHull, convex_hull_plot_2d import numpy as np def get_100k_people(df, nearest_million) : counter = 0 population = 0 increment = 1000 target = 100000 while (population <= target): end = (counter + 1) * increment start = counter * increment population += df["Population"].iloc[ nearest_million[start:end] ].sum() #print("On " + str(end) + " Pop: " + str(population)) counter += 1 return end def plot_hull(df, indices) : #', markersize=1) # Remove this to only display the hull # Draw the hull #for simplex in hull.simplices: # plt.plot(plot_array[simplex, 0], plot_array[simplex, 1], 'k-') # Read the data df = pd.read_csv("population_gbr_2019-07-01.csv") # Sort the data df = df.sort_values(by = ['Lat', 'Lon'], ignore_index=True) for areas in range(40): # Create KDTree points = df[["Lat", "Lon"]].to_numpy() kdtree = spatial.KDTree(points) # Most South Westerly Point sw_lat, sw_lon = points[0] # Get the nearest 1 million points nearest_million = kdtree.query( [sw_lat, sw_lon], 1000000 )[1] # How many points contain a cumulative total of 100k people end = get_100k_people(df, nearest_million) # Get the index numbers of those points indices = kdtree.query( [sw_lat, sw_lon], end )[1] # Draw plot_hull(df, indices) # Delete used Indices df = df.drop(index = indices) df = df.reset_index(drop = True) # Display the plot plt.show() Other Choices I made the rather arbitrary choice to start in the South West and proceed Northwards. What if, instead, we start with the point with the lowest population density and work upwards. Here's a video of the sequence: As you can see, it starts off pretty well, but the final few areas are randomly distributed throughout the map. I kinda like the idea of a meta-constituency of small villages. But I'm not sure if that's practical! This next video starts with the highest population density and works downwards: Here are the two different approaches. Click for massive. Left starts at the lowest density. Right starts at the highest density. Fascinating to see where they diverge, and which bits look more "natural". Anyway, go play with maps, data, & algorithms. It's fun! pic.twitter.com/DPCbsbyMsb — Terence Eden (@edent) August 28, 2022 NB, there's no guarantee that the generated images will have dimensions be divisible by two - so here's some hacky ffmpeg code to crop the images! ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -r 30 -pix_fmt yuv420p -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4 Or, to scale the video to 1280 wide ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -r 30 -pix_fmt yuv420p -vf scale=1280:-2 output.mp4 Algorithms aren't neutral It's tempting to think that computer code is neutral. It isn't. Even something as seemingly innocuous as choosing a starting point can cause radical change. It may be aesthetically pleasing to draw straight lines on maps - but it can cause all sorts of tension when communities are divided, or combined, against their will8. It's a fun exercise to take population density data and play around with it algorithmically. It shows the power and the limitations of automated decision making. - LOL! ↩ - Even bigger LOL! ↩ - This is a personal blog. I don't work for the Boundary Commission. I do not have the power to enact this. ↩ - It is, of course, a lot more complicated than that. ↩ - Go watch the entirely accurate documentary "Derry Girls". ↩ - Look, OK, it's complicated. There are conventions about The Speaker and all sorts of other electoral gubbins. This is just a fun weekend exercise. Let's not get hung up on it. ↩ - Sorry Scilly Isles! I had a lovely holiday there. You should go visit! ↩ - See, for example, the entire history of colonialism. ↩
https://shkspr.mobi/blog/2022/09/running-a-shortest-splitline-algorithm-on-the-uk/
CC-MAIN-2022-40
refinedweb
2,555
59.8
I'm new to Python. This is a homework problem, but it is tough as I only have a little experience in Java. The code is supposed to print the first Catalan-numbers using its recursive definition: C(n + 1) = C(n) * (4n + 2) / (n + 2) import numpy c = [] c.append(1) for i in xrange(0,1000000000): c.append((4*i+2)*c[i]/(i+2)) print (c[i]) if c[i]>= 1000000000: break numpy.savetxt("catalan",numpy.c_[i, c[i]]) Index out of range. Your first i is equal to 1 but c only has one element, which is index 0. So just change the range to (0,1000000000). By the way, don't use range, use xrange, it'll be faster and take less memory. When you use range, Python creates an array of that size. An array of size 1000000000 takes a ton of memory. Instead, xrange create an iterator so it takes much less memory.
https://codedump.io/share/B5629PLdceB9/1/what-is-wrong-with-this-python-code-about-the-catalan-numbers
CC-MAIN-2017-26
refinedweb
161
85.49
September 2008 Board reports (see ReportingSchedule). Next board meeting is on September 17, 2008, so the aggregated Incubator report must be sent on September 15th. REPORT IS CLOSED Your project might need to report even if it is not listed below, please check your own reporting schedule or exceptions. BlueSky We have done the following: 1.Two modules of the project,that is DTU and Tserver,have been modified by using stlport to replace gpl based c++ code except for some minor mistakes. 2.The official website has been updated and formal reports are added. 3.We strive hard to learn the related contents of how to incubate successfully in Apache community ,which is sure a great help to our future work. JSecurity 2008-September JSecurity Incubator status report JSecurity is a powerful and flexible open-source Java security framework that cleanly handles authentication, authorization, enterprise session management and cryptography. JSecurity has been incubating since June 2008. Since last month, a new external release has been issued (0.9.0-RC2), and some bug fixes, and discussion about the configuration format. The code source should be injected to the Apache repository soon, when the external 0.9.0 release will be out. It's a matter of days, may be a week, accordingly to the latest discussion on the mailing list. JIRA is set up, but it should be used. The status is being maintained at log4php log4php is a port of the log4j package for PHP. Very limited activity over the last 3 months. 3 external patches, provided via JIRA were submitted in September, and these are in the process of being vetted and will likely be committed. There are external users but almost no activity on the mailing lists. Incubating since 07/2007. We just announced our first release from the incubator Pig 0.1.0! In addition, a major system redesign is underway that introduces type system, improves performance, and provides better platform for future work. The rework is expected to complete in October 2008. - The development community is growing with addition of Daniel Dai as a new committer. More work needed to attract developers to the project. - The user community is also growing with more activity on the user mailing list. In addition, a tutorial and user function repository were added to help users to come up to speed on the product. Also there is ongoing work on the user documentation. Incubating since: October 2007 RAT River River is aimed at the development and advancement of the Jini technology core infrastructure. Jini technology is a service oriented architecture that defines a programming model which both exploits and extends Java technology to enable the construction of secure, distributed systems which are adaptive to change. This reporting period showed almost no activity, which is quite disappointing. Based on the question by one of our mentors "What's up with River" it became clear some active committers were forced to downsize their participation due to being drowned by other activities, in case of many of the Sun committers this is due to a change of jobs. There are however signs that others, well known in the Jini community, want to lend their hand and help with getting out our next release. Things that needs to be done before graduation: the API in the com.sun namespace must be changed to org.apache, probably has to await the incorporation of patches lingering around and after our automated test framework is in place - overall participation of non Sun committers must increase and we should grow our community by getting more people involved Incubating since: December 2006 Shindig Shindig is a reference implementation of the OpenSocial and gadgets stack. Incubating since: 2007-12-06 High-level status summary: Shindig preparing for an incubation release of the v0.8 OpenSocial spec On track for an incubation release of Shindig on September 30, compliant to OpenSocial v0.8.1 - Active community, code base is maturing well, and is in use by many very large sites - Apache-provided Zone is up and running - 2 new committers, actively seeking more - Top 2 things to resolve prior to graduation: - Improve diversity of committers (progress, but on-going) - Run through at least one release (0.8 release will be a good one) Hama Hama is a parallel matrix computational package based on Hadoop. Incubating since: 19 May 2008 - The Hama website was published. - The automated CI server for Hama integration builds was installed. - The users were beginning to evaluate/report the function and perfomance of the Hama. We need to make a guide for the users and developers. Apache PhotArk will be a complete open source photo gallery application including a content repository for the images, a display piece, an access control layer, and upload capabilities. PhotArk has been accepted for Incubation in August 19. SVN, Mailing Lists and Committer accounts are ready Community is starting to work on website and initial code.
http://wiki.apache.org/incubator/September2008
CC-MAIN-2016-36
refinedweb
825
54.32