14 April 2009 18:30 [Source: ICIS news] WASHINGTON (ICIS news)--President Barack Obama said on Tuesday that he sees encouraging signs of economic progress, but he cautioned that this year still will be difficult and promised unrelenting effort toward economic recovery. In what the White House billed as a major address on the economy, Obama told an audience: "And taken together, these actions are starting to generate signs of economic progress." He said the $787bn (€590bn) federal stimulus bill passed by Congress and approved by him in February is beginning to have effect in saving jobs that otherwise might have been cut in schools and police departments across the country, boosting infrastructure projects and construction jobs and increasing credit supply for small business and consumer lending. "This is all welcome and encouraging news but it does not mean that hard times are over," Obama said. He said that "2009 will continue to be a difficult year". "The severity of this recession will cause more job losses, more foreclosures and more pain before it ends," he said. "Credit is still not flowing nearly as easily as it should." He said the ongoing process of restructuring the auto and financial sectors "will involve difficult and sometimes unpopular choices", perhaps a reference to reports that the White House may force General Motors into bankruptcy. "All of this means that there is much more work to be done," the president said. "And all of this means that you can continue to expect an unrelenting, unyielding, day-by-day effort from this administration to fight for economic recovery on all fronts," he added.
Obama outlined what he termed "a new foundation" - the title of his speech. The president appeared to be setting "a new foundation" as the core of his administration going forward, echoing famous policy campaigns of earlier Democratic administrations such as the "New Deal" of Franklin Roosevelt in the 1930s, the "New Frontier" of John Kennedy in the early 1960s and the "Great Society" programmes of Kennedy's successor, Lyndon Johnson, in the late 1960s. The president placed emphasis on near-term regulatory reforms. He also renewed his commitment to action on global warming. However, Obama did not say he expects to see completed legislation on that controversial topic this year. Repeating the "glimmers of hope" phrase he first used in a press conference last Friday, Obama offered a cautious forecast for an eventual economic recovery. "There is no doubt that times are still tough. By no means are we out of the woods just yet," he said. "But from where we stand, for the very first time, we are beginning to see glimmers of hope." ($1 = €0.75)
http://www.icis.com/Articles/2009/04/14/9208037/obama-sees-signs-of-economic-progress-but-more-struggle.html
Help plzz

This does not qualify as asking for help. Nobody has the time to go through your code and try to understand what the problem is. You need to be specific about what you need to do, what you have done so far, and what you are having problems with.

You declare home name as a global variable on line 40. You then redefine the variable elsewhere, which, according to the rules of scope, will cause the program to look at the name closest to the working code. The one with no information. Delete the global so that you can use the other home name variables, and change your functions so that they return the variables they change. To do that you have to list the struct name before the function name, both in the prototype and the definition. Read more here: Cprogramming.com Tutorial: Functions

There's also no reason to hide small programs in an attachment when you ask things about it:

Code:
#include<conio.h>
#include<stdio.h>
#include<stdlib.h>

struct home
{
    char fullname[50], address[20];
    int age;
};

struct fullinfo
{
    struct home name;
};

void function(struct home name)
{
    printf("Enter name:");
    scanf("%s",&name.fullname);
    printf("\nEnter address:");
    scanf("%s",&name.address);
    printf("\nEnter age:");
    scanf("%d",&name.age);
}

void show(struct fullinfo bin)
{
    struct home name;
    printf("\nName:%s %s",bin.name.fullname);
    printf("\nAddress:%s %s",bin.name.address);
    printf("Age:%d %d",bin.name.age);
}

int rec( struct fullinfo bin, struct home name)
{
    return (strcmp(name.fullname, bin.name.fullname)==0) && (strcmp(name.address, bin.name.address)==0)
        && (strcmp(name.age, bin.name.age)==0);
}

struct home name;

void show(struct fullinfo bin);
int rec( struct fullinfo bin, struct home name);

void main()
{
    struct home name;
    struct fullinfo bin;
    int fin;
    clrscr();
    function(name);
    show(bin);
    printf("\n1. Edit\n2. Delete\n3. Exit\n");
    printf("Enter Here:");
    scanf("%d",&fin);
    switch(fin)
    {
        case 1:
        {
            function(name);
            show(bin);
            getch();
        }
        case 2:
        {
            printf("\nFullname:");
            printf("\nAddress:");
            printf("\nAge:");
            exit(0);
        }
        case 3:
        {
            exit(0);
        }
    }
    getch();
}

i cant get what you mean, sorry, i just want to ask if anybody can edit my program so that i can get:

input:
name: andrew
address: jones
age: 24
show (output);
ask if want to edit name/address/age:
if edit name; name: miles
if edit address; address: hideaway
if age; age: 22
then show final output.

thats all, i keep breaking some of my programs but i just cant do this

Last edited by andrewkho; 03-16-2010 at 09:43 PM. Reason: add

We're not here to do everything for you. And while someone apparently does have time to go through your code and to try to understand the problem -- we're just not here to do all the work for you. Generally the way this works is you explain specifically what you want to do and what you can't seem to do, and we work from there.
Quzah. Hope is the first step on the road to disappointment.
https://cboard.cprogramming.com/c-programming/124755-how-edit-output-struct-call-output.html
User talk:RicoZ

Contents
- 1 Reservoir tagging
- 2 Deleting pages
- 3 nudism=* on Relation
- 4 Mass edits in UK
- 5 Reverting of end_date
- 6 Re: Lakewalker really dead?
- 7 aerialway=goods
- 8 Water level - Over, Under, In Between...
- 9 waterway=riverbank and natural=water
- 10 class:bicycle:roadcycling
- 11 man_made=tunnel
- 12 JOSM-Validator
- 13 Re: ES:Cave - new proposal to map the interiors of caves
- 14 bridges vs. keepright

Reservoir tagging

Hi Rico, I just reverted one of your edits. It's a shame because I actually agree with migrating towards natural=water + water=reservoir. However, I think we have to go about this differently. Right now, the wiki definition in no way suggests that the tag includes infrastructure, so there's no reason to be confused about anything, except perhaps the status of the deprecation. If you change the definition, then you also change the meaning of all the landuse=reservoir in the database, and therefore invalidate perfectly correct mapping. I'm open to strongly recommending that water=reservoir should be preferred, but please keep the definition intact. --Tordanik 15:16, 11 May 2014 (UTC)
- Hm, I don't think that I have changed the definition - I wrote that there is considerable confusion about the status of this feature. If you look at this discussion, I think it is fair to say the confusion could hardly be worse and that this should be written in the wiki. Also note that the German wiki actually says that the use is deprecated; in addition, the link to "Status" links nowhere.
- Perhaps we could reinstate my edit as it was, except for the first sentence? RicoZ (talk) 11:25, 13 May 2014 (UTC)
- Your change to the first sentence is indeed my main complaint, so that would be a big step forward.
- I would also like to see "... identical to the water body itself or whether it denotes a larger area/broader concept of an area used ..." replaced with "...
whether it should be considered identical to the water body itself or whether it should be redefined to denote a larger area/broader concept of an area used ...", to further emphasize that the latter is not the current definition.
- With these two changes, I would be happy with the remainder of your text. --Tordanik 15:40, 16 May 2014 (UTC)
- After some thinking I left the "redefined to denote a larger area/broader concept of an area used" part out completely. Even if most people would like to have it, with so many uses of this tag it seems unrealistic that the tag could be redefined, so for the other meaning a different tag/value will have to be invented. RicoZ (talk) 17:46, 16 May 2014 (UTC)

Deleting pages

Hi, when tags are still present in the database, it is better to say so on their wiki page instead of nominating them for deletion. This way, people who stumble upon one of those tags will know what to do. Cheers, Jgpacker (talk) 11:55, 11 August 2014 (UTC)
- The problem is ... and some more. Our search function is not the best one in the world, wiki cleanup has a way to go, national wikis are out of sync. I can understand the desire to keep the old information available, but the cost of keeping obsolete, misleading pages around is just too high in my opinion. I didn't have the slightest clue that bridge=swing is not a legal value before the recent mailing list discussion, and I do use the search function as well as I can. Some of the pages were never legal key/value combinations, just informal proposals or not even that. Maybe move them to a special namespace deleted:wiki? RicoZ (talk) 12:41, 11 August 2014 (UTC)
- If the problem is taginfo, just add something like {{ValueDescription |key=bridge |value=swing |description=Deprecated. Use x=y + a=b instead |status=abandoned}} to those pages. The description will appear in taginfo.
--Jgpacker (talk) 12:48, 11 August 2014 (UTC)
- Have tried that now for bridge=bascule; the result is - there is still this evil "✔" saying the combination has a valid wiki page. Many people, after seeing that, won't read the fine print on the wiki page. RicoZ (talk) 13:09, 11 August 2014 (UTC)
- Now you have to wait until taginfo updates (the wiki info is cached), but you can see an example of what it's going to look like here. --Jgpacker (talk) 13:15, 11 August 2014 (UTC)
- It does not seem to help the original problem. bridge=bascule will still appear with a wikipage="✔" in . It remains to be seen how many people will actually read the description.
- On the other hand, what is it good for? The wiki page has zero content apart from the k/v template for a combination which, as far as I can see, was never approved. I would argue it would be clearly better if it wasn't there, because there are too many ways it will pollute the search results, taginfo and who knows what else. Of course it is the first result if you search "bascule" or "bridge bascule". In the second case bridge:movable does not even come up on the first page of results. RicoZ (talk) 13:51, 11 August 2014 (UTC)
- You know, it is not only possible for users to create wiki pages for the tags they use, it's actually recommended on the wiki (it's also said they should be prepared for a possible future change of tagging).
- It is useful to keep those pages in the wiki (as long as there are instances of those tags on OSM). If someone stumbles upon this tag, he can easily see on the wiki how he should treat this tag (as long as the wiki page is updated). --Jgpacker (talk) 14:33, 11 August 2014 (UTC)
- PS: To clear up any confusion, I'm not saying we should keep obsolete information in the wiki. I'm saying we have to update those pages instead of removing them, because they can be useful.
--Jgpacker (talk) 15:00, 11 August 2014 (UTC)
- It is not really easy to navigate through dozens of abandoned proposals to find the one that is active... I've had that many times.
- Trying to think of some other solutions. Could the pages be renamed in a way so that taginfo would not pick up the entries of obsolete pages? Prepending "abandoned:" or whatever would make it immediately clear in search results and prevent tools from picking up the wrong entry. RicoZ (talk) 16:55, 11 August 2014 (UTC)
- You could add a redirect to these pages while keeping the templates KeyDescription or ValueDescription. Taginfo will still read them. See Key:admin_level and its taginfo page as an example. --Jgpacker (talk) 17:16, 11 August 2014 (UTC)
- PS: If you find proposals that are clearly abandoned or discontinued and not in use, you can try to use Template:Archived_proposal. Oh, and if you really think the icon "✔" on taginfo is a problem, then you should ask Joto to change it to another icon. That icon doesn't mean a thing except that there is a wiki page for that key or tag. --Jgpacker (talk) 17:22, 11 August 2014 (UTC)
- Yes, I think the "✔" on taginfo is a problem, and Joto did already reply on github saying that he would prefer not to add code to taginfo to special-case wiki pages that are abandoned or scheduled for deletion. Taginfo and the miserable search engine are the main problems, and no solution that I can think of will solve those. RicoZ (talk) 20:47, 11 August 2014 (UTC)
- I meant the icon ✔ shouldn't be used at all (in no case) if it somehow conveys a sense of "tag approval" (because that's not what this icon means in taginfo). Also, adding a redirect to the page as I mentioned before does solve the issue with the search engine while keeping the tag description in taginfo, please try it. --Jgpacker (talk) 21:13, 11 August 2014 (UTC)

nudism=* on Relation

Hi, I saw you made a change in the Proposed features/Nudism indicating the key nudism=* could be used on relations.
On which kind of relation can this key be used? If possible, mention it on the proposal. If the only kind of relation nudism=* can be used on is a Multipolygon relation, then it shouldn't be indicated that it can be used on relations, because this kind of relation is a special case which represents an Area. See The Future of Areas for more details. --Jgpacker (talk) 13:20, 31 August 2014 (UTC)
- The motivation was that it was suggested to use it for resorts, among others. I have minimal knowledge of how to tag resorts but assumed that some of them may involve relations - not sure about that? Also, taginfo says it has been used on relations; I will check what that is. BTW thanks for the areas link, but all that I understood from it is that there is a problem, and I knew that already. Do you have some favorite solution? RicoZ (talk) 14:55, 31 August 2014 (UTC)
- Taginfo says it's used on relations because taginfo doesn't show areas at all, because areas don't really exist in OSM (as an object type). Multipolygon relations are complex areas and closed ways are simple areas. The fact that the wiki differentiates between areas and the taginfo box doesn't is an unsolved problem (it is a common source of confusion), but I believe the idea is to actually create the area datatype in OSM to solve this. I just verified, and all relations tagged with nudism=* are multipolygons. As far as I know, resorts aren't tagged with a non-multipolygon relation, so we shouldn't classify the key nudism=* as allowed on relations. --Jgpacker (talk) 15:34, 31 August 2014 (UTC)
- PS: If there is another kind of relation that can be tagged with the key nudism=*, then it's ok to say it can be used on relations, but I advise briefly mentioning this kind of combination somewhere on the wiki. --Jgpacker (talk) 15:37, 31 August 2014 (UTC)
- Try this query: . It queries the relations without their members. The relations can't be seen on the map because of that, but can be seen as raw data.
--Jgpacker (talk) 16:08, 31 August 2014 (UTC)
- That works. They are all multipolygons, as expected. Wondering - I can't see anything wrong looking e.g. at - is there any way to see which one is causing trouble? It seems that most don't cause any trouble. Or is it expected that any query involving a multipolygon touching a coastline will cause such problems? RicoZ (talk) 16:48, 31 August 2014 (UTC)

Mass edits in UK

Please check with talk-gb before making widespread changes to tags in the United Kingdom. I believe this is what is required by the Mechanical Edits Policy. SK53 (talk) 15:52, 1 September 2014 (UTC)
- I have not done any mechanical edits in the UK and I have consulted the authors of most of the special bridges by PM. RicoZ (talk) 20:45, 1 September 2014 (UTC)

Reverting of end_date

I reverted your end_date change; while I agree with you to some extent, I'm not convinced you should just change it to "unapproved". Erik Johansson (talk) 14:59, 16 December 2014 (UTC)
- Please see Proposed_features/Status and the relevant discussions (also on the mailing list). end_date is not just "unapproved" - it will break stuff really badly. It should be considered rejected per the same arguments as the previously mentioned feature.
- There are many alternative proposals which do not break anything and work quite nicely in my opinion - see Comparison of life cycle concepts. I hope one of those can eventually make it, but end_date=* is not realistic for the next few years. I have quickly reverted some of your changes; please add information as you see fit, but for now end_date is in most cases unsuitable. RicoZ (talk) 21:32, 16 December 2014 (UTC)

Re: Lakewalker really dead?

You can download scanaerial from github: --katpatuka (talk) 05:51, 26 January 2015 (UTC)

aerialway=goods

Hi, you deprecated aerialway=goods. Was this discussed before?
goods=yes does not really fit, since it is defined as an access restriction "(light commercial vehicles; e.g., goods vehicles with a maximum allowed mass of up to 3.5 tonnes)". Also there is no other aerialway=* type which would fit the aerialway on the photo at aerialway=goods. --Klumbumbus (talk) 23:29, 17 February 2015 (UTC)
- Ok, looks like it was not such a bright idea to suggest goods=yes, but perhaps foot=no should be good enough? The photo at aerialway=goods did not match the description on its own page very well - if you think that something is needed for such small open-gondola aerialways, we can make something up.
- The important point for me is to distinguish access (goods only) and type (gondola, carpet..) - many kinds of aerialways can be used to transport goods, so it is not so good to have this restricted to aerialway=goods.
- What you see on the photo and this are vastly different - and with current tagging both would end up mapped as aerialway=goods. RicoZ (talk) 00:00, 18 February 2015 (UTC)
- Better now? RicoZ (talk) 13:07, 18 February 2015 (UTC)
- Yes, better, but still, I would not call it deprecated in the tables. Since aerialways for goods are only accessible to a handful of persons, I think for 99.9% of mappers and data users it is completely adequate to tag an aerialway for goods only with aerialway=goods. They don't care how exactly the freight is attached to the aerialway. And they don't care if it looks this way or this way. I also don't see that big a difference between the two. I can even imagine that such an aerialway can be modified depending on the freight. If the freight is a big single block, it is directly attached, and if the freight consists of several items, it is put into a basket, which is attached to the aerialway. Why do you think that "The photo at aerialway=goods did not match the description on its own page very well"? I think "A cable/wire supported lift for goods.
Passenger transport is usually not allowed." perfectly fits this photo. --Klumbumbus (talk) 18:17, 18 February 2015 (UTC)
- I have started a mailing list discussion in the meantime; let's see if that brings some more insight. The exact type of a freight aerialway may not matter sometimes, but because nearly every type of aerialway - from zip-lines and magic carpets to drag lifts and cable cars - is used to transport freight (exclusively or in mixed mode), I am still convinced it is better to have a way to tag this which is orthogonal to the construction. As far as I can see, the access and usage tags are sufficiently descriptive for this purpose. Should we have something like an aerialway=unclassified in addition to that? RicoZ (talk) 16:49, 21 February 2015 (UTC)

Water level - Over, Under, In Between...

Hello RicoZ, I've just seen a comment you left on Talk:Tag:natural=bare rock, where you seem to be aware of the discussions/problems around rendering natural features near coastlines. It is something (the problem) I'd like to understand, since I had to model those things in the past (in a large topographic database). Are you aware of any discussion forum/list that deals with the topic, where I might be able to help/understand? --jfd553 (talk) 15:24, 13 May 2015 (UTC)
- Unfortunately the information is spread around very many places and there is no detailed agreement on how it should be modeled. Mappers map it somehow and hope it works; the mapnik team tries to render it.
- And we did not start on coral reefs and underwater rendering yet..
- Other renderers have different problems and frequently need some workarounds to make things work. Just try to map areas having a complex mixture of surfaces (sand, mud, grass, ...) within a tidal zone! However, from the links you provided (thanks) I understand that ...
- It is sometimes possible, depending on the context!
And I have just learned that from your links, even though I've looked at the wiki for years to find the proper way to map them.
- Even if nobody is mapping for the renderer (!-), most cases discussed in the links result from different attempts to map features in intermittent water areas without an adequate guideline. About these links: in one of them (#1547) imagico lists different options to get the mud rendered properly. My preference would be for (5), but while you wait for v3.0 I would leave it as it is today because of the richness of the rendering (which I did not know about until yesterday!), even if it causes a problem that seems to occur only at the river-coastline interface. Furthermore, my preference does not concern only mud but all similar tags used by contributors (sand, pebbles, ... even if one could argue they are not really features). Concerning my preference for option (5), which is related to my experience in modeling topographic features, I would propose (if I may) to use the concept of intermittent water instead of the concept of a tidal environment. Tidal refers to the cycle at which an area is covered by water (twice a day), while it is obvious from your links that the problem appears in areas where the water cycle is different. For instance, it may affect rivers (flash floods, dry season, ...), lakes (dry season, dam-controlled water level, ...) - all areas that are sometimes covered with water because of daily, weekly, monthly or yearly processes, natural or not. I think that if something can be settled and documented in the wiki - where casual mappers like me could find the information - it will really make mappers' lives easier (and the map nicer)! --jfd553 (talk) 17:57, 15 May 2015 (UTC)
- Comments regarding #1547 should be attached directly to the ticket.
- The whole Key:landcover/natural/landuse modeling requires some good ideas, for both land and water features and especially where several properties overlap.
Right now we can document that waterway=* is rendered above e.g. natural=bare_rock and similar, but the same does not work for the coastline because of a (currently) unfixable mapnik problem. There is no agreement on how to do underwater/tidal mapping. The way mapnik handles rendering is pushed to extreme sophistication and is therefore fragile and hard to implement for mobile renderers.
- I have attempted to gather existing proposals and ideas that are missing here: User:RicoZ#Geologgy.2C_Geopgraphic_landforms_and_vegetation_landcover. RicoZ (talk) 11:18, 16 May 2015 (UTC)

waterway=riverbank and natural=water

Hi, you have changed several pages to state that you can only use one or the other, but never both. This is in direct conflict with the proposal that introduced water=river as a replacement for waterway=riverbank in the first place, which recommends a co-existence until the migration is complete. So what is your reason for the change? --Tordanik 16:23, 1 June 2015 (UTC)
- It took me a good hour to find out why some islets in the Isar weren't rendered, and this was the solution: . Maybe there were some other problems in the data, but adding natural=water () caused the breakage and removing natural=water fixed it. None of the QA tools was able to catch anything :((
- So it is asking for trouble and does not solve any problem. The proposal is unnecessarily overcautious here. If someone wants to convert from riverbank to natural=water, he can do it cleanly. There is no reason to leave the waterway=riverbank behind once the conversion is done, because natural=water has been supported for many years and any data consumer not supporting it has much bigger problems.
- I am wondering if the conversion will ever be done or is worth any effort. The old riverbank method was not ideal, but the improvement with the new method is so marginal - if any - and still leaves enough problems that I am not doing it; perhaps there will be some fresh ideas.
RicoZ (talk) 21:21, 1 June 2015 (UTC)

class:bicycle:roadcycling

You seem to have created the page, yet no-one has ever used it. I'm confused - was there a reason for it? --SomeoneElse (talk) 07:35, 24 August 2015 (UTC)
- I was editing the pages related to class:bicycle and may have created it as a redirect to documentation. Not sure why it displays that nobody uses it; most likely some stupid typo of mine, because this one gets at least some usage.
- Got it now, I think: the key name in the description box was wrong (an extra ":mtb:").
- I know it is questionable whether such controversial low-use keys should be documented, but given that some bicycle routers refuse to route over paths, it might be a good idea to add it to those to indicate whether they are somehow suitable for this or that type of bicycle. Using "surface" or access might not be sufficient in all cases. RicoZ (talk) 09:48, 26 August 2015 (UTC)
- Yes - not sure where the extra "mtb" came from. Do you know of anything that uses (e.g.) class:bicycle:roadcycling? It's still worth adding, of course, if it's the best way of expressing the concept; something else can come along and use it later.

man_made=tunnel

Hi RicoZ, some time ago you mentioned that it would be helpful to create a tag that is similar to man_made=bridge. I've started writing a proposal, see Proposed features/man made=tunnel. Please review and comment! --Biff (talk) 19:42, 18 May 2016 (UTC)

JOSM-Validator

Hi RicoZ, regarding : in my experience only the changed objects are automatically validated on upload. So it shows you potential errors which you may (well, not always) have introduced yourself. But if you validate all data right after downloading, then all objects are checked - not only the ones you are going to change. Maybe this should be written a bit more clearly? Do you see it that way too?
I don't consider validating right after downloading all that useful (unless you like going bug-hunting) - you can't remember where all the problems were reported anyway. Best regards --Aseerel4c26 (talk) 18:59, 31 May 2016 (UTC)
- You are right there; I thought JOSM always checks everything - it doesn't. Still, I very often have the impression of being shown errors that were already there ( ). So I think it certainly wouldn't hurt a beginner to run the validator before editing, to get an overview of the problems that are just waiting to clobber the unsuspecting newcomer. RicoZ (talk) 20:44, 31 May 2016 (UTC)
- Yes, as I said, I also think that JOSM only validates the changed objects on upload. That means that if you change an object which already contained a potential error, it then gets shown - even though you didn't introduce it yourself. You just had the bad luck to touch such an object which already had a potential error in it. It would be good if the validator automatically did a second check on upload, against the original version of the object, to see whether the potential error was already in there. Then it could show that as a hint next to the error message. I hope I have written this down correctly from memory.
- Oh, and only now did I click on your ticket link. You apparently see it the same way. As an experienced JOSM user it simply often bothers me on upload that I don't know whether I "messed something up" or whether it was already like that before. For example, I couldn't care less about some bus route roles when my mind is actually on cycle routes... especially when I didn't cause the potential error myself. Well, enough for today; maybe a great idea will come to me in the next few days...
- I also think the validator can be useful even for relatively new mappers, but rather in a dedicated error-fixing session than when you actually want to enter, say, a newly surveyed tracktype. That only distracts (and makes the changeset content more mixed / the changeset comment less fitting). :-) --Aseerel4c26 (talk) 21:54, 31 May 2016 (UTC)
- Of course, before touching an object you can select it and run it through the validator. Or everything in a small rectangular area where you are about to edit something. Or all objects of a certain type or user, and much more. Maybe a dedicated section in the guide on working with the validator, the error messages and so on would be worthwhile, especially since in my opinion it is currently much better than keepright in almost every respect. I would recommend that a new user at least "play around" with the validator once before starting to edit, though a new track could be an exception there, because hopefully you can't do that much wrong with it, and it can still be useful despite manifold errors. RicoZ (talk) 10:45, 1 June 2016 (UTC)
- I have just started writing this section.. and noticed something: when I select one or several objects and run Validate, many checks apparently are not performed? I can select two ways that cross "wrongly" and nothing is shown? RicoZ (talk) 13:40, 5 June 2016 (UTC)
- Solved the problem and got a new one at the same time: apparently my problem was that with the rectangle selection I had only selected the nodes, but not the ways they belong to. Now a really dumb question.. how does a would-be beginner select all objects in a rectangular area? RicoZ (talk) 13:52, 5 June 2016 (UTC)

Re: ES:Cave - new proposal to map the interiors of caves

Done Mike95 (talk) 12:38, 26 September 2016 (UTC)

bridges vs. keepright

Hi Rico, I can understand that you may be a bit annoyed about keepright, but please work your comments into the corresponding section (it is only about bridges, after all) in DE:Keep Right Users Guide. On Quality assurance I have already adjusted it a bit. --Aseerel4c26 (talk) 09:55, 4 October 2016 (UTC)
- Keepright is on a good path, but I consider this a really important bug in the basic functionality, and it should not disappear into some section that maybe one in twenty users reads. After all, 25% of bridges and many tunnels have no explicit layer, and keepright could generate wrong or highly misleading error messages for them. The displayed error message at first says nothing at all about tunnels/bridges, just something generic like "crossing ways", so in my opinion it definitely does not belong in the section about bridges/tunnels. Unfortunately I can't say exactly at the moment how many actually trigger this error message - it happens with the fortunately rather rare combination of a bridge without layer crossing a bridge with layer=1, and correspondingly for tunnels. RicoZ (talk) 10:34, 4 October 2016 (UTC)
- If the combination where the problems occur is that rare, then the problem isn't that big either. And besides, as with all displayed potential errors, they are of course only "potential" errors. Well, as far as I'm concerned it can stay up there for now, if it really matters that much to you. :-) --Aseerel4c26 (talk) 10:54, 4 October 2016 (UTC)
- There are currently around 580 thousand bridges without a layer tag, plus some tunnels. If the rare combination occurs in 1% of cases, it can still confuse enough users. It may be much rarer and I may simply have stumbled on such a case after five minutes by dumb luck, but I hope it won't persist forever anyway. RicoZ (talk) 11:16, 4 October 2016 (UTC)
http://wiki.openstreetmap.org/wiki/User_talk:RicoZ
Practice With Angular 2 Part III – Step Into Angular 2 - dam.sam.nang

If you have been following my Angular 2 series, you have already seen and checked the code and explanations one by one. So what is this article about? You will learn about:
- First steps into Angular 2
- Handling events in Angular 2

Before you read this article, please follow my previous article, "First step into Angular". To understand this article, you need to clone the source code from the project repository.

After you finish the clone, we can look at the project. In index.html we load all our dependencies and the CSS styles for our application. You can see <my-app></my-app>: it is the place where Angular renders our code, and my-app maps to app.component. Now we don't need to touch index.html any more; we just put everything inside app.component:

import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  template: `
    <header>
      <nav class="navbar navbar-inverse">
        <div class="navbar-header">
          <a href="/" class="navbar-brand">My Angular 2 app!</a>
        </div>
      </nav>
    </header>
    <div class="jumbotron">
      <h1>Welcome to Our App!</h1>
    </div>
    <footer class="text-center">CopyRight © 2018</footer>
  `,
  styles: [`
    .jumbotron {
      box-shadow: 0 2px 0 rgba(0, 0, 0, 0.2);
    }
  `]
})
export class AppComponent {}

We put the header in app.component because app.component is the Angular root component, and Angular can help us do application-level things, such as showing user login and logout and the different tasks that come up in an application.

In Angular 2, we can pass data from our class to our template very easily, since they are bound to each other. By defining a property on our class, we are able to access it from our template:

. . .
<div class="jumbotron">
  <h1>Welcome to Our App!</h1>
  <p>{{message}}</p>
</div>
<footer class="text-center">CopyRight © 2018</footer>
`,
styles: [`
  .jumbotron {
    box-shadow: 0 2px 0 rgba(0, 0, 0, 0.2);
  }
`]
})
export class AppComponent {
  message = 'Hello angular 2!';
}

When we reload our application again, you will see something like the images shown. Notice the usage: you don't need to declare and call a controller as in Angular 1.

Now we will add other stuff, like users:

. . .
<main>
  <div class="jumbotron">
    <h1>Welcome to Our App!</h1>
    <p>{{ message }}</p>
  </div>
  <h2>List users</h2>
  <div *ngIf="users">
    <div *ngFor="let user of users">
      <p>The user is {{user.name}} ({{user.username}})</p>
    </div>
  </div>
</main>
<footer class="text-center">CopyRight © 2018</footer>
`,
styles: [`
  .jumbotron {
    box-shadow: 0 2px 0 rgba(0, 0, 0, 0.2);
  }
`]
})
export class AppComponent {
  message = 'Hello angular 2!';
  users = [
    {id: 1, name: 'A', username: 'a'},
    {id: 2, name: 'B', username: 'b'},
    {id: 3, name: 'C', username: 'c'}
  ];
}

In this code, take a look at the array of user objects: we have 3 users in the list. With Angular 2 we use *ngFor to loop over the users array. You can check the displaying-data guide on the Angular website to understand these features in more detail.

Example: cleaning up the TypeScript build. We have already passed data from the class to the template. Now look at the app directory in our source code: it contains a lot of generated app.component.js files that we don't need to work on. We only work on the TypeScript files, not the JavaScript files. So our purpose is to move the JavaScript files generated from our TypeScript into a separate folder.
Now we open tsconfig.json, and we are going to move all the JavaScript files generated from TypeScript:

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noImplicitAny": false,
    "lib": ["es2015", "dom"],
    "outDir": "dist"
  }
}

Now we restart the npm server, and we can see in our project folder that a new folder called dist has been generated; all the JavaScript files compiled from TypeScript have moved into it. However, we also need to change the module loader's map, which you can see in systemjs.config.js:

// map tells the System loader where to look for things
var map = {
  'app': 'app', // 'dist',
  '@angular': 'node_modules/@angular',
  'angular2-in-memory-web-api': 'node_modules/angular2-in-memory-web-api',
  'rxjs': 'node_modules/rxjs'
};

We need to change the 'app' entry to load from dist instead of app, so the new systemjs.config.js becomes:

// map tells the System loader where to look for things
var map = {
  'app': 'dist', // 'dist',
  '@angular': 'node_modules/@angular',
  'angular2-in-memory-web-api': 'node_modules/angular2-in-memory-web-api',
  'rxjs': 'node_modules/rxjs'
};

Now we start cleaning up our markup in app.component.ts:

. . .
<main>
  <div class="row">
    <div class="col-sm-4">
      <h2>List users</h2>
      <div *ngIf="users">
        <ul class="list-group users-list">
          <li *ngFor="let user of users">The user is {{user.name}} ({{user.username}})</li>
        </ul>
      </div>
    </div>
    <div class="col-sm-8">
      <div class="jumbotron">
        <h1>Welcome to Our App!</h1>
        <p>{{ message }}</p>
      </div>
    </div>
  </div>
</main>
. . .

Your application view should now look like the image shown.

Handling Events in Angular

To continue, our purpose is this: when we click on each user name shown above, we want to show the profile or details of that user in the other content column.

To handle DOM events, we just need to wrap the event that we want to watch for with (). In app.component.ts, we need to add an activeUser property to record which user we clicked on:

. . .
<h2>List users</h2>
<div *ngIf="users">
  <ul class="list-group users-list">
    <li class="list-group-item" *ngFor="let user of users" (click)="selectUser(user)">
      The user is {{user.name}} ({{user.username}})
    </li>
  </ul>
</div>
. . .

export class AppComponent {
  message = 'Hello angular 2!';
  users = [
    {id: 1, name: 'A', username: 'a'},
    {id: 2, name: 'B', username: 'b'},
    {id: 3, name: 'C', username: 'c'}
  ];
  activeUser;

  selectUser(user) {
    this.activeUser = user;
  }
}

With Angular 2, we don't need ng-click as in Angular 1: Angular 2 supports the native HTML events, and you can use any HTML event by wrapping it inside (). Some common ones:

- click: the user clicks an HTML element.
- change: an HTML element has been changed.
- mouseover: the user moves the mouse over an HTML element.
- mouseout: the user moves the mouse away from an HTML element.
- keydown: the user presses a keyboard key.
- load: the browser has finished loading the page.

You can see more HTML DOM events at w3school.

Now let's go on to display the active user in the right content column. In app.component.ts:

. . .
<div class="jumbotron" *ngIf="activeUser">
  <h1>Welcome to Our App!</h1>
  <p>{{activeUser.name}} <small>{{activeUser.username}}</small></p>
</div>
. . .

Conclusion

In this article you learned how to clean up the generated JavaScript, how a component's template loads in Angular 2, and how to apply HTML events in your Angular 2 code. These are the basics you can use and apply in your own project.

Documents:
- HTML events
- Angular Displaying Data
- Template Syntax
- Source Code

Source: Viblo
https://laptrinhx.com/topic/40597/practice-with-angular-2-part-iii-step-into-angular-2
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

Create dynamic selection on many2one fields with onchange

This question is really close to this one :

My code so far, the xml:

<field name="ds_id" on_change="onchange_fill_fields(ds_id)"/>

the method:

def onchange_fill_fields(self, cr, uid, ids, ds, context=None):
    res = {'value': {}}
    ds_domain = []
    # model_name is the name of the model we have chosen ('res.partner', account, etc.)
    ds_domain.append(('model_id', '=', model_name))
    return {'value': res.get('value', {}), 'domain': {'field_name': ds_domain}}

The problem is that I want this function to change the selection on another class.

columns = {
    'first_obj': fields.one2many('second_obj', 'first_obj_id', string='models'),
    'ds_id': fields.many2one('ds_obj', string='Data ', required=True),
}

(the ds_id is another class, where there is a selection with models)

The other class:

columns = {
    'second_obj': fields.many2one('sec_obj', string='Camp'),
    'field_name': fields.many2one('ir.model.fields', 'Model Fields', domain=[('model_id', '=', ' ')]),
}

So if I pick res.partner (for example) from the selection (ds_id), I want to see the res.partner fields in a selection on the other class! Is this possible? If it were in the same class, it would be easy. But this is different. I know that I have to change the "domain" to be like this: ('model_id', '=', model_name). Any help?

About This Community
Odoo Training Center: Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and quizzes. Test it now
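The shape of the returned dictionary is the crux here: in the old OpenERP API, the 'domain' key of an onchange result maps target field names to domain lists. Below is a minimal sketch of that idea outside Odoo; the build_onchange_result helper and the 'model_id.model' field path are my assumptions for illustration, not code from the question:

```python
def build_onchange_result(model_name):
    """Sketch of what an old-API onchange should return so that
    'field_name' (a many2one to ir.model.fields) is filtered down
    to the fields of a single model."""
    res = {'value': {}}
    if not model_name:
        # Nothing selected yet: return no domain so the field stays unfiltered.
        return res
    # ir.model.fields records point at their model through model_id,
    # so filtering on the related model name narrows the selection.
    res['domain'] = {'field_name': [('model_id.model', '=', model_name)]}
    return res

# Picking res.partner in ds_id should yield a domain on field_name:
print(build_onchange_result('res.partner'))
```

Note that an onchange can generally only set domains for fields present on the same form view; for a field that lives on the one2many's child model, the domain usually has to be declared on the child's own view, so this sketch covers only the single-model half of the question.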
https://www.odoo.com/forum/help-1/question/create-dynamic-selection-on-many2one-fields-with-onchange-87572
On 01/13/2012 08:39 PM, Turquette, Mike wrote:
> On Fri, Jan 13, 2012 at 8:18 PM, Saravana Kannan <[email protected]> wrote:
>> On 12/17/2011 03:04 AM, Russell King - ARM Linux wrote:
>>> On Fri, Dec 16, 2011 at 04:45:48PM -0800, Turquette, Mike wrote:
>>>> On Wed, Dec 14, 2011 at 5:18 AM, Thomas Gleixner <[email protected]> wrote:
>>>>> that. Shouldn't it either return (including bumping the prepare_count
>>>>> again) or call clk_disable() ?
>>> No.
>> I agree with Russell's suggestion. This is what I'm trying to do with the
>> MSM platform. Not sure if I'm too optimistic, but as of today, I'm still
>> optimistic I can push the MSM driver devs to get this done before we enable
>> real prepare/unprepare support.
> Just to reach closure on this topic: I don't plan to change
> __clk_unprepare in the next version of the patches. The warnings are
> doing a fine job of catching code which has yet to be properly
> converted to use clk_(un)prepare.

To be fair, I also have to improve the stub clk_prepare/unprepare to maintain ref counts and do refcount checking before I plan to cut over to the real prepare/unprepare implementations. So, I'm guessing Mike is just trying to partly add that support in this patch series.

My goal is to have MSM converted fully before switching to this. So, this code that we are debating about won't directly impact MSM. For that reason, I won't be trying to hold off the more important common clock framework due to unconventional error handling.

Thanks,
Saravana

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
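Saravana's point about the stub clk_prepare/unprepare needing to "maintain ref counts and do refcount checking" can be illustrated with a small toy model. This is not the Linux clk API; ToyClk and its warning text are invented, and the sketch only shows why an unbalanced unprepare should warn instead of silently underflowing the count:

```python
import warnings

class ToyClk:
    """Toy model of prepare/unprepare refcounting (not the kernel clk API)."""
    def __init__(self, name):
        self.name = name
        self.prepare_count = 0

    def prepare(self):
        # Each consumer that prepares the clock bumps the count.
        self.prepare_count += 1

    def unprepare(self):
        if self.prepare_count == 0:
            # Analogous to the framework's warning: the caller was never
            # balanced against a prepare, so refuse to go negative.
            warnings.warn(f"{self.name}: unbalanced unprepare")
            return
        self.prepare_count -= 1

clk = ToyClk("uart0")
clk.prepare()
clk.unprepare()
clk.unprepare()  # emits the unbalanced-unprepare warning
```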
http://lkml.org/lkml/2012/1/13/396
WinSock and more: Open Source for your Windows Store apps

Today I want to follow up with some exciting news about the availability of WinSock and several associated open source libraries that help deliver on the cross-platform technology promise. We have heard from many of you about the desire to reuse existing cross-platform native open source libraries and infrastructure in your Windows Store and Phone apps. A few weeks ago, we announced Windows Store and Phone support for CMake (and so did Kitware), and have also been investigating other popular libraries. As we started working on these libraries, we saw that many of them require native socket programming support for Windows Store apps.

Previously, apps using WinSock APIs were blocked at certification by the Windows Store. We're happy to announce that starting with the release of Visual Studio 2013 Update 3 (download) in August, the Windows App Certification Kit (WACK) allows use of WinSock APIs in your Windows Store apps. And since WinSock was already allowed to pass certification on Windows Phone, this work completes the story. Now WinSock is available universally across Windows 8.1, Windows Phone 8.1, and universal apps.

This enables several great WinSock-dependent libraries for Windows Store:

Libwebsockets is a lightweight pure C library built to use minimal CPU and memory resources and provide fast throughput in both directions. NuGet packages for Windows Phone are available here. MS Open Tech will release libwebsockets packages for Windows Store apps as well.

libcURL is a free, open source client to get documents/files from servers using a variety of supported protocols. NuGet packages for Windows Phone are available here. MS Open Tech will release libcURL packages for Windows Store apps as well.

OpenSSL is a popular toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library.
We have made contributions to the OpenSSL library to make it work for Windows Phone and Windows Store apps. You can find our fork of OpenSSL that supports Windows Phone and Store apps here. Please note that we are working with the OpenSSL Foundation team to get our pull request reviewed and accepted into the OpenSSL repository.

While OpenSSL enables you to re-use your existing cross-platform code, you can also use the Windows Security and Cryptography APIs (namespace: Windows.Security.Cryptography*), which provide secured communication features natively in WinRT, when you decide to rewrite your code or build new Windows apps.

We hope you find this new support useful as you're building your apps. Are there other libraries or functionality that you need? Let us know in the comments.

Updated May 8, 2018 12:47 pm

Join the conversation

Hi! While it's not about networking, I think it could be interesting for Windows Store devs. I have created a Windows Runtime API over libFLAC, it's open source: By the way, I hope you guys are working to improve Windows Runtime and make it the first-class programming platform in Windows 10. It deserves it.

Great news Kevin. Will this allow ICMP packets or are we still locked out of that functionality for Windows Store apps? I have several IT admin app ideas that could be built but cannot because of these API limitations.

OpenAL – I wrote an xaudio2 system for our game to run on WinRT (on the plane to Hawaii, which makes a great office). The issue isn't the openal core, it's that it references things like getenv() and the registry. Also just a pain to figure out how to configure.

Facebook – WebAuthenticationBroker only returns short-lived tokens. SilentMode is broken as it appears to open the 'already authz'ed' webpage in the background then proceeds to timeout waiting for 'okay' to be clicked.

Lots of basic threading & os-level C & Win32 functions.
Things like Shawn's library are a start () but there's just a looong list of functions that aren't defined when compiling for WinRT that could often be defined in some manner, albeit sometimes non-optimally. Each API excluded makes sense, but the overall result is a lot of work for every developer trying to port or support xplat apps. There should have been an attempt to create a mock/shim for each API removed. I have a lot of things like this in our win32/osx/ios/android/winrt/winphone/linux codebase:

#define CreateSemaphore(attributes, count, maxCount, name) \
    CreateSemaphoreEx(attributes, count, maxCount, name, 0, SEMAPHORE_ALL_ACCESS)

#define getenv(env) \
    (NULL);(env)

(In case you want to learn more, I'm Seattle local & ex-MSFT. Khouzam knows how to contact me from our discussions on cmakems.)

Based on new compilation errors it looks like Update 4 added quite a bit of this. I see nothing in the Release Notes or anywhere else. But I've got a lot of the following, which I welcome:

2> C:\Program Files (x86)\Windows Kits\8.1\Include\um\processthreadsapi.h(209) : see declaration of 'CreateThread'
2> C:\Program Files (x86)\Windows Kits\8.1\Include\um\processthreadsapi.h(408) : see declaration of 'TlsAlloc'

Is there any update on libcurl and openssl support?

Hello, I would like to integrate some libraries I developed with libcURL into Windows 8.1 store apps. Is libcURL supported in Windows 8.1 for store apps? Thanks in advance.

I downloaded Microsoft's OSG fork of OpenSSL as per the link below, ran ms\do_vsprojects.bat, and opened the project vsout\openssl.sln. But there is no code file MainPage.xaml; the main logic of calling the SSL socket is in that file. Which project version will give me the code file? Please, anyone help me!

Will you release the libcurl libraries for WinRT as well (as stated in the text above)? Did they accept the pull request?

Hi, I just want to know whether the multicast functionalities of Winsock are supported on Windows Store & Phone.
This page says that the multicasting-related struct is only supported in desktop apps.
https://blogs.windows.com/buildingapps/2014/10/13/winsock-and-more-open-source-for-your-windows-store-apps/
Tim Peters wrote:
> (snip)
> As is, the package name used by a release is part of its published
> interface. You can't change it without causing pain, any more than you can
> make incompatible changes to public class methods or input-output behavior.
> In return, package clients are uniform, simple and portable, making life
> easiest for the people who know least. The burden is on package authors to
> choose names wisely, and that's where the burden should be.

Not all packages are part of the external interface. In fact, all Zope names are essentially internal, since Zope is an application. The issue is not so much access from outside as it is access between packages within Zope.

Further, the current support for relative imports allows a package to be moved into another package without breaking the public interface wrt the containing package.

Here's an example that I hope will be motivating:

Suppose Marc-Andre has a package mx with subpackages DateTime and stringtools. If mx was installed in the Python path, then a module in the mx.DateTime package could get at stringtools like:

  import mx.stringtools

So far, so good. Now suppose mx is installed as a sub-package of a NiftyDB package. Because relative imports are allowed in the current import scheme, modules inside mx can use mx as usual, and a NiftyDB module can import DateTime as follows:

  import mx.DateTime

So even though mx is installed as a sub-package, the public interface is unchanged, at least wrt the containing package. Unfortunately, the internal import of stringtools in the DateTime package:

  import mx.stringtools

will fail, because mx is no longer a top-level module..
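The breakage described here is easy to reproduce in a modern Python 3 interpreter, where implicit relative imports are gone entirely. Everything below except the mx/DateTime/stringtools names is invented for illustration (the vendor package plays the role of the NiftyDB container):

```python
import os
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()

def write(path, body=""):
    full = os.path.join(root, path)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    with open(full, "w") as f:
        f.write(textwrap.dedent(body))

# mx installed as a SUB-package of 'vendor', as in the example above.
write("vendor/__init__.py")
write("vendor/mx/__init__.py")
write("vendor/mx/stringtools.py", "def shout(s):\n    return s.upper()\n")
# DateTime's internal absolute import assumes mx is top-level:
write("vendor/mx/DateTime.py", "import mx.stringtools\n")
# The relocatable alternative: an explicit relative import.
write("vendor/mx/DateTime2.py", "from . import stringtools\n")

sys.path.insert(0, root)

try:
    import vendor.mx.DateTime      # executes 'import mx.stringtools'
    broke = False
except ImportError:
    broke = True                   # no top-level 'mx' exists any more

import vendor.mx.DateTime2         # the relative form still works
print(broke, vendor.mx.DateTime2.stringtools.shout("ok"))  # prints: True OK
```

The explicit relative form (from . import stringtools) is what eventually became the standard fix: it keeps working no matter where the mx package is relocated.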
https://mail.python.org/pipermail/python-dev/1999-September/000855.html
(Based on the 2013 Bar Syllabus and Updated with the Recent BIR Issuances and the Latest Supreme Court and CTA Jurisprudence as of January 31, 2013)

This is the second installment of my two-part reviewer on taxation. It covers 8 topics, namely: (1) Estate Tax; (2) Donor's Tax; (3) Tax Remedies; (4) Organization and Functions of the BIR; (5) Local Government Taxation; (6) Real Property Taxation; (7) Tariff and Customs Code; and (8) Judicial Remedies (CTA). It is a consolidated and updated version of my reviewers in Tax 2 and Taxation Law Review.

This reviewer is based on notes from Atty. Montero and Assoc. Dean Gruba and the books and reviewers of Atty. Mamalateo and Atty. Domondon. I also added some material from Atty. Mickey Ingles's reviewer and Justice Dimaampao. For the transfer taxes, I added material from Starr Weigand's notes. References have also been made to the 2013 Bedan Red Book and the 2012 UP Tax Reviewer. Further, I added the recent and relevant revenue regulations and other BIR issuances (especially those issued in 2012) and the latest SC and CTA jurisprudence (as of January 31, 2013). Most of the digests were sourced from Du Baladad and Associates (BDB Law) and from Baniqued & Baniqued.

The reviewer will make reference to codal provisions. Thus, I recommend that you read this with a copy of the NIRC and Other Laws Codal (2012 edition) by Atty. Sacadalan-Casasola. Possessors may reproduce and distribute my reviewer provided my name remains clearly associated with my work and no alterations in the form and content of my reviewer are made. No stamping please. May this reviewer prove useful to you. If it does, please share it with others. Happy studying!

---------------------------------------------------------------------------
Note: Before we discuss Estate Tax, let us discuss the concept of Transfer Taxes.

TABLE OF CONTENTS
---------------------------------------------------------------------------
II. NIRC
B. Estate Tax .................................................
2
C. Donor's Tax ............................................. 18
D. Value-Added Tax .................................... 25
E. Tax Remedies ......................................... 59
F. Organization and Function of the Bureau of Internal Revenue ................................... 100
III. Local Government Code
A. Local Government Taxation ................ 104
B. Real Property Taxation ........................ 120
IV. Tariff and Customs Code ......................... 137
V. Judicial Remedies (CTA) ......................... 152
---------------------------------------------------------------------------

Estate tax vs. donor's tax:
- Estate tax: the maximum tax rate is 20% on net estates exceeding Php 10 million, and the first Php 200,000 is exempt. Estate tax is computed on the basis of the net estate transferred at the time of the death of the decedent.
- Donor's tax: the maximum tax rate is 15% on net gifts exceeding Php 10 million, and the first Php 100,000 is tax exempt. Donor's tax is computed on the basis of net gifts given during a calendar year.

Q: Compare and contrast donation mortis causa and donation inter vivos.
Both are transfers without onerous consideration.
- Mortis causa: takes effect upon the death of the transferor; ownership will pass only upon death; subject to estate tax.
- Inter vivos: takes effect during the lifetime of the transferor; ownership will pass during the donor's lifetime; subject to donor's tax.

Q: Is the accrual of the estate tax distinct from the obligation to pay the same?
Yes. The accrual of the tax is distinct from the obligation to pay the same. Upon the death of the decedent, succession takes place and the right of the State to tax the privilege to transmit the estate vests instantly upon death (see RR 02-2003 [December 16, 2002]). Generally, the estate tax is paid at the time the estate tax return is filed by the executor, administrator or the heirs.
The period to file an estate tax return is within six months from the death of the decedent, except in meritorious cases where an extension not exceeding 30 days is granted. (see Section 90, Tax Code)

---------------------------------------------------------
B. ESTATE TAX
---------------------------------------------------------
--------------------------------------------------------------
1. Basic Principles
--------------------------------------------------------------
Q: What transfer is subject to estate tax?
The transfer of the net estate of every decedent, whether resident or non-resident, is subject to estate tax.

Q: A died. He left a will which provided that all real estate shall not be sold or disposed of within 10 years after his death, and when such period lapses, the property shall be given to B. (1) When does the estate tax accrue?
The estate tax accrues as of the death of the decedent.

Q: Based on the same facts as stated above, B contended that the inheritance tax should be based on the value of the estate at the lapse of the 10-year period. Is B's contention correct?
Page 2 of 164 Last Updated: 30 July 2013 (v3)
It is the only method of collecting the share which is properly due to the State as a partner in the accumulation of property which was made possible on account of the protection given by the State Statepartnership theory Ability to Theory --------------------------------------------------------------5. Time and transfer of properties --------------------------------------------------------------Q: When are properties transferred to successors? and rights --------------------------------------------------------------4. Purpose or object --------------------------------------------------------------Q: What are the purposes for imposing the estate tax? The generally accepted purposes for imposing the estate tax are as follows: 1. To generate additional revenue for the government 2. To reduce the concentration of wealth PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013 The properties and rights are transferred to the successors at the time of death of the decedent ( Art. 777, NCC). However, despite the transfer of properties and rights at the time of death, the executor or administrator shall not deliver a distributive share to any party interested in the estate unless there is a certification from the CIR that estate tax has been paid. (see Section 94, Tax Code) Note: In the determination of the estate tax, you should note 4 things: (1) The classification of the decedent based on nationality and/or domicile (2) The nature and the location of the assets (3) The computation and valuation of the assets (which includes deductions) and (4) Rates. 8. Determination of gross estate and net estate --------------------------------------------------------------Read Section 85, 1 Q: How is gross estate determined? Decedent Determination estate of gross --------------------------------------------------------------6. Classification of decedent --------------------------------------------------------------Q: Who are the taxpayers liable to pay estate tax? 1. 2. 3. 4. 
Resident citizens Non-resident citizens Resident alien Non-resident alien All properties, real or personal, tangible or intangible, wherever situated, plus items includible in gross estate Only those properties situated in the Philippines provided that with respect to intangible personal property, its inclusion in the gross estate is subject to the rule of reciprocity under Section 104 of the Tax Code Non-Resident Alien Note: Only natural persons can be held liable for estate tax. A corporation cannot be liable for the obvious reason that they cannot die (naturally speaking). --------------------------------------------------------------7. Gross estate vis--vis net estate --------------------------------------------------------------Q: Distinguish Gross Estate from Net Estate Net Estate The value of the gross estate less the ordinary and special deductions (see Section 86, Tax Code) Q: What is the rule in determining the situs of intangible personal property for estate tax purposes? As a general rule, we apply the principle of res mobilia sequuntur personam (chattels follow the person). In other words, the intangible property is taxed based on the domicile of the owner. However, SECTION 104 provides that certain intangibles be deemed located in the Philippines, namely: 1. Franchises being exercised in the Philippines 2. Shares, obligations, or bonds issued by domestic corporations, or partnerships, business or industry located in the Philippines 3. Shares, obligations or bonds issued by foreign corporations --------------------------------------------------------------- a. at least 85% of the business of which is located in the Philippines; or b. which have acquired situs in the Philippines 4. All intangibles owned by residents Non-Resident Alien Net estate is equal to gross estate less ordinary deductions and exclusions allowed by law Note: Non-resident alien decedent cannot avail of special deductions. 
Q: What is meant by reciprocity as applied to intangibles of a non-resident alien for estate tax purposes? As provided in Section 104, not residing in that foreign country. --------------------------------------------------------------9. Composition of gross estate --------------------------------------------------------------Read Section 85, 1 and Section 104, Tax Code Q: What does the gross estate of a decedent consist of? Decedent Composition estate of gross 1. Real property within and without the Philippines 2. Tangible personal property within and without the Philippines 3. Intangible personal property within and without the Philippines 1. Real property within the Philippines 2. Tangible personal property within the Philippines 3. Intangible personal property within the Philippines unless there is reciprocity in which case it is not taxable Non-Resident Alien Note: In sum, all assets, real or personal, tangible or intangible wherever located of a citizen and resident alien is subject to estate tax while for nonresident aliens, estate tax is imposed only on properties within the Philippines provided in the case of intangible personal property, it is subject to the rule of reciprocity under Section 104 of the Tax Code. Q: For purposes of estate taxation, how is the fair market value of the following properties determined? Real Property Fair market value determined by: 1. the CIR (zonal value) or 2. that shown in the schedule of values fixed by Provincial and City Assessors, whichever is higher If unlisted: 1. Unlisted common shares are valued based on their book value 2. Unlisted preferred shares are valued at par value. If listed: The fair market value shall be the arithmetic mean between the highest and lowest quotation at a date nearest the date of death, if none is available on the date of death itself. The probable life of the beneficiary in accordance with the latest basic standard mortality table shall be taken into account 1. 
The construction cost per building permit or 2. FMV per latest tax declaration b. Transfers in contemplation of death c. Revocable transfers d. Property under general power of appointment e. Proceeds of a life insurance taken out by the decedent upon his own life where the beneficiary is the estate, his executor or administrator irrespective of whether or not insured retained power of revocation or any beneficiary designated as recovable f. Transfers for insufficient consideration Note: These are considered substitutes for testamentary dispositions. Although inter vivos in form, they are mortis causa in substance. Note that in all these transfers, if they were made for a bona fide consideration, they shall not form part of the gross estate. Decedents Interest Read Section 85(A) Q: What include? does the decedents interest Shares Stock of It includes any interest having value or capable of being valued, transferred by the decedent at his death Transfer in contemplation of death Read Section 85(B) Q: When is a transfer considered one made in contemplation of death? A transfer is considered made in contemplation of death when the impelling motive or reason for the transfer is the thought of death, regardless of whether the transferor is near the possibility of death or not. Note: The presumption that transfers made within three years before death are made in contemplation of death as provided under PD 1705 is no longer applicable. --------------------------------------------------------------10. Items to be included in gross estate --------------------------------------------------------------Q: What items/transfers should be included in the gross estate? a. Decedents interest at the time of death PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013 Q: What factors should be considered in determining whether a transfer was made in contemplation of death? One should consider the following: 1. The type of heir (whether compulsory or voluntary) 2. 
The timing of the transfer
3. Other special factors
Page 6 of 164 Last Updated: 30 July 2013 (v3)
Q: What is the relevance of the type of heir in determining if the transfer was made in contemplation of death?
When a donation inter vivos is made to a person who is not a forced heir, the presumption is that the transfer is a donation inter vivos. However, if the recipient of the property is a forced heir, the presumption is that the transfer was made to accelerate the inheritance and, hence, is mortis causa. This presumption may be rebutted by evidence to the contrary. (see VIDAL DE ROCES V. POSADAS [MARCH 13, 1933])
Q: A donated parcels of land to X, Y and Z. A died without any forced heirs but, in her will, bequeathed personal property to X, Y and Z. The CIR contends that such transfers should form part of the gross estate for purposes of estate taxation. Is the CIR correct?
No. The donation inter vivos was made to legatees who are not forced heirs. Thus, absent any evidence to the contrary, the presumption holds that the transfer is a donation inter vivos. Such being the case, the transfer shall not form part of the gross estate. (see TUASON V. POSADAS [JANUARY 23, 1930])
Q: Name some instances/factors which would disprove the claim that the transfer was made in contemplation of death.
When the reason for the transfer was the desire of the decedent to:
1. See his children enjoy the property while the donor is still alive;
2. Save income or property taxes;
3. Settle family disputes;
4. Relieve the donor from administrative burden;
5. Reward services rendered; or
6. Provide independent income for dependents.
In GESTOPA V. CA [OCTOBER 5, 2000], the Supreme Court enumerated some indications that the transfer was a donation inter vivos, to wit:
1. The property was donated out of love and affection;
2. A reservation on the donation is made only with respect to the right of usufruct, which denotes that naked ownership was already transferred;
3.
The transferors retained sufficient property only for the purpose of maintaining their status in life, thereby implying that it was all right to part with the property even during the transferors' lifetime;
4. The donee accepted the donation, since in a donation mortis causa acceptance is not required.
Q: Using the same facts above, it was determined that the transfer was made three months before the donor's death. Will the transfer form part of the gross estate?
Yes. In VIDAL DE ROCES V. POSADAS [MARCH 13, 1933], the decedent died without forced heirs but instituted a certain person as a legatee in his will. The presumption that the transfer was a donation inter vivos did not hold because of the timing of the transfer, which was a short period before death.
Q: Prior to his death, A gave his son B a parcel of land through a deed of donation. Upon A's death, the CIR contends that the transfer should form part of the gross estate for purposes of estate taxation. Is the CIR correct?
Yes. Since the recipient of the property, the son, is a forced heir, the presumption is that the transfer was made in contemplation of death. Thus, the transfer should form part of the gross estate. (see DIZON V. POSADAS [NOVEMBER 4, 1933])
Q: During his lifetime, Father Z donated some of his property to A, B and C on the condition that they provide him rice and money every year. Father Z died. The CIR contends that the transfers should form part of the gross estate of Father Z. Is the CIR correct?
No. In donations inter vivos, as in the present case, the donees acquired the right to the property while the donor was still alive, subject only to their acceptance and the condition that they pay the donor rice and/or money. (see ZAPANTA V. POSADAS [DECEMBER 29, 1928])
Proceeds of Life Insurance
Read Section 85(E)
Q: When shall the proceeds of life insurance on the life of the decedent form part of his gross estate?
They shall form part of the gross estate if the beneficiary is:
1. The estate of the deceased, his executor or administrator, irrespective of whether the insured retained the power of revocation; or
2. Any beneficiary (third person) designated in the policy as revocable.
Note: (1) If the policy expressly stipulates that the designation of the beneficiary is irrevocable, the proceeds shall not be included in the gross estate. (2) The designation is revocable when the beneficiary may still be changed and the decedent retained an interest in the policy; it is irrevocable when the beneficiary may no longer be changed, the beneficiary having acquired a vested interest. For third persons whose designations are irrevocable, the proceeds of life insurance shall not form part of the gross estate; if the designation is revocable, they shall.
General Power of Appointment
Q: Differentiate the estate tax treatment of property passing under a general power of appointment and one under a special power of appointment.
General — the donor gives the donee the power to appoint any person as successor to enjoy the property. The property passing under the power forms part of the donee-decedent's gross estate.
Special — the donor gives the donee the power to appoint only a person within a limited group or class to succeed in the enjoyment of the property. The property does not form part of the donee-decedent's gross estate.
Transfers for Insufficient Consideration
Read Section 85(G)
Q: What are transfers for insufficient consideration?
Transfers for insufficient consideration are those transfers that are not bona fide sales of property for an adequate and full consideration in money or money's worth. The excess of the fair market value at the time of death over the value of the consideration received by the decedent shall form part of his gross estate.
Note: (1) The rule on transfers for insufficient consideration applies to (a) transfers in contemplation of death, (b) revocable transfers, and (c) transfers under a general power of appointment. (2) In determining whether there was sufficient consideration, compare the FMV of the property at the time of the transfer with the consideration received at the time of the transfer. However, the amount to be included in the gross estate is the difference between the FMV of the property at the time of death and the consideration received at the time of the transfer. As a numerical example, compare: FMV at time of transfer; FMV at time of death; consideration received at time of transfer; amount included in estate.
Example 1: Since the property was sold for 30 less than its FMV at the time of the transfer, there is insufficient consideration. Hence, the difference between the consideration received and the FMV at the time of death shall form part of the gross estate.
Example 2: This is not a transfer for insufficient consideration; hence, it shall not form part of the gross estate. It is a bona fide sale for an adequate and full consideration in money's worth.
Expenses, losses, indebtedness, taxes, etc. (ELIT)
Read Section 86(A)(1)
Funeral expenses
Q: What are the conditions for the deductibility of funeral expenses?
1. Whether paid or unpaid;
2. Incurred up to the time of interment;
3. Deductible in the actual amount or in an amount equal to 5% of the gross estate, whichever is lower, but in no case to exceed P200,000.
Note: (1) Actual funeral expenses are those actually incurred in connection with the interment or burial of the deceased. The expenses must be duly supported by receipts, invoices or other evidence to show that they were actually incurred. (2) The amount in excess of the P200,000 threshold shall not be allowed as a deduction, nor may it be claimed as a deduction under claims against the estate.
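The two computations just described — the cap on deductible funeral expenses and the amount includible for a transfer for insufficient consideration — can be sketched in a few lines. The function names and sample figures below are illustrative, not from the reviewer:

```python
def funeral_expense_deduction(actual: float, gross_estate: float) -> float:
    """Deductible funeral expense: the actual amount or 5% of the gross
    estate, whichever is lower, but in no case to exceed P200,000."""
    return min(actual, 0.05 * gross_estate, 200_000)

def insufficient_consideration_inclusion(fmv_at_transfer: float,
                                         fmv_at_death: float,
                                         consideration: float) -> float:
    """Sec. 85(G): sufficiency of the consideration is tested against the
    FMV at the time of the transfer, but the amount included in the gross
    estate is the FMV at death less the consideration received."""
    if consideration >= fmv_at_transfer:
        return 0.0  # bona fide sale for full consideration: nothing included
    return max(fmv_at_death - consideration, 0.0)

# A P500,000 funeral bill against a P20,000,000 gross estate is capped at P200,000.
print(funeral_expense_deduction(500_000, 20_000_000))      # 200000
# Property worth 100 at transfer, sold for 70, worth 120 at death: include 50.
print(insufficient_consideration_inclusion(100, 120, 70))  # 50
```

The second function mirrors the rule that sufficiency is judged at transfer time while the includible amount is measured at death.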
--------------------------------------------------------------
11. Deductions from estate
--------------------------------------------------------------
Q: Enumerate the deductions from the gross estate.
1. Ordinary deductions
a. Expenses, losses, indebtedness, taxes, etc. (ELIT)
b. Transfers for public use
c. Vanishing deduction (property previously taxed)
2. Special deductions (FSMA)
a. Family home
b. Standard deduction
c. Medical expenses
d. Amount received by heir under RA 4917
3. Net share of the surviving spouse in the conjugal/community property
Q: A died leaving an estate valued at P20,000,000. His heirs spent P500,000 for all the funeral services. How much should be allowed as a deduction?
Only P200,000. To determine the amount deductible, compare P500,000 (the actual expense) with P1,000,000 (5% of P20 million); the lower amount is P500,000. However, it exceeds the P200,000 threshold, so only P200,000 will be allowed as a deduction.
Deductible funeral expenses under RR 2-2003 [DECEMBER 16, 2002] include, among others:
5. The cost of the burial plot, tombstones, monument or mausoleum, but not their upkeep (in case the deceased owns a family estate or several burial lots, only the value corresponding to the plot where he is buried is deductible);
6. Interment and/or cremation fees and charges; and
7. All other expenses incurred for the performance of the rites and ceremonies incident to interment.
Judicial expenses
Q: What are the requisites for the deductibility of judicial expenses?
To be deductible, judicial expenses:
1. Must be incurred during the settlement of the estate but not beyond the last day prescribed by law for the filing of the estate tax return (within 6 months from the date of death of the decedent) or the extension thereof (in meritorious cases, the CIR may grant a reasonable extension not exceeding 30 days); and
2. Must be incurred in:
a. The inventory-taking of the assets comprising the gross estate;
b. Administration;
c. Payment of the debts of the estate; or
d. The distribution of the estate among the heirs. (RR 2-2003)
Note: Expenses incurred for the commemoration of the death anniversary were not allowed as deductions, as they had nothing to do with the administration of the estate.
Claims against the estate
Q: What are claims against the estate?
These are debts or demands of a pecuniary nature which could have been enforced against the deceased in his lifetime and could have been reduced to simple money judgments. They may arise out of:
1. Contract;
2. Tort; or
3. Operation of law.
Q: What are the requisites for the deductibility of claims against the estate?
1. It must be a personal obligation of the deceased existing at the time of his death, except those incurred incident to his death or medical expenses;
2. The liability must have been contracted in good faith;
3. The claim must be a debt or claim which is valid in law and enforceable in court; and
4. The indebtedness must not have been condoned by the creditor, and the action to collect from the decedent must not have prescribed.
Q: There were claims against the estate of the deceased which allegedly exceeded the gross estate, resulting in the administrator reporting no estate tax liability. The BIR contested the amounts of the claims, stating that lower amounts were paid as compromise payments during the settlement of the estate and that these amounts should be the ones considered in arriving at the net estate. Will the compromise amounts be the amounts considered as deductions from the gross estate?
No. The deduction allowable is the amount determined at the time of death. Post-death developments are not material in determining the amount of the deduction, especially for the claims against the estate deduction. There is no law, nor any legislative intent in our tax laws, which disregards the date-of-death valuation principle, which is the US rule on deductions. The amount deductible is the debt which could have been enforced against the deceased in his lifetime, nothing more and nothing less. (DIZON V. CIR [APRIL 30, 2008])
Note: In sum, post-death developments should not be considered in determining the net value of the estate.
Claims against insolvent persons
Q: What are the requisites for claims against insolvent persons to be deductible?
1.
The amount has been initially included as part of the gross estate; and
2. The incapacity of the debtors to pay their obligations is proven, not merely alleged.
Losses
Q: What are the requisites for losses to be deductible from the gross estate?
Losses are deductible if they:
1. Were incurred during the settlement of the estate;
2. Arose from fires, storms, shipwreck or other casualties, or from robbery, theft or embezzlement;
3. Are not compensable;
4. Are not claimed as a deduction for income tax purposes; and
5. Were incurred not later than the last day for the payment of the estate tax.
Vanishing Deduction
Read Section 86(A)(2), Tax Code
Q: What is a vanishing deduction?
A vanishing deduction is a deduction allowed on property left behind by the decedent which he had previously acquired by inheritance or donation.
Note: The rationale is to minimize the effects of double taxation on the same property within a short period of time; hence, the law allows a deduction to be claimed on the said property.
Q: Are claims for taxes against the estate not filed in time barred forever?
No. As a general rule, claims must be presented within the period fixed in the notice; otherwise, they are barred forever. However, as an exception, taxes assessed against the estate of a deceased person need not be submitted to the committee on claims in the ordinary course of administration. They may be collected even after the distribution of the decedent's estate among his heirs, who shall be liable therefor in proportion to their share in the inheritance. (VERA V. FERNANDEZ [MARCH 30, 1979])
Q: What are the conditions for the deductibility of property previously taxed (vanishing deduction)?
1. Death — the present decedent died within five (5) years from the transfer of the property from the prior decedent or donor;
2. Identity of property — the property with respect to which the deduction is sought can be identified as the one received from the prior decedent;
3. Inclusion of the property — the property must have formed part of the gross estate situated in the Philippines of the prior decedent or was a taxable gift of the donor;
4.
Previous taxation of the property — the estate tax or donor's tax due thereon must have been paid; and
5. No vanishing deduction on the property was allowed to the estate of the prior decedent.
Q: How is the vanishing deduction computed?
1. Determine the FMV of the property previously taxed at the time of the prior decedent's death (or the donation) and its FMV at the time of the present decedent's death, and take the lower of the two amounts (the initial value);
2. Deduct any mortgage or lien on the property paid by the present decedent to arrive at the initial basis;
3. Prorate: deduct from the initial basis the proportion of the ELIT deductions and transfers for public use that the initial basis bears to the gross estate, to arrive at the final basis;
4. Apply the vanishing percentage to the final basis: 100% if the prior transfer was within 1 year before the present decedent's death; 80% if more than 1 year but not more than 2; 60% if more than 2 but not more than 3; 40% if more than 3 but not more than 4; and 20% if more than 4 but not more than 5 years.
Family home
Read Section 86(A)(4), Tax Code
Q: What are the requisites for the deductibility of the family home?
1. The family home must be the actual residential home of the decedent and his family at the time of his death, as certified by the barangay captain of the locality;
2. The total value of the family home must be included as part of the gross estate;
3. The allowable deduction must be in an amount equivalent to (a) the current FMV of the family home as declared or included in the gross estate, or (b) the extent of the decedent's interest (whether conjugal/community or exclusive), whichever is lower; and
4. The deduction must not exceed P1,000,000.
Medical expenses
Read Section 86(A)(6)
Q: What are the requisites for the deductibility of medical expenses?
1. The expenses were incurred by the decedent within one (1) year prior to his death;
2. The expenses are duly substantiated with receipts; and
3. The deductible amount shall not exceed P500,000.
Note: The amounts of medical expenses incurred in excess of P500,000 shall no longer be allowed as a deduction. Neither can any unpaid amount in excess of the P500,000 threshold, nor any unpaid amount for medical expenses incurred prior to the one-year period from the date of death, be allowed as a deduction from the gross estate as a claim against the estate. (see Section 6, RR 2-2003)
Transfers for Public Use
Read Section 86(A)(3), Tax Code
Q: What transfers for public use are allowed as deductions?
The deduction for transfers for public use refers to the amount of all bequests, legacies, devises or transfers to or for the use of the Government or any political subdivision thereof, for exclusively public purposes.
Amount received by heir under RA 4917
Read Section 86(A)(7), Tax Code
Q: Discuss the deductibility of amounts received by heirs under RA 4917.
Amounts received from the decedent's employer as a consequence of the death of the decedent-employee as retirement benefits under RA 4917 (An Act Providing that Retirement Benefits of Employees of Private Firms shall not be Subject to Attachment, Levy, Execution, or any Tax whatsoever) are allowed as a deduction, provided that the amount of the benefit is included in the gross estate.
Net share of the Surviving Spouse
Read Section 86(C), Tax Code
Deductions allowed to Non-Resident Estates
Read Section 86(B) to (D), Tax Code
For a non-resident alien decedent, the gross estate includes only that part of the gross estate located in the Philippines.
--------------------------------------------------------------
12. Exclusions from estate
--------------------------------------------------------------
Q: What are the exclusions from the gross estate?
1. The capital (exclusive property) of the surviving spouse, which is considered an exclusion from the gross estate under Section 85(H) of the Tax Code.
Note: Under Section 86(C), the share of the surviving spouse in the absolute community/conjugal partnership is considered a deduction.
2. Other items which are excluded:
a. GSIS proceeds/benefits
b. Accruals from SSS
c. Proceeds of life insurance where the beneficiary is irrevocably appointed
d. Proceeds of life insurance under a group insurance policy taken out by the employer (not taken out upon his life)
e. War damage payments
f. Transfers by way of bona fide sales
g. Transfers of property to the government or to any of its political subdivisions
h. Merger of usufruct in the owner of the naked title
i.
Properties held in trust by the decedent
j. Acquisitions and/or transfers expressly declared as not taxable
--------------------------------------------------------------
13. Tax credit for estate taxes paid in a foreign country
--------------------------------------------------------------
The estate tax shall be credited with the amounts of any estate tax imposed by and paid to a foreign country, subject to the following limitations:
1. Per-country basis: The amount of the credit in respect to the tax paid to any country shall not exceed the same proportion of the tax against which the credit is taken, which the decedent's net estate situated within that country taxable under the NIRC bears to his entire net estate; and
2. Overall basis: The total amount of the credit shall not exceed the same proportion of the tax against which such credit is taken, which the decedent's net estate situated outside the Philippines taxable under the NIRC bears to his entire net estate.
--------------------------------------------------------------
14. Exemption of certain acquisitions and transmissions
--------------------------------------------------------------
Read Section 87, Tax Code
Other Administrative Requirements
Read Sections 91 to 97, Tax Code
Q: When should the estate tax be paid?
General rule: At the time the return is filed by the executor, administrator or the heirs.
Exception: The CIR, if he finds that payment on the due date would impose undue hardship, may grant an extension:
1. Not to exceed 5 years, in case the estate is settled judicially; or
2. Not to exceed 2 years, in case the estate is settled extrajudicially.
--------------------------------------------------------------
15. Filing of notice of death
--------------------------------------------------------------
Read Section 89, Tax Code
Q: When is notice of death required to be given to the BIR?
1. In all cases of transfers subject to tax; or
2. Where, though exempt from tax, the gross value of the estate exceeds P20,000.
--------------------------------------------------------------
16.
Estate Tax Return
--------------------------------------------------------------
Read Section 90, Tax Code
Q: When is an estate tax return required?
1. When the estate is subject to estate tax;
2. When, though exempt from tax, the gross value of the estate exceeds P200,000; or
3. Regardless of the gross value of the estate, when the estate consists of registered or registrable property, such as real property, motor vehicles, shares of stock or other similar property for which a clearance from the BIR is required as a condition precedent for the transfer of ownership thereof in the name of the transferee.
Q: May estate tax be collected even after the distribution of the estate to the heirs?
Yes. As held in GOVERNMENT V. PAMINTUAN [OCTOBER 11, 1930], a claim for taxes and assessments, whether assessed before or after the death of the decedent, is not required to be presented to the committee on claims and appraisals. The heirs are liable for the deficiency taxes in proportion to their share in the inheritance. As held in CIR V. PINEDA [SEPTEMBER 15, 1967], an heir is individually answerable for the part of the tax proportionate to the share he received from the inheritance; his liability, however, cannot exceed the amount of his share. On the other hand, a holder of property belonging to the estate is liable for the tax up to the amount of the property in his possession.
Q: A died and B (his wife) tried to withdraw the joint savings deposit they maintained at PNB Tarlac but failed because C, who claimed to be the couple's adopted child, objected. C claims that B cannot withdraw any amount from the bank account because she should follow the legal procedures governing the settlement of the estate of a deceased person, unless a competent court issues an order allowing her to withdraw, invoking Section 97 of the Tax Code. Can the money be released to B?
No.
Section 97 of the National Internal Revenue Code states: "If a bank has knowledge of the death of a person, who maintained a bank deposit account alone, or jointly with another, it shall not allow any withdrawal from the said deposit account, unless the Commissioner has certified that the taxes imposed thereon by this Title have been paid: Provided, however, That the administrator of the estate or any one of the heirs of the decedent may, upon authorization by the Commissioner, withdraw an amount not exceeding Twenty thousand pesos (P20,000) without the said certification. For this purpose, all withdrawal slips shall contain a statement to the effect that all of the joint depositors are still living at the time of withdrawal by any one of the joint depositors, and such statement shall be under oath by the said depositors." (POLIDO V. CA [JULY 10, 2007])
Q: Is the approval of the probate court or the court settling the estate of the decedent a mandatory requirement in the collection of the estate tax?
No. As held in MARCOS II V. CA [JUNE 5, 1997], there is nothing in the Tax Code, and in the pertinent remedial laws, that implies the necessity of the probate or estate settlement court's approval of the state's claim for estate taxes before the same can be enforced and collected.
--------------------------------------------------------------
C. DONOR'S TAX
--------------------------------------------------------------
--------------------------------------------------------------
1. Basic Principles
--------------------------------------------------------------
Read Section 98
Q: What donations are covered by the donor's tax?
The donor's tax is imposed only on donations inter vivos. Donations mortis causa partake of the nature of testamentary dispositions and are subject to estate tax.
In GESTOPA V. CA [OCTOBER 5, 2000], the Supreme Court held that the donation of the deceased spouses to their illegitimate daughter was a donation inter vivos: the spouses executed the deed out of love and affection for the donee, which is a mark of a donation inter vivos; the donors reserved sufficient properties for their maintenance in accord with their standing in society, indicating that they intended to part with the property donated; and the donee accepted the donation, acceptance being required only in donations inter vivos.
Note: The purpose of the donor's tax is to complement estate taxation by preventing the tax-free depletion of the transferor's estate during his lifetime.
--------------------------------------------------------------
4. Purpose or object
--------------------------------------------------------------
Q: What are the purposes for the imposition of donor's tax?
1. To raise revenues;
2. To tax the wealthy and reduce certain other excise taxes;
3. To discourage inter vivos transfers of property which could reduce the mortis causa transfers on which a higher tax, the estate tax, would be collected; and
4. To reduce the incentive to make gifts so that the distribution of future income from the donated property among a number of persons does not avoid the taxes imposed by the higher brackets of the income tax.
--------------------------------------------------------------
5. Requisites of valid donation
--------------------------------------------------------------
Q: What are the requisites of a valid donation?
1. Capacity of the donor;
2. Donative intent (intention to donate);
3. Delivery, whether actual or constructive, of the subject gift;
4. Acceptance by the donee; and
5. The form prescribed by law.
Note: (1) As to capacity, all persons who may contract or dispose of their property may make a donation (Art. 735, NCC). The donor's capacity shall be determined as of the time of the making of the donation (Art. 737, NCC). (2) As to form, under Art. 749, NCC:
1. The donation must be in a public document;
2. The property donated and the value of the charges which the donee must satisfy must be specified; and
3. The donee must accept through a deed or similar instrument.
Q: ABC Steamship insured the life of A, who was then its President and General Manager and was responsible for the success of the company, for which he was compensated. The company initially designated itself as the beneficiary of the policies but, after A's death, it renounced all its rights, title and interest therein in favor of A's heirs. The CIR subjected the donation to donor's tax.
The heirs contend that it was a remuneratory donation in full and adequate compensation for the valuable services of A and, as such, is not subject to donor's tax. Is the contention of the heirs correct?
No. The donation is not remuneratory, as A had been fully compensated for his services. A donation made by the corporation to the heirs of a deceased officer out of gratitude for the officer's past services is considered a donation and is subject to the donee's gift tax. The fact that his services contributed in a large measure to the success of the company did not give rise to a recoverable debt, and the conveyances made by the company to his heirs remain a gift or donation. (PIROVANO V. CIR [JULY 31, 1965])
--------------------------------------------------------------
6. Transfers which may be constituted as donation
a) Sale/exchange/transfer of property for insufficient consideration
b) Condonation/remission of debt
--------------------------------------------------------------
Q: What are considered donations for tax purposes?
1. Sales, exchanges and other transfers of property for less than an adequate and full consideration in money or money's worth, except transfers of real property classified as capital assets, which are subject to CGT; and
2. Condonation or remission of a debt where the debtor did not render service in favor of the creditor.
Note: Condonation or remission of a debt constitutes a donation to the extent of the fair value of the debt condoned or remitted. The creditor is therefore considered a donor for donor's tax purposes and is liable for the tax thereon.
Q: A sold his lot, not used for business, to his brother B for P500,000 when, at that time, the lot was valued in the market at P1 million. A bought it for P100,000. In addition, A sold some of his shares in ABC Corp to his senior executives for P300,000 when their market value was P500,000.
His original cost in the shares was P100,000. Are the sales subject to donor's tax?
The sale of the lot is not subject to donor's tax, as it is real property classified as a capital asset and is subject to the 6% CGT. The sale of the shares, however, is subject to donor's tax of 30% based on the difference between the market value and the selling price.
Q: Supposing that, instead of a general renunciation, B renounced her hereditary share in A's estate in favor of X, who is a special child, would the renunciation be subject to donor's tax?
Yes. A renunciation specifically and categorically made in favor of an identified heir, to the exclusion or disadvantage of the other co-heirs in the hereditary estate, is subject to donor's tax. (Section 11, RR No. 2-2003)
Note: Without a source of income or an acceptable means of acquiring substantial amounts to purchase properties, the inclusion of the names of minor children in the certificates of title of properties shall be deemed an implied donation, which is subject to donor's tax. (SPS. HORDON H. EVONO AND MARIBEL C. EVONO V. CIR, ET AL., CTA EB NO. 705 [CTA CASE NO. 7573], JUNE 4, 2012)
Q: Creditors A, B and C condoned the debt of XYZ Corp pursuant to a court-approved restructuring. Are the creditors liable for donor's tax?
No. The transaction is not subject to donor's tax, since the condonation was not implemented with donative intent but only for business consideration. The restructuring was not a result of the mutual agreement of the debtors and creditors; it was through court action that the debt rehabilitation plan was approved and implemented. (BIR Ruling DA 028-2005 [JANUARY 24, 2005])
--------------------------------------------------------------
7. Transfer for less than adequate and full consideration
--------------------------------------------------------------
Read Section 100
Q: When is there a transfer for less than an adequate and full consideration in money or money's worth?
Where property, other than real property classified as a capital asset subject to the final capital gains tax, is transferred for less than an adequate and full consideration in money or money's worth, the amount by which the fair market value of the property exceeds the value of the consideration shall, for purposes of the donor's tax, be deemed a gift.
Note: (1) The element of donative intent is conclusively presumed in transfers of property for less than an adequate or full consideration in money or money's worth. (2) Why is real property classified as a capital asset, transferred for less than an adequate and full consideration, not deemed a gift subject to donor's tax? Because it is already subject to the final capital gains tax of 6% of the gross selling price or the fair market value of the property, whichever is higher. What the seller avoids in the payment of the donor's tax, it pays for in CGT.
Q: Is the transfer of property from the distressed Asset Asia Pacific, Inc. pursuant to the Special Purpose Vehicle (SPV) Act of 2002 subject to donor's tax?
No. The transaction is not a donation; hence, it is not subject to donor's tax. (BIR Ruling No. 109-2011)
Note: Thus, if the transfer was made pursuant to law, it is not subject to donor's tax.
Q: A died leaving as his only heirs his surviving spouse B and three minor children, X, Y and Z. Since B does not want to participate in the distribution of the estate, she renounced her hereditary share in the estate. Is the renunciation subject to donor's tax?
No. A general renunciation by an heir, including the surviving spouse, as in the case of B, of her share in the hereditary estate left by the decedent is not subject to donor's tax. This is because the general renunciation was not specifically and categorically done in favor of identified heirs to the exclusion or disadvantage of the other co-heirs in the hereditary estate. (Section 11, RR No. 2-2003)
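The Section 100 "deemed gift" rule lends itself to a short sketch, using the figures from the lot-and-shares example earlier in this part. The 30% rate is the rate the reviewer applies to donations to strangers (pre-TRAIN law); the function names are illustrative:

```python
def deemed_gift(fmv: float, consideration: float) -> float:
    """Sec. 100: the excess of the FMV of the transferred property over the
    consideration received is deemed a gift (real property subject to the
    final CGT is excluded from this rule)."""
    return max(fmv - consideration, 0.0)

# Shares with a market value of P500,000 sold to senior executives for P300,000:
gift = deemed_gift(500_000, 300_000)  # P200,000 deemed gift
tax = 0.30 * gift                     # 30% donor's tax rate for strangers (pre-TRAIN)
print(gift, tax)  # prints: 200000 60000.0
```

A sale at or above fair market value yields a deemed gift of zero, matching the "bona fide sale for full consideration" rule.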
Q: BF Goodrich, a manufacturer of rubber products, was required by the Central Bank to develop a rubber plantation. BF Goodrich purchased land for the purpose under the Parity Amendment. Thereafter, the DOJ rendered an opinion stating that, upon the expiration of the Parity Amendment, ownership rights over such lands, including the right to dispose of or sell them, would be lost. Hence, BF Goodrich sold the rubber plantation to Siltown Realty for a price less than its declared fair market value. The BIR assessed BF Goodrich for deficiency donor's tax representing the difference between the fair market value and the actual purchase price of the property, contending that BF Goodrich filed a false return. Did BF Goodrich commit falsity in its return?
No. It is possible that real property may be sold for less than adequate consideration for a bona fide business purpose; in such event, the sale remains an "arm's length" transaction. In this case, Goodrich was compelled to sell the property even at a price less than its market value because it would have lost all ownership rights over it upon the expiration of the Parity Amendment; in other words, it was attempting to minimize its losses. At the same time, it was able to lease the property for 25 years, renewable for another 25, which can be regarded as another consideration on the price. The fact that Goodrich sold its real property for a price less than its declared fair market value did not, by itself, justify a finding of a false return. Even though the donor's tax, defined as "a tax on the privilege of transmitting one's property or property rights to another or others without adequate and full valuable consideration," is different from the capital gains tax, a tax on the gain from the sale of the taxpayer's property forming part of capital assets, the return filed by Goodrich to report its income was sufficient compliance with the legal requirement to file a return.
In other words, the fact that the sale transaction may have partly resulted in a donation does not change the fact that Goodrich already reported its income by filing an income tax return. [CIR V. B.F. GOODRICH PHILS. [FEBRUARY 24, 1999]]

---------------------------------------------------------------
8. Classification of donor
---------------------------------------------------------------

Q: Who are liable to pay donor's tax?
1. Resident citizens
2. Non-resident citizens
3. Resident aliens
4. Non-resident aliens
5. Domestic corporations
6. Foreign corporations

Note: In contrast to estate taxes, a corporation can be subject to donor's tax because it is capable of entering into a contract of donation through the appropriate Board Resolution.

---------------------------------------------------------------
9. Determination of gross gift
---------------------------------------------------------------

Q: Distinguish Gross Gift from Net Gift.

Gross Gift refers to all property, real or personal, tangible or intangible, that is given by the donor to the donee by way of gift, without the benefit of any deduction.

Net Gift means the net economic benefit from the transfer that accrues to the donee.

Note: In sum, all assets, real or personal, tangible or intangible, given by way of gift, wherever located, of a citizen or resident alien are subject to donor's tax, while for non-resident aliens, donor's tax is imposed only on properties located in the Philippines. (On valuation of real property, the taxable base is the fair market value as determined by the CIR (zonal value) or as fixed in the schedule of values of the provincial and city assessors, whichever is higher. If there is no zonal value, the taxable base is the FMV that appears in the latest tax declaration.)
For improvements: the value of the improvement is the construction cost per building permit and/or occupancy permit plus 10% per year after the year of construction, or the FMV per the latest tax declaration. The fair market value at the time of the donation will be considered the amount of the gift.

Q: ABC, a multinational corporation doing business in the Philippines, donated 100 shares of stock of said corporation to Mr. Z, its resident manager in the Philippines. What is the tax liability, if any, of ABC Corporation?

Foreign corporations effecting a donation are subject to donor's tax only if the property donated is located in the Philippines. Accordingly, the donation by a foreign corporation of its own shares of stock in favor of resident employees is not subject to donor's tax. However, if 85% of the business of the foreign corporation is located in the Philippines or the shares donated have acquired business situs in the Philippines, the donation may be taxed in the Philippines, subject to the rule of reciprocity.

---------------------------------------------------------------
10. Composition of gross gift
---------------------------------------------------------------
Read Section 104, Tax Code

Q: What is included as part of gross gift?

As a general rule, gross gifts include real and personal property, whether tangible, intangible or mixed, wherever situated.

Note: If the donor was a non-resident alien at the time of the donation, his real and personal property so transferred which are situated outside the Philippines shall not be included as part of the gross gift.

In GIBBS V. CIR [APRIL 28, 1962], the parents made it appear that they transferred shares of stock in favor of their children for consideration, but it was found that such consideration was insufficient and that the agreements were made to evade taxes. The Supreme Court allowed the CIR to impose taxes on the full value of the shares of stock, not just the excess of the FMV over the consideration/price.
---------------------------------------------------------------
11. Valuation of gifts made in property
---------------------------------------------------------------
Read Section 102, Tax Code

Q: How do we value the gifts subject to donor's tax?

For real property: the value shall be based on either (1) the fair market value as determined by the CIR (zonal value) or (2) the fair market value as shown in the schedule of values of the provincial and city assessors, whichever is higher.

---------------------------------------------------------------
12. Tax credit for donor's taxes paid in a foreign country
---------------------------------------------------------------
Read Section 101(C), Tax Code

Note: See the discussion of tax credit under Estate Tax. The computation of the donor's tax credit is the same as the computation for the estate tax credit; just change "net estate" to "net gifts."

---------------------------------------------------------------
13. Exemptions of gifts from donor's tax
---------------------------------------------------------------
Read Section 101(A) to (B), Tax Code

Note: There are really no deductions from gross gift; there are only exemptions.

Q: Enumerate the exemptions from gross gifts (exempt from donor's tax).
1. Dowries or donations made:
   a. on account of marriage
   b. before its celebration or within one year thereafter
   c. by parents to each of their legitimate, recognized natural or adopted children
   d. to the extent of the first P10,000
2. Gifts made to or for the use of the national government, or any entity created by any of its agencies which is not conducted for profit, or to any political subdivision of the said government
3. Gifts in favor of an educational and/or charitable, religious, cultural or social welfare corporation, institution, accredited NGO, trust or philanthropic organization or research institution or organization, provided not more than 30% of said gifts will be used by such donee for administrative purposes.

Page 22 of 164 | Last Updated: 30 July 2013 (v3)
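The higher-of rule for valuing donated real property can be sketched as a small function (a minimal illustration; the function name and the sample amounts are hypothetical):

```python
def donated_real_property_value(zonal_value: float,
                                assessors_schedule_value: float) -> float:
    """Value of donated real property: the higher of (1) the FMV as
    determined by the CIR (zonal value) and (2) the FMV per the schedule
    of values of the provincial and city assessors."""
    return max(zonal_value, assessors_schedule_value)

# Hypothetical lot: zonal value P3,000,000; assessors' schedule P2,400,000.
print(donated_real_property_value(3_000_000, 2_400_000))  # 3000000
```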
Q: What are the requisites for gifts in favor of an educational and/or charitable, religious, cultural or social welfare corporation, institution, accredited NGO, trust or philanthropic organization or research institution or organization to be exempted?
1. Not more than 30% of the said gift should be used for administrative purposes
2. The donee must be a non-stock, non-profit organization or institution
3. The donee organization or institution should be governed by trustees who do not receive any compensation
4. Said donee devotes all of its income to the accomplishment and promotion of its purposes
5. The NGO must be accredited by the Philippine Council for NGO Certification
6. The donor engaged in business shall give notice of the donation, on every donation worth at least P500,000, to the RDO which has jurisdiction over his place of business within 30 days after receipt of the qualified donee institution's duly issued Certificate of Donation (RR 2-2003)

Q: In addition to the exemptions provided under Section 101 of the Tax Code, are there any other exemptions allowed on gross gift?
1. Encumbrances on the property donated, if assumed by the donee
2. Donations made to entities exempted under special laws (e.g., IBP, IRRI, National Museum, National Library)
3. Amounts specifically provided by the donor as a diminution of the property donated
4. Athletes' prizes and awards (see RA 7549)

Q: What are the requisites for a donation given to athletes as a prize or award to be exempted?
The donation must be a prize or award given to athletes:
1. In local and international sports tournaments and competitions
2. Held in the Philippines or abroad
3. Sanctioned by their respective national sports associations (RA 7549)

Note: Remember Section 32(B)(7)(d), Tax Code, which provides that all prizes and awards granted to athletes in local and international competitions and tournaments, whether held in the Philippines or abroad, and sanctioned by their national sports associations, are excluded from gross income.

Q: What are the requisites for dowries or gifts made on account of marriage to be exempted?
1. The gift was made on account of marriage
2. It was made before or within one year after the celebration of the marriage
3. The donor is a parent
4. The donee is a legitimate, recognized natural or adopted child of the donor
5. The amount of the gift exempted is only to the extent of the first P10,000 (per parent, if made out of conjugal or community funds)

Read Section 99(C), Tax Code

Q: Are political contributions considered gifts and therefore liable for donor's tax?

Under Section 13 of RA 7166, such contributions, duly reported to the COMELEC, shall not be subject to the payment of any gift tax.

Note: In Abello v. CIR [February 23, 2005], the Supreme Court ruled that the contributions made by certain partners of the ACCRA law firm to the campaign of a senatorial candidate were subject to donor's tax, the donations having been made before the exemption under RA 7166 took effect.

---------------------------------------------------------------
14. Person liable
---------------------------------------------------------------
Read Section 103, Tax Code

Q: Who are liable for donor's tax?

Every person, whether natural or juridical, resident or non-resident, who transfers or causes to transfer property by gift, whether in trust or otherwise, whether the gift is direct or indirect, and whether the property is real or personal, tangible or intangible. In other words, the donor is always liable to pay the donor's tax.

Read Section 99(A) to (B), Tax Code

Q: What is the basis in computing donor's tax?

The tax shall be computed on the basis of the total net gifts made during the calendar year in accordance with the graduated donor's tax rates.

Note: To best illustrate --

In general:
Gross gifts made
Less: Deductions from the gross gifts
= Net gifts made
Multiplied by: applicable rate
= Donor's tax on the net gifts

If several gifts were made during the year:

Gross gifts made
Less: Deductions from the gross gifts
= Net gifts made on this date
Add: all prior net gifts during the year
= Aggregate net gifts
Multiplied by: applicable rate
= Donor's tax on aggregate net gifts
Less: donor's tax paid on prior net gifts
= Donor's tax payable on the net gifts to date

In other words, if the donor makes several gifts during the same calendar year, the gifts shall be added on a cumulative basis. The tax for each calendar year shall be computed on the basis of the total net gifts made during the calendar year in accordance with the schedule provided in Section 99(A).

----------------------------------------------------------
D. VALUE-ADDED TAX
----------------------------------------------------------
---------------------------------------------------------------
1. Concept
---------------------------------------------------------------

Q: Define Value-Added Tax (VAT).

A Value-Added Tax is a tax assessed, levied, and collected on every sale, barter or exchange of goods or properties, on the performance of services in the course of trade or business, and on every importation of goods, whether or not in the course of trade or business.

Note: This early on, I want to make the distinction between an exempt entity (a taxpayer exempt from VAT) and an exempt transaction (a transaction exempt from VAT). The distinction proceeds from the nature of VAT as an indirect tax. If the law exempts the statutory taxpayer (i.e., the seller), this does not mean that the buyer is also exempt; the VAT can be shifted to the buyer. Also, if the law exempts the buyer from VAT, meaning the seller cannot pass/shift the VAT to the buyer, this does not mean the seller is exempt; he must pay the tax. In both cases, the transaction is not exempt from VAT because someone will pay. But if the law says the transaction is exempt from VAT, then neither the buyer nor the seller will have to pay VAT. That is the distinction.
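The cumulative computation for several gifts made in one calendar year can be sketched as follows. Since these notes do not reproduce the graduated schedule under Section 99(A), the `schedule` argument is a placeholder function, and the flat 10% used in the illustration is NOT the statutory rate:

```python
def donors_tax_payable(net_gifts_this_date, prior_net_gifts, schedule):
    """Donor's tax payable on the net gifts to date: tax on the aggregate
    net gifts for the calendar year, less the donor's tax already paid
    on prior net gifts during the same year."""
    aggregate = net_gifts_this_date + sum(prior_net_gifts)
    return schedule(aggregate) - schedule(sum(prior_net_gifts))

# Illustration with a made-up flat 10% schedule (not the Section 99(A) rates):
flat_10 = lambda net_gifts: 0.10 * net_gifts
print(donors_tax_payable(500_000, [300_000, 200_000], flat_10))
```

Because earlier gifts are aggregated before the rate is applied, later gifts in the same year land in the higher brackets of a graduated schedule, which is the point of the cumulative rule.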
Remember that, especially when we discuss zero-rated, effectively zero-rated and exempt transactions.

---------------------------------------------------------------
2. Characteristics/Elements of a VAT-Taxable Transaction
---------------------------------------------------------------

Q: What are the characteristics of the VAT?
1. It is a percentage tax imposed at every stage of the distribution process on the sale, barter, exchange or lease of goods or properties and on the performance of services in the course of trade or business, or on the importation of goods, whether for business or non-business.
2. It is a business tax levied on certain transactions involving a wide range of goods, properties and services, such tax being payable by the seller, lessor or transferor.
3. It is an excise tax, or a tax on the privilege of engaging in the business of selling goods or services or in the importation of goods.
4. It is an indirect tax, the amount of which may be shifted to or passed on to the buyer, transferee or lessee of the goods, properties or services.
5. It is an ad valorem tax, as its amount or rate is based on the gross selling price or gross value in money or gross receipts derived from the transaction.

Q: COMASERCO argued that the services it rendered to Philamlife were on a reimbursement-of-cost-only basis and, as such, are not VAT-taxable. Is COMASERCO correct?

No. In CIR V. CA AND COMASERCO [MARCH 30, 2000], the Supreme Court opined that VAT is a tax on transactions, imposed at every stage of the distribution process on the sale, barter or exchange of goods or property, and on the performance of services, even in the absence of profit attributable thereto. The definition of the term "in the course of trade or business" applies to all transactions. Even a non-stock, non-profit corporation or a government entity is liable to pay VAT on the sale of goods and services.
In this case, even if the services rendered for a fee were on a reimbursement-of-cost arrangement and without realizing profit, the payments are still subject to VAT.

Q: Pursuant to the government's privatization program, NDC decided to sell its shares in the National Marine Corp. and 5 vessels. Magsaysay Lines bought the shares and vessels. The CIR contends that the sale of the 5 vessels is incidental to NDC's VAT-registered activity of leasing out personal property and is thus VAT-taxable. Is the CIR correct?

No. In CIR V. MAGSAYSAY LINES [JULY 28, 2006], the Supreme Court found that any sale, barter or exchange of goods or services not in the course of trade or business is not subject to VAT. In this case, the sale of the vessels was an isolated transaction, not done in the ordinary course of NDC's business, and is thus not subject to VAT.

Note: In THOMAS C. ONGTENCO V. CIR [CTA CASE NO. 8190, DECEMBER 12, 2012], the CTA held that the taxpayer's act of lending money to a corporation where he is a director and stockholder cannot be considered an act of lending in the course of his trade or business. His act of lending was not done in the ordinary course of his business or trade but was merely an isolated transaction in order to help the company in its provincial expansion, considering that, at that time, it was just starting and was having difficulties in getting and applying for loans from banks. The act of lending was a one-time assistance in his capacity as stockholder.

Q: Sony Philippines engaged the services of several advertising companies. Due to dire economic conditions, Sony International Singapore (SIS) gave Sony Philippines a dole-out to pay for said advertising expenses. Sony Philippines claimed as input VAT credits the VAT paid for the advertising expenses. The CIR disallowed this and assessed Sony Philippines deficiency VAT on the reimbursable received by it from SIS. The CIR contends that the reimbursable was a fee for a VAT-taxable activity.
Is the CIR correct?

No. The Supreme Court held in CIR V. SONY PHILIPPINES [NOVEMBER 17, 2010] that Sony Philippines cannot be deemed to have received the reimbursable as a fee for a VAT-taxable activity. The absence of a sale, barter or exchange of goods or properties supports the non-VAT nature of the reimbursable. The Supreme Court distinguished this case from CIR V. CA AND COMASERCO [MARCH 30, 2000], where, even if there was similarly a reimbursement-on-cost arrangement between affiliates, there was in fact an underlying service. Here, the advertising services were rendered in favor of Sony Philippines, not SIS.

(Background to the COMASERCO case discussed earlier: COMASERCO is a non-stock, non-profit organization, affiliated with Philamlife and organized to perform collection, consultative or technical services. The BIR assessed COMASERCO for deficiency VAT. COMASERCO argued that the services rendered to Philamlife were on a no-profit, reimbursement-of-cost-only basis and, as such, not VAT-taxable; as discussed above, the Supreme Court rejected this argument.)

Under the tax credit method, an entity can subtract from the VAT charged on its sales or outputs the VAT paid on its purchases, inputs and imports. The legal basis can be found in Section 110(A) of the Tax Code, which provides that any input tax evidenced by a VAT invoice or official receipt on the purchase or importation of goods or on the purchase of services shall be creditable against the output tax. Under the VAT method of taxation, which is invoice-based, an entity can subtract from the VAT charged on its sales or outputs the VAT it paid on its purchases, inputs and imports (CIR V. SEAGATE TECHNOLOGY [FEBRUARY 11, 2005]).

Note: (1) The Tax Credit Method is the method used to determine how much VAT you have to pay. We will talk about this in greater detail under "Determination of output/input VAT." For now, I'll give you the basics, which will suffice for understanding the succeeding topics.
As discussed above, the taxpayer determines his tax liability by computing the tax on the gross selling price or gross receipts (output tax) and subtracting or crediting the earlier VAT paid on the purchase or importation of goods or on the purchase of services (input tax) against the tax due on his own sale. Let's put it as a formula: output tax less input tax equals VAT payable.

---------------------------------------------------------------
5. Tax Credit Method
---------------------------------------------------------------

Note: We won't understand the Tax Credit Method if we do not define output tax and input tax. Okay, an example. Let's say you are a seller of wooden furniture. What do you need to make your product? Wood, of course; it is wooden furniture, after all. So you buy wood. The one who sold it to you gave you an invoice. Looking at your invoice, you see indicated the 12% VAT you paid on your purchase of the wood. That is your input tax! So, using the wood, you made, let's say, tables and chairs. Since you will sell these, you are subject to VAT. That is called the output tax. Under the Tax Credit Method, you can deduct the 12% you paid on your purchase of the wood from the 12% VAT you have to pay on the sale of your final product, the tables and chairs. Because of that, your VAT liability is reduced.

(2) As explained in ABAKADA GURO PARTY LIST V. ERMITA [SEPTEMBER 1, 2005], the VAT system was previously a single-stage system under a cost deduction method and was payable only by the original sellers. Now, the VAT system is a multi-stage system, a mixture of the cost deduction method and the tax credit method.

---------------------------------------------------------------
6. Destination Principle
---------------------------------------------------------------

Q: What is the destination principle (cross-border doctrine)?

As a general rule, the value-added tax (VAT) system uses the destination principle. It means that the destination of the goods determines the taxation or exemption from VAT.
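The input/output tax mechanics of the furniture example above can be sketched in code (an illustrative sketch; the 12% rate is from the discussion, while the peso figures are my own):

```python
VAT_RATE = 0.12  # the current VAT rate discussed in the notes

def vat_payable(gross_selling_price: float, input_vat: float) -> float:
    """Tax Credit Method: output tax on the seller's own sale, less the
    creditable input tax shown on the supplier's VAT invoice."""
    output_tax = gross_selling_price * VAT_RATE
    return output_tax - input_vat

# Hypothetical: buy wood with P1,200 input VAT on the supplier's invoice,
# then sell the finished tables and chairs for P18,000.
print(vat_payable(18_000, 1_200))  # output 2,160 less input 1,200 = 960.0
```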
Goods and services are taxed only in the country where they are consumed.

Note: (1) This is the reason why export sales of goods are subject to 0% while importations of goods are subject to 12%. Exported goods will be consumed in whichever country they are exported to, so they are zero-rated. On the other hand, we consume imported goods here in the Philippines; that is why they are subject to 12% VAT. (2) In the case of services, consumption takes place where the service is performed. Note, however, that there is an exception to the destination principle when it comes to the sale of services: although the services are performed in the Philippines, certain sales of services are zero-rated. We will discuss this later when we get to Section 108(B), or zero-rated sales of services.

Note: RR 16-2011 [October 27, 2011] increased the threshold amounts for the sale of a residential lot, the sale of a house and lot, the lease of a residential unit, and the sale or lease of goods or properties or performance of services covered by Section 109(P), (Q) and (V) of the Tax Code. These are the changes:
1. Sale of residential lot (Sec. 109(P)): from P1,500,000 to P1,919,500
2. Sale of house and lot (Sec. 109(P)): from P2,500,000 to P3,199,200
3. Lease of residential unit (Sec. 109(Q)): from P10,000 to P12,800
4. Sale or lease of goods or properties or performance of services (Sec. 109(V)): from P1,500,000 to P1,919,500

I suggest you update your codal with these adjusted amounts. That is important, especially when we talk about exempt transactions.

---------------------------------------------------------------
8. VAT on sale of goods or properties
a) Requisites of taxability of sale of goods or properties
---------------------------------------------------------------
Read Section 106(A)(1), Tax Code

Q: What are considered "goods or properties" for VAT purposes?

All tangible and intangible objects which are capable of pecuniary estimation, including:
1. Real properties held primarily for sale to customers or held for lease in the ordinary course of business
2. The right or privilege to use a patent, copyright, design or model, plan, secret formula or process, goodwill, trademark, trade brand, or other like property or right
3. The right or privilege to use in the Philippines any industrial, commercial or scientific equipment
4. The right or privilege to use motion picture films, films, tapes and discs
5. Radio, television, satellite transmission and cable television time
(see SECTION 106(A)(1), TAX CODE)

---------------------------------------------------------------
7. Persons liable
---------------------------------------------------------------
Read Section 105, Tax Code

Q: In general, who are liable to pay the VAT?
1. Any person who, in the course of trade or business, sells, barters, exchanges or leases goods or properties, or renders services.
   Except: a person, whether or not VAT-registered, whose annual gross sales or receipts do not exceed P1,919,500. (Such a person shall be liable instead for the 3% percentage tax on small business enterprises; see Section 116, Tax Code.)
2. Any person who imports goods, whether in the course of trade or business or not.
(see SECTION 105, TAX CODE; SECTION 4.105-1, RR 16-2005)

Q: What is the gross selling price (GSP)?

The total amount of money or its equivalent which the purchaser pays or is obligated to pay to the seller in consideration of the sale, barter or exchange of the goods or properties, excluding the VAT. Any excise tax on such goods or properties shall form part of the GSP.

Note: If the consideration of a sale is not wholly in money, as in a part-exchange or barter transaction, the base is the price that would have been charged in an open-market sale for purely monetary consideration.

Q: What are the requisites for the taxability of a sale of goods or properties?
1. There is an actual or deemed sale, barter or exchange of goods or properties for a valuable consideration
2. The sale is undertaken in the course of trade or business or exercise of profession in the Philippines
3. The goods or properties are located within the Philippines and are for use or consumption therein
4. The sale is not exempt from VAT under Section 109 of the Tax Code, a special law, or an international agreement binding upon the government of the Philippines
Note: (1) The absence of any of the above requisites exempts the transaction from VAT. However, percentage taxes may apply: if the annual gross sales or receipts do not exceed P1,919,500, the sale is subject instead to the 3% percentage tax on small business enterprises. (2) We can combine (3) and (4) by stating that the transaction should not be a VAT zero-rated or a VAT-exempt transaction.

Q: What are the requisites for the taxability of a sale of real property?
1. The seller executes a deed of sale, including dacion en pago, barter or exchange, assignment, transfer or conveyance, or merely a contract to sell, involving real property
2. The real property is located in the Philippines
3. The seller or transferor is engaged in the real estate business either as a real estate dealer, developer or lessor
4. The real property is held primarily for sale or for lease in the ordinary course of his trade or business
5. The sale is not exempt from VAT under Section 109, a special law, or an international agreement binding upon the government of the Philippines
6. The threshold amount set by the law should be met

Note: The gross selling price shall mean the consideration stated in the sales document or the fair market value, whichever is higher. The fair market value shall mean whichever is higher of (1) the fair market value as determined by the CIR (zonal value) or (2) the fair market value as shown in the schedule of values of the provincial and city assessors (real property tax declaration). In the absence of a zonal value, the gross selling price shall refer to the market value shown in the latest real property tax declaration or the consideration, whichever is higher.

Note: As to (6), the sale of a residential lot is subject to VAT if the GSP exceeds P1,919,500, and the sale of a residential house and lot or other residential dwelling if the GSP exceeds P3,199,200; otherwise, the sale is exempt from VAT. An installment sale of a residential house and lot or other residential dwelling exceeding P1 million shall be subject to VAT.
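The gross-selling-price rules for real property described above can be sketched as follows (a minimal illustration; the function name and the sample amounts are hypothetical):

```python
def real_property_vat_base(consideration: float, zonal_value,
                           tax_declaration_value: float) -> float:
    """VAT base for a sale of real property: the consideration stated in
    the sales document or the FMV, whichever is higher. The FMV is the
    higher of the CIR's zonal value and the assessors' schedule value;
    absent a zonal value, the latest real property tax declaration is used."""
    if zonal_value is None:
        fmv = tax_declaration_value  # no zonal value available
    else:
        fmv = max(zonal_value, tax_declaration_value)
    return max(consideration, fmv)

# Hypothetical: deed price P2,500,000; zonal P3,000,000; schedule P2,200,000.
print(real_property_vat_base(2_500_000, 3_000_000, 2_200_000))  # 3000000
```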
(See SECTION 4.106-4, RR 16-2005 [SEPTEMBER 1, 2005], AS AMENDED BY RR 04-07 [FEBRUARY 7, 2007], RR 16-2011 [OCTOBER 27, 2011], RR 3-2013 [FEBRUARY 20, 2012] AND RR 13-2012 [OCTOBER 12, 2012].)

Q: Assuming a VAT-taxable transaction, is the advance payment in a real estate transaction subject to VAT?

Of the amounts typically covering an advance payment, only the pre-paid rent is subject to VAT. Other forms of advance payment, such as option money, security deposit, etc., are not subject to VAT.

Q: A bought two adjacent condominium units, which he intended to combine so as to fit his family. Each unit has a GSP of P2 million. The two units were separately documented. After 2 years, A decided to sell the two units. A contends that the units are exempt from VAT as the GSP of each did not exceed P2.5 million. Is A correct?

No. By virtue of the amendment introduced by RR 13-2012 [OCTOBER 12, 2012], the sale of real properties subject to VAT shall include the sale, transfer, or disposal within a 12-month period of two or more adjacent residential lots, house and lots, or other residential dwellings in favor of one buyer. Such adjacent real properties, although covered by separate titles and/or separate tax declarations, when sold to one and the same buyer, whether covered by one or separate deeds of conveyance, shall be presumed a sale of one residential lot, house and lot, or residential dwelling.

Q: Is the sale of a parking lot included in the sale of a condominium unit?

No. The sale of parking lots is a separate and distinct transaction and is not covered by the rules on the threshold amount, not being a residential lot, house and lot, or residential dwelling; thus, it is subject to VAT regardless of the amount of the selling price. (see RR 13-2012 [OCTOBER 12, 2012])

---------------------------------------------------------------
9. Zero-rated sales of goods or properties and effectively zero-rated sales of goods or properties
---------------------------------------------------------------
Read Section 106(A)(2), Tax Code

Q: What are zero-rated transactions?
VAT zero-rated transactions are sales by VAT-registered persons which are subject to a 0% rate, meaning the tax burden is not passed on to the purchaser. A zero-rated sale by a VAT-registered person, which is a taxable transaction for VAT purposes, shall not result in any output tax. However, the input tax on his purchases of goods, properties or services related to such zero-rated sale shall be available as a tax credit or refund.

Q: Distinguish VAT rating (VAT-taxable transactions) from zero rating (zero-rated transactions).

As explained by the Supreme Court in CIR V. BENGUET CORPORATION [JULY 14, 2006]: In transactions taxed at a 10% rate (now 12%), when at the end of any given taxable quarter the output VAT exceeds the input VAT, the excess shall be paid to the government; when the input VAT exceeds the output VAT, the excess would be carried over to VAT liabilities for the succeeding quarter or quarters. On the other hand, transactions which are taxed at zero rate do not result in any output tax. Input VAT attributable to zero-rated sales could be refunded or credited against other internal revenue taxes, at the option of the taxpayer.

Note: As an example, assume that a VAT-registered person purchases materials from his supplier at P100, P9.6 of which was passed on to him by his supplier as the latter's 12% output VAT. In a zero-rated transaction, the taxpayer can recover the P9.6 from the BIR either through a refund or a tax credit. When the taxpayer sells his finished product for, let's say, P120, he is not required to pay the output VAT of P2.4 (12% of the P20 value he has added to the P100 material). In a transaction subject to VAT, however, he may recover both the input VAT of P9.6 which he paid to the supplier and his output VAT of P2.4 by passing both these costs to the buyer.
The buyer then pays P12, the total 12% VAT.

Q: Distinguish zero-rated (automatically zero-rated) from effectively zero-rated transactions.

Zero-rated (automatic zero rating):
1. Intended to be enjoyed by the seller, who is directly and legally liable for the VAT, making such seller internationally competitive by allowing the refund or credit of input taxes that are attributable to export sales.
2. The seller of such transactions charges no output tax, but can claim a refund of, or a tax credit certificate for, the VAT previously charged by suppliers.
3. The taxpayer need not file an application form and secure BIR approval before the sale.

Effectively zero-rated:
1. Refers to the sale of goods or supply of services to persons or entities whose exemption under special laws or international agreements to which the Philippines is a signatory effectively subjects such transactions to a zero rate.
2. Intended to benefit the purchaser who, not being directly and legally liable for the payment of the VAT, will ultimately bear the burden of the tax shifted by the suppliers.
3. The seller, who charges zero output tax on such transactions, can also claim a refund of, or a tax credit certificate for, the VAT previously charged by suppliers.
4. As to the need for an application, the rules are: (a) prior to RA 9337 (before November 1, 2005), an application was needed, except for sales to PEZA and sales to BOI-registered 100% manufacturer-exporters; (b) from RA 9337 up to before RR 4-2007 (November 1, 2005 to April 5, 2007), an application was needed, with no exceptions; (c) from RR 4-2007 (April 6, 2007) onwards, the need for an application is not expressly provided.

Q: Distinguish zero rating from VAT exemption.

As differentiated by the Supreme Court in CIR V. CEBU TOYO CORPORATION [FEBRUARY 16, 2005]:

Zero-rated:
1. The tax rate is set at zero; when applied to the tax base, such rate obviously results in no tax chargeable against the purchaser.
2. The input VAT on the purchases of a VAT-registered person with zero-rated sales may be allowed as tax credits or refunded.
3. Persons engaged in transactions which are zero-rated, being subject to VAT, are required to register.

VAT-exempt:
1. As applied to the tax base, the transaction does not yield any tax chargeable against the purchaser.
2. The seller in an exempt transaction is not entitled to any input tax on his purchases, despite the issuance of a VAT invoice or receipt.

Q: Enumerate the requisites that must be complied with in order to be entitled to a refund or issuance of a TCC for input VAT due or paid attributable to zero-rated or effectively zero-rated sales.
1. There must be zero-rated or effectively zero-rated sales;
2. Input taxes were incurred or paid;
3. Such input taxes are directly attributable to zero-rated or effectively zero-rated sales;
4. Input taxes were not applied against any output VAT liability; and
5. The claim for refund was filed within the two-year prescriptive period.
(see SITEL PHILIPPINES CORPORATION V. CIR [CTA CASE NO. 7623, MARCH 3, 2010])

Note: No more VAT TCCs shall be issued. In connection with this, Executive Order 68 [March 27, 2012] provides for the monetization of outstanding VAT TCCs, which EO 68 allows qualified VAT-registered taxpayers to avail of. RMO 21-2012 [August 9, 2012] provides the guidelines, policies and procedures for the implementation of the VAT TCC Monetization Program.

Q: What are the zero-rated sales of goods or properties?
1. Export sales (see the enumeration in Section 106(A)(2)(a), Tax Code)
2. Foreign currency denominated sales — the sale to a non-resident of goods assembled or manufactured in the Philippines for delivery to a resident in the Philippines, paid for in acceptable foreign currency and accounted for in accordance with BSP rules and regulations
3. Sales to persons or entities whose exemption under special laws and international agreements to which the Philippines is a signatory subjects such sales to a 0% rate (effectively zero-rated transactions)

Note: As to 1(e), considered export sales under E.O. 226 include the sale of goods and services by a VAT-registered person in the customs territory to ecozone and Freeport enterprises, so as to make them automatically zero-rated (Section 4.106-5, RR No. 4-2007).
4-2007). As to 1(f), the goods subject to zero-rating are limited to goods and passengers transported from a port in the Philippines directly to a foreign port, or vice versa, without docking or stopping at any other port in the Philippines. (Ibid.)

Now, I want to discuss the VAT treatment of PEZA-registered enterprises. This has been the subject of much confusion, and the cases only added to it. What you have to note in reading the cases is whether each was decided before or after the effectivity of RMC 74-99.

Before RMC 74-99, whether a PEZA-registered enterprise was exempt from or subject to VAT depended on the type of fiscal incentives availed of by the said enterprise. PEZA entities can avail of two alternative or successive incentives: the income tax holiday (ITH) or the 5% preferential tax rate on gross income. If the entity avails of the 5% preferential tax rate, it is exempt from all taxes, including VAT; but if it avails of the ITH, it shall be exempt from income taxes for a number of years, but not from VAT (see CIR v. SEKISUI JUSHI PHILIPPINES [JULY 21, 2006]). This explains the decisions in CIR V. TOSHIBA INFORMATION EQUIPMENT [AUGUST 9, 2005] and CIR v. CEBU TOYO CORPORATION [FEBRUARY 16, 2005], where in both cases the Supreme Court held that the PEZA-registered enterprise was entitled to a VAT refund/credit because it had opted to avail itself of the income tax holiday. Having availed of the income tax holiday, and its export sales being zero-rated transactions, the PEZA-registered enterprise was entitled to a refund or credit of its unutilized input taxes. In both cases, the transactions were made prior to the effectivity of RMC 74-99.

After the effectivity of RMC 74-99, the tax treatment of sales of goods and services of PEZA-registered enterprises is based on the principles of separate customs territory and the cross-border doctrine. As explained by the Court in the cases of CIR V. SEAGATE TECHNOLOGY [FEBRUARY 11, 2005], CIR v.
SEKISUI JUSHI PHILIPPINES [JULY 21, 2006], CIR V. TOSHIBA INFORMATION EQUIPMENT [AUGUST 9, 2005], and CIR V. CONTEX [JULY 2, 2004]: PEZA-registered enterprises, which are necessarily located within ecozones, are VAT-exempt entities not because of Section 24 of RA 7916 (which imposes the 5% preferential tax rate on gross income of PEZA-registered enterprises in lieu of all taxes), but because of Section 8 of the same law, which establishes the fiction that ecozones are foreign territory. As a result, sales made by a supplier in the Customs Territory (the national territory of the Philippines outside the borders of the ecozone) to a purchaser in the ecozone shall be considered an exportation from the Customs Territory. Conversely, sales made by a supplier from the ecozone to a purchaser in the Customs Territory shall be considered an importation into the Customs Territory.

The Philippine VAT system adheres to the cross-border doctrine, which means that no VAT shall be imposed to form part of the cost of goods destined for consumption outside the territorial border of the taxing authority; exports must thus be free of VAT, while goods destined for use or consumption within the Philippines are imposed with 10% (now 12%) VAT. Accordingly, sales made by an enterprise within a non-ecozone territory, i.e., the Customs Territory, to an enterprise within an ecozone territory shall be free of VAT. This has been further clarified in RMC 50-2007 [July 30, 2007].

Q: Summarize the current tax treatment of PEZA-registered enterprises as provided in RMC 74-99 and as further clarified in RMC 50-2007.

1. Any sale of goods, property or services by a VAT-registered supplier from the customs territory to any ecozone-registered enterprise, regardless of the incentive availed of, is zero-rated on the part of the VAT-registered seller, because ecozones are foreign soil by fiction and the sale is thus considered an export sale.

2. Sales to an ecozone enterprise made by a non-VAT or unregistered supplier would only be exempt from VAT, and the supplier shall not be able to claim a credit/refund for its input VAT, because, under Section 109(O) of the Tax Code, export sales by persons who are not VAT-registered are exempt transactions.

3.
If the ecozone enterprise is an exporter, its input VAT is subject to refund, not because of the incentives it availed of, but because of the nature of its transactions (export sales).

4. Any sale of goods or property by an ecozone-registered enterprise to a buyer in the customs territory shall be subject to 12% VAT, because it shall be considered an importation. The tax is imposed on the buyer/importer.

5. The sale of services or lease of properties by PEZA-registered enterprises to a customer or lessee from the customs territory shall be exempt from VAT if the service is performed within the ecozone; the lease of properties will likewise be exempt if the property is located within the ecozone. However, if the properties are located outside of the ecozone, payments to such enterprise shall be considered as royalties and subject to final withholding VAT of 12%.

In summary:

Sale of goods by a VAT-registered supplier from the customs territory to a PEZA-registered enterprise: 0% VAT. Sale of services to such an enterprise: likewise 0% VAT.

Sale of goods by a PEZA-registered enterprise to a buyer in the customs territory: 12% VAT imposed on the buyer, in addition to the import tax and customs duties. Sale of services (or lease of properties): VAT-exempt if the service is performed or rendered within the ecozone (the same rule applies to the lease of properties located in the ecozone); 12% VAT imposed on the PEZA-registered enterprise seller if the service is performed, or the property leased is located, outside the ecozone.

--------------------------------------------------------------
10. Transactions deemed sale
a) Transfer, use or consumption not in the course of business of goods/properties originally intended for sale or use in the course of business
b) Distribution or transfer to shareholders, investors, or creditors
c) Consignment of goods if actual sale is not made within 60 days from the date of consignment
d) Retirement from or cessation of business with respect to inventories on hand
--------------------------------------------------------------
Read Section 106(B), Tax Code

Q: What is meant by transactions deemed sale?

There is no actual sale. However, the law deems that there is a taxable sale.
The transactions deemed sale are:

1. Transfer of goods or properties not in the course of business (originally intended for sale or for use in the course of business);
2. Property dividends (transfer to shareholders as their share in the profits of VAT-registered persons, or to creditors in payment of debt);
3. Consignment of goods without the sale being made within 60 days; and
4. Retirement from or cessation of business with respect to inventories of taxable goods then existing. (see SECTION 106(B), TAX CODE)

Note: Before considering whether a transaction is deemed sale, it must first be determined whether the sale was in the ordinary course of trade or business. Even if a transaction would otherwise be a deemed sale, if it was not done in the ordinary course of trade or business, the transaction is still not subject to VAT (CIR v. MAGSAYSAY LINES [JULY 28, 2006]).

--------------------------------------------------------------
11. Change or cessation of status as VAT-registered person
a) Subject to VAT
(i) Change of business activity from VAT-taxable status to VAT-exempt status
(ii) Approval of request for cancellation of registration due to reversion to exempt status
(iii) Approval of request for cancellation of registration due to desire to revert to exempt status after the lapse of 3 consecutive years
b) Not subject to VAT
(i) Change of control of a corporation
(ii) Change in the trade or corporate name
(iii) Merger or consolidation of corporations
--------------------------------------------------------------
Read Section 106(C), Tax Code

Q: When is a change in or cessation of status of a VAT-registered person subject to VAT?

1. Change of business activity from VAT-taxable status to VAT-exempt status: when a VAT-registered person engaged in a VAT-taxable activity decides to discontinue such activity and engage in a non-VAT-taxable activity.
2. Approval of a request for cancellation of registration due to reversion to exempt status: when a person commenced a business with the expectation that his gross sales or receipts would exceed P1,919,500, but failed to exceed this amount during the first 12 months of operation.

3. Approval of a request for cancellation of registration due to the desire to revert to exempt status after the lapse of 3 consecutive years: when a person who is VAT-exempt and not required to register for VAT opted to register as a VAT taxpayer and, after the lapse of 3 years, desires to revert to exempt status.

Q: San Roque Power entered into a purchase power agreement with NAPOCOR to develop the hydroelectric potential of the Lower Agno River. During the testing period, electricity was transferred by San Roque to NAPOCOR. Can the transfer be considered a sale of electricity?

Yes. In SAN ROQUE POWER CORP. V. CIR [NOVEMBER 25, 2009], the Supreme Court held that although the transfer was not a commercial sale, the NIRC does not limit the definition of "sale" to commercial transactions in the normal course of business. Conspicuously, Section 106(B) of the NIRC, which deals with the imposition of VAT, does not limit the term "sale" to commercial sales; rather, it extends the term to transactions that are deemed sales. In the said case, it was undisputed that San Roque transferred to NAPOCOR all the electricity that was produced during the trial period. The fact that it was not transferred through a commercial sale or in the normal course of business does not detract from the fact that such transaction is deemed a sale.

Q: When is a change in or cessation of status of a VAT-registered person NOT subject to VAT?

1. Change of control of a corporation by the acquisition of the controlling interest of such corporation by another stockholder or group of stockholders. The goods or properties used in the business, or those comprising the stock-in-trade, will not be considered sold, bartered or exchanged, because the corporation still owns them.

The following, however, are subject to VAT:
a. Exchange of property by the corporation acquiring control for the shares of stock of the target corporation; and
b. Exchange by a person who wants to join the corporation of his properties held for sale or for lease, in return for shares of stock, whether or not resulting in corporate control.

2. Change in the trade or corporate name.

3. Merger or consolidation of corporations. The unused input tax of the dissolved corporation as of the date of merger or consolidation shall be absorbed by the surviving corporation.

--------------------------------------------------------------
12. VAT on importation of goods
a) Transfer of goods by tax exempt persons
--------------------------------------------------------------
Read Section 107(A), Tax Code

Q: Does VAT apply to every importation?

Yes. The VAT shall be imposed on every importation of goods, whether or not made in the course of trade or business. This is unlike the VAT on sale of goods or properties, which must be in the course of trade or business; otherwise, the person/transaction shall not be liable to pay VAT. (see CIR V. SEAGATE TECHNOLOGY [FEBRUARY 11, 2005])

Note: Where the customs duties are determined on the basis of the quantity or volume of the goods, the VAT shall be based on the landed cost plus excise taxes, if any.

Q: Anshari, an alien employee of the ADB who is retiring soon, has offered to sell you his car, which he imported tax-free for his personal use. The privilege of tax exemption is recognized by the tax authorities. If you decide to purchase the car, is the sale subject to tax?

Yes. Section 107(B) provides that, in the case of a tax-free importation of goods into the Philippines by persons, entities or agencies exempt from tax, where the goods are subsequently sold, transferred, or exchanged in the Philippines to non-exempt persons or entities, the purchasers, transferees, or recipients shall be considered the importers thereof, and shall be liable for any internal revenue tax on such importation.

Page 36 of 164 Last Updated: 30 July 2013 (v3)

--------------------------------------------------------------
13.
Tax on sale of service and use or lease of properties
a) Requisites of taxability
--------------------------------------------------------------
Read Section 108(A), Tax Code

Q: What is a sale or exchange of services?

A sale or exchange of services means the performance of all kinds of services in the Philippines for others for a fee, remuneration or consideration. (See SECTION 108(A), TAX CODE for an extensive enumeration of the types of services included in said definition.)

Q: Are association dues, membership fees, and other assessments and charges collected by a condominium corporation or homeowners' association subject to VAT?

Yes, because they constitute income payments or compensation for the beneficial services the condominium corporation or homeowners' association provides for its tenants and members (RMC 65-2012).

Note: (1) The fact that a condominium corporation or homeowners' association is a non-stock, non-profit organization is immaterial. As held in CIR V. CA & COMASERCO [MARCH 30, 2000], even a non-stock, non-profit organization or a government entity is liable to pay VAT on the sale of goods and services. (2) Pursuant to Section 18 of RA 9904 (Magna Carta for Homeowners and Homeowners' Associations), the association dues and income derived from rentals of homeowners' associations may be exempted from tax subject to the following conditions: (a) the homeowners' association must be a duly constituted association as defined under Section 3(b) of RA 9904; (b) the LGU having jurisdiction over the homeowners' association must issue a certification identifying the basic services being rendered by the association and its lack of resources to render such services; and (c) the association must present proof that the income and dues are used for the cleanliness, security and other basic services needed by the members, including the maintenance of the facilities in their respective subdivisions and villages. (RMC 9-2013 [January 29, 2013])

Q: Are the fees collected by tollway operators subject to VAT?

Yes. The Supreme Court in DIAZ V.
SECRETARY OF FINANCE [JULY 10, 2011] answered this issue in the affirmative. The Court held that VAT is imposed on all kinds of services, and tollway operators, who are engaged in constructing, maintaining, and operating expressways, are no different from lessors of property, transportation contractors, etc. Further, they also come under those described as "all other franchise grantees," which is not confined to legislative franchise grantees, since the law does not distinguish. They are not franchise grantees under Section 119 of the Tax Code, which would have made them subject to percentage tax instead. Neither are their services part of the enumeration under Section 109 on VAT-exempt transactions.

Note: RMC 63-2010 [JULY 19, 2010] was issued to implement Section 108 and impose VAT on the gross receipts of tollway operators from all types of vehicles starting August 16, 2010.

Q: Are the gross receipts derived by operators or proprietors of cinema/theater houses from admission tickets subject to VAT?

No. The Supreme Court in CIR v. SM PRIME HOLDINGS [FEBRUARY 26, 2010] held that although the enumeration of services subject to VAT under Section 108 is not exhaustive, the legislative intent is to subject the gross receipts of cinema/theater operators from admission tickets to the amusement tax under the Local Government Code, not to VAT.

PIERRE MARTIN DE LEON REYES, Ateneo Law Batch 2013

Q: Give the basis of VAT on the sale of services and use or lease of properties.

The basis shall be the gross receipts derived from the sale or exchange of services, including the use or lease of properties. (see Section 108(A), Tax Code)

Note: Gross receipts means the total amount of money or its equivalent representing the contract price, compensation, service fee, rental or royalty actually or constructively received during the taxable quarter for the services performed or to be performed for another person.

Q: What are the requisites for the taxability of the sale of services and use or lease of properties?

1. There is a sale or exchange of service, or a lease or use of property, enumerated in the law, or other similar services;
2. The service is performed or to be performed in the Philippines;
3. The service is in the course of the taxpayer's trade, business or profession;
4. The service is for a valuable consideration actually or constructively received; and
5. The service is not exempt under the Tax Code, a special law or an international agreement.

Note: The absence of any of the requisites renders the transaction exempt from VAT, but it may be subject to other percentage taxes.

Q: What is the tax treatment of the lease of residential units, where some are leased out for a monthly rental not exceeding P12,800 while others are leased out for more than P12,800?

The tax treatment shall be as follows:
1. The gross receipts from rentals not exceeding P12,800 per month per unit shall be exempt from VAT, regardless of the aggregate gross receipts.
2. The gross receipts from rentals exceeding P12,800 per month per unit shall be subject to VAT if the aggregate annual gross receipts from said units exceed P1,919,500. Otherwise, the gross receipts will be subject to the 3% tax imposed under Section 116 of the Tax Code.

--------------------------------------------------------------
14. Zero-rated sale of services
--------------------------------------------------------------
Read Section 108(B), Tax Code

Q: Enumerate the zero-rated sales of services.

SECTION 108(B) provides for the following:
1. Processing, manufacturing or repacking of goods for other persons doing business outside the Philippines, which goods are subsequently exported, where the services are paid for in acceptable foreign currency and accounted for in accordance with the rules and regulations of the BSP;
2. Services other than those mentioned in the preceding paragraph rendered to a person engaged in business conducted outside the Philippines, or to a nonresident person not engaged in business who is outside the Philippines when the services were performed, the consideration for which is paid for in acceptable foreign currency and accounted for in accordance with the rules and regulations of the BSP;
3. Services rendered to persons or entities whose exemption under special laws or international agreements effectively subjects the supply of such services to a 0% rate (effectively zero-rated transactions);
4. Sale of services to persons engaged in international shipping or air transport operations;
5. Sale of services to an export-oriented enterprise whose export sales exceed 70% of total annual production;
6. Transport of passengers and cargo by air or sea vessels from the Philippines to a foreign country; and
7. Sale of power generated through renewable sources of energy.

Q: Acesite is the operator of Holiday Inn Hotel. It leases part of its premises to PAGCOR and caters food and beverages to its patrons. Acesite contends that its sale of food and beverages to PAGCOR is zero-rated, thus entitling it to claim a tax refund/credit. Is Acesite correct?

Yes. In CIR v. ACESITE PHILIPPINES [FEBRUARY 16, 2007], the Supreme Court stated that services rendered to persons or entities whose exemption under special laws or international agreements to which the Philippines is a signatory effectively subjects the supply of such services to a zero (0%) rate. Since the law clearly provides for PAGCOR's exemption, the sale of services of Acesite to PAGCOR is effectively zero-rated. Hence, Acesite may recover the VAT it paid on its sale of food and beverages to PAGCOR.

Note: Let's now discuss the most important zero-rated sale in the enumeration, Section 108(B)(2). This is an exception to the destination principle. Remember that under the destination principle, goods and services are taxed only in the country where they are consumed; Section 108(B)(2) is an exception because, although the services are performed in the Philippines, the sales of such services are zero-rated. As the Supreme Court explained in AMERICAN EXPRESS INTERNATIONAL V. CIR [JUNE 29, 2005]: while, as a general rule, the VAT system uses the destination principle as the basis for the jurisdictional reach of the tax, such that goods and services are taxed only in the country where they are consumed, exceptions to the destination principle are found in Section 108(B) of the 1997 Tax Code. In that case, Amex Phils. facilitated in the Philippines the collection and payment of receivables belonging to its Hong Kong-based foreign client, Amex HK, and was paid for it in acceptable foreign currency accounted for in accordance with the rules and regulations of the BSP. As such, its services are deemed exceptions: although the services are performed in the Philippines, the sales of such services are considered zero-rated.

Q: What are the requisites for the zero-rating of the sale of service under Section 108(B)(2)?

1. The service is performed in the Philippines;
2. The service falls under any of the categories provided in Section 108(B);
3. It is paid for in acceptable foreign currency that is accounted for in accordance with the regulations of the Bangko Sentral ng Pilipinas; and
4. The recipient of such services is doing business outside the Philippines.

Q: Placer Dome Inc. (PDI) owns 39.9% of Marcopper. It undertook to clean up and rehabilitate the Makalupnit and Boac Rivers in Marinduque, which were affected by the mining operations. PDI engaged the services of Placer Dome Technical Services Limited (PD Canada), a non-resident foreign corporation in Canada, which, in turn, engaged the services of Placer Dome Technical Services Philippines (PD Philippines). PD Philippines filed a claim for tax credit/refund, contending that its sale of services to PD Canada was zero-rated. The CIR invokes the destination principle, contending that PD Philippines' services, while rendered to a non-resident foreign corporation, are not destined to be consumed abroad. Is the CIR correct?

No. In CIR V. PLACER DOME [JUNE 8, 2007], the Supreme Court reiterated its ruling in AMERICAN EXPRESS INTERNATIONAL V. CIR [JUNE 29, 2005] to the effect that the services enumerated in Section 108(B) constitute exceptions to the destination principle and are zero-rated. Since PD Philippines' services meet the requirements of Section 108(B)(2), they are zero-rated.
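The residential-lease thresholds discussed earlier (P12,800 per unit per month and P1,919,500 aggregate annual receipts, per RR 16-2011) lend themselves to a short sketch. The helper below is hypothetical and uses illustrative numbers only; it is not an official computation.

```python
# Hypothetical sketch of the lease-of-residential-unit rules.
# Units at P12,800/month or below are VAT-exempt regardless of totals;
# receipts from pricier units attract 12% VAT only if their aggregate
# annual amount exceeds P1,919,500, else the 3% tax under Section 116.

MONTHLY_EXEMPT_CAP = 12_800
ANNUAL_VAT_THRESHOLD = 1_919_500

def classify_rentals(monthly_rents, months=12):
    """Split annual receipts into exempt vs. covered, and name the regime."""
    exempt = sum(r * months for r in monthly_rents if r <= MONTHLY_EXEMPT_CAP)
    covered = sum(r * months for r in monthly_rents if r > MONTHLY_EXEMPT_CAP)
    regime = ("12% VAT" if covered > ANNUAL_VAT_THRESHOLD
              else "3% percentage tax (Sec. 116)")
    return exempt, covered, regime
```

For example, a lessor with one P10,000 unit and one P20,000 unit has P120,000 of exempt receipts and P240,000 of covered receipts, which fall under the 3% percentage tax since they do not exceed the annual threshold.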
Q: American Express Philippines (AMEX-P) is a Philippine branch of AMEX International. AMEX-P is a servicing unit of AMEX Hong Kong (AMEX-HK) and facilitates the collection of AMEX-HK receivables from card members in the Philippines. AMEX-P claimed a refund of its input taxes arising from zero-rated sales of services to AMEX-HK. The CIR argues that AMEX-P's services must be consumed abroad in order to be zero-rated. Is the CIR correct?

No. In AMERICAN EXPRESS INTERNATIONAL V. CIR [JUNE 29, 2005], the Supreme Court opined that consumption abroad is not a requisite for zero-rating under Section 108(B)(2), which is an exception to the destination principle; services performed in the Philippines and paid for in acceptable foreign currency accounted for in accordance with BSP rules are zero-rated.

Q: A foreign consortium composed of Burmeister Denmark and Mitsui Engineering entered into a contract with NAPOCOR for the operation and maintenance of two barges. The Consortium appointed Burmeister Denmark as coordination manager. Burmeister Denmark established Burmeister Mindanao, to which it subcontracted the operation and maintenance of the two barges. NAPOCOR paid the foreign consortium, while the Consortium, in turn, paid Burmeister Mindanao foreign currency inwardly remitted into the Philippines. The BIR refused to grant a refund because the services were not destined for consumption abroad. Are the services of Burmeister Mindanao entitled to zero-rated status?

Yes. In CIR V. BURMEISTER AND WAIN SCANDINAVIAN CONTRACTOR MINDANAO, INC. [JANUARY 22, 2007], the Supreme Court held that they are entitled to zero-rated status and to the refund, but only for the period prior to the filing of the CIR's answer in the CTA. This is so because, prior thereto, Burmeister had secured a ruling from the BIR allowing the zero-rating of its sales. Such ruling, however, was valid only until the CIR filed its answer in the CTA, which amounted to a revocation of the said ruling; the revocation cannot be made retroactive. It must be noted, however, that without this special circumstance, Burmeister would not have been entitled to a zero-rated status.
This is because the Consortium, which was the recipient of the services rendered by Burmeister, was deemed doing business within the Philippines. While the Consortium's principal members are non-resident foreign corporations, the Consortium itself is doing business in the Philippines. Hence, the transactions of Burmeister Mindanao are not subject to VAT at zero percent.

Q: ABC is a business process outsourcing company engaged in the business of providing call center services from the Philippines to domestic and offshore businesses. Can ABC claim a refund or the issuance of a TCC for its excess input tax paid on domestic purchases of goods and services allegedly attributable to ABC's zero-rated sales of services?

Yes, provided it meets the following requisites:
1. The services must be other than processing, manufacturing or repacking of goods;
2. Payment for such services must be in acceptable foreign currency accounted for in accordance with BSP rules and regulations; and
3. The recipient of such services is doing business outside the Philippines.

Note: In SITEL PHILIPPINES CORPORATION V. CIR [CTA CASE NO. 7623, MARCH 3, 2010], ACCENTURE V. CIR [CTA CASE NO. 7046, SEPTEMBER 22, 2009], and PARLANCE SYSTEMS V. CIR [CTA CASE NO. 7459, JULY 9, 2009], business process outsourcing companies were refused refunds of their excess input VAT because their sales of services were not zero-rated: they failed to prove that their clients were non-resident foreign corporations doing business outside the Philippines.

Q: AB ROHQ is an ROHQ of X Corp, a foreign corporation organized under the laws of New York, USA. AB ROHQ is a VAT-registered taxpayer engaged in providing services including logistics, research and development, product development, data processing and communication, and business development. It provides services solely and exclusively for its head office. AB ROHQ filed a claim for refund or issuance of a TCC for input VAT paid on purchases arising from its alleged zero-rated sale of services to X Corp. Are the services rendered by AB ROHQ to its head office deemed VAT zero-rated?

No. The services performed by AB ROHQ for X Corp do not qualify for zero-rating because X Corp cannot be considered as doing business outside the Philippines. The phrase "other persons doing business outside the Philippines" under Section 108(B)(2) shall be deemed to pertain exclusively to affiliates, subsidiaries, or branches of ROHQs. X Corp, as the mother company of AB ROHQ, cannot be considered an affiliate, subsidiary or branch, for the simple reason that X Corp and AB ROHQ must be considered as one and the same entity for purposes of taxation. Further, X Corp is considered doing business in the Philippines through AB ROHQ.

Note (re item 7 of the zero-rated sales of services): In MINDANAO GEOTHERMAL PARTNERSHIP V. CIR [CTA CASE NO. 7801, JULY 10, 2012], the CTA held that in order to qualify for VAT zero-rating under Section 108(B)(7) of the NIRC, as amended, the taxpayer must be able to prove that it is a generation company and that it is engaged in the sale of power or fuel generated through renewable sources of energy.

--------------------------------------------------------------
15. VAT exempt transactions
a) VAT exempt transactions, in general
b) Exempt transactions, enumerated
--------------------------------------------------------------
Read Section 109, Tax Code

Q: What are VAT-exempt transactions?

SECTION 109(A) TO (V) provides for the following (this enumeration is exclusive; the underlined items are the notable VAT-exempt transactions):

a) Sale or importation of agricultural and marine food products in their original state. Such products are still considered in their original state even if they have undergone simple processes of preparation or preservation for the market, such as freezing, drying, salting, broiling, roasting, smoking, or stripping; polished and/or husked rice, corn grits, raw cane sugar and molasses, ordinary salt and copra shall be considered in their original state.

b) Sale or importation of fertilizers; seeds, seedlings and fingerlings; and fish, prawn, livestock and poultry feeds. This does not include specialty feeds for race horses, fighting cocks, aquarium fish, zoo animals, and other animals generally considered as pets.

c) Importation of personal and household effects belonging to residents of the Philippines returning from abroad.

d) Importation of professional instruments and implements, wearing apparel, domestic animals and personal household effects belonging to persons coming to settle for the first time in the Philippines.

e) Services subject to percentage tax.

f) Services by agricultural contract growers and milling for others of palay into rice, corn into grits and sugarcane into raw sugar.

g) Medical, dental, hospital and veterinary services, except those rendered by professionals. (But see the discussion below on the VAT exemption of doctors registered with the PRC and lawyers registered with the IBP.)

h) Educational services rendered by private educational institutions duly accredited by the DEPED, CHED and TESDA, and those rendered by government educational institutions.

i) Services rendered pursuant to an employer-employee relationship.

j) Services rendered by regional or area headquarters established in the Philippines.

k) Transactions which are exempt under international agreements to which the Philippines is a signatory or under special laws.

l) Sales by agricultural cooperatives duly registered with the Cooperative Development Authority.

m) Gross receipts from lending activities by credit or multi-purpose cooperatives duly registered with the Cooperative Development Authority whose lending is limited to members.

n) Sales by non-agricultural, non-electric and non-credit cooperatives duly registered with the Cooperative Development Authority, provided that the share capital contribution of each member does not exceed P15,000.

o) Export sales by persons who are not VAT-registered.

p) Sales of real properties not primarily held for sale to customers or held for lease in the ordinary course of trade or business, or sales within the low-cost cap of below P1,919,500 (previously P1.5 million; amended by RR 16-2011 [OCTOBER 27, 2011]) for a residential lot and P3,199,200 (previously P2.5 million; ibid.) for a house and lot and other residential dwellings.

q) Lease of a residential unit with a monthly rental not exceeding P12,800 (previously P10,000; amended by RR 16-2011 [OCTOBER 27, 2011]).

r) Sale, importation, printing or publication of books and any newspaper, magazine, review or bulletin which appears at regular intervals with fixed prices for subscription and sale and is not devoted principally to the publication of paid advertisements.

s) Sale, importation, or lease of passenger or cargo vessels and aircraft, including engine, equipment and spare parts thereof, for domestic or international transport operations.

t) Importation of fuels, goods and supplies by persons engaged in international shipping or air transport operations.

u) Services of banks, non-bank financial intermediaries performing quasi-banking functions, and other non-bank financial intermediaries. Note that in FIRST PLANTERS PAWNSHOP V. CIR [JULY 30, 2008], the Supreme Court held that First Planters Pawnshop was subject to VAT as a lending investor; the factual circumstances of that case, however, pertained to a taxable period prior to RA No. 9238. What is important to note in the case is that the Supreme Court stated that pawnshops should now be treated as non-bank financial intermediaries and, as such, are not subject to VAT.

v) Sale or lease of goods or properties, or the performance of services other than the transactions mentioned in the preceding paragraphs, the gross annual sales and/or receipts of which do not exceed P1,919,500 (previously P1.5 million; amended by RR 16-2011 [OCTOBER 27, 2011]).

Q: Are medical services rendered by doctors registered with the PRC and legal services rendered by lawyers registered with the IBP subject to VAT?

No. RR 7-2004 [MAY 7, 2004] excludes services rendered by doctors registered with the PRC and services rendered by lawyers registered with the IBP, as well as GPPs formed for the sole and exclusive purpose of practicing law or medicine, from the coverage of VAT on services.

Q: Is the sale of e-books and e-journals appearing at regular intervals with fixed prices for subscription and sale and not devoted principally to the publication of paid advertisements subject to VAT?

Yes. The terms "book," "newspaper," "magazine," "review" and "bulletin" in the exemption refer to printed materials in hard copies and do not include those in digital or electronic format or computerized versions; the exemption therefore does not cover e-books and e-journals (RMC No. 75-2012 [November 22, 2012]).

Q: S and ABS-CBN entered into an agreement whereby S will provide his services exclusively to ABS-CBN as a talent for the latter's TV and radio shows. Is he liable to pay VAT?

Yes, provided that there exists no employer-employee relationship between S and ABS-CBN. In SONZA V. ABS-CBN [JUNE 10, 2004], the Supreme Court held that an independent contractor is liable to pay VAT; Section 109 exempts from VAT only services rendered pursuant to an employer-employee relationship.

Note: Although not included in the coverage, note also that, by virtue of RA No. 10378, approved March 7, 2013, the transport of passengers by international carriers is a VAT-exempt transaction.

In COMMISSIONER OF INTERNAL REVENUE V. SEMIRARA MINING CORPORATION [CTA EB NO. 752, MARCH 22, 2012], the CTA held that a coal operator with a coal operating contract with the government is exempt from value-added tax.
In order to encourage and promote said policy, Section 16 of PD 972 expressly grants a tax incentive to operators of a contract under the said Decree, which exempts them from all taxes except income tax.

---------------------------------------------------------------
16. Input tax and output tax, defined
---------------------------------------------------------------
Note: I already discussed this.
---------------------------------------------------------------
17. Sources of input tax
a) Purchase or importation of goods
b) Purchase of real properties for which a VAT has actually been paid
c) Purchase of services in which VAT has actually been paid
d) Transactions deemed sale
e) Presumptive input tax
f) Transitional input tax
---------------------------------------------------------------

Read Section 110(A), Tax Code

Q: What are the sources of input tax?

1. Purchase or importation of goods:
a. For sale; or
b. For conversion into or intended to form part of a finished product for sale, including packaging materials; or
c. For use as supplies in the course of business;
d. For use as materials supplied in the sale of service;
e. For use in trade or business for which deduction for depreciation or amortization is allowed under the Tax Code, except automobiles, aircraft and yachts.
2. Purchase of real properties for which a VAT has actually been paid
3. Purchase of services in which VAT has actually been paid
4. Transactions deemed sale
5. Presumptive input tax
6. Transitional input tax (see Section 4.110-1, RR 16-2005)

FORT BONIFACIO DEVELOPMENT CORPORATION V. CIR, G.R. NO. 173425, SEPTEMBER 4, 2012

DOCTRINE: Prior payment of taxes is not required for a taxpayer to avail of the 8% transitional input tax credit.

FACTS: Fort Bonifacio Development Corporation (FBDC) purchased from the government in 1995 a portion of the Fort Bonifacio reservation, now known as the Fort Bonifacio Global City. No VAT on the sale of the land was passed on by the government to FBDC.
On January 1, 1996, Republic Act 7716 took effect, amending certain provisions of the NIRC. One of the amendments is the extension of the coverage of the VAT to the sale of real properties held primarily for sale to customers or held for lease in the ordinary course of business. In September 1996, FBDC submitted to the BIR an inventory of all its real properties, claiming that it is entitled to the transitional input tax credit on said inventories. FBDC started selling Global City lots in October 1996.

For the 1st quarter of 1997, FBDC paid output taxes on the sale of lots after deducting input taxes. Realizing that the transitional input taxes were not applied against the output VAT, which would have resulted in no net output VAT liability (the transitional input taxes being higher), FBDC filed a claim for refund of the VAT payment. The Court of Tax Appeals (CTA) denied the claim on the ground that the benefits of the transitional input tax credit come with the condition that business taxes should have been paid. Since FBDC acquired the property from the government free of VAT, it cannot avail of the transitional input tax credit. The Court of Appeals (CA) affirmed the decision of the CTA, saying that FBDC is not entitled to the transitional input tax credit since it did not pay any VAT when it purchased the Global City property.

HELD: The Supreme Court (SC) reversed the decision of the CA and granted the refund. According to the SC, there is nothing in Section 105 of the old NIRC that indicates that prior payment of taxes is necessary for the availment of the transitional input tax credit. All that is required is for the taxpayer to file a beginning inventory with the BIR.

---------------------------------------------------------------
e) Presumptive input
---------------------------------------------------------------
Read Section 111(B), Tax Code

Q: What is the rule on presumptive input tax credits?
Persons or firms engaged in the processing of sardines, mackerel and milk, and in the manufacturing of refined sugar, cooking oil and packed noodle-based instant meals, shall be allowed a presumptive input tax, creditable against the output tax, equivalent to 4% of the gross value in money of their purchases of primary agricultural products which are used as inputs to their production.

---------------------------------------------------------------
19. Determination of output/input tax; VAT payable; excess input tax credits
a) Determination of output tax
b) Determination of input tax creditable
c) Allocation of input tax on mixed transaction
d) Determination of the output tax and VAT payable and computation of VAT payable or excess tax credits
---------------------------------------------------------------
Note: Remember the formula! (Output tax less creditable input tax equals VAT payable or excess input tax credits.)
---------------------------------------------------------------
18. Persons who can avail of input tax credit
---------------------------------------------------------------

Q: Who may avail of input tax credit?

1. The importer, upon payment of VAT prior to the release of goods from customs custody
2. The purchaser of the domestic goods or properties, upon consummation of the sale
3. The purchaser of services, or the lessee or licensee, upon payment of the compensation, rental, royalty or fee
4. The purchaser of real property, under a cash/deferred payment basis, upon consummation of the sale, or, if on installment basis, upon every installment payment (Section 4.110-2, RR 16-2005)

---------------------------------------------------------------
b) Determination of input tax creditable
---------------------------------------------------------------
Read Section 110(C), Tax Code

Q: How is the creditable input tax determined?

The amount of input taxes creditable during a month or quarter shall be determined by:

1.
Adding all the creditable input taxes arising from the transactions during the month or quarter, plus any amount of input tax carried over from the preceding month or quarter;
2. Reduced by the amount of claim for VAT refund or TCC and other adjustments, such as purchase returns or allowances, input tax attributable or allocated to exempt sales, and input tax attributable to sales to government subject to final withholding VAT (Section 4.110-5, RR 16-2005)

Page 46 of 164 Last Updated: 30 July 2013 (v3)

---------------------------------------------------------------
c) Allocation of input tax on mixed transaction
---------------------------------------------------------------

Q: Explain the rule on the apportionment of input VAT on mixed transactions.

SECTION 4.110-4 OF RR 16-2005 [SEPTEMBER 1, 2005] provides that a VAT-registered taxpayer who is also engaged in transactions not subject to VAT shall be allowed to recognize input tax credit as follows:

1. All input taxes that can be directly attributed to transactions subject to VAT may be recognized for input tax credit.

Exception: Input taxes that can be directly attributable to VAT-taxable sales to the Government or any of its political subdivisions, instrumentalities or agencies shall not be credited against output taxes arising from sales to non-Government entities.

2. The input tax attributable to VAT-exempt sales shall not be allowed as credit against output tax, but should be treated as part of the cost of the asset or as operating expense.

3. If any input tax cannot be directly attributed to either a VAT-taxable or VAT-exempt transaction, the input tax shall be pro-rated between the VAT-taxable and VAT-exempt transactions, and only the ratable portion pertaining to transactions subject to VAT may be recognized for input tax credit.

Note: To illustrate by way of computation. The following input taxes were passed on by the VAT suppliers:

Input tax on taxable goods at 12% - P5,000
Input tax on zero-rated sales - P3,000
Input tax on sale of exempt goods - P2,000
Input tax on sale to government - P4,000
Input tax on depreciable capital, not attributable to any specific activity (monthly amortization for 60 months) - P20,000
ABC Corporation had the following sales during the month:

Sale to private entities subject to 12% - P100,000
Sale to private entities subject to 0% - P100,000
Sale of exempt goods - P100,000
Sale to government subject to 5% FWT - P100,000
Total sales for the month - P400,000

The creditable input VAT available for each of the respective types of transactions entered into by ABC Corp is as follows:

1. For the sales subject to 12% VAT: (i) actual input VAT of P5,000 and (ii) ratable portion of P5,000
2. For the sales subject to 0% VAT: (i) actual input VAT of P3,000 and (ii) ratable portion of P5,000
3. For the sale of exempt goods: no input VAT is creditable, as the transactions are VAT-exempt
4. For the sales to government: no input VAT is creditable, as the law imposes a 5% FWT obligation on the government agency-payor.

How was the ratable portion of creditable input VAT for VAT-taxable and zero-rated sales computed?

For input VAT creditable on VAT-taxable sales: P20,000 (the input tax not attributable to any specific activity) x P100,000/P400,000 = P5,000. The same formula, applied to the zero-rated sales of P100,000, yields the P5,000 ratable portion for zero-rated sales.

---------------------------------------------------------------
d) Determination of the output tax and VAT payable and computation of VAT payable or excess tax credits
---------------------------------------------------------------

Read Section 110(B), Tax Code

Q: Give the three possible scenarios that may arise in computing the VAT payable.

If at the end of any taxable month or quarter:

Output tax = input tax : No VAT payable
Output tax > input tax : The excess shall be paid by the VAT-registered person
Output tax < input tax : The excess shall be carried over to the succeeding quarter or quarters

Note: We will discuss what the required invoices are later.

For input taxes on transactions where VAT was withheld, the supporting document is a copy of the Monthly Remittance Return of Value-Added Tax Withheld (BIR Form 1600) filed by the resident payor in behalf of the non-resident, evidencing remittance of the VAT due which was withheld by the payor; for advance VAT, the Payment Order showing payment of the advance VAT.
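The apportionment illustrated above reduces to a simple pro-rata formula: unattributed input VAT multiplied by the ratio of each type of sale to total sales. A minimal sketch using the ABC Corporation figures (variable names are mine, not the regulation's):

```python
# Pro-rata allocation of input VAT that cannot be directly attributed
# (per the rule in Section 4.110-4, RR 16-2005), using the ABC Corp figures.
sales = {
    "taxable_12pct": 100_000,
    "zero_rated": 100_000,
    "exempt": 100_000,
    "government_5pct_fwt": 100_000,
}
unattributed_input_vat = 20_000  # monthly amortization not attributable to any activity
total_sales = sum(sales.values())

# Ratable portion per type of sale: 20,000 * 100,000 / 400,000 = 5,000 each.
ratable = {k: unattributed_input_vat * v // total_sales for k, v in sales.items()}

# Only the taxable and zero-rated sales may credit their ratable portions,
# on top of the input VAT actually attributed to them.
creditable_taxable = 5_000 + ratable["taxable_12pct"]   # P10,000
creditable_zero_rated = 3_000 + ratable["zero_rated"]   # P8,000
```

The exempt and government-sale portions of the P20,000 are not creditable, matching items 3 and 4 of the illustration.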
Note: If input VAT results from zero-rated or effectively zero-rated transactions, any excess over the output taxes shall be refunded to the taxpayer or credited against other internal revenue taxes, at the taxpayer's option.

---------------------------------------------------------------
20. Substantiation of input tax credits
---------------------------------------------------------------

Q: What are the substantiation requirements of input tax credits?

Input taxes must be substantiated and supported by the following documents, and must be reported in the information returns required to be submitted to the Bureau (see Section 4.110-8, RR 16-2005):

1. For the importation of goods: import entry or other equivalent document showing actual payment of VAT on the imported goods
2. For the domestic purchase of goods and properties: invoice showing the information required under Sections 113 and 237 of the Tax Code
3. For the purchase of real property: public instrument (i.e., deed of absolute sale, deed of conditional sale, contract/agreement to sell, etc.), together with the VAT invoice issued by the seller
4. For the purchase of services: official receipt showing the information required under Sections 113 and 237 of the Tax Code
5. For the transitional input tax: inventory of goods as shown in a detailed list to be submitted to the BIR
6. For input tax on transactions deemed sale: the required invoices

---------------------------------------------------------------
21. Refund or tax credit of excess input tax
a) Who may claim for refund/apply for issuance of tax credit certificate
b) Period to file claim/apply for issuance of TCC
c) Manner of giving refund
d) Destination principle or cross-border doctrine
---------------------------------------------------------------

Read Section 112(C), Tax Code

Q: Who may claim for refund/apply for issuance of tax credit certificate?

A VAT-registered person whose sales of goods, properties or services are zero-rated or effectively zero-rated may apply for the issuance of a TCC or refund of input tax attributable to such sales (Section 4.112-1, RR No. 16-2005).
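The three output-versus-input scenarios tabulated earlier, together with the refund/credit option for excess input VAT from zero-rated sales, can be sketched as follows (a simplified illustration; the function and parameter names are mine):

```python
def vat_position(output_tax: int, input_tax: int, from_zero_rated: bool = False) -> str:
    """Classify the end-of-period VAT position, per Section 110(B)."""
    if output_tax > input_tax:
        # Excess output tax is the VAT payable by the VAT-registered person.
        return f"VAT payable: {output_tax - input_tax}"
    if output_tax < input_tax:
        excess = input_tax - output_tax
        if from_zero_rated:
            # Zero-rated excess: refund or credit against other internal revenue taxes.
            return f"Excess input tax of {excess}: refundable or creditable against other internal revenue taxes"
        return f"Excess input tax of {excess}: carried over to the succeeding quarter(s)"
    return "No VAT payable"
```

For example, `vat_position(12_000, 5_000)` reports a VAT payable of 7,000, while `vat_position(5_000, 8_000, from_zero_rated=True)` flags the excess as refundable or creditable.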
Note: The refund or application for issuance of TCC must be filed with the appropriate BIR office, the Large Taxpayers Service (LTS) or the RDO having jurisdiction over the principal place of business of the taxpayer. Direct exporters may file their claim for TCC with the One Stop Shop Center of the DOF (see Section 4.112-1, RR No. 16-2005). The filing of the claim with one office shall preclude the filing of the same claim with another office.

The proper party to seek refund of an indirect tax is the statutory taxpayer, not the person to whom it is shifted. (EXXON MOBIL PHILIPPINES V. CIR [JANUARY 26, 2011]; SILKAIR V. CIR [FEBRUARY 25, 2010])

In SILKAIR V. CIR [FEBRUARY 25, 2010]:

FACTS: Petitioner filed an administrative claim for refund of the excise taxes paid on its purchases of jet fuel from its supplier oil company for the period July 1, 1998 to December 31, 1998, which it alleged to have been erroneously paid based on Section 135(a) and (b) of the Tax Code of 1997. Due to inaction by respondent Commissioner, petitioner filed a Petition for Review with the Court of Tax Appeals. The CTA denied the petition and ruled that while petitioner's country indeed exempts from excise taxes petroleum products sold to international carriers, petitioner nevertheless failed to comply with the second requirement under Section 135(a) of the 1997 Tax Code, as it failed to prove that the jet fuel delivered by Petron came from the latter's bonded storage tank. Upon the denial of the motion for reconsideration, petitioner elevated the case to the CA. The CA affirmed the denial and ruled that petitioner is not the proper party to seek the refund of the excise taxes paid.

HELD: The Supreme Court held that excise taxes, which apply to articles manufactured or produced in the Philippines for domestic sale or consumption or for any other disposition and to things imported into the Philippines, are in the nature of indirect taxes.
The proper party to question, or to seek a refund of, an indirect tax is the statutory taxpayer, the person on whom the tax is imposed by law and who paid the same, even if he shifts the burden thereof to another. Petitioner, as the purchaser and end-consumer, ultimately bears the tax burden, but this does not transform its status into that of a statutory taxpayer.

Note: In the case of claims for refund of unutilized VAT on account of cessation of business, the 2-year period shall commence from the date of cancellation of registration of the taxpayer and not from the close of the taxable quarter when the sales were made (ASSOCIATED SWEDISH STEELS V. CIR [CTA CASE NO. 7850, AUGUST 23, 2012]). The cancellation of VAT registration commences from the first day of the month following the application, under Section 236 of the Tax Code. (Ibid)

---------------------------------------------------------------
b) Period to file claim/apply for issuance of TCC
---------------------------------------------------------------

Q: What is the prescriptive period to file the claim for refund or application for issuance of TCC?

The written application for the issuance of a TCC or refund must be filed with the BIR within 2 years after the close of the taxable quarter when the relevant sales were made.

Q: In claims for VAT refund/credit, what is the reckoning point for the two-year prescriptive period?

The reckoning point is the close of the taxable quarter when the relevant sales were made.

Note: For this matter, it is important to discuss the leading case of CIR V. MIRANT PAGBILAO CORP. [SEPTEMBER 12, 2008].

In CIR V. MIRANT PAGBILAO CORP. [SEPTEMBER 12, 2008], Mirant generated power which it sold to NAPOCOR, in which connection it secured the services of Mitsubishi Corporation of Japan. In the belief that its sale of power-generation services to NAPOCOR was VAT zero-rated because of NAPOCOR's tax-exempt status, Mirant filed an application for effective zero-rating. The BIR issued a ruling stating that the supply of electricity by Mirant to NAPOCOR shall be subject to 0% VAT. On April 14, 1998, Mirant paid Mitsubishi the VAT component billed by the latter for services rendered. Mirant filed its quarterly VAT return for the 2nd quarter of 1998, where it reflected the input VAT paid to Mitsubishi. Subsequently, on December 20, 1999, Mirant filed an administrative claim for refund of unutilized input VAT arising from its purchase of capital goods from Mitsubishi and its domestic purchases of goods and services attributable to its zero-rated sales of power-generation services to NAPOCOR. The claim was denied for being filed beyond the prescriptive period of two years.

The Supreme Court held that Mirant's claim had prescribed. Unutilized input VAT payments must be claimed within two years reckoned from the close of the taxable quarter when the relevant sales pertaining to the input VAT were made, even if the payment for the VAT was made some quarters after that.(16) The fact that there was a pending request for zero-rating cannot be a basis for the late filing of the return and payment of taxes. Further, Mirant cannot avail itself of the provisions of either Section 204(C) or 229 of the NIRC which, for the purpose of refund, prescribe the payment of the tax as the starting point for the two-year prescriptive limit for the filing of a claim. These provisions apply only to instances of erroneous payment or illegal collection of internal revenue taxes.

16 Note that previously, in ATLAS CONSOLIDATED MINING V. CIR [JUNE 8, 2007], the rule was that the two-year prescriptive period for filing a claim for refund/credit of input VAT on zero-rated sales was counted from the date of filing of the return.

PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013

Q: What is the period within which tax refund/credit of input taxes shall be made?

The CIR shall grant a tax credit certificate/refund for creditable input taxes within 120 days from the date of submission of complete documents in support of the application. (see Section 112(C), Tax Code)

Note: The 120-day period is counted from the submission of the complete documents with the BIR. (PILIPINAS TOTAL GAS, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA, JANUARY 05, 2012]) Non-submission of complete documents at the administrative level is not fatal to a judicial claim. (PHILEX MINING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 8228, MAY 31, 2012]) What is fatal to the taxpayer's cause is its failure to submit sufficient evidence, such as invoices and receipts, in support of its claim before the CTA, and not its failure to submit complete documents before the BIR. (COMMISSIONER OF INTERNAL REVENUE VS. PHILIPPINE AIRLINES, INC., CTA EB CASE NO. 775, JULY 24, 2012)

Q: What is the remedy in case of denial by the CIR of the claim for refund, or if the CIR failed to act on the claim within the 120-day period?

In case of full or partial denial of the claim for tax credit certificate/refund:
a) The taxpayer may appeal to the CTA within 30 days from the receipt of said denial; otherwise, the decision shall become final.
b) If no action on the claim for tax credit certificate/refund has been taken by the CIR after the 120-day period within which he must decide, the taxpayer may appeal to the CTA within 30 days from the lapse of the 120-day period.

Note: The judicial claim for refund should be filed within thirty (30) days from the receipt of the decision of the Commissioner of Internal Revenue (CIR) or upon the expiration of the one hundred twenty (120) days in case of inaction of the CIR. (KEPCO PHILIPPINES CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [CTA EB NO. 736 695, JANUARY 10, 2012]; DIAGEO PHILIPPINES, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NOS. 7846 AND 7865, JANUARY 16, 2012]; PHILEX MINING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE, C.T.A. EB NO. 728, AUGUST 31, 2012; PILIPINAS TOTAL GAS, INC. VS. CIR, C.T.A. EB NO. 776, OCTOBER 11, 2012; NORTHWIND DEVELOPMENT CORPORATION VS. CIR, CTA CASE NO. 7918, OCTOBER 03, 2012)

In case of inaction by the BIR, a judicial claim for refund filed beyond thirty (30) days from the expiration of the one hundred twenty (120) days is filed out of time and deprives the Court of the authority to entertain the same. (COMMISSIONER OF INTERNAL REVENUE VS. TEAM SUAL CORPORATION [C.T.A. EB NO. 686, MAY 22, 2012]; CE CASECNAN WATER AND ENERGY COMPANY, INC. VS. CIR, CTA EB NO. 726 [CTA CASE NO. 7739, JUNE 26, 2012]; PHILEX MINING CORPORATION VS. CIR [CTA EB NO. 778, CTA CASE NO. 7720, JUNE 26, 2012])

As the provision is phrased, the word "may" relates to the taxpayer's option to appeal or not to appeal upon the denial of its claim for refund or after the expiration of the 120-day period. However, if the taxpayer opts to appeal, such claim must be filed within the 30-day period counted from receipt of the denial or the expiration of the 120-day period. Thus, it is the option to appeal which is permissive; the period to appeal, however, must be mandatorily complied with. (MINDANAO II GEOTHERMAL PARTNERSHIP VS. COMMISSIONER OF INTERNAL REVENUE, CTA EB CASE NO. 750, JULY 5, 2012)

Q: Can the taxpayer appeal to the CTA without waiting for the lapse of the 120-day period?

No. Where the taxpayer did not wait for the decision of the CIR or the lapse of the 120-day period, the filing of the said judicial claim with the CTA is premature. The non-observance of the 120-day period is fatal to the filing of a judicial claim.

Note: In this regard, let us discuss the leading case of CIR V. AICHI FORGING COMPANY OF ASIA [OCTOBER 6, 2010].

In CIR V.
AICHI FORGING COMPANY OF ASIA [OCTOBER 6, 2010], Aichi Forging, a VAT-registered corporation engaged in the manufacturing and processing of steel, filed a claim for tax credit/refund of its unutilized input tax from purchases and importations attributable to its zero-rated sales. The CIR and the CTA ruled that the administrative and judicial claims were filed beyond the period allowed by law. Moreover, the CIR put in issue the fact that the administrative claim and the judicial claim were filed on the same day. The CIR opined that the simultaneous filing of the claims contravenes the NIRC, which requires the prior filing of an administrative claim.

The Supreme Court first reiterated that unutilized input VAT must be claimed within two years after the close of the taxable quarter when the sales were made, as laid down in CIR V. MIRANT PAGBILAO CORP. [SEPTEMBER 12, 2008]. Going to the administrative and judicial claims, the Court ruled that the administrative claim was timely filed while the judicial claim was premature. In this case, applying the Administrative Code, which states that a year is composed of 12 calendar months, instead of the Civil Code (under which a year is equivalent to 365 days), it is clear that Aichi timely filed its administrative claim within the two-year prescriptive period. On the other hand, the claim of Aichi must be denied for non-observance of the 120-day period. Where the taxpayer did not wait for the decision of the CIR or the lapse of the 120-day period, it having simultaneously filed the administrative and the judicial claims, the filing of the said judicial claim with the CTA is premature. The non-observance of the 120-day period is fatal to the filing of a judicial claim. The claim of Aichi that such non-observance is not fatal as long as both the administrative and judicial claims are filed within the 2-year prescriptive period is without legal basis.
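The Mirant/Aichi timeline reduces to date arithmetic: a 2-year window (reckoned as 24 calendar months, per the Administrative Code rule applied in Aichi) for the administrative claim, then 120 days for the BIR to decide and 30 days to appeal. A hypothetical sketch, not legal advice (the function names are mine):

```python
from datetime import date, timedelta

def admin_claim_deadline(close_of_quarter: date) -> date:
    """Last day to file the administrative claim: 2 years (24 calendar
    months) from the close of the taxable quarter. Quarter-ends never
    fall on Feb 29, so a simple year bump is safe here."""
    return close_of_quarter.replace(year=close_of_quarter.year + 2)

def judicial_appeal_window(complete_docs_submitted: date) -> tuple:
    """In case of BIR inaction: (day the 120-day decision period lapses,
    last day of the 30-day period to appeal to the CTA)."""
    decision_due = complete_docs_submitted + timedelta(days=120)
    return decision_due, decision_due + timedelta(days=30)
```

Using Mirant's facts, a sale in the quarter ending June 30, 1998 gives an administrative-claim deadline of June 30, 2000, which is why the December 20, 1999 filing reckoned from a later "payment" date could not save the claim for earlier quarters.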
The 2-year prescriptive period refers to applications for refund/credit filed with the CIR and not to appeals made to the CTA. Applying the two-year period to judicial claims would render nugatory Section 112(D) of the NIRC, which already provides for a specific period within which a taxpayer should appeal the decision or inaction of the CIR. The 120-day period is crucial in filing an appeal with the CTA.

Note: In other words, the 2-year prescriptive period applies only to the filing of the administrative claim, meaning the filing of the claim for refund or application for TCC with the CIR. If you want to file a suit with the CTA, you wait for the 120-day period to lapse. Because of that, you cannot simultaneously file a claim with the CIR and a suit with the CTA. This early on, I will tell you that the rule is different when it comes to the refund or credit of an erroneously or illegally collected tax under Section 229. There, both the administrative and the judicial claim must be filed within the 2-year prescriptive period. Further, you need not wait for the BIR to act. You can simultaneously file your claim for refund or credit and the suit with the CTA. We will discuss that later in Tax Remedies.

Thus:
1. For the administrative claim: file within 2 years from the end of the taxable quarter when the sales were made.
2. For the judicial claim: the BIR has 120 days to decide. If there is an adverse decision within the 120-day period, the taxpayer has 30 days from receipt of the decision to appeal to the CTA. If there is no BIR decision within 120 days, the taxpayer has 30 days from the 120th day to appeal to the CTA.

Note: (1) Thus, Aichi affirmed the Court's ruling in Mirant that the 2-year prescriptive period shall be reckoned from the end of the taxable quarter when the relevant sales were made, but clarified that such prescriptive period applies only to the filing of the administrative claim. See THIRD MILLENNIUM OIL MILLS, INC. VS. CIR [CTA EB NO. 729 (CTA CASE NO. 7583), JUNE 7, 2012]; CIR VS. PENN PHILIPPINES, INC., CTA EB NO. 693 [CTA CASE NO.
7457), JUNE 27, 2012]. The taxpayer's compliance with the 120-day period under Section 112(C) is both mandatory and jurisdictional. See PROCTER & GAMBLE ASIA, PTE. LTD. VS. CIR [CTA EB CASE NO. 740 (CTA CASE NO. 7683), JUNE 18, 2012]; CARGILL PHILIPPINES, INC. VS. CIR [CTA EB CASE NO. 779 (CTA CASE NOS. 6714 & 7262), JUNE 18, 2012]; PHILEX MINING CORPORATION VS. CIR [CTA EB NO. 817 (CTA CASE NO. 7798), JUNE 13, 2012]; DIAGEO PHILIPPINES, INC. VS. CIR [CTA EB NO. 806 (CTA CASE NO. 7778), JUNE 21, 2012]; PHILEX MINING CORPORATION VS. CIR, CTA EB NO. 808 (CTA CASE NOS. 7859 & 7886), JUNE 6, 2012; CE CASECNAN WATER AND ENERGY COMPANY, INC. VS. CIR, CTA EB NO. 726 (CTA CASE NO. 7739), JUNE 26, 2012.

(2) Citing Aichi, the CTA in numerous cases has held that filing the judicial claim without waiting for the lapse of the 120-day period is fatal. The premature filing of the judicial claim warrants dismissal. See DEUTSCHE KNOWLEDGE SERVICES, PTE. LTD. VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 8165, JANUARY 08, 2013]; CASECNAN WATER AND COMPANY, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA EB NO. 836, JANUARY 28, 2013]; HEDCOR SIBULAN, INC. VS. COMMISSIONER OF INTERNAL REVENUE [C.T.A. CASE NO. 8051, JANUARY 05, 2012]; SITEL PHILIPPINES CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [C.T.A. EB NO. 668, JANUARY 06, 2012]; CE CEBU GEOTHERMAL POWER COMPANY, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA EB NO. 741, JANUARY 12, 2012]; CBK POWER COMPANY LIMITED VS. COMMISSIONER OF INTERNAL REVENUE [CTA EB NO. 760, FEBRUARY 1, 2012]; SAN ROQUE POWER CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 7937, FEBRUARY 8, 2012]; AIR LIQUIDE

Q: How do we reconcile CIR V. MIRANT PAGBILAO CORP. [SEPTEMBER 12, 2008] and CIR V. AICHI FORGING COMPANY OF ASIA [OCTOBER 6, 2010]?

In both Mirant and Aichi: The 2-year prescriptive period is counted from the end of the taxable quarter when the sales were made.
In Mirant: The 2-year prescriptive period applies to both the administrative and the judicial claim; thus, both claims must be filed within 2 years from the end of the taxable quarter when the sales were made.
In Aichi: The 2-year prescriptive period applies only to the administrative claim.

---------------------------------------------------------------
c) Manner of giving refund
---------------------------------------------------------------

Q: What is the manner of giving refund?

Refunds shall be made upon warrants drawn by the Commissioner or by his duly authorized representative without the necessity of being countersigned by the Chairman, Commission on Audit, the provisions of the Administrative Code of 1987 notwithstanding; Provided, that refunds shall be subject to post audit by the Commission on Audit. (See Section 112(D), Tax Code)

Note: If you ask for a tax credit, you get what is called a Tax Credit Certificate (TCC). However, note Executive Order 68 [March 27, 2012]: there will be no more issuance of VAT TCCs, and the EO provides for the monetization of outstanding VAT TCCs; EO 68 allows qualified VAT-registered taxpayers to monetize their outstanding VAT TCCs.

Q: Outline the process for the refund or credit of excess or unutilized input taxes under Section 112(C).

1. Filing and payment.
2. Administrative claim within 2 years counted from the close of the taxable quarter when the relevant sales were made.
3. Submission of additional and relevant supporting documents within 60 days from the filing of the claim.
4. Appeal to the CTA Division within 30 days from receipt of the notice of denial or from the lapse of the 120 days of inaction counted from the submission of documents. The appeal need not be made within the 2-year prescriptive period; what makes a judicial claim premature is filing it without waiting for the 120-day period. Motion for Reconsideration or New Trial to the CTA Division within 15 days from receipt of the decision.
5. Appeal to the CTA En Banc within 15 days from receipt of the resolution. Motion for Reconsideration to the CTA En Banc within 15 days from receipt of the decision.
6.
Appeal to the SC within 15 days from receipt of the resolution, under Rule 45.

---------------------------------------------------------------
22. Invoicing Requirements
a) Invoicing requirements in general
b) Invoicing and recording deemed sale transactions
c) Consequences of issuing erroneous VAT invoice or VAT official receipt
---------------------------------------------------------------
---------------------------------------------------------------
a) Invoicing requirements in general
---------------------------------------------------------------

Read Section 113(A), Tax Code
Read Section 113(B), Tax Code

Q: What information should be contained in the VAT invoice or VAT official receipt?

1. A statement that the seller is a VAT-registered person, followed by his taxpayer's identification number (TIN);
2. The total amount which the purchaser pays or is obligated to pay to the seller, with the indication that such amount includes the value-added tax; provided, that:
a) The amount of the tax shall be shown as a separate item in the invoice or receipt;
b) If the sale is exempt from value-added tax, the term "VAT-exempt sale" shall be written or printed prominently on the invoice or receipt;
c) If the sale is subject to zero percent (0%) value-added tax, the term "zero-rated sale" shall be written or printed prominently on the invoice or receipt;
d) If the sale involves goods, properties or services some of which are subject to VAT and some of which are VAT zero-rated or VAT-exempt, the invoice or receipt shall clearly indicate the breakdown of the sale price between its taxable, exempt and zero-rated components, and the calculation of the value-added tax on each portion of the sale shall be shown on the invoice or receipt; provided, that the seller may issue separate invoices or receipts for the taxable, exempt, and zero-rated components of the sale.
3. The date of transaction, quantity, unit cost and description of the goods or properties or nature of the service; and
4.
In the case of sales in the amount of one thousand pesos (P1,000) or more where the sale or transfer is made to a VAT-registered person, the name, business style, if any, address and taxpayer identification number (TIN) of the purchaser, customer or client. (see Section 4.113-1(B), RR 16-2005)

Q: Is there a difference between an invoice and an official receipt for purposes of substantiation?

Yes. In KEPCO PHILIPPINES V. CIR [NOVEMBER 24, 2010], in ruling on Kepco's contention that an invoice and an official receipt are interchangeable, the Supreme Court stated that only a VAT invoice may be presented to substantiate a sale of goods or properties, while only a VAT receipt can substantiate a sale of services. The VAT invoice is the seller's best proof of the sale of the goods or services to the buyer, while the VAT receipt is the buyer's best evidence of the payment for goods or services received from the seller. Even though VAT invoices and receipts are normally issued by the supplier/seller alone, the said invoices and receipts, taken collectively, are necessary to substantiate the actual amount or quantity of goods sold and their selling price (proof of transaction), and are the best means to prove the input VAT payments (proof of payment). Hence, the VAT invoice and the VAT receipt should not be confused as referring to one and the same thing. Certainly, neither does the law intend the two to be used alternatively.

Note: The unamended Section 113 did not distinguish between an invoice and a receipt when used as evidence of a zero-rated transaction. Thus, in the case of transactions which took place during the period of the unamended law, the Court could accept either or both of the documents as evidence of zero-rated transactions. (SOUTHERN PHILIPPINES V. CIR [OCTOBER 19, 2011]; AT&T COMMUNICATIONS SERVICES PHILIPPINES V. CIR [AUGUST 3, 2010])

Q: What are the invoicing and recording requirements for deemed sale transactions?
Deemed sale transaction and the corresponding invoicing and recording requirements:

1. Transfer, use or consumption not in the course of business of goods or properties originally intended for sale or use in the course of business
- A memorandum entry in the subsidiary sales journal to record the withdrawal of goods for personal use

2. Distribution or transfer to shareholders/investors or creditors
- Invoice, at the time of the transaction, which should include all the information prescribed in Sec. 113(B)

3. Consignment of goods if actual sale is not made within 60 days
- Invoice, at the time of the transaction, which should include all the information prescribed in Sec. 113(B)

4. Retirement from or cessation of business with respect to all goods on hand
- An inventory shall be prepared and submitted to the RDO who has jurisdiction over the taxpayer's principal place of business not later than 30 days after retirement or cessation from the business. An invoice shall be prepared for the entire inventory, which shall be the basis of the entry into the subsidiary sales journal. The invoice need not enumerate the specific items appearing in the inventory regarding the description of the goods. However, the sales invoice number should be indicated in the inventory filed, and a copy thereof shall form part of this invoice.
i. If the business is to be continued by the new owners or successors, the entire amount of output tax on the amount deemed sold shall be allowed as input taxes.
ii. If the business is to be liquidated and the goods in the inventory are sold or disposed of to VAT-registered buyers, an invoice or instrument of sale or transfer shall be prepared, citing the invoice number wherein the tax was imposed on the deemed sale. At the same time, the tax paid corresponding to the goods sold should be separately indicated in the instrument of sale.

--------------------------------------------------------------
c) Consequences of issuing erroneous VAT invoice or VAT official receipt
--------------------------------------------------------------
Read Section 113(D), Tax Code

Q: What are the consequences of issuing erroneous VAT invoices or VAT official receipts?
1. If a person who is not VAT-registered issues an invoice or receipt showing his TIN, followed by the word "VAT", the erroneous issuance shall result in the following:
a) The non-VAT person shall be liable to the:
i. percentage taxes applicable
ii. VAT due on the transactions without the benefit of any input tax credit
iii. 50% surcharge as penalty
b) The VAT shall, if the other requisite information required is shown on the invoice or receipt, be recognized as an input tax credit to the purchaser.
2. If a VAT-registered person issues a VAT invoice or VAT official receipt for a VAT-exempt transaction, but fails to display prominently on the invoice or receipt the term "VAT-exempt Sale", the issuer shall be liable to account for the VAT imposed. The purchaser shall be entitled to claim an input tax credit on said purchase. (see Section 4.113-4, RR 16-2005)

Note: Failure or refusal to comply with the requirement that the amount of tax shall be shown as a separate item

Q: What is the effect of the failure to comply with the invoicing requirements on the claim for refund or credit of input VAT on zero-rated sales?
The claim for refund of unutilized or excess input taxes on the alleged zero-rated sales will be denied. The invoicing requirements are mandatory, and the failure to comply is fatal in claims for a refund or credit of input VAT on zero-rated sales. (SILICON PHILIPPINES V. CIR [JANUARY 21, 2011]; see also MICROSOFT PHILIPPINES V. CIR [APRIL 6, 2011]; PANASONIC COMMUNICATION IMAGING CORP V. CIR [FEBRUARY 8, 2010]; JRA PHILIPPINES V. CIR [OCTOBER 11, 2010]; HITACHI GLOBAL STORAGE TECHNOLOGIES PHILIPPINES CORP V. CIR [OCTOBER 20, 2010]; KEPCO PHILIPPINES CORP V. CIR [NOVEMBER 24, 2010])
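Stepping back to the Section 113(B)(2)(d) breakdown rule: for a mixed sale, output VAT is computed only on the taxable component, while the exempt component carries no VAT and the zero-rated component is taxed at 0%. A minimal sketch of that arithmetic follows; the 12% rate and the peso figures are illustrative assumptions, not taken from the text.

```python
# Sketch of the Sec. 113(B)(2)(d) breakdown for a mixed invoice.
# The 12% regular VAT rate and the amounts below are illustrative.
VAT_RATE = 0.12

def invoice_breakdown(taxable, exempt, zero_rated):
    """Return the VAT per component and the total amount payable."""
    vat = {
        "taxable": round(taxable * VAT_RATE, 2),   # 12% output VAT, shown separately
        "VAT-exempt sale": 0.0,                    # no VAT on the exempt portion
        "zero-rated sale": 0.0,                    # taxed at 0%, so no VAT either
    }
    total = taxable + exempt + zero_rated + vat["taxable"]
    return vat, total

vat, total = invoice_breakdown(taxable=10_000.0, exempt=2_000.0, zero_rated=5_000.0)
print(vat)    # {'taxable': 1200.0, 'VAT-exempt sale': 0.0, 'zero-rated sale': 0.0}
print(total)  # 18200.0
```

Note how the zero-rated and exempt components both show zero VAT on the face of the invoice, which is why the law requires the labels "zero-rated sale" and "VAT-exempt sale" to distinguish them: only the zero-rated portion supports an input VAT refund claim.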
WESTERN MINDANAO POWER CORPORATION V. CIR, G.R. NO. 181136, JUNE 13, 2012
DOCTRINE: Failure to print the word "zero-rated" on the VAT invoices or official receipts is fatal in claims for a refund or credit of input VAT on zero-rated sales, even if the claims were made prior to the effectivity of R.A. 9337.
FACTS: The taxpayer contended that RR 7-95 constitutes an undue expansion of the scope of the legislation it seeks to implement, on the ground that the statutory requirement for imprinting the phrase "zero-rated" on VAT official receipts appears only in Republic Act No. 9337. This law took effect on 1 July 2005, or long after petitioner had filed its claim for a refund.
HELD: The Supreme Court held that in a claim for tax refund or tax credit, the applicant must prove not only entitlement to the grant of the claim under substantive law; it must also show satisfaction of all the documentary and evidentiary requirements for an administrative claim for a refund or tax credit. Hence, the mere fact that the taxpayer's application for zero-rating has been approved by the CIR does not, by itself, justify the grant of a refund or tax credit. The taxpayer claiming the refund must further comply with the invoicing and accounting requirements mandated by the NIRC, as well as by the revenue regulations implementing them. It further held that RR 7-95 proceeds from the rule-making authority granted to the Secretary of Finance by the NIRC for the efficient enforcement of the Tax Code and its amendments.

Q: Kepco filed a claim for refund of unutilized input VAT based on its zero-rated sale of power to NAPOCOR. A substantial portion of the claim was denied for having been supported by VAT invoices which only had the TIN-VAT stamped and not printed. Is Kepco entitled to the claim for refund?
No. In KEPCO PHILIPPINES V. CIR [NOVEMBER 24, 2010], the Supreme Court ruled that the requirement that the TIN be imprinted, and not merely stamped, is a reasonable requirement imposed by the BIR.

Page 56 of 164 Last Updated: 30 July 2013 (v3)
The failure to adhere to this rule will not only expose the taxpayer to penalties but will also serve to disallow the claim.

Q: Is the printing of the Authority to Print (ATP) required in the invoices or receipts?
No. The ATP need not be reflected in the invoices or receipts because there is no law or regulation requiring it. Failure to print the ATP on the invoices or receipts should not result in the outright denial of a claim or the invalidation of the invoices or receipts for purposes of claiming a refund. But, while there is no such law, the Tax Code requires persons engaged in business to secure an ATP from the BIR prior to printing invoices or receipts. Since the ATP is not indicated in the receipts or invoices, the only way to verify whether the invoices or receipts are duly registered is by requiring the claimant to present its ATP from the BIR, without which the invoices or receipts would have no probative value for the purpose of a refund. (SILICON PHILIPPINES V. CIR [JANUARY 21, 2011])

Note: A taxpayer exempt from VAT but opting to be registered as a VAT taxpayer may be held liable for VAT deficiency for failure to print the words "VAT-exempt sale" on the official receipts issued to its PEZA-registered lessee. (FIRST SUMIDEN REALTY, INC. V. CIR [CTA CASE NO. 8151, SEPTEMBER 27, 2012])

Note: A refund claim under Section 229 of the Tax Code does not require proof of compliance with the invoicing requirements. (ERICSSON TELECOMMUNICATIONS, INC. V. CIR [CTA CASE NO. 8027, AUGUST 2, 2012])

--------------------------------------------------------------
23. Filing of return and payment
--------------------------------------------------------------
Read Section 114(A) and (B), Tax Code

Q: Who are required to file a VAT return?
1. Every person or entity who, in the course of his trade or business, sells or leases goods, properties and services subject to VAT, if the aggregate amount of actual gross sales or receipts exceeds P1,919,500 for any 12-month period;
2. A person required to register as a VAT taxpayer but who failed to register;
3. Any person who imports goods; and
4. Professional practitioners whose gross professional fees exceed P1,919,500 for any 12-month period.

Q: What are the rules regarding the time for filing the return and payment of the tax?
Every person liable to pay VAT shall file:
a. A monthly VAT declaration, whether the taxpayer is large or not, to be filed and the taxes paid not later than the 20th day following the end of each month; and
b. A quarterly VAT return of the amount of his gross sales or receipts within 25 days after the close of each taxable quarter prescribed for each taxpayer.
Note: (1) A VAT-registered person shall pay VAT on a monthly basis. Amounts reflected in the monthly VAT returns for the first 2 months of the quarter shall be included in the quarterly VAT return, which reflects the cumulative figures for the taxable quarter. Payments in the monthly returns shall be credited in the quarterly return to arrive at the net VAT payable or excess input tax as of the end of the quarter. (2) "Taxable quarter" shall mean the quarter that is synchronized with the income tax quarter of the taxpayer.

--------------------------------------------------------------
24. Withholding of final VAT on sales to government
--------------------------------------------------------------
Read Section 114(C), Tax Code

Q: What is the rule on withholding of VAT by government agencies?
The government or any of its political subdivisions, instrumentalities or agencies, including GOCCs, shall, before making payment on account of each purchase of goods or services subject to VAT, deduct and withhold a final VAT equivalent to 5% of the gross payment thereof, provided that the payment for the lease or use of properties or property rights to non-resident owners shall be subject to a 10% withholding tax at the time of payment. (Section 4.114-2, RR 16-2005)
Note: The 5% final VAT shall represent the net VAT payable of the seller or, otherwise stated, the presumed input VAT cost of the entity dealing with the government.

Q: LVM Construction Corp. was engaged by the DPWH for the construction of roads and bridges. LVM subcontracted one of the projects to a Joint Venture. After completion, the JV demanded full payment, to which LVM responded that it had discovered that no deductions for VAT were made on previous payments and, as such, it was going to deduct 8.5% (now 5%) from the payments still due. The JV disputed this and argued that all the receipts issued to LVM would have made the JV subject to VAT and, hence, LVM could claim such as input tax. Can LVM rightfully deduct the amount representing the withholding VAT due on its transaction with the DPWH?
No. In LVM CONSTRUCTION CORPORATION V. SANCHEZ [DECEMBER 5, 2011], the Supreme Court held that, as the entity which dealt directly with the government insofar as the main contract was concerned, LVM was itself required by law to pay the 8.5% (now 5%) VAT which was withheld by the DPWH. Given that the JV complied with its own obligation when it paid its VAT from its gross receipts, and given that the contract between LVM and the JV did not stipulate any obligation on LVM's part to assume the VAT, LVM has no basis to withhold payments. Although the burden to pay an indirect tax like the VAT can be passed on, the liability to pay the same remains with the seller. In this case, both LVM and the JV are liable for their respective VAT obligations as respective sellers.

PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013

---------------------------------------------------------
E.
TAX REMEDIES
---------------------------------------------------------
Note: I want to start by saying that the bar syllabus creates the impression that the remedies of the taxpayer are assessment, collection and refund. That is wrong. Assessment and collection are the powers of the taxing authority/government. Under the power of collection, different remedies are available to the government, namely: (1) tax lien, (2) distraint of personal property, levy of real property, or garnishment of bank deposits, (3) sale of property, (4) forfeiture, (5) compromise and abatement, (6) penalties and fines, (7) suspension of business operations, (8) civil action and (9) criminal action. (1) to (7) are the administrative remedies, while (8) and (9) are the judicial remedies. Taxpayers have two remedies: (1) administrative protest (you protest the assessment) and (2) claim for refund. In this chapter, I won't discuss the topics under the Syllabus in the order provided because, if I do, I don't think we will have a good understanding of tax remedies. Here's what I'll do: I'll follow the outline up to Protest. Then I'll rearrange the topics under b) Collection and 2. Government Remedies and integrate the discussion. After that, I'll discuss Refunds.
--------------------------------------------------------------
a) Assessment
(i) Concept of assessment
(a) Requisites for valid assessment
(b) Constructive method for income determination
(c) Inventory method for income determination
(d) Jeopardy assessment
(e) Tax delinquency and tax deficiency
(ii) Power of the Commissioner to make assessments and prescribe additional requirements for tax administration and enforcement
(a) Power of the Commissioner to obtain information and to summon/examine and take testimony of persons
(iii) When assessment is made
(a) Prescriptive period for assessment
(1) False, fraudulent, and non-filing of returns
(b) Suspension of running of statute of limitations
(iv) General provisions on additions to the tax
(a) Civil penalties or Surcharges
(b) Interest
(c) Compromise penalties
(v) Assessment process
(a) Tax audit
(b) Notice of informal conference
(c) Issuance of preliminary assessment notice
(d) Exceptions to issuance of preliminary assessment notice
(e) Reply to preliminary assessment notice
(f) Issuance of formal letter of demand and assessment notice/final assessment notice
(g) Disputed assessment
(h) Administrative decision on a disputed assessment
--------------------------------------------------------------
a) Assessment
(i) Concept of assessment
(a) Requisites for valid assessment
(b) Constructive method for income determination
(c) Inventory method for income determination
(d) Jeopardy assessment
(e) Tax delinquency and tax deficiency
--------------------------------------------------------------
Read Sections 56 and 71, Tax Code

Q: Define assessment.
The term "assessment" may refer to:
1. The official action of an administrative officer in determining the amount of tax due from a taxpayer; or
2. A notice to the effect that the amount therein stated is due from the taxpayer as a tax, with a demand for payment of the tax or deficiency stated therein.

Note: A jeopardy assessment is an indication of the doubtful validity of the assessment; hence, it may be subject to a compromise.

--------------------------------------------------------------
(a) Requisites for valid assessment
--------------------------------------------------------------
Q: What are the requisites of a valid assessment?
1. A formal letter of demand and assessment notice shall be issued by the CIR or his duly authorized representative;
2. The letter of demand calling for payment of the taxpayer's deficiency tax or taxes shall state the facts, the law, rules and regulations, or jurisprudence on which the assessment is based; otherwise, the formal letter of demand and assessment notice shall be void;
3. The same shall be sent to the taxpayer only by registered mail or by personal delivery;
4. If sent by personal delivery, the taxpayer or his duly authorized representative shall acknowledge receipt thereof in the duplicate copy of the letter of demand, showing the following:
i. His name;
ii. Signature;
iii. Designation and authority to act for and in behalf of the taxpayer, if receipt is acknowledged by a person other than the taxpayer himself; and
iv. Date of receipt thereof. (see Section 3.1.4, RR No.
12-99)

Note: (1) Previously, it was sufficient that the taxpayer be notified of the findings of the CIR. The rule now is that the taxpayer must be informed not only of the law but also of the facts on which an assessment would be made. (see CIR V. REYES [JANUARY 27, 2006])
(2) An assessment must be based on actual facts and not on mere presumptions. (see CIR V. BENIPAYO [JANUARY 31, 1962])
(3) In CIR V. PASCOR REALTY [JUNE 29, 1999], the Supreme Court held that an assessment must not only contain a computation of tax liabilities but also a demand for payment within the prescribed period.
(4) An assessment is deemed made only when the BIR releases, mails or sends such notice to the taxpayer. (Ibid)
(5) In ADAMSON V. CA [MAY 21, 2009], at issue was whether the CIR's recommendation letter for the filing of a

--------------------------------------------------------------
(b) Constructive method for income determination
(c) Inventory method for income determination
--------------------------------------------------------------
Q: What are the constructive methods of income determination?
The following are the general methods developed by the BIR for reconstructing a taxpayer's income where the records do not show the true income, or where no return was filed, or what was filed was a false or fraudulent return:
a. Percentage method
b. Net worth method
c. Bank deposit method
d. Cash expenditure method
e. Unit and value method
f. Third party information or access to records method
g. Surveillance and assessment method
Note: As to the third party information or access to records method, see Section 5(b) of the Tax Code. If the revenue officers were not given the opportunity to examine the taxpayer's documents, they are authorized under Section 5 of the Tax Code to gather information from third parties. (CIR V. HON. RAUL M. GONZALES [OCTOBER 15, 2010])

Q: What is the inventory method for income determination? (Net worth method)
The general theory underlying this method is that the taxpayer's money and other assets in excess of liabilities, after accurate and proper adjustment of non-deductible and non-taxable items not accounted for in his tax return, is deemed to be unreported income. In other words, the theory is that the unexplained increase in the net worth of the taxpayer is presumed to be derived from taxable sources.

Q: What are the conditions for the use of the net worth method?
1. That the taxpayer's books of accounts do not reflect his income, or the taxpayer has no books, or, if he has books, he refuses to produce them, or the few records that he had were destroyed;
2. That there is evidence of a possible source or sources of income to account for the increases in net worth or expenditures;
3. That there is a fixed starting point or opening net worth;
4. That the circumstances are such that the method does reflect the taxpayer's income with reasonable accuracy and certainty, and proper and just additions of personal expenses and other non-deductible expenditures were made, and correct, fair, and equitable credit adjustments were given by way of eliminating non-taxable items. (see RMC No. 43-74)

--------------------------------------------------------------
(ii) Power of the Commissioner to make assessments and prescribe additional requirements for tax administration and enforcement
(a) Power of the Commissioner to obtain information and to summon/examine and take testimony of persons
--------------------------------------------------------------
Read Section 6, Tax Code

Q: Enumerate the powers of the CIR in the assessment of taxes.
1. Examination of returns and determination of tax due
2. Use of the best evidence obtainable
3. Authority to conduct inventory-taking, surveillance, and to prescribe presumptive gross sales and receipts
4. Authority to terminate the taxable period
5. Authority to prescribe real estate values
6. Authority to inquire into bank deposits
7.
Authority to accredit and register tax agents
8. Authority to prescribe additional procedural or documentary requirements

--------------------------------------------------------------
(e) Tax delinquency and tax deficiency
--------------------------------------------------------------
Q: When is the taxpayer considered delinquent?
1. The self-assessed tax per return filed by the taxpayer on the prescribed date was not paid at all or was only partially paid; or
2. The deficiency tax assessed by the BIR became final and executory.

Examination of returns and determination of tax due
Q: When a taxpayer files his return, can he still (1) withdraw it; or (2) amend it?
Once filed, the taxpayer may no longer withdraw it, but he may amend it subject to the following requirements:
1. The amendment is made within 3 years from filing; and
2. No notice for audit or investigation has been actually served on him. (see Section 6, Tax Code)

Use of the best evidence obtainable
Q: Explain the best evidence obtainable rule.
The rule is that an assessment must be made based on the best evidence obtainable. In CIR V. HANTEX TRADING [MARCH 31, 2005], the Supreme Court opined that assessments must be based on actual facts. It ruled that the best evidence includes the corporate and accounting records of the taxpayer who is the subject of the assessment process, while the best evidence obtainable does not include mere photocopies of records and documents. Such photocopies have no probative value and cannot be used as the basis for any deficiency taxes against the taxpayer.
Note: (1) The BIR is allowed to make or amend a tax return from his own knowledge or from information obtained through testimony or otherwise. (see CIR V. HANTEX TRADING CO. [MARCH 31, 2005]) (2) The rule is that, in the absence of the accounting records of a taxpayer, his tax liability may be determined by estimation; the CIR is not required to compute such tax liabilities with mathematical exactness. (Ibid)

Authority to conduct inventory-taking, surveillance, and to prescribe presumptive gross sales and receipts
Q: In what instance will the CIR exercise such authority?
It will exercise such authority if there is reason to believe that the taxpayer is not declaring his correct income, sales or receipts for internal revenue purposes.

Authority to terminate the taxable period
Q: In what instances can the CIR terminate the taxable period of a taxpayer?
When the taxpayer is:
a. Retiring from business
b. Intending to leave the country
c. Removing his property
d. Obstructing tax collection

Authority to prescribe real estate values
Q: Does the CIR's power to prescribe real estate values include the power to unilaterally reclassify the zonal valuation of properties?
No. In CIR V. AQUAFRESH SEAFOODS [OCTOBER 20, 2010], the Supreme Court ruled that although the CIR has the authority to prescribe real property values and divide the Philippines into zones, the law is clear that the same should be done upon consultation with competent appraisers both from the public and private sectors.

Authority to inquire into bank deposits
Q: Does the CIR's power to obtain information include the power to inquire into bank deposits?
As a general rule, no. However, the CIR is authorized to inquire into the bank deposits of:
1. A decedent, to determine his gross estate;
2. Any taxpayer who has filed an application for compromise of his tax liability under Section 204(A)(2) of the Tax Code by reason of financial incapacity to pay his tax liability;
3. Specific taxpayers subject of a request for exchange of information by a foreign tax authority pursuant to an international convention or agreement on tax matters to which the Philippines is a signatory or a party, provided that the requesting foreign tax authority is able to demonstrate the foreseeable relevance of certain information required to be given to the request (see RA 10021 (Exchange of Information on Tax Matters Act of 2009) and RR 10-2010 [OCTOBER 6, 2010]); and
4. Where the taxpayer has signed a waiver authorizing the CIR or his duly authorized representatives to inquire into the bank deposits.

Authority to accredit and register tax agents
Q: Who are tax practitioners/tax agents?
RR 11-2006 [JUNE 15, 2006] defines tax practitioners/agents as those who are:
1. engaged in the regular preparation, certification, audit and filing of tax returns, information returns or other statements or reports;
2. engaged in the regular preparation of requests for ruling, petitions for reinvestigation, protests, requests for refund or tax credit certificates, compromise settlement and/or abatement of tax liabilities and other official papers and correspondence; and
3. regularly appearing in meetings, conferences, and hearings before any office of the BIR officially on behalf of a taxpayer or client in all matters relating to the client's rights, privileges, or liabilities.
Note: Tax practitioners and agents are required to apply for accreditation. RR 11-2006 [JUNE 15, 2006], as amended by RR 4-2010 [FEBRUARY 24, 2010] and RR 14-2010 [NOVEMBER 25, 2010], provides the guidelines on accreditation of tax practitioners/agents as a pre-requisite for their practice and representation before the BIR.

Q: A was assessed for deficiency taxes on his Feb 1, 2010 income tax return by the BIR. The formal demand letter and assessment was stamped Jan 31, 2013, denoting the date of its release in the mail. On Feb 2, 2013, A had not yet received the formal demand letter and assessment. He contends that the assessment is already barred by prescription. Is A correct?
No. The assessment is not barred by prescription. The BIR has 3 years to assess from the date of last filing. As long as the release of the assessment/demand is effected within the prescriptive period, the assessment is deemed made on time even though the taxpayer actually received the assessment/demand after the expiration of the prescriptive period. (see BASILAN ESTATES V.
CIR [SEPTEMBER 5, 1967]).

--------------------------------------------------------------
(iii) When assessment is made
(a) Prescriptive period for assessment
(1) False, fraudulent, and non-filing of returns
(b) Suspension of running of statute of limitations
--------------------------------------------------------------
Note: What's the importance of determining when the assessment is made or deemed made? I'll give you two reasons. First, it is important in order to know if the right to assess has already prescribed. The assessment must be made within the 3-year prescriptive period; any assessment made thereafter shall be barred. Second, the date on which the assessment was made is the reckoning point of the prescription of the power to collect.

Q: What is the exception to the above rule that an assessment is deemed made when the BIR releases, mails, or sends such notice to the taxpayer?
If the receipt is disputed, then for the presumption of receipt of mail to apply, the CIR must prove that:
1. The letter was properly addressed; and
2. The letter was mailed; otherwise, the presumption of receipt cannot apply. (see NAVA V. CIR [JANUARY 30, 1965])
In REPUBLIC V. CA [APRIL 30, 1987], the Supreme Court held that a direct denial of receipt of a mailed demand letter by the addressee shifts the burden upon the party favored by the presumption of receipt to prove that the mailed letter was indeed received. In COMMISSIONER OF INTERNAL REVENUE V. GJM PHILIPPINES MANUFACTURING, INC. [CTA EB CASE NO. 637, MARCH 6, 2012], the CTA held that if the taxpayer denies receiving the final assessment notice, it is incumbent upon the BIR to prove that the assessment was indeed received by the taxpayer.
Note: Also important in the GJM case is the ruling that a follow-up letter which reiterates the demand for payment of taxes is considered a notice of assessment.

Q: What is the significance of the taxpayer's indicating in the previous year's ITR its new address?
As held in CIR V.
BPI AS LIQUIDATOR OF PARAMOUNT ACCEPTANCE CORP [SEPTEMBER 23, 2003], any service of an assessment notice on the old address subsequent to such previous year invalidates the assessment.

Read Sections 203 and 222, Tax Code

Q: When does the government's right to assess prescribe?
General Rule: The government's right to assess prescribes in 3 years from the last day of filing. However:
1. If the return is filed after such date, the 3-year period is reckoned from the date of actual filing;
2. If the return is filed before the last day, it is considered as filed on the last day.
Exceptions: Section 222, Tax Code provides for the following instances:
1. False return
2. Fraudulent return
3. Failure to file a return
In such cases, the tax may be assessed, or a proceeding in court for collection may be filed without assessment, at any time within 10 years from the discovery of the falsity, fraud, or omission.
Note: (1) In contrast, the right to collect the tax prescribes in 5 years, and the period is reckoned from the date the assessment is made.

--------------------------------------------------------------
(a) Prescriptive period for assessment
(1) False, fraudulent, and non-filing of returns
--------------------------------------------------------------
Note: As a preliminary matter, let's talk about how to compute the legal period. If we follow the old Administrative Code and the Civil Code, the BIR may assess the deficiency tax only within 1,095 days, because both state that a year is 365 days, and 365 times 3 equals 1,095. On that view, if the CIR assesses at the very end of a 3-year period that includes a leap year, the assessment would already have prescribed. Why? A leap year has 366 days, so 365 + 365 + 366 equals 1,096 days! If your book or notes still say that, beware: the doctrine to that effect, as laid down in NAMARCO V. TECSON [29 SCRA 70], has been abandoned. The rule now is very simple: follow the Administrative Code of 1987. A year is 12 calendar months.
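The two counting rules just described can be contrasted in a few lines of date arithmetic. The filing date below is a hypothetical chosen so that the 3-year window crosses the leap year 2000; only the counting rules themselves come from the discussion.

```python
from datetime import date, timedelta

def add_calendar_years(d: date, years: int) -> date:
    """Administrative Code of 1987 rule: a year is 12 calendar months."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:              # Feb 29 anniversary falling in a non-leap year
        return d.replace(year=d.year + years, day=28)

filed = date(1998, 4, 15)                            # hypothetical filing date
civil_code_rule = filed + timedelta(days=3 * 365)    # 1,095 days (old rule)
admin_code_rule = add_calendar_years(filed, 3)       # 3 calendar years (current rule)

print(civil_code_rule)  # 2001-04-14 (the leap day of 2000 consumes one day)
print(admin_code_rule)  # 2001-04-15 (same month and day, 3 years later)
```

Under the abandoned 365-day count, action taken on April 15, 2001 would look one day late; under the calendar-month rule applied in CIR v. Primetown, it is timely.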
Do not count the total number of days. So, if the CIR assesses on the last day of the last month of the 3-year period, the assessment has not yet prescribed. What if one of the years in the 3-year period is a leap year? It will not matter. On a serious note, the relevant case is CIR V. PRIMETOWN PROPERTY [AUGUST 28, 2007]. In that case, the taxpayer filed a claim for tax refund of income tax paid in 1997.

Note: (2) May there be a proceeding in court when no assessment is made within the 3-year period? Yes, in the case of a false return, a fraudulent return, or failure to file a return: the government can file within 10 years from discovery.

Q: Is there a difference between a false return and a fraudulent return?
Yes. A false return merely implies a deviation from the truth, whether intentional or not, while a fraudulent return refers to an intentional evasion of tax. (see AZNAR V. CTA [AUGUST 23, 1974])

Q: What is the effect if the assessment is made beyond the prescribed period?
Assessments made beyond the prescribed period are not binding on the taxpayer. (see TUPAZ V. ULEP [OCTOBER 1, 1999]; CIR V. AYALA SECURITIES CORPORATION [MARCH 31, 1976])

Q: A filed his tax return in 2000. The CIR assessed A for deficiency taxes in 2004, alleging fraud in its complaint. Has the right to assess prescribed?
Yes. As held in REPUBLIC V. LIM DE YU [APRIL 30, 1964], it is not enough that fraud is alleged in the complaint; it must be proven and established.

Q: What if the return is incomplete, will the prescriptive period to assess run?
No. As held in REPUBLIC V. MARSMAN DEVELOPMENT COMPANY [APRIL 27, 1972], in order that the filing of a return may serve as the starting point of the period for making an assessment, the return must be substantially complete, so as to include the needed details on which the full assessment may be made.

Q: The CIR contends that seven lots were deliberately omitted by A in his return filed as the representative of the heirs.
A contends that the lots were excluded because one belonged to one of the heirs, three were already declared in the return of the surviving spouse, and three were actually included. Is there a deliberate intent to evade taxes on the part of A? No. As held in REPUBLIC V. HEIRS OF CESAR JALANDONI [SEPTEMBER 20, 1965], the omission as described above was not deliberate and did not amount to fraud indicative of an intention to evade payment of the proper tax due the government. 18 Note that the case was governed under the old law which provides for 6 tears to assess and another 5 years to collect. CIR must sign) indicating that the BIR has accepted and agreed to the waiver 4. The date of the acceptance by the BIR should be indicated. Both the dae of execution by the taxpayer and the date of the acceptance by the BIR should be before the expiration of the period of prescription or before the lapse of the period agreed upon in case a subsequent agreement is executed. 5. The waiver must be executed in 3 copies, the original to be attached to the docket, the second copy for the taxpayer and the third copy for the Office accepting the waiver. Taxpayer must be furnished a copy of the waiver in order to perfect the agreement since the waiver is not a mere unilateral act Note: (1) The signatures of both the CIR and the taxpayer are required for a waiver of the prescriptive period, thus a unilateral waiver on the part of the taxpayer does not suspend the prescriptive period (CIR v. CA [February 25, 1999]) This led some to believe that the Waiver form prescribed under RMO No. 20-90 should be used instead of the waiver form mandated under RDAO No. 05-01. RMC No. 29-2012 clarifies that while the provisions of RMO No. 2090 should be strictly complied with in order for a Waiver to be valid, the Waiver form prescribed in RMO No. 20-90 should no longer be used as the same has been revised per RDAO No. 05-01. In SMC STOCK TRANSFER SERVICE CORPORATION VS. 
COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 7944, JANUARY 10, 2012], the CTA held that the waiver of the Statute of Limitations executed by the taxpayer is defective if: (a) It fails to indicate the fact of receipt by the taxpayer of his file copy of the waiver. The Court noted that the fact of receipt by the taxpayer must be indicated in the original copy, which is to be attached to the docket of the case; (b) It fails to indicate the specific kind of tax and the amount of tax due (if the amount of tax is not indicated in the said waiver, there is no agreement to speak of); (c) It was not duly notarized; (d) Both the acceptance by the BIR and the execution by the taxpayer of the subsequent waiver were made at a time when the period previously agreed upon had already lapsed. See also UNION CEMENT CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 6842, JANUARY 18, 2012]; EAST ASIA POWER RESOURCES CORPORATION V. CIR [CTA CASE NO. 7936, FEBRUARY 6, 2012]; NEXT MOBILE, INC. VS. COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 7965, DECEMBER 11, 2012. A waiver of the defense of prescription which does not indicate the date of acceptance by the BIR does not toll the running of the three-year prescriptive period. FIRST GAS POWER CORPORATION VS. CIR, CTA CASE NO. 7281, SEPTEMBER 24, 2012 Q: What is the effect of failure to conform to the requirements of a waiver of the statute of limitations? A waiver of the statute of limitations under the Tax Code must conform strictly with the provisions of Revenue Memorandum Order No. 20-90 in order to be valid and binding. (see RMC 06-05 [FEBRUARY 2, 2005]; PHILIPPINE JOURNALISTS INC. V. CIR [DECEMBER 16, 2004]) The period to assess and collect taxes may only be extended upon a written agreement between the CIR and the taxpayer executed before the expiration of the 3-year period. RMO 20-90 and RDAO 05-01 lay down the procedure for the proper execution of the waiver.
If not followed, any assessment issued by the BIR beyond the 3-year period is void. (CIR V. KUDOS METAL CORP [MAY 5, 2010]; see also AVON PRODUCTS V. CIR [MAY 13, 2010]) Note: RMC No. 29-2012 [June 29, 2012] clarifies the form to be used for the Waiver of the Statute of Limitations. In RMO 20-90, there is a particular waiver form attached as an Annex. Revenue Delegation Authority Order (RDAO) No. 05-01 was issued on August 2, 2001 prescribing a new waiver form to be used. With the decision of the SC in PHILIPPINE JOURNALISTS INC. V. CIR [DECEMBER 16, 2004], RMC No. 06-05 was issued on February 2, 2005 citing the said decision that "a waiver of the statute of limitations under the Tax Code must conform strictly with the provisions of Revenue Memorandum Order No. 20-90." Q: ABC Bank executed two Waivers of the Defense of Prescription covering internal revenue taxes due for the years 1994 and 1995, extending the period of the BIR to assess up to December 31, 2000. A Formal Letter of Demand was issued by the BIR, which was protested by ABC Bank. Another Formal Letter of Demand was received by ABC with a reduced assessment, which was paid by ABC on the same day except for two other taxes. ABC argues that the waivers it executed were not valid because they were not signed or conformed to by the CIR. Are the waivers valid? Yes. Partial payment of the assessment issued within the extended period to assess, as provided in the Waiver of the Defense of Prescription, is an implied admission of the validity of the waiver. (RCBC V. CIR [SEPTEMBER 7, 2011]) (c) Compromise penalties ----------------------------------------------------------------------------------------------------------------------------(a) Civil penalties or Surcharges --------------------------------------------------------------Read Sections 247-248, Tax Code Q: What are the civil penalties (surcharges) under the Tax Code and in what instances are they imposable? 1. 25% surcharge, which is imposable in case of: a.
Failure to file a return and pay the tax due thereon b. Filing with an unauthorized revenue office c. Failure to pay the deficiency tax within the time prescribed in the assessment notice d. Failure to pay the full or part of the amount shown in the ITR required to be filed, or the full amount of tax due for which no return is required to be filed, on or before the date prescribed for its payment 2. 50% surcharge, which is imposable in case of: a. Willful neglect to file the return within the period prescribed b. A false or fraudulent return is willfully made (see Section 248, Tax Code). Note: (1) Surcharges are imposed in addition to the tax required. They are in the nature of penalties and shall be collected at the same time, in the same manner, and as part of the tax (see Section 248(A), Tax Code) (2) There is prima facie evidence of a false or fraudulent return when there is a substantial under-declaration of taxable sales, receipts or income in an amount exceeding 30% of that declared per return. (3) As held in PHILIPPINE REFINING COMPANY V. CA [MAY 8, 1996], it is mandatory to collect penalty and interest at the stated rate in case of delinquency. The intention of the law is to discourage delay in the payment of taxes due the Government, and, in this sense, the penalty and interest are not penal but compensatory for the concomitant use of the funds by the taxpayer beyond the date when he is supposed to have paid them to the government. --------------------------------------------------------------(a) Suspension of running of statute of limitations --------------------------------------------------------------Read Section 223, Tax Code Q: When is the running of the period of prescription suspended? It is suspended when: 1. The CIR is prohibited from making the assessment or beginning distraint/levy, and for 60 days thereafter 19 2. The taxpayer requests a reinvestigation which is granted by the CIR 3. The taxpayer cannot be located in the address given by him in the return 4.
A warrant of distraint and levy is served (not only issued) and no property could be found 5. The taxpayer is out of the Philippines --------------------------------------------------------------(iv) General provisions on additions to the tax (a) Civil penalties or Surcharges (b) Interest (1) In general (2) Deficiency interest (3) Delinquency interest (4) Interest on extended payment 19 An example would be when an injunction allowed under the CTA law is availed of. Q: ABC is a cement company. Initially, the BIR ruled that cement is a mineral product rather than a manufactured product and is therefore subject to ad valorem tax, not sales tax. Subsequently, the CIR ruled that cement is a manufactured product and therefore subject to sales tax. The BIR then assessed ABC for deficiency sales tax and imposed the 25% surcharge. Is the 25% surcharge imposable? No. In CIR V. REPUBLIC CEMENT CORP [AUGUST 10, 1983], the Supreme Court noted that the 25% penalty contemplates a case where the liability for the tax is undisputed or indisputable. In this case, the assessments are disputed. The dispute as to the tax liability of Republic Cement for sales tax arose not simply because of an ordinary divergence of views in good faith vis-a-vis the interpretation of the law; the position of Republic Cement was founded upon the original stand of the BIR itself that cement is a mineral product. Under such circumstances, the 25% surcharge imposition must be deleted. Some Problems on Civil Penalty Impositions Q: If a taxpayer who files a return subsequently realizes that the return filed was insufficient, will his amended return be subject to the 25% surcharge? No. As long as the taxpayer files the amended return before the lapse of any demand by the BIR to pay his deficiency assessment, the taxpayer is not liable for any surcharge.
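The 25%/50% surcharge rules discussed above reduce to simple arithmetic. A minimal sketch (illustrative only; the function name and the assumption that a surchargeable violation already exists are mine, and the statute and regulations control the actual computation):

```python
def surcharge(basic_tax: float, *, willful_neglect: bool = False,
              fraudulent_return: bool = False) -> float:
    """Illustrative Section 248 surcharge, assuming one of the
    surchargeable violations has been committed: 50% for willful
    neglect to file or a willfully false/fraudulent return,
    otherwise 25%."""
    rate = 0.50 if (willful_neglect or fraudulent_return) else 0.25
    return basic_tax * rate

# No return filed, P6,000,000 tax paid late -> 25% surcharge
print(surcharge(6_000_000))                          # 1500000.0
# A willfully fraudulent return -> 50% surcharge
print(surcharge(6_000_000, fraudulent_return=True))  # 3000000.0
```

Remember that the surcharge is collected in addition to, and as part of, the basic tax, on top of any interest due.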
Q: Taxpayer A filed and paid taxes on April 15, 2009 worth 5 million. On May 15, 2009, he realized he should have paid 6 million and thus pays the additional 1 million. Is A subject to the 25% surcharge? No. None of the violations mentioned was committed by the taxpayer. Q: What is the nature of the fraud contemplated in the act of making a fraudulent return which would subject the taxpayer to a 50% surcharge? In CIR V. AIR INDIA [JANUARY 29, 1988], the Supreme Court explained the fraud contemplated by the law in this way: It must be intentional fraud, consisting of deception willfully and deliberately done or resorted to in order to induce another to give up some legal right. Negligence, whether slight or gross, is not equivalent to the fraud with intent to evade the tax contemplated by the law. It must amount to intentional wrongdoing with the sole object of avoiding the tax. Q: Taxpayer B filed and paid taxes on April 15, 2009 worth 5 million. On May 15, 2009, the BIR issued an assessment and required B to pay an additional 1 million on or before June 15, 2009. If B pays before June 15, 2009, is he subject to the 25% surcharge? No. None of the violations mentioned was committed by the taxpayer. Q: Taxpayer C did not file any return nor pay any taxes on April 15, 2009. On May 15, 2009, he realized he should have paid 6 million and thus pays the whole 6 million. Is he subject to the 25% surcharge? Yes. Taxpayer C failed to file a return and pay the tax due thereon, which is the first type of act which warrants the 25% surcharge. Q: As a result of divergent rulings on whether he is subject to tax or not, the taxpayer failed to pay taxes on time. The CIR imposed surcharges and interests for such delay. The taxpayer invokes good faith. Is good faith a defense? Yes.
The settled rule is that good faith and honest belief that one is not subject to tax, on the basis of previous interpretations of the government agencies tasked to implement the tax, are sufficient justification to delete the imposition of surcharges. (MICHEL J. LHUILLIER PAWNSHOP V. CIR [SEPTEMBER 11, 2006])
PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013
Q: Taxpayer D filed and paid taxes on April 15, 2009 worth 10 million. On May 15, 2009, the BIR issued an assessment and required D to pay an additional 5 million on or before June 15, 2009. If D pays after June 15, 2009, is he subject to any surcharge? Yes. Taxpayer D will be subject to the 50% surcharge since (a) he failed to pay within the time prescribed in the notice of assessment; and (b) the underdeclaration is 50%, in excess of the 30% threshold which raises the prima facie presumption of a false or fraudulent return. As the presumption is only prima facie, it may be rebutted. Delinquency interest runs from the date of notice and demand until the tax is paid (see Section 249, Tax Code) Note: (1) For delinquency interest, it is important to note that the instances in which it is applied are the same as those enumerated under the 25% surcharge, except (b) filing with an unauthorized office. (2) Note that deficiency interest is imposed on the deficiency, not the amount of tax due. On the other hand, delinquency interest is imposed on the total tax due. Thus, it is an interest earning interest. (3) Interest on deficiency tax may be waived when the assessment is highly controversial, as in the case of CAGAYAN ELECTRIC POWER & LIGHT CO. V. CIR [SEPTEMBER 25, 1985], where there was a withdrawal of its exemption from income tax and a subsequent reinstatement of such exemption. Thus, non-payment during the short time when the taxpayer was exempt was not subjected to interest payment. The deficiency interest should be computed from the date prescribed for the payment of the deficiency tax until full payment thereof. On the other hand, delinquency interest should be computed from the due date prescribed under the Assessment Notice until the full payment thereof.
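The two computation periods described in the note above can be put into a rough sketch (hypothetical figures; a simple 365-day-year interest formula is assumed, which may not match the BIR's actual day-count conventions):

```python
from datetime import date

RATE = 0.20  # 20% per annum under Section 249 (the rate in force at the time)

def simple_interest(amount: float, start: date, end: date) -> float:
    # Illustrative simple interest over the days elapsed (365-day year).
    return amount * RATE * (end - start).days / 365

# Hypothetical figures: a P1,000,000 deficiency tax originally due on
# April 15, 2009; the assessment notice fixes a June 15, 2010 due date;
# the taxpayer pays only on June 15, 2011.

# Deficiency interest: on the deficiency, from the date prescribed for
# payment of the tax until full payment.
deficiency_int = simple_interest(1_000_000, date(2009, 4, 15), date(2011, 6, 15))

# Delinquency interest: on the unpaid total (tax plus deficiency
# interest -- the "interest earning interest"), from the due date in
# the notice and demand until full payment.
delinquency_int = simple_interest(1_000_000 + deficiency_int,
                                  date(2010, 6, 15), date(2011, 6, 15))
```

The point of the sketch is the two different starting dates and the two different bases, which is exactly why the CTA treats the simultaneous imposition of both as permissible rather than a double imposition.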
REPUBLIC CEMENT CORPORATION (AS SURVIVING CORPORATION IN A MERGER INVOLVING FR CEMENT CORPORATION) VS. COMMISSIONER OF INTERNAL REVENUE, CTA EB CASE NO. 821, JULY 18, 2012 Deficiency interest and delinquency interest, having different natures for their existence, cannot be assailed as a double imposition of interest, as the law itself allows the simultaneous imposition of these two kinds of interest. Deficiency interest on any deficiency tax shall be assessed from the date prescribed for its payment until the full payment thereof; while the assessment of delinquency interest, which is imposed upon failure to pay a deficiency tax, or any surcharge or interest thereon, shall be reckoned from the due date appearing in the notice and demand of the Commissioner until the amount is fully paid. TAKENAKA CORPORATION PHILIPPINE BRANCH, CTA EB CASE NO. 745 (CTA CASE NO. 7701), SEPTEMBER 4, 2012 --------------------------------------------------------------(d) Interest (1) In general (2) Deficiency interest (3) Delinquency interest (4) Interest on extended payment --------------------------------------------------------------Read Section 249, Tax Code Q: What are the types of interest collected under the Tax Code? 1. In general - there shall be assessed and collected, on any unpaid amount of tax, interest at the rate of 20% per annum, or such higher rate as may be prescribed, from the date prescribed for payment until fully paid 2. Deficiency interest - any deficiency in the tax due shall be subject to 20% per annum 3. Delinquency interest - the unpaid amount shall be subject to 20% per annum in case of: a. Failure to pay the amount of tax due on any return required to be filed b. Failure to pay the amount of tax due for which no return is required c. Failure to pay a deficiency tax or surcharge or interest thereon on the due date appearing on the notice and demand of the CIR 4.
Interest on Extended Payments - if any person is qualified and elects to pay in installments but fails to pay the tax or any installment on or before the date prescribed, there shall be assessed and collected interest at the rate of 20% per annum on the tax or deficiency tax, or part thereof, unpaid --------------------------------------------------------------(e) Compromise penalties --------------------------------------------------------------Note: I will discuss this fully later under Compromise and Abatement, but note that in the two instances where the CIR may compromise the payment of internal revenue taxes (doubtful validity of the assessment and financial incapacity), there is what you call a compromise penalty. A compromise penalty is the amount agreed upon between the taxpayer and the CIR to be paid as a penalty in cases of a compromise. For doubtful validity of the assessment, the minimum compromise rate is 40% of the basic assessed tax. For cases of financial incapacity, the minimum compromise rate is 10% of the basic assessed tax. --------------------------------------------------------------(v) Assessment process (i) Tax audit (j) Notice of informal conference (k) Issuance of preliminary assessment notice (l) Exceptions to issuance of preliminary assessment notice (m) Reply to preliminary assessment notice (n) Issuance of formal letter of demand and assessment notice/final assessment notice (o) Disputed assessment (p) Administrative decision on a disputed assessment --------------------------------------------------------------Note: Let's simplify the discussion. I'll give you two versions of the assessment process: a simplified and an expanded version. I want you first to get an overview of the whole process and then, in the expanded version, we will discuss what happens in each step and other details. 1. The CIR or Revenue Regional Director (RD) issues a Letter of Authority (LA) to the Revenue Officer (RO) a. The LA must be served within 30 days from date of issuance. Otherwise, it shall become null and void. b.
The LA is issued to the RO by the: i. CIR or his duly authorized representatives, after a return has been filed, or ii. Revenue Regional Director, for all audit cases within his regional jurisdiction, except in: (1) Cases involving civil or criminal tax fraud falling under the jurisdiction of the Tax Fraud Division of the Enforcement Service (2) Policy cases under audit by Special Teams in the National Office (RMO No. 36-99) Note: (1) The Letter of Authority is the authority given to the revenue officer to perform assessment functions. There must be a grant of authority before any revenue officer can conduct an examination or assessment, and the revenue officer must not go beyond the authority given (CIR V. SONY PHILIPPINES [NOVEMBER 17, 2010]). (2) A LA that was issued to cover an audit of "unverified prior years" is invalid. A LA should cover a taxable period not exceeding one taxable year. The practice of issuing LAs covering audit of unverified prior years is prohibited. If the audit of a taxpayer shall include more than one taxable period, the other periods shall be specifically indicated. (see RMO 43-90 [SEPTEMBER 20, 1990]) In CIR V. SONY PHILIPPINES [NOVEMBER 17, 2010], a Letter of Authority was issued covering the period 1997 and unverified prior years. The deficiency VAT assessment was based on records from January to March 1998. The Supreme Court held that the CIR went beyond the scope of the authority as indicated in the LOA. Further, the fact that the LOA covered unverified prior years invalidates it, and a VAT deficiency assessment made on the basis thereof must be disallowed. (3) Now, what is this thing called a Letter Notice? A Letter Notice (LN) is a discrepancy notice issued by the CIR after conducting data matching processes, informing the taxpayer of findings of discrepancy. A LN covers only the tax indicated therein for a given particular period or quarter (e.g., VAT liabilities for the 3rd quarter of 2002).
It must be noted, however, that LNs are governed by RMC 40-2003 [JULY 7, 2003]. 2. The RO conducts an Audit within 120 days from date of issuance and service of the LA a. If the audit is not completed within the 120-day period, the LA is revalidated. b. If the RO finds: i. No deficiency, the audit ends ii. Any deficiency, the RO will inform the taxpayer and write in his report whether the taxpayer agrees with his findings: (1) If the taxpayer is amenable, the taxpayer pays the tax (2) If the taxpayer is not amenable, the RO shall state such fact in his report of investigation and submit the same to the RDO, or to the Special Investigation Division (in the case of the Revenue Regional Office), or to the Chief of Division (in the case of the BIR National Office). 3. RO sends Notice of Informal Conference (NIC) a. The taxpayer shall be informed, in writing, by the RDO or by the Special Investigation Division (in the case of the Revenue Regional Office) or by the Chief of Division (in the case of the BIR National Office) of the discrepancy or discrepancies in the taxpayer's payment of his internal revenue taxes, for the purpose of an Informal Conference Note: What happens during the Informal Conference? This is where you are given the chance to present your side. And of course, you will say that what you paid was correct! 4. Taxpayer responds within 15 days from receipt of NIC a. If the taxpayer responds within 15 days, there will be an Informal Conference b. If the taxpayer fails to reply, he shall be considered in default.
The Revenue District Officer or the Chief of the Special Investigation Division of the Revenue Regional Office, or the Chief of Division in the National Office, as the case may be, shall endorse the case with the least possible delay to the Assessment Division of the Revenue Regional Office or to the Commissioner or his duly authorized representative, as the case may be, for appropriate review and issuance of a deficiency tax assessment, if warranted. 5. The Assessment Division of the Revenue Regional Office or the CIR or his duly authorized representative issues a Preliminary Assessment Notice a. If there is no sufficient basis to assess, the case is dismissed. b. If there is sufficient basis to assess, a Preliminary Assessment Notice (PAN) shall be issued for the proposed assessment, showing in detail the facts and the law, rules and regulations, or jurisprudence on which the proposed assessment is based c. A PAN is not required in the following instances: i. The assessment is based on a purely mathematical error ii. Excise tax on an excisable article has not been paid iii. There is a discrepancy between the tax withheld and the amount remitted iv. Goods imported by a tax-exempt entity are sold to a taxable entity v. A claim for refund is filed for an amount that was previously carried over Note: The PAN must be issued by the BIR before issuing the FAN and letter of demand. In CIR V. METRO STAR SUPERAMA [DECEMBER 8, 2010], where the taxpayer received only a FAN, the Supreme Court ruled that such amounted to a denial of due process. The taxpayer must be informed of the facts and law upon which the assessment is made. The law imposes a substantive, not merely a formal, requirement. However, if you fall under the 5 exceptions, then you can go straight to the FAN. The issuance of a Preliminary Assessment Notice is mandatory in tax assessments except in a few instances, specifically enumerated by law, where it is not required. COMMISSIONER OF INTERNAL REVENUE VS. UNIOIL CORPORATION, CTA EB CASE NO. 857, NOVEMBER 13, 2012 See LAURENCE LEE V. LUANG V. HON. SIXTO S. ESQUIVIAS IV [CTA CASE NO.
7967, JANUARY 5, 2012], where the CTA held that, in the absence of proof that the taxpayer received the preliminary assessment notice, the assessment is void. 6. Taxpayer responds within 15 days from receipt of PAN via a Reply a. If the taxpayer fails to respond within 15 days, he shall be considered in default, in which case a formal letter of demand and assessment notice shall be caused to be issued calling for payment of the taxpayer's deficiency tax liability, inclusive of the applicable penalties. Note: (1) Failure to file a reply to the PAN will not bar the taxpayer from protesting the FAN. Why? The PAN is not the final assessment contemplated by the NIRC which can be protested. The only consequence of failure to file a reply to the PAN is that the taxpayer shall be considered in default and the BIR can now make a final assessment. In CIR V. GONZALEZ [OCTOBER 13, 2010], the Supreme Court reiterated that the assessment must state the fact, the law, the rules and regulations or jurisprudence on which the assessment is based; otherwise, the assessment shall be void (see also CIR V. METRO STAR SUPERAMA [DECEMBER 8, 2010]; CIR V. ENRON SUBIC POWER CORPORATION [JANUARY 19, 2009]; FLUOR DANIEL PHILIPPINES V. CIR [CTA CASE NO. 7793, APRIL 17, 2012]) (2) Remember the requisites of a valid assessment we discussed earlier. If the assessment does not have these requisites or, in other words, if the assessment is not valid, the implication is that the 30-day period allowed to the taxpayer in which to appeal to the CTA shall not begin to run. (3) The taxpayer or his duly authorized representative may protest administratively against the FAN within thirty (30) days from date of receipt thereof. Otherwise, the FAN will become final and executory. You can no longer appeal to the CTA. Let's now discuss the remedy of the taxpayer if you're given a FAN. 7.
The CIR or his duly authorized representative issues a Formal Letter of Demand and Assessment Notice (FAN), which may be objected to via a Protest within 30 days from receipt of the FAN a. The formal letter of demand and assessment notice shall be issued by the Commissioner or his duly authorized representative. b. The letter of demand calling for payment of the taxpayer's deficiency tax or taxes shall state the facts, the law, rules and regulations, or jurisprudence on which the assessment is based; otherwise, the formal letter of demand and assessment notice shall be void c. The same shall be sent to the taxpayer only by registered mail or by personal delivery. If sent by personal delivery, the taxpayer or his duly authorized representative shall acknowledge receipt thereof in the duplicate copy of the letter of demand. Note: (1) The requirement that the assessment must first state the facts and the law on which the assessment is based is not merely a procedural requirement but a substantive requirement which determines the taxpayer's ability to protest. Thus, the same must be complied with; otherwise the assessment is void. Thus, assessment notices which contain only computations are invalid. This is the reason why the new Tax Code provides that the taxpayer be informed and not merely notified. Given that this new rule benefits the taxpayer, the same may be applied retroactively. (CIR V. AZUCENA REYES [JANUARY 27, 2006] 20; see also CIR V. GONZALEZ [OCTOBER 13, 2010]) --------------------------------------------------------------(a) Protest (ii) Rendition of decision by Commissioner (a) Denial of protest (1) Commissioner's actions equivalent to denial of protest (a) Filing of criminal action against taxpayer (b) Issuing a warrant of distraint and levy (2) Inaction by Commissioner (iii) Remedies of taxpayer to action by Commissioner (a) In case of denial of protest 20 Further, the formality of a control number in the assessment notice is not a requirement for its validity; rather, it is the contents thereof which should inform the taxpayer of the declaration of deficiency tax. (b) In case of inaction by Commissioner within 180 days from submission of documents (c) Effect of failure to appeal --------------------------------------------------------------Read Section 228, Tax Code Note: First, I'll give you an overview of the steps in an administrative protest and then we'll go to the topics in the outline. Q: Outline the steps in disputing an assessment starting from the filing of the return until the appeal to the Supreme Court. 1. Filing of the Return - the prescriptive period begins on the date of filing or the last day required by law, whichever is later 2. Issuance of LA - served on the taxpayer within 30 days from issuance 3. Audit - within 120 days from date of receipt of LA by taxpayer 4. Notice of Informal Conference - taxpayer submits explanation within 15 days from receipt of notice 21 5. Preliminary Assessment Notice (PAN) - taxpayer submits reply within 15 days from receipt of notice 6. Final Assessment Notice 7. Taxpayer files protest within 30 days from receipt of FAN and Formal Letter of Demand 8. Relevant supporting documents submitted within 60 days from filing of letter of protest 9. CIR's denial of protest or inaction for 180 days 10. Appeal to CTA Division within 30 days from date of receipt of CIR's denial or from the lapse of the 180 days of inaction counted from submission of documents to the CIR.
The CTA Division has to decide the case within 30 days after submission for decision. A Motion for Reconsideration or New Trial may be filed with the CTA Division within 15 days from receipt of the decision. 11. Appeal to CTA En Banc within 15 days from receipt of resolution. 12. Appeal to the SC within 15 days from receipt of resolution, under Rule 45 -------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------(1) Protested assessment --------------------------------------------------------------Q: What is a protested assessment? A protested assessment, or a disputed assessment, is one where the taxpayer questions an assessment and asks the BIR to reconsider or cancel the same because he believes he is not liable therefor --------------------------------------------------------------(2) When to file a protest --------------------------------------------------------------Q: When should a taxpayer file a protest with the CIR? The taxpayer or his duly authorized representative may protest administratively against the formal letter of demand and assessment notice within thirty (30) days from date of receipt thereof. (see RR No. 12-99) 21 When a PAN is not required, a final assessment notice will be issued upon the filing of the return. The taxpayer shall be required to pay the deficiency tax or taxes attributable to the undisputed issues, inclusive of the applicable surcharge and/or interest. No action shall be taken on the taxpayer's disputed issues until the taxpayer has paid the deficiency tax or taxes attributable to the said undisputed issues. (see RR No. 12-99) Note: In contrast, payment prior to protest is required in real property taxes and customs duties. (3) The period utilized for reinvestigation is deducted from the period within which to collect. (see REPUBLIC V. LOPEZ [MARCH 30, 1963]) 22 The running of the statute of limitations shall not be suspended or interrupted unless the taxpayer's request for reinvestigation is acted upon by the Commissioner. BRAVO ALABANG, INC. VS.
COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 8199, NOVEMBER 29, 2012 --------------------------------------------------------------(3) Forms of protest --------------------------------------------------------------Q: What are the two ways of protesting an assessment notice for an internal revenue tax? (Two forms of protest) 1. Request for Reconsideration - refers to a plea for re-evaluation of an assessment on the basis of existing records, without need of additional evidence. It may involve a question of fact or of law or both 2. Request for Reinvestigation - refers to a plea for re-evaluation of an assessment on the basis of newly discovered evidence or additional evidence that a taxpayer intends to present in the reinvestigation. It may also involve a question of fact or of law or both (see RR No. 12-85) Q: What happens if the CIR does not consider or act upon the request for reinvestigation? As there was no evidence that the request was considered or acted upon, it did not suspend the running of the period for filing an action for collection (see REPUBLIC V. ABECEDO [MARCH 29, 1968]) In BPI V. CA [OCTOBER 17, 2005], as reiterated in BPI V. CA [MARCH 17, 2008], the Supreme Court emphasized that the BIR must first grant the request for reinvestigation as a requirement for the suspension of the statute of limitations. Q: What is the difference between a request for reinvestigation and a request for reconsideration for purposes of tolling the running of the prescriptive period? It is the request for reinvestigation acted upon which suspends the prescriptive period to collect. A request for reconsideration does not toll the prescriptive period (see BPI V. CIR [OCTOBER 17, 2005]; CIR V. PHILIPPINE GLOBAL COMMUNICATIONS [OCTOBER 31, 2006]) Note: (1) The ruling in CIR V. CAPITOL SUBDIVISION [APRIL 30, 1964], to the effect that the prescriptive period to collect a deficiency tax is interrupted when there is a request for review or reconsideration, is no longer controlling.
(2) Why does a request for reinvestigation toll the running of the prescriptive period? Well, a reinvestigation will take more time because the BIR needs to receive and evaluate additional evidence. Q: Can a taxpayer invoke the defense of prescription when he made repeated requests for reinvestigation and repeated requests for extension of time to pay? No. As explained by the Supreme Court in REPUBLIC V. ARCACHE [FEBRUARY 29, 1964]: While we may agree with the Court of Tax Appeals that a mere request for re-examination or re-investigation may not have the effect of suspending the running of the period of limitation, for in such a case there is need of a written agreement to extend the period between the Collector and the taxpayer, there are cases however where a taxpayer may be prevented from setting up the defense of prescription even if he has not previously waived it in writing, as when by his repeated requests or positive acts the Government has been, for good reasons, persuaded to postpone collection to make him feel that the demand was not unreasonable or that no harassment or injustice is meant by the Government. 22 Example: If the assessment was made on 1/1/2000 and collection was made on 1/1/2006, but it was shown that from 1/1/2000 to 1/1/2002, or a period of 2 years, the assessment was being reinvestigated, the action to collect has not yet prescribed, since deducting the 2-year period when the reinvestigation was made will only amount to 4 years, which is still within the 5-year period to collect. --------------------------------------------------------------(4) Content and validity of protest --------------------------------------------------------------Q: What are the requirements for the validity of a taxpayer's protest? 1. Must be in writing and addressed to the CIR 2. Must contain the information required, namely: a. Name of the taxpayer and address for the immediate past 3 taxable years b.
Nature of the request, specifying the newly discovered evidence he intends to present
c. Taxable periods covered by the assessment
d. Amount and kind of tax involved and the assessment notice number
e. Date of receipt of the assessment notice or letter of demand
f. Itemized statement of the findings to which the taxpayer agrees (if any) as basis for computing the tax due, which must be paid immediately upon filing of the protest
g. Itemized schedule of the adjustments with which the taxpayer does not agree
3. The taxpayer must not only show the errors of the BIR but also the correct computation through:
a. A statement of the facts, the applicable law, rules and regulations, or jurisprudence on which the taxpayer's protest is based. Otherwise, his protest shall be considered void and without force and effect.
b. If there are several issues involved in the disputed assessment and the taxpayer fails to state the facts, the applicable law, rules and regulations, or jurisprudence in support of his protest against some of the several issues on which the assessment is based, the same shall be considered undisputed issue or issues, in which case the taxpayer shall be required to pay the corresponding deficiency tax or taxes attributable thereto
4. It must be filed within the reglementary period of 30 days from receipt of the notice of assessment

--------------------------------------------------------------
(b) Effect of failure to protest
--------------------------------------------------------------

Q: What is the effect of failure to protest the FAN?

If the taxpayer fails to file a valid protest against the formal letter of demand and assessment notice within thirty (30) days from date of receipt thereof, the assessment shall become final, executory and demandable.
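The 30-day reglementary period lends itself to a simple date computation. A minimal sketch (the helper name is mine, not from the source; it assumes plain calendar-day counting and ignores any rule moving deadlines that fall on non-working days):

```python
from datetime import date, timedelta

def protest_deadline(receipt_of_fan, period_days=30):
    """Last day to file the protest, counted from receipt of the FAN.

    Sketch only: plain calendar-day counting, excluding the day of
    receipt and including the last day of the period.
    """
    return receipt_of_fan + timedelta(days=period_days)

# FAN received on 1 March 2013: the protest is due on 31 March 2013.
print(protest_deadline(date(2013, 3, 1)))  # 2013-03-31
```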
--------------------------------------------------------------
(c) Period provided for the protest to be acted upon
--------------------------------------------------------------

Q: What is the period for the CIR to act upon a valid protest against the FAN?

The CIR or his duly authorized representative may act on the taxpayer's protest within 180 days from the date of submission by the taxpayer of the required documents in support of his protest.

Note: The 30-day period to appeal set by Section 228 of the NIRC, as amended, should be reckoned from the lapse of the 180-day period for the BIR to act on the protest without any decision having been rendered, and not from the date the taxpayer received the Final Demand and Assessment Notice (LA FLOR DELA ISABELA, INC. V. CIR [C.T.A. EB NO. 672, FEBRUARY 02, 2012])

--------------------------------------------------------------
(iii) Remedies of taxpayer to action by Commissioner
(a) In case of denial of protest
(b) In case of inaction by Commissioner within 180 days from submission of documents
(c) Effect of failure to appeal
--------------------------------------------------------------

--------------------------------------------------------------
(a) In case of denial of protest
--------------------------------------------------------------

Q: What are the remedies of the taxpayer if the protest is denied?

1. Appeal to the CTA within 30 days from date of receipt of the said decision. Otherwise, the assessment becomes final, executory and demandable.
2. Instead of appealing to the CTA at once, the taxpayer may first opt to file a MR of the denial of the administrative protest with the CIR. If the MR is denied, the taxpayer may then appeal to the CTA, but only within the remaining period of the original 30-day period to appeal (if any) (see FISHWEALTH CANNING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [JANUARY 21, 2010])

Note: This is not the MR being contemplated in FISHWEALTH CANNING CORPORATION VS.
COMMISSIONER OF INTERNAL REVENUE [JANUARY 21, 2010], which tolls the running of the period to appeal to the CTA.

Q: Enumerate some acts of the CIR that may be considered as denial of the taxpayer's protest.

1. An indication to the taxpayer by the CIR, in clear and unequivocal language, of his final denial, not the issuance of the warrant of distraint and levy. What is the subject of the appeal is the final decision, not the warrant of distraint. (CIR v. Union Shipping [May 21, 1990])
2. Filing by the BIR of a civil suit for collection of the deficiency tax is considered a denial of the request for reconsideration (CIR v. Union Shipping [May 21, 1990])
3. Filing of a criminal action against the taxpayer (Ibid)
4. A BIR demand letter sent to the taxpayer after his protest of the assessment notice is considered the final decision of the CIR on the protest (Surigao Electric v. CTA [57 SCRA 523])
5. A letter of the CIR reiterating to a taxpayer his previous demand to pay an assessment is considered a denial of the request for reconsideration or protest and is appealable to the CTA (CIR v. Ayala Securities [70 SCRA 204])
6. A final notice before seizure is considered the CIR's decision on the request for reconsideration of a taxpayer who received no other response. (CIR v. Isabela Cultural Corp [July 11, 2001])

Q: What is the remedy of the taxpayer if it is the duly authorized representative of the CIR who denied the protest?

The taxpayer may elevate his protest to the CIR within 30 days from date of receipt of the final decision of the Commissioner's duly authorized representative, since the latter's decision is not considered final, executory and demandable.

Q: The BIR issued a Formal Letter of Demand which stated: "The opinions promulgated by the Secretary of Justice are advisory in nature and any aggrieved party has the court for recourse." The taxpayer did not protest the assessment and instead filed a Petition for Review with the CTA. Is the taxpayer correct?

Yes.
Estoppel is an exception to the doctrine of exhaustion of administrative remedies, as when the wording of the Formal Letter of Demand with Assessment Notices led the taxpayer to believe that it was in fact a final decision of the CIR. The statement of the BIR led the taxpayer to believe that only a final judicial ruling in its favor would be accepted by the CIR (ALLIED BANK V. CIR [FEBRUARY 5, 2010])

Page 77 of 164
Last Updated: 30 July 2013 (v3)

--------------------------------------------------------------
(b) In case of inaction by Commissioner within 180 days from submission of documents
--------------------------------------------------------------

Q: What happens if the protest is not acted upon within 180 days by the CIR?

1. File a petition for review with the CTA within 30 days after the expiration of the 180-day period
2. Await the final decision of the CIR on the disputed assessment and appeal such final decision to the CTA within 30 days after receipt of a copy of such decision (see CIR V. FIRST EXPRESS PAWNSHOP COMPANY, INC [JUNE 16, 2009]; RCBC V. CA [APRIL 24, 2007])

In one case, the assessment was declared final by a Director of the BIR for the reason that the case was not elevated to the Court of Tax Appeals as mandated by the provisions of the last paragraph of Section 228 of the Tax Code; by virtue thereof, the said assessment notice was held to have become final, executory and demandable.

HELD: The Supreme Court held that it is not correct to say that the assessment became final and executory by the sole reason that the taxpayer failed to appeal the inaction of the Commissioner within 30 days after the 180-day reglementary period, because in effect it would limit the remedy of the taxpayer under Section 228 of the NIRC to just one, that is, to appeal the inaction of the Commissioner on its protested assessment after the lapse of the 180-day period.
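The first option's timeline can be sketched as a date computation (the helper name is mine; the 180-day and 30-day periods are those of Section 228 as described above, and the sketch ignores non-working-day adjustments):

```python
from datetime import date, timedelta

def appeal_window_on_inaction(docs_submitted):
    """Appeal window when the CIR does not act on the protest.

    Sketch of option 1: the CIR has 180 days from the taxpayer's
    submission of supporting documents; if no decision issues, the
    taxpayer has 30 days from the lapse of that period to file a
    petition for review with the CTA.
    """
    lapse_of_180 = docs_submitted + timedelta(days=180)
    last_day_to_appeal = lapse_of_180 + timedelta(days=30)
    return lapse_of_180, last_day_to_appeal

# Supporting documents submitted on 1 January 2013.
lapse, last_day = appeal_window_on_inaction(date(2013, 1, 1))
print(lapse)     # 2013-06-30
print(last_day)  # 2013-07-30
```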
--------------------------------------------------------------
(c) Effect of failure to appeal
--------------------------------------------------------------

Q: What is the effect of the failure of the taxpayer to appeal the denial of the protest by the CIR to the CTA in due time?

Failure of the taxpayer to appeal to the CTA in due time makes the assessments in question final, executory and demandable (see DAYRIT V. CRUZ 23 [SEPTEMBER 26, 1988]). The taxpayer's failure to file a petition for review with the CTA within the statutory period renders the disputed assessment final, executory and demandable (PHILIPPINE DREAM COMPANY, INC. VS. BUREAU OF INTERNAL REVENUE, CTA CASE NO. 7700, DECEMBER 06, 2012)

Q: If a taxpayer files out of time his petition for review with the CTA, can he wait for the final decision of the CIR and then appeal the same to the CTA?

No. After availing of the first option (filing of the petition for review), which was however filed out of time, a taxpayer cannot successfully resort to the second option (await the final decision and appeal the same to the CTA) on the pretext that there is yet no final decision on the disputed assessment because of the CIR's inaction. (see also LASCONA LAND V. CIR [MARCH 5, 2012])

Q: Will the failure of the taxpayer to appeal the inaction result in the finality of the FAN?

No. The failure of the taxpayer to appeal the inaction on the disputed assessment by the CIR or his representative within 30 days after the lapse of the 180-day period from the submission of supporting documents will not result in the finality of the FAN (see RCBC V. CA [APRIL 24, 2007])

Q: Is the requirement that the appeal of the decision of the CIR to the CTA be brought within 30 days jurisdictional?

23 The Court also stated that a suit for collection of internal revenue taxes where the assessment has already become final and executory is akin to an action to enforce a judgment.

In RCBC V.
CIR [JUNE 16, 2006], the Supreme Court held that while the right to appeal a decision of the CIR to the CTA is merely a statutory remedy, nevertheless the requirement that it must be brought within 30 days is jurisdictional. If a statutory remedy provides as a condition precedent that the action to enforce it must be commenced within a prescribed time, such requirement is jurisdictional, and failure to comply may be raised in a motion to dismiss.

Note: From here on, you will notice that I have already deviated from the order in the bar syllabus. I'll integrate the discussion of Collection and Government Remedies as they are closely related. In fact, the government remedies are meant to ensure collection. But before that, I want to dispose of the topic of injunctions. This is just a review. We already discussed this in General Principles.

--------------------------------------------------------------
b) Collection
(i) Requisites
(ii) Prescriptive periods
--------------------------------------------------------------

Read Section 203, 222-223, Tax Code

Q: What are the requisites for the collection of taxes?

We must make a distinction between delinquency tax and deficiency tax.

1. Delinquency tax can be immediately collected administratively, through issuance of a warrant of distraint or levy, and/or through judicial action (see Section 205, Tax Code)
2. Deficiency tax can also be collected through administrative and/or judicial remedies, but collection has to go through the process of the filing of a protest by the taxpayer against the assessment and the denial of such protest by the CIR.

--------------------------------------------------------------
(v) Non-availability of injunction to restrain collection of tax
--------------------------------------------------------------

Read Section 218, Tax Code

Q: Can an injunction be issued to restrain the collection of any internal revenue tax, fee or charge?

General Rule: No court can issue an injunction, as provided under Section 218, Tax Code.
Exception: Section 11, RA 9282 provides that an injunction may be issued by the CTA to restrain the collection of taxes when, in the opinion of the Court, the collection may jeopardize the interest of the Government and/or the taxpayer; the Court, at any stage of the proceeding, may suspend the said collection and require the taxpayer either to deposit the amount claimed or to file a surety bond for not more than double the amount with the Court.

Note:
(1) TROs and injunctions issued by courts other than the CTA against the BIR should be annulled and cancelled for lack of jurisdiction (see RMO 042-10 [MAY 4, 2010]).
(2) As held in ANGELES CITY V. ANGELES ELECTRIC CORPORATION [JUNE 29, 2010], the prohibition on the issuance of a writ of injunction to enjoin the collection of taxes applies only to national internal revenue taxes, not to local taxes. However, the Supreme Court noted that such injunctions enjoining the collection of local taxes are frowned upon.

Q: When may collection of taxes be made?

It may be made within 5 years from assessment.

Q: Summarize the prescriptive periods for the collection of taxes.

                          Regular ITR                          No ITR / False ITR / Fraudulent ITR

Collection w/ prior       Assess within 3 years from actual    Assess within 10 years from
assessment                filing or last day to file,          discovery of the fraud, falsity
                          whichever is later.                  or omission.
                          Collect within 5 years from date     Collect within 5 years from date
                          of assessment, by summary or         of assessment, by summary or
                          judicial proceedings. 24             judicial proceedings.

Collection w/o prior      This cannot be done anymore,         Collect within 10 years from the
assessment                because there must be an             date of discovery of the falsity,
                          assessment before collection for     fraud or omission, by judicial
                          a regular ITR.                       proceedings only.

24 The rule is to the effect that once there is already an assessment, the period to collect is always 5 years, even if the return is fraudulent, false, or was not filed.

Note: Apparently, there is a conflict as to the proper prescriptive period for collecting taxes when a return was filed by the taxpayer and such return is not false or fraudulent. Domondon says it is 3 years. Sababan, Mamalateo and Dimaampao say that it is 5 years. Gruba and Montero adhere to this view. 5 years, it is then! Majority wins.

Q: A tax was assessed on September 27, 1999. The CIR filed a suit to collect deficiency taxes on December 27, 2009. The CIR claims that there was a waiver of the 5-year prescriptive period and presented a waiver dated December 17, 2005. Is the waiver valid?

No. As held in REPUBLIC V. ACEBEDO [MARCH 29, 1968], the waiver must be executed within the 5-year period. A waiver executed beyond the five-year limitation is ineffective and, as such, the CIR can no longer revive the right of action.

Q: What are the alternatives of the CIR in cases of a false or fraudulent return, or the failure to file a return, in terms of collection?

As held in REPUBLIC V. RET [MARCH 31, 1962], 25 the CIR has two alternatives:

1. Assess the tax within 10 years from the discovery of the falsity, fraud or failure, and then collect within 5 years by judicial or summary proceedings
2. Do not assess and instead collect the tax, without assessment, within 10 years from the discovery of the falsity, fraud or failure, by judicial proceedings only.

Thus, when there is an assessment, the 10-year period to collect from discovery of the falsity, fraud or failure is not applicable.

Q: What is the effect of the failure of the waiver to bear the written consent of the CIR?

In CIR V. CA [FEBRUARY 25, 1999], the Supreme Court reiterated that a waiver of the five-year prescriptive period must be in writing and signed by both the BIR Commissioner and the taxpayer. Hence, a waiver which does not have the consent of the CIR is invalid and without any binding effect.
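The prescriptive-period rules summarized above can be sketched as a small decision helper (the function and its names are mine; it adopts the 5-years-from-assessment view the reviewer follows and ignores suspensions of the period, such as a granted reinvestigation or a valid waiver):

```python
from datetime import date

def add_years(d, years):
    # Simple year arithmetic; ignores the 29 February edge case.
    return d.replace(year=d.year + years)

def collection_deadline(assessment_date, discovery_date=None):
    """Last day to collect, per the summary table above (a sketch).

    With a prior assessment, collection must be made within 5 years
    from the date of assessment, whether or not the return was false,
    fraudulent or not filed at all. Without an assessment (possible
    only for a false/fraudulent/unfiled return), collection must be by
    judicial proceedings within 10 years from discovery.
    """
    if assessment_date is not None:
        return add_years(assessment_date, 5)
    if discovery_date is not None:
        return add_years(discovery_date, 10)
    raise ValueError("need an assessment date or a discovery date")

print(collection_deadline(date(2000, 1, 1)))        # 2005-01-01
print(collection_deadline(None, date(2000, 1, 1)))  # 2010-01-01
```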
Q: How should the waiver be construed when the specified period in the waiver refers to both assessment and collection?

If the waiver refers to both assessment and collection, and interpreting it as such will in effect shorten the collection period, then the waiver is deemed to refer to assessment only and not to collection (see REPUBLIC V. LIM DE YU [APRIL 30, 1964])

Q: The CIR maintains that the prescription of his right to collect the amount of deficiency taxes is governed by Article 1145 of the Civil Code, which gives him 6 years. Is the CIR correct?

No. As held in GUAGUA ELECTRIC LIGHT COMPANY V. CIR [APRIL 24, 1967], the right to assess and collect is governed by the Tax Code and not by Article 1145 of the Civil Code. A special law (the Tax Code) shall prevail over a general law (the Civil Code).

Q: Can a letter of demand be deemed an assessment such that the 5-year period for collection shall commence from the time such letter was sent?

Yes. In REPUBLIC V. LIMACO & DE GUZMAN [AUGUST 31, 1962], the Supreme Court held that a letter of demand should be deemed an assessment if it declares and fixes the tax to be payable against the party liable thereto and demands the settlement thereof. Hence, the 5-year period for collection of the tax due should commence anew from the time said letter of demand was sent to the taxpayer.

25 In the said case, the Supreme Court noted that Section 332 (now Section 222) does not apply in the collection of income taxes by summary proceedings. But when the collection of income taxes is to be effected by court action, the provision is controlling.

Q: What is the effect of the pendency of an appeal on the running of the prescriptive period?

Under SECTION 223 OF THE TAX CODE, the running of the prescriptive period to collect deficiency taxes shall be suspended for the period during which the CIR is prohibited from beginning a distraint or levy or instituting a proceeding in court, and for 60 days thereafter. In REPUBLIC V. KER & CO.
[SEPTEMBER 29, 1966], the Supreme Court held that the pendency of a taxpayer's appeal has the effect of temporarily staying the hands of the CIR. The running of the prescriptive period is suspended.

In PROTECTOR'S SERVICES V. CA [APRIL 12, 2000], the Supreme Court held that the act of a taxpayer in filing a petition before the CTA to prevent the collection of the assessed deficiency tax, and in elevating the case to the Supreme Court for review after the CTA dismissed the petition, suspended the running of the statute of limitations.

Q: Is the government barred by prescription from claiming deficiency taxes against an estate?

No. In VERA V. FERNANDEZ [MARCH 30, 1979], the Supreme Court held that claims for taxes are collectible even after distribution of the decedent's estate among his heirs, who are liable, in proportion to their share in the inheritance, for the payment of the taxes. Claims for taxes against the estate are excepted from the statute of non-claims and are not barred forever.

Q: An informer filed a case with the CTA against the taxpayer and the BIR. The informer was seeking to (1) declare the taxpayer as having an assessment; and (2) as a consequence, to collect his informer's reward. This case was filed by the informer within 3 years from the time that the taxpayer filed his return. However, apart from this action initiated by the informer, no other action was filed by the government seeking to collect against the taxpayer. Has the right to collect already prescribed?

No. In PNOC V. CA [APRIL 26, 2005], the Supreme Court held that the BIR is deemed to be compliant with the requirement that collection be made within 5 years from the time of assessment, since if the informant won, the CTA would have ordered the erring parties to pay the tax. At the very least, the filing by the informer of the case would have suspended the running of the period, because the BIR is prohibited from making collection while there is a pending case.
PIERRE MARTIN DE LEON REYES
Ateneo Law Batch 2013

--------------------------------------------------------------
2. Government Remedies
a) Administrative Remedies
(i) Tax lien
(ii) Compromise and Abatement
(a) Authority of the Commissioner to compromise and abate taxes
(b) Compromise
(c) Abatement
(iii) Distraint of personal property including garnishment
(a) Summary remedy of distraint of personal property
(1) Purchase by the government at sale upon distraint
(2) Report of sale to the BIR
(3) Constructive distraint to protect the interest of the government
(iv) Summary remedy of levy on real property
(1) Advertisement and sale
(2) Redemption of property sold
(3) Final deed of purchaser
(v) Forfeiture to government for want of bidder
(a) Remedy of enforcement of forfeitures
(1) Action to contest forfeiture of chattel
(b) Resale of real estate taken for taxes
(c) When property to be sold or destroyed
(d) Disposition of funds recovered in legal proceedings or obtained from forfeiture
(vi) Further distraint or levy
(vii) Suspension of business operation
(viii) Statutory offenses and penalties
b) Judicial Remedies
(i) Civil and criminal actions
(a) Suit to recover tax based on false and fraudulent returns
--------------------------------------------------------------

Read Section 205, Tax Code

Q: What are the remedies of the government for the collection of taxes?

1. Administrative Remedies
a. Tax lien
b. Distraint of personal property, levy of real property, or garnishment of bank deposits
c. Sale of property
d. Forfeiture
e. Compromise and abatement
f. Penalties and fines
g. Suspension of business operations
2. Judicial Remedies
a. Civil action
b. Criminal action

--------------------------------------------------------------
(i) Tax lien
--------------------------------------------------------------

Read Section 219, Tax Code

Q: What is a tax lien?

It is a legal claim or charge on property, real or personal, established by law as security in default of the payment of tax (HSBC v. Rafferty [39 Phil. 105])

Q: The CIR served a warrant of distraint over four barges owned by ABC Company to satisfy various deficiency taxes. Later, the same four barges were levied upon execution to satisfy a judgment for unpaid wages and other benefits of the employees of ABC Company. Which claim is superior?

The claim of the government is superior. As held in CIR V. NLRC [NOVEMBER 9, 1994], reiterating the doctrine laid down in REPUBLIC V. ENRIQUEZ [OCTOBER 21, 1988], the claim of the government predicated on a tax lien is superior to the claim of a private litigant predicated on a judgment. The tax lien attaches not only from the service of the warrant of distraint of personal property but from the time the tax became due and payable. In both cases, the distraint was made long before the writ of execution was issued to implement the levy on execution.

--------------------------------------------------------------
(ii) Compromise and Abatement
(a) Authority of the Commissioner to compromise and abate taxes
(b) Compromise
(c) Abatement
--------------------------------------------------------------

Read Section 204, Tax Code

--------------------------------------------------------------
(b) Compromise
--------------------------------------------------------------

Q: What is a compromise?

A compromise is an agreement whereby the parties, by making reciprocal concessions, avoid a litigation or put an end to one already commenced (see ART. 2028, CIVIL CODE)

Q: What are the grounds for the compromise of payment of internal revenue taxes?

1. Doubtful validity of the assessment
2. Financial incapacity

Note: Refer to RR 30-2002 [December 16, 2002] for the instances where the tax can be compromised under these two grounds.

The following cases may be compromised:
1. Delinquent accounts
2. Cases under administrative protest after issuance of the FAN to the taxpayer which are still pending in the RO, RDO, Legal Service, Large Taxpayer Service, Collection Service, Enforcement Service and other offices of the National Office
3. Civil tax cases being disputed before the courts
4. Collection cases filed in courts
5. Criminal violations, other than those already filed in court or those involving criminal tax fraud

The following cases may not be compromised:
1. Withholding tax cases, unless the applicant-taxpayer invokes provisions of law that cast doubt on the taxpayer's obligation to withhold
2. Criminal tax fraud cases confirmed as such by the CIR or his duly authorized representative
3. Delinquent accounts with a duly approved schedule of installment payments
4. Cases where the final reports of reinvestigation or reconsideration have been issued resulting in a reduction of the original assessment and the taxpayer is agreeable to such decision, signing the required agreement form for the purpose. On the other hand, other protested cases shall be handled by the Regional Evaluation Board or the National Evaluation Board on a case-to-case basis
5. Cases which become final and executory after final judgment of a court where compromise is requested on the ground of financial incapacity of the taxpayer. At this stage of judicial proceedings, where a final judgment has already been rendered, there is nothing to compromise, as the Government has definitely and finally won the litigation.

Q: Can the compromise offer of a taxpayer be lower than the prescribed rates?

Yes, but the approval of the Evaluation Board, which is composed of the CIR and the 4 Deputy Commissioners, is required.

Note: The Evaluation Board must also approve the compromise if the basic tax involved exceeds P1 million.
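Section 204 of the Tax Code itself fixes the prescribed minimum compromise rates: 10% of the basic assessed tax for financial incapacity and 40% for other grounds such as doubtful validity. The Evaluation Board rule above can be sketched as follows (the helper is mine, not an official formula):

```python
def needs_evaluation_board(basic_tax, offer, ground):
    """Whether a compromise needs approval by the Evaluation Board
    (the CIR plus the 4 Deputy Commissioners), per Section 204, NIRC:
    required when the basic tax exceeds P1,000,000 or when the offer
    falls below the prescribed minimum rate (10% for financial
    incapacity, 40% for other grounds).
    """
    minimum_rate = 0.10 if ground == "financial incapacity" else 0.40
    below_minimum = offer < basic_tax * minimum_rate
    return basic_tax > 1_000_000 or below_minimum

print(needs_evaluation_board(500_000, 250_000, "doubtful validity"))    # False
print(needs_evaluation_board(500_000, 100_000, "doubtful validity"))    # True
print(needs_evaluation_board(2_000_000, 900_000, "doubtful validity"))  # True
```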
--------------------------------------------------------------
(iii) Distraint of personal property including garnishment
(a) Summary remedy of distraint of personal property
(1) Purchase by the government at sale upon distraint
(2) Report of sale to the BIR
(3) Constructive distraint to protect the interest of the government
(iv) Summary remedy of levy on real property
(1) Advertisement and sale
(2) Redemption of property sold
(3) Final deed of purchaser
(v) Forfeiture to government for want of bidder
(a) Remedy of enforcement of forfeitures
(1) Action to contest forfeiture of chattel
(b) Resale of real estate taken for taxes
(c) When property to be sold or destroyed
(d) Disposition of funds recovered in legal proceedings or obtained from forfeiture
(vi) Further distraint or levy
--------------------------------------------------------------

Q: What are the requisites for a valid distraint and levy?

1. The taxpayer must be delinquent
2. There must be a subsequent demand for payment
3. The taxpayer must fail to pay the delinquent tax at the time required
4. The period within which to collect the tax has not yet prescribed

Read Section 206, Tax Code

Q: In what instances can the CIR place the property of a taxpayer under constructive distraint? 26

1. The taxpayer is delinquent
2. The taxpayer is retiring from any business subject to tax
3. The taxpayer is intending to leave the Philippines
4. The taxpayer is intending to remove his property therefrom
5. The taxpayer is intending to hide or conceal his property
6. The taxpayer is intending to perform any act tending to obstruct the proceedings for collecting the tax due or which may be due from him

26 In a constructive distraint, the taxpayer or any person having possession or control of the property will sign a receipt covering the property distrained and obligate himself to preserve the same intact and unaltered and not to dispose of the same without authority from the CIR.

Procedure for the actual distraint of personal property:

1. The person owing any delinquent tax fails to pay within the time required.
2. Sufficient personal property is seized to satisfy the tax, charges and expenses of seizure: by the Commissioner if the delinquent tax is more than Php 1 million, or by the RDO if it is Php 1 million or less (Section 207(A), Tax Code)
3. The Distraining Officer accounts for the goods distrained (Section 208, Tax Code)
4. The RDO posts a notice in at least 2 public places in the municipality/city where the distraint is made. One place of posting must be at the mayor's office. The time of sale shall not be less than 20 days after the notice (Section 209, Tax Code)
5. The goods shall be restored to the owner if the charges are paid (Section 210, Tax Code)
6. The officer conducts a public auction.

Note: The next steps depend on whether the bid is less than the amount of the tax/FMV of the goods distrained.

If the bid is equal to or more than the amount of the tax/FMV of the goods distrained:
7. The officer sells the goods to the highest bidder for cash or, with the Commissioner's approval, through commodity/stock exchanges (Section 209, Tax Code)
8. The excess of the proceeds over the entire claim shall be returned to the owner. No charge shall be imposed for the services of the officer (Section 209, Tax Code)
9. Within 2 days after the sale, the officer shall report to the Commissioner (Section 211, Tax Code)
10. Within 5 days after the sale, the distraining officer shall enter a return of the proceedings in the records of the RCO, RDO and RRD (Section 213, Tax Code)

If the bid is less than the amount of the tax/FMV of the goods distrained:
7. The Commissioner may purchase the property for the National Government (Section 212, Tax Code)
8. The property may be resold, and the net proceeds shall be remitted to the National Treasury as internal revenue (Section 212, Tax Code)

Note: If the personal property of the taxpayer is not sufficient to satisfy his tax delinquency, the CIR or his authorized representative shall, within 30 days after execution of the distraint, proceed with the levy on the taxpayer's real property. (Section 207(B), Tax Code)

On the levy and sale of real property:
9. Within 1 year from the date of sale, the owner may redeem the property by paying to the RDO the amount of the taxes, penalties, and interest thereon from the date of delinquency to the date of sale, together with interest on the purchase price at 15% per annum from the date of purchase to the date of redemption (Section 214, Tax Code)
10. The owner shall not be deprived of the possession of the property, and shall be entitled to its fruits, until the 1-year redemption period expires (Section 214, Tax Code)

In case of forfeiture to the government for want of a bidder, the taxpayer may redeem said property by paying the full amount of the taxes and charges (Section 215, Tax Code). The Commissioner may, after 20 days' notice, sell the forfeited property at public auction or at private sale with the approval of the Secretary of Finance; the proceeds shall be deposited with the National Treasury (Section 216, Tax Code)

Note:
(1) For the foreclosed asset of natural persons, the period within which to pay CGT or CWT and DST on the foreclosure of a Real Estate Mortgage shall be reckoned from the date of registration of the sale in the Office of the Register of Deeds. For juridical persons in an extrajudicial foreclosure, Section 47 of the General Banking Law provides that the right of redemption shall be until, but not after, the registration of the certificate of sale with the Register of Deeds, which in no case shall be more than 3 months after foreclosure, whichever is earlier. (RMC No. 55-2011 [November 10, 2011])
(2) The right of redemption shall be reckoned from the approval of the executive judge (CIR v. UPCB [October 23, 2009])

2.
By filing an answer to the petition for review filed by the taxpayer with the CTA

Q: Which court has exclusive original jurisdiction in tax collection cases involving final and executory assessments for taxes, fees, charges and penalties?

1. The CTA, if the principal amount of taxes and fees, exclusive of charges and penalties, is Php 1 million and above.
2. The proper MTC or RTC, if the principal amount of taxes and fees, exclusive of charges and penalties, is less than Php 1 million.

--------------------------------------------------------------
(vii) Suspension of business operation
--------------------------------------------------------------

Read Section 115, Tax Code

Q: When may the CIR suspend the business operation of a VAT-registered person?

The CIR or his authorized representative may suspend the business operation and temporarily close the business of a VAT-registered person for understatement of taxable sales or receipts by 30% or more of his correct taxable sales or receipts for the taxable quarter.

Note: The duration of the suspension of business operation is for a period of not less than 5 days, and the suspension shall be lifted only upon compliance with whatever requirements are imposed by the CIR in the closure order.

Q: Assuming that the principal amount of taxes and fees is less than Php 1 million, can the lower court acquire jurisdiction over a tax collection case while there is a pending case in the CTA disputing the assessment?

No. As held in YABES V. FLOJO [JULY 20, 1982], the lower court can acquire jurisdiction over a claim for collection of deficiency taxes only after the assessment made by the CIR has become final and unappealable, not where there is still a pending CTA case.

--------------------------------------------------------------
(viii) Statutory offenses and penalties
--------------------------------------------------------------

Note: I already discussed civil penalties or surcharges and interests in Assessment.
As to statutory offenses, I will include them in the discussion of criminal action.
--------------------------------------------------------------
b) Judicial Remedies
(i) Civil and criminal actions
(b) Suit to recover tax based on false and fraudulent returns
--------------------------------------------------------------
Read Sections 220-221, Tax Code

Civil Actions
Q: What are the two ways by which the civil tax liability of a taxpayer is enforced by the government through civil actions?
1. By filing a civil case for the collection of a sum of money with the proper regular court

Q: When an assessment has become final for failure to protest, can the taxpayer still raise the issue of prescription?
Yes. As held in CIR V. HAMBRECHT & QUIST PHILIPPINES [NOVEMBER 17, 2010], the fact that an assessment has become final for failure of the taxpayer to file a protest within the time allowed only means that the validity or correctness of the assessment may no longer be questioned on appeal. However, the validity of the assessment itself is a separate and distinct issue from the issue of whether the right of the CIR to collect the validly assessed tax has prescribed.

Q: Is a decision on a request for reinvestigation a condition precedent to the filing of an action to collect taxes already assessed?
No. In REPUBLIC V. LIM TIAN TENG SONS & CO. [MARCH 31, 1966], the Supreme Court ruled that a decision on a request for reinvestigation is not a condition precedent to the filing of an action to collect taxes already assessed. Nowhere in the Tax Code is the CIR required to rule first on a taxpayer's request for reconsideration before he can go to court for the purpose of collecting the tax assessed. The requirement to rule on disputed assessments before bringing an action for collection applies only where the assessment was actually disputed, with reasons adduced in support thereof.
In this case, the taxpayer did not actually contest the assessment by stating the basis thereof. (see DAYRIT V. CRUZ [SEPTEMBER 26, 1988])

Q: Define willful in the context of the third element of a violation of the Tax Code for failure to make or file the return.
In PEOPLE V. KINTANAR [CTA CRIM. CASE NO. 006, DECEMBER 3, 2010, affirmed by the Supreme Court in a minute resolution [G.R. 196340] dated February 2012], the Supreme Court defined willful in this light: willful in the tax crimes statutes means a voluntary, intentional violation of a known legal duty, and bad faith or bad purpose need not be shown. Further, the Supreme Court stated that an act or omission is "willfully" done if done voluntarily and intentionally and with specific intent to do something the law forbids, or with specific intent to fail to do something the law requires to be done; that is, with bad purpose to either disobey or disregard the law. A willful act may be described as one done intentionally, knowingly and purposely, without justifiable excuse.

As held in PEOPLE OF THE PHILIPPINES VS. JUDY ANNE SANTOS Y LUMAGUI [CTA CRIM. CASE NO. O-012, JANUARY 16, 2012], the element of willful failure to supply correct and accurate information must be fully established as a positive act or state of mind. It cannot be presumed nor attributed to mere inadvertent or negligent acts. Negligence, whether slight or gross, is not equivalent to the fraud with intent to evade the tax contemplated by the law. Fraud must amount to intentional wrongdoing with the sole object of avoiding the tax.

Criminal Actions
Read Sections 254-255, Tax Code
Q: Name the most common crimes punishable under the Tax Code.
1. Attempt to evade or defeat tax (Section 254)
2. Failure to file a return, supply correct and accurate information, pay tax, withhold and remit tax, and refund excess taxes withheld on compensation (Section 255)
Note: As to other statutory offenses, refer to Sections 253 to 282.
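As an illustrative mnemonic only, the formulation of willfulness discussed above (a voluntary, intentional violation of a known legal duty, with mere negligence not sufficing) can be sketched as a checklist. The function and field names below are hypothetical; actual cases turn on evidence, not booleans:

```python
# Illustrative sketch only: encodes the Kintanar/Santos formulation of
# "willful" (voluntary, intentional violation of a known legal duty) as a
# checklist. Field names are hypothetical, not drawn from the cases.

def is_willful(knew_of_duty: bool, voluntary: bool, intentional: bool,
               merely_negligent: bool) -> bool:
    """Willfulness requires a voluntary, intentional violation of a known
    legal duty; negligence, slight or gross, is not equivalent to fraud."""
    if merely_negligent:
        return False
    return knew_of_duty and voluntary and intentional

# A taxpayer who knew of the duty to file and deliberately did not file:
print(is_willful(True, True, True, False))   # → True
# Mere inadvertence or negligence does not establish willfulness:
print(is_willful(True, True, False, True))   # → False
```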
Q: What are the elements of a violation of Section 255 of the Tax Code for failure to make or file a return?
1. The accused is a person required to make or file a return
2. The accused failed to make or file the return at the time required by law
3. The failure to make or file the return was willful
(see PEOPLE V. KINTANAR [CTA CRIM. CASE NO. 006, DECEMBER 3, 2010]; PEOPLE OF THE PHILIPPINES VS. JUDY ANNE SANTOS Y LUMAGUI [CTA CRIM. CASE NO. O-012, JANUARY 16, 2012])

Q: What are the elements of a violation of Section 255 in relation to Sections 253(d) and 256 of the Tax Code for failure of a corporation to make or file a return (holding the corporate officers criminally liable)?
1. The corporate taxpayer is required to pay tax and it failed to pay such tax at the time required by law;
2. The accused is the president, general manager, branch manager, treasurer, officer-in-charge, or employee responsible for the violation of the corporate taxpayer; and
3. The accused willfully fails to pay the corporate taxes.
(PEOPLE OF THE PHILIPPINES VS. JOSEPH TYPINGCO [CTA CRIM. CASE NO. 0-114, MAY 16, 2012])

Q: Is an assessment necessary before a criminal charge can be filed?
No. An assessment is not necessary before a criminal charge can be filed; the criminal charge need only be supported by a prima facie showing of failure to file a required return. This was likewise reiterated in ADAMSON V. CA [MAY 21, 2009], where the Court held that there is no need for precise computation and formal assessment in order for criminal complaints to be filed against the taxpayer. An assessment is not necessary for a criminal prosecution for willful attempt to defeat and evade the income tax.
Note: However, for a criminal prosecution to proceed before assessment, there must be a prima facie showing of a willful attempt to evade taxes (CIR v. Fortune Tobacco [June 4, 1996])

Q: Does the acquittal of the taxpayer from the criminal action affect his liability to pay the tax? No. In REPUBLIC V.
PATANAO [JULY 21, 1967], the Supreme Court held that since the taxpayer's civil liability is not included in the criminal action, his acquittal in the criminal proceeding does not necessarily entail exoneration from his liability to pay taxes. His legal duty to pay taxes cannot be affected by his attempt to evade taxes. Said obligation is not a consequence of the criminal act charged, nor is it a mere civil liability arising from a crime that could be wiped out by judicial declaration of the non-existence of the criminal acts charged.
Note: The Court also stressed that a criminal complaint is instituted not to demand payment, but to penalize the taxpayer for violation of the Tax Code.

Q: What is the effect of satisfaction of the civil liability on the criminal liability in tax cases?
The subsequent satisfaction of the civil liability by payment or prescription does not extinguish the taxpayer's criminal liability.

Q: Can subsidiary imprisonment be imposed as to the tax which the taxpayer is sentenced to pay?
It depends. Subsidiary imprisonment cannot be imposed in case of insolvency on the part of the taxpayer, but it may be imposed in case of failure to pay the fine imposed. (see Section 280, Tax Code)

Read Section 281, Tax Code
Q: What is the prescriptive period for violations of the Tax Code?
All violations of any provision of the Tax Code shall prescribe after 5 years.
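Since all Tax Code violations prescribe after 5 years, the deadline arithmetic is simple. A minimal sketch follows; note that the actual reckoning date in a given case depends on Section 281's rules on discovery and institution of proceedings, which this toy calculation ignores:

```python
# Minimal sketch: the 5-year prescriptive period for violations of the
# Tax Code (Section 281). The real reckoning rules (discovery of the
# violation, institution of judicial proceedings) are not modeled here;
# this only illustrates the date arithmetic.
from datetime import date

def prescription_deadline(reckoning_date: date, years: int = 5) -> date:
    """Return the date on which the prescriptive period lapses."""
    try:
        return reckoning_date.replace(year=reckoning_date.year + years)
    except ValueError:  # 29 February with no leap day in the target year
        return reckoning_date.replace(year=reckoning_date.year + years, day=28)

print(prescription_deadline(date(2010, 4, 15)))  # → 2015-04-15
```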
--------------------------------------------------------------
b) Refund
(i) Grounds and Requisites for refund
(ii) Requirements for refund as laid down by cases
(a) Necessity of written claim for refund
(b) Claim containing a categorical demand for reimbursement
(c) Filing of administrative claim for refund and the suit/proceeding before the CTA within 2 years from date of payment regardless of any supervening cause
(iii) Legal basis of tax refunds
(iv) Statutory basis for tax refund under the Tax Code
(a) Scope of claims for refund
(b) Necessity of proof for claim or refund
(c) Nature of erroneously paid tax/illegally assessed collected
(d) Tax refund vis-à-vis Tax Credit
(e) Essential requisites for claim of refund
(v) Who may claim/apply for tax refund/tax credit
(a) Taxpayer/withholding agents of non-resident foreign corporations
(vi) Prescriptive period for recovery of tax erroneously or illegally collected
(vii) Other consideration affecting tax refunds
--------------------------------------------------------------
Page 90 of 164 Last Updated: 30 July 2013 (v3)

Read Section 282, Tax Code
Q: What is the reward given to persons instrumental to the discovery of violations of the Tax Code?
A sum equivalent to 10% of the revenues, surcharges, or fees recovered and/or fine or penalty imposed and collected, or P1 million, whichever is lower.

Entitlement to Informer's Reward:
1. The offender offered to compromise: Yes
2. No revenue, surcharges or fees were actually recovered: No
3. The information refers to a case: No

Read Section 229, Tax Code
Note: Before we even begin, note that the rules in Section 229, both statutory and jurisprudential, do NOT apply to the refund or tax credit of excess and unutilized input tax (VAT). Section 229 applies to the recovery of erroneously or illegally collected internal revenue taxes. On the other hand, the refund or tax credit of excess and unutilized input tax is governed by Section 112(C).
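The routing rule just stated can be reduced to a line of logic. A hedged sketch, where the claim labels are informal shorthand of mine and not statutory text:

```python
# Sketch of the routing rule in the note above: which Tax Code provision
# governs a given refund claim. The claim labels are informal shorthand.

def governing_provision(claim: str) -> str:
    if claim == "excess/unutilized input VAT":
        return "Section 112(C)"
    if claim == "erroneously or illegally collected tax":
        return "Section 229"
    raise ValueError("unclassified claim")

print(governing_provision("excess/unutilized input VAT"))             # → Section 112(C)
print(governing_provision("erroneously or illegally collected tax"))  # → Section 229
```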
So if what is involved is the recovery of input tax (VAT), apply Section 112(C). If it is the recovery of erroneously or illegally collected internal revenue tax (mostly income taxes), apply Section 229. Remember that. Keep this in mind in our discussion of Refund here. I will first discuss Refund under Section 229 following the order in the syllabus, and then later we will discuss the procedure for a claim for refund under Section 229 and compare it with Section 112(C). Even now, if you look at the cases involving refunds, many still get this wrong. Who is to blame? Well, the Aichi case, which we discussed under VAT. We will see why later.
--------------------------------------------------------------
Q: What are the requirements for a claim of a tax refund or a tax credit?
1. There is a tax collected erroneously or illegally, or a penalty collected without authority, or a sum excessively or wrongfully collected (see Section 229, Tax Code)
2. There must be a written claim for refund filed by the taxpayer with the CIR (see Vda. De Aguinaldo v. CIR [February 26, 1965])
Exceptions (no written claim required):
a. When on the face of the return upon which payment was made, such payment appears clearly to have been erroneously paid, the CIR may refund or credit the tax even without a written claim (Section 229, Tax Code)
b. A return filed showing an overpayment shall be considered as a written claim for credit or refund (Section 204(C), Tax Code)
3. The claim must be a categorical demand for reimbursement (see Bermejo v. CIR [July 25, 1950])
4. The claim for refund must be filed within 2 years from the date of payment of the tax regardless of any supervening cause (Section 229, Tax Code)
5. The taxpayer must show proof of payment of the tax (see CIR v. Li Yao [December 27, 1963])
Note: Payment under protest is not required in order to obtain a refund of erroneously or illegally collected internal revenue taxes.
(Section 229, Tax Code)

As to (3): The idea is, first, to afford the CIR an opportunity to correct the action of subordinate officers and, second, to notify the Government that such taxes have been questioned, and the notice should then be borne in mind in estimating the revenue available for expenditure (see Bermejo v. CIR [July 25, 1950]).

As to (5): Before recovery is allowed, it must be established that there was actual collection and receipt by the government of the tax sought to be recovered.
--------------------------------------------------------------
(i) Grounds and Requisites for refund
--------------------------------------------------------------
Q: What are the grounds for refund or credit of internal revenue taxes?
1. The tax was illegally collected (there is no law that authorizes the collection of the tax)
2. The tax was excessively collected (there is a law that authorizes the collection, but the tax collected was more than what the law allows)
3. The tax was paid through a mistaken belief that the taxpayer should pay the tax (this is a case of solutio indebiti)
--------------------------------------------------------------
(ii) Requirements for refund as laid down by cases
(a) Necessity of written claim for refund
(b) Claim containing a categorical demand for reimbursement
(c) Filing of administrative claim for refund and the suit/proceeding before the CTA within 2 years
(4) For actions for refund of corporate income tax, the two-year prescriptive period is counted from the time of actual filing of the Final Adjustment Return or Annual Income Tax Return, not from the dates when the taxes were paid on a quarterly basis (see CIR V. CA [JANUARY 21, 1999]). It is at this point that it can already be determined whether there has been an overpayment by the taxpayer (see CIR V. PHILAMLIFE [MAY 29, 1995]). See PRHC PROPERTY MANAGERS, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA Case No.
8071, January 6, 2012], where the CTA held that the reckoning of the 2-year prescriptive period for the filing of a claim for refund of excess creditable withholding tax or quarterly income tax starts from the date of filing of the annual income tax return. See also MCKINSEY & CO. (PHILS.) VS. COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 8078, JULY 30, 2012.
--------------------------------------------------------------
(c) Filing of administrative claim for refund and the suit/proceeding before the CTA within 2 years from date of payment regardless of any supervening cause
--------------------------------------------------------------
Q: What is the prescriptive period for recovery of erroneously or illegally collected internal revenue taxes?
The claim for refund must be filed within 2 years from the date of payment of the tax regardless of any supervening cause (Section 229, Tax Code)

Note:
(1) Note Section 56 of the Tax Code, which provides that payment is made at the time the return is filed. But when the final adjustment return was filed earlier than the last day on which it could still be filed, the 2-year period is counted from the date the return was actually filed (CIR v. CA [January 21, 1999])
(2) In case of payments through the withholding tax system, the tax liability is deemed paid when the same falls due at the end of the tax year (Gibbs v. CIR [November 29, 1965])
(3) If the tax is paid in installments, the 2-year prescriptive period is counted from the time of payment of the last installment. As held in CIR V. PALANCA [OCTOBER 29, 1966], where the tax account was paid by installment, the computation of the 2-year prescriptive period should be from the date of the last installment.
The period to file a claim for refund of excess creditable withholding taxes by a tax-exempt entity is not reckoned from the filing of the final adjustment return, but from the time the taxes were erroneously withheld. LISP-1 LOCATORS ASSOCIATION INCORPORATED VS.
COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 7905, NOVEMBER 29, 2012
(5) In case the taxpayer merely made a deposit, the 2-year period is counted from the conversion of the deposit to payment (Union Garment v. Collector [CTA Case No. 416, November 17, 1965]).
(6) For VAT, the 2-year prescriptive period is counted from the close of the taxable quarter when the relevant sales were made (CIR v. Mirant [September 12, 2008])
(7) In case of dissolution of a corporation, the 2-year prescriptive period for refund begins thirty (30) days after the approval by the SEC of its plan for dissolution. (MINDANAO I GEOTHERMAL PARTNERSHIP VS. COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 8250, NOVEMBER 9, 2012)

Q: Is an RMC which extends the 2-year period to file a claim for refund to 10 years valid?
No. The RMC cannot go beyond what is provided in the law, and the State cannot be put into estoppel (see PBCOM V. CIR [JANUARY 28, 1999])

Q: What is the judicial remedy with respect to a refund or recovery of tax erroneously or illegally collected?
The remedy is the filing of a suit or proceeding with the CTA:
1. Within 30 days from receipt of the denial by the CIR of the application for refund
2. Before the expiration of the 2-year prescriptive period.
Note: The 30-day period to appeal to the CTA should be within the 2-year prescriptive period.

Q: What must the taxpayer do in a situation where the CIR is taking time to decide the claim and the period of 2 years is about to end?
If the 2-year period is about to lapse, the taxpayer may already appeal to the CTA even if the CIR has not yet made any decision on the claim for refund. In GIBBS V.
COLLECTOR OF INTERNAL REVENUE [FEBRUARY 29, 1960], the Supreme Court noted that if the CIR takes time in deciding the claim and the period of two years is about to end, the suit or proceeding must be started in the CTA before the end of the 2-year period without awaiting the decision of the CIR. In CIR V. SWEENEY [AUGUST 21, 1959], the Supreme Court stated that taxpayers need not wait for the action of the CIR on the request for refund before taking the matter to court.

Note: (1) The implication of this is that the simultaneous filing of the application for refund/credit with the BIR and the institution of the suit with the CTA is allowed. (2) The rule is different for the refund or tax credit of excess or unutilized input taxes for VAT. In the recovery of excess or unutilized input taxes, the judicial claim is premature if filed before the lapse of the 120-day period for the CIR to act, even if filed within the 2-year prescriptive period. Here, in the refund of erroneously or illegally collected tax, such a filing is not premature; in fact, failure to file the judicial claim within the 2-year period is fatal to the claim. We will discuss this later.

Q: Name some reasons of equity and other special circumstances that jurisprudence has considered to extend the 2-year prescriptive period.
1. When the taxpayer made an advance income tax payment heeding former President Corazon Aquino's call and was made to believe that its request for tax credit would be acted upon favourably, considering that its carry-over was unutilized since the company suffered losses for the next 4 years (see PNB V. CA [OCTOBER 25, 2005])
2. When the taxpayer and the CIR agreed to wait for the result of another case involving the same issue (see PANAY ELECTRIC CO. V. CIR [MAY 28, 1958])
3. When the CIR initially agreed to grant the refund and later denied the same

Q: Will the filing of a supplemental petition be sufficient to toll the prescriptive period for the claim for refund?
It depends. If it was admitted, it would toll the prescriptive period. Otherwise, it would not have the effect of tolling the prescriptive period. In FAR EAST BANK AND TRUST COMPANY V. CIR [MAY 2, 2006], the Supreme Court held that the claim for refund was barred by prescription since the supplemental petition was not admitted. While retirement funds/employment trusts are absolutely exempt from income tax regardless of the nature of the tax, the taxpayer's claim was barred by prescription since the filing of the supplemental petition (and not an original action) was not granted and therefore had no judicial effect to toll the running of the 2-year period. It was only when a subsequent petition for review was filed that the prescriptive period was tolled. Further, this is not a case where the 2-year period can be considered non-jurisdictional, since there are no exceptional or supervening circumstances to speak of.

PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013

Q: If the availment of the tax credit/refund is due for reasons other than the erroneous or wrongful collection of taxes, what prescriptive period shall apply?
As held in CIR v. PNB [OCTOBER 25, 2005], citing CIR V. PHILAMLIFE [MAY 29, 1995], availment of a tax credit due for reasons other than the erroneous or wrongful collection of taxes may have a different prescriptive period. Absent any specific provision in the Tax Code or special laws, the period would be 10 years under Article 1144 of the Civil Code.
--------------------------------------------------------------
(a) Scope of claims for refund
(b) Necessity of proof for claim or refund
--------------------------------------------------------------
Note: For scope of claims for refund, refer to the grounds as discussed earlier.
Also make reference to Section 204(C) as to internal revenue stamps. As to necessity of proof, refer to the discussion on requirements for refund.
--------------------------------------------------------------
(iii) Legal basis of tax refunds
--------------------------------------------------------------
Q: What is the legal basis of tax refunds?
Tax refunds are founded on the legal principle which underlies quasi-contracts, abhorring a person's unjust enrichment at the expense of another. The pertinent laws governing this principle are found in Art. 2142 and Art. 2154 of the NCC, to wit:
1. Certain lawful, voluntary and unilateral acts give rise to the juridical relation of quasi-contract to the end that no one shall be unjustly enriched or benefited at the expense of another (Art. 2142)
2. If something is received when there is no right to demand it, and it was unduly delivered through mistake, the obligation to return it arises (Art. 2154)
--------------------------------------------------------------
(c) Nature of erroneously paid tax/illegally assessed collected
--------------------------------------------------------------
Q: What is the nature of a claim for tax refund?
A claim for tax refund is in the nature of a claim for exemption and should be construed strictissimi juris against the taxpayer. (see CIR V. TOKYO SHIPPING [MAY 26, 1995])
--------------------------------------------------------------
(iv) Statutory basis for tax refund under the Tax Code
(a) Scope of claims for refund
(b) Necessity of proof for claim or refund
(c) Nature of erroneously paid tax/illegally assessed collected
(d) Tax refund vis-à-vis Tax Credit
(e) Essential requisites for claim of refund
--------------------------------------------------------------
Q: What is the statutory basis for a tax refund under the Tax Code?
See Section 204(C) and Section 229.

Q: ABC Corp filed its annual income tax return for 2001 showing a net loss.
Hence, it argues that the creditable tax withheld on its income was not utilized against any income tax due. Accordingly, ABC Corp filed a claim for refund and presented its income tax return showing the incurred losses. The CIR argued that ABC must prove its reported losses to be entitled to the refund. Is the CIR correct?
No. In CIR V. ASIAN TRANSMISSION CORPORATION [JANUARY 26, 2011], the Supreme Court ruled that while it is indeed true that the taxpayer bears the burden to establish the losses, the taxpayer has fulfilled this duty when it presented its income tax return showing the losses it incurred.
--------------------------------------------------------------
(d) Tax refund vis-à-vis Tax Credit
--------------------------------------------------------------
Note: We already discussed this in Income Taxes but nonetheless let us review.

Q: PSPC acquired some TCCs (Tax Credit Certificates) through the One Stop Shop Inter-Agency Tax Credit and Duty Drawback Center from other BOI-registered entities. PSPC then utilized the said TCCs for its excise taxes and was then issued TDMs (Tax Debit Memos) and ATAPs (Authority to Accept Payment) by the BIR. However, the BIR assessed PSPC for delinquent excise taxes, alleging that PSPC is not a qualified transferee of the TCCs. The CA ruled that PSPC was not entitled to the benefit of the TCCs and thus upheld the assessment. Was PSPC's use of the TCCs valid?
Yes. As held in PILIPINAS SHELL V. CIR [DECEMBER 21, 2007], there is no suspensive condition for the validity of TCCs, as they are effective immediately, and only computational errors are allowed as basis to invalidate TCCs. Also, even if the source is defective, it does not affect PSPC's right as it acted in good faith and the agencies approved of the use of the TCCs. In CIR V. PETRON [MARCH 21, 2012], the Supreme Court had occasion to reiterate that TCCs are valid and effective from their issuance and are not subject to post-audit as a suspensive condition for their validity.

Note: However, by virtue of RR 14-2011 [JULY 29, 2011], all Tax Credit Certificates (TCCs) issued by the BIR are no longer transferable or assignable to any person.
--------------------------------------------------------------
(e) Essential requisites for claim of refund
--------------------------------------------------------------
Note: I already discussed this under the requirements for a claim for refund or tax credit.

Q: May a taxpayer ask for both a tax refund and a tax credit?
No. As held in PHILAM ASSET MANAGEMENT V. CIR [DECEMBER 14, 2005], a taxpayer may apply for either a tax refund or a tax credit, but not both. The choice of one precludes the other.
Note: If you avail of the tax credit, you get what is called a Tax Credit Certificate (TCC). There is no suspensive condition for its validity. Remember that.
--------------------------------------------------------------
(v) Who may claim/apply for tax refund/tax credit
(a) Taxpayer/withholding agents of non-resident foreign corporations
--------------------------------------------------------------
Q: Who is the proper party to claim a tax credit/refund?
The proper party to seek a refund is the statutory taxpayer, who is the person on whom the tax is imposed by law and who paid the same, even if that person shifted the tax to another (see SILKAIR SINGAPORE V. CIR [NOVEMBER 14, 2008]). See also ORIX AUTO LEASING PHILIPPINES CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO. 8001, NOVEMBER 28, 2012; PHILIPPINE BANK OF COMMUNICATIONS VS. CIR, CTA CASE NO. 7915, JUNE 6, 2012; MANILA NORTH TOLLWAYS CORPORATION VS. CIR, C.T.A. EB NO. 812, OCTOBER 11, 2012; WINEBRENNER & IÑIGO INSURANCE BROKERS, INC. VS. COMMISSIONER OF INTERNAL REVENUE, CTA CASE NO.
8277, DECEMBER 19, 2012

Q: What are the requisites for a claim for tax credit or refund of a creditable withholding tax?
1. The claim must be filed within the two-year prescriptive period from the date of payment of the tax
2. It must be shown on the return that the income received was declared as part of gross income
3. The fact of withholding must be established by a copy of a statement duly issued by the payor to the payee showing the amount paid and the amount of tax withheld
Note: As to the third requisite, the taxpayer need not prove the fact of remittance to the BIR of the taxes withheld by the various payors (withholding agents). (CIR V. MIRANT [JUNE 15, 2011])

In a claim for refund of excess income tax payments or creditable withholding taxes paid, the claimant has the burden of proof to establish the factual basis of his or her claim for tax credit or refund. Presentation of forgotten evidence is disallowed. (MIRANT NAVOTAS II CORPORATION VS. CIR, CTA EB NO. 754 (CTA CASE NO. 7618), JUNE 5, 2012)

Q: Is the withholding agent who filed the claim for tax refund obliged to remit the same to the taxpayer?
Yes. In CIR V. SMART COMMUNICATIONS [AUGUST 25, 2010], the Supreme Court ruled that while the withholding agent has the right to recover the taxes erroneously or illegally collected, he nevertheless has the obligation to remit the same to the principal taxpayer under the principle of unjust enrichment.
--------------------------------------------------------------
(vi) Prescriptive period for recovery of tax erroneously or illegally collected
--------------------------------------------------------------
Note: I have already discussed the 2-year prescriptive period as well. Now, let's compare the procedure for claiming a tax refund under Section 229 with that for the refund of excess or unutilized input taxes under Section 112(C).

Q: Outline the steps for tax refund/credit of erroneously or illegally collected internal revenue tax under Section 229 and compare it with the recovery of excess or unutilized input tax under Section 112(C).

Section 229: Recovery of erroneously or illegally collected internal revenue tax
1. Payment. The period begins on the date of payment of the tax or penalties regardless of any supervening cause.
2. Administrative claim filed with the CIR within 2 years from payment.
3. Submission of additional and relevant supporting documents within 60 days from filing of the claim.
4. Appeal to the CTA Division within 30 days from receipt of the notice of denial or from inaction of the CIR counted from submission of documents. The appeal should be made within the 2-year prescriptive period.
Motion for Reconsideration or New Trial to the CTA Division within 15 days from receipt of the decision.
5. Appeal to the CTA En Banc within 15 days from receipt of the resolution.
6. Appeal to the SC within 15 days from receipt of the resolution under Rule 45.

Section 112(C): Recovery of excess or unutilized input tax
7. Filing and Payment.
8. Administrative claim within 2 years counted from the close of the taxable quarter when the relevant sales were made.
9. Submission of additional and relevant supporting documents within 60 days from filing of the claim.
10. Appeal to the CTA Division within 30 days from receipt of the notice of denial or from the lapse of the 120 days of inaction counted from submission of documents. The appeal need NOT be made within the 2-year prescriptive period; a judicial claim filed before the lapse of the 120-day period is premature.
Motion for Reconsideration or New Trial to the CTA Division within 15 days from receipt of the decision.
11. Appeal to the CTA En Banc within 15 days from receipt of the resolution. Motion for Reconsideration to the CTA En Banc within 15 days from receipt of the decision.
12. Appeal to the SC within 15 days from receipt of the resolution under Rule 45.

Note: (1) The majority of authorities, including Atty. Montero, are of the view that with regard to the refund of erroneously or illegally collected tax, the CIR must act within a period of 120 days.
That period, however, is found in Section 112, which does not apply to refunds of erroneously or illegally collected tax. Further, the 180-day period provided in Section 228 applies to a protest. What should we follow, 120 or 180? Well, it doesn't matter. In the refund of erroneously or illegally collected tax, as long as you file your claim for refund within the 2-year period, you're fine. In fact, you may simultaneously file a claim for refund and file a suit with the CTA. This brings me to my second point.
(2) As held in CIR V. AICHI FORGING COMPANY OF ASIA [OCTOBER 6, 2010], non-observance of the 120-day period is fatal to the judicial claim. Thus, you cannot simultaneously file your claim for refund of excess or unutilized input tax and file a suit with the CTA. The 2-year prescriptive period applies only to the administrative claim, meaning that you should file your claim with the CIR within 2 years. As to the judicial claim, you wait for the 120 days to lapse.
--------------------------------------------------------------
(vii) Other consideration affecting tax refunds
--------------------------------------------------------------
Note: Let's discuss here the Irrevocability Rule under Section 76. That provision is found in Title II (Income Tax) but is not included in the portion of the syllabus on Income Tax. I will discuss it here because it relates to tax credit or tax refund.

Read Section 76, Tax Code
Q: What are the options available to the corporation when the sum of the quarterly tax payments made during the taxable year is more than the total tax due on the entire taxable income of that year?
The corporation shall either:
1. Pay the balance of tax still due
2. Carry over the excess credit
3. Be credited or refunded with the excess amount paid

In one illustrative case, the taxpayer had an excess income tax payment. It opted to carry over this excess as a tax credit to the succeeding taxable year. This was applied to the 1999 taxable year, leaving again an excess income tax payment.
The taxpayer then applied for a refund of this amount. HELD: The Supreme Court cited Section 76 of the Tax Code, under which the option to carry over the excess income tax, once exercised, becomes irrevocable for that taxable period, and no application for a cash refund or issuance of a TCC shall be allowed therefor. Having chosen to carry over the excess quarterly income tax, the taxpayer here cannot thereafter choose to apply for a cash refund or for the issuance of a TCC for the amount representing such overpayment. The taxpayer's claim for refund should be denied, as its option to carry over has precluded it from claiming the refund of the excess income tax payment.

Q: Does the irrevocability rule apply to the claim for refund or issuance of a TCC?
No. The irrevocability rule in Section 76 of the Tax Code applies only to the option to carry over the excess income tax payment, and not to the claim for refund or issuance of a TCC. Nowhere in Section 76 is it stated that the option to claim a refund or TCC, once chosen, is irrevocable. (UNITED COCONUT PLANTERS BANK VS. COMMISSIONER OF INTERNAL REVENUE, CTA EB CASE NO. 725, AUGUST 23, 2012; STABLEWOOD PHILIPPINES, INC. VS. CIR, CTA EB 751 (CTA 7705))

Q: If the corporate taxpayer fails to signify its intention in the Final Adjustment Return, is it barred from making a valid request for refund should it choose this option later on?
No. As held in PHILAM ASSET MANAGEMENT V. CIR [DECEMBER 14, 2005], failure to indicate a choice will not bar a valid request for a refund, should this option be chosen by the taxpayer later on.

Q: What is the implication when a corporation fills out the portion "Prior Year's Excess Credits" in the Final Adjustment Return?
As held in PHILAM ASSET MANAGEMENT V. CIR [DECEMBER 14, 2005], the fact that the corporation filled out the portion "prior year's excess credits" in the Final Adjustment Return means that it categorically availed itself of the carry-over option. If an application for tax refund has been or will be filed, that portion should necessarily be blank.
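The parallel timelines under Sections 229 and 112(C) discussed above reduce to simple date arithmetic. The sketch below is illustrative only: the function name and dictionary keys are my own, the 2-year period is approximated with a calendar-year offset, and actual counting of periods follows the Civil Code and jurisprudence.

```python
from datetime import date, timedelta

def vat_refund_deadlines(close_of_quarter: date, docs_submitted: date) -> dict:
    """Key dates for a Section 112(C) input-VAT refund claim (Aichi doctrine)."""
    return {
        # Administrative claim with the CIR: within 2 years from the close
        # of the taxable quarter when the relevant sales were made.
        "last_day_admin_claim": close_of_quarter.replace(year=close_of_quarter.year + 2),
        # The CIR has 120 days from submission of complete supporting documents.
        "end_of_120_day_period": docs_submitted + timedelta(days=120),
        # Appeal to the CTA within 30 days from denial or from the lapse of the
        # 120 days; this window may fall beyond the 2-year period.
        "last_day_cta_appeal": docs_submitted + timedelta(days=120 + 30),
    }

d = vat_refund_deadlines(date(2012, 3, 31), date(2013, 1, 10))
# Filing with the CTA before the 120-day period lapses (absent a denial)
# renders the judicial claim premature under AICHI.
```

By contrast, under Section 229 both the administrative claim and the judicial claim must fall within the 2-year period counted from payment.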
---------------------------------------------------------------
To interpret tax laws and decide cases
Read Section 4, Tax Code

Q: Differentiate the power of the CIR to interpret tax laws and the power to decide tax cases.
The power to interpret tax laws is under the exclusive and original jurisdiction of the CIR, subject to review by the Secretary of Finance. On the other hand, the power to decide tax cases, while vested also in the CIR, is subject to the exclusive appellate jurisdiction of the CTA.

Q: Can the Secretary of Finance motu proprio review a ruling of the CIR?
Yes. DOF ORDER NO. 007-02 [MAY 7, 2002] provides that the Secretary of Finance may, of his own accord, review a ruling issued by the CIR.

Note: The power to obtain information and to summon, examine and take testimony of persons AND the power to make assessments and prescribe additional requirements for tax administration and enforcement have already been discussed under Tax Remedies.

To delegate power
Read Section 7, Tax Code

Q: What powers of the CIR are non-delegable?
1. To recommend the promulgation of rules and regulations
2. Issuance of first impression rulings
3. Compromise or abatement if the amount is over P500,000
4. Assignment of officers in charge of excisable articles

PIERRE MARTIN DE LEON REYES
Ateneo Law Batch 2013

Q: A is the Assistant Commissioner of the BIR. Upon inquiry by ABC and XYZ company on the applicable excise tax rates, A signed a letter informing ABC and XYZ of the conduct of the survey, the results thereof and the applicable excise tax rates. ABC and XYZ contend that A acted without authority and that it should be the CIR who signed such issuance. Are ABC and XYZ correct?
No. Under Section 7 of the NIRC, the CIR is authorized to delegate to his subordinates the powers vested in him except, among others, the power to issue rulings of first impression. Here, the subject matter of the letter does not involve the exercise of the power to rule on novel issues.
It merely implemented the revenue regulations then in force (see PARAYNO VS. LA SUERTE CIGAR AND CIGARETTE FACTORY [JUNE 11, 2009]).

--------------------------------------------------------------
1. Rule-making authority of the Secretary of Finance
a) Authority of Secretary of Finance to promulgate rules and regulations
b) Specific provisions to be contained in rules and regulations
c) Non-retroactivity of rulings
--------------------------------------------------------------
a) Authority of Secretary of Finance to promulgate rules and regulations
--------------------------------------------------------------
Read Section 244, Tax Code

Q: Who promulgates revenue rules and regulations?
The Secretary of Finance, upon recommendation of the CIR, shall promulgate all needful rules and regulations for the enforcement of tax laws.

Q: May the CIR delegate the power to approve the filing of tax collection cases?
Yes. The CIR may validly delegate to subordinates the power to approve the filing of tax collection cases in court. In REPUBLIC VS. HIZON [DECEMBER 13, 1999], the Supreme Court upheld the delegation of that power to the Chief of the Legal Division of Region IV, the act having been likewise verified by the Regional Director.

To ensure the provision and distribution of forms, receipts, certificates, and appliances and the acknowledgment of payment of taxes
Read Section 8, Tax Code

Q: Give some notable powers and duties of a Revenue Regional Director.
1. Implement tax laws in the regional area
2. Administer and enforce tax laws, including the assessment and collection of all internal revenue taxes
3.
Issue Letters of Authority (LOAs) for the examination of taxpayers in the region (see SECTION 11, TAX CODE)

--------------------------------------------------------------
b) Specific provisions to be contained in rules and regulations
--------------------------------------------------------------
Read Section 245, Tax Code

Q: Enumerate and define the tax-related administrative issuances.
Revenue Regulations (RRs) are issuances signed by the Secretary of Finance, upon recommendation of the Commissioner of Internal Revenue, that specify, prescribe or define rules and regulations for the effective enforcement of the provisions of the National Internal Revenue Code (NIRC) and related statutes.
Revenue Memorandum Orders (RMOs) are issuances that provide directives or instructions; prescribe guidelines; and outline processes, operations, activities, workflows, methods and procedures necessary in the implementation of stated policies, goals, objectives, plans and programs of the Bureau in all areas of operations, except auditing.
Revenue Memorandum Rulings (RMRs) are rulings, opinions and interpretations of the Commissioner of Internal Revenue with respect to the provisions of the Tax Code and other tax laws, as applied to a specific set of facts.
Revenue Memorandum Circulars (RMCs) are issuances that publish pertinent and applicable portions, as well as amplifications, of laws, rules, regulations and precedents issued by the BIR and other agencies/offices.
Revenue Bulletins refer to periodic issuances, notices and official announcements of the Commissioner of Internal Revenue that consolidate the Bureau of Internal Revenue's positions on the Tax Code, relevant tax laws and other issuances for the guidance of the public.
BIR Rulings are the official positions of the Bureau on queries raised by taxpayers and other stakeholders relative to the clarification and interpretation of tax laws.
International Tax Affairs Division (ITAD) Rulings are issued by the BIR International Tax Affairs Division to rule on certain issues relating to the interpretation of international tax treaty provisions, under which certain taxpayers or transactions can avail of tax exemptions or preferential tax rates.

--------------------------------------------------------------
c) Non-retroactivity of rulings
--------------------------------------------------------------
Read Section 246, Tax Code

Q: Explain the rule on non-retroactivity of rulings.
General Rule: Revenue regulations, rulings, circulars and other administrative issuances have no retroactive application if their application would be prejudicial to the taxpayer.
Exceptions: Even if prejudicial to the taxpayer, they shall have retroactive effect in the following cases:
1. The taxpayer deliberately misstates or omits material facts
2. The facts subsequently gathered are different from the facts on which the ruling was based
3. The taxpayer acted in bad faith

Q: If a ruling was subsequently found by the CIR to be null and void, does the non-retroactivity principle still apply?
No. The non-retroactivity principle does not apply when the ruling involved is null and void for being contrary to law. In BIR RULING NO. 370-2011 [OCTOBER 7, 2011], the CIR affirmed its position that the Poverty Eradication and Alleviation Certificates (PEACe) Bonds are not tax-exempt and are subject to a 20% FWT. Previously, 2001 BIR Rulings had considered such instruments tax-exempt. The CIR concluded that no right had been vested by virtue of the 2001 Rulings, as they were null and void for being contrary to law.

Q: What is the effect of RR 5-2012 [APRIL 5, 2012] on rulings issued prior to January 1, 1998?
RR 5-2012 [APRIL 5, 2012] provides that all rulings issued prior to January 1, 1998 no longer have any binding effect. They can no longer be invoked as basis for any current business transaction/s or as a basis for securing legal tax opinions and rulings.
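The Section 246 rule outlined above can be restated as a small decision function. This is only an illustrative sketch; the function and parameter names are mine, not statutory terms.

```python
def ruling_applies_retroactively(prejudicial_to_taxpayer: bool,
                                 misstated_material_facts: bool = False,
                                 facts_materially_different: bool = False,
                                 acted_in_bad_faith: bool = False) -> bool:
    """Section 246: a revocation, modification or reversal of a ruling is not
    given retroactive effect if prejudicial to the taxpayer, unless one of
    the three statutory exceptions is present."""
    if not prejudicial_to_taxpayer:
        return True  # non-prejudicial issuances may be applied retroactively
    return (misstated_material_facts
            or facts_materially_different
            or acted_in_bad_faith)
```

Note that, per BIR Ruling No. 370-2011 above, a ruling that is null and void for being contrary to law falls outside this rule entirely.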
RMC 22-2012 [MAY 7, 2012] clarified that BIR Rulings issued prior to January 1, 1998 remain valid:
1. To the taxpayer who was issued the ruling
2. Covering the specific transaction which is the subject of the ruling

Page 102 of 164 Last Updated: 30 July 2013 (v3)

Q: May a BIR ruling be invoked by a taxpayer other than the one who requested the same?
No. In CIR v. Filinvest Development Corp [July 19, 2011], the Supreme Court ruled that, in keeping with the caveat attendant in every BIR ruling to the effect that it is valid only if the facts claimed by the taxpayer are correct, a BIR ruling may be invoked only by the taxpayer who sought the same. If the taxpayer is not the one who, in the first instance, sought the ruling from the BIR, he cannot invoke the principle of non-retroactivity of BIR rulings.

--------------------------------------------------------------
2. Power of the Commissioner to suspend the business operation of a taxpayer
--------------------------------------------------------------
Note: I already discussed this under Tax Remedies.

----------------------------------------------------------
III. LOCAL GOVERNMENT CODE
----------------------------------------------------------
A. LOCAL GOVERNMENT TAXATION
----------------------------------------------------------
1. Fundamental Principles
--------------------------------------------------------------
Read Section 130, LGC

Q: What are the fundamental principles of local government taxation?
a. Uniformity
b.
Taxes, fees, charges and other impositions shall be equitable and based, as far as practicable, on the taxpayer's ability to pay; levied and collected only for public purposes; not unjust, excessive, oppressive or confiscatory; and not contrary to law, public policy, national economic policy, or in restraint of trade
c. The levy and collection shall not be let to any private person
d. The revenue collected inures solely to the benefit of the local government unit levying the tax
e. The progressivity principle must be observed

--------------------------------------------------------------
2. Nature and Source of taxing power
a) Grant of local taxing power under the local government code
b) Authority to prescribe penalties for tax violations
c) Authority to grant tax exemptions
d) Withdrawal of exemptions
e) Authority to adjust local tax rates
f) Residual taxing power of local governments
g) Authority to issue local tax ordinances
--------------------------------------------------------------

Q: What is the nature of the local taxing power?
a. Direct: the power of the LGU to impose taxes, although not an inherent power, is granted by a direct mandate of the Constitution
b. Limited: although directly expressed by the Constitution, the power is subject to such limitations and guidelines as the legislature may deem necessary to impose
c. Legislative in nature: the power to impose taxes is vested solely in the legislative body of each respective LGU
d. Territorial: the same can only be exercised within the territorial jurisdiction of an LGU

It is granted by the Constitution under Section 5, Article X of the 1987 Constitution. It is not inherent in the local government. In MERALCO V. PROVINCE OF LAGUNA [MAY 5, 1999], the Supreme Court explained that prior to the 1987 Constitution, the taxing power of LGUs was exercised under limited statutory authority. Under the present Constitution, the taxing power of LGUs is deemed to exist, subject only to specific exceptions that the law may prescribe. Otherwise stated, the taxing power of LGUs is a direct grant of the Constitution, and not a delegated power of Congress.
Note: A law which deprives LGUs of their power to tax would be unconstitutional.

Q: What is the legal basis of the grant of local taxing power under the LGC?
The legal basis is found in Section 129 of the LGC.

Q: Who has the authority to prescribe penalties for local tax violations?
The Sanggunian of an LGU is authorized to prescribe fines or other penalties for violation of tax ordinances (see Section 516, LGC).
Note: The fines or other penalties shall in no case be less than P1,000 or more than P5,000, nor shall the imprisonment be less than 1 month nor more than 6 months. The Sangguniang Barangay may prescribe a fine of not less than P100 nor more than P1,000.

Q: May the government grant tax exemption to taxpayers whose previous exemption has been withdrawn?
Yes. Withdrawal of a tax exemption does not prohibit future grants of tax exemption (PLDT V. CITY OF DAVAO [AUGUST 22, 2001]).
Read Section 192, LGC

Q: What is the residual taxing power of LGUs?
LGUs may exercise the power to levy taxes, fees or charges on any base or subject, provided that the taxes, fees, and charges are:
a. Not specifically enumerated in the LGC
b. Not taxed under the provisions of the NIRC
c. Not taxed under other applicable laws
Read Section 186, LGC

Q: What is the effect of Section 193 (Withdrawal of Tax Exemption Privileges) of the LGC?
Read Section 193, LGC
Under Section 193, all existing tax exemption privileges granted to or presently enjoyed by all persons, whether natural or juridical, including GOCCs (except local water districts, cooperatives registered under RA 6938, and non-stock and non-profit hospitals and educational institutions), were withdrawn upon the effectivity of the LGC. In MERALCO V. PROVINCE OF LAGUNA [MAY 5, 1999], the Supreme Court noted that, indicative of the legislative intent to carry out the Constitutional mandate of vesting broad tax powers in LGUs, the LGC has effectively withdrawn tax exemptions and incentives theretofore enjoyed by certain entities (see also NAPOCOR V. CITY OF CABANATUAN [APRIL 9, 2003]).

Q: Who has the authority to issue local tax ordinances?
The power to impose a tax, fee, or charge or to generate revenue shall be exercised by the Sanggunian of the LGU concerned through an appropriate ordinance (see Section 132, LGC).
Read Section 132, LGC

--------------------------------------------------------------
3. Local taxing authority
a) Power to create revenues exercised through Local Government Units
b) Procedure for approval and effectivity of tax ordinances
--------------------------------------------------------------
Note: As discussed above, each LGU has the power to create its own sources of revenue (see Section 129, LGC).

Note: Public hearings are conducted by local legislative bodies to allow interested parties to ventilate their views on a proposed law or ordinance. These views, however, are not binding on the legislative body, and it is not compelled by law to adopt the same.

Q: Can a public hearing conducted after the passage of a tax ordinance cure the defect in its enactment (for failure to hold one prior to the enactment)?
No. As held in ONGSUCO V.
MALONES [OCTOBER 27, 2009], the Supreme Court held that a public hearing conducted after the passage of a tax ordinance does not cure the defect in its enactment. The LGC requires that public hearings be held prior to the enactment by the LGU of the ordinance levying taxes, fees, and charges.

Q: What is the effect of non-compliance with the publication/posting requirements for tax ordinances laid down in Section 188 of the LGC?
Failure to follow the procedure in the enactment of tax ordinances renders the same null and void. The publication requirement is mandatory.

Q: What must be complied with under the provisions of the LGC for a valid local tax ordinance?
1. A public hearing is required, with the quorum, voting and approval and/or veto requirements complied with
2. Publication of the ordinance within 10 days from approval for 3 consecutive days in a newspaper of general circulation and/or posting in at least 2 conspicuous and publicly accessible places

Q: Can an ordinance which has been declared void for failure to publish for 3 weeks be remedied by passing another ordinance which purports to amend the ordinance that has been declared null and void?
No. The new ordinance is still void since it cannot cure something which had never existed in the first place, the same being void ab initio (see COCA-COLA BOTTLERS V. CITY OF MANILA [JUNE 27, 2006]).

Q: Is publication/posting of an ordinance fixing the assessment levels for different classes of real property in an LGU necessary?
Yes. In FIGUERRES V. CA [MARCH 25, 1999], the Supreme Court held that the publication/posting requirement under Section 188 of the LGC must be complied with in the case of an ordinance imposing real property taxes, as well as an ordinance fixing the assessment levels for different classes of real property.

Q: What is the nature of the public hearings under Section 187 of the LGC?
In HAGONOY MARKET VENDOR ASSOCIATION V. MUNICIPALITY OF HAGONOY, BULACAN [FEBRUARY 2, 2002], the Supreme Court discussed the nature of the public hearings on proposed tax ordinances.

--------------------------------------------------------------
4. Scope of taxing power
--------------------------------------------------------------
Q: What is the scope of the taxing power of LGUs?
The taxing power of LGUs is limited only by the guidelines expressly provided by the legislature. Beyond these limitations, the LGU is given wide latitude to impose taxes not pre-empted by the NIRC.

--------------------------------------------------------------
5. Specific taxing power of LGUs
a) Taxing powers of provinces
(i) Tax on transfer of real property ownership
(ii) Tax on business of printing and publication
(iii) Franchise tax
(iv) Tax on sand, gravel and other quarry resources
(v) Professional tax
(vi) Amusement tax
(vii) Tax on delivery truck/van
b) Taxing power of cities
c) Taxing power of municipalities
(i) Tax on various types of businesses
(ii) Ceiling on business tax imposable on municipalities within Metro Manila
(iii) Tax on retirement of business
(iv) Rules on payment of business tax
(v) Fees and charges for regulation and licensing
(vi) Situs of tax collected
d) Taxing powers of barangays
e) Common revenue raising powers
(i) Service fee charges
(ii) Public utility charges
(iii) Toll fees or charges
f) Community Tax
--------------------------------------------------------------
Read Sections 134-144, 146-149 and 151-164, LGC.

Q: What is the taxing power of the following LGUs: (a) provinces; (b) municipalities; (c) cities; and (d) barangays?

Provinces
Expressly provided in the Code:
1. Local Transfer Tax (Section 135, LGC)
2. Business Tax on Printing and Publication (Section 136, LGC)
3. Local Franchise Tax (Section 137, LGC)
4. Tax on Sand, Gravel and Other Quarry Resources (Section 138, LGC)
5. Professional Tax (Section 139, LGC)
6. Amusement Tax (Section 140, LGC)
7. Tax on Route Delivery Trucks or Vans (Section 141, LGC)

Municipalities
A municipality may levy those taxes, fees and charges not otherwise levied by provinces (see Section 142, LGC). Expressly provided in the Code:
1. Local Business Tax (Section 143, LGC)
2. Fees on business and occupation (Section 146, LGC)
3. Fees on sealing and licensing of weights and measures (Section 148, LGC)
4. Fishery Rentals, Fees and Charges (Section 149, LGC)

Cities
They may levy the taxes which the province and the municipality may impose. The tax rates, fees, and charges which the city may levy may exceed the maximum rates allowed for the province or municipality by not more than 50%, except the rates of professional and amusement taxes (see Section 151, LGC).

Barangays
1. Taxes on stores with fixed business establishments (gross receipts of P50,000 or less for cities, P30,000 or less for municipalities)
2. Service fees for the use of barangay-owned properties and services rendered
3. Barangay clearance
4. Other fees and charges on (a) commercial breeding of fighting cocks, cockpits and cockfighting; (b) places of recreation with admission fees; and (c) billboards, signboards and outdoor advertisements

Common only to Cities and Municipalities
1. Community tax

Common to all LGUs
1. Service fees and charges for services rendered
2. Public utility charges
3. Toll fees or charges

Q: ABC Mining was issued a mining lease contract which granted it the right to extract and use for its purposes all mineral deposits within the boundary lines of its mining claim in Benguet. Later, the Provincial Treasurer demanded payment of sand and gravel tax for the quarry materials that ABC extracted. ABC countered that the sand and gravel tax applied only to commercial extractions. Is ABC correct?
No. In LEPANTO CONSOLIDATED MINING COMPANY V.
AMBANLOC [JUNE 29, 2010], the Supreme Court found that under the Revised Benguet Revenue Code, only gratuitous permits were exempt from the sand and gravel tax, and Lepanto's permit was not a gratuitous permit. Hence, Lepanto was liable to pay the provincial sand and gravel tax.

Local Transfer Tax
Q: What is not covered by the local transfer tax on real property?
The sale, transfer or other disposition of real property pursuant to the Agrarian Reform Program shall be exempt from the local transfer tax.

Business Tax on Printing and Publication
Q: What is not covered by the business tax on printing and publication?
Receipts from the printing and/or publishing of books or other reading materials prescribed by the DepEd as school texts or references shall be exempt from the business tax.

Amusement Tax
Q: Is the amusement tax on admission tickets to PBA games a national or local tax?
It is a national tax. In PBA V. CA [AUGUST 8, 2000], the Supreme Court held that it was the National Government which could collect amusement taxes from the PBA. While Section 13 of the Local Tax Code mentions "other places of amusement," professional basketball games are definitely not within its scope under the principle of ejusdem generis.

Q: Are gross receipts derived from admission tickets in the showing of motion pictures, films or movies also subject to VAT?
No. The Supreme Court in CIR V. SM PRIME HOLDINGS [FEBRUARY 26, 2010] held that, although the enumeration of services subject to VAT is not exhaustive, the intent of the legislature is not to impose VAT on persons already covered by the amusement tax.

Q: ABC Bottlers Inc. maintained a bottling plant in Pavia, Iloilo but sold softdrinks in Iloilo City by means of a fleet of delivery trucks called rolling stores which went directly to customers. Iloilo City passed an ordinance imposing a municipal license tax on distributors/sellers in the area. Is ABC liable under the tax ordinance?
Yes. In ILOILO BOTTLERS INC. V.
CITY OF ILOILO [AUGUST 19, 1988], the Supreme Court found that the bottling company was engaged in the business of selling/distributing softdrinks in Iloilo City through its rolling stores, where sales transactions with customers were entered into and sales were perfected and consummated by route salesmen. Hence, the company was subject to the municipal license tax.

Local Business Tax
Q: Who are covered by the local business tax?
1. Manufacturers, assemblers and producers
2. Wholesalers, dealers and distributors
3. Exporters, manufacturers of essential commodities
4. Retailers (if both wholesale and retail, then pay both taxes)
5. Contractors
6. Banks and other financial institutions
Note: DOF LOCAL FINANCE CIRCULAR 01-93 provides the guidelines governing the power of municipalities and provinces to impose a business tax on banks and other banking institutions; DOF LOCAL FINANCE CIRCULAR 2-93 provides the guidelines for insurance companies; and DOF LOCAL FINANCE CIRCULAR 3-93 provides the guidelines for financing companies.

Q: The City of Cebu imposed a gross sales tax on sales of matches stored by Philippine Match Co. in Cebu City but delivered to customers outside the city.

The Supreme Court has differentiated gross receipts from gross revenue. Gross receipts include money or its equivalent actually or constructively received in consideration of services rendered or articles sold, exchanged, or leased, whether actual or constructive, whereas gross revenue covers money or its equivalent actually or constructively received, including the value of services rendered or articles sold, exchanged or leased, the payment of which is yet to be received.

Q: What is the ceiling on the business tax imposed by municipalities within Metro Manila?
The municipalities in Metro Manila may levy taxes at rates which shall not exceed by 50% the maximum rates prescribed in Section 143, LGC (see Section 144, LGC).

Q: What are the conditions before a business may be subject to local business tax?
Before a business may be subject to local business tax, the business must not be subject to VAT or percentage tax under the NIRC; or, if the business is subject to excise tax, VAT or percentage tax under the NIRC, the tax rate shall not exceed 2% of the gross sales/receipts of the preceding calendar year.

Q: What are the conditions before a business may be considered officially retired?
A business subject to tax shall, upon termination thereof, submit a sworn statement of its gross sales or receipts for the current year. If the tax paid during the year is less than the tax due on said gross sales or receipts of the current year, the difference shall be paid before the business is considered officially retired (see Section 145, LGC).

Q: Can an LGU tax a condominium corporation?
No. As held by the Supreme Court in YAMANE V. BA LEPANTO CONDOMINIUM CORP [OCTOBER 25, 2005], condominium corporations are not engaged in "business" as the same is defined under the LGC, i.e., a commercial activity regularly engaged in with a view to profit. Even if a condominium corporation can levy fees, these are used merely to finance the expenses of the condominium and nothing more.

Q: Are local business tax payments made for the privilege of carrying on business in the year paid or for having engaged in business in the previous year?
They are paid for the privilege of carrying on business in the year paid. In MOBIL PHILIPPINES V. THE CITY TREASURER OF MAKATI [JULY 14, 2005], Mobil paid a total of P2,262,122.48 to the City Treasurer of Makati as business taxes for the year 1998. The amount of tax as computed based on Mobil's gross sales for 1998 was only P1,331,638.84. Since the amount paid was more than the amount computed based on Mobil's actual gross sales for 1998, Mobil, upon its retirement, was not liable for additional taxes to the City of Makati.
The Supreme Court found that the City Treasurer erroneously treated the assessment and collection of the tax as if it were an income tax by rendering an additional assessment of P1,331,638.84 for the revenue generated for the year 1998.

Note: Another example: take a corporation whose gross sales were P10 million in 2008 and P20 million in 2009. The local business tax payable in January 2009 is based on the P10 million (gross receipts for 2008), but the same is payment for the right to do business in 2009. Thus, in the year of retirement, the company will only be liable if the actual local business tax based on current-year sales is more than the local business tax paid based on the previous year's sales. To continue the example, if the sales of the company are also P10 million as of the date of retirement in 2010, the payment made in January 2010 based on the 2009 gross receipts is sufficient to cover the local business tax due upon retirement.

Q: What is the situs of local business taxes as stated in Section 150 of the LGC?

Section 150(a)
If there is a branch/sales office in the municipality or city where the sale or transaction is made, the tax shall accrue and shall be paid where such branch or sales outlet is located. If there is no branch/sales office in the city or municipality where the sale or transaction is made, the sale shall be recorded in the principal office, and the taxes shall accrue and shall be paid to the city or municipality where the principal office is located.

Note: An office may be considered a sales office (1) if the office only accepts orders but does not issue sales invoices; (2) if the office does not accept orders but issues sales invoices; or (3) if the office accepts orders and issues sales invoices (see BLGF OPINION DATED JANUARY 15, 2007).

Section 150(b)
The following sales allocation shall apply to manufacturers with factories, plants, plantations, etc.:
If the plantation and factory are located in the same place:
1. 30% of all sales recorded in the principal office shall be taxable by the city or municipality where the principal office is located
2. 70% shall be taxable by the city or municipality where the factory is located
If the plantation and factory are not located in the same place, the 70% above shall be divided as follows:
1. 60% to the city or municipality where the factory is located
2. 40% to the city or municipality where the plantation is located

Q: ABC is engaged in manufacturing household products. It secured the services of an independent contractor XYZ to provide local physical distribution facilities within specified places in the Philippines. XYZ has a warehouse in Tacloban City and makes deliveries to ABC's customers outside the city. Under the contract, ABC can also make deliveries of its products to other places in the country from its own warehouse in Makati. What is the situs of taxation of the sales made by ABC and XYZ?
As held in the BUREAU OF LOCAL GOVERNMENT FINANCE OPINION DATED MARCH 7, 1994, the products taken from XYZ's warehouse in Tacloban City and delivered to ABC's customers outside the city should be recorded, and the tax thereon paid, in Tacloban City, where said warehouse is situated. As to the deliveries or sales made by ABC of products taken from its warehouse in Makati to places where it does not have any branch, sales office, or another warehouse, the same should be recorded in Makati, where its principal office is located, and the taxes due thereon should likewise be paid to said municipality.
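Since the Section 150(b) percentages are mechanical, the allocation can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and dictionary keys are my own.

```python
def allocate_manufacturer_sales(total_sales: float, same_location: bool) -> dict:
    """Section 150(b) allocation of sales recorded in the principal office of
    a manufacturer with a factory and a plantation."""
    principal = total_sales * 0.30   # 30% to the principal-office LGU
    remainder = total_sales * 0.70   # 70% to the factory/plantation LGU(s)
    if same_location:
        return {"principal_office": principal, "factory_and_plantation": remainder}
    return {
        "principal_office": principal,
        "factory": remainder * 0.60,     # 60% of the 70%
        "plantation": remainder * 0.40,  # 40% of the 70%
    }
```

For example, on P1,000,000 of sales with the factory and plantation in different LGUs, P300,000 is taxable where the principal office is located, P420,000 where the factory is, and P280,000 where the plantation is.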
Q: Taxpayer has its principal office and also a branch in the City of Makati. At the same time, it has branches in the cities of Paranaque and Cebu. The City Treasurer of Makati assessed the taxpayer for deficiency local business tax for sales of the Paranaque branch allegedly not declared in the City of Paranaque. The City of Makati maintained that it had the authority to assess business taxes on revenues not properly taxed in Paranaque City and Cebu City. Is the City Treasurer of Makati correct?

No. For purposes of the collection of local taxes, businesses maintaining or operating a branch or sales outlet elsewhere shall record the sale in the branch or sales outlet making the sale or transaction, and the tax thereon shall accrue and shall be paid to the municipality where such branch or sales outlet is located. Thus, the revenues of the branches outside Makati should not be part of the tax base for the determination of the local business tax to be paid in the City of Makati. In other words, revenues of branches or sales outlets elsewhere should not be part of the tax base for the determination of the local business tax to be paid in the city where the principal office is located. (CITY OF MAKATI AND THE OFFICE OF THE CITY TREASURER OF MAKATI CITY VS. NIPPON EXPRESS PHILIPPINES CORPORATION [CTA AC CASE NO. 76 DATED FEBRUARY 17, 2012])

Q: MI is a corporation engaged in trading books. It holds an office in Pasig where all transactions are made. However, MI also maintains a warehouse in Mandaluyong which serves as its storage area, and no transactions are made therein. What is the situs of taxation of the sale of MI's books?

As held in BUREAU OF LOCAL GOVERNMENT OPINION DATED MARCH 29, 1993, MI should be liable for gross sales tax to the then Municipality of Pasig. On the other hand, Mandaluyong, where the warehouse is located but where no transactions are made, may only collect the Mayor's permit fee and other regulatory fees.

PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013
Page 112 of 164 Last Updated: 30 July 2013 (v3)

--------------------------------------------------------------
6. Common limitations on the taxing power of LGUs
--------------------------------------------------------------
Read Section 133, LGC

Q: What are the limitations on the taxing power of LGUs?

As provided in SECTION 133, LGUs cannot impose the following:
a. Income tax (except on banks and financial entities)
b. DST
c. Estate and Donor's taxes
d. Customs duties
e. Taxes on goods passing through the LGU
f. Taxes on agricultural and aquatic products sold by marginal farmers and fishermen
g. Taxes on BOI-registered enterprises
h. Excise taxes on articles under the Tax Code and taxes on petroleum products
i. Percentage tax and VAT
j. Taxes on gross receipts of transportation contractors
k. Taxes on premiums paid by way of reinsurance
l. Taxes on registration of motor vehicles
m. Taxes on Philippine products actually exported
n. Taxes on Countryside and Barangay Business Enterprises and cooperatives
o. Taxes and fees on the National Government

As provided in SECTION 186, LGUs likewise cannot impose taxes that are specifically enumerated or taxed under the provisions of the Tax Code.

Section 133(e)

Q: Is a municipal ordinance imposing fees on goods (corn) that pass through a municipality's territory valid?

No. As held in PALMA DEVELOPMENT CORP V. ZAMBOANGA DEL SUR [OCTOBER 16, 2003], LGUs, through their Sanggunian, may impose taxes for the use of any public road, such as a service fee imposed on vehicles using municipal roads leading to a wharf. However, Section 133(e) prohibits the imposition, in the guise of wharfage, of fees as well as other taxes or charges in any form whatsoever on goods or merchandise. In this case, the LGU cannot tax the goods even in the guise of police surveillance fees.

Section 133(j)

Q: What is the rationale for the exemption of common carriers from local taxes?

As held in FIRST PHILIPPINE INDUSTRIAL CORP V. CA [DECEMBER 29, 1998], the legislative intent in excluding from the taxing power of the LGU the imposition of business tax against common carriers is to prevent a duplication of the so-called common carrier's tax.

Section 133(h)

Q: The Province of Bulacan passed an ordinance imposing tax on minerals extracted from public lands but went on to collect tax on minerals extracted from private lands. Since the LGC only provides for tax on public lands, is the action of the Province of Bulacan valid?

No. As held in PROVINCE OF BULACAN V. CA [NOVEMBER 27, 1998], generally, the LGU can impose such a tax even if it is not in the LGC, since Section 186 of the Code is sweeping. However, the province cannot levy on minerals from private lands because that is an excise tax on an article already covered by the Tax Code.30

30 This applies the Preemption or Exclusionary Rule, wherein the national government elects to tax a particular area, impliedly withholding from the LGU the delegated power to tax the same field.

Q: Petron maintains a depot or bulk plant at the Navotas Fishport Complex where it engages in the selling of diesel fuels to vessels used in commercial fishing. Navotas City levied business taxes on its sale of petroleum products. Can the LGU levy the business tax on the sale of petroleum?

No, the LGU cannot impose any local tax on petroleum products. As held in PETRON CORP. V. TIANGCO [APRIL 16, 2008], the prohibition with respect to petroleum products extends not only to excise taxes but to all taxes, fees, and charges. Section 133(h) provides for two possible bases for exemption: (1) excise tax on articles enumerated under the Tax Code; and (2) taxes, fees, and charges on petroleum products. In the latter, the exemption refers not only to direct or excise taxes to be levied by the LGUs on petroleum products but to all types of taxes on petroleum products, including business taxes.
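The branch-situs rule applied in the Nippon Express case above (a sale is recorded, and the tax paid, where the branch making the sale is located; absent a branch there, at the principal office) can be sketched as follows. The function and its names are hypothetical, for illustration only:

```python
# Sketch of the situs-of-taxation rule for local business tax: sales are
# recorded in the locality of the branch or sales outlet making the sale;
# where there is no branch in that locality, the sale is recorded at the
# principal office. Hypothetical helper, not an official formula.

def situs_of_sale(sale_location: str, branch_locations: set, principal_office: str) -> str:
    if sale_location in branch_locations:
        return sale_location      # tax accrues to the LGU of the branch making the sale
    return principal_office       # otherwise, recorded at the principal office

# A sale by the Paranaque branch is taxed in Paranaque, not in Makati
# where the principal office is located.
print(situs_of_sale("Paranaque", {"Makati", "Paranaque", "Cebu"}, "Makati"))  # Paranaque
```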
Section 186

Q: Are broadcasting and telecommunication companies liable to pay local franchise taxes?

No. As held in both SMART COMMUNICATIONS V. THE CITY OF DAVAO [SEPTEMBER 16, 2008] and QUEZON CITY V. ABS-CBN BROADCASTING CORPORATION [OCTOBER 6, 2008], these franchise holders are now subject to VAT.31

--------------------------------------------------------------
7. Collection of business tax
a) Tax period and manner of payment
b) Accrual of tax
c) Time of payment
d) Penalties on unpaid taxes, fees or charges
e) Authority of treasurer in collection and inspection of books
--------------------------------------------------------------
Read Section 165 to 171, LGC

Q: What is the tax period for local taxes?

The tax period of all local taxes, fees and charges shall be the calendar year. Such taxes, fees, and charges may be paid in quarterly installments. (see Section 165, LGC)

Note: Local taxes may be paid on an annual basis at the option of the taxpayer. In contrast, real property taxes must be paid annually.

Q: When must local taxes be paid?

General Rule: Within the first 20 days of January or of each subsequent quarter, as the case may be.
Exception: For a justifiable reason or cause, the Sanggunian may extend the time for payment, without surcharges or penalties, but only for a period not exceeding 6 months. (See Section 167, LGC)

--------------------------------------------------------------
8. Taxpayer's Remedies
a) Periods of assessment and collection of local taxes, fees or charges
b) Protest of assessment
c) Claim for refund of tax credit for erroneously or illegally collected tax, fee or charge
--------------------------------------------------------------
Note: The bar syllabus covers only the tax remedies available to the taxpayer after an assessment has been made. Allow me to discuss the remedies available prior to the assessment, which are provided for in the LGC and the Rules of Court.
31 The Supreme Court ruled in both cases that the "in lieu of all taxes" clause in their franchises applies only to national internal revenue taxes and not to local taxes. As such, they would have been liable to pay local franchise taxes. However, with the advent of the VAT law, such franchise holders are instead liable to pay VAT.

Q: Outline the process for an appeal involving questions of the constitutionality or legality of a tax ordinance.
1. Appeal to the Secretary of Justice within 30 days from effectivity
2. The Secretary of Justice has 60 days to decide, but an appeal does not suspend the effectivity of the ordinance
3. Within 30 days from the Secretary of Justice's decision, or after 60 days of inaction, an appeal may be filed with the RTC

Q: Is payment under protest required before a party may appeal to the Secretary of Justice?

No. As held in JARDINE DAVIES INSURANCE V. ALIPOSA [FEBRUARY 27, 2003], prior payment under protest is not required when the taxpayer is questioning the very authority and power of the assessor to impose the assessment and of the treasurer to collect the tax (as opposed to questioning the increase or decrease in the tax to be paid).

Q: What authority is given to the Secretary of Justice with respect to the review of tax ordinances?

The Secretary of Justice can declare an ordinance void for not having followed the requirements of the law, but he cannot replace it with his own version or declare that it is unwise. In DRILON V. LIM [AUGUST 4, 1994], then Secretary of Justice Drilon set aside the Manila Revenue Code on two grounds, namely the inclusion of certain ultra vires provisions and its non-compliance with the prescribed procedure in its enactment. In ruling that the act of then Secretary Drilon was proper, the Supreme Court noted that when the Secretary alters or modifies or sets aside a tax ordinance, he is not allowed to substitute his own judgment for the judgment of the LGU that enacted the measure. In the said case, Secretary Drilon only exercised supervision and not control.

Cagayan Electric Power and Light Co. v. City of Cagayan de Oro, G.R. No. 191761, November 14, 2012

DOCTRINE: Failure to appeal to the Secretary of Justice within the statutory period of 30 days from the effectivity of the ordinance is fatal to one's cause.

FACTS: On January 10, 2005, the Sangguniang Panlungsod of Cagayan de Oro (City Council) passed Ordinance No. 9503-2005, imposing a tax on the lease or rental of electric and/or telecommunication posts, poles or towers by pole owners to other pole users at ten percent (10%) of the annual rental income derived from such lease or rental. The City Council, in a letter dated 15 March 2005, informed Cagayan Electric Power and Light Company, Inc. (CEPALCO), through its President and Chief Operation Manager, Ms. Consuelo G. Tion, of the passage of the subject ordinance. On September 30, 2005, CEPALCO, purportedly on a pure question of law, filed a petition for declaratory relief assailing the validity of Ordinance No. 9503-2005 before the Regional Trial Court.

HELD: The Court ruled that CEPALCO failed to exhaust administrative remedies. Section 5 of the ordinance provided that it shall take effect after 15 days following its publication in a local newspaper of general circulation for at least three (3) consecutive issues. Gold Star Daily published Ordinance No. 9503-2005 on 1 to 3 February 2005. Ordinance No. 9503-2005 thus took effect on 19 February 2005. CEPALCO filed its petition for declaratory relief before the Regional Trial Court on 30 September 2005, clearly beyond the 30-day period provided in Section 187. CEPALCO did not file anything before the Secretary of Justice. Thus, the Court found that CEPALCO ignored the mandatory nature of the statutory periods.
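The CEPALCO timeline can be checked with simple date arithmetic; the dates below are those stated by the Court:

```python
from datetime import date, timedelta

# Section 187, LGC: an appeal to the Secretary of Justice must be taken
# within 30 days from the effectivity of the ordinance.
# Dates are as found by the Court in the CEPALCO case.
effectivity = date(2005, 2, 19)          # Ordinance No. 9503-2005 took effect on 19 Feb 2005
appeal_deadline = effectivity + timedelta(days=30)
filing_date = date(2005, 9, 30)          # CEPALCO's petition for declaratory relief

print(appeal_deadline)                   # 2005-03-21
print(filing_date > appeal_deadline)     # True: filed well beyond the 30-day period
```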
Q: X, a taxpayer who believes that an ordinance passed by the City Council of Pasay is unconstitutional for being discriminatory against him, wants to know from you, his tax lawyer, whether or not he could file an appeal. In the affirmative, he asks you where such appeal should be made: the Secretary of Finance, the Secretary of Justice, the CTA, or the regular courts?

An appeal is available. Under Section 187 of the LGC, the appeal should be made to the Secretary of Justice within 30 days from the effectivity of the ordinance.

Q: May a regular court issue an injunction to restrain LGUs from collecting taxes?

Yes. In ANGELES CITY V. ANGELES ELECTRIC CORPORATION [JUNE 29, 2010], the Supreme Court held that the LGC does not specifically prohibit an injunction enjoining the collection of local taxes (as compared to the Tax Code, which has an express prohibition). Nevertheless, the Court noted that injunctions enjoining the collection of local taxes are frowned upon, and the power should therefore be exercised with extreme caution.

Q: Olongapo City enacted an ordinance fixing monthly rental fees for the different stalls in the new public market. A questioned the validity of the said ordinance by filing an appeal with the Secretary of Justice. The Secretary deferred rendering a decision on the appeal and advised A to file his appeal with the RTC. Is the act of the Secretary proper?

No. As held in CITY OF OLONGAPO V. STALLHOLDERS OF EAST BAJAC-BAJAC PUBLIC MARKET [OCTOBER 19, 2000], the act of the Secretary of Justice was tantamount to an abdication of his jurisdiction over the appeal of the ordinance. The Secretary may not abdicate his authority to review tax ordinances.

Q: What are the grounds for the suspension of the running of the prescriptive period?
a. The treasurer is legally prevented from the assessment or collection of the tax
b. The taxpayer requests a reinvestigation and executes a waiver in writing before the expiration of the period within which to assess or collect; and
c.
The taxpayer is out of the country or otherwise cannot be located

--------------------------------------------------------------
a) Periods of assessment and collection of local taxes, fees or charges
--------------------------------------------------------------
Read Section 194, LGC

Q: What are the rules on assessments?

General Rule: An assessment must be made within 5 years from the date the taxes become due.
Exception: If there is fraud or intent to evade payment of the tax, the assessment may be made within 10 years from the discovery of the fraud or intent to evade payment. (see Section 194, LGC)

--------------------------------------------------------------
b) Protest of assessment
--------------------------------------------------------------
Read Section 195, LGC

Q: Outline the procedure in contesting a local tax assessment.
1. Assessment notice issued by the local treasurer
2. File a written protest with the local treasurer within 60 days from receipt of the notice of assessment
3. The treasurer has 60 days to decide
4. An appeal to the RTC is then available upon denial or 60-day inaction by the treasurer
5. The RTC decision is appealable to the CTA Division (since the RTC acts in the exercise of its original jurisdiction)
6. Appeal to the SC within 15 days from receipt of the resolution

Note:
(1) Unlike in RPT, no payment under protest is required.
(2) Review by the RTC of the denial of a protest by the local treasurer falls within the court's original jurisdiction. Thus, in local tax assessments, the CTA En Banc does not have jurisdiction over cases decided by the RTC in the exercise of its original jurisdiction. (NATIONAL TRANSMISSION CORPORATION VS. MUNICIPAL TREASURER OF LABRADOR, PANGASINAN, REPRESENTED BY EDUALINO CASIPIT IN HIS CAPACITY AS MUNICIPAL TREASURER, CTA AC NO. 67, JUNE 25, 2012; NATIONAL POWER CORPORATION VS. THE CITY GOVERNMENT OF TUGUEGARAO, CTA EB CASE NO. 696 [RTC CIVIL CASE NO. 7240], JUNE 5, 2012)
(3) What is the venue of your appeal of the denial of the protest by the local treasurer? In NATIONAL TRANSMISSION CORPORATION VS. THE MUNICIPALITY OF MAGALLANES, AGUSAN DEL NORTE [C.T.A. AC NO. 68, JANUARY 5, 2012], the CTA held that the local government's assessment for business taxes and other regulatory fees is civil in nature and basically a personal action. For purposes of instituting personal actions in court, the place where the taxpayer's principal office is located may also be considered as the proper venue.
(4) The failure of the taxpayer to file and perfect its appeal with the Regional Trial Court within the prescribed period deprives the court of jurisdiction to entertain and determine the correctness of the assessment made by the city treasurer. (ACESITE [PHILIPPINES] HOTEL CORPORATION VS. LIBERTY TOLEDO, IN HER CAPACITY AS CITY TREASURER OF THE CITY OF MANILA AND THE CITY OF MANILA [CTA, MAY 24, 2012])
(5) The taxpayer has 60 days from the date of receipt of the assessment to file a protest; failing which, the assessment shall become final and executory. (SPC REALTY CORPORATION VS. MUNICIPAL TREASURER OF CAINTA, CTA AC NO. 77, NOVEMBER 15, 2012)

--------------------------------------------------------------
9. Civil Remedies by the LGU for collection of revenues
a) Local government's lien for delinquent taxes, fees or charges
b) Civil Remedies, in general
(i) Administrative action
(ii) Judicial action
--------------------------------------------------------------
Read Section 172-185, LGC

Q: What is the nature of a local government's lien?

Local taxes, fees, charges and other revenues constitute a lien, superior to all liens, charges, or encumbrances in favour of any person, enforceable by any appropriate administrative or judicial action. (see Section 173, LGC)

Note: The lien may only be extinguished upon full payment of the delinquent local taxes, fees, and charges, including related surcharges and interest. (see Section 173, LGC)

Q: What are the civil remedies available to the LGU for the collection of revenues?
a. Administrative action
i. Distraint of personal property
ii. Levy upon real property
iii. Compromise
b.
Judicial action

Note: Either of these remedies or both may be pursued concurrently or simultaneously at the discretion of the LGU concerned. The administrative remedies are by distraint of personal property and of interest in and rights to personal property, and by levy upon real property and interest in or rights to real property. (see Section 174, LGC)

Note: (1) The remedies of distraint and levy may be repeated if necessary until the full amount due, including all expenses, is collected (see Section 184, LGC). (2) For properties exempt from distraint or levy, see Section 185, LGC. (3) The local government's lien may only be extinguished upon payment of the tax and the related charges. (Section 173, LGC)

Q: Outline the procedure for distraint of personal property.
1. Time for payment of local taxes expires.
2. The Local Treasurer (LT), upon written notice, seizes sufficient personal property to satisfy the tax and other charges. (Section 175, LGC)
3. The LT issues a certificate which serves as the warrant for the distraint of personal property. (Section 175, LGC)
4. The officer executing the distraint accounts for the goods distrained. (Section 175, LGC)
5. The officer posts a notice in the office of the chief executive of the LGU where the property is distrained and in at least 2 other public places, specifying the time and place of sale and the distrained goods. The time of sale shall not be less than twenty (20) days after the notice. (Section 175, LGC)
6. Before the sale, the goods or effects distrained shall be restored to the owner if all charges are paid. (Section 175, LGC)

Note: The next steps in the procedure will vary depending on whether the property distrained is disposed of within 120 days from distraint.

If disposed:
7. The officer sells the goods at public auction to the highest bidder for cash. Within 5 days, the local treasurer shall report the sale to the local chief executive concerned. (Section 175, LGC)
8. The excess of the proceeds over the charges shall be returned to the owner of the property sold.

If not disposed:
7. The property shall be considered as sold to the LGU for the amount of the assessment made by the Committee on Appraisal and, to the extent of the same amount, the tax delinquencies shall be cancelled. (Section 175, LGC)

Q: Outline the procedure for levy upon real property.
1. A Warrant of Levy is issued by the Local Treasurer (LT), which has the force of legal execution in the LGU concerned. (Section 176, LGC)
2. The warrant is mailed to or served upon the delinquent owner. (Section 176, LGC)
3. Written notice of the levy and the warrant is mailed/served upon the assessor and the Registrar of Deeds of the LGU. (Section 176, LGC)
4. Within 30 days from service of the warrant, the LT shall advertise the sale of the property by: (a) posting a notice at the main entrance of the LGU hall/building and in a conspicuous place in the barangay where the property is located; and (b) publication once a week for 3 weeks. (Section 178, LGC)

Note: In cases of levy for unpaid local taxes, publication is once a week for 3 weeks.

If there is a bidder:
5. The bidder pays, and within 30 days after the sale, the LT shall report the sale to the sanggunian. (Section 178, LGC)
6. The LT shall deliver to the purchaser a certificate of sale.
7. The proceeds of the sale in excess of the delinquent tax, interest and expenses of sale are remitted to the owner. (Section 178, LGC)
8. Within 1 year from the sale, the owner may redeem upon payment of (1) the delinquent tax, (2) the interest due, (3) the expenses of sale (from the date of delinquency to the date of sale) and (4) additional interest of 2% per month on the purchase price from the date of sale to the date of redemption. The delinquent owner retains possession and the right to the fruits. (Section 179, LGC)
9. Upon redemption, the LT returns to the purchaser/bidder the price paid plus interest of 2% per month. (Section 179, LGC)
10. If the property is not redeemed, the local treasurer shall execute a deed of conveyance to the purchaser. (Section 180, LGC)

If there is no bidder OR the highest bid is insufficient to cover the taxes and other charges:
5. The LT shall purchase the property in behalf of the LGU. (Section 181, LGC)
6. The Registrar of Deeds shall transfer the title of the forfeited property to the LGU without need of a court order. (Section 181, LGC)
7. Within 1 year from forfeiture, the owner may redeem the property by paying to the local treasurer the full amount of the tax, the related interest and the costs of sale; otherwise, ownership shall be vested in the LGU concerned. (Section 181, LGC)
8. The sanggunian concerned may, by ordinance, sell and dispose of the real property acquired under the preceding section at public auction. (Section 182, LGC)

Note: (1) In both cases, levy may be repeated until the full amount due, including all expenses, is collected. (2) This is important! To make our lives easier, I want you to note that the procedure for levying real properties to satisfy local taxes is... wait for it... the SAME as the levy procedure for satisfying RPT. Wait, it's not over yet! (3) In cases of levy for unpaid local taxes, the LT may purchase if there is no bidder or if the highest bid is insufficient to cover the taxes and other charges (Section 181, LGC); but for RPT, the LGU may purchase for only one reason: there is no bidder! It's that simple. So memorize the procedure and just take note of these two distinctions between levying for local taxes and levying for RPT.

--------------------------------------------------------------
c) Claim for refund of tax credit for erroneously or illegally collected tax, fee or charge
--------------------------------------------------------------
Read Section 196, LGC

----------------------------------------------------------
B. REAL PROPERTY TAXATION
----------------------------------------------------------
Q: What are real property taxes?
These are direct taxes imposed on the privilege to use real property such as land, buildings, machinery and other improvements, unless specifically exempted.

Note: Before we can even talk about real property taxation, I would have to state the obvious: this tax only applies to, well, real property. In any problem involving real property taxation, you must first determine if the property is real property or not. If it is not real property, then it is not subject to real property taxation. Thus, I'll discuss what are considered real properties for purposes of RPT.

Note: Personal property may be classified as real property for purposes of taxation.

Q: Are the steel towers of an electric company real property for the purpose of RPT?

No. In BOARD OF ASSESSMENT APPEALS V. MERALCO [JAN. 31, 1964], the Supreme Court held that the steel towers of MERALCO do not constitute real property for the purpose of the real property tax. The steel towers were regarded as poles, and under its franchise, MERALCO's poles are exempt from taxation. Moreover, the steel towers were not attached to any land or building; they were removable from their metal frames.

Q: Define machinery. (see Section 199(o), LGC)

1. Machinery that is permanently attached to real estate is subject to the real property tax.
2. Machinery that is not permanently attached to real estate is:
a. Subject to the real property tax if it is an essential and principal element of an industry, work or activity without which such industry, work or activity cannot function;
b. Not subject to the real property tax if it is not an essential and principal element of an industry, work or activity.
3. Notwithstanding rules 1 and 2, machinery of non-stock, non-profit educational institutions used actually, directly, and exclusively for educational purposes is not subject to real property tax.
(see DOF LOCAL FINANCE CIRCULAR 001-2002 [APRIL 25, 2002])

Q: Are equipment/machineries on cement or wooden platforms which were never used as industrial equipment to produce finished products for sale nor to repair machineries offered to the general public for business or commercial purposes considered as realty subject to RPT?

No. In MINDANAO BUS CO. V. CITY ASSESSOR & TREASURER [SEPT. 29, 1962], the Supreme Court held that for equipment to be real property, it must be an essential and principal element. In addition, the machinery should be essential to carry on the business in a building or on a piece of land, and this is not the case here, since it was proven that the equipment was not essential because it was used only for repairs which could actually be done elsewhere.

Q: Define improvement.

An improvement is a valuable addition to the property or an amelioration in its condition amounting to more than a repair or replacement of parts. (see Section 199(m), LGC)

Q: Are the gas station equipment and machinery (tanks, pumps, etc.) permanently affixed by Caltex to its gas station and pavement, albeit on leased land, considered real property subject to real property taxes even if the lessor does not become the owner of the said assets?

Yes, because they are essential to the business of the taxpayer. In CALTEX V. CBAA [MAY 31, 1982], the Supreme Court ruled that the said equipment and machinery, as appurtenances to the gas station building or shed owned by Caltex, which fixtures are necessary to the operation of the gas station (for without them the gas station would be useless) and which have been attached or affixed permanently to the gas station site, are taxable improvements and machinery. The case of DAVAO SAWMILL CO. V. CASTILLO [AUGUST 7, 1935], where at issue was whether the property was installed by the owner, does not apply, since in that case the issue was on execution of judgment against the lessee.

Q: The City Assessor of CDO assessed as taxable the machinery of Asian College of Science and Technology (ACSAT), a non-stock, non-profit educational institution. Upon the issuance of DOF LOCAL FINANCE CIRCULAR 001-2002 [APRIL 25, 2002], the City Assessor declared the machinery as tax exempt effective the 2nd quarter of 2002. ACSAT argues that the exemption should retroact to the year 1998. Is ACSAT correct?

Yes. In BLGF OPINION DATED DECEMBER 15, 2006, it was held that the request for retroactive effectivity in 1998 of the exemption of the subject machinery owned by ACSAT should be given due course.

Q: What is the taxability of the following properties of a bank: (1) vault doors; (2) safety deposit boxes; (3) surveillance cameras; (4) generator sets; (5) water pumps; (6) uninterrupted power supply equipment; (8) exhaust fans; and (9) ceiling fans?

(1) Vault doors, (2) safety deposit boxes and (3) surveillance cameras should be assessed as improvements for enhancing the utility of the bank. Items (4) to (9) do not fall within the definition of machinery subject to RPT. (see BLGF OPINION DATED MARCH 22, 2005)

Q: MERALCO installed two oil storage tanks on a lot in Batangas which it leased from Caltex. They are used for storing fuel oil for MERALCO's power plants. Are the oil storage tanks real property for purposes of RPT?

Yes. In MERALCO V. CBAA [MAY 31, 1982], the Supreme Court held that while the two storage tanks are not embedded in the land, they are to be considered improvements on the land, enhancing its utility and rendering it useful to the oil industry. The two tanks have been installed with some degree of permanence as receptacles for the considerable quantities of oil needed by MERALCO for its operations.
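The machinery rules summarized from DOF Local Finance Circular 001-2002 above can be restated as a small decision function. This is a hypothetical sketch for study purposes, not an official test:

```python
# Sketch of the RPT machinery rules discussed above (DOF Local Finance
# Circular 001-2002): unattached machinery is taxable only if it is an
# essential and principal element of the industry; machinery of
# non-stock, non-profit educational institutions used actually, directly
# and exclusively (ADE) for education is exempt notwithstanding the
# other rules. Treating permanently attached machinery as taxable is an
# assumption drawn from the circular's framework.

def machinery_subject_to_rpt(permanently_attached: bool,
                             essential_to_industry: bool,
                             ade_educational_use: bool = False) -> bool:
    if ade_educational_use:
        return False                 # rule 3: exempt notwithstanding rules 1 and 2
    if permanently_attached:
        return True                  # rule 1 (assumed from the circular's framework)
    return essential_to_industry     # rule 2

# e.g., the Mindanao Bus repair equipment: not attached and not essential
# to the business, hence not subject to RPT.
print(machinery_subject_to_rpt(False, False))  # False
```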
Q: What is the taxability of the following properties of a bank: (1) ATM Machine procash; (2) Cash vault door protect; (3) Security cash locker fortress; (4) Protect safe deposit boxes; (5) Security Devices; (6) Magitek UPS; (7) Airconditioning units; (8) Computers (CPU, printer, deskset, monitors, scanner/HP Flatbed, PC Server, modem, etc.); (9) Phone Panasonic Wireless; (10) Phone SNI Digital; and (11) Exhaust fans?

Item Nos. 2-5 should be classified as improvements subject to real property tax, as discussed above, while Item Nos. 6-11 should be classified as machinery of general purpose use and thus exempt from payment of real property tax. ATMs, however, are correctly classified as machinery subject to real property tax. (see BLGF OPINION DATED FEBRUARY 17, 2005)

--------------------------------------------------------------
1. Fundamental Principles
--------------------------------------------------------------
Read Section 198, LGC

Q: Enumerate the fundamental principles that shall guide real property taxation.
1. Real property shall be appraised at its current and fair market value
2. Real property shall be classified for assessment purposes on the basis of its actual use
3. Real property shall be assessed on the basis of a uniform classification within each LGU
4. The appraisal, assessment, levy and collection of real property tax shall not be let to any private person
5. The appraisal and assessment of real property shall be equitable

Q: What is the taxability of the following properties: (1) a printing and developing machine owned by a photo center and (2) equipment being utilized by water refilling stations in the purification process?

The printing and developing machine owned by the photo center is taxable real property, considering that the same falls within the definition of "machinery" without which the work or activity of the said photo center cannot function, it being an essential and principal element of the business of photography.
On the other hand, the equipment being utilized by the water refilling stations thereat in purification process also fall within the definition of machinery and considered real property subject to 1. In the case of a province, at the rate not exceeding 1% of the assessed value 2. In the case of a city or municipality within Metro Manila, at the rate not exceeding 2% of the assessed value Note: The bar syllabus did not include special levies. Nonetheless, lets discuss the pertinent matters. I will not provide the codal anymore. Just refer to Section 235-245, LGC. --------------------------------------------------------------3. Imposition of real property tax a) Power to levy real property tax b) Exemption from real property tax ----------------------------------------------------------------------------------------------------------------------------a) Power to levy real property tax --------------------------------------------------------------Read Section 232 to 233, LGC Q: Do all types of LGUs have the power to impose real property taxes? No. Only provinces and cities as well as municipalities within Metro Manila may impose RPTs. (see SECTION 200 AND 232, LGC) Municipalities outside Metro Manila and barangays cannot impose RPT. Q: What are the conditions for the validity of a tax ordinance imposing special levy for public works? 1. The ordinance shall describe the nature, extent, and location of the project, state the estimated cost, and specify the metes and bounds by monuments and lines (see Section 241, LGC) 2. It must state the number of annual installments, not less than 5 years nor more than 10 years (see Section 241, LGC) 3. Notice to the owners and public hearing (see Section 242, LGC) Note: If you want to contest a special levy, the interested person may appeal to the LBAA and then to the CBAA following the same process as an administrative protest (see Section 244, LGC). Ill discuss the process later. No. 
In METRO M ANILA M ANILA INTERNATIONAL AIRPORT AUTHORITY v. CA [JULY 20, 2006], the Supreme Court, in resolving the issue on whether the lands and buildings owned by the Manila International Airport Authority were subject to real property tax, ruled in the negative. The Supreme Court opined that since MIAA is not a GOCC but instead as government instrumentality vested with corporate powers or a government corporate entity. As such, it is exempt from real property tax. However, it must be noted that previously in M ACTAN CEBU INTERNATIONAL AIRPORT AUTHORITY V. M ARCOS [SEPTEMBER 11, 1996], the Supreme Court ruled that MCIAA is a GOCC and since the last paragraph of Section 234 of the LCG unequivocally withdrew the exemptions from payment of RPT granted to natural or juridical including GOCCs, MCIAA is now liable for RPT. --------------------------------------------------------------b) Exemption from real property tax --------------------------------------------------------------Read Section 234, LGC Q: What are the properties exempt from RPT? a. Real property owned by the Republic or any of its political subdivisions (except when beneficial use has been granted to a taxable person) b. Charitable institutions, churches, parsonages, or convents appurtenant thereto, mosques, nonprofit or religious cemeteries and all lands, buildings or improvements actually, directly, and exclusively used for religious, charitable or educational purposes c. All machineries and equipment actually, directly and exclusively used by local water districts and GOCCs engaged in supply and distribution of water and/or generation and transmission of electric power d. All real property owned by duly registered cooperatives e. Machinery and equipment used for pollution control and environmental protection 32 (includes infrastructure) Q: Is the Philippine Fisheries Development Authority (PFDA) a GOCC and, hence, now liable for RPT? No. In PHILIPPINE FISHERIES DEVELOPMENT AUTHORITY V. 
CA [JULY 31, 2007], the Supreme Court ruled that the PFDA is not a GOCC but an instrumentality of the national government, which is generally exempt from payment of RPT. However, said exemption does not apply to the portions of the properties which the PFDA leased to private entities.

Note: (1) Under RA 7942 (Philippine Mining Act of 1995), pollution control devices exempted from RPT include infrastructure. (2) Under Section 234(a), the exemption of the government and its political subdivisions does not apply to properties whose beneficial use has been granted to a taxable person.

Section 234(a)

Q: Is the Manila International Airport Authority (MIAA) a GOCC which will now be considered liable for RPT under the LGC?
No (see the discussion of MANILA INTERNATIONAL AIRPORT AUTHORITY V. CA [JULY 20, 2006] above: MIAA is a government instrumentality vested with corporate powers, not a GOCC, and is exempt).

Q: Is the Philippine Reclamation Authority (PRA) a GOCC and, as such, liable for RPT?
No. In PHILIPPINE RECLAMATION AUTHORITY V. CITY OF PARANAQUE [JULY 18, 2012], the Supreme Court ruled that PRA is not a GOCC. Much like the MIAA, PPA, UP, PFDA, GSIS, and BSP, it is a government instrumentality exercising corporate powers; these entities are not GOCCs because they are neither stock corporations (having no authority to distribute dividends) nor non-stock corporations (having no members). In addition, the Constitution provides that a GOCC is created under two conditions: (a) it is established for the common good and (b) it meets the test of economic viability. While condition (a) is complied with, the PRA was undoubtedly not created to engage in economic or commercial activities; reclamation, the only activity it is engaged in, was described as essentially a public service. Thus, PRA is not liable for RPT.

Q: Is the Light Rail Transit Authority (LRTA) a GOCC and, as such, liable for RPT?
Yes. Although not expressly stating that LRTA is a GOCC, the Supreme Court in LIGHT RAIL TRANSIT AUTHORITY V.
CBAA [OCTOBER 12, 2000] stated that the LRTA is clothed with corporate status and corporate powers in the furtherance of its proprietary objectives. It operates much like any private corporation engaged in the mass transport industry. As such, it is liable for RPT.

Q: ABC Association is a non-stock, nonprofit organization owned by XYZ Hospital in Cebu City. XYZ likewise owns the XYZ Medical Arts Center. The City Assessor assessed the XYZ Medical Arts Center building at the 35% assessment level for commercial buildings (instead of the 10% special assessment level imposed on XYZ Hospital and its buildings). Was the medical arts center, built to house the hospital's doctors, a separate commercial building?
No. The Supreme Court in CITY ASSESSOR OF CEBU CITY V. ASSOCIATION OF BENEVOLA DE CEBU, INC. [JUNE 8, 2007] ruled that the fact alone that the doctors holding clinics in the separate medical center are consultants of the hospital and the ones who treat the patients takes the medical center out of the commercial category. The Supreme Court classified the medical arts center building as special for the following reasons: (1) the medical arts center was an integral part of the hospital; (2) the facility was incidental to and reasonably necessary for the operations of the hospital; and (3) charging rentals for the offices used by its accredited physicians was a practical necessity and could not be equated to a commercial venture.

Q: ABC Company owned two parcels of land in Pasig City. Portions of the properties are leased to different business establishments. Being part of the ill-gotten wealth of the Marcoses, ABC Company was voluntarily surrendered by its owner to the Republic through the PCGG. Now, Pasig City seeks to impose RPT on the properties of ABC. Are the properties of ABC liable for RPT?
It depends. In PASIG CITY V.
REPUBLIC [AUGUST 24, 2011], the Supreme Court held that the portions of the properties not leased to taxable entities are exempt from RPT, while the portions leased to taxable entities are subject to RPT.

PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013

Section 234(b)

Note: Remember our discussions in General Principles of Taxation.

Q: The Philippine Lung Center leased portions of its real property out to private entities. Are the leased portions exempt from RPT?
No. Under the actual-use rule, portions leased to private entities are not actually, directly, and exclusively used for charitable purposes and are therefore subject to RPT (see LUNG CENTER OF THE PHILIPPINES V. QUEZON CITY [JUNE 29, 2004]).

Section 234(c)

Q: What are the requisites to claim exemption from RPT for machineries and equipment used by LWDs and GOCCs?
1. The machineries and equipment are actually, directly, and exclusively used by the LWDs and GOCCs; and
2. The LWDs and GOCCs claiming exemption must be engaged in the supply and distribution of water and/or the generation and transmission of electric power.

In NAPOCOR V. CBAA [JANUARY 30, 2009] (the facts of the FPPC/BPPC BOT arrangement are set out below), the LBAA ruled that the exemption is available only to a GOCC which owns and/or actually uses the machineries and equipment for the generation and transmission of power, and the CBAA affirmed. Were the properties exempt from RPT? No. NAPOCOR's basis for exemption, Section 234(c), requires that the machinery and equipment used for the generation and transmission of power be actually, directly, and exclusively used by the GOCC. The machineries and equipment here are owned by BPPC, subject only to their transfer to NAPOCOR after the lapse of the 15-year period agreed upon. BPPC's use of the machineries and equipment is actual, direct, and immediate, while NAPOCOR's is contingent and, at this stage of the BOT Agreement, not sufficient to support its claim for exemption.

Similarly, in NAPOCOR V. PROVINCE OF QUEZON [JULY 15, 2009], at issue was whether NAPOCOR, as a GOCC, can claim exemption under Section 234 of the LGC for the taxes due from Mirant Pagbilao Corporation, whose tax liabilities NAPOCOR contractually assumed under a BOT Agreement where Mirant would build and finance a power plant and transfer the same to NAPOCOR after 25 years without compensation.
The Supreme Court ruled that NAPOCOR does not have the legal interest that the law requires to give it personality to protest the tax imposed by law on Mirant. Further, the machinery and equipment must be actually, directly, and exclusively used by the GOCC; here, NAPOCOR's use is merely contingent.

Q: FELS entered into a lease contract with NAPOCOR over two power barges moored at Balayan Bay, Batangas. The lease contract stipulated that NAPOCOR shall be responsible for all taxes (including RPT on the barges), fees, and charges for which FELS may be liable, except the income tax of FELS and its employees and construction permit and environmental fees. FELS was assessed for RPT, and the LBAA upheld the assessment, stating that while the barges may be classified as personal property, they are considered real property for RPT purposes because they are installed at a specific location with a character of permanency. Are the power barges subject to RPT?
Yes. First, Article 415(9) of the Civil Code treats as immovable property docks and structures which, though floating, are intended by their nature and object to remain at a fixed place on a river, lake, or coast; the barges fall under this provision. Second, FELS cannot claim exemption, given that the requirement is that the machineries and equipment be actually, directly, and exclusively used by a GOCC engaged in the generation of power, and under the agreement it is FELS, not NAPOCOR, that owns and operates the barges (see FELS ENERGY, INC. V. PROVINCE OF BATANGAS [FEBRUARY 16, 2007]).

Section 234(e)

Q: ABC Mining operates a Siltation Dam and Decant System. The Provincial Assessor of Marinduque assessed the same for RPT. Is the subject property exempt from RPT?
The answer would be yes in light of SECTION 91 OF RA 7942, IN RELATION TO SECTION 3(AM), which includes infrastructure in the definition of pollution control devices exempt from RPT. Nonetheless, it must be noted that in PROVINCIAL ASSESSOR OF MARINDUQUE V.
CA [APRIL 30, 2009], the Supreme Court ruled that the tax exemption of machineries and equipment used for pollution control and environmental protection is based on usage, i.e., the direct, immediate, and actual application of the property itself to the exempting purpose. Here, the Supreme Court found that the subject property was not a machinery used for pollution control, but a structure adhering to the soil and intended for pollution control. (Note: The disputed assessment notice in that case took effect on 1 January 1995, when the governing law was the 1991 LGC; all references to RA 7942, which came into effect only on 14 April 1995, were out of place.)

Q: FPPC entered into a BOT Agreement with NAPOCOR for the construction of a power plant. Under the agreement, BPPC was created to own, manage, and operate the power plant. The BOT Agreement provided that after a period of time the power plant shall be transferred to NAPOCOR without payment of any compensation, and that NAPOCOR shall be responsible for the payment of RPT. BPPC was assessed for RPT. NAPOCOR filed a petition to declare the properties exempt from RPT. The LBAA ruled that the properties were not exempt, as the exemption is available only to a GOCC which owns and/or actually uses the machineries and equipment for the generation and transmission of power. (See NAPOCOR V. CBAA [JANUARY 30, 2009], discussed above: BPPC's use is actual and direct, while NAPOCOR's is merely contingent, so the properties are not exempt.)

Note: By virtue of Section 234 of the LGC, any exemption from RPT previously granted to, or presently enjoyed by, all persons, whether natural or juridical, including all GOCCs, was withdrawn upon the effectivity of the LGC. Note, however, that Congress has the power to exempt an entity from RPT again, notwithstanding the withdrawal made by the LGC.

Q: Prior to the LGC, XYZ Telecom was exempted from paying RPT under its original franchise. Years after the effectivity of the LGC, Congress passed a law amending XYZ's franchise, which contained a reenactment of the tax provision in XYZ's original franchise granting it RPT exemption. Is XYZ liable for RPT?
No. As held in CITY GOVERNMENT OF QUEZON CITY V.
BAYAN TELECOMMUNICATIONS [MARCH 6, 2006], the RPT exemption enjoyed by Bayantel under its original franchise, though withdrawn by force of Section 234 of the LGC, was restored by the new law which amended its original franchise.

Q: ABC Telecom was granted a 25-year franchise to install, operate, and maintain a telecommunications system throughout the Philippines under a law which states that "the grantee shall be liable to pay the same taxes on its real estate, building, and personal property, exclusive of this franchise." As they were not being issued a Mayor's permit, ABC Telecom paid RPT under protest. ABC argued that the phrase "exclusive of this franchise" means that only the real properties not used in furtherance of its franchise are subject to RPT. Is ABC's contention correct?
No; the properties of ABC, whether or not used in its telecommunications business, are subject to RPT. In DIGITAL TELECOMMUNICATIONS PHILIPPINES, INC. V. CITY GOVERNMENT OF BATANGAS [DECEMBER 11, 2008], the Supreme Court held that the phrase "exclusive of this franchise" qualifies the term "personal property": the legislative franchise, which is an intangible personal property, shall not be subject to taxes. This puts franchise grantees in parity with non-franchisees, as the latter obviously do not have franchises which might potentially be subject to RPT. There is nothing in the law which expressly or even impliedly exempts the company from RPT. Finally, the company cannot rely on the BLGF opinion, as the BLGF has no authority to rule on claims for RPT exemption.

--------------------------------------------------------------
4.
Appraisal and assessment of real property tax
a) Rule on appraisal of real property at fair market value
b) Declaration of real property
c) Listing of real property in assessment rolls
d) Preparation of schedules of fair market value
(i) Authority of assessor to take evidence
(ii) Amendment of schedule of fair market value
e) Classes of real property
f) Actual use of property as basis of assessment
g) Assessment of real property
(i) Assessment levels
(ii) General revisions of assessments and property classification
(iii) Date of effectivity of assessment or reassessment
(iv) Assessment of property subject to back taxes
(v) Notification of new or revised assessment
h) Appraisal and assessment of machinery
--------------------------------------------------------------
Note: In most of these items, I will simply provide the codal provisions as they are self-explanatory. I will focus on the important matters.

--------------------------------------------------------------
a) Rule on appraisal of real property at fair market value
--------------------------------------------------------------
Read Section 201, LGC

Note: For the preparation of schedules of fair market value, read Section 212, LGC.

--------------------------------------------------------------
b) Declaration of real property
--------------------------------------------------------------
Read Sections 202-204, LGC

Q: What is the purpose of a tax declaration?
A tax declaration only enables the assessor to identify the property for purposes of determining the assessment levels. It does not bind the assessor when he makes the assessment.

Q: What are the different approaches in estimating the FMV of real property for RPT purposes?
1.
Sales Analysis Approach: the sales price paid in actual market transactions is considered, taking into account valid sales data accumulated from the Registers of Deeds, notaries public, appraisers, brokers, dealers, bank officials, and the various sources stated under the LGC
2. Income Capitalization Approach: the value of an income-producing property is no more than the return derived from it; an analysis of the income produced is necessary in order to estimate the sum which might be invested in the purchase of the property
3. Reproduction Cost Approach: a formal approach used exclusively in appraising man-made improvements such as buildings and other structures, based on such data as materials and labor costs to reproduce a new replica of the improvement
(Allied Banking Corp. v. Quezon City Government [October 11, 2005], citing Local Assessment Regulations No. 1-92)

--------------------------------------------------------------
c) Listing of real property in assessment rolls
--------------------------------------------------------------
Read Section 205, LGC

--------------------------------------------------------------
d) Preparation of schedules of fair market value
--------------------------------------------------------------

Note: An ordinance providing that parcels of land "sold, ceded, transferred and conveyed for remuneratory consideration after the effectivity of this revision shall be subject to real estate tax based on the actual amount reflected in the deed of conveyance or the current approved zonal valuation of the BIR prevailing at the time of sale, cession, transfer and conveyance, whichever is higher, as evidenced by the certificate of payment of the CGT issued therefor" is invalid for being contrary to public policy and in restraint of trade (see Allied Banking Corp. v.
Quezon City Government [October 11, 2005]).

--------------------------------------------------------------
(i) Authority of assessor to take evidence
--------------------------------------------------------------
Read Section 213, LGC

--------------------------------------------------------------
(ii) Amendment of schedule of fair market value
--------------------------------------------------------------
Read Section 214, LGC

--------------------------------------------------------------
e) Classes of real property
--------------------------------------------------------------
Read Sections 215 to 216, LGC

Agricultural land: land devoted principally to the planting of trees, raising of crops, livestock and poultry, dairying, salt making, inland fishing and similar aquaculture activities, and other agricultural activities, and not classified as mineral, timber, residential, commercial, or industrial land
Commercial land: land devoted principally for the object of profit and not classified as agricultural, industrial, mineral, timber, or residential land
Industrial land: land devoted principally to industrial activity as capital investment and not classified as agricultural, commercial, timber, mineral, or residential land
Mineral lands: lands in which minerals exist in sufficient quantity or grade to justify the necessary expenditures to extract and utilize such minerals

Q: What are the special classes of real property under the LGC?
All lands, buildings, and other improvements actually, directly, and exclusively:
1. Used for hospitals, cultural, or scientific purposes
2. Owned and used by local water districts
3. Owned and used by GOCCs rendering essential public services in the
a. Supply and distribution of water;
b.
Generation and transmission of electric power

--------------------------------------------------------------
f) Actual use of property as basis of assessment
--------------------------------------------------------------
Read Section 217, LGC

Note: Residential land is land principally devoted to habitation (see Section 199, LGC).

Q: The real property of Mr. and Mrs. X, situated in a commercial area in front of the public market, was declared in their tax declaration as residential because it was used as their family residence. However, when the spouses left for the US to stay there permanently with their children, the property was rented to a single proprietor engaged in the sale of appliances and agricultural products. The provincial assessor reclassified the property as commercial for tax purposes. Mr. and Mrs. X appealed to the LBAA and argued that the tax declaration classifying their property as residential is binding. Is the contention of the spouses correct?
No. The law focuses on the actual use of the property for classification, valuation, and assessment purposes, regardless of ownership. Section 217 of the LGC provides that real property shall be classified, valued, and assessed on the basis of its actual use, regardless of where located, whoever owns it, and whoever uses it.

Page 129 of 164 | Last Updated: 30 July 2013 (v3)

--------------------------------------------------------------
g) Assessment of real property
(i) Assessment levels
(ii) General revisions of assessments and property classification
(iii) Date of effectivity of assessment or reassessment
(iv) Assessment of property subject to back taxes
(v) Notification of new or revised assessment
--------------------------------------------------------------
Q: Define assessment.
Assessment is the act or process of determining the value of a property, or proportion thereof, subject to tax, including the discovery, listing, classification, and appraisal of properties.
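As a rough illustration of the arithmetic implied by the provisions above (the assessed value is the FMV multiplied by the assessment level, and the basic RPT is the assessed value multiplied by the rate, capped at 1% for provinces and 2% for cities and Metro Manila municipalities under Section 233), here is a short sketch. All figures are hypothetical.

```python
def basic_rpt(fair_market_value: float, assessment_level: float, rate: float) -> float:
    """Basic RPT = (FMV x assessment level) x rate.

    assessment_level and rate are fractions, e.g. 0.50 and 0.02.
    """
    assessed_value = fair_market_value * assessment_level
    return assessed_value * rate

# Hypothetical figures: a lot with an FMV of PHP 5,000,000, a 50%
# assessment level, taxed by a city at the 2% ceiling (Section 233, LGC).
print(basic_rpt(5_000_000, 0.50, 0.02))  # 50000.0
```

The same function covers the provincial case by passing a rate of at most 0.01.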
--------------------------------------------------------------
(ii) General revisions of assessments and property classification
--------------------------------------------------------------
Read Section 219, LGC

Q: What are the steps to be followed for the mandatory conduct of a general revision of real property assessments under Section 219 of the LGC?
1. Preparation of the Schedule of FMVs
2. The enactment of ordinances:
a. Levying an annual ad valorem tax on real property and an additional tax accruing to the SEF
b. Fixing the assessment levels to be applied to the market values of the real properties
c. Providing the necessary appropriation to defray expenses incident to the general revision of real property assessments
d. Adopting the Schedule of FMVs prepared by the assessors
(see LOPEZ V. CA [FEBRUARY 19, 1999])

Note: This is not included in the syllabus, but note the instances where the assessor shall make a valuation of real property: (1) the real property is declared and listed for taxation purposes for the first time; (2) there is an ongoing general revision of property classification and assessment; or (3) a request is made by the person in whose name the property is declared (see Section 220, LGC).

--------------------------------------------------------------
(iii) Date of effectivity of assessment or reassessment
--------------------------------------------------------------
Read Section 221, LGC

--------------------------------------------------------------
(iv) Assessment of property subject to back taxes
--------------------------------------------------------------
Read Section 222, LGC

--------------------------------------------------------------
(v) Notification of new or revised assessment
--------------------------------------------------------------
Read Section 223, LGC

--------------------------------------------------------------
h) Appraisal and assessment of machinery
--------------------------------------------------------------
Read Sections 224-225, LGC

--------------------------------------------------------------
5.
Collection of real property tax
a) Date of accrual of real property tax and special levies
b) Collection of tax
(i) Collecting authority
(ii) Duty of assessor to furnish local treasurer with assessment rolls
(iii) Notice of time for collection of tax
c) Periods within which to collect real property tax
d) Special rules on payment
(i) Payment of real property tax in installments
(ii) Interests on unpaid real property tax
(iii) Condonation of real property tax
e) Remedies of LGUs for collection of real property tax
(i) Issuance of notice of delinquency for real property tax assessment
(ii) Local government's lien
(iii) Remedies in general
(iv) Resale of real estate taken for taxes, fees, or charges
(v) Further levy until full payment of amount due
--------------------------------------------------------------
a) Date of accrual of real property tax and special levies
--------------------------------------------------------------
Read Section 246, LGC

--------------------------------------------------------------
b) Collection of tax
(i) Collecting authority
(ii) Duty of assessor to furnish local treasurer with assessment rolls
(iii) Notice of time for collection of tax
--------------------------------------------------------------
Read Sections 247 to 249, LGC

--------------------------------------------------------------
c) Periods within which to collect real property tax
--------------------------------------------------------------
Read Section 270, LGC

Q: What is the prescriptive period for the collection of RPT?
General rule: The tax must be collected within 5 years from the date it becomes due.
Exception: If there is fraud or intent to evade payment, collection may be made within 10 years from the discovery of the fraud or intent to evade.
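The 5- and 10-year periods above reduce to simple date arithmetic. A minimal sketch (naive year arithmetic; February 29 due dates and any grounds for suspension of the period under Section 270 are ignored):

```python
from datetime import date
from typing import Optional

def collection_deadline(due: date, fraud_discovered: Optional[date] = None) -> date:
    """Last day to collect under the rule above: 5 years from the date the
    tax became due, or 10 years from discovery of fraud or intent to evade."""
    if fraud_discovered is not None:
        return fraud_discovered.replace(year=fraud_discovered.year + 10)
    return due.replace(year=due.year + 5)

print(collection_deadline(date(2013, 1, 1)))                     # 2018-01-01
print(collection_deadline(date(2013, 1, 1), date(2015, 6, 30)))  # 2025-06-30
```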
--------------------------------------------------------------
(i) Issuance of notice of delinquency for real property tax assessment
--------------------------------------------------------------
Read Section 254, LGC

In real estate taxation, the unpaid tax attaches to the property and is chargeable against the taxable person who had actual or beneficial use and possession of it, regardless of whether or not he is the owner. (NATIONAL GRID CORPORATION OF THE PHILIPPINES V. CENTRAL BOARD OF ASSESSMENT APPEALS [CTA EB NO. 801, JANUARY 29, 2013])

--------------------------------------------------------------
d) Special rules on payment
(i) Payment of real property tax in installments
(ii) Interests on unpaid real property tax
(iii) Condonation of real property tax
--------------------------------------------------------------
Read Sections 250, 255, and 276-277, LGC

Q: In what instances can there be a condonation or reduction of RPT?
1. General failure of crops
2. Substantial decrease in the price of agricultural or agri-based products
3. Calamity
4. When public interest so requires

--------------------------------------------------------------
(ii) Local government's lien
--------------------------------------------------------------
Read Section 257, LGC

Q: What is the local government's lien?
The basic RPT constitutes a lien on the property subject to tax, superior to all liens, charges, or encumbrances in favor of any person, irrespective of the owner or possessor thereof, enforceable by administrative or judicial action, and extinguishable only by payment of the tax and the related interests and expenses. In TESTATE ESTATE OF CONCORDIA LIM V. CITY OF MANILA [FEBRUARY 21, 1990], the Supreme Court held that unpaid real estate tax attaches to the property and is chargeable against the taxable person who had actual or beneficial use and possession of it, regardless of whether or not he is the owner.
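On item (ii), interests on unpaid RPT: under Section 255 of the LGC (in the reading list above), delinquency interest runs at 2% per month on the unpaid amount, but for no more than 36 months in total. A minimal sketch of that computation, with hypothetical figures:

```python
def delinquency_interest(unpaid_tax: float, months_late: int) -> float:
    """Interest on delinquent RPT: 2% per month on the unpaid amount,
    but for no more than 36 months in total (Section 255, LGC)."""
    months = min(months_late, 36)
    return unpaid_tax * 0.02 * months

print(delinquency_interest(10_000, 5))   # 1000.0
print(delinquency_interest(10_000, 48))  # 7200.0 (capped at 36 months)
```

The cap means total interest can never exceed 72% of the unpaid tax.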
Note: (1) In the case of instances (1) to (3), the condonation is done by the Sanggunian concerned, by ordinance and upon recommendation of the Local Disaster Coordinating Council. In the case of (4), only the President may exercise this power.
(2) In EXECUTIVE ORDER NO. 27 [FEBRUARY 28, 2011], the President, under the power given to him by Section 277 of the LGC, reduced the RPT payable in Quezon by independent power producers under BOT contracts with GOCCs and condoned the penalties and surcharges on such RPT payables.

--------------------------------------------------------------
e) Remedies of LGUs for collection of real property tax
(i) Issuance of notice of delinquency for real property tax assessment
(ii) Local government's lien
(iii) Remedies in general
(iv) Resale of real estate taken for taxes, fees, or charges
(v) Further levy until full payment of amount due
--------------------------------------------------------------
(iii) Remedies in general
(iv) Resale of real estate taken for taxes, fees, or charges
(v) Further levy until full payment of amount due
--------------------------------------------------------------
Read Sections 256 to 269, LGC

Q: What are the remedies available to the LGU for the collection of RPT?
1. Administrative action:
a. Distraint of personal property
b. Lien on the property subject to tax
c. Levy on real property
2. Judicial action
Note: The above remedies are concurrent and simultaneous.

Levy and auction procedure (steps 1 to 8, the levy proper, follow the same process as the levy for local taxes; see the note below):

If there is a bidder:
9. The bidder pays, and within 30 days after the sale the LT shall report the sale to the sanggunian
10. The LT shall deliver to the purchaser the certificate of sale
11. Proceeds of the sale in excess of the delinquent tax, interest, and expenses of sale are remitted to the owner (Section 260, LGC)
12. Within 1 year from the sale, the owner may redeem upon payment of the (1) delinquent tax, (2) interest due, (3)
expenses of sale (from the date of delinquency to the date of sale), and additional interest of 2% per month on the purchase price from the date of sale to the date of redemption. The delinquent owner retains possession and the right to the fruits (Section 261, LGC)
13. Upon redemption, the LT returns to the purchaser/bidder the price paid plus interest of 2% per month (Section 261, LGC)
14. If the property is not redeemed, the local treasurer shall execute a deed of conveyance to the purchaser (Section 262, LGC)

If there is no bidder:
9. The LT shall purchase the property in behalf of the LGU (Section 263, LGC). (Note: in cases of levy for unpaid local taxes, the LT may purchase if there is no bidder or if the highest bid is insufficient; see Section 181, LGC)
10. The Registrar of Deeds shall transfer the title of the forfeited property to the LGU without need of a court order (Section 263, LGC)
11. Within 1 year from the forfeiture, the owner may redeem the property by paying to the local treasurer the full amount of the tax, the related interest, and the costs of sale; otherwise, ownership shall vest in the LGU concerned (Section 263, LGC)
12. The sanggunian concerned may, by ordinance, sell and dispose of the real property acquired under the preceding section at public auction (Section 264, LGC)

Note (on the redemption period, per City Mayor of Quezon City v. RCBC, discussed below): a special law prevails over a general law; thus, under the Quezon City ordinance, the period is counted from the date of annotation of the sale.

Q: Discuss the remedy of a civil action for the collection of real property tax.
The civil action for the collection of the real property tax shall be filed by the local treasurer in any court of competent jurisdiction within the 5- or 10-year periods within which real property taxes may be collected (see Section 266, LGC).

--------------------------------------------------------------
6. Refund or credit of real property tax
a) Payment under protest
b) Repayment of excessive collections
--------------------------------------------------------------
Note: I will discuss payment under protest and refund under Taxpayer's Remedies.
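The redemption figures in the auction procedure above (delinquent tax, interest due, expenses of sale, plus 2% per month on the purchase price under Section 261) reduce to simple arithmetic. A sketch with hypothetical amounts:

```python
def redemption_price(delinquent_tax: float, interest_due: float,
                     sale_expenses: float, purchase_price: float,
                     months_since_sale: int) -> float:
    """Amount needed to redeem within 1 year of the sale (Section 261, LGC):
    delinquent tax + interest due + expenses of sale, plus interest of 2%
    per month on the purchase price from the date of sale to redemption."""
    return (delinquent_tax + interest_due + sale_expenses
            + purchase_price * 0.02 * months_since_sale)

# All figures hypothetical.
print(redemption_price(50_000, 7_200, 3_000, 80_000, 6))  # 69800.0
```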
Note: (1) In both cases, the levy may be repeated until the full amount due, including all expenses, is collected (Section 265, LGC).
(2) Again, recall what I said in the levying procedure for local taxes. The procedure for levying real properties to satisfy local taxes is the SAME as the levy procedure for satisfying RPT and other charges, but for RPT the LGU may purchase for only one reason: there is no bidder! It's that simple. So memorize the procedure and just take note of these two distinctions between levying for local taxes and levying for RPT.

Q: What is the redemption period for tax-delinquent properties sold at public auction?
Under the LGC, the redemption period is within 1 year from the date of sale. However, in CITY MAYOR OF QUEZON CITY V. RCBC [AUGUST 3, 2010], while the LGC provides that the one-year period begins from the date of sale, on which date the delinquent tax and other fees are paid, the local tax ordinance of Quezon City provides that the period is reckoned from the date of annotation of the sale. To reconcile the two conflicting laws, the Court applied the rule that a special law prevails over a general law; thus, the period shall be counted from the date of annotation of the sale.

--------------------------------------------------------------
7. Taxpayer's remedies
a) Contesting an assessment of value of real property
(i) Appeal to the Local Board of Assessment Appeals
(ii) Appeal to the Central Board of Assessment Appeals
(iii) Effect of payment of tax
b) Payment of real property tax under protest
(i) File protest with local treasurer
(ii) Appeal to the Local Board of Assessment Appeals
(iii) Appeal to the Central Board of Assessment Appeals
(iv) Appeal to the CTA
(v) Appeal to the Supreme Court
--------------------------------------------------------------
Note: This outline creates the impression that contesting an assessment and payment under protest are two different remedies of the taxpayer. That is wrong! They're part of the same process.
The distinction should instead be made on whether the taxpayer is questioning the validity of the tax ordinance (in which case the assessment would be illegal or void) or is disputing the correctness, reasonableness, or excessiveness of the assessment. If the taxpayer is questioning the validity of the tax ordinance, he may either question the legality of the ordinance before the DOJ Secretary under Section 187 of the LGC or question its constitutionality before the regular courts.

Read Sections 226 to 231, LGC

Q: Who may contest the assessment of real property?
In order for a taxpayer to have legal standing to contest an assessment before the LBAA, he must be a person having legal interest in the property. In NAPOCOR V. PROVINCE OF QUEZON [JULY 15, 2009], the Supreme Court stated that legal interest is interest in property, or a claim cognizable at law, equivalent to that of a legal owner who has legal title to the property. A review of the provisions of the 1991 LGC on real property taxation shows that the phrase "person having legal interest in the property" has been repeatedly adopted and used to define the entity:
1. in whose name the real property shall be listed, valued, and assessed;
2. who may be summoned by the local assessor to gather information on which to base the market value of the real property;
3. who may protest the tax assessment before the LBAA and may appeal the latter's decision to the CBAA;
4. who may be liable for the idle land tax, as well as who may be exempt from the same;
5. who shall be notified of any proposed ordinance imposing a special levy, as well as who may object to the proposed ordinance;
6. who may pay the real property tax;
7. who is entitled to be notified of the warrant of levy and against whom it may be enforced;
8. who may stay the public auction upon payment of the delinquent tax, penalties, and surcharge; and
9. who may redeem the property after it was sold at public auction for delinquent taxes.

Q: What is the procedure for protesting an RPT assessment?
1.
1. Pay the tax under protest and have "paid under protest" annotated on the receipt
2. File a written protest with the local treasurer within 30 days from payment of the tax
3. Treasurer to decide within 60 days from receipt of the protest
4. From the treasurer's decision or inaction, appeal to the LBAA within 60 days
5. LBAA to decide within 120 days
6. Appeal the LBAA decision to the CBAA within 30 days from receipt of the adverse decision
7. CBAA decision appealable to the CTA en banc within 30 days from receipt of the adverse decision of the CBAA
8. Appeal to the SC within 15 days from receipt of the adverse decision of the CTA

Note: (1) In (4), if the treasurer's decision is in favor of the taxpayer, he may now apply for a tax refund or tax credit.

Q: Can the RTC issue an injunction against the collection of RPT if there is a pending appeal with the LBAA?

Yes. In TALENTO V. ESCALADA, JR. [JUNE 27, 2008], the Supreme Court held that, as a general rule, an appeal shall not suspend the collection of RPT. However, an exception to the rule is where the taxpayer has shown a clear and unmistakable right to refuse or hold in abeyance the payment of RPT. In this case, the taxpayer showed that the assessments covered more than 10 years, that the assessment included items which should properly be excluded, and that the subject assessment should take effect on January 1 of the following year. Further, the filing of a bond was deemed to have been in compliance with Section 11 of RA 9282.

Q: The Province of Quezon assessed Mirant for unpaid real property taxes. NAPOCOR, which entered into a BOT agreement with Mirant, protested the assessment before the LBAA, claiming entitlement to tax exemption under Sec. 234 of the LGC. The RPT assessed were not paid prior to the protest. The LBAA dismissed NAPOCOR's petition for failure to make a payment under protest. Is NAPOCOR required to make a payment under protest?

Yes. By claiming an exemption from realty taxation, NAPOCOR is simply raising the question of the correctness of the assessment. As such, the real property taxes must be paid prior to the making of the protest. On the other hand, if the taxpayer is questioning the authority of the local assessor to assess RPT, it is not necessary to pay the RPT prior to the protest. A claim for tax exemption, whether full or partial, does not question the authority of the local assessor to assess RPT. (NAPOCOR v. Province of Quezon [January 25, 2010])

Refund or Credit of RPT

Read Section 253, LGC

Q: What is the rule on refunds of RPT?

The taxpayer must file the written claim within 2 years from the date of payment of the tax or from the date when the taxpayer is entitled to a reduction or adjustment. The provincial treasurer has 60 days to decide the claim for tax refund or credit.

Q: Can the taxpayer file a case directly with the RTC if it claims that it was questioning the authority of the treasurer to assess, and not only the amount of the assessment?

No. In OLIVARES V. JOEY MARQUEZ [SEPTEMBER 22, 2004], it was found that the taxpayer raised issues on prescription, double taxation, and tax exemption. In such a case, the correctness of the assessment must be dealt with; the treasurer has initial jurisdiction, and his decision is appealable to the LBAA. Payment under protest is required. (35)

Q: What is the remedy available if the claim for tax refund or credit is denied?

Follow steps 4 to 8 of the procedure for contesting an RPT assessment.

(35) Unlike in JARDINE DAVIES INSURANCE BROKERS, INC. V. ALIPOSA [FEBRUARY 27, 2003], the taxpayer in this case should make a payment under protest, as the issues included the correctness of the assessment.

--------------------------------------------------------------
IV. TARIFF AND CUSTOMS CODE
--------------------------------------------------------------
A. Tariff and duties, defined
--------------------------------------------------------------
Q: Define tariff.
Tariff is the list or schedule of articles on which a duty is imposed upon their importation into the country, with the rates at which they are severally taxed. Derivatively, it is the system of imposing duties or taxes on the importation of foreign merchandise.

--------------------------------------------------------------
C. Purpose for imposition
--------------------------------------------------------------
Q: What is the purpose of imposing a tax on imported articles?

They are imposed to:
1. Raise government revenues
2. Protect consumers and manufacturers, as well as Philippine products

--------------------------------------------------------------
D. Flexible tariff clause
--------------------------------------------------------------
Read Section 401, TCC

Q: What is the flexible tariff clause?

The flexible tariff clause is a provision in the TCC which implements the constitutionally authorized delegation by Congress to the President of the Philippines of the power, in the interest of national economy, general welfare, and/or national security, and upon recommendation of the NEDA, to:
a. Increase, reduce, or remove existing protective rates of import duty, provided that the increase should not be higher than 100% ad valorem
b. Establish import quotas or ban imports of any commodity
c. Impose an additional duty on all imports not exceeding 10% ad valorem, whenever necessary

--------------------------------------------------------------
B. General Rule: all imported articles are subject to duty
1. Importation by government taxable
--------------------------------------------------------------
Read Section 100, TCC

Q: What is the rule on imported articles?

As a general rule, all imported articles shall be subject to duty even though previously exported from the Philippines.
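The two numerical ceilings in the flexible tariff clause above (100% ad valorem for protective-rate adjustments, 10% ad valorem for the additional duty on all imports) reduce to a simple validity check. The sketch below is illustrative only; the function name and the decimal encoding of the rates are my own assumptions.

```python
def valid_flexible_tariff_action(action: str, rate: float) -> bool:
    # Caps from Section 401, TCC: protective-rate increases may not exceed
    # 100% ad valorem; the additional duty on all imports may not exceed 10%.
    caps = {
        "adjust_protective_rate": 1.00,  # 100% ad valorem ceiling
        "additional_duty": 0.10,         # 10% ad valorem ceiling
    }
    return 0.0 <= rate <= caps[action]

assert valid_flexible_tariff_action("adjust_protective_rate", 0.75)
assert not valid_flexible_tariff_action("adjust_protective_rate", 1.50)
assert valid_flexible_tariff_action("additional_duty", 0.10)
assert not valid_flexible_tariff_action("additional_duty", 0.12)
```

The check mirrors the clause's structure: the President may act only within the ceilings Congress fixed in the delegation.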
All importations by the government for its own use or that of its subordinate branches or instrumentalities, or corporations, agencies or instrumentalities owned or controlled by the government, shall be subject to duties.

--------------------------------------------------------------
E. Requirements of importation
1. Beginning and ending of importation
2. Obligations of importer
a) Cargo manifest
b) Import entry
c) Declaration of correct weight or value
d) Liability for payment of duties
e) Liquidation of duties
f) Keeping of records
--------------------------------------------------------------
1. Beginning and ending of importation
--------------------------------------------------------------
Page 137 of 164 Last Updated: 30 July 2013 (v3)

Read Section 1201 to 1202, TCC

Q: When does importation begin and when does it end?

Importation begins when the conveying vessel or aircraft enters the jurisdiction of the Philippines with intention to unlade therein. Importation is deemed terminated upon payment of the duties, taxes, and other charges due upon the articles, or secured to be paid, at the port of entry, and upon the grant of the legal permit for withdrawal.

Note: Why is it important to know when importation begins and ends? The jurisdiction of the BoC to enforce the provisions of the TCC, including seizure and forfeiture, begins with the beginning of importation. Thus, the BoC obtains jurisdiction over imported articles only after importation has begun. On the other hand, the BoC loses jurisdiction to enforce the TCC and to make seizures and forfeitures after importation is deemed terminated.

Read Section 205, TCC

Q: When are imported articles deemed to have entered the PH?

When the specified entry form is properly filed and accepted and the articles have arrived within the limits of the port of entry. (see SECTION 205, TCC)

Q: When are imported articles deemed to have been withdrawn from the warehouse in the PH for consumption?

Imported articles shall be deemed "withdrawn" from warehouse in the Philippines for consumption when the specified form is properly filed and accepted. (see SECTION 205, TCC)

Q: When does the BoC acquire exclusive jurisdiction over imported goods for the purpose of enforcing customs laws?

From the moment imported goods are actually in the possession or control of the customs authorities, even if no warrant for seizure or detention had previously been issued by the Collector of Customs in connection with the seizure and forfeiture proceedings. (see SUBIC BAY METROPOLITAN AUTHORITY V. RODRIGUEZ [APRIL 23, 2010])

Q: A flight attendant arrived from Singapore. Upon her arrival, she was asked whether she had anything to declare. She answered none, and she submitted her Customs Baggage Declaration Form, which she accomplished and signed with nothing written on the space for items to be declared. When her bag was examined, some pieces of jewelry were found concealed within the lining of the bag. She was then convicted of violating Section 3601 for unlawful importation. She now appeals, claiming that the lower court erred in convicting her under Section 3601 when the facts alleged both in the information and those shown by the prosecution constitute the offense under Section 2505 (Failure to Declare Baggage). Is she correct?

No. Section 2505 does not define a crime. It merely provides the administrative remedies which can be resorted to by the BoC when seizing dutiable articles found in the baggage of any person arriving in the Philippines which are not included in the accomplished baggage declaration submitted to the customs authorities, and the administrative penalties that such person must pay for the release of such goods if not imported contrary to law. Such administrative penalties are independent of any criminal liability for smuggling that may be imposed under Section 3601. (Jardeleza v. People [February 6, 2006])

Read Section 1601 to 1604, TCC

--------------------------------------------------------------
F. Importation in violation of TCC
1. Smuggling
2. Other fraudulent practices
--------------------------------------------------------------
Q: Define smuggling.

Smuggling is an act of any person who shall fraudulently import or bring into the Philippines, or assist in so doing, any article contrary to law, or shall receive, conceal, buy or sell, or in any manner facilitate the transportation, concealment or sale of such article after importation, knowing the same to have been imported contrary to law. It includes the exportation of articles in a manner contrary to law. (see Section 3519, TCC)

--------------------------------------------------------------
2. Obligations of importer
a) Cargo manifest
b) Import entry
c) Declaration of correct weight or value
d) Liability for payment of duties
e) Liquidation of duties
f) Keeping of records
--------------------------------------------------------------
Q: What are the obligations of the importer?

a. Cargo Manifest. A cargo manifest is the document used in shipping, containing the list of the contents, value, origin, carrier and destination of the goods to be shipped.

Note: In PEOPLE OF THE PHILIPPINES VS. ROEL PAQUIT SAYSON, CTA CRIM CASE NO. O-094, DECEMBER 12, 2012, the shipment was declared as "Replacement Parts," when in truth and in fact the shipment contained fifteen units of Sportage and Galloper.

Note: Mere possession of alleged smuggled goods is prima facie evidence of guilt of smuggling, unless the defendant can explain that his possession is lawful.

--------------------------------------------------------------
3. Conditionally-free importation
--------------------------------------------------------------
Q: What are the classes of importation under the TCC?

1. Dutiable Importations (Section 100, TCC)
2. Prohibited Importations (Section 101 and 1207, TCC)
3. Conditionally-Free Importations (Section 105, TCC)
4.
Drawbacks (Section 106, TCC)

Q: What are the other fraudulent practices against customs revenue, aside from unlawful importation?

Read Section 3602, TCC

1. Entry of imported or exported articles by means of any false or fraudulent practice, invoice, declaration, affidavit, or other document
2. Entry of goods at less than their true weights or measures, or upon a false classification as to quality or value
3. Payment of less than the amount due
4. Filing any false or fraudulent claim for the payment of drawback or refund of duties upon the exportation of merchandise
5. Filing any affidavit, certificate or other document to secure to himself or to others the payment of any drawback, allowance or refund of duties on the exportation of merchandise greater than that legally due thereon

Note: In PEOPLE OF THE PHILIPPINES VS. MARIVIC BRIONES, DAVID BANGA, BENJAMIN VALIC, CTA CRIM CASE NO. 0158, JULY 23, 2012, the CTA held that in a prosecution for violation of Section 3602 of the Tariff and Customs Code of the Philippines, in relation to Article 172 of the Revised Penal Code, the prosecution must prove beyond reasonable doubt that the accused, in conspiracy with the other accused, made or attempted to make an entry of the alleged imported article through the filing of the said Import Entry at the Bureau of Customs.

Q: X and his wife Y, Filipinos living in the Philippines, went on a 3-month pleasure trip around the world during the months of June, July, and August 2002. In the course of their trip, they accumulated some personal effects which were necessary, appropriate and normally used in leisure trips, as well as souvenirs in non-commercial quantities. Are they returning residents for purposes of Section 105 of the TCC?

No. The term "returning residents" refers to nationals who have stayed in a foreign country for a period of at least 6 months (see Section 105(f), TCC).
Due to their limited duration of stay abroad, X and Y are not considered as returning residents; they are merely considered as travelers or tourists, who likewise enjoy the benefit of conditionally-free importation (see Section 105(g), TCC).

Read Section 105, TCC

Q: Jacob, after serving a 5-year tour of duty as military attaché in Jakarta, Indonesia, returned to the Philippines bringing with him his personal effects, including a personal computer and a car. Would Jacob be liable for taxes on these items?

No. Jacob will be exempted provided he complies with the requirements under Section 105 of the TCC. The requirements are:
a. The car must have been ordered or purchased prior to the receipt by the Philippine mission or consulate of the recall order
b. The car is registered in his name
c. The exemption shall apply only to the value of the car
d. The exemption shall apply to the aggregate value of his personal and household effects not exceeding 30% of the total amount received by him as salary and allowances during his assignment, but not to exceed 4 years
e. He must not have availed of the exemption oftener than once every four years
(see last par., Section 105, TCC)

Q: What are drawbacks?

They refer to refunds or tax credits of duties paid on goods that are being exported or used in the production of manufactured exports. Examples include:
1. Fuel used for propulsion of vessels engaged in trade with foreign countries or coastwise trade
2. Petroleum oils or oils obtained from bituminous minerals, crude, eventually used for generation of electric power and manufacture of city gas
3. Certain articles made from imported articles, subject to certain conditions

Note: Section 106(e), TCC provides that claims for refund or tax credit eligible for such benefits shall be paid or granted by the Bureau of Customs to claimants within sixty (60) days.
REPUBLIC OF THE PHILIPPINES, CTA CASE NO. 8412, NOVEMBER 14, 2012

--------------------------------------------------------------
H. Classification of duties
1. Ordinary/Regular duties
a) Ad valorem; methods of valuation
(i) Transaction value
(ii) Transaction value of identical goods
(iii) Transaction value of similar goods
(iv) Deductive value
(v) Computed value
(vi) Fallback value
b) Specific
2. Special duties
a) Dumping duties
b) Countervailing duties
c) Marking duties
d) Retaliatory/discriminatory duties
e) Safeguard duties
--------------------------------------------------------------
1. Ordinary/Regular duties
a) Ad valorem; methods of valuation
(i) Transaction value
(ii) Transaction value of identical goods
(iii) Transaction value of similar goods
(iv) Deductive value
(v) Computed value
(vi) Fallback value
b) Specific
--------------------------------------------------------------
Q: What are ordinary or regular duties?

These are taxes imposed or assessed upon merchandise imported from, or exported to, a foreign country for the purpose of raising revenue. They may also be imposed to serve as protective barriers which would prevent the entry of merchandise that would compete with locally manufactured items. They are also referred to as tariff barriers.

Q: What are ad valorem customs duties?

These are customs duties that are computed on the basis of value. (see Section 201, TCC)

Q: What are the methods of determining dutiable values?

Read Section 201-205, 1313 TCC

The methods of determining the dutiable value are as follows (in order of preference):
1. Transaction value: an ad valorem rate of duty equivalent to the price actually paid or payable for the goods when sold for export to the Philippines, as adjusted;
2.
Transaction value of identical goods: the transaction value of identical goods sold for export to the Philippines and exported at or about the same time as the goods being valued; "identical goods" means goods which are the same in all respects, including physical characteristics, quality and reputation, discounting minor differences in appearance;
3. Transaction value of similar goods;
4. Deductive value: an amount based on the unit price at which the imported goods, or identical or similar imported goods, are sold in the Philippines, in the same condition as when imported, in the greatest aggregate quantity, at or about the time of importation of the goods being valued, to persons not related to the persons from whom they buy such goods, as adjusted;
5. Computed value: the aggregate of the cost or value of materials and fabrication or other processing employed in producing the imported goods, an amount for profit and general expenses, freight, insurance fees and other transportation expenses for the importation of the goods, among others; and
6. Fallback value: an amount determined by using other reasonable means and on the basis of data available in the Philippines.

Note: The transaction value is the primary method of determining dutiable value. If the transaction value of the imported article cannot be determined, the alternative methods should be used one after the other, in the order listed above.

--------------------------------------------------------------
2. Special duties
a) Dumping duties
b) Countervailing duties
c) Marking duties
d) Retaliatory/discriminatory duties
e) Safeguard duties
--------------------------------------------------------------
Q: What are special duties?

These are additional import duties imposed on specific kinds of imported articles under certain conditions. They cannot be applied without the regular customs duties, and they can only be applied in the presence of a special order from government officers.
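The strict order of preference among the six valuation methods above lends itself to a short sketch. The dictionary of "determinable" values and the peso figures below are hypothetical; only the fallback order itself comes from the list in Section 201.

```python
# Statutory order of preference for determining dutiable value (Sec. 201, TCC).
METHODS = [
    "transaction value",
    "transaction value of identical goods",
    "transaction value of similar goods",
    "deductive value",
    "computed value",
    "fallback value",
]

def dutiable_value(determinable: dict):
    # Try each method strictly in order and return the first one that
    # yields a determinable value; the methods are never used out of order.
    for method in METHODS:
        value = determinable.get(method)
        if value is not None:
            return method, value
    raise ValueError("dutiable value could not be determined by any method")

# Hypothetical shipment: no usable transaction value, but identical goods
# were exported at about the same time.
method, value = dutiable_value({
    "transaction value": None,
    "transaction value of identical goods": 120_000.0,
    "deductive value": 115_000.0,
})
assert method == "transaction value of identical goods"
assert value == 120_000.0
```

Note how the deductive value is ignored even though it is available: a later method may be reached only when every earlier one fails.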
Note: Normal value, for purposes of imposing the anti-dumping duty, is the comparable price, at the date of sale, of the like product, commodity or article in the ordinary course of trade when destined for consumption in the country of export. (see Section 301(s)(3), TCC, as amended by RA 8752)

a. A product is exported into the Philippines:
ii. At a price less than its normal value
iii. When destined for domestic consumption
b. And such exportation:
i. Is causing, or
ii. Is threatening to cause, material injury to a domestic industry, or
iii. Materially retards the establishment of a domestic industry producing the like product
(see Section 301(a), TCC, as amended by RA 8752)

Note: (1) The imposing authority for the anti-dumping duty is the DTI Secretary in the case of a non-agricultural product, commodity, or article, or the DA Secretary in the case of an agricultural product, commodity or article. (2) Even when all the requirements for the imposition have been fulfilled, the decision on whether or not to impose a definitive anti-dumping duty remains the prerogative of the Tariff Commission. (3) In determining whether to impose the anti-dumping duty, the Tariff Commission may consider, among others, the effect of imposing an anti-dumping duty on the welfare of consumers and/or the general public, and on other related local industries. (4) The amount of anti-dumping duty that may be imposed is the difference between the export price and the normal value of such product, commodity, or article.

Note: (2) The countervailing duty is equivalent to the bounty (cash award paid to an exporter), subsidy (fiscal incentives, not in the form of a cash award, to encourage manufacturers or exporters) or subvention (any assistance other than a bounty or subsidy). (3) The imposing authority for countervailing duties is the DTI Secretary in the case of a non-agricultural product, commodity, or article, or the DA Secretary in the case of an agricultural product, commodity or article.

2.
Taxpayer
a) Protest
b) Abandonment
c) Abatement and Refund
--------------------------------------------------------------
--------------------------------------------------------------
I. Remedies
1. Government
a) Administrative/extrajudicial
(i) Search, seizure, forfeiture, arrest
b) Judicial
(i) Rules on appeal including jurisdiction
--------------------------------------------------------------
Q: What are the remedies of the government?

1. Administrative/extrajudicial remedies: search, seizure, forfeiture, arrest
2. Judicial remedies: civil and criminal actions

Q: When does the BoC normally avail itself of the administrative remedy instead of the judicial remedy, and vice versa?

a. Administrative remedy: when the goods to which the tax lien attaches, regardless of ownership, are still in the custody or control of the government. In the case, however, of importations which are prohibited or undeclared, the remedy of seizure and forfeiture may still be exercised even if the goods are no longer in its custody.
b. Judicial remedy: when the goods are properly released and thus beyond the reach of a tax lien, the government can seek payment of the tax liability through judicial action, since the tax liability of the importer constitutes a personal debt to the government and is therefore enforceable by action.

--------------------------------------------------------------
a) Administrative/extrajudicial
(i) Search, seizure, forfeiture, arrest
--------------------------------------------------------------
Q: What are the extrajudicial (or administrative) remedies available to the government?

1. Enforcement of tax lien (Section 1204 and Section 1508, TCC)
2. Seizure and forfeiture (Sections 2201-2212, 2301-2317, 2530-2536, TCC)

Read Section 1204 and Section 1508, TCC

Q: What is a tax lien in relation to the TCC?
The liability for duties, taxes, fees and other charges of an importer constitutes a lien upon the articles imported, which may be enforced while such articles are in the custody or subject to the control of the government. The Collector shall hold the delivery of any article imported or consigned to an importer whenever such importer has an outstanding and demandable account with the BoC. If subsequently authorized by the Commissioner, and upon notice, the Collector may sell such importation or a portion thereof to cover the outstanding account of the importer.

Q: What is the power of seizure and arrest?

Customs officers may seize any vessel, aircraft, cargo, article, animal or other movable property when the same is subject to forfeiture or liable for any fine imposed under the TCC and related rules and regulations. (see Section 2205, TCC)

Note: The BoC may conduct searches and seizures even without the benefit of a warrant issued by a judge upon probable cause, except if the search is to be conducted in a dwelling.

--------------------------------------------------------------
(i) Search, seizure, forfeiture, arrest
--------------------------------------------------------------
Note: This is how seizure and forfeiture works. First, the articles are seized by the customs authorities; a warrant of seizure is used for said purpose. In the case of a search in a dwelling, a search warrant from the regular courts would have to be procured. Second, the Collector, upon making any seizure, shall issue a Warrant of Detention. The articles may be released if a bond is filed, except if there is prima facie evidence of fraud in their importation, in which case the seized articles may not be released on a bond. Then the forfeiture proceedings take place; the only issue is whether the seized goods should be forfeited. The case can be compromised or be the subject of a settlement. The Collector may either issue a Declaration of Forfeiture or rule that the seized articles are not subject to forfeiture.
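The bond-release rule in the note above (release upon filing of a bond, except where there is prima facie evidence of fraud in the importation) reduces to a two-condition check. A minimal sketch with assumed names follows; it is an illustration, not a statement of the procedure itself.

```python
def may_release_on_bond(bond_filed: bool, prima_facie_fraud: bool) -> bool:
    # Seized articles may be released if a bond is filed, except when there
    # is prima facie evidence of fraud in their importation.
    return bond_filed and not prima_facie_fraud

assert may_release_on_bond(bond_filed=True, prima_facie_fraud=False)
assert not may_release_on_bond(bond_filed=True, prima_facie_fraud=True)   # fraud bars release
assert not may_release_on_bond(bond_filed=False, prima_facie_fraud=False)  # no bond, no release
```

The fraud exception is absolute in this rule: once prima facie fraud appears, no bond amount secures release.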
Thus, either the importer or the government can be aggrieved by said decision. If the importer is aggrieved, he may file an administrative protest with the CoC and, if it is denied, he can proceed to the CTA, and so on. If the government is aggrieved, there is automatic review by the CoC and then by the DOF Secretary. If said bodies decide in favor of the government, the importer may proceed to the CTA, and so on. Tada! It's not that complicated. This is how I'll organize the discussion below. First, I'll discuss seizure and arrest and provide the related provisions. Second, I'll discuss what properties are subject to forfeiture. Third, I'll discuss the forfeiture proceeding itself and which body has jurisdiction over the same.

Read Section 2201-2212, TCC

Q: In smuggling a shipment of garlic, the smugglers used an eight-wheeler truck which they hired for the purpose of taking out the shipment from the customs zone. Danny, the truck owner, did not have a certificate of public convenience to operate his trucking business. Danny did not know that the shipment of garlic was illegally imported. Can the CoC of the port seize and forfeit the truck as an instrument of the smuggling?

Yes, the CoC of the port can seize and forfeit the truck as an instrument of the smuggling, since the same was used unlawfully in the importation of smuggled articles. The mere carrying of such articles on board the truck in commercial quantities shall subject the truck to forfeiture, since it was not being used as a duly authorized common carrier which was chartered or leased as such. (see Section 2530(a), TCC) Further, although forfeiture of the vehicle will not be effected if it is established that the owner thereof had no knowledge of or participation in the unlawful act, there arises a prima facie presumption of knowledge or participation if the owner is not in the business for which the conveyance is generally used. Thus, not having a certificate of public convenience to operate a trucking business, Danny is legally deemed not to have been engaged in the trucking business. (see Section 2531, TCC)

Q: Are common carriers subject to forfeiture?

As a general rule, they are not subject to forfeiture. However, if the owner has knowledge of its use in smuggling and was a consenting party, it may be forfeited. Pursuant to Section 2530 of the Tariff and Customs Code of the Philippines, the mere carrying or holding on board of smuggled articles shall subject the vessel to forfeiture. However, the vessel is not subject to forfeiture if it is engaged as a duly authorized common carrier and, as such carrier, it is not chartered or leased. (THE COMMISSIONER OF CUSTOMS AND THE UNDERSECRETARY OF FINANCE VS. GOLD MARK SEA CARRIERS, INC., CTA EB NO. 825, DECEMBER 24, 2012)

Q: When is there prima facie knowledge by the owner of the common carrier?

There is prima facie knowledge by the owner of the common carrier of its use in smuggling:
a. If the conveyance was used for smuggling at least twice before
b. If the owner is not in the business for which the conveyance is generally used
c. If the owner is not in a position to own such conveyance

Read Section 2530-2536, TCC

Q: What are the requisites for forfeiture of imported goods?

a. The wrongful making by the owner, importer, exporter or consignee of any declaration or affidavit, or the wrongful making or delivery of any invoice, letter or paper touching on the importation or exportation of merchandise;
b. The falsity of such declaration, affidavit, invoice, letter or paper; and
c. An intention on the part of the importer/consignee to evade the payment of the duties due (Republic v. CA [October 2, 2001])

Q: Discuss the administrative proceeding of forfeiture, from issuance of the warrant of detention to the declaration of forfeiture.

4. Preparation of a list and particular description of the property seized, as well as appraisal and classification of the same
5. The Collector, after hearing and in writing, makes a declaration of forfeiture, fixes the amount of the fine, or takes such action as may be proper

Note: As a result of (5), the aggrieved owner or importer may file what is called an administrative protest. In said protest, he is essentially questioning the decision of the Collector before the CoC. In some cases, instead of a declaration of forfeiture, it is the government who is aggrieved. In such a case, automatic review shall apply. See the discussion under Administrative Protest.

Q: Who has jurisdiction to hear and determine questions touching on the seizure and forfeiture of dutiable goods?

The CoC sitting in seizure and forfeiture proceedings has exclusive jurisdiction to hear and determine all questions touching on the seizure and forfeiture of dutiable goods. As held in SUBIC BAY METROPOLITAN AUTHORITY V. RODRIGUEZ [APRIL 23, 2010], the Collector of Customs has exclusive jurisdiction over seizure and forfeiture proceedings, and the regular courts cannot interfere with his exercise thereof or enjoin or interfere with it. The regular courts are precluded from assuming cognizance over such matters even through petitions for certiorari, prohibition, or mandamus. The RTC must defer to the exclusive original jurisdiction of the BOC in such proceedings. This is known as the doctrine of primary jurisdiction.

Among the reasons for this rule are the following:
To prevent smuggling and other frauds upon the customs
c. To render effective and efficient the collection of import and export duties due the State, which enables the government to carry out its functions
d.
The issuance by regular courts of preliminary injunction in seizure and forfeiture proceedings before the BoC may arouse suspicion that the issuance or grant was for consideration other than the strict merits of the case (see Zuno v. Cabredo [402 SCRA 75]) e. Under the doctrine of primary jurisdiction, the BoC has exclusive administrative jurisdiction to conduct searches and seizures and forfeitures of contraband without interference from the courts. It could conduct search and seizures without need of a judicial warrant except if the search is to be conducted in a dwelling place. from receipt refers to his decisions on administrative tax protests. Unless an appeal is made to the CTA in the manner and within the period prescribed by law, the action or ruling of the CoC shall be final and conclusive (Pilipinas Shell v. CoC [June 18, 2009]) Note that the CTA has jurisdiction only over decisions of the CoC in cases involving seizures, detention or release of property affected, not the decision of the Collector. --------------------------------------------------------------2. Taxpayer a) Protest b) Abandonment c) Abatement and Refund ----------------------------------------------------------------------------------------------------------------------------a) Protest --------------------------------------------------------------Read Section 2308 to 2315, TCC Q: What is an administrative tax protest? A tax protest case, under the TCC, involves a protest of the liquidation of import entries. In other words, it is a protest which questions the legality or correctness of assessed customs duties. --------------------------------------------------------------b) Judicial (i) Rules on appeal including jurisdiction --------------------------------------------------------------Read Section 2401, TCC Q: What are the judicial remedies that may be availed of by the Government? a. Civil Action b. 
Criminal Action Note: Such actions shall be brought in the name of the Government of the Philippines and shall be conducted by customs officers but no action shall be filed in court without the approval of the CoC. Q21.1. Discuss the procedure for customs protest from issuance of warrant of detention to appeal to the Supreme Court. well as appraisal and classification of the same 5. The Collector, after hearing and in writing, can either make a declaration of forfeiture (owner or importer is aggrieved) or rule otherwise (government is aggrieved). If the owner or importer is aggrieved by the decision of the Collector: 1. Protest to the Collector within 15 days 2. If aggrieved by the decision or action of the collector upon protest, appeal to the Commissioner within 15 days after notification in writing by the Collector of his action or decision 3. Appeal to CTA Division within 30 days from notice 4. Appeal to CTA En Banc 5. Appeal to SC by certiorari within 15 days If the government is aggrieved by the decision of the Collector: 1. Automatic review by COC 2. Automatic review by DOF Secretary 3. If owner or importer is aggrieved by decision of COC or DOF Secretary 4. Appeal to CTA Division within 30 days from notice 5. Appeal to CTA En Banc 6. Appeal to SC by certiorari within 15 days NORTHERN ISLANDS COMPANY, INC., CTA CASE NO. 8068, JUNE 6, 2012 a. Any person who abandons an article shall be deemed to have renounced all his interests and property rights therein. b. An abandoned article shall ipso facto be deemed the property of the Government. c. It does not relieve the owner from any criminal liability d. If the abandoned articles are transferred to a customs bonded warehouse, he operator shall be liable for the payment of duties and taxes in the case of losses of the stored abandoned imported articles (see R.V. Marzan v. 
CA [March 4, 2004])
--------------------------------------------------------------
c) Abatement and Refund
--------------------------------------------------------------
Read Sections 1701-1708, TCC
Q: What is abatement?
Abatement is the reduction or non-imposition of customs duties on certain imported materials as a result of: damage incurred during voyage; deficiency in contents of packages; loss or destruction of articles after arrival; or death or injury of animals.
Note: The general rule is that no abatement of duties shall be made on account of damage incurred or deterioration suffered during the voyage of importation, and duties will be assessed on the actual quantity imported (see Section 1701, TCC).
Q: What are the instances where the Collector may abate or refund the amount of duties accruing or paid by the importer? (exceptions to the general rule)
1. Damage incurred during voyage
2. Missing package
3. Deficiency in contents of packages
4. Articles lost or destroyed after arrival
5. Dead or injured animals
6. Refund of excess payments (made due to manifest clerical errors)
----------------------------------------------------------
V. JUDICIAL REMEDIES (CTA)
----------------------------------------------------------
Note: The rules here are those found in R.A. 1125, as amended by RA 9282. Some sources and answers to past bar questions may still contain rules applicable to R.A. 1125 before its amendment. So make sure you have an updated codal and reference material.
b. Decisions, resolutions or orders on MRs or MNTs of the Court in Division in the exercise of its exclusive original jurisdiction over:
i. Tax collection cases
ii. Cases involving criminal offenses arising from violations of the NIRC or TCC and other laws administered by the BIR or BOC
--------------------------------------------------------------
A. Jurisdiction of the Court of Tax Appeals
1. Exclusive appellate jurisdiction
2.
Criminal cases
--------------------------------------------------------------
Note: The CTA is composed of a Presiding Justice and 8 associate justices organized into three divisions.
c. Decisions, resolutions or orders of the RTCs in the exercise of their appellate jurisdiction over:
i. Local tax cases
ii. Tax collection cases
iii. Criminal offenses arising from violations of the NIRC or TCC and other laws administered by the BIR or BOC
--------------------------------------------------------------
1. Exclusive Appellate Jurisdiction
a) Cases within the jurisdiction of the court en banc
b) Cases within the jurisdiction of the court in divisions
--------------------------------------------------------------
Note: This refers to the exclusive jurisdiction to review by appeal of the CTA en banc and CTA in division.
d. Decisions of the CBAA in the exercise of its appellate jurisdiction over cases involving assessment and taxation of real property originally decided by the provincial or city board of assessment appeals. (see Section 2, Rule 4, A.M. No. 05-11-07-CTA)
Read Section 2, Rule 4, RRCTA
Q: What are the cases within the exclusive appellate jurisdiction to review by appeal of the CTA en banc?
a. Decisions or resolutions on MRs or MNTs of the Court in Division in the exercise of its exclusive appellate jurisdiction over:
i. Cases arising from administrative agencies
ii. Local tax cases decided by the RTCs in the exercise of their original jurisdiction
iii. Tax collection cases decided by RTCs in the exercise of their original jurisdiction involving final and executory assessments for taxes, fees, charges, and penalties, where the principal amount of taxes and penalties claimed is less than P1,000,000
iv. Criminal offenses arising from violations of the NIRC or TCC and other laws administered by the BIR or BOC
Read Section 3(a), Rule 4, RRCTA
Q: What are the cases within the exclusive appellate jurisdiction to review by appeal of the CTA in division?
a. Decisions of the CIR
i. In cases involving disputed assessments, refunds of internal revenue taxes, fees or other charges, penalties in relation thereto; or
ii. Other matters arising under the NIRC or other laws administered by the BIR
b. Inaction by the CIR where the NIRC provides a specific period of action
i. In cases involving disputed assessments, refunds of internal revenue taxes, fees or other charges, penalties in relation thereto; or
ii. Other matters arising under the NIRC or other laws administered by the BIR
c. Decisions, orders or resolutions of the RTCs in local tax cases decided or resolved by them in the exercise of their original jurisdiction
d. Decisions of the Commissioner of Customs
i. In cases involving liability for customs duties, fees, or other money charges, seizure, detention or release of property affected, fines, forfeitures or other penalties in relation thereto; or
ii. Other matters arising under the Customs Law or other laws administered by the Bureau of Customs
Page 152 of 164 Last Updated: 30 July 2013 (v3)
… statement that the action is final. The rationale is that to let the taxpayer defer the period is to unduly put in his hands the collection of taxes.
Q: A taxpayer received on 15 Jan 1996 an assessment for internal revenue tax deficiency. On 10 Feb 1996, the taxpayer filed a petition for review with the CTA. Should the CTA entertain the appeal?
No. Before the taxpayer can avail of a judicial remedy, he must first exhaust administrative remedies by filing a protest within 30 days from receipt of the assessment. An assessment by the BIR is not the CIR's decision from which a petition for review may be filed with the CTA.
Rather, it is the action taken by the CIR in response to the taxpayer's protest on the assessment that would constitute the appealable decision.
e. Decisions of the Secretary of Finance on customs cases elevated to him automatically for review from decisions of the Commissioner of Customs which are adverse to the Government under Section 2315 of the TCC
f. Decisions of the DTI Secretary in the case of non-agricultural products, commodities or articles and the DA Secretary in the case of agricultural products, commodities or articles, involving dumping and countervailing duties under Sections 301 and 302 of the TCC and safeguard measures under the Safeguard Measures Act (RA 8800), where either party may appeal the decision to impose or not to impose said duties (see Section 3(a), Rule 4, A.M. No. 05-11-07-CTA)
Note: Any dispute or controversy involving national internal revenue taxes or customs duties not falling within the purview of the exclusive appellate jurisdiction of the CTA must fall within the jurisdiction of the regular courts. A taxpayer's suit impugning the constitutionality of a tax statute, for example, even if involving the NIRC or TCC, would fall within the jurisdiction of the regular courts.
Q: ABC Corp. received an income tax deficiency assessment from the BIR. ABC filed a protest and submitted to the BIR all relevant supporting documents. The CIR did not formally rule on the protest. Thereafter, ABC was served summons and a copy of the complaint for collection of the tax deficiency filed by the BIR with the RTC. ABC filed a petition for review before the CTA. The BIR contends that the petition is premature since there was no formal denial of the protest of ABC. Is the BIR's contention correct?
No. The CTA has jurisdiction because the filing of the collection suit is considered a denial of ABC's protest, and the petition qualifies as an appeal from that decision on the disputed assessment.
When the CIR decided to collect the tax assessed without first deciding on the taxpayer's protest, the effect of his action of filing a judicial action for collection is a decision of denial of the protest, in which event the taxpayer may file an appeal with the CTA (Republic v. Lim Tian Teng & Sons [16 SCRA 584]).
Q: Name some communications sent by the CIR to taxpayers that are deemed appealable to the CTA.
As provided in SURIGAO ELECTRIC V. CTA [JUNE 28, 1974]:
1. a letter which stated the result of the investigation requested by the taxpayer and the consequent modification of the assessment;
2. a letter which denied the request of the taxpayer for the reconsideration, cancellation, or withdrawal of the original assessment;
3. a letter which contained a demand on the taxpayer for the payment of the revised or reduced assessment; and
4. a letter which notified the taxpayer of a revision of previous assessments.
In addition, the letter contained a notation indicating that petitioner's request for reconsideration had been denied for lack of supporting documents.
Q: Is the final demand letter issued by the BIR reiterating the demand for immediate payment considered a final decision appealable to the CTA?
Yes. As held in CIR V. ISABELA CULTURAL CORP [JULY 11, 2001], the letter is deemed as the CIR's final act since failure to comply therewith exposes the property to distraint and levy. The Supreme Court stated that a final demand letter from the BIR, reiterating to the taxpayer the immediate payment of a tax deficiency assessment previously made, is tantamount to a denial of the taxpayer's request for reconsideration. Such letter amounts to a final decision on a disputed assessment and is thus appealable to the CTA.
Q: AA Corp received a FAN for contractor's tax. It protested the assessments. Thereafter, AA requested for the cancellation of the assessments. 4 years passed and nothing happened.
The CIR then issued 2 warrants of distraint to collect the taxes. One year later, the CIR answered and denied AA's request for cancellation. The CIR, in its answer to AA's request for the cancellation of the assessments, requested the taxpayer to pay the deficiency taxes within ten days from receipt of the demand; otherwise, the Bureau would enforce the warrants of distraint. He closed his demand letter with this paragraph: "This constitutes our final decision on the matter. If you are not agreeable, you may appeal to the Court of Tax Appeals within 30 days from receipt of this letter." What is the reckoning point of the appeal period to the CTA: the issuance of the warrant of distraint, or the letter embodying the final demand of payment?
The reviewable decision is the latter letter, where the CIR clearly directed the taxpayer to appeal to the CTA, and not the warrants of levy and distraint. No amount of quibbling or sophistry can blink the fact that said letter, as its tenor shows, embodies the Commissioner's final decision. He even directed the taxpayer to appeal it to the Tax Court. The directive is in consonance with this Court's dictum that the Commissioner should always indicate to the taxpayer in clear and unequivocal language what constitutes his final determination of the disputed assessment. That procedure is demanded by the pressing need for fair play, regularity and orderliness in administrative action. (see ADVERTISING ASSOCIATES, INC. VS. COURT OF APPEALS [DECEMBER 26, 1984])
In … VS. COMMISSIONER OF INTERNAL REVENUE [DECEMBER 9, 2005], the Supreme Court reiterated that the letter of demand dated January 24, 1991 unquestionably constitutes the final action taken by the BIR on petitioner's request for reconsideration when it reiterated the tax deficiency assessments due from petitioner and requested its payment. Failure to do so would result in the issuance of a warrant of distraint and levy to enforce its collection without further notice.
Q: U Corp was assessed deficiency income taxes (FAN). U Corp protested the assessment.
BIR, without ruling on the protest, issued a warrant of distraint and levy. U Corp requested reinvestigation and reconsideration of the issuance of the warrant. Thereafter, BIR filed a collection suit to collect the taxes. U Corp then filed a petition for review with the CTA, on the theory that its period to appeal only began to run from its receipt of summons in the civil collection case. BIR argued the appeal was filed out of time, as the period began to run when the warrant of distraint and levy was issued. Who is correct?
U Corp is correct. Under the circumstances, the CIR did not clearly signify his final action on the disputed assessment. Thus, it was only when U Corp received the summons on the civil suit for collection of deficiency income tax that the period to appeal commenced to run. The request for reinvestigation and reconsideration was in effect considered denied by the CIR when the latter filed a civil suit for collection of deficiency income tax. [COMMISSIONER OF INTERNAL REVENUE VS. UNION SHIPPING CORPORATION (MAY 21, 1990)]
… jurisdiction. The CIR argued that the Revenue District Officer who signed the letter which became the basis of the instant petition cannot be deemed an alter ego of the CIR for purposes of issuing a final decision on Festo's protest under a delegated authority. As such, the subject letter is not the CIR's final decision on Festo's protest; thus, the 30-day period to file an appeal was yet to commence, rendering the instant petition premature. Is the contention of the CIR correct?
Yes, the appeal to the CTA was premature. As held in FESTO HOLDINGS, INC. VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 8226, SEPTEMBER 2, 2011], the letter issued by the Revenue District Officer cannot be considered as the CIR's decision appealable to this Court, in the absence of any proof that the former was authorized to decide and act in behalf of the latter on the protest of a taxpayer.
Nowhere is it provided that a Revenue District Officer can issue decisions that are appealable to this Court. Therefore, there being no decision of the CIR in the present case, this Court cannot take cognizance of the present case.
Q: The City of Makati received assessment notices imposing deficiency taxes. Makati protested. The BIR stated that the assessments were already final and executory. Nonetheless, Makati requested for another reinvestigation. The Revenue Officer and Deputy Commissioner granted this request. Did the reinvestigation of the case reverse the finality of the assessments?
No. Only the Commissioner of Internal Revenue has the power to reverse, revoke or modify any existing ruling of the Bureau of Internal Revenue (BIR), which power cannot be delegated. In assessment cases, a reopening/reinvestigation after a final decision on disputed assessment has been issued must be initiated by the Commissioner. Otherwise, the reopening/reinvestigation is without authority, and failure to appeal the final decision on disputed assessment to the CTA would render the assessment final and executory. Here, the reinvestigation was merely granted by a revenue officer and a deputy commissioner. (see CITY OF MAKATI VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 641, SEPTEMBER 16, 2011])
Q: Is the denial by the BIR of the protest on the PAN (not the FAN) appealable to the CTA?
No, the denial of the CIR must be on the protest of the FAN. In ALLIED BANKING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [FEBRUARY 5, 2010], the Supreme Court ruled that it is the Formal Letter of Demand and Assessment Notice (FAN) that must be administratively protested or disputed within 30 days, and not the PAN.
Q: The CIR filed a Motion to Dismiss the Petition for Review commenced by Festo Holdings on the ground of lack of jurisdiction.
PIERRE MARTIN DE LEON REYES Ateneo Law Batch 2013
Q: BIR issued a PAN to AB Corp for deficiency DST. AB protested the PAN. Thereafter, BIR sent a FAN to AB Corp.
The letter provided: "… demandable." Thereafter, AB immediately filed a petition for review with the CTA. Should the petition be dismissed?
No. Ordinarily, the procedure is that it is the FAN that must be administratively protested, as a prerequisite to subsequently filing a PFR with the CTA. However, the SC ruled in this case that the CIR was estopped from claiming the need for a protest. AB Corp cannot be blamed for not filing a protest against the FAN since the language used and the tenor of the Formal Letter of Demand with Assessment Notices indicate that it is the final decision of the CIR on the matter. The CIR is required to indicate, in clear and unequivocal language, whether his action on a disputed assessment constitutes his final determination thereon, in order for the taxpayer concerned to determine when his or her right to appeal to the tax court accrues. Thus, the CIR is now estopped from claiming that he did not intend the demand letter to be a final decision. (see ALLIED BANKING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [FEBRUARY 5, 2010])
Q: Sec 7(a)(1) of RA 1125, as amended by RA 9282, provides that the CTA has exclusive appellate jurisdiction to review by appeal the decisions of the Commissioner of Internal Revenue in cases involving disputed assessments, refunds of internal revenue taxes, fees or other charges, penalties in relation thereto, or other matters arising under the National Internal Revenue Code or other laws administered by the Bureau of Internal Revenue. Does the CTA also have jurisdiction to determine the validity of warrants of distraint/levy and the waiver of the statute of limitations?
Yes. (see PHILIPPINE JOURNALISTS INC. VS. COMMISSIONER OF INTERNAL REVENUE [DECEMBER 16, 2004])
Q: The CIR, pursuant to the NIRC, issued a RMO imposing a 5% lending investor's tax on pawnshops. The RMO identified pawnshops as lending investors due to the nature of their activities.
Leal, a pawnshop owner, filed with the RTC a petition for prohibition that sought to stop the CIR from implementing the RMO. The CIR filed a motion to dismiss, alleging that the RTC had no jurisdiction over the matter. Did the RTC have jurisdiction over the action to nullify the RMO?
No, the CTA had exclusive jurisdiction. The questioned RMO is actually a ruling or opinion of the CIR in implementing the Tax Code with regard to the taxability of pawnshops. The RMO was issued pursuant to the CIR's powers under Section 244 of the NIRC (providing for the power of the Commissioner of Internal Revenue to make rulings or opinions in connection with the implementation of the provisions of internal revenue laws, including rulings on the classification of articles for sales and similar purposes). Thus, the petition should have been filed with the CTA. (see COMMISSIONER OF INTERNAL REVENUE VS. LEAL [NOVEMBER 18, 2002])
Similarly, in the case of ASIA INTERNATIONAL AUCTIONEERS, INC. VS. PARAYNO [DECEMBER 18, 2007], several RRs and RMOs were also considered as rulings/opinions of the CIR on the tax treatment of motor vehicles sold at public auction within the SSEZ. They were deemed issued pursuant to the power of the CIR to interpret provisions of the NIRC. Thus, when an action to annul such RRs/RMOs was filed with the RTC, the SC held that such was improper, as it was the CTA that had exclusive jurisdiction.
Note: See also EGIS PROJECTS S.A. VS. THE SECRETARY OF FINANCE AND COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 8413, JANUARY 29, 2013], where the CTA held that the issue on the constitutionality or validity of RMO Nos. 72-2010 and 1-2000 or their relevant provisions is beyond the jurisdiction of the CTA, but of the regular courts, and SMART COMMUNICATIONS, INC. VS. MUNICIPALITY OF MALVAR, BATANGAS, CTA EB NO. 767 (CTA AC NO. 58), JUNE 26, 2012, where the CTA held that the issue on the validity or constitutionality of an ordinance is not within the jurisdiction of the CTA, but with the regular courts.
However, in NEGROS CONSOLIDATED FARMERS ASSOCIATION MULTI-PURPOSE COOPERATIVE VS. COMMISSIONER OF INTERNAL REVENUE [CTA CASE NO. 7994, FEBRUARY 17, 2012], the CTA held that it has jurisdiction to rule on the validity of a rule or regulation issued by the Bureau of Internal Revenue. This case should not be controlling in light of the SC ruling in British American Tobacco.
Q: A was assessed for income tax deficiency. The taxpayer failed to file a protest, and thus the said assessment became final and unappealable. Thereafter, the taxpayer filed a petition for review with the CTA arguing that the right of the CIR to collect the assessed tax has prescribed. The CIR contends that the CTA has no jurisdiction because when the law says that the CTA has jurisdiction over "other matters," it presupposes that the tax assessment has not become final and unappealable. Is the CIR's contention correct?
No. The fact that an assessment has become final for failure of the taxpayer to file a protest within the time allowed only means that the validity or correctness of the assessment may no longer be questioned on appeal. However, the validity of the assessment itself is a separate and distinct issue from the issue of whether the right of the CIR to collect the validly assessed tax has prescribed. This issue of prescription, being a matter provided for by the NIRC, is well within the jurisdiction of the CTA to decide. (Commissioner of Internal Revenue v. Hambrecht & Quist Philippines, Inc. [November 17, 2010])
--------------------------------------------------------------
2. Criminal Cases
a) Exclusive original jurisdiction
b) Exclusive appellate jurisdiction in criminal cases
--------------------------------------------------------------
Note: This applies to CTA in Divisions.
Note that, with regard to criminal cases, the CTA en banc has exclusive appellate jurisdiction over the decisions or resolutions on MRs or MNTs of the Court in Division in the exercise of its exclusive appellate jurisdiction, or in the exercise of its exclusive original jurisdiction, over criminal offenses arising from violations of the NIRC or TCC and other tax laws.
Read Sections 3(b) and 3(a), Rule 4, RRCTA
Q: What are the criminal cases within the exclusive original jurisdiction of the CTA?
The CTA shall exercise exclusive original jurisdiction over all criminal cases where the principal amount of taxes and fees involved is P1,000,000 or more, exclusive of charges and penalties, arising from violations of the NIRC, TCC and other laws administered by the BOC or the BIR.
Q: Does the CTA have jurisdiction relative to matters involving the constitutionality of regulations issued by the BIR?
No. The doctrine in ASIA INTERNATIONAL AUCTIONEERS V. PARAYNO [DECEMBER 18, 2007], which ruled that the CTA has such jurisdiction, has been reversed in BRITISH AMERICAN TOBACCO V. CAMACHO [AUGUST 20, 2008]. The regular courts have jurisdiction to rule upon the constitutionality of a tax law or a regulation issued by the BIR.
Q: What are the criminal cases within the exclusive original jurisdiction of the regular courts?
The regular courts have original jurisdiction over offenses and felonies where:
a. The principal amount of taxes and fees, exclusive of charges and penalties, claimed is less than P1,000,000; or
b. There is no specified amount claimed
--------------------------------------------------------------
a) Who may appeal, mode of appeal, effect of appeal
--------------------------------------------------------------
Read Section 11, RA 1125 and Section 3, Rule 8, RRCTA
Q: Who may appeal to the CTA?
a. A party adversely affected by a decision, ruling, or the inaction of:
i. CIR
ii. CoC
iii. DOF Secretary
iv. DTI Secretary
v. DA Secretary
vi.
RTC (in the exercise of its original jurisdiction)
b. A party adversely affected by a decision or resolution of a Division on a MR or MNT
c. A party adversely affected by a decision or ruling of the CBAA and the RTC in the exercise of their appellate jurisdiction
Q: What are the criminal cases within the exclusive appellate jurisdiction of the CTA?
a. Appeals from judgments, resolutions or orders of the RTCs in tax cases originally decided by them in their respective territorial jurisdiction; and
b. Petitions for review of the judgments, resolutions or orders of the RTCs in the exercise of their appellate jurisdiction over tax cases originally decided by the MeTCs, MTCs or MCTCs.
Note: (1) The same rules apply with regard to the exclusive jurisdiction of the CTA in division over tax collection cases. (2) In YABES V. FLOJO [JULY 20, 1982], the Supreme Court held that the lower courts can acquire jurisdiction over a claim for collection of deficiency taxes only after the assessment made by the CIR has become final and unappealable, not where there is still a pending CTA case.
--------------------------------------------------------------
B. Judicial Procedures
1. Judicial action for collection of taxes
a) Internal revenue taxes
b) Local taxes
i) Prescriptive period
--------------------------------------------------------------
Note: This has been thoroughly discussed under Tax Remedies and Local Government Taxation. I will not discuss them anymore.
Q: What are the different modes of appeal?
Read Section 4, Rule 8, RRCTA
a. Petition for review under Rule 42 to be acted upon by the CTA in division with respect to a decision, ruling or inaction of:
i. CIR (on disputed assessments or claims for refund of internal revenue taxes erroneously or illegally collected)
ii. CoC
iii. DOF Secretary
iv. DTI Secretary
v. DA Secretary
vi. RTC (in the exercise of their original jurisdiction)
Period to file: 30 days
b.
Petition for review under Rule 43 to be acted upon by the CTA en banc with respect to a decision or resolution of the Court in Division on a MR or MNT.[40]
--------------------------------------------------------------
2. Civil Cases
a) Who may appeal, mode of appeal, effect of appeal
i) Suspension of collection of tax
a) Injunction not available to restrain collection
ii) Taking of evidence
iii) Motion for reconsideration or new trial
b) Appeal to the CTA, en banc
c) Petition for review on certiorari to the Supreme Court
--------------------------------------------------------------
[40] In cases falling under the exclusive appellate jurisdiction of the CTA en banc, a petition for review of a decision or resolution of the Court in Division must be preceded by the filing of a timely MR or MNT with the Division.
Period to file: 15 days. It may be extended by an additional period not exceeding 15 days.
c. Petition for review under Rule 43 to be acted upon by the CTA en banc with respect to the decisions or rulings of:
i. CBAA
ii. RTCs (in the exercise of their appellate jurisdiction)
Period to file: 30 days
Q: CC Corp filed a petition in the RTC to nullify an ordinance enacted by the City of Manila. The RTC dismissed the petition. CC Corp filed a petition for review with the CTA. It was argued that the petition for review was filed out of time. Can the 30-day period to file a petition for review with the CTA of an adverse decision or ruling of the RTC (in the exercise of its original jurisdiction) be extended?
Yes. As held in CITY OF MANILA VS. COCA-COLA BOTTLERS PHILIPPINES, INC. [AUGUST 4, 2009], it is clear from Section 3 of the Revised Rules of the CTA that to appeal an adverse decision or ruling of the RTC to the CTA, the taxpayer must file a Petition for Review with the CTA within 30 days from receipt of said adverse decision or ruling of the RTC. It must be pointed out that the rule is silent as to whether the 30-day period can be extended or not. However, Section 11 of Republic Act No.
9282 does state that the Petition for Review shall be filed with the CTA following the procedure analogous to Rule 42 of the Revised Rules of Civil Procedure.
In MUNICIPALITY OF CAINTA, RIZAL VS. BRILLANTE REALTY CORPORATION [CTA AC NO. 88, JANUARY 02, 2013], the CTA held that the thirty-day period to appeal an adverse decision of the Regional Trial Court to the Court of Tax Appeals may be extended for a period of 15 days, subject to the filing of a motion for extension before the CTA and payment of the appropriate fees.
(A petition for review of a decision or resolution of the Court in Division must be preceded by the filing of a timely MR or MNT with the Division; see Section 1, Rule 8, RRCTA.)
Q: Does a Motion for Reconsideration of the decision of the CIR toll the 30-day period to appeal the denial of the protest of the FAN to the CTA?
No. A motion for reconsideration of the denial of the administrative protest does not toll the 30-day period to appeal to the CTA. (see FISHWEALTH CANNING CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE [JANUARY 21, 2010])
Note: (1) The CTA may issue injunction only in the exercise of its appellate jurisdiction (CIR vs. J.C. Yuseco [G.R. No. L-12518, October 28, 1961]). (2) The prohibition on the issuance of a writ of injunction to enjoin the collection of taxes applies only to national internal revenue taxes, not to local taxes (ANGELES CITY V. ANGELES ELECTRIC CORPORATION [JUNE 29, 2010]). (3) TROs and injunctions issued by courts other than the CTA against the BIR should be annulled and cancelled for lack of jurisdiction (see RMO 042-10 [MAY 4, 2010]).
--------------------------------------------------------------
i) Suspension of collection of tax
a) Injunction not available to restrain collection
--------------------------------------------------------------
Q: Does the perfection of an appeal suspend the collection of taxes? (effect of an appeal)
No appeal taken to the CTA shall suspend the payment, levy, distraint and/or sale of any property of the taxpayer.
Note: Nonetheless, during the pendency of the appeal, the taxpayer may still enter into a compromise settlement of his tax liability for as long as any of the grounds for a compromise (doubtful validity of assessment and financial incapacity) is present. A compromise of a tax liability is possible at any stage of litigation, even during appeal (Pampanga Sugar Co. v. CIR [114 SCRA 496]).
--------------------------------------------------------------
ii) Taking of evidence
--------------------------------------------------------------
Q: When may the CTA receive evidence?
Read Section 2, Rule 12, RRCTA
The Court may receive evidence in the following cases:
a. In all cases falling within the original jurisdiction of the CTA in division pursuant to Section 3, Rule 4 of the RRCTA
b. In appeals in both civil and criminal cases where the Court grants a new trial pursuant to Section 2, Rule 53 and Section 12, Rule 124 of the Rules of Court
Q: Who are authorized to take evidence?
Read Sections 3-4, Rule 12, RRCTA
The following are authorized:
a. Any justice of the Court, when:
i. The determination of a question of fact arises at any stage of the proceedings;
ii. The taking of an account is necessary; or
iii. The determination of an issue of fact requires the examination of a long account
b. Any Court official, for the sole purpose of marking, comparison with the original, and identification by witnesses of the received documentary evidence
Q: May the CTA issue an injunction to enjoin the collection of taxes by the BIR?
Yes. When a decision of the CIR on a tax protest is appealed to the CTA, such appeal does not suspend the payment, levy, distraint and/or sale of any of the taxpayer's property. However, when in the opinion of the CTA the collection may jeopardize the interest of the Government and/or the taxpayer, the CTA may suspend the collection and require the taxpayer either to deposit the amount claimed or to file a surety bond (see Section 11, RA 1125, as amended).
--------------------------------------------------------------
iii) Motion for reconsideration or new trial
--------------------------------------------------------------
Read Sections 1, 4 and 5, Rule 15, RRCTA
Q: Who may file a MR or MNT?
Any aggrieved party may seek a reconsideration or new trial of any decision, resolution or order of the Court.
Note: (1) The period to file the MR or MNT is 15 days. (2) No second MR or MNT is allowed (see Section 7, Rule 15, RRCTA). Note, however, that a Motion for Reconsideration filed on an Amended Decision of the Court in Division is not a second motion for reconsideration, which is proscribed under Section 7, Rule 15 of the CTA Rules, in relation to Section 2, Rule 52 of the 1997 Rules of Civil Procedure, as amended. (MIRANT (NAVOTAS II) CORPORATION VS. COMMISSIONER OF INTERNAL REVENUE, CTA EB CASE NO. 783, JULY 18, 2012)
Q: Is a prior MR required before filing a Petition for Review of a decision of a CTA division?
Yes. Rule 8, Section 1 of the Revised Rules of the Court of Tax Appeals requires that the petition for review of a decision or resolution of the Court in Division be preceded by the filing of a timely motion for reconsideration or new trial with the Division. The word "must" clearly indicates the mandatory, not merely directory, nature of the requirement. The rules are clear. Before the CTA En Banc could take cognizance of the petition for review concerning a case falling under its exclusive appellate jurisdiction, the litigant must sufficiently show that it sought prior reconsideration or moved for a new trial with the concerned CTA division. (see COMMISSIONER OF CUSTOMS VS. MARINA SALES, INC. [NOVEMBER 22, 2010])
Q: Juday was criminally charged in the CTA for filing a fraudulent income tax return. Thereafter, she filed a motion to quash in the CTA 1st Division. The MTQ was denied. The MR was also denied. She then filed a motion for extension of time to file her petition for review in the CTA en banc. Thereafter, she filed her petition for review with the CTA en banc.
The CTA En Banc denied both the petition for extension and the petition for review, on the theory that the denial of the motion to quash was an interlocutory order and, therefore, unappealable. Was the dismissal by the CTA En Banc proper?

Yes. The CTA En Banc did not err in denying petitioner's Motion for Extension of Time to File Petition for Review. Petitioner cannot file a Petition for Review with the CTA En Banc to appeal the Resolution of the CTA First Division denying her Motion to Quash. The Resolution is interlocutory and, thus, unappealable. Even if her Petition for Review were to be treated as a petition for certiorari, it would be dismissible for lack of merit. (see JUDY ANNE L. SANTOS VS. PEOPLE OF THE PHILIPPINES AND BUREAU OF INTERNAL REVENUE [AUGUST 26, 2008])

--------------------------------------------------------------
c) Petition for review on certiorari to the Supreme Court
--------------------------------------------------------------

Read Section 19, RA 1125 and Section 1, Rule 16, RRCTA

Q: Who may file an appeal to the Supreme Court?

Any party adversely affected by a decision or ruling of the Court En Banc may appeal to the Supreme Court.

--------------------------------------------------------------
C. Criminal Cases
a) Institution and prosecution of criminal actions
   i) Institution of civil action in criminal action
b) Appeal and period to appeal
   i) Solicitor General as counsel for the people and government officials sued in their official capacity
c) Petition for review on certiorari to the Supreme Court
--------------------------------------------------------------

--------------------------------------------------------------
a) Institution and prosecution of criminal actions
i) Institution of civil action in criminal action
--------------------------------------------------------------

Read Section 2, Rule 9, RRCTA

Q: How are criminal actions instituted?
All criminal actions before the CTA in Division in the exercise of its original jurisdiction shall be instituted by the filing of an information in the name of the People of the Philippines.

Note: (1) The institution of the criminal action shall interrupt the running of the period of prescription. (2) For violations of the NIRC and other laws enforced by the BIR, the CIR must approve the filing. (3) For violations of the TCC and other laws enforced by the BOC, the CoC must approve the filing.

Q: What is the mode of appeal from the CTA En Banc to the Supreme Court?

The mode of appeal is a petition for review on certiorari under Rule 45.

Q: ABC Corporation, engaged in the retail of medicines and other pharmaceutical drugs, filed a claim for a TCC pertaining to the 20% sales discounts granted to senior citizens. The CTA denied the claim for insufficiency of evidence. Thus, ABC filed its petition for review before the SC. Instead of filing a reply to the comments of respondent, ABC filed a motion to withdraw, praying that the case be dismissed without prejudice. According to ABC, the amount of tax credit being claimed would just be included in its future claims for issuance of a TCC. The CIR argues that the decision of the CTA became final and executory and thus the tax credit could no longer be claimed in the future. Is the contention of the CIR correct?

Yes. By withdrawing the appeal, the taxpayer is deemed to have accepted the decision of the CTA. And since the CTA had already denied the taxpayer's request for the issuance of a TCC for insufficiency of evidence, it may no longer be included in the taxpayer's future claims. (Central Luzon Drug Corporation v. CIR [March 2, 2011])

Read Section 2, Rule 9, RRCTA

Q: Who shall prosecute the criminal action?
The criminal actions shall be conducted and prosecuted under the direction and control of the public prosecutor.

Note: For violations of the NIRC and other laws enforced by the BIR, and violations of the TCC and other laws enforced by the BoC, the prosecution may be conducted by their respective duly deputized legal officers.

Read Section 12, Rule 9, RRCTA

Q: Is the civil action deemed instituted with the criminal action?

Yes. The criminal action and the corresponding civil action for the recovery of civil liability for taxes and penalties shall be deemed jointly instituted in the same proceeding. The filing of the criminal action shall necessarily carry with it the filing of the civil action. No right to reserve the filing of such civil action separately from the criminal action shall be allowed.

--------------------------------------------------------------
c) Petition for review on certiorari to the Supreme Court
--------------------------------------------------------------

Note: Same rule as in Civil Cases.

--------------------------------------------------------------
C. Taxpayer's suit impugning the validity of tax measures or acts of taxing authorities
--------------------------------------------------------------

Q: What is a taxpayer's suit?

A taxpayer's suit is a case where the act complained of directly involves the illegal disbursement of public funds derived from taxation.

--------------------------------------------------------------
b) Appeal and period to appeal
i) Solicitor General as counsel for the people and government officials sued in their official capacity
--------------------------------------------------------------

Read Section 9, Rule 9, RRCTA

Q: What are the modes of appeal with respect to criminal cases?

a. Notice of Appeal pursuant to Sections 3(a) and 6, Rule 122 of the Rules of Court, to the CTA in Division, with respect to an appeal from criminal cases decided by the RTC in the exercise of its original jurisdiction
b.
Petition for Review under Rule 43, to the CTA En Banc, with respect to criminal cases decided by:
   i) the CTA in Division in the exercise of its appellate jurisdiction; or
   ii) the RTC in the exercise of its appellate jurisdiction

Note: In both cases, the period to file is 15 days.

Note (citizen's suit, distinguished): A citizen's suit is a case which is in the nature of a public right, if not the duty, of every citizen to institute in protection of the general public; the plaintiff is but a mere instrument of the public concern.

Read Section 10, Rule 9, RRCTA

Q: Who shall act as representative of the People and the Government?

The Solicitor General shall represent the People and government officials sued in their official capacity in all cases brought to the CTA in the exercise of its appellate jurisdiction.

Q: What are the requisites of a taxpayer's suit (for taxpayers to have locus standi to sue)?

As laid down in ANTI-GRAFT LEAGUE V. SAN JUAN [260 SCRA 251], the requisites of a taxpayer's suit are:
1. Public funds are disbursed by a political subdivision or instrumentality and, in doing so, a law is violated or some irregularity is committed; and
2. The petitioner is directly affected by the alleged ultra vires act.

Hence, in LOZADA V. COMELEC [120 SCRA 337], it was held that the petitioners' action for mandamus to compel the COMELEC to hold a special election is not considered a taxpayer's suit because it does not involve public expenditure. Further, there is no allegation that tax money is spent illegally.

Also, in JOYA V. PCGG [225 SCRA 568], the Supreme Court held that such was not a taxpayer's suit because the case did not involve a misapplication of public funds. In fact, the paintings and antique silverware alleged to have been public properties were acquired from private sources and not with public money.
A constitutional question is ripe for adjudication when the government act being challenged has a direct adverse effect on the individual challenging it. As a general rule, a taxpayer must show that he would be prejudiced or benefited by the suit which questions the validity of the collection of taxes or the manner of expenditure of funds collected from taxation. Personal injury or benefit must be shown for the controversy to be ripe for judicial determination.

NOTE: However, where the public interest requires the resolution of the constitutional issues raised by the taxpayer, it is within the Court's discretion to set aside the requirement of ripeness for judicial determination. (ABAKADA GURO PARTY-LIST V. PURISIMA [G.R. NO. 166715, AUGUST 14, 2008])

Q: Must a taxpayer be a party to a government contract so that it can challenge the validity of a disbursement of public funds?

No. The prevailing doctrine in taxpayers' suits is to allow taxpayers to question contracts entered into by the national government or GOCCs allegedly in contravention of law. A taxpayer need not be a party to the contract to challenge its validity. (ABAYA V. EBDANE [515 SCRA 720])

========== END OF REVIEWER ==========

Thank you for using my reviewer. Again, if you find it useful, please share it with others. Also, if it's not too much to ask, pray that my girlfriend and I do well and pass the bar exams. Ateneo Law Batch 2013 and all the other barristers who will come to possess this reviewer, good luck to us all. AMDG.

For comments, corrections, and suggestions, please email me at [email protected].
https://fr.scribd.com/doc/175885362/REYES-Bar-Reviewer-on-Taxation-II-v-3
Outlook Space Liberation - A Tampermonkey Script
fischgeek

I don't frequent outlook.com, but when I do, I can't tell you how much it annoys me that ad-block (in this case Ghostery) is unable to block these sections. So, I did what anyone would do: I created a Tampermonkey script to help me out. Thought I would share it. Hope it brings others as much joy as it does me.

// ==UserScript==
// @name         Outlook Space Liberation
// @namespace
// @version      0.1
// @description  Gains that space back by hiding annoying ads and upgrade prompts.
// @author       fischgeek
// @match*
// @grant        none
// @require
// ==/UserScript==

var $ = window.jQuery
$(document).ready(function() {
    // Re-run every half second, since Outlook re-renders its UI dynamically
    setInterval(function() {
        // Hide the container around the Outlook logo ad block
        $('[data-icon-name="OutlookLogo"]').parent().parent().parent().hide()
        // Hide the element preceding the menu (the ad column)
        $('[role="menu"]').prev().hide()
        // Hide the "Upgrade to Office 365" prompt
        $('span:contains("Upgrade to Office 365")').parent().parent().parent().hide()
    }, 500)
})
https://practicaldev-herokuapp-com.global.ssl.fastly.net/fischgeek/outlook-space-liberation-a-tampermonkey-script-47jc
IRC log of ws-desc on 2002-05-23 Timestamps are in UTC. 15:00:54 [RRSAgent] RRSAgent has joined #ws-desc 15:01:17 [JacekK] JacekK has joined #ws-desc 15:01:27 [dbooth] zakim, this is desc 15:01:29 [Zakim] ok, dbooth 15:01:32 [sanjiva] sanjiva has joined #WS-Desc 15:01:35 [dbooth] zakim, who is here? 15:01:36 [Zakim] I see A.Ryman, ??P1, K.Liu, ??P5, S.Searingen, Steve.Lind, DavidB, ??P14, Jonathan, GlenD, ??P16 15:01:38 [Zakim] +??P18 15:01:54 [marioj] marioj has joined #ws-desc 15:01:57 [SteveLind] SteveLind has joined #ws-desc 15:02:07 [Zakim] +??P27 15:02:09 [Zakim] +??P25 15:02:15 [JacekK] zakim, ??p27 is probably m 15:02:16 [Zakim] +M?; got it 15:02:16 [sanjiva] zakim, ??P27 is sanjiva 15:02:18 [Zakim] sorry, sanjiva, I do not recognize a party named '??P27' 15:02:18 [Zakim] +D.Gaertner 15:02:30 [JacekK] zakim, ??p27 is me 15:02:31 [Zakim] sorry, JacekK, I do not recognize a party named '??p27' 15:02:32 [sanjiva] zakim, ??p27 is sanjiva 15:02:33 [Zakim] sorry, sanjiva, I do not recognize a party named '??p27' 15:02:37 [Zakim] + +1.408.406.aaaa 15:02:39 [JacekK] Sanjiva, I think you are 25 15:02:43 [sanjiva] ok! 
15:02:50 [Zakim] +Philippe 15:02:57 [JThrasher] JThrasher has joined #ws-desc 15:02:57 [JacekK] zakim, m is JacekK 15:02:59 [Zakim] +JacekK; got it 15:03:00 [sanjiva] zakim, ??P25 is me 15:03:01 [JacekK] zakim, mute JacekK 15:03:02 [Zakim] +Sanjiva; got it 15:03:02 [Zakim] JacekK should now be muted 15:03:06 [Zakim] +J.Thrasher 15:03:09 [Zakim] +Igor.Sedukhin 15:03:17 [Zakim] + +1.512.868.aabb 15:03:39 [Allen] Allen has joined #ws-desc 15:03:57 [JacekK] zakim, unmute JacekK 15:03:58 [Zakim] JacekK should no longer be muted 15:04:14 [JacekK] zakim, mute JacekK 15:04:15 [Zakim] JacekK should now be muted 15:04:22 [JacekK] zakim, mute JacekK 15:04:23 [Zakim] JacekK should now be muted 15:04:28 [Zakim] +Dale.Moberg 15:04:37 [igors] igors has joined #ws-desc 15:04:43 [Zakim] + +1.716.383.aacc 15:05:57 [DonWright] DonWright has joined #ws-desc 15:06:19 [Philippe] Dale sent regrets? so Dale.Moberg is not Dale Moberg... 15:06:22 [Zakim] + +1.530.219.aadd 15:06:23 [Zakim] +??P49 15:06:29 [Zakim] +D.Wright 15:06:41 [Don] Don has joined #ws-desc 15:08:26 [Zakim] +Don 15:08:53 [youenn] youenn has joined #ws-desc 15:09:28 [Zakim] - +1.512.868.aabb 15:10:41 [Zakim] +A.Sakala 15:11:06 [bill] bill has joined #ws-desc 15:11:06 [Don] Minutes approved. 15:11:19 [adisakala] adisakala has joined #ws-desc 15:11:23 [jeffsch] dbooth fix linking to registration for face-to-face [done] 15:11:29 [JacekK] zakim, unmute JacekK 15:11:30 [Zakim] JacekK should no longer be muted 15:11:35 [Don] Action Items: DONE [7] 2002.05.16 DBooth to find out how to get registration list. 15:11:41 [jeffsch] jeffsch has joined #ws-desc 15:12:03 [Don] DONE [8] 2002.05.16 Roberto to post revised extensibility proposal, annotations out, revised extension. 
15:12:03 [Zakim] - +1.530.219.aadd 15:12:13 [JacekK] zakim, mute JacekK 15:12:14 [Zakim] JacekK should now be muted 15:12:23 [dbooth] Meeting: WS Description Teleconference 15:12:37 [dbooth] Topic: Usage Task Force 15:12:58 [Zakim] + +1.530.219.aaee 15:13:15 [Zakim] +??P56 15:13:17 [Zakim] +K.Ballinger 15:13:35 [bill] bill has joined #ws-desc 15:13:43 [dbooth] Sanjiva: The document is out. Two weeks have gone by. The last telecon was organizational. I hope something will happen today. 15:14:17 [Zakim] + +33.2.99.87.aaff 15:14:28 [dbooth] Jonathan: Waqar also sent another draft of usage scenarios. 15:14:48 [dbooth] ... Should we publish it now? Discuss it more at F2F? Hold off? 15:15:19 [dbooth] Waqar: I saw the use case doc from the Arch group and it contained a lot of the use cases that we have, and i added them. 15:15:30 [dbooth] ... How will that synchronization take place? 15:15:49 [dbooth] JM: That's up to us. 15:16:16 [dbooth] JeffM: I think we should publish it with a disclaimer saying that it is the current state and we're working with the Arch group on it. 15:16:49 [dbooth] JM: Any objections to publishing the usage scenario doc with a note that JeffM describes? 15:17:15 [dbooth] Arthur: Has the revision included the updates? 15:17:47 [dbooth] Waqar: I made a number of updates, but it didn't incorporate all. Should we discuss the ones that were omitted? 15:18:13 [dbooth] ... I incorporated the editorial comments, but not all of the more substantive things, because some were not clear. 15:18:43 [dbooth] ... I've asked for clarification, but haven't yet received much feedback. So I didn't know what to do about it. 15:18:44 [Zakim] - +1.530.219.aaee 15:18:45 [Zakim] +??P61 15:18:56 [dbooth] ... I was hoping that it would be reviewed and I would get more feedback. 15:19:01 [Zakim] + +1.512.404.aagg - is perhaps MikeBallantyne? 15:19:37 [dbooth] Arthur: There was nothing that i strongly objected to. My comments were mostly suggestions and clarifications. 
I have no objections to publishing. 15:20:00 [dbooth] JM: With no other objections noted, let's go ahead and publish it. 15:21:01 [dbooth] ... Eventually the usage scenarios will migrate to the Arch group, and this group will only have specific use cases. 15:21:08 [dbooth] ... That transition may be a little awkward. 15:21:47 [dbooth] Sandeep: The other good thing about the other group taking it up is that the task force is pretty dedicated, so I think they'll do a better job than we were able to do. 15:22:04 [dbooth] s/the/their/ 15:23:00 [dbooth] Philippe: I'd like to sync the status with the one we included in the requirements doc, and I'm not sure if I should modify that without the agreement of the WG. 15:23:16 [dbooth] JM: Anyone want to review philippe's edits before we publish it? 15:23:29 [dbooth] (None noted) 15:23:35 [dbooth] JM: We'll go ahead then. 15:23:57 [dbooth] ACTION: Philippe to update status section of the requirement's doc 15:24:06 [Zakim] -Dale.Moberg 15:24:21 [Philippe] ah, Dale was Sandeep. 15:24:23 [dbooth] ACTION: Philippe, Jonathan, Waqar to work on getting the usage scenarios published 15:24:39 [Philippe] action to myself: fix the bridge data 15:26:07 [dbooth] JM: THere was an issue from Joyce about overloaded methods. 15:26:14 [dbooth] Keith: I think we should discuss it. 15:26:36 [dbooth] ... I don't like the term "overloaded methods" because WSDL is not exposing methods directly. 15:26:49 [dbooth] ... They are "operations". 15:27:05 [dbooth] JeffM: I don't think it matters if we call them "methods" or "operations". 15:28:10 [dbooth] DBooth: I suggest we call them "operations". 15:28:50 [dbooth] JeffM: Now let's talk about the semantics. 15:29:45 [dbooth] JeffM: The issue is that WSDL currently allows overloaded operations; the question is whether we should disallow them. 15:30:18 [dbooth] Joyce: I propose that we should disallow overloading. 15:30:45 [dbooth] DBooth: I support that also. 
I think it is clearer to have different names for things. 15:31:18 [dbooth] Arthur: It also may have server implications, as you must decide which method to invoke. 15:31:46 [Zakim] -??P49 15:31:56 [dbooth] __: I agree. 15:32:17 [dbooth] JeffM: It also saves us from deciding when two parameter types are different, then you can overload. 15:32:17 [adisakala] Adi: I agree that we shouldnt allow overloaded operations 15:32:25 [dbooth] s/__/Adi 15:32:33 [Zakim] +??P44 15:33:15 [dbooth] JeffM: Is anyone in favor of overloaded operations? 15:33:25 [dbooth] (Silence) 15:33:47 [dbooth] JM: THe sentiment of this group is to disallow overloaded operations. 15:34:06 [dbooth] __: Can someone summarize what is the problem of overloading? 15:34:27 [Marsh] s/__/Kevin 15:34:33 [JacekK] Jonathan, sorry, I gotta drop out now... 15:34:37 [Zakim] -JacekK 15:34:42 [Marsh] OK, thanks 15:34:50 [dbooth] JeffM: One problem is that you must decide whether parameter types are different, and therefore the two operations can be overloaded. 15:35:16 [dbooth] ... (Couldn't keep up with typing the other reasons) 15:36:38 [dbooth] ... Another problem is that it makes the mapping to language methods more difficult. YOu might need to do somethign like name mangling (from C++). 15:38:06 [dbooth] Kevin: Sounds fine with me to take the feature out, but since it was in WSDL 1.1, we should document the reasons for dropping it. 15:38:23 [dbooth] ACTION: JeffM to write up rationale for dropping operation overloading. 15:39:26 [dbooth] JM: Another issue is whether one-way operations can return "false" or not. 15:39:51 [dbooth] JeffS: It seems that people were asking maybe for a new kind of message exchange pattern. 15:40:09 [dbooth] JM: WSDL 1.1 is pretty clear that a one-way doesn't have a fault. 15:40:26 [dbooth] s/false/fault/ 15:41:33 [dbooth] JeffM: Where would the fault go? 
15:42:26 [dbooth] JeffS: We're talking about adding a different MEP, not changing one-way, but adding a new MEP that has an input, no output, and a fault. 15:43:49 [dbooth] DBooth: It also raises the question of whether faults should be treated separately from outputs. 15:44:16 [adisakala] Adi: Then in that case dont we need to consider faults on Notification operation which has only output message 15:44:42 [dbooth] Sanjiva: Can you clarify what kind of fault you mean? 15:45:18 [dbooth] ... i.e, app level or middleware level faults? 15:45:43 [dbooth] ... But if you don't get a fault, then it means your operation succeeded. 15:45:56 [dbooth] ... So it isn't one-way, it's two-way with an empty output. 15:46:13 [dbooth] Prasad: It's an app level thing. 15:46:30 [dbooth] ... It depends on your framework. 15:46:40 [dbooth] Keith: How would this be handled over HTTP? 15:46:55 [dbooth] Prasad: I guess it woudl be a Soap fault. 15:47:35 [dbooth] Keith: Because typically you get a 202 now, i.e., "accepted", which makes sense. 15:47:49 [dbooth] ... because the incoming msg will be processed. 15:47:56 [jeffsch] re: dbooth comment, will our abstract model clarify the relationship between the response and faults? 15:48:10 [dbooth] Prasad: The problem with the current approach is that you get either a 200 ok or you get a fault. 15:48:54 [dbooth] Keith: But that means I need to do the business processing to decide whether to return a 202 or a 500. 15:49:54 [dbooth] Prasad: This is a common business need. You want to know if something went wrong, but you don't need to hear back anything if all is ok. 15:50:21 [dbooth] Adi: Why do we need another message pattern? 15:51:03 [dbooth] DBooth: Would there be a problem if the client gets back a message saying that all is ok, and ignores it? 15:51:27 [dbooth] Prasad: But I don't want to get back too many "ok" responses that I must ignore. 
15:52:22 [adisakala] Jeffery 15:52:27 [dbooth] JeffS: It sounds like you want a "Nack" model (negative ack). 15:52:35 [jeffsch] (thanks david) 15:53:36 [dbooth] Adi: If we consider this case, then we also need to consider it for the Notification case. 15:54:12 [dbooth] JeffM: Can't one-way's return faults? 15:54:24 [dbooth] Prasad: No, that's why we're talking about it. 15:55:50 [dbooth] JM: Suppose I have a subscripion svc, and a new satellite image is pushed to me every 3 hours. And if the data is not available, should I get a fault instead? 15:56:29 [Zakim] -K.Ballinger 15:56:46 [dbooth] DBooth: It seems like the question is whether faults are needed at all for app level issues. 15:57:38 [dbooth] JM: It sounds like we have a better understanding of the issue. Let's continue the discussion on email. 15:58:56 [dbooth] ... I suggest that we close the current issue, and have Prasad open a new issue re-titled "Negative Acknowledgement" 15:59:08 [dbooth] Prasad: Let's just rename it. 16:00:06 [dbooth] ACTION: Prasad, JeffS close the current issue, and have Prasad open a new issue re-titled "Negative Acknowledgement" 16:00:36 [Zakim] -S.Searingen 16:00:45 [dbooth] GlenD: Are MEP's hard coded in the spec, or can they be extended through extensibility. 16:00:54 [dbooth] JeffS: Hard coded. 16:01:47 [dbooth] DBooth: If they are extensible then you're getting into the topic of workflow. 16:02:54 [dbooth] ACTION: GlenD to post email adding an issue: Are MEP's hard coded in the spec, or can they be extended through extensibility? 16:03:25 [Philippe] Glenn: SOAP include a MEP and I'd like to be able express it in WSDL 16:03:39 [Philippe] Sanjiva: use the extensibility mechanism then 16:03:44 [dbooth] Kevin: I notice we have two issue lists. Sanjiva's and Jean-Jaque's. Are they synced? 16:04:18 [dbooth] JM: No. 16:06:51 [dbooth] Philippe: Is MEP an architecture issue? Soap defines one, and we define one also. 16:07:08 [dbooth] ... So should it go to the ARch group? 
16:07:31 [dbooth] JM: We need to clarify the issue first. 16:07:47 [dbooth] Topic: Extensibility Proposal 16:08:04 [Marsh] 16:08:05 [roberto] 16:10:34 [dbooth] Roberto: My proposal was an adaptation of a previous one. It is separated from the "annotations" proposal. 16:10:59 [dbooth] ... For annotations there should be a very simple processing model. The hard thing to design is language extensions. 16:11:25 [dbooth] Igor: I was trying to look at two of the proposals. The difference seems to be the ability to include somethign from an arbitrary namespace. 16:11:42 [dbooth] Roberto: There are a couple of differences. 16:12:01 [dbooth] .. I got rid of the architected extensions. 16:12:16 [dbooth] JM: SO you'd have to have a WSDL required or WSDL extension element. 16:12:23 [dbooth] Roberto: Not necessarily. 16:13:38 [dbooth] JM: So if I start reading a WSDL doc I can tell when i reach the extension mark whether this doc will have some third-party binding info. 16:13:55 [dbooth] Roberto: Right, for anythign optional, you can't tell it beforehand. 16:14:13 [dbooth] JM: And the other difference was the mechanism for indicating what was required. 16:14:56 [dbooth] ... We need to define the inheritance of what's "required". 16:15:36 [dbooth] JM: Two key difference: (1) I can't tell from looking at the first part of the doc whether there is an extension that may be required. 16:17:13 [dbooth] Roberto: The difference is basically at the top element. 16:18:15 [dbooth] JM: Any other opinions about whether we need both WSDL required and a top level element? 16:19:19 [dbooth] JeffS: It feels like the extra WSDL extension that you're defining is really about the architected extension. 16:19:49 [dbooth] ... I lost the motivation for the more sophisticated mechanism. 16:20:02 [dbooth] ... I clearly understand allowing any/lax. 16:20:12 [dbooth] ... But i'm missing the motivation for the WSDL:Extension. 
16:20:47 [dbooth] Roberto: A language can require on a processor the use of rules that are global. The processor may need to know beforehand. 16:20:59 [dbooth] ... To avoid backtracking during processing. 16:21:11 [dbooth] JeffS: To allow a one-pass processing model? 16:21:42 [dbooth] Roberto: If you're defining an extension, then you don't want to have to label every occurrence as "required". 16:22:32 [dbooth] JM: It seems like the extension element is more convenient. But do you still need the "required" attribute? 16:22:42 [dbooth] Sanjiva: It's nice to have the info up front. 16:23:19 [jeffsch] (Thanks Roberto) 16:23:32 [dbooth] Roberto: If we get rid of the "reqiured" attribute, then suppose I want an optional extension that has a global attribute. How would I do it? 16:24:08 [adisakala] Adi: Sorry for informing late. I got to leave alittle early today. 16:24:14 [adisakala] adisakala has left #ws-desc 16:24:25 [Zakim] -A.Sakala 16:25:58 [dbooth] JM: It sounds like we should merge my proposal and Roberto's by removing the clause 5.d. 16:26:29 [dbooth] ... ANd in my schema we would call out the Soap binding namespace. 16:26:37 [dbooth] ... And we keep the required attribute also. 16:26:37 [Zakim] -??P56 16:26:48 [Zakim] -??P61 16:27:11 [dbooth] Roberto: And in section 4 of mine, it seems that we are talking about changing that to turn off required. 16:27:52 [dbooth] ... There's another point also, in section 5.c. (incorrectly called 5.b.). 16:28:40 [dbooth] ... I think if we were to write a similar clause for annotations we would write "a processor MAY...". 16:29:03 [dbooth] ... So I can be optimistic that the processor will actually do somethign with it. 16:29:32 [dbooth] ... If we have annotations along the line of the previous version, then the processor would not HAVE to do anything with it. 16:29:53 [dbooth] JM: ANd specifically processing of an annotation should not result in any different behavior for the rest of the document. 
16:30:12 [dbooth] Roberto: But for optional extensions you SHOULD do something with it. 16:30:39 [dbooth] JM: SHould that go into the spec? 16:30:44 [dbooth] Roberto: Yes. 16:30:46 [Zakim] -MikeBallantyne? 16:31:46 [dbooth] JeffS: I think we can boil it down to saying whether you MUST or MUST NOT do something with it. 16:32:02 [dbooth] Roberto: Ok. 16:33:26 [dbooth] ACTION: Roberto to take another round at updating his proposal 16:33:35 [Zakim] -J.Thrasher 16:33:37 [Zakim] -GlenD 16:33:37 [Zakim] -K.Liu 16:33:37 [Zakim] -Don 16:33:38 [Zakim] -??P14 16:33:39 [Zakim] -??P18 16:33:41 [Zakim] - +1.408.406.aaaa 16:33:42 [Zakim] -??P16 16:33:43 [Zakim] - +33.2.99.87.aaff 16:33:45 [Zakim] -Steve.Lind 16:33:46 [Zakim] -A.Ryman 16:33:48 [Zakim] -Sanjiva 16:33:50 [Don] Don has left #ws-desc 16:33:50 [Zakim] -Philippe 16:33:53 [Zakim] -D.Gaertner 16:33:56 [dbooth] [Meeting adjourned] 16:33:56 [Zakim] -Igor.Sedukhin 16:33:57 [Zakim] -DavidB 16:34:00 [Zakim] -??P44 16:34:01 [Zakim] -Jonathan 16:34:03 [Zakim] -??P5 16:34:06 [Zakim] -D.Wright 16:34:07 [Zakim] - +1.716.383.aacc 16:35:32 [jeffsch] I think we can boil it down to saying whether you MUST or MUST NOT _recognize_ the extension. 16:35:56 [jeffsch] ... what to do with the extension is defined by the extension's spec. 16:36:02 [jeffsch] (bye) 17:29:37 [dbooth] rrsagent, where is log? 17:29:37 [dbooth] I'm logging. Sorry, nothing found for 'where is log' 17:31:58 [dbooth] rrsagent, where am i? 17:31:58 [RRSAgent] See 18:34:57 [Marsh] Marsh has left #ws-desc 18:35:35 [Zakim] -??P1 19:01:55 [Zakim] WS_DescWG()11:00AM has ended 19:29:52 [dbooth] dbooth has joined #ws-desc 19:32:37 [dbooth] zakim, bye 19:32:38 [Zakim] Zakim has left #ws-desc 19:32:42 [dbooth] rrsagent, bye
http://www.w3.org/2002/05/23-ws-desc-irc
Writing software means that you need to have a database sitting at the back end, and most of the development time goes into writing queries to retrieve and manipulate data. Whenever someone talks about data, we tend to think only of the information contained in a relational database or in an XML document.

The kind of data access we had prior to the release of .NET 3.5 was limited to accessing data residing in traditional data sources such as the two just mentioned. But with the release of .NET 3.5 and higher versions such as .NET 4.0 and .NET 4.5, which have Language INtegrated Query (LINQ) incorporated into them, it is now possible to deal with data residing beyond the traditional homes of information storage. For instance, you can query a generic List type containing a few hundred integer values and write a LINQ expression to retrieve the subset that meets your criterion, for example, either even or odd.

The LINQ feature, as you may have gathered, was one of the major differences between .NET 3.0 and .NET 3.5. LINQ is a set of features in Visual Studio 2011 that extends powerful query capabilities into the language syntax of C# and VB .NET. LINQ introduces a standard, unified, easy-to-learn approach for querying and modifying data, and can be extended to support potentially any type of data store. Visual Studio 2012 also supports LINQ provider assemblies that enable the use of LINQ queries with various types of data sources, including relational data, XML, and in-memory data structures.

In this article, I will cover the following:

- Introduction to LINQ
- Architecture of LINQ
- Using LINQ to Objects
- Using LINQ to SQL
- Using LINQ to DataSets

Introduction to LINQ

LINQ is an innovation that Microsoft made with the release of Visual Studio 2008 and .NET Framework version 3.5 that promises to revolutionize the way developers had been working with data before the release of .NET 3.5. Microsoft continued the LINQ feature with the recent releases of .NET 4.0/4.5 and Visual Studio 2012.
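The List-of-integers scenario mentioned above can be sketched as a short console example. This is a minimal illustration only; the class and variable names are mine, not the article's:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class EvenOddDemo
{
    static void Main()
    {
        // In-memory data source: any IEnumerable<T> can be queried with LINQ
        List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

        // LINQ to Objects query: retrieve the subset meeting the criterion (even values)
        IEnumerable<int> evens = from n in numbers
                                 where n % 2 == 0
                                 select n;

        Console.WriteLine(string.Join(", ", evens)); // 2, 4, 6, 8, 10
    }
}
```

Swapping the condition to `n % 2 != 0` yields the odd subset instead; the query itself never changes shape, which is the point of LINQ's unified syntax.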
As I mentioned previously, LINQ introduces a standard and unified concept of querying various types of data sources falling in the range of relational databases, XML documents, and even in-memory data structures. LINQ supports all these types of data stores using LINQ query expressions, which are first-class language constructs in C#. LINQ offers the following advantages:

The LINQ assemblies provide all the functionality for accessing various types of data stores under one umbrella. The core LINQ assemblies are listed in Table 1-1.

Table 1-1. Core LINQ Assemblies

Architecture of LINQ

LINQ consists of the following three major components:

- LINQ to Objects
- LINQ to ADO.NET (LINQ-enabled ADO.NET)
- LINQ to XML

Figure 1-1 depicts the LINQ architecture, which clearly shows the various components of LINQ and their related data stores.

Figure 1-1. LINQ architecture

LINQ to Objects deals with in-memory data. Any class that implements the IEnumerable<T> interface (in the System.Collections.Generic namespace) can be queried with Standard Query Operators (SQOs).

LINQ to ADO.NET (also known as LINQ-enabled ADO.NET) deals with data from external sources, basically anything ADO.NET can connect to. Any class that implements IEnumerable<T> or IQueryable<T> (in the System.Linq namespace) can be queried with SQOs. The LINQ to ADO.NET functionality can be used through the System.Data.Linq namespace.

LINQ to XML is a comprehensive API for in-memory XML programming. Like the rest of LINQ, it includes SQOs, and it can also be used in concert with LINQ to ADO.NET, but its primary purpose is to unify and simplify the kinds of things that disparate XML tools, such as XQuery, XPath, and XSLT, are typically used to do. The LINQ to XML functionality can be used through the System.Xml.Linq namespace.

In this article, we'll work with the three techniques LINQ to Objects, LINQ to SQL, and LINQ to DataSets.

Using LINQ to Objects

The term LINQ to Objects refers to the use of LINQ queries to access in-memory data structures. You can query any type that supports IEnumerable<T>.
This means that you can use LINQ queries not only with user-defined lists, arrays, dictionaries, and so on, but also in conjunction with .NET Framework APIs that return collections. For example, you can use the System.Reflection classes to return information about types stored in a specified assembly, and then filter those results using LINQ. Or you can import text files into enumerable data structures and compare the contents to other files, extract lines or parts of lines, group matching lines from several files into a new collection, and so on. LINQ queries offer three main advantages over traditional foreach loops. In general, the more complex the operation you want to perform on the data, the greater the benefit you will realize by using LINQ instead of traditional iteration techniques.

Try It Out: Coding a Simple LINQ to Objects Query

In this exercise, you'll create a Windows Forms application with one TextBox. The application will retrieve and display some names from an array of strings in the TextBox control using LINQ to Objects. Your LinqToObjects form in Design view should look as shown in Figure 1-2.

Figure 1-2. Design view of the LinqToObjects form

Double-click the empty surface of the LinqToObjects form; this opens the code editor window, showing the LinqToObjects_Load event handler. Place the code shown in Listing 1-1 in the LinqToObjects_Load event.

Listing 1-1. LinqToObjects.cs

// Define string array
string[] names = { "Life is Beautiful", "Arshika Agarwal", "Seven Pounds",
                   "Rupali Agarwal", "Pearl Solutions", "Vamika Agarwal",
                   "Vidya Vrat Agarwal", "C-Sharp Corner Mumbai Chapter" };

// LINQ query
IEnumerable<string> namesOfPeople = from name in names
                                    where name.Length <= 16
                                    select name;

foreach (var name in namesOfPeople)
{
    txtDisplay.AppendText(name + "\n");
}

Run the program by pressing Ctrl+F5, and you should see the results shown in Figure 1-3.

Figure 1-3.
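For comparison, the query in Listing 1-1 maps naturally onto a Python list comprehension. This is a hypothetical analogue for readers coming from Python, not part of the article's C# project:

```python
# In-memory data source, mirroring the article's string array
names = [
    "Life is Beautiful", "Arshika Agarwal", "Seven Pounds",
    "Rupali Agarwal", "Pearl Solutions", "Vamika Agarwal",
    "Vidya Vrat Agarwal", "C-Sharp Corner Mumbai Chapter",
]

# Equivalent of: from name in names where name.Length <= 16 select name
names_of_people = [name for name in names if len(name) <= 16]

for name in names_of_people:
    print(name)
```

As with the LINQ version, only the strings of 16 characters or fewer survive the filter.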
Retrieving names from a string array using LINQ to Objects

Using LINQ to SQL

LINQ to SQL is a facility for managing and accessing relational data as objects. It's logically similar to ADO.NET in some ways, but it views data from a more abstract perspective that simplifies many operations. It connects to a database, converts LINQ constructs into SQL, submits the SQL, transforms results into objects, and even tracks changes and automatically requests database updates.

A simple LINQ query requires three things: entity classes, a data context, and a LINQ query.

Try It Out: Coding a Simple LINQ to SQL Query

In this exercise, you'll use LINQ to SQL to retrieve all contact details from the AdventureWorks Person.Contact table. Navigate to Solution Explorer, right-click your LINQ project, and select "Add Windows Form". In the "Add New Item" dialog, make sure "Windows Form" is selected, rename "Form1.cs" to "LinqToSql", and click "Add". Drag a TextBox control onto the form and position it toward the center of the form. Select this TextBox, navigate to the Properties window, and set the required properties. Your LinqToSql form in Design view should look as shown in Figure 1-4.

Figure 1-4. Design view of the LinqToSql form

Before we begin coding the functionality, we must add the required assembly references. LinqToSql requires an assembly reference to System.Data.Linq to be added to the LINQ project. To do so, in Solution Explorer, right-click "References" and choose "Add Reference". In the Reference Manager dialog, scroll down the assembly list, select System.Data.Linq, check the checkbox in front of it as shown in Figure 1-5, and click "OK".

Figure 1-5. Adding LINQ references

Open the newly added form "LinqToSql.cs" in code view. Add the code shown in Listing 1-2 to LinqToSql.cs.

Listing 1-2.
LinqToSql.cs

// Must add these two namespaces for LinqToSql
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Person.Person")]
public class Contact
{
    [Column] public string Title;
    [Column] public string FirstName;
    [Column] public string LastName;
}

private void LinqToSql_Load(object sender, EventArgs e)
{
    // connection string
    string connString = @"server = .\sql2012;integrated security = true;database = AdventureWorks";

    try
    {
        // Create data context
        DataContext db = new DataContext(connString);

        // Create typed table
        Table<Contact> contacts = db.GetTable<Contact>();

        // Query database
        var contactDetails = from c in contacts
                             where c.Title == "Mr."
                             orderby c.FirstName
                             select c;

        // Display details (the TextBox name below is as set in the Properties window)
        foreach (var contact in contactDetails)
        {
            txtContacts.AppendText(contact.Title + " " + contact.FirstName + " " + contact.LastName + "\n");
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

Now, to set the LinqToSql form as the startup form, open Program.cs in the code editor and change:

Application.Run(new LinqToObjects());

to:

Application.Run(new LinqToSql());

Build the solution and then run the program by pressing Ctrl+F5, and you should see the results shown in Figure 1-6.

Figure 1-6. Retrieving contact details with LINQ to SQL

How It Works

You define an entity class, Contact, as in the following:

[Table(Name = "Person.Person")]
public class Contact
{
    [Column] public string Title;
    [Column] public string FirstName;
    [Column] public string LastName;
}

Entity classes provide objects in which LINQ stores data from data sources. They're like any other C# class, but LINQ defines attributes that tell it how to use the class.

The [Table] attribute marks the class as an entity class and has an optional Name property that can be used to provide the name of the table, which defaults to the class name. That's why the Name property is specified explicitly here: the class is named Contact, while the table is Person.Person.
If you were targeting the older Person.Contact table instead, the attribute would become [Table(Name = "Person.Contact")] public class Contact, and the typed table definition, Table<Contact> contacts = db.GetTable<Contact>();, would remain consistent as written.

The [Column] attribute marks a field as one that will hold data from a table. You can declare fields in an entity class that don't map to table columns, and LINQ will simply ignore them, but fields decorated with the [Column] attribute must be of types compatible with the table columns they map to. (Note that since SQL Server table and column names aren't case sensitive, the default names do not need to be identical in case to the names used in the database.)

You create a data context as in the following:

// Create data context
DataContext db = new DataContext(connString);

A data context does what an ADO.NET connection does, but it also does things that a data provider handles. It not only manages the connection to a data source, but also translates LINQ requests (expressed in SQOs) into SQL, passes the SQL to the database server, and creates objects from the result set.

You create a typed table as in the following:

// Create typed table
Table<Contact> contacts = db.GetTable<Contact>();

A typed table is a collection (of type System.Data.Linq.Table<T>) whose elements are of a specific type. The GetTable method of the DataContext class tells the data context to access the results and indicates where to put them. Here, you get all the rows (but only three columns) from the table, and the data context creates an object for each row in the contacts typed table.

You declare an implicitly typed local variable, contactDetails, with the var keyword:

// Query database
var contactDetails =

An implicitly typed local variable is just what its name implies.
When C# sees the var keyword, it infers the type of the local variable from the type of the expression in the initializer to the right of the = sign.

You initialize the local variable with a query expression, as in the following:

from c in contacts
where c.Title == "Mr."
orderby c.FirstName
select c;

A query expression is composed of a from clause and a query body. You use a where condition in the query body here. The from clause declares an iteration variable, c, to be used to iterate over the result of the expression contacts, that is, over the typed table you created and loaded earlier. In each iteration it selects the rows that meet the where clause; in other words, Title must be "Mr.".

Finally, you loop through the contactDetails collection and display each contact. Except for the var keyword, which was introduced in C# 2008 and continues to exist in later versions such as C# 2012, this will still feel familiar. Once you get the hang of it, it's an appealing alternative for coding queries. You basically code a query expression instead of SQL to populate a collection that you can iterate through with a foreach statement. However, you provide a connection string but don't explicitly open or close a connection. Further, no command, data reader, or indexer is required. You don't even need the System.Data or System.Data.SqlClient namespaces to access SQL Server.

Pretty cool, isn't it?

Using LINQ to XML

LINQ to XML provides an in-memory XML programming API that integrates XML querying capabilities into C# 2012, taking advantage of the LINQ framework and adding query extensions specific to XML. LINQ to XML provides the query and transformation power of XQuery and XPath integrated into .NET.

From another perspective, you can also think of LINQ to XML as a full-featured XML API comparable to a modernized, redesigned System.Xml API, plus a few key features from XPath and XSLT.
LINQ to XML provides facilities to edit XML documents and element trees in memory, as well as streaming facilities. A sample XML document looks as in Figure 1-7.

Figure 1-7. XML document

Try It Out: Coding a Simple LINQ to XML Query

In this exercise, you'll use LINQ to XML to retrieve element values from an XML document. Navigate to Solution Explorer, right-click the LINQ project, and select "Add Windows Form". In the "Add New Item" dialog, make sure Windows Form is selected, rename "Form1.cs" to "LinqToXml", and click "Add". Drag a TextBox control onto the form and position it toward the center of the form. Select this TextBox and navigate to the Properties window. Your LinqToXml form in Design view should look as shown in Figure 1-8.

Figure 1-8. Design view of the LinqToXml form

Open the newly added form "LinqToXml.cs" in code view. Add the code shown in Listing 1-3 to LinqToXml.cs.

Listing 1-3. LinqToXml.cs

using System.Xml.Linq;

// Load productstable.xml into memory
XElement doc = XElement.Load(@"C:\VidyaVrat\C#\Linq\productstable.xml");

// Query the XML document
var products = from prodname in doc.Descendants("products")
               select prodname.Value;

// Display details
foreach (var prodname in products)
{
    txtLinqToXml.AppendText("Product's Detail= ");
    txtLinqToXml.AppendText(prodname);
    txtLinqToXml.AppendText("\n");
}

Now, to set the LinqToXml form as the startup form, open Program.cs in the code editor and change:

Application.Run(new LinqToSql());

to:

Application.Run(new LinqToXml());

Build the solution, and then run the program by pressing Ctrl+F5, and you should see the results shown in Figure 1-9.

Figure 1-9. Retrieving product details with LINQ to XML
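Python's standard library offers a close analogue of the Descendants query in Listing 1-3. The XML layout below is assumed for illustration, since the contents of productstable.xml are not shown in the article:

```python
import xml.etree.ElementTree as ET

# Stand-in for productstable.xml; the real file's structure is not shown,
# so this layout (one <products> element per product) is an assumption.
xml_doc = """
<productstable>
  <products>Chai</products>
  <products>Chang</products>
  <products>Aniseed Syrup</products>
</productstable>
"""

root = ET.fromstring(xml_doc)

# Equivalent of: from prodname in doc.Descendants("products") select prodname.Value
products = [elem.text for elem in root.iter("products")]

for prodname in products:
    print("Product's Detail=", prodname)
```

Like XElement.Descendants, `root.iter("products")` walks the whole tree and yields every matching element, regardless of nesting depth.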
https://www.c-sharpcorner.com/UploadFile/84c85b/using-linq-with-C-Sharp-2012/
How to Calculate Annualized Portfolio Return (wikiHow, Expert Reviewed)

Two Parts: Laying the Groundwork and Calculating Your Annualized Return, plus Community Q&A

The calculation of your annualized portfolio return answers one question: what is the compound rate of return earned on the portfolio for the period of investment? While the various formulas used to calculate your annualized return may seem intimidating, it is actually quite easy to tabulate once you understand a few important concepts.

Part 1: Laying the Groundwork

1. Know the key terms. In discussing annualized portfolio returns, there are several key terms that will come up repeatedly and are important for you to understand. These are as follows:
- Annual Return: Total return earned on an investment over a period of one calendar year, including dividends, interest, and capital gains. [1]
- Annualized Return: Yearly rate of return which is inferred by extrapolating returns measured over periods either shorter or longer than one calendar year. [2]
- Average Return: Typical return earned per time period, calculated by taking the total return realized over a longer period and spreading it evenly over the (shorter) periods. [3]
- Compounding Return: A return that includes the results of re-investing interest, dividends, and capital gains. [4]
- Period: A specific length of time chosen to measure and calculate return, such as daily, monthly, quarterly, or annually.
- Periodic Return: The total return of an investment measured over a specific length of time. [5]

2. Learn how compounding returns work. Compounding returns are growth on the gains that you have already earned. The longer your money compounds, the faster it will grow, and the greater your annualized returns will be. (Think of a snowball rolling downhill, getting bigger faster as it rolls.) [6]
- Let's say you invest $100 and earn 100% on it your first year, leaving you with $200 at the end of year one.
If you gain just 10% in the second year, you will have earned $20 on your $200 by the end of year two.
- However, if we say you earned just 50% during the first year, you would have $150 at the beginning of the second year. That same 10% gain in year two would earn $15 rather than $20. This is a full 33% less than the $20 you would have made in our first example.
- To further illustrate, let's say you lost 50% in year one, leaving you with just $50. You would then need to earn 100% just to get back to even (100% of $50 = $50, and $50 + $50 = $100).
- The size and timing of gains play a huge role when accounting for compound returns and their effect on annualized returns. In other words, annualized returns are not a reliable measure of actual gains or losses. Annualized returns are, however, a good tool to use when comparing various investments against each other.

3. Use a time-weighted return to calculate your compound rate of return. To find the average of many things, such as daily rainfall or weight loss over several months, you can often use a simple average, or arithmetic mean. This is a technique you probably learned in school. However, the simple average does not account for the effect that each periodic return has on the others, or the timing of each return. To accomplish this, we can use a time-weighted geometric return. [7] (Don't worry, we'll walk you through this formula!)
- Using a simple average doesn't work because all periodic returns are dependent on each other. [8]
- For example, imagine that you want to tabulate your average return on $100 over the course of two years. You earned 100% in the first year, meaning you had $200 at the end of year one (100% of 100 = 100). You lost 50% during the second, meaning you had $100 at the end of the second year (50% of 200 = 100). This is the same figure you started with at the beginning of year one.
- A simple average (arithmetic mean) would add the two returns together and divide by the number of periods, which in this example is two years. The result would suggest that you earned an average return of 25% per year. [9] However, when you link the two returns, you can see that you actually earned nothing. The years cancel each other out.

4. Calculate your overall return. To start, you must calculate your total return over the full span of time you are assessing. For the purpose of clarity, we'll use an example where no deposits or withdrawals were made. To calculate your total return, all you need is two numbers: the beginning portfolio value and the ending value.
- Subtract your beginning value from your ending value.
- Divide this number by your beginning value. The resulting number is your return.
- In the case of a loss in the period under scrutiny, subtract the ending balance from the beginning balance. Then divide by the beginning balance and consider the result a negative value. (This latter operation is a substitute for needing to algebraically add a negative number.) [10]
- Do the subtraction first, then the division. This will give you your overall percent of return.

5. Know the Excel formulas for these calculations. The formula for Total Return Rate = (ending portfolio value - beginning portfolio value) / beginning portfolio value. The formula for Compound Rate of Return = POWER((1 + Total Return Rate), (1/years)) - 1.
- For example, if the beginning value of the portfolio was $1,000 and its ending value was $2,500 seven years later, the calculations would be:
- Total Return Rate = (2500 - 1000) / 1000 = 1.5.
- Compound Rate of Return = POWER((1 + 1.5), (1/7)) - 1 = .1398 = 13.98%.

Part 2: Calculating Your Annualized Return

1. Calculate your annualized return.
Once you've calculated the total return (as above), plug the result into this equation: Annualized Return = (1 + Return)^(1/N) - 1. [11] The outcome of this equation will be a number that corresponds to your return each year over the full span of time.
- In the exponent (the little number outside the parentheses), the "1" represents the unit we are measuring, which is 1 year. If you wish to be more specific, you could use "365" to capture a daily return.
- The "N" represents the number of periods that we are measuring. So, if you are measuring your return over 7 years, you would use the number 7 in the place of "N."
- For example, suppose that over a seven-year period, your portfolio grew in value from $1,000 to $2,500.
- First, calculate your overall return: (2,500 - 1,000) / 1,000 = 1.50 (a return of 150%).
- Next, calculate your annualized return: (1 + 1.50)^(1/7) - 1 = 0.1399 = 13.99% annual return. That's all there is to it!
- Use the ordinary mathematical order of operations: do the operations inside the parentheses first, then apply the exponent, then do the subtraction.

2. Calculate semi-annual returns. Now, let's say that you want to find semiannual returns (returns occurring twice a year, every six months) over the course of this seven-year period. [12] The formula stays the same; you only need to adjust the number of periods that you are measuring. Your final result will be a semiannual return.
- In this case, you will have 14 semiannual periods, two per year over the course of seven years.
- First, calculate your overall return: (2,500 - 1,000) / 1,000 = 1.50 (a return of 150%).
- Next, calculate your semiannual return: (1 + 1.50)^(1/14) - 1 = 6.76%.
- You can convert this into an annual return by simply multiplying by 2: 6.76% x 2 = 13.52%.

3. Calculate an annualized equivalent. You can also calculate the annualized equivalent of shorter returns. For example, imagine you only had a six-month return and wanted to know its annualized equivalent. Once again, the formula stays the same.
- Suppose over a six-month period, your portfolio increases in value from $1,000 to $1,050.
- Start by calculating your overall return: (1,050 - 1,000) / 1,000 = .05 (a 5% return over six months).
- Now if you wanted to know what the annualized equivalent would be (assuming a continuation of this rate of return and compounding returns), [13] you would calculate the following: (1 + .05)^(1/0.5) - 1 = 10.25% annual return.
- No matter how long or short the period of time, if you follow the formula above, you will always be able to convert your performance into an annualized return.

Community Q&A

- How do I annualize a return on an investment that has cash added or subtracted during the year?
Michael R. Lewis, Entrepreneur & Retired Financial Advisor: (1) Total the beginning Account Balance and any additions during the year to learn Total Investments. (2) Add any withdrawals during the year to the Ending Account Balance. (3) Subtract the sum of Step 1 from the sum of Step 2 to get total return. (4) Divide total return by the sum of Step 1 to get the rate of return within the year.
- How do I calculate total return on an investment that amortizes monthly in equal amounts over a one-year time period?
Michael R. Lewis, Entrepreneur & Retired Financial Advisor: Deduct the beginning Account Value from the total payments (interest and principal) received during the year to calculate interest during the year. Then divide the interest earned by the beginning Account Value to get an annual rate of return.
- How do I calculate the return if there is a withdrawal?
If there is just one withdrawal or deposit (or just a few withdrawals or deposits), treat separately each time period before, between, and after withdrawals or deposits. Use each balance to calculate the return for a particular time period. Annualize each of the returns and weight them by length of time period. Add the returns together to arrive at the total annual return. Watch for changes in interest rate, and adjust accordingly.
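All three steps in Part 2 apply the same formula with a different N, which is easy to check numerically. The Python sketch below uses my own function name; the article itself only gives the Excel form:

```python
def annualized_return(total_return, n_years):
    # Annualized Return = (1 + Return) ** (1 / N) - 1
    return (1 + total_return) ** (1 / n_years) - 1

# Overall return over seven years: (2,500 - 1,000) / 1,000 = 1.50 (150%)
total = (2500 - 1000) / 1000

annual = annualized_return(total, 7)            # ~0.1399 -> 13.99% per year
semi = annualized_return(total, 14)             # ~0.0676 -> 6.76% per half-year
six_month_equiv = annualized_return(0.05, 0.5)  # ~0.1025 -> 10.25% annual equivalent
```

Doubling the semiannual figure reproduces the article's rough conversion: 2 x 6.76% is about 13.52%.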
- Can you explain Donagan's query with an example? Here's a simple example: You have a savings account worth $1,000 at the beginning of the year, earning 1% simple interest paid annually. You withdraw $100 at the end of September. What's your rate of return for the full year? For the first nine months, your balance is $1,000, and for the last three months it's $900. So for the first nine months the interest you earn is ($1,000)(1%)(9/12) = $7.50. For the last three months your interest is ($900)(1%)(3/12) = $2.25. Your total interest for the year is $9.75. (It would have been $10 if you hadn't made the withdrawal.) To find your rate of return, divide $9.75 by $1,000, which is 0.00975 or 0.975% (slightly less than 1%). The point is: treat each time period (with its unique balance) separately, then add the balances together for the total interest earned (and divide by the original balance to obtain your annual rate of interest).

Tips

- Learning to calculate and understand annualized portfolio returns is important, as your annual return will be the number that you use to compare yourself to other investments as well as benchmarks and peers. It will have the power to confirm your stock-picking prowess and, more importantly, aid in uncovering any possible shortfalls in your investment strategies.
- Practice these calculations with some sample numbers to get comfortable with these equations. Practice will make these calculations come naturally and easily.
- The paradox mentioned at the very beginning of this article is merely a recognition of the fact that investment performance is usually judged against the performance of other investments. In other words, a small loss in a falling market may be considered better than a small gain in a rising market. It's all relative.

Warnings

- Make sure to follow the correct mathematical order of operations or you will not get an accurate figure.
Likewise, it's a good idea to double-check your work after performing these calculations.
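The savings-account answer in the Community Q&A above ($9.75 of interest, a 0.975% rate) can likewise be verified in a few lines. This is a sketch of that specific worked example; the variable names are mine:

```python
rate = 0.01  # 1% simple annual interest

# Treat each time period, with its own balance, separately
interest_first_nine = 1000 * rate * (9 / 12)  # balance is $1,000 for 9 months
interest_last_three = 900 * rate * (3 / 12)   # balance is $900 for 3 months

total_interest = interest_first_nine + interest_last_three  # -> 9.75
annual_rate = total_interest / 1000                         # -> 0.00975, i.e. 0.975%
```

The same sub-period approach generalizes to any number of deposits or withdrawals, as the earlier Q&A answer describes.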
http://www.wikihow.com/Calculate-Annualized-Portfolio-Return
It can be obtained from the official website. Let's try to simulate a small area for our tests. First of all, import the iTween package from the Asset Store; after that, we should see something like this. Create a new scene with the following content: Cube is the object that we are going to animate. Now we move on to iTween. Add a new C# script component to our object. Fill in the Start method:

void Start()
{
    iTween.RotateFrom(gameObject, iTween.Hash("y", 90.0f, "time", 2.0f, "easetype", iTween.EaseType.easeInExpo));
    iTween.MoveFrom(gameObject, iTween.Hash("y", 3.5f, "time", 2.0f, "easetype", iTween.EaseType.easeInExpo));
    iTween.ShakePosition(Camera.main.gameObject, iTween.Hash("y", 0.3f, "time", 0.8f, "delay", 2.0f));
    iTween.ColorTo(gameObject, iTween.Hash("r", 1.0f, "g", 0.5f, "b", 0.4f, "delay", 1.5f, "time", 0.3f));
    iTween.ScaleTo(gameObject, iTween.Hash("y", 1.75f, "delay", 2.8f, "time", 2.0f));
    iTween.RotateBy(gameObject, iTween.Hash("x", 0.5f, "delay", 4.4f));
    iTween.MoveTo(gameObject, iTween.Hash("y", 1.5f, "delay", 5.8f));
    iTween.MoveTo(gameObject, iTween.Hash("y", 0.5f, "delay", 7.0f, "easetype", iTween.EaseType.easeInExpo));
    iTween.ScaleTo(gameObject, iTween.Hash("y", 1.0f, "delay", 7.0f));
    iTween.ShakePosition(Camera.main.gameObject, iTween.Hash("y", 0.3f, "time", 0.8f, "delay", 8.0f));
    iTween.ColorTo(gameObject, iTween.Hash("r", 0.165f, "g", 0.498f, "b", 0.729f, "delay", 8.5f, "time", 0.5f));
    iTween.CameraFadeAdd();
    iTween.CameraFadeTo(iTween.Hash("amount", 1.0f, "time", 2.0f, "delay", 10.0f));
}

Run the scene and see the result. Let us examine the script line by line.

iTween.RotateFrom(gameObject, iTween.Hash("y", 90.0f, "time", 2.0f, "easetype", iTween.EaseType.easeInExpo));

The RotateFrom method is used to rotate an object. Unlike RotateTo and RotateBy, RotateFrom initializes the object at the specified rotation and rotates it back to its original state. The method, like most others, is overloaded.
You can use a short or long version:

RotateFrom(GameObject target, Vector3 rotation, float time);
RotateFrom(GameObject target, Hashtable args);

We pass gameObject, the object that carries the current script. In order not to write something like:

Hashtable args = new Hashtable();
args.Add("y", 90.0f);
args.Add("time", 2.0f);
args.Add("easetype", iTween.EaseType.easeInExpo);

we use iTween.Hash, a shorthand way of building a Hashtable. The argument y = 90.0f is equivalent (if x and z are zero, of course) to:

Quaternion.Euler(new Vector3(0f, 90.0f, 0f))

This is the rotation from which our animation starts.

time = 2.0f

The time that should be spent on the animation. There is a similar argument called "speed"; it specifies not the time but the rate at which the animation will run.

The last argument we specified is easetype = iTween.EaseType.easeInExpo. easetype selects the curve to be used for interpolation. Here is a graphical representation of the curves:

iTween.MoveFrom(gameObject, iTween.Hash("y", 3.5f, "time", 2.0f, "easetype", iTween.EaseType.easeInExpo));

MoveFrom is similar to the previous method; it simply uses movement instead of rotation.

iTween.ShakePosition(Camera.main.gameObject, iTween.Hash("y", 0.3f, "time", 0.8f, "delay", 2.0f));

ShakePosition is used here to implement camera "shake". This method moves an object with decreasing amplitude and does not use interpolation; the object appears at random points within the designated range. There is a new argument called "delay"; it is quite an important animation option, used to specify the number of seconds that must elapse before the animation starts.

iTween.ColorTo(gameObject, iTween.Hash("r", 1.0f, "g", 0.5f, "b", 0.4f, "delay", 1.5f, "time", 0.3f));

ColorTo smoothly changes the color of an object over time.

iTween.ScaleTo(gameObject, iTween.Hash("y", 1.75f, "delay", 2.8f, "time", 2.0f));

ScaleTo changes the size of the object.
iTween.RotateBy(gameObject, iTween.Hash("x", 0.5f, "delay", 4.4f));

RotateBy resembles RotateFrom and is needed in cases where you want to turn an object by more than 360 degrees (although in this case we could have done it with RotateTo). Suppose we had specified z = 2.0f; that would mean the object must turn twice around the Z axis over the given period of time.

iTween.MoveTo(gameObject, iTween.Hash("y", 1.5f, "delay", 5.8f));
iTween.MoveTo(gameObject, iTween.Hash("y", 0.5f, "delay", 7.0f, "easetype", iTween.EaseType.easeInExpo));

MoveTo is probably the main method of the whole iTween class. It moves the object to the specified coordinates in the allotted time. The interpolation is based on the same easetype curves you already know.

iTween.CameraFadeAdd();
iTween.CameraFadeTo(iTween.Hash("amount", 1.0f, "time", 2.0f, "delay", 10.0f));

CameraFadeAdd creates a new object that is used to simulate a fade to black. CameraFadeTo changes the fade depth from its current value to the specified amount. The following overloads exist:

CameraFadeAdd()
CameraFadeAdd(Texture2D texture)
CameraFadeAdd(Texture2D texture, int depth)

If no Texture2D is supplied, black is used. Beyond what I have described, there are a few more important things. For example, in the arguments you can specify a method that will be invoked when some event occurs.
Let's say:

public class iTweenController : MonoBehaviour
{
    int clbkN = 0;
    GUIStyle style;

    void Awake()
    {
        style = new GUIStyle();
        style.fontSize = 60;
    }

    void Start()
    {
        iTween.MoveTo(gameObject, iTween.Hash("position", new Vector3(5.0f, 1.0f, 0.0f),
            "oncomplete", "myClbk", "loopType", iTween.LoopType.loop, "speed", 2.0f));
    }

    void myClbk()
    {
        clbkN++;
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 0, 0), "Callback # " + clbkN, style);
    }
}

We used some new arguments for MoveTo:

position = new Vector3(5.0f, 1.0f, 0.0f)

This is shorthand, equivalent to "x", 5.0f, "y", 1.0f, "z", 0.0f.

oncomplete = "myClbk"

At the end of the animation (or of each iteration of a looped animation) the method with the specified name is called.

loopType = iTween.LoopType.loop

The loop type of the animation. In this case we specify a normal loop: the animation will play endlessly, and at the start of each iteration the object is moved back to its starting position.

That is it for now. Thank you for your attention.
https://unionassets.com/blog/basic-animation-itween-259
Episode #27: The PyCon 2017 recap and functional Python

Published Thurs, May 25, 2017, recorded Tues, May 23, 2017.

- All videos available:
- Lessons learned:
  - pick up swag on day one. vendors run out.
  - take business cards with you and keep them on you
  - Not your actual business cards unless you are representing your company.
  - Cards that have your social media, github account, blog, or podcast or whatever on them.
  - 3x3 stickers are too big. 2x2 plenty big enough
  - lightning talks are awesome, because they cover a wide range of speaking experience - will definitely do that again
  - try to go to the talks that are important to you, but don't over stress about it, since they are taped. However, it would be lame if all the rooms were empty, so don't everybody ditch.
  - lastly: everyone knows Michael.

Michael #2: How to Create Your First Python 3.6 AWS Lambda Function

- Tutorial from Full Stack Python
- Walks you through creating an account
- Select your Python version (3.6, yes!)
- def lambda_handler(event, context): …# write this function, done!
- Set and read environment variables (could be connection strings and API keys)

Brian #3: How to Publish Your Package on PyPI

- jetbrains article
- structure of the package - oops. doesn't include src, see
- decent discussion of the contents of the setup.py file (but interestingly absent is an example setup.py file)
- good discussion of the .pypirc file and links to the test and production PyPI
- example of using twine to push to PyPI
- overall: good discussion, but you'll still need a decent example.

Michael #4: Coconut: Simple, elegant, Pythonic functional programming

- Coconut is a functional programming language that compiles to Python.
- Since all valid Python is valid Coconut, using Coconut will only extend and enhance what you're already capable of in Python.
- pip install coconut
- Some of Coconut's major features include built-in, syntactic support for:
  - Pattern-matching,
  - Algebraic data-types,
  - Tail call optimization,
  - Partial application,
  - Better lambdas,
  - Parallelization primitives, and
  - A whole lot more, all of which can be found in Coconut's detailed documentation.
- Talk Python episode coming in a week

Brian #5: Choose a License

- MIT : simple and permissive
- Apache 2.0 : something extra about patents.
- GPL v3 : this is the contagious one that requires derivative work to also be GPL v3
- Nice list with overviews of what they all mean with color coded bullet points:

Michael #6: Python for Scientists and Engineers

- Table of contents:
  - Beginners Start Here:
  - Main Book
  - Machine Learning Section
  - Machine Learning with an Amazon like Recommendation Engine
  - Machine Learning For Complete Beginners: Learn how to predict Titanic survivors using machine learning. No previous knowledge needed!
  - Cross Validation and Model Selection: In which we look at cross validation, and how to choose between different machine learning algorithms. Working with the Iris flower dataset and the Pima diabetes dataset.
  - Natural Language Processing
  - Introduction to NLP and Sentiment Analysis
  - Natural Language Processing with NLTK
  - Intro to NLTK, Part 2
  - Build a sentiment analysis program
  - Sentiment Analysis with Twitter
  - Analysing the Enron Email Corpus: The Enron Email corpus has half a million files spread over 2.5 GB. When looking at data this size, the question is, where do you even start?
  - Build a Spam Filter using the Enron Corpus

In other news:

- Python Testing with pytest Beta release and initial feedback is going very well.
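As a footnote to Michael #2: the `def lambda_handler(event, context)` stub mentioned there can be fleshed out into a minimal, locally runnable function. The event key and environment variable below are illustrative assumptions, not part of the tutorial:

```python
import os

def lambda_handler(event, context):
    """Minimal AWS Lambda entry point (Python 3.6 style)."""
    # Configuration such as connection strings or API keys typically
    # arrives via environment variables, as the episode notes mention.
    greeting = os.environ.get("GREETING", "Hello")
    # "name" is an assumed key in the incoming event payload
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "{}, {}!".format(greeting, name)}

# Local smoke test; Lambda normally supplies a context object, None is fine here
print(lambda_handler({"name": "PyCon"}, None))
```

In AWS you would set the handler to `module_name.lambda_handler`; locally, calling it with a plain dict is enough to exercise the logic.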
https://pythonbytes.fm/episodes/show/27/the-pycon-2017-recap-and-functional-python
A problem with a sensor packet.

Hello. Could you help us resolve a strange problem? We built a TurtleBot based on an iRobot Roomba 500. We made some changes in «turtlebot_node.py» to use the Roomba, such as in the «init_params» method:

self.robot_type = rospy.get_param('~robot_type', 'roomba')
self.has_gyro = rospy.get_param('~has_gyro', False)

And in the «reconfigure» method we commented out the updating of the «has_gyro» field:

#self.has_gyro = config['has_gyro']

However, for the serial communication we decided to use an Arduino. So we plugged the Roomba into the Arduino via a Mini-DIN cable according to the Roomba Open Interface, and we also plugged the Arduino into the netbook via a USB cable. We uploaded the following sketch to the Arduino:

#include <SoftwareSerial.h>

SoftwareSerial myserial(10, 11);

void setup() {
  Serial.begin(57600);
  myserial.begin(57600);
}

void loop() {
  if (Serial.available() > 0) {
    myserial.write(Serial.read());
  }
  if (myserial.available() > 0) {
    Serial.write(myserial.read());
  }
}

So the Arduino is just retransmitting data from the CPU to the Roomba and vice versa. We also made some changes for this, such as changing the default port in «turtlebot_node.py» (because the Arduino works on the ttyACM0 port):

__init__(self, default_port='/dev/ttyACM0', default_update_rate=30.0)

We just decided to change it in the constructor, because there is no redefinition of it when this object initializes. That's it. It worked perfectly. We even ran the «follower» node. But one day something happened with our TurtleBot. It crashes with the following error:

Failed to contact device with error: [Distance, angle displacement too big, invalid readings from robot. Distance: 5.42, Angle: -46.11]. Please check that the Create is powered on and that the connector is plugged into the Create.

We tested different netbooks, different ROSes (electric and fuerte), different Roombas, different Ubuntus (12.04 and 11.10), different cables, different bauds (19200 and 57600), and different Arduinos, at least. We've read every line of the code.
But we can't find where the problem is. The Roomba is periodically sending noise in the sensor packet. Maybe the problem is the delay between requesting the sensor data and reading it? Could you please help us? By the way, when we send two bytes (142 and 100) to the Roomba, it returns 63 bytes of sensor packet instead of the expected 80. Thank you!

I don't have an answer for your question, just a note: you shouldn't comment out or change source code in turtlebot_node.py if not necessary. All the parameters you wanted to change can be changed using ROS parameters, and that's what you should use.

Thank you, Lorenz. I didn't think about it. I'll try to use rosparams.
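For what it's worth, the 142/100 exchange can be isolated from ROS entirely to check whether the short packet comes from the Arduino bridge or from the Roomba itself. Below is a rough sketch; the helper and the fake port are mine, not from turtlebot_node.py, and in real use you would pass a pyserial port such as serial.Serial('/dev/ttyACM0', 57600, timeout=0.5) instead of the fake:

```python
def request_sensor_group(port, group=100, expected=80):
    # OI opcode 142 = "Sensors"; group packet 100 should return 80 bytes
    # on a 500-series Roomba according to the Open Interface spec
    port.write(bytes([142, group]))
    data = port.read(expected)
    if len(data) != expected:
        raise IOError("short sensor packet: got %d of %d bytes"
                      % (len(data), expected))
    return data


class FakeRoomba:
    """Stand-in for serial.Serial so the helper can be exercised offline."""
    def __init__(self, reply_len):
        self.reply_len = reply_len

    def write(self, payload):
        self.last_request = payload

    def read(self, n):
        return bytes(min(n, self.reply_len))


print(len(request_sensor_group(FakeRoomba(80))))  # healthy 80-byte reply
try:
    request_sensor_group(FakeRoomba(63))          # the 63-byte case from above
except IOError as err:
    print(err)
```

Running the same check against the real port with a short delay between the write and the read would also show whether timing is the issue, as suspected in the question.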
https://answers.ros.org/question/41373/a-problem-with-a-sensor-packet/
How to Get the Last Part of the Path in Python?

In this article, we will learn how to get the last part of a path in Python. We will use some built-in functions, and some custom code as well, to better understand the topic. We will look at two modules of Python - the os module and the pathlib module. The os module provides three different functions to extract the last part of a path, and the pathlib module provides one more. Let us discuss these functions separately.

Get the Last Part of the Path using the OS Module

The os module in Python has various functions to interact with the operating system. It provides os.path, a submodule of the os module, for the manipulation of paths. We will use three functions of os.path to get the last part of the path in Python.

Example: Use os.path.normpath() and os.path.basename()

This method uses os.path.normpath() and os.path.basename() together to find the last part of the given path.

os.path.normpath() - removes all trailing slashes from the given path. Its result is passed as an argument to os.path.basename().
os.path.basename() - returns the last part of the path.

import os

path = os.path.basename(os.path.normpath('/folderA/folderB/folderC/folderD/'))
print(path)

folderD

Example: Use os.path.split()

This method uses os.path.split() to find the last part of the path. As the name suggests, it splits the path in two - a head part and a tail part. Here, the tail is the last pathname component and the head is everything leading up to it. The tail part will never contain a slash; if the path ends with a slash, the tail will be empty. This example returns the last part of the path, i.e. the tail part.
import os

path = '/home/User/Desktop/sample.txt'

# Split the path into a (head, tail) pair
head_tail = os.path.split(path)

# Print the tail part of the path
print(head_tail[1])

sample.txt

Get the Last Part of the Path using the Pathlib Module

The pathlib module provides the PurePath() function to get the last part of the path. path.name prints the last part of the given path. If you are confused between Path and PurePath: PurePath provides purely computational operations, while Path (a "concrete path"), which inherits from PurePath, also provides I/O operations.

Example: Use pathlib.PurePath()

import pathlib

path = pathlib.PurePath('/folderA/folderB/folderC/folderD/')
print(path.name)

folderD

Conclusion

In this article, we learned to find the last part of a given path by using built-in functions such as os.path.basename(), os.path.normpath(), os.path.split(), and pathlib.PurePath(), with different examples to extract the last part. These functions will work in all cases.
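One detail worth noting alongside the examples above: pathlib ignores a trailing slash on its own, so the normpath() step needed with os.path.basename() is unnecessary there. A quick sketch (PurePosixPath is used so the separator is fixed regardless of the platform the snippet runs on):

```python
import pathlib

p = pathlib.PurePosixPath('/folderA/folderB/folderC/')

# the trailing slash is ignored without any normpath() step
print(p.name)    # folderC
print(p.parent)  # /folderA/folderB
print(p.suffix)  # empty string, since there is no file extension
```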
https://www.studytonight.com/python-howtos/how-to-get-the-last-part-of-the-path-in-python
I have code in a class that dynamically creates methods using define_method:

class Foo
  ["bar", "baz"].each do |method|
    create_method(method)
  end

  private

  def create_method(name)
    define_method(name) do
      puts "HELL"
    end
  end
end

Foo.new.bar

undefined method `create_method' for Foo:Class (NoMethodError)

There are several problems with your code. The error you are getting has absolutely nothing to do with private or public at all. The error message says that the method create_method cannot be found. There are two reasons for that:

1. You are calling create_method before its definition.
2. create_method is defined as an instance method, i.e. for calling it on instances of Foo, but you are calling it on Foo itself. You have to define it as a method somewhere in Foo's class (i.e. Class), one of its ancestors (e.g. Module), or Foo's singleton class.

I will define it as a singleton method of Foo here, but if the method really is as generic as you have shown in your example, then it probably rather belongs in Module instead.

class Foo
  class << self
    private

    def create_method(name)
      define_method(name) do
        puts "HELL"
      end
    end
  end

  ["bar", "baz"].each do |method|
    create_method(method)
  end
end

Foo.new.bar
# HELL
https://codedump.io/share/W03gQuEDsndf/1/dynamically-create-a-public-method-in-a-private-method
Special thanks to Brian Farris for contributing this post.

Who should care?

Anyone who is involved in testing. Whether you are testing creatives for a marketing campaign, pricing strategies, website designs, or even pharmaceutical treatments, multi-armed bandit algorithms can help you increase the accuracy of your tests while cutting down costs and automating your process.

Where does the name come from?

A typical slot machine is a device in which the player pulls a lever arm and receives rewards at some expected rate. Because the expected rate is typically negative, these machines are sometimes referred to as "one-armed bandits". By analogy, a "multi-armed bandit" is a machine in which there are multiple lever arms to pull, each one of which may pay out at a different expected rate. The "multi-armed bandit" problem refers to the challenge of constructing a strategy for pulling the levers when one has no prior knowledge of the payout rate for any of the levers. Therefore, one must strike a balance between exploring each of the levers in order to determine their value, while exploiting one's current knowledge in order to favor high-paying levers. This question has been the subject of active research since the 1950s, and many variations have been studied.

What do slot machines have to do with marketing, pricing, website design, etc.?

For any testing problem, you can make an analogy with the multi-armed slot machine by thinking of each test case as an "arm". Considering the multi-armed bandit problem is equivalent to looking for an optimal strategy for efficiently testing each case of interest for your problem.

Connection to A-B Testing

Traditional A-B testing can be thought of as a special case of the multi-armed bandit problem, in which we choose to pursue a strategy of pure exploration in the initial testing phase, followed by a period of pure exploitation in which we choose the most valuable "arm" 100% of the time.
If the exploitation phase can be assumed to be much longer than the exploration phase, this approach is usually reasonable, as the wasted resources during the exploration are insignificant relative to the total rewards. However, in cases where the cost of the exploration phase is non-negligible, or in cases in which arm values are changing dynamically on short enough timescales that it becomes impractical to repeatedly perform new A-B tests, alternative approaches are needed.

When to Use Bandits

Bandits are appropriate any time that testing is required. As discussed above, traditional A-B testing can be thought of as a special case of the multi-armed bandit problem, but it is certainly not the optimal way of balancing exploration with exploitation. Therefore, by definition, it cannot hurt to reimagine any A-B testing framework as a multi-armed bandit problem. Moreover, significant gains may be obtained by switching to a more sophisticated testing algorithm such as epsilon-greedy. That said, the degree of improvement that can be expected depends on several factors.

Short term applications

When the window of time to exploit is less than or equal to the length of time required to complete an A-B test, large gains can be made by using a multi-armed bandit approach in order to begin exploiting as early as possible. This can occur, for example, in short term holiday promotions.

Capturing dynamic changes

When the phenomenon being tested changes significantly enough that the results of an A-B test can become invalid over time, multi-armed bandits provide an alternative to repeatedly retesting. By continuously exploring, one can maintain an optimal strategy.

Automating testing process

Multi-armed bandits can also provide value by eliminating the need for repeated intervention by analysts in order to perform repeated A-B tests.

Epsilon-Greedy

The most straightforward algorithm for continuously balancing exploration with exploitation is called "epsilon-greedy".
A schematic diagram for the algorithm is shown above. Here, we pull a randomly chosen arm a fraction ε of the time. The other 1-ε of the time, we pull the arm which we estimate to be the most profitable. As each arm is pulled and rewards are received, our estimates of the arm values are updated. This method can be thought of as a continuous testing setup, where we devote a fraction ε of our resources to testing.

The following Python code implements a simple 10-armed bandit using the epsilon-greedy algorithm. The payout rates of the arms are normally distributed with mean=0 and sigma=1. Gaussian noise is also added to the rewards, also with mean=0 and sigma=1. This setup mirrors that of Sutton and Barto, Section 2.1.

import numpy as np

class Bandit:
    def __init__(self):
        self.arm_values = np.random.normal(0, 1, 10)
        self.K = np.zeros(10)
        self.est_values = np.zeros(10)

    def get_reward(self, action):
        noise = np.random.normal(0, 1)
        reward = self.arm_values[action] + noise
        return reward

    def choose_eps_greedy(self, epsilon):
        rand_num = np.random.random()
        if epsilon > rand_num:
            return np.random.randint(10)
        else:
            return np.argmax(self.est_values)

    def update_est(self, action, reward):
        self.K[action] += 1
        alpha = 1. / self.K[action]
        self.est_values[action] += alpha * (reward - self.est_values[action])  # keeps running average of rewards

Now we can run a simple experiment using our bandit in order to see how ε controls the tradeoff between exploration and exploitation. A single experiment consists of pulling the arm Npulls times for a given 10-armed bandit.

def experiment(bandit, Npulls, epsilon):
    history = []
    for i in range(Npulls):
        action = bandit.choose_eps_greedy(epsilon)
        R = bandit.get_reward(action)
        bandit.update_est(action, R)
        history.append(R)
    return np.array(history)

We perform experiments in which we execute a sequence of 3000 pulls, updating our estimate of the arm values and keeping track of the reward history.
We average the results over 500 such experiments, and we repeat for 3 different values of ε for comparison.

Nexp = 2000
Npulls = 3000

avg_outcome_eps0p0 = np.zeros(Npulls)
avg_outcome_eps0p01 = np.zeros(Npulls)
avg_outcome_eps0p1 = np.zeros(Npulls)

for i in range(Nexp):
    bandit = Bandit()
    avg_outcome_eps0p0 += experiment(bandit, Npulls, 0.0)
    bandit = Bandit()
    avg_outcome_eps0p01 += experiment(bandit, Npulls, 0.01)
    bandit = Bandit()
    avg_outcome_eps0p1 += experiment(bandit, Npulls, 0.1)

avg_outcome_eps0p0 /= float(Nexp)
avg_outcome_eps0p01 /= float(Nexp)
avg_outcome_eps0p1 /= float(Nexp)

# plot results
import matplotlib.pyplot as plt

plt.plot(avg_outcome_eps0p0, label="eps = 0.0")
plt.plot(avg_outcome_eps0p01, label="eps = 0.01")
plt.plot(avg_outcome_eps0p1, label="eps = 0.1")
plt.ylim(0, 2.2)
plt.legend()
plt.show()

We compare the evolution of expected reward vs. iteration for several values of ε. Choosing ε=0 is equivalent to a purely greedy algorithm, in which we always choose the arm believed to be most rewarding. In this case, the expected value very quickly increases, as the algorithm commits to one arm and stops exploring, but the expected value is relatively low as it doesn't attempt to search for better options at all. Choosing ε=0.1 leads to higher expected rewards, but it takes approximately 500 iterations before leveling off. Choosing ε=0.01 leads to a higher expected reward in the long term, because fewer pulls are wasted continuing to explore after finding the best arm, but it takes much longer to reach this level.

Contextual Bandit

In the bandit problem described above, it is assumed that nothing is known about each arm other than what we have learned from prior pulls. We can relax this assumption and assume that for each arm there is a d-dimensional "context" vector. For example, if each arm represents a digital ad, the features in these vectors may correspond to things like banner size, web browser type, font color, etc.
We can now model the value of each arm using these context vectors as well as past rewards in order to inform our choice of which arm to pull. This scenario is known as the contextual bandit problem.

In the figure above, we show the results of an experiment similar to the epsilon-greedy experiment. In this case, however, we let there be 100 arms, and the value of each arm is equal to a linear combination of 100 context features. The value of each context feature is drawn from a uniform distribution between 0 and 1. The coefficients of these linear combinations are now allowed to vary sinusoidally in time, with random frequency and phase. The resulting arm values vary periodically and are represented by the grey curves in the plot on the right-hand side. Thus, we see that classical bandit techniques will fail, as the "best arm" is constantly changing. Moreover, because there are a large number of arms, it is impractical to test them all quickly enough to keep up with the fact that they are changing. In order to gain information quickly enough and exploit our knowledge, we must take advantage of contextual information.

To do this, we model the values of the arms using a linear regression, continuously updating the fit using stochastic gradient descent with a constant learning rate. We perform the experiment 2000 times and plot expected rewards in the left-hand plot. The green curve is the result of a classical epsilon-greedy algorithm with no context taken into account. As expected, the expected rewards are essentially zero, as we do not take into account the fact that the values of the arms are changing. The other curve shows the result of the contextual bandit described above. In this case, the rewards increase significantly. We also plot the rewards received for the contextual bandit as red dots in the right-hand plot. Here, we see the choice of arm jumping as the relative value of the arms switches.
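The setup just described can be sketched in a few lines of NumPy. This is not the post's experiment code - the 100 arms, the sinusoidal drift, and the plotting are omitted - but a minimal stand-in showing one linear model per arm, updated by constant-rate stochastic gradient descent, with epsilon-greedy selection over the predicted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_features = 5, 8

# hidden per-arm coefficients; rewards are a noisy linear function of context
true_theta = rng.normal(size=(n_arms, n_features))

theta_hat = np.zeros((n_arms, n_features))  # one linear model per arm
lr, eps = 0.05, 0.1                         # constant SGD rate, exploration rate

for t in range(5000):
    x = rng.uniform(0.0, 1.0, n_features)    # context vector for this round
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))      # explore a random arm
    else:
        arm = int(np.argmax(theta_hat @ x))  # exploit the predicted values
    reward = true_theta[arm] @ x + rng.normal(scale=0.1)
    # SGD step on squared error, updating only the chosen arm's model
    theta_hat[arm] += lr * (reward - theta_hat[arm] @ x) * x
```

The constant learning rate is the important choice here: with drifting coefficients it lets the fit keep tracking the change, whereas a decaying rate would eventually freeze the model.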
Randomized Probability Matching – A Bayesian Approach

There is a nice alternative to the epsilon-greedy algorithm known as randomized probability matching. The technique is described in detail here. Here is a summary of the approach:

1. We assume a "prior" probability distribution for the expected reward of each arm in the bandit.
2. We use these distributions to compute the probability that each arm is the best arm.
3. We choose which arm to pull next. The probability of a given arm being chosen is set equal to the probability that it is the best arm.
4. We measure the reward received and update our "posterior" probability distribution for the expected reward of each arm.
5. Iterate steps 2-4.

Twitter Demo

The following demo using Twitter data further illustrates how this works. Imagine that we are running a political ad campaign. Each time an individual Twitter user sends a tweet with the word "president" or "election" in it, we serve them an imaginary ad. We have three distinct ads we can choose from: a "hillary clinton" ad, a "bernie sanders" ad, and a "donald trump" ad. We further imagine that the individual has "clicked" on our ad if his tweet contains a reference to the corresponding candidate name. We will use randomized probability matching to continuously update our estimates of the probability distribution for the click rate of each ad. The video below shows how these distributions evolve in real time as tweets are collected. The x-axis represents click-rate, and the y-axis represents probability density. Note that this video has been sped up by 8x.

There are several interesting things to note about this demo:

- Each distribution starts out broad and narrows as we gather more data.
- Each time the curves change, a new ad was served. The video clearly shows that the "winning" ad is being served more often.
- The losing ads are sampled less often, and thus their distributions remain broad.
This is a good thing, because we don't need to know the click-rate of the losing ads with great accuracy as long as we are confident that they are not the winner. For the code used to generate this demo, see this repo on GitHub.

Relation to Reinforcement Learning

The multi-armed bandit problem can be thought of as a special case of the more general reinforcement learning problem. The general problem is beyond the scope of this post, but it is an exciting area of machine learning research. The Advanced Research team in Capital One Data Labs has been working on a reinforcement learning software package. To learn more, check out this excellent book by Sutton and Barto.

Brian Farris is a data scientist at Capital One Labs in New York. He was a Fellow in the Spring 2015 NYC cohort of The Data Incubator. Prior to this he was a postdoc in computational astrophysics at Columbia and NYU, working on simulations of the environments around binary black holes. He received his PhD in Physics from the University of Illinois at Urbana-Champaign.
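To make the randomized probability matching recipe above concrete, here is a minimal Beta-Bernoulli version (often called Thompson sampling) with three hypothetical ads; the click-rates are made up and are unrelated to the Twitter demo's data:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rates = np.array([0.04, 0.05, 0.11])  # hypothetical click-rates per ad

alpha = np.ones(3)   # Beta(1, 1) priors: flat over [0, 1]
beta = np.ones(3)
pulls = np.zeros(3, dtype=int)

for _ in range(10000):
    # steps 2-3: sample a plausible rate for each ad, serve the apparent best
    ad = int(np.argmax(rng.beta(alpha, beta)))
    # step 4: observe a click or no-click and update that ad's posterior
    click = rng.random() < true_rates[ad]
    alpha[ad] += click
    beta[ad] += 1 - click
    pulls[ad] += 1

print(pulls)  # the highest-rate ad should dominate the pull counts
```

As in the video, the posteriors of the losing ads stay broad: they are sampled too rarely to narrow, which is fine as long as we are confident they are not the winner.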
https://blog.thedataincubator.com/2016/07/multi-armed-bandits-2/
This is the mail archive of the [email protected] mailing list for the libstdc++ project. Dan Kegel wrote: > ... according to ppc405 erratum #77 in 87256. A bit more detail: in the ppc405 linux kernel port, the changeset that introduced the workaround is: [email protected] 2001-09-21 03:00:38-04:00 [email protected] Update from David Gibson to properly implement the IBM4xx Errata #77 solution. This places certain pipeline operations prior to the executing of a stwcx. instruction. See e.g. OK, that fixes the kernel. We already had that. What we need on top of that is a fix for gcc's stdlibc++. Here's what I'm trying: First, in gcc/config/rs6000/rs6000.h, define __PPC405__ when compiling for ppc405. Second, insert the 'sync' instruction into atomicity.h like so: --- gcc-3.0.2/libstdc++-v3/config/cpu/powerpc/bits/atomicity.h.orig Tue Feb 27 16:04:08 2001 +++ gcc-3.0.2/libstdc++-v3/config/cpu/powerpc/bits/atomicity.h Thu Jul 18 13:36:39 2002 @@ -32,6 +32,18 @@ typedef int _Atomic_word; +#ifdef __PPC405__ +/* fix PPC405 specific bug: errata #77 - JRO 07/18/02 & dank + See erratum 77 in + + See also version 1.21 of arch/ppc/kernel/ppc_asm.h, Fri Sep 21 00:00:34 2001 + in ChangeSet 1.489 at +*/ +#define __LIBSTDCPP_PPC405_ERR77_SYNC "sync \n\t" +#else +#define __LIBSTDCPP_PPC405_ERR77_SYNC +#endif + static inline _Atomic_word __attribute__ ((__unused__)) __exchange_and_add (volatile _Atomic_word* __mem, int __val) @@ -42,6 +54,7 @@ "0:\t" "lwarx %0,0,%2 \n\t" "add%I3 %1,%0,%3 \n\t" + __LIBSTDCPP_PPC405_ERR77_SYNC "stwcx. %1,0,%2 \n\t" "bne- 0b \n\t" "/* End exchange & add */" etc. I'm building now; wish me luck :-) - Dan
http://gcc.gnu.org/ml/libstdc++/2002-07/msg00145.html
I've just spent hours hunting for a bug and was finally able to isolate the code causing it. It seems that using ALLEGRO_NATIVE_PATH_SEP with PHYSFS won't work on Windows. Consider the following code:

I'm working on Linux and the code works whether you use PHYSFS (load the file data/image.png from the my-file.zip archive) or if you comment out the marked lines and don't use PHYSFS (load image.png from the data/ folder). Both work on Linux. However, when building for Windows, the PHYSFS version won't work. The code will work fine if you don't use PHYSFS, though. I tested this with Wine and with a Windows XP VM, both with the same results. Of course, you can use '/' instead of ALLEGRO_NATIVE_PATH_SEP and the problem disappears, but it's still annoying trying to use ALLEGRO_NATIVE_PATH_SEP and figuring out that it's the cause of your problems.

Personally I would be for deprecating ALLEGRO_NATIVE_PATH_SEP. There is no reason to ever use it (Windows filesystem functions all understand / in place of \). And when displaying a path to a user, I doubt they really care if you show them C:\\GAMES or C://GAMES. Having said that, I have not used Windows in a very long time, so maybe there are cases where \ needs to be used?

--"Either help out or stop whining" - Evert

There are cases such as FOR loop variables being wrongly set in the Windows command prompt when using "/" instead of "\". Some old command prompt commands also use "/" for named arguments and might potentially cause unexpected results. In my experience it will generally not cause any issues, and I use "/" all the time, except for batch scripts.

so maybe there are cases where \ needs to be used?

Off the top of my head, I can see two kinds of cases:
- As output: in order to show a path to a user, for example to display "You are editing C:\Data\Users\(...)\Image4.png"
- As input: if you need to parse a path produced as a string by a system command or third-party library.
Or if you want to parse argv[0]. In the second case, it's actually better to expect both separators when you're on Windows. I remember slashes when running a program through sh or gdb, and backslashes otherwise.

Using '/' seems like the way to go, as solving the issue might be more complicated than it seems. Any solution on Windows might fix the issue in some places but break things somewhere else. That said, could this be a PHYSFS issue and not Allegro's? I mean, if you use data\image.png (as per the output of the printf() in line 23) with al_load_bitmap(), it will work fine if you're not using PHYSFS but fail with PHYSFS. A quick look at the PHYSFS documentation seems to indicate that PHYSFS usually expects its filenames to be written in platform-independent notation. So, that might likely be the cause of the problem.

[EDIT:] Indeed, the following minimal PHYSFS code fails when using '\\' as the path separator:

#include <stdio.h>
#include <assert.h>
#include <physfs.h>

int main(int argc, char **argv)
{
    assert(PHYSFS_init(argv[0]));
    assert(PHYSFS_mount("my-file.zip", NULL, 0));
    PHYSFS_File *file = PHYSFS_openRead("data\\image.png");
    assert(file);
}

Uhm, I might be hallucinating, but PHYSFS expects /, just like CMake, and lots of other things.

To expand, PHYSFS is a virtual file system. Its convention is to use the forward / slash. It's totally arbitrary, but you have to convert Windows filenames to use forward slashes yourself if they are in the DOS format.

EDIT
And actually, I use ALLEGRO_NATIVE_PATH_SEP myself, but it's easily replaced by a couple of ifdefs.

Yeah, I was trying to say that indeed, the error in my code was caused by using the Windows ALLEGRO_NATIVE_PATH_SEP ('\\') with PHYSFS. Knowing the PHYSFS convention makes it obvious that if you're going to use PHYSFS you have to use forward slashes, but it's not something that's immediately obvious if you're not aware of the PHYSFS convention. I guess it's not really an Allegro bug, nor a PHYSFS bug.
Just something that you must be aware of when using both together.

Yeah, just something to be aware of. And CMake-GUI doesn't tell you anything is wrong with your paths until you hit configure. :/ Also, I have a SanitizePath() function for just this purpose. Turns everything into a forward-slash path. So:

PhysFS uses forward slashes only? The way to deal with that seems to be to make a macro or function that returns the correct path separator, based on ALLEGRO_NATIVE_PATH_SEP and whether you are using PhysFS or not.

---Smokin' Guns - spaghetti western FPS action

There's no OS that doesn't understand forward slashes, at least when it comes to system calls; everything should just automatically be converted to /.
https://www.allegro.cc/forums/thread/617884
You can use the ActionName attribute like so:

[ActionName("My-Action")]
public ActionResult MyAction()
{
    return View();
}

Note that you will then need to call your View file "My-Action.cshtml" (or appropriate extension). You will also need to reference "my-action" in any Html.ActionLink methods. There isn't such a simple solution for controllers.

Now with MVC5, Attribute Routing has been absorbed into the project. You can now use:

[Route("My-Action")]

For controllers, you can apply a RoutePrefix attribute which will be applied to all action methods in that controller:

[RoutePrefix("my-controller")]

One of the benefits of using RoutePrefix is that URL parameters will also be passed down to any action methods:

[RoutePrefix("clients/{clientId:int}")]
public class ClientsController : Controller
.....
[Route("edit-client")]
public ActionResult Edit(int clientId) // will match /clients/123/edit-client

This is the real answer. Not sure why Phil didn't add this info.

Nice tip. Just to add: when you do this with the default View() invocation, MVC will search for "My-Action.aspx" somewhere in the Views folder, not "MyAction.aspx," unless you explicitly specify the original name.

@Eduardo I think the ActionName attribute was added for Preview 5, which came out just after Phil's post.

How do you explicitly specify the view file name? Do I have to change the view's file name to My-Action.aspx?

@LordofScripts - make sure that you've configured routing appropriately with: routes.MapMvcAttributeRoutes();

Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow

You could create a custom route handler as shown in this blog:

public class HyphenatedRouteHandler : MvcRouteHandler
{
    protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        requestContext.RouteData.Values["controller"] = requestContext.RouteData.Values["controller"].ToString().Replace("-", "_");
        requestContext.RouteData.Values["action"] = requestContext.RouteData.Values["action"].ToString().Replace("-", "_");
        return base.GetHttpHandler(requestContext);
    }
}

routes.Add(
    new Route("{controller}/{action}/{id}",
        new RouteValueDictionary(
            new { controller = "Default", action = "Index", id = "" }),
        new HyphenatedRouteHandler())
);

Asp.Net MVC: How do I enable dashes in my urls?
- Stack Overflow

Uppercase urls are problematic because cookie paths are case-sensitive, and most of the internet is actually case-sensitive, while Microsoft technologies treat urls as case-insensitive. (More on my blog post.)

To install it, simply open the NuGet window in Visual Studio by right-clicking the project and selecting NuGet Package Manager, and on the "Online" tab type "Lowercase Dashed Route", and it should pop up.

After trying about a dozen of the other solutions I found online, this was the first one that actually worked for me with MVC5. Thanks

Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow

Here's what I did using areas in ASP.NET MVC 5 and it worked like a charm. I didn't have to rename my views, either.

public static void RegisterRoutes(RouteCollection routes)
{
    // add these to enable attribute routing and lowercase urls, if desired
    routes.MapMvcAttributeRoutes();
    routes.LowercaseUrls = true;

    // routes.MapRoute...
}

[RouteArea("SampleArea", AreaPrefix = "sample-area")]
[Route("{action}")]
public class SampleAreaController : Controller
{
    // ...

    [Route("my-action")]
    public ViewResult MyAction()
    {
        // do something useful
    }
}

The URL that shows up in the browser if testing on a local machine is: localhost/sample-area/my-action. You don't need to rename your view files or anything. I was quite happy with the end result. After routing attributes are enabled you can delete any area registration files you have, such as SampleAreaRegistration.cs.

Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow
In fact, Microsoft reached out to the author of AttributeRouting to help them with their implementation for MVC 5. Wow, no ones mentioned the Ghostbusters reference? Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow You could write a custom route that derives from the Route class GetRouteData to strip dashes, but when you call the APIs to generate a URL, you'll have to remember to include the dashes for action name and controller name. Phil is right (as usual). There's a great example that I can't take credit for, but I'm grateful I found. It's by an MVP, and you can find it here. Here is my NuGet package for this implementation: nuget.org/packages/LowercaseDashedRoute Don't forget to read the README on GitHub (via Project Info link on NuGet page) Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow You can define a specific route such as: routes.MapRoute( "TandC", // Route controllerName "CommonPath/{controller}/Terms-and-Conditions", // URL with parameters new { controller = "Home", action = "Terms_and_Conditions" } // Parameter defaults ); Similarly, using AttributeRouting allows you to specify any route name you want, with the added benefit of knowing exactly which route your action is associated with. Asp.Net MVC: How do I enable dashes in my urls? - Stack Overflow ExtensionlessUrlHandler-Integrated-4.0 DELETE PUT You've mention on a deleted post you were running on a 2008 server right? Try removing webDav role, or disable it from your site config: on system.webServer -> modules section, remove WebDAVModule module: <system.webServer> <modules> <remove name="WebDAVModule" /> </modules> <handlers> <remove name="WebDAV" /> </handlers> </system.webServer> I did this stuffs before. But still not works ): +Daniel I did. Still not works. I see just now, when I make a DELETE request, and while it's trying to do request I click another link, and the DELETE request sone fine! My english is too bad, do you understande what is my mean? 
See: 1- Make a DELETE request, 2- it's pending, 3- I click another link, 4- the DELETE request that was pending succeeds! It's important to remove both the module and the handler. I missed removing the module the first time in IIS and was very confused for a while!

If you are getting the following error in your production environment from an ASP.NET Web API on PUT or DELETE, even though these methods work fine locally:

405 - The HTTP verb used to access this page is not allowed.

<system.webServer>
  <modules>
    <remove name="WebDAVModule" />
  </modules>
</system.webServer>

Cause: the WebDAV module blocks PUT/DELETE methods by default, so first remove this module and its handler. We then remove any existing ExtensionlessUrlHandler-Integrated-4.0 settings and re-add the handler with the desired path and verbs.

You just need to add the following lines to your web.config:

<system.webServer>
  <security>
    <requestFiltering>
      <verbs allowUnlisted="false">
        <add verb="GET" allowed="true" />
        <add verb="POST" allowed="true" />
        <add verb="DELETE" allowed="true" />
        <add verb="PUT" allowed="true" />
      </verbs>
    </requestFiltering>
  </security>
  <modules>
    <remove name="WebDAVModule" />
  </modules>
  <handlers>
    <remove name="WebDAV" />
  </handlers>
</system.webServer>

Not working for me. I hosted WordPress at mysite.com/wordpress/index.php/wp-json/wp/v2/posts/649?force=true. I try to delete but I get 405 Method Not Allowed. Is this version-specific? It seems like I see this same answer all over the web, but on IIS 8.0 it just causes a 500 internal server error. Maybe we need a new solution.
$.ajax({
    url: this.href + "?linkid=" + $(link).data("linkid"),
    cache: false,
    type: 'DELETE',
    // data: { linkid: $(link).data("linkid") },
    beforeSend: function () { /* doing something in UI */ },
    complete: function () { /* doing something in UI */ },
    success: function (data) { /* doing something in UI */ },
    error: function () { /* doing something in UI */ }
});

Do you have any explanation why a DELETE call can't have form data? Locally it had form data and worked fine. @Neal yep, you are right. Post your comment as an answer so that I can accept it. @Neal - There is nothing in the HTTP 1.1 spec that forbids or even suggests that a DELETE request should not have a message body. Can you provide a link indicating otherwise?

asp.net mvc - Nuget give this error "ps1 cannot be loaded because runn...

I guess the specific package you are trying to install needs to run a PowerShell script, and for some reason PowerShell execution is disabled on your machine. You can search Google for "how to enable powershell" for a complete guide, but generally it goes like this: open up a PowerShell command window (just search for "powershell" after pressing the Windows Start button). This issue does not affect the server you are planning to install your application on. My execution policy is already RemoteSigned... but I still received this error... Worked like a charm! @RosdiKasim Same here, the solution did not work for me. Any idea how to tackle this?

asp.net mvc - How do I enable ssl for all controllers in mvc applicati...

RegisterGlobalFilters FiltersConfig

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new RequireHttpsAttribute());
    }
}

Fiddler is doing a man-in-the-middle scenario to snoop the traffic, and Fiddler can do this when you enable "Decrypt HTTPS traffic" in Fiddler's options.
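The jQuery $.ajax DELETE call above works around the missing-body problem by moving the id into the query string. That URL construction is language-independent; a small Python sketch of the same idea (function and parameter names are illustrative):

```python
from urllib.parse import urlencode, urlsplit

def delete_url(href, linkid):
    # Append linkid as a query parameter, mirroring
    # this.href + "?linkid=" + $(link).data("linkid")
    sep = '&' if urlsplit(href).query else '?'
    return href + sep + urlencode({'linkid': linkid})

print(delete_url('https://example.com/links/remove', 42))
# https://example.com/links/remove?linkid=42
```

The client would then issue the DELETE against that URL with no request body at all, which is what finally got past the server in this thread.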
asp.net mvc - Password is visible when i post the form in https with m...

This is an old question, but it was still in the top 3 results on searches for "web api application insights owin". After lots of searching, and not a lot of answers that didn't require us to write our own middleware or explicitly instrument everything, we came across an extension package that made things super simple:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseApplicationInsights();
        // rest of the config here...
    }
}

<TelemetryInitializers>
  <!-- other initializers.. -->
  <Add Type="ApplicationInsights.OwinExtensions.OperationIdTelemetryInitializer, ApplicationInsights.OwinExtensions"/>
</TelemetryInitializers>

Then retrieve it from OWIN middleware with Request.GetOwinContext().Get<TelemetryClient>();

asp.net mvc - How do I enable Application Insights server telemetry on...

AI uses an HttpModule to collect information on begin request and send it on end request. As described here, OWIN/Katana uses middlewares to execute logic at different stages. As most of AI's auto-collection logic is internal, you cannot reuse it in your middleware, but you can instrument your code yourself: create a TelemetryClient from your code and start sending Requests, Traces, and Exceptions (as described here).

So basically you are saying that it's not possible to integrate AI into a Web API project that uses OWIN/Katana without code modifications? I'd really like to avoid that, since what we need now is just temporary profiling to detect and fix the slowdown, and then remove AI from the projects again. If code modifications are needed, it means we will have to deploy an entirely new version to production just to diagnose the problem, which is a big hassle at the moment.

AI is not a profiler. It is an SDK for code instrumentation. There is out-of-the-box support for ASP.NET applications that use the regular IIS stack. If onBegin and onEnd are not called, AI code is not invoked.
I understand that, but our current scenario would greatly benefit from the Application Insights monitor telemetry in a live environment. If we go the code approach now, it means we will have to come up with a plan for how to manage the keys across all the different environments, how to set them, where in the code this will need to be, etc. Unfortunately we don't have time for these tasks right now and we need to solve the problem as quickly as possible, so it's very sad that this process does not work. Are you from the dev team? Have you considered supporting Katana out of the box somehow?

The biggest problem is the lack of available documentation on this subject. It smells to me like the AI team is expecting the community to pick up the slack around OWIN/Katana and AI integration, and I kind of get that. For me, the problem was discovering that AI would 'just work' for my traditional Web APIs and not for my OWIN ones.

asp.net mvc - How do I enable Application Insights server telemetry on...

Are you sure your database server is configured, the Service Broker is enabled, and your user has all the permissions required? In order to use SqlDependency, Service Broker is required, so you have to enable Service Broker on the database and check the minimum database permissions required.

asp.net mvc - My connection string is not working for signalR - Stack ...

Below is our implementation of an OWIN middleware for Application Insights.
/// <summary> /// Extensions to help adding middleware to the OWIN pipeline /// </summary> public static class OwinExtensions { /// <summary> /// Add Application Insight Request Tracking to the OWIN pipeline /// </summary> /// <param name="app"><see cref="IAppBuilder"/></param> public static void UseApplicationInsights(this IAppBuilder app) => app.Use(typeof(ApplicationInsights)); } /// <summary> /// Allows for tracking requests via Application Insight /// </summary> public class ApplicationInsights : OwinMiddleware { /// <summary> /// Allows for tracking requests via Application Insight /// </summary> /// <param name="next"><see cref="OwinMiddleware"/></param> public ApplicationInsights(OwinMiddleware next) : base(next) { } /// <summary> /// Tracks the request and sends telemetry to application insights /// </summary> /// <param name="context"><see cref="IOwinContext"/></param> /// <returns></returns> public override async Task Invoke(IOwinContext context) { // Start Time Tracking var sw = new Stopwatch(); var startTime = DateTimeOffset.Now; sw.Start(); await Next.Invoke(context); // Send tracking to AI on request completion sw.Stop(); var request = new RequestTelemetry( name: context.Request.Path.Value, startTime: startTime, duration: sw.Elapsed, responseCode: context.Response.StatusCode.ToString(), success: context.Response.StatusCode >= 200 && context.Response.StatusCode < 300 ) { Url = context.Request.Uri, HttpMethod = context.Request.Method }; var client = new TelemetryClient(); client.TrackRequest(request); } } Looks pretty good, clean and simple. I don't see any key property though, how does the AI server knows which application to add the telemetry to? Is it based on that Name parameter on the telemetry request? @julealgon The instrumentation key is read from the ApplicationInsights.config file. I tried this solution, and it worked perfectly! asp.net mvc - How do I enable Application Insights server telemetry on... 
The [RequireHttps] attribute is inherited, so you could create a base controller, apply the attribute to that, and then derive all your controllers from that base.

[RequireHttps]
public abstract class BaseController : Controller {}

public class HomeController : BaseController {}
public class FooController : BaseController {}

That would be even more work than adding the attribute to each controller; I was looking for less work. Maybe some general attribute setting or something? Not a lot of work: a simple find-and-replace of "Controller : Controller" with "Controller : BaseController" across the project in Visual Studio should do the trick. OK, well, it would be almost the same amount of work as adding the attribute above each controller; nonetheless I was looking for a different solution, if there is one...

asp.net mvc - How do I enable ssl for all controllers in mvc applicati...

The UI is a bit different in the newer versions of Windows Server. Here is where you have to enable ASP.NET in order to get it working on IIS.
/* protj.c
   The 'j' protocol.

   Copyright (C) 1992, 1994, ...
*/

#if USE_RCS_ID
const char j_rcsid[] = "$Id: protj.c,v 1.9 2002/03/05 19:10:41 ian Rel $";
#endif

#include <ctype.h>
#include <errno.h>

#include "uudefs.h"
#include "uuconf.h"
#include "conn.h"
#include "trans.h"
#include "system.h"
#include "prot.h"

/* The 'j' protocol.

   The 'j' protocol is a wrapper around the 'i' protocol, which avoids
   the use of certain characters, such as XON and XOFF.

   Each 'j' protocol packet begins with a '^' character, followed by a
   two byte encoded size giving the total number of bytes in the packet.
   The first byte is HIGH, the second byte is LOW, and the number of
   bytes is (HIGH - 32) * 64 + (LOW - 32), where 32 <= HIGH < 127 and
   32 <= LOW < 96 (i.e., HIGH and LOW are printable ASCII characters).
   This is followed by a '=' character.  The next two bytes are the
   number of data bytes in the packet, using the same encoding.  This is
   followed by a '@' character, and then that number of data bytes.  The
   remaining bytes in the packet are indices of bytes which must be
   transformed, followed by a trailing '~' character.

   The indices are encoded in the following overly complex format.
   Each byte index is two bytes long.  The first byte in the index is
   INDEX-HIGH and the second is INDEX-LOW.  If 32 <= INDEX-HIGH < 126,
   the byte index refers to the byte at position
   (INDEX-HIGH - 32) * 32 + INDEX-LOW % 32 in the actual data, where
   32 <= INDEX-LOW < 127.  If 32 <= INDEX-LOW < 64, then 128 must be
   added to the indexed byte.  If 64 <= INDEX-LOW < 96, then the indexed
   byte must be exclusive or'red with 32.  If 96 <= INDEX-LOW < 127,
   both operations must be performed.  If INDEX-HIGH == 126, then the
   byte index refers to the byte at position (INDEX-LOW - 32) * 32 + 31,
   where 32 <= INDEX-LOW < 126.  128 must be added to the byte, and it
   must be exclusive or'red with 32.  This unfortunately requires a
   special test (when encoding INDEX-LOW must be checked for 127; when
   decoding INDEX-HIGH must be checked for 126).
   It does, however, permit the byte indices field to consist
   exclusively of printable ASCII characters.

   The maximum value for a byte index is (125 - 32) * 32 + 31 == 3007,
   so that is the maximum number of data bytes permitted.  Since it is
   convenient to have each 'j' protocol packet correspond to each 'i'
   protocol packet, we restrict the 'i' protocol accordingly.

   Note that this encoding method assumes that we can send all printable
   ASCII characters.  */

/* The first byte of each packet.  I just picked these values randomly,
   trying to get characters that were perhaps slightly less likely to
   appear in normal text.  */
#define FIRST '\136'

/* The fourth byte of each packet.  */
#define FOURTH '\075'

/* The seventh byte of each packet.  */
#define SEVENTH '\100'

/* The trailing byte of each packet.  */
#define TRAILER '\176'

/* The length of the header.  */
#define CHDRLEN (7)

/* Get a number of bytes encoded in a two byte length at the start of a
   packet.  */
#define CGETLENGTH(b1, b2) (((b1) - 32) * 64 + ((b2) - 32))

/* Set the high and low bytes of a two byte length at the start of a
   packet.  */
#define ISETLENGTH_FIRST(i) ((i) / 64 + 32)
#define ISETLENGTH_SECOND(i) ((i) % 64 + 32)

/* The maximum packet size we support, as determined by the byte
   indices.  */
#define IMAXPACKSIZE ((125 - 32) * 32 + 31)

/* Amount to offset the bytes in the byte index by.  */
#define INDEX_OFFSET (32)

/* Maximum value of INDEX-LOW, before offsetting.  */
#define INDEX_MAX_LOW (32)

/* Maximum value of INDEX-HIGH, before offsetting.  */
#define INDEX_MAX_HIGH (94)

/* The set of characters to avoid.  */
static char *zJavoid;

/* The number of characters to avoid.  */
static size_t cJavoid;

/* A buffer used when sending data.  */
static char *zJbuf;

/* The end of the undecoded data in abPrecbuf.  */
static int iJrecend;

/* Local functions.
*/ static boolean fjsend_data P((struct sconnection *qconn, const char *zsend, size_t csend, boolean fdoread)); static boolean fjreceive_data P((struct sconnection *qconn, size_t cneed, size_t *pcrec, int ctimeout, boolean freport)); static boolean fjprocess_data P((size_t *pcneed)); /* Start the protocol. We first send over the list of characters to avoid as an escape sequence, starting with FIRST and ending with TRAILER. There is no error checking done on this string. */ boolean fjstart (qdaemon, pzlog) struct sdaemon *qdaemon; char **pzlog; { size_t clen; char *zsend; int b; size_t cbuf, cgot; char *zbuf; size_t i; /* Send the characters we want to avoid to the other side. */ clen = strlen (zJavoid_parameter); zsend = zbufalc (clen + 3); zsend[0] = FIRST; memcpy (zsend + 1, zJavoid_parameter, clen); zsend[clen + 1] = TRAILER; zsend[clen + 2] = '\0'; if (! fsend_data (qdaemon->qconn, zsend, clen + 2, TRUE)) { ubuffree (zsend); return FALSE; } ubuffree (zsend); /* Read the characters the other side wants to avoid. */ while ((b = breceive_char (qdaemon->qconn, cIsync_timeout, TRUE)) != FIRST) { if (b < 0) { if (b == -1) ulog (LOG_ERROR, "Timed out in 'j' protocol startup"); return FALSE; } } cbuf = 20; zbuf = zbufalc (cbuf); cgot = 0; while ((b = breceive_char (qdaemon->qconn, cIsync_timeout, TRUE)) != TRAILER) { if (b < 0) { ubuffree (zbuf); if (b == -1) ulog (LOG_ERROR, "Timed out in 'j' protocol startup"); return FALSE; } if (cgot + 1 >= cbuf) { char *znew; cbuf += 20; znew = zbufalc (cbuf); memcpy (znew, zbuf, cgot); ubuffree (zbuf); zbuf = znew; } zbuf[cgot] = b; ++cgot; } zbuf[cgot] = '\0'; /* Merge the local and remote avoid bytes into one list, translated into bytes. 
*/ cgot = cescape (zbuf); clen = strlen (zJavoid_parameter); zJavoid = zbufalc (clen + cgot + 1); memcpy (zJavoid, zJavoid_parameter, clen + 1); cJavoid = cescape (zJavoid); for (i = 0; i < cgot; i++) { if (memchr (zJavoid, zbuf[i], cJavoid) == NULL) { zJavoid[cJavoid] = zbuf[i]; ++cJavoid; } } ubuffree (zbuf); /* We can't avoid ASCII printable characters, since the encoding method assumes that they can always be sent. If it ever turns out to be important, a different encoding method could be used, perhaps keyed by a different FIRST character. */ if (cJavoid == 0) { ulog (LOG_ERROR, "No characters to avoid in 'j' protocol"); return FALSE; } for (i = 0; i < cJavoid; i++) { if (zJavoid[i] >= 32 && zJavoid[i] <= 126) { ulog (LOG_ERROR, "'j' protocol can't avoid character '\\%03o'", zJavoid[i]); return FALSE; } } /* If we are avoiding XON and XOFF, use XON/XOFF handshaking. */ if (memchr (zJavoid, '\021', cJavoid) != NULL && memchr (zJavoid, '\023', cJavoid) != NULL) { if (! fconn_set (qdaemon->qconn, PARITYSETTING_NONE, STRIPSETTING_EIGHTBITS, XONXOFF_ON)) return FALSE; } /* Let the port settle. */ usysdep_sleep (2); /* Allocate a buffer we use when sending data. We will probably never actually need one this big; if this code is ported to a computer with small amounts of memory, this should be changed to increase the buffer size as needed. */ zJbuf = zbufalc (CHDRLEN + IMAXPACKSIZE * 3 + 1); zJbuf[0] = FIRST; zJbuf[3] = FOURTH; zJbuf[6] = SEVENTH; /* iJrecend is the end of the undecoded data, and iPrecend is the end of the decoded data. At this point there is no decoded data, and we must initialize the variables accordingly. */ iJrecend = iPrecend; iPrecend = iPrecstart; /* Now do the 'i' protocol startup. */ return fijstart (qdaemon, pzlog, IMAXPACKSIZE, fjsend_data, fjreceive_data); } /* Shut down the protocol. 
*/ boolean fjshutdown (qdaemon) struct sdaemon *qdaemon; { boolean fret; fret = fishutdown (qdaemon); ubuffree (zJavoid); ubuffree (zJbuf); return fret; } /* Encode a packet of data and send it. This copies the data, which is a waste of time, but calling fsend_data three times (for the header, the body, and the trailer) would waste even more time. */ static boolean fjsend_data (qconn, zsend, csend, fdoread) struct sconnection *qconn; const char *zsend; size_t csend; boolean fdoread; { char *zput, *zindex; const char *zfrom, *zend; char bfirst, bsecond; int iprecendhold; boolean fret; zput = zJbuf + CHDRLEN; zindex = zput + csend; zfrom = zsend; zend = zsend + csend; /* Optimize for the common case of avoiding two characters. */ bfirst = zJavoid[0]; if (cJavoid <= 1) bsecond = bfirst; else bsecond = zJavoid[1]; while (zfrom < zend) { char b; boolean f128, f32; int i, ihigh, ilow; b = *zfrom++; if (b != bfirst && b != bsecond) { int ca; char *za; if (cJavoid <= 2) { *zput++ = b; continue; } ca = cJavoid - 2; za = zJavoid + 2; while (ca-- != 0) if (*za++ == b) break; if (ca < 0) { *zput++ = b; continue; } } if ((b & 0x80) == 0) f128 = FALSE; else { b &=~ 0x80; f128 = TRUE; } if (b >= 32 && b != 127) f32 = FALSE; else { b ^= 0x20; f32 = TRUE; } /* We must now put the byte index into the buffer. The byte index is encoded similarly to the length of the actual data, but the byte index also encodes the operations that must be performed on the byte. The first byte in the index is the most significant bits. If we only had to subtract 128 from the byte, we use the second byte directly. If we had to xor the byte with 32, we add 32 to the second byte index. If we had to perform both operations, we add 64 to the second byte index. However, if we had to perform both operations, and the second byte index was 31, then after adding 64 and offsetting by 32 we would come up with 127, which we are not permitted to use. 
Therefore, in this special case we set the first byte of the index to 126 and put the original first byte into the second byte position instead. This is why we could not permit the high byte of the length of the actual data to be 126. We can get away with the switch because both the value of the second byte index (31) and the operations to perform (both) are known. */ i = zput - (zJbuf + CHDRLEN); ihigh = i / INDEX_MAX_LOW; ilow = i % INDEX_MAX_LOW; if (f128 && ! f32) ; else if (f32 && ! f128) ilow += INDEX_MAX_LOW; else { /* Both operations had to be performed. */ if (ilow != INDEX_MAX_LOW - 1) ilow += 2 * INDEX_MAX_LOW; else { ilow = ihigh; ihigh = INDEX_MAX_HIGH; } } *zindex++ = ihigh + INDEX_OFFSET; *zindex++ = ilow + INDEX_OFFSET; *zput++ = b; } *zindex++ = TRAILER; /* Set the lengths into the buffer. zJbuf[0,3,6] were set when zJbuf was allocated, and are never changed thereafter. */ zJbuf[1] = ISETLENGTH_FIRST (zindex - zJbuf); zJbuf[2] = ISETLENGTH_SECOND (zindex - zJbuf); zJbuf[4] = ISETLENGTH_FIRST (csend); zJbuf[5] = ISETLENGTH_SECOND (csend); /* Send the data over the line. We must preserve iPrecend as discussed in fjreceive_data. */ iprecendhold = iPrecend; iPrecend = iJrecend; fret = fsend_data (qconn, zJbuf, (size_t) (zindex - zJbuf), fdoread); iJrecend = iPrecend; iPrecend = iprecendhold; /* Process any bytes that may have been placed in abPrecbuf. */ if (fret && iPrecend != iJrecend) { if (! fjprocess_data ((size_t *) NULL)) return FALSE; } return fret; } /* Receive and decode data. This is called by fiwait_for_packet. We need to be able to return decoded data between iPrecstart and iPrecend, while not losing any undecoded partial packets we may have read. We use iJrecend as a pointer to the end of the undecoded data, and set iPrecend for the decoded data. iPrecend points to the start of the undecoded data. 
*/ static boolean fjreceive_data (qconn, cineed, pcrec, ctimeout, freport) struct sconnection *qconn; size_t cineed; size_t *pcrec; int ctimeout; boolean freport; { int iprecendstart; size_t cjneed; size_t crec; int cnew; iprecendstart = iPrecend; /* Figure out how many bytes we need to decode the next packet. */ if (! fjprocess_data (&cjneed)) return FALSE; /* As we long as we read some data but don't have enough to decode a packet, we try to read some more. We decrease the timeout each time so that we will not wait forever if the connection starts dribbling data. */ do { int iprecendhold; size_t cneed; if (cjneed > cineed) cneed = cjneed; else cneed = cineed; /* We are setting iPrecend to the end of the decoded data for the 'i' protocol. When we do the actual read, we have to set it to the end of the undecoded data so that any undecoded data we have received is not overwritten. */ iprecendhold = iPrecend; iPrecend = iJrecend; if (! freceive_data (qconn, cneed, &crec, ctimeout, freport)) return FALSE; iJrecend = iPrecend; iPrecend = iprecendhold; /* Process any data we have received. This will set iPrecend to the end of the new decoded data. */ if (! fjprocess_data (&cjneed)) return FALSE; cnew = iPrecend - iprecendstart; if (cnew < 0) cnew += CRECBUFLEN; if ((size_t) cnew > cineed) cineed = 0; else cineed -= cnew; --ctimeout; } while (cnew == 0 && crec > 0 && ctimeout > 0); DEBUG_MESSAGE1 (DEBUG_PROTO, "fjreceive_data: Got %d decoded bytes", cnew); *pcrec = cnew; return TRUE; } /* Decode the data in the buffer, optionally returning the number of bytes needed to complete the next packet. */ static boolean fjprocess_data (pcneed) size_t *pcneed; { int istart; istart = iPrecend; while (istart != iJrecend) { int i, iget; char ab[CHDRLEN]; int cpacket, cdata, chave; int iindex, iendindex; /* Find the next occurrence of FIRST. If we have to skip some garbage bytes to get to it, zero them out (so they don't confuse the 'i' protocol) and advance iPrecend. 
This will save us from looking at them again. */ if (abPrecbuf[istart] != FIRST) { int cintro; char *zintro; size_t cskipped; cintro = iJrecend - istart; if (cintro < 0) cintro = CRECBUFLEN - istart; zintro = memchr (abPrecbuf + istart, FIRST, (size_t) cintro); if (zintro == NULL) { bzero (abPrecbuf + istart, (size_t) cintro); istart = (istart + cintro) % CRECBUFLEN; iPrecend = istart; continue; } cskipped = zintro - (abPrecbuf + istart); bzero (abPrecbuf + istart, cskipped); istart += cskipped; iPrecend = istart; } for (i = 0, iget = istart; i < CHDRLEN && iget != iJrecend; ++i, iget = (iget + 1) % CRECBUFLEN) ab[i] = abPrecbuf[iget]; if (i < CHDRLEN) { if (pcneed != NULL) *pcneed = CHDRLEN - i; return TRUE; } cpacket = CGETLENGTH (ab[1], ab[2]); cdata = CGETLENGTH (ab[4], ab[5]); /* Make sure the header has the right magic characters, that the data is not larger than the packet, and that we have an even number of byte index characters. */ if (ab[3] != FOURTH || ab[6] != SEVENTH || cdata > cpacket - CHDRLEN - 1 || (cpacket - cdata - CHDRLEN - 1) % 2 == 1) { istart = (istart + 1) % CRECBUFLEN; continue; } chave = iJrecend - istart; if (chave < 0) chave += CRECBUFLEN; if (chave < cpacket) { if (pcneed != NULL) *pcneed = cpacket - chave; return TRUE; } /* Figure out where the byte indices start and end. */ iindex = (istart + CHDRLEN + cdata) % CRECBUFLEN; iendindex = (istart + cpacket - 1) % CRECBUFLEN; /* Make sure the magic trailer character is there. */ if (abPrecbuf[iendindex] != TRAILER) { istart = (istart + 1) % CRECBUFLEN; continue; } /* We have a packet to decode. The decoding process is simpler than the encoding process, since all we have to do is examine the byte indices. We zero out the byte indices as we go, so that they will not confuse the 'i' protocol. 
*/ while (iindex != iendindex) { int ihigh, ilow; boolean f32, f128; int iset; ihigh = abPrecbuf[iindex] - INDEX_OFFSET; abPrecbuf[iindex] = 0; iindex = (iindex + 1) % CRECBUFLEN; ilow = abPrecbuf[iindex] - INDEX_OFFSET; abPrecbuf[iindex] = 0; iindex = (iindex + 1) % CRECBUFLEN; /* Now we must undo the encoding, by adding 128 and xoring with 32 as appropriate. Which to do is encoded in the low byte, except that if the high byte is the special value 126, then the low byte is actually the high byte and both operations are performed. */ f128 = TRUE; f32 = TRUE; if (ihigh == INDEX_MAX_HIGH) iset = ilow * INDEX_MAX_LOW + INDEX_MAX_LOW - 1; else { iset = ihigh * INDEX_MAX_LOW + ilow % INDEX_MAX_LOW; if (ilow < INDEX_MAX_LOW) f32 = FALSE; else if (ilow < 2 * INDEX_MAX_LOW) f128 = FALSE; } /* Now iset is the index from the start of the data to the byte to modify; adjust it to an index in abPrecbuf. */ iset = (istart + CHDRLEN + iset) % CRECBUFLEN; if (f128) abPrecbuf[iset] |= 0x80; if (f32) abPrecbuf[iset] ^= 0x20; } /* Zero out the header and trailer to avoid confusing the 'i' protocol, and update iPrecend to the end of decoded data. */ for (i = 0, iget = istart; i < CHDRLEN && iget != iJrecend; ++i, iget = (iget + 1) % CRECBUFLEN) abPrecbuf[iget] = 0; abPrecbuf[iendindex] = 0; iPrecend = (iendindex + 1) % CRECBUFLEN; istart = iPrecend; } if (pcneed != NULL) *pcneed = CHDRLEN + 1; return TRUE; }
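The two-byte length encoding described in the header comment — (HIGH - 32) * 64 + (LOW - 32), with both bytes kept printable — is easy to check outside the C code. A small Python sketch of just that encoding (not part of the UUCP sources; it mirrors the CGETLENGTH and ISETLENGTH_* macros):

```python
def set_length(n):
    # Mirror ISETLENGTH_FIRST / ISETLENGTH_SECOND: split n into a
    # printable-ASCII (HIGH, LOW) pair
    return (n // 64 + 32, n % 64 + 32)

def get_length(high, low):
    # Mirror CGETLENGTH
    return (high - 32) * 64 + (low - 32)

# Round-trip every representable length: 32 <= HIGH < 127, 32 <= LOW < 96
for n in range(94 * 64 + 64):
    high, low = set_length(n)
    assert get_length(high, low) == n
    assert 32 <= high < 127 and 32 <= low < 96

# The byte-index limit from the comment: (125 - 32) * 32 + 31 == 3007
IMAXPACKSIZE = (125 - 32) * 32 + 31
print(IMAXPACKSIZE)  # 3007
```

Running this confirms that every length up to the protocol's maximum survives a round trip through two printable bytes, which is the whole point of the encoding.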
Java custom class loader example
February 09, 2009 23:14:15
Last update: February 09, 2009 23:14:15

This class loader tries to load the requested class on its own first, and delegates to the parent only when a java.lang.SecurityException is thrown (which happens when it tries to load core Java classes such as java.lang.String). The classes are loaded from CLASSPATH through the getResourceAsStream call. It's important to note that when a class is loaded with a certain class loader, all classes referenced from that class are also loaded through the same class loader (unless another class loader is specifically requested).

package demo;

import java.io.*;

public class MyClassLoader extends ClassLoader {
    private static final int BUFFER_SIZE = 8192;

    protected synchronized Class loadClass(String className, boolean resolve)
            throws ClassNotFoundException {
        log("Loading class: " + className + ", resolve: " + resolve);

        // 1. is this class already loaded?
        Class cls = findLoadedClass(className);
        if (cls != null) {
            return cls;
        }

        // 2. get class file name from class name
        String clsFile = className.replace('.', '/') + ".class";

        // 3. get bytes for class
        byte[] classBytes = null;
        try {
            InputStream in = getResourceAsStream(clsFile);
            byte[] buffer = new byte[BUFFER_SIZE];
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int n = -1;
            while ((n = in.read(buffer, 0, BUFFER_SIZE)) != -1) {
                out.write(buffer, 0, n);
            }
            classBytes = out.toByteArray();
        } catch (IOException e) {
            log("ERROR loading class file: " + e);
        }

        if (classBytes == null) {
            throw new ClassNotFoundException("Cannot load class: " + className);
        }

        // 4. turn the byte array into a Class
        try {
            cls = defineClass(className, classBytes, 0, classBytes.length);
            if (resolve) {
                resolveClass(cls);
            }
        } catch (SecurityException e) {
            // loading core java classes such as java.lang.String
            // is prohibited, throws java.lang.SecurityException.
            // delegate to parent if not allowed to load class
            cls = super.loadClass(className, resolve);
        }

        return cls;
    }

    private static void log(String s) {
        System.out.println(s);
    }
}

Test code:

package demo;

public class TestClassLoader {
    public static void main(String[] args) throws Exception {
        MyClassLoader loader1 = new MyClassLoader();

        // load demo.Base64
        Class clsB64 = Class.forName("demo.Base64", true, loader1);
        System.out.println("Base64 class: " + clsB64);
        if (Base64.class.equals(clsB64)) {
            System.out.println("Base64 loaded through custom loader is the same as that loaded by System loader.");
        } else {
            System.out.println("Base64 loaded through custom loader is NOT same as that loaded by System loader.");
        }

        // call the main method in Base64
        java.lang.reflect.Method main = clsB64.getMethod("main", new Class[] {String[].class});
        main.invoke(null, new Object[]{ new String[]{} });
    }
}

where I used the public domain Base64 code for the test.
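The `Base64.class.equals(clsB64)` check above demonstrates that class identity includes the loader that defined the class. Python's import machinery can illustrate the same point: loading the same module file through two independent loads yields two distinct class objects. This is an analogy only, not part of the Java example:

```python
import importlib.util
import os
import tempfile

# Write a tiny module to disk so we can load it twice
src = "class Marker:\n    pass\n"
path = os.path.join(tempfile.mkdtemp(), "marker_mod.py")
with open(path, "w") as f:
    f.write(src)

def load(name):
    # Each call builds a fresh module object from the same file,
    # playing the role of a separate class loader
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

m1, m2 = load("marker1"), load("marker2")
# Same source, two independent loads: the classes are not the same object,
# just like a class defined by a custom ClassLoader vs. the system loader.
print(m1.Marker is m2.Marker)  # False
```

In both languages, "which loader defined this class" is part of the class's identity, which is why the Java test prints the "NOT same" branch.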
Brian Withun writes:
> Is it possible for DTML to create or modify a form value?

It is possible to extend any dictionary, especially "REQUEST.form". You use the dictionary "update" method as in

  <dtml-call "REQUEST.form.update({'a':1, 'b':2})">

It will, however, not work as you might expect. This is because the DTML namespace *does not* look into "REQUEST.form" but into the dictionary "REQUEST.other". During "REQUEST" construction, the "other" dictionary is updated with the "form" dictionary. Therefore, you see form variables, too, in the DTML namespace. However, later updates to "REQUEST.form" are not automatically propagated to "REQUEST.other" (and therefore not seen via the DTML namespace). You can use

  <dtml-call "REQUEST.other.update({'a':1, 'b':2})">

or (as someone else suggested) "REQUEST.set".

Dieter Maurer
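The behaviour Dieter describes — "other" is seeded from "form" once at REQUEST construction, so later updates to "form" are invisible — is ordinary dictionary-copy semantics. A few lines of plain Python demonstrate it (the variable names are illustrative, not Zope internals):

```python
form = {'name': 'Brian'}
other = {}
other.update(form)       # REQUEST construction: other seeded from form

form.update({'a': 1})    # later REQUEST.form.update(...)
print('a' in other)      # False: the update is not propagated to other

other.update({'a': 1})   # REQUEST.other.update(...) works
print(other['a'])        # 1
```

Because the DTML namespace reads from the equivalent of `other`, only the second update is visible to DTML.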
Making fields readonly by group I want to control whether a field is readonly or not according to the user's group. This is a common request on the forum but the only solution offered is to hack a copy of fields_get() into your module. As far as I can see attrs won't work, alternative forms doesn't work either (or is there a way?), record rules only work on the record level (duh!) and likewise menuitem access controls. Will the new modifiers allow this? Is there any other non-hacky way? Question information - Language: - English Edit question - Status: - Expired - Assignee: - No assignee Edit question - Last query: - 2012-11-21 - Last reply: - 2012-12-07 This question was reopened - 2012-11-21 by Martin Collins Hi, There is another way: use a function field that returns a value based on the user's group. I did the following as a proof-of-concept, a while back: Declare the function field: The function (which needs tidying) returns 'User' or 'Manager' depending on whether they are in a couple of groups: def _check_ res = {} for i in ids: if not i: continue # Get the HR Officer/Manager id's group_obj = self.pool. manager_ids = group_obj. # Get the user and see what groups he/she is in user_obj = self.pool. user = user_obj.browse(cr, uid, uid, context=context) group_ids = [] for grp in user.groups_id: group_ if (manager_ids[0] in group_ids) or (manager_ids[1] in group_ids): res[i] ='Manager' else: res[i] = 'User' return res James, that works! Just have to test 'permissions' in an attr. And because it's a function field it doesn't clutter up the table. It could easily be adapted to test other properties too. All in all a very neat hack. There is a caveat though: if the 'permissions' field is not included on a form it doesn't get calculated. You can hide it from some users using groups= and I guess setting store=True will work too. Setting attrs=" Hm. I don't think this permissions function field solves the problem. 
This way, you get some information about which kind of permission the user has. But that's just a coded version of using the user groups directly. And it won't protect against write accesses: any user could still use XMLRPC to write to the database.

You can certainly assign an access rule at field level in OpenERP; please refer to the example below.

    'name': fields.char('Name', size=128, required=True, select=True,
        write=[

Tony Gu

On Wed, Nov 16, 2011 at 6:25 AM, Martin Collins wrote:
> Question #178779 on OpenERP Server changed:
> https:/
>
> Martin Collins posted a new comment:
> The 'permissions' field can be used to make the field read-only in the view.
> It would need to be declared in the view, e.g.
>
>     <field name="permissions" invisible="1" />
>     <field name="other_field" attrs="
>
> It's probably not an efficient approach, but it works. Stopping the field
> from being written to via XMLRPC is a separate issue, but easily solvable
> by overriding the 'create' and 'write' methods in the Python code.

> you can certainly assign access rule on field level in OpenERP, please
> refer to the below example.
>
> 'name': fields.char('Name', size=128, required=True, select=True,
> write=[

Really? That's a much better solution if it works. I've not seen any documentation on this or examples of this approach in the code. Ideally, this is the way I think that OpenERP *should* work.

> 'name': fields.char('Name', size=128, required=True, select=True,
> write=[

Do you have any more info on this? I'm not getting it.

OK, I see 3 ways, with code added.

1st way:

    'client_order_ref': fields.

2nd way: changes to the functions fields_view_get and/or fields_get: set the readonly attribute on particular fields depending on the uid's groups.

...but it would be a great feature to configure this in OpenERP directly as the Administrator user.
3rd way: add a permissions functional field, then use attrs attributes on fields to control it. https:/

But the ideal would be the possibility for an Administrator user to configure it directly. What do you think about this feature? For the most experienced among you, what do you think about the technical compatibility with the current core workings of OE 6.1? Are there any logical core limits to adding this feature?

The '3rd way' no longer works in 6.1 and I still can't get the '1st way' working. SerpentCS have a blog posting about this in 7.0, but from the description I cannot see how it would be useful even if I was running 7. Anyone come up with a working solution yet?

This question was expired because it remained in the 'Open' state without activity for the last 15 days.

I got the '3rd way' working again. Before, I was not explicitly setting the return value to False. It worked in 6.0, but in 6.1 you need to set it.

I'd like to use the 1st way on 6.0, but after I implemented it, it didn't have any effect. What I did is change the .py file, something like:

    'name': fields.

Am I missing something here?

It would be very convenient if the attrs would also work based on the groups. It doesn't seem to work in 6.0 and 6.1?

Hi! "attrs" would kind of work, but only on view definitions. If you access the data via a modified view (*1), another module (*1) or xmlrpc (*2), this would be circumvented.

*1 You would need administrator rights to do so, so no problem, as you can't hide anything from admins either way.
*2 For XMLRPC, user rights apply, so this is your problem.

Other than that, I think a modified fields_get() looks like the only way at this moment (at least in v5 or 6.0; I've yet to start with 6.1).
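Several lines of the function-field snippet earlier in this thread are cut off in the archive, so here is a hypothetical, self-contained sketch of the logic it computes. The function name and arguments are my own illustration, not code from the thread; in real OpenERP code the group ids would come from ORM lookups.

```python
# Hypothetical sketch of the 'permissions' function-field logic:
# map each record id to 'Manager' or 'User' depending on whether the
# current user belongs to any of the privileged groups.

def compute_permissions(record_ids, user_group_ids, manager_group_ids):
    """Return {record_id: 'Manager' or 'User'} for every record.

    user_group_ids    -- ids of the groups the current user belongs to
    manager_group_ids -- ids of the groups that should see the field as
                         writable (e.g. HR Officer / HR Manager)
    """
    res = {}
    is_manager = any(g in user_group_ids for g in manager_group_ids)
    for rid in record_ids:
        if not rid:
            continue
        res[rid] = 'Manager' if is_manager else 'User'
    return res
```

In a view you would then hide this field and test it in attrs, for example attrs="{'readonly': [('permissions', '!=', 'Manager')]}", which matches the approach the later replies describe.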
https://answers.launchpad.net/openobject-server/+question/178779
13 January 2012 10:48 [Source: ICIS news]

SINGAPORE (ICIS)--A widespread strike, now on its fifth day in Nigeria, has stalled imports of Asian soap noodles.

"Our usual customers have told us they are unable to buy from us at the moment because everything is in a mess," said a major producer of soap noodles.

But Asian soap noodle sellers said they are not very worried that the market will be significantly affected.

"Soap is something people need every day. The demand will come back after things stabilize in the country," a trader said.

Prices are not likely to be affected. "Prices still remain more dependent on feedstock costs," said an Asian producer.

Soap noodles in southeast Asia are mainly made from crude palm oil (CPO), crude coconut oil (CNO) and crude palm kernel oil (CPKO), whose prices have been stable to soft.
http://www.icis.com/Articles/2012/01/13/9523465/nigeria-strikes-stall-imports-of-asia-soap-noodles.html
How to Apply train_test_split()

00:00 Getting started with train_test_split(). You need to import train_test_split() and numpy before you can use them, so let's start with the import statements.

00:18 Now that you have both imported, you can use numpy to create a dataset and train_test_split() to split that dataset. The function accepts lists, NumPy arrays, pandas DataFrames, and similar array-like objects, and returns the split pieces as the same types, or SciPy sparse matrices if appropriate. arrays is the sequence of lists, NumPy arrays, pandas DataFrames, or similar array-like objects that hold the data that you want to split.

00:54 All these objects together make up the dataset, and they must be of the same length. In supervised machine learning applications, you'll typically work with two such sequences: a two-dimensional array with the inputs, typically known as x, and a one-dimensional array of outputs, typically known as y.

01:16 options are the optional keyword arguments that you can use to get the desired behavior. train_size is the number that defines the size of the training set.

01:26 If you provide a float, then it must be in the range 0 to 1, and it will define the share of the dataset used for training.

01:33 If you provide an int, then it will represent the total number of training samples. The default value is None. test_size is the number that defines the size of the test set.

01:44 It's very similar to train_size. You should provide either train_size or test_size. If neither is given, then the default share of the dataset that will be used for testing is 0.25, or 25%. random_state is the object that controls randomization during splitting.

02:04 It can either be an int or an instance of RandomState. The default value is None. shuffle is a Boolean that determines whether or not to shuffle the dataset before applying the split. stratify is an array-like object that, if not None, determines how to use a stratified split.

02:26 Now it's time to try data splitting. You'll start by creating a simple dataset to work with.
This dataset will contain the inputs in the two-dimensional array x and the outputs in the one-dimensional array y.

02:46 Here, you can see NumPy's arange() being used, which is extremely convenient for generating arrays based on numerical ranges. You'll also use .reshape() to modify the shape of the array returned by arange() and get a two-dimensional data structure.

03:10 Here, you can see the x and y NumPy arrays that were created. You can split both input and output datasets with a single function call, as seen onscreen.

03:27 Given two sequences, such as x and y here, train_test_split() performs the split and returns four sequences, which in this case will be NumPy arrays, in this order: x_train, the training part of the first sequence x.

03:43 x_test, the test part of the first sequence x. y_train, the training part of the second sequence y. And finally, y_test, the test part of the second sequence y.

04:00 You probably got different results from the ones you see onscreen. This is because dataset splitting is random by default, and the result differs each time you run the function. However, this often isn't what you want.

04:14 In the next part of the course, you'll see how you can modify your code so that you get consistent, reproducible results.
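The calls described in the transcript come from scikit-learn, but the splitting behaviour itself is easy to sketch with plain NumPy. The function below is my simplified illustration (handling only shuffling and a float test_size), not scikit-learn's actual implementation:

```python
import numpy as np

def simple_train_test_split(x, y, test_size=0.25, random_state=None, shuffle=True):
    """Toy version of train_test_split for two same-length arrays."""
    n = len(x)
    n_test = int(round(n * test_size))       # float share -> sample count
    idx = np.arange(n)
    if shuffle:
        rng = np.random.RandomState(random_state)
        rng.shuffle(idx)                     # seeded -> reproducible split
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return x[train_idx], x[test_idx], y[train_idx], y[test_idx]

x = np.arange(1, 25).reshape(12, 2)          # 12 samples, 2 features
y = np.arange(12)
x_train, x_test, y_train, y_test = simple_train_test_split(
    x, y, test_size=0.25, random_state=4)
# 12 samples with test_size=0.25 -> 3 test rows and 9 training rows
```

Passing a fixed random_state makes the result reproducible, which foreshadows the next part of the course; leaving it as None gives a different split on every run, just as described above.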
https://realpython.com/lessons/application-train-test-split/
Unlike many other modern languages, parsing XML in Java requires more than one line of code. XML traversing using XPath takes even more code, and I find this unfair and annoying. I'm a big fan of XML and use it in almost every Java application. Some time ago, I decided to put all of that XML-to-DOM parsing code into a small library: jcabi-xml. Put simply, the library is a convenient wrapper for JDK-native DOM manipulations. That's why it is small and dependency-free. With the following example, you can see just how simple XML parsing can be:

import com.jcabi.xml.XML;
import com.jcabi.xml.XMLDocument;

XML xml = new XMLDocument(
  "<root><a>hello</a><b>world!</b></root>"
);

Now we have an object of interface XML that can traverse the XML tree and convert it back to text. For example:

// outputs "hello"
System.out.println(xml.xpath("/root/a/text()").get(0));
// outputs the entire XML document
System.out.println(xml.toString());

Method xpath() allows you to find a collection of text nodes or attributes in the document and then convert them to a collection of strings, using an XPath query:

// outputs "hello" and "world"
for (String text : xml.xpath("/root/*/text()")) {
  System.out.println(text);
}

Method nodes() enables the same XPath search operation, but instead returns a collection of instances of the XML interface:

// outputs "<a>hello</a>" and "<b>world</b>"
for (XML node : xml.nodes("/root/*")) {
  System.out.println(node);
}

Besides XML parsing, printing and XPath traversing, jcabi-xml also provides XSD validation and XSL transformations. I'll write about those features in the next post :)

PS. Also, check this: XML/XPath Matchers for Hamcrest.
http://www.yegor256.com/2014/04/24/java-xml-parsing-and-traversing.html
IterParseFilter

XPath-like filtering of ElementTree's iterparse event stream

- Background
- SAX processing
- Using Stackless to convert callbacks into an event stream
- ElementTree's "iterparse" hybrid event stream parser
- Using XPath selectors
- The query syntax
- Callback-oriented XPath-filtering parsing
- Listening to events
- Implementation
- The Code!
- Bonus - tag structure of an XML document

Background

Here's a nice little XML document. I'll make it a StringIO object so I can handle it as an in-memory file:

from cStringIO import StringIO

I want to get the titles from the document. There are two types of titles: the title for the document and the job title. It would be nice if the two elements had different names, but such is life. The easiest standard way to do this in Python 2.5 is with the built-in ElementTree module:

>>> import xml.etree.ElementTree
>>> from xml.etree import ElementTree as ET
>>> tree = ET.parse(f)
>>> tree.findall("person/title")
[<Element title at 1094b20>, <Element title at 1094c10>]
>>> for title in tree.findall("person/title"):
...     print repr(title.text)
...
'Porch Light Switcher'
'Bottle Washer, 3rd class'
>>>

This is fine for small data. What if I want to parse a huge document? In this case, suppose I had 100,000 people working for me, each with a set of standardized fields of 10K each. That's not realistic. A better example is where I had to parse bibliographic records from a large (140MB) PubMed file. I expect most people reading this have needed to parse large data-oriented XML files. (By "data-oriented" I mean files containing many records with very similar internal structures.)

SAX processing

Traditionally there are two ways to process XML: SAX-style, with callbacks for each event, and DOM-style, building a single data structure representing the entire XML file. These are not the only ways and I'll discuss some hybrid approaches in a bit. ElementTree is DOM-style and reads everything into memory.
If there's a large file then the traditional answer is to be SAX-oriented. Here's the above code rewritten for SAX.

>>> from xml.sax import handler
>>> class ShowPeopleTitle(handler.ContentHandler):
...     def startDocument(self):
...         self.where = []
...         self.in_person_title = False
...     def startElement(self, tag, attrs):
...         self.where.append(tag)
...         if self.where[-2:] == ["person", "title"]:
...             self.in_person_title = True
...             self.text = ""
...     def characters(self, s):
...         if self.in_person_title:
...             self.text = self.text + s
...     def endElement(self, tag):
...         if self.where[-2:] == ["person", "title"]:
...             print repr(self.text)
...             del self.text
...             self.in_person_title = False
...         del self.where[-1]
...
>>> import xml.sax
>>> show = ShowPeopleTitle()
>>> f.seek(0)
>>> xml.sax.parse(f, show)
u'Porch Light Switcher'
u'Bottle Washer, 3rd class'
>>>

A scary thing is that I wrote ShowPeopleTitle without a rewrite; I got it right on the first attempt. I have a parser-generator named Martel which generated SAX events. I spent a lot of time writing callback handlers for it. Most people don't like callbacks. They are harder to think about and it feels like your software is no longer in control of things. I think that's one reason why people feel uncomfortable with Twisted. There are some usability problems with how SAX does callbacks. You can have only one handler attached to the processor, which makes it trickier to have specialized handlers for different parts of the document. (You have to implement an intermediate dispatcher.) If you have multiple handlers then each has to listen to and ignore essentially all of the events, causing a rather large performance hit.

Using Stackless to convert callbacks into an event stream

SAX is a special case of "event programming". Each XML construct generates an event.
"<person>" generates a "startElement" event with arguments ("person", {}), where the {} is the dictionary of element attributes; the close tag "</person>" generates an "endElement" event with arguments ("person",); etc. These events can be stream-oriented instead. By "stream" I mean that it works like a Python iterator; you can do a for-loop over it. There is a trivial way to convert any callback-based system into an event stream. In standard Python you have to use threads, which are a bit clumsy and don't scale well if you have more than a hundred or so. Instead, I'll show what it's like with Stackless Python, which adds some very cool features to Python. The general concepts are the same in this case between a thread solution and Stackless'.

#!/usr/local/bin/spython
# This uses Stackless Python
import stackless
import sys
import xml.sax
from xml.sax import handler
from cStringIO import StringIO

# Parse using the normal SAX parser.
# Catch any errors and forward those to the event stream
def parse(f, handler, event_channel):
    try:
        xml.sax.parse(f, handler)
    except:
        event_channel.send( ('error', sys.exc_info()) )

# Forward parse events through the event channel
class StacklessHandler(handler.ContentHandler):
    def __init__(self, event_channel):
        self.event_channel = event_channel
    def startDocument(self):
        self.event_channel.send( ('startDocument', None) )
    def endDocument(self):
        self.event_channel.send( ('endDocument', None) )
    def startElement(self, tag, attrs):
        self.event_channel.send( ('start', (tag, attrs)) )
    def endElement(self, tag):
        self.event_channel.send( ('end', tag) )
    def characters(self, text):
        self.event_channel.send( ('characters', text) )

def itersax(f):
    # Create the handler along with a channel used to return events
    event_channel = stackless.channel()
    stackless_handler = StacklessHandler(event_channel)
    # Start the parsing in another tasklet
    stackless.tasklet(parse)(f, stackless_handler, event_channel)
    while 1:
        x = event_channel.receive()
        if x[0] == "error":
            raise x[1][0], x[1][1], x[1][2]
        yield x
        if x[0] == "endDocument":
            break

def main():
    where = []
    in_person_title = False
    # loop over the SAX event stream
    for (event, args) in itersax(f):
        if event == "start":
            where.append(args[0])
            if where[-2:] == ["person", "title"]:
                in_person_title = True
                text = ""
        elif event == "characters":
            if in_person_title:
                text = text + args
        elif event == "end":
            if where[-2:] == ["person", "title"]:
                print repr(text)
                text = None
                in_person_title = False
            del where[-1]

if __name__ == "__main__":
    stackless.schedule(main)()
    stackless.run()

As you can see, the handler code is essentially the same. It still acts on events, but in this case the user code controls the main loop, or appears to do so. There's no need to store state through class variables because you can use local variables. Incidentally, local variables are much faster to access in Python than instance variables.

ElementTree's "iterparse" hybrid event stream parser

There are hybrid approaches, and ElementTree includes a nice one. It uses a feed-oriented SAX parser to create the ElementTree. While it's creating the tree it can pass parts of the tree back to the caller, as an event stream. This is more easily seen than explained. In the following I've asked iterparse to generate "start" and "end" elements. The default only generates "end" elements:

>>> from xml.etree import ElementTree as ET
>>> f.seek(0)
>>> for (event, ele) in ET.iterparse(f, ("start", "end")):
...     print repr(event), repr(ele.tag), repr(ele.text)
...
'start' 'document' '\n '
'start' 'title' 'People working for me'
'end' 'title' 'People working for me'
'start' 'person' '\n '
'start' 'name' 'Jim'
'end' 'name' 'Jim'
'start' 'title' 'Porch Light Switcher'
'end' 'title' 'Porch Light Switcher'
'start' '{}homepage' None
'end' '{}homepage' None
'end' 'person' '\n '
'start' 'person' '\n '
'start' 'name' 'Joe'
'end' 'name' 'Joe'
'start' 'title' 'Bottle Washer, 3rd class'
'end' 'title' 'Bottle Washer, 3rd class'
'start' '{}nick' 'Joe-Joe'
'end' '{}nick' 'Joe-Joe'
'end' 'person' '\n '
'end' 'document' '\n '
>>>

It's roughly the same as my Stackless-based parser, except that it returns ElementTree nodes instead of SAX event data. I was surprised to see that the 'start' element already contains the text after the start tag. This means it has already processed the following character events up to the next element's start. Here's how to get the text for each person's title using iterparse:

>>> f.seek(0)
>>> where = []
>>> for (event, ele) in ET.iterparse(f, ("start", "end")):
...     if event == "start":
...         where.append(ele.tag)
...     elif event == "end":
...         if where[-2:] == ["person", "title"]:
...             print repr(ele.text)
...         del where[-1]
...
'Porch Light Switcher'
'Bottle Washer, 3rd class'
>>>

This code is simpler than the previous cases because ElementTree is iterating over the tree while the tree is being built. The 'end' element will contain all the information about the tree underneath it. For example:

>>> f.seek(0)
>>> for (event, ele) in ET.iterparse(f):
...     # only asked for 'end' events, and there is only one type of 'person' tag
...     if ele.tag == 'person':
...         print repr(ele.find("name").text), "is a", repr(ele.find("title").text)
...         ele.clear()
...
'Jim' is a 'Porch Light Switcher'
'Joe' is a 'Bottle Washer, 3rd class'
>>>

See the ele.clear() in this example? That's the ElementTree method to remove all children from the element; to prune the tree while processing it.
It's very handy when parsing record-oriented documents because in most cases, once you've processed the record you don't need it any more and likely want to discard it so it doesn't take up memory. By the way, the event-based methods (both stream and callback) are also nice because processing can occur while fetching data. For example, you can start processing the response from a URL request as the data comes in, rather than waiting until the response is fully received and processed.

Using XPath selectors

None of the event-based schemes were as simple as using ElementTree's "findall" method. In most examples I had to track the element stack myself to know that the "title" was under "person" and not under "document". Years ago I heard about a streaming XML protocol which allowed XPath-based filters, just like "findall" takes an XPath. I decided to implement something like that myself. I've given the module the ever-so-graceful name 'iterparse_filter'. Use an IterParseFilter to define which fields you are interested in (via a very limited subset of XPath). In this case, "person/title":

>>> f.seek(0)
>>> import iterparse_filter
>>> filter = iterparse_filter.IterParseFilter()
>>> filter.iter_end("person/title")
>>> for (event, ele) in filter.iterparse(f):
...     print repr(ele.text)
...
'Porch Light Switcher'
'Bottle Washer, 3rd class'
>>>

If I wanted all foaf elements I can filter on that namespace:

>>> f.seek(0)
>>> filter = iterparse_filter.IterParseFilter(
...     namespaces = {"foaf": ""} )
>>> filter.iter_end("foaf:*")
>>> for (event, ele) in filter.iterparse(f):
...     print repr(ele.tag)
...
'{}homepage'
'{}nick'
>>>

(I also support Clark notation, so I could have written the previous xpath as: filter.iter_end("{}*"))

I can set up multiple filters:

>>> f.seek(0)
>>> filter = iterparse_filter.IterParseFilter()
>>> filter.iter_end("/document/title")
>>> filter.iter_end("/document/person")
>>> for (event, ele) in filter.iterparse(f):
...     if ele.tag == "title":
...         print "the title is", repr(ele.text)
...     else:
...         print "The person's name is", repr(ele.find("name").text)
...
the title is 'People working for me'
The person's name is 'Jim'
The person's name is 'Joe'
>>>

and have filters allowing the start element events:

>>> f.seek(0)
>>> filter = iterparse_filter.IterParseFilter()
>>> filter.iter_start("/document/person")
>>> filter.iter_end("person/title")
>>> for (event, ele) in filter.iterparse(f):
...     if event == "start":  # can only be a person
...         print "Person:"
...     else:
...         print " ", repr(ele.text)  # must be the title
...
Person:
  'Porch Light Switcher'
Person:
  'Bottle Washer, 3rd class'
>>>

The query syntax

The supported query syntax is:

- simple element names - 'person', 'name', 'language'
- '*' - match any element in any namespace
- Clark notation namespaces - '{}img', '{}empty_namespace'
- '{namespace}*' - match any element in the given namespace
- names with a namespace prefix - 'das2:SOURCES', 'py:for'
- 'ns:*' - match any element in the given namespace defined by 'ns'
- '/' as a path separator - 'html/body' means any 'body' element which is a child of an 'html' element
- '/' as the root indicator - '/html/body' means only the 'body' elements which are immediate children of the root element, which must be an 'html' element
- '//' as a descendant operator - 'body//img' matches any 'img' element under a 'body' element

Callback-oriented XPath-filtering parsing

What I don't like about this new code is the need to figure things out again. The filtering code did all the work to figure out that an element in a given location should be placed in the event stream, only to have the body of the code figure things out again. It should be less complicated because of extra guarantees made by the filter for the event types and elements returned, but it's still a bit cumbersome. I've instead been experimenting with a hybrid iterator/callback-handler combination.
I can also register a callback handler based on event type and xpath location match. In the following I'll call the 'save_titles' function for every end event on a 'person/title' element:

>>> f.seek(0)
>>> filter = iterparse_filter.IterParseFilter()
>>> def save_titles(event, ele, state):
...     state.append(ele.text)
...
>>> titles = []
>>> filter.on_end("person/title", save_titles)
>>> filter.handler_parse(f, titles)
>>> titles
['Porch Light Switcher', 'Bottle Washer, 3rd class']
>>>

This uses a variant method named 'handler_parse' which takes an optional 'state' parameter. This is not used by the iterparse_filter code except to pass it to the callbacks. It should be used as a place to store information, in this case the list of titles. (There are other ways to solve this problem: pass in a local function which gets the list from its closure, or use a class method storing the data in the instance.)

Listening to events

Here is the list of listenable events:

- start-document - occurs before any file processing
- end-document - occurs after all file processing
- start - start of an element
- end - end of an element
- start-default - only called if there are no 'start' handlers for an element
- end-default - only called if there are no 'end' handlers for an element
- start-ns - define a new namespace prefix
- end-ns - end of scope for the matching namespace prefix

>>> g = StringIO("""<list>
... <item x='1' />
... <item y='2' />
... <strange/>
... <item z='3' />
... </list>""")
>>> filter = iterparse_filter.IterParseFilter()
>>> def unknown_node(event, ele, state):
...     raise AssertionError("Unknown element %r" % (ele.tag,))
...
>>> def ignore(event, ele, state):
...     pass
...
>>> filter = iterparse_filter.IterParseFilter()
>>> filter.on_end("/list/item", ignore)
>>> filter.on_end_default("/list/*", unknown_node)
>>> filter.handler_parse(g)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "iterparse_filter.py", line 251, in handler_parse
    return self.create_fa().handler_parse(file, state)
  File "iterparse_filter.py", line 344, in handler_parse
    for x in self.parse(file, state):
  File "iterparse_filter.py", line 391, in parse
    end_handler(event, ele, state)
  File "<stdin>", line 2, in unknown_node
AssertionError: Unknown element 'strange'
>>>

though to report it nicely I need the position information, which I don't think I can get from ElementTree.

I can register multiple handlers for a given node. Here are the total number of 'title' elements and the total number of 'person/title' elements:

>>> f.seek(0)
>>> filter = iterparse_filter.IterParseFilter()
>>> class Counter(object):
...     def __init__(self): self.count = 0
...     def __call__(self, event, tag, state): self.count += 1
...
>>> title_counter = Counter()
>>> job_title_counter = Counter()
>>> filter.on_start("title", title_counter)
>>> filter.on_start("person/title", job_title_counter)
>>> filter.handler_parse(f)
>>> title_counter.count
3
>>> job_title_counter.count
2
>>>

Start handlers are called in the order they are registered. If a handler H1 was registered before H2 then H1 will be called before H2. End handlers are called in reverse order, so H2 will be called before H1. The 'iterparse' and 'handler_parse' methods use the same underlying 'parse' method, which supports both styles of parsing. You can request an iterator-style token stream and register callbacks, though I'm not sure that's a useful feature.

Implementation

I started by making an NFA to evaluate the XPath, but didn't like the overhead in pure Python nor the amount of backtracking needed for something like //A//B. One of the first times I used XPath I tried something like that and the XPath engine took a long time to find what I thought was an easy answer. I posted to the list and was told my query was naive and didn't reflect knowledge of how XPath worked. I thought it was a perfectly sensible query.
I knew going down this route would cause problems. I then started to make a DFA. Fast, but potentially exponential space instead of exponential time. Pretty standard knowledge, because it's the same problem with regular expressions. Indeed, in another project I had hacked a solution based on converting my XPath into a regexp and the tag stack into a string, and using Python's regular expression engine to do the test for me. But it was slow and not something I wanted to do for every step. I worked on the DFA solution for a bit and realized there was a bug in my code. Looking around the web I came across a developerWorks article by Benoit Marchal about his "Handler Compiler". He ran into the same problem I did. His article gave some pointers to other tools. The most helpful was the PPT presentation "From Searching Text to Querying XML Streams" by Dan Suciu. It was very clear and explained exactly the NFA vs. DFA tradeoff. Best of all, it used the phrase "Compute the DFA Lazily" and explained why that works. In short, data-oriented XML has a relatively simple structure. There may only be a few hundred unique stacks in a document. Each only needs to be tested once. (This would be more complicated if I supported more of XPath.) The overhead of my regular expression-based tester is amortized away, and it all becomes simple. Well, except for the actual code to convert XPaths and tag stacks into regular expressions and strings, but it's not as hard as manipulating finite state machines. I'm not sure I have the conversion code correct. I don't have much experience with XPath outside of ElementTree and I kinda faked it. Let me know of any problems. For future work, read "Optimizing The Lazy DFA Approach for XML Stream Processing" by Danny Chen and Raymond K. Wong and see xmltk: An XML Toolkit for Lightweight XML Stream Processing.

The Code!

I consider this code still rather experimental, so I'm leaving it here in this writings page instead of making a dedicated page for it.
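As an aside, the "compute the DFA lazily" idea is easy to demonstrate outside of XML. The toy class below is my own sketch, not code from iterparse_filter: it tests a tag stack against a tiny XPath-like pattern using a regular expression, and caches the verdict per unique stack so each distinct stack is only tested once.

```python
import re

class LazyStackMatcher:
    """Test tag stacks against an XPath-like pattern, caching per stack.

    Supports only a toy subset: 'a/b' (child) and 'a//b' (descendant).
    Stacks are tuples of tag names, root first.
    """
    def __init__(self, xpath):
        # 'person/title' -> the stack must end with '/person/title'
        # 'body//img'    -> '/body/', then zero or more tags, then 'img'
        pattern = "/" + xpath.replace("//", "/(?:[^/]+/)*")
        self._regexp = re.compile(".*" + pattern + "$")
        self._cache = {}   # tag-stack tuple -> bool; the "lazy DFA"
    def matches(self, stack):
        try:
            return self._cache[stack]          # seen this stack before
        except KeyError:
            result = self._regexp.match("/" + "/".join(stack)) is not None
            self._cache[stack] = result        # pay the regexp cost once
            return result

m = LazyStackMatcher("person/title")
m.matches(("document", "person", "title"))   # computed via the regexp
m.matches(("document", "title"))             # no match
m.matches(("document", "person", "title"))   # answered from the cache
```

Since a record-oriented document only produces a handful of distinct tag stacks, the regular-expression cost is paid once per stack and the per-element work becomes a dictionary lookup, which is the whole point of the lazy approach.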
Download iterparse_filter.py

TODO:
- Experience on usefulness
- Should I support all three of 'iterparse', 'handler_parse', and the joint 'parse'?
- If yes, have special purpose implementations of the first two?
- Better name for 'handler_parse'?
- Too many supported events?
- Are the 'default' ones really needed?
- Is the supported XPath query language correct?
- Better documentation, its own page
- Push the filtering into C to minimize callback overhead

Bonus - tag structure of an XML document

The parsing methods from IterParseFilter create an intermediate "FilterAutomata" class which does the actual processing. I split it into two classes because the former allows mutable data structures while the latter does not. It caches handler information in a local state machine (a dfa) and changing the xpath definitions may corrupt that information. If you parse the same file format several times you might want to create and use the FilterAutomata yourself. This preserves the state machine, making for much less initialization overhead after the first call. You can also use it to view the internal tag structure of the XML document. Here's one way:

import iterparse_filter

def _show_structure(dfa, depth):
    items = dfa.items()
    items.sort()
    for (k, v) in items:
        print " "*depth + k + ":"
        _show_structure(v[0], depth+1)

def show_structure(fa):
    dfa = fa.dfa
    _show_structure(dfa, 0)

filter = iterparse_filter.IterParseFilter()
fa = filter.create_fa()
filename = "/Users/dalke/nbn_courses/nbn_web/ecoli.xml"
fa.handler_parse(open(filename))
show_structure(fa)

The "ecoli.xml" file is a BLAST-XML document.
Its internal structure is:

BlastOutput:
 BlastOutput_db:
 BlastOutput_iterations:
  Iteration:
   Iteration_hits:
    Hit:
     Hit_accession:
     Hit_def:
     Hit_hsps:
      Hsp:
       Hsp_align-len:
       Hsp_bit-score:
       Hsp_evalue:
       Hsp_gaps:
       Hsp_hit-frame:
       Hsp_hit-from:
       Hsp_hit-to:
       Hsp_hseq:
       Hsp_identity:
       Hsp_midline:
       Hsp_num:
       Hsp_positive:
       Hsp_qseq:
       Hsp_query-frame:
       Hsp_query-from:
       Hsp_query-to:
       Hsp_score:
     Hit_id:
     Hit_len:
     Hit_num:
   Iteration_iter-num:
   Iteration_query-ID:
   Iteration_query-def:
   Iteration_query-len:
   Iteration_stat:
    Statistics:
     Statistics_db-len:
     Statistics_db-num:
     Statistics_eff-space:
     Statistics_entropy:
     Statistics_hsp-len:
     Statistics_kappa:
     Statistics_lambda:
 BlastOutput_param:
  Parameters:
   Parameters_expect:
   Parameters_filter:
   Parameters_gap-extend:
   Parameters_gap-open:
   Parameters_matrix:
 BlastOutput_program:
 BlastOutput_query-ID:
 BlastOutput_query-def:
 BlastOutput_query-len:
 BlastOutput_reference:
 BlastOutput_version:
http://www.dalkescientific.com/writings/diary/archive/2006/11/06/iterparse_filter.html
In keeping with my recent HTTP theme, I want to provide an update on a change to HTTP content encoding support in AIR 2.0.3, which has just been released. The HTTP protocol permits servers and clients to agree on encoding a document for transfer. For text and XML documents, this can significantly reduce transfer time, as they typically compress well.

Prior to the AIR 2.0.3 update, AIR supported the gzip and deflate encodings only on Mac OS and Linux. On Mac OS, this was a result of using the default OS HTTP stack, which supports those encodings by default. On Linux, which has no default HTTP stack, we implemented direct support for these options. On Windows, the capability was not available: AIR uses the OS HTTP stack, but Windows did not support these encodings prior to Vista. Therefore, AIR could not depend on this capability being present and did not enable it. Developers could work around this by managing the HTTP content encoding header themselves and performing the decompression in ActionScript, although that's not a particularly fun option to implement.

As of the AIR 2.0.3 release, we've added support for gzip and deflate encoding for all versions of Windows, thus bringing it to parity with Mac OS and Linux. This change applies to all applications using the 2.0 (and later) namespaces. Applications using the 2.0 namespace will automatically benefit from this change when run on the 2.0.3 and later runtime; there is no need to re-publish.
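The manual workaround mentioned above (managing the content encoding header yourself and decompressing the response body in your own code) boils down to ordinary gzip/zlib handling. Here is a hedged sketch in Python rather than ActionScript, purely to illustrate the decoding step; the function name is my own, not part of any AIR API:

```python
import gzip
import zlib

def decode_body(body, content_encoding):
    """Decompress an HTTP response body according to its
    Content-Encoding header ('gzip', 'deflate', or identity)."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    if content_encoding == "deflate":
        # HTTP 'deflate' is normally zlib-wrapped DEFLATE data
        return zlib.decompress(body)
    return body   # no encoding: pass through unchanged

# Round trip: what a server would send for 'Content-Encoding: gzip'
payload = b"<doc>" + b"text compresses well " * 50 + b"</doc>"
wire = gzip.compress(payload)
assert len(wire) < len(payload)          # compression actually helped
assert decode_body(wire, "gzip") == payload
```

The point of the 2.0.3 change is that the runtime now does this negotiation and decoding for you on every platform, so application code no longer needs a step like this.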
http://blogs.adobe.com/simplicity/category/air/page/3
CC-MAIN-2014-42
refinedweb
259
66.03
By default, JSF validation error messages are prepended with the client ID of the component. This makes the error message completely unacceptable. In this article, we will learn how to progressively improve the error message.

The Default Behavior

Let us say that we have a controller class as follows.

@Named
@RequestScoped
public class HelloController {
    @Size(min=5, message="Name must be at least 5 letters long")
    private String name;
    //Getter setter etc...
}

And an input text for the "name" property in a form:

Name: <h:inputText id="name" value="#{helloController.name}"/>
<h:message for="name"/>

By default, the validation error message will look something like this:

formId:name: Name must be at least 5 letters long

Of course, this is far from ideal.

Using the "label" Attribute

A simple way to improve the error message is to add a label attribute to the component.

Name: <h:inputText id="name" label="Name" value="#{helloController.name}"/>
<h:message for="name"/>

Now, the error message will look like this:

Name: Name must be at least 5 letters long

That's better. Still, it looks like a hack.

Override Default JSF Message

The root of the problem is that JSF by default attempts to prepend either the client ID or the label to the message, followed by a ":". The structure of the message is defined using the following message key:

javax.faces.validator.BeanValidator.MESSAGE={1}: {0}

Where {0} is the error message and {1} is the client ID. To fix the problem properly, we will need to override this message. First, create your own message properties file. You can call it anything; we will call it ApplicationMessages.properties and put it in the root of the source folder. The file will look like this.

javax.faces.validator.BeanValidator.MESSAGE={0}

Then, register the file in faces-config.xml.

<faces-config ...>
    <application>
        <message-bundle>ApplicationMessages</message-bundle>
    </application>
</faces-config>

Now, the error message will look like this:

Name must be at least 5 letters long
https://mobiarch.wordpress.com/2013/07/18/user-friendly-validation-error-message-in-jsf-2/
CC-MAIN-2018-13
refinedweb
290
58.79
http://www.roseindia.net/tutorialhelp/comment/8386
CC-MAIN-2014-10
refinedweb
2,528
65.52
How to make a DIY home alarm system with a Raspberry Pi and a webcam

Convert a simple webcam to a fancy digital peephole viewer with motion detection features:

- Set up the Raspberry Pi as a 24/7 webcam server and stream your video over the internet. You can also view your signal remotely, using any mobile device equipped with a browser.
- Set up motion detection and trigger any events you like, such as storing images when motion is detected, uploading the images to a remote FTP server, sending a notification to your computer, receiving an SMS; basically, run any script you like!

Ok, so here's how you do it.

Necessary hardware:

- Raspberry Pi Model B Revision 2.0 (512MB)
- Logitech HD Webcam C270 or a similarly compatible USB webcam (list of compatible webcams here)
- A USB hub with an external power supply
- (optional) a USB extension cable

You can now connect your webcam to the USB hub. You have to use an external USB hub with an independent power supply, as the Raspberry Pi cannot power the webcam by itself. I ended up hiding the webcam within the door (!), inside the space reserved for an extra key mechanism. The webcam's lens is taped exactly at the key slot, so you cannot see it if you are outside.

Give the Pi a static IP address by editing the network interfaces configuration a bit:

address 192.168.1.5
netmask 255.255.255.0
gateway 192.168.1.1
network 192.168.1.0
broadcast 192.168.1.255

Then update the Pi's firmware:

sudo rpi-update

Next you need to upgrade your packages:

sudo apt-get update
sudo apt-get upgrade

Then install the motion package and adjust the relevant options in its configuration file:

- daemon: set to ON to start motion as a daemon service when the Pi boots
- webcam_localhost: set to OFF so that you can access motion from other computers
- stream_port: the port for the video stream (default 8081)
- control_localhost: set to OFF to be able to update parameters remotely via the web config interface
- control_port: the port on which you will access the web config interface (default 8080)
- framerate: number of frames per second to be captured by the webcam. Warning: setting this above 5 fps will hammer your Pi's performance!
- post_capture: specify the number of frames to be captured after motion has been detected.

And here's a simple Python script that I wrote that sends a notification:

# use standard Python logging
import logging

You can do all sorts of things with motion. You can even receive an SMS or have Twilio call you whenever the alarm is tripped! Let me know what you did with motion in the comments!
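Only a fragment of the author's notification script survives above. A minimal sketch of what such a motion-triggered notifier could look like, assuming motion is configured to run it (for example via its on_event_start hook); the sender, recipient, and SMTP host below are placeholders, not values from the original post:

```python
#!/usr/bin/env python
# Hypothetical notifier script: motion runs it when movement is detected,
# optionally passing the path of the saved snapshot as the first argument.
import logging
import smtplib
import sys
from email.message import EmailMessage

logging.basicConfig(level=logging.INFO)

def build_alert(image_path=None):
    """Build the alert email; addresses are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "Motion detected at the front door!"
    msg["From"] = "pi@example.com"
    msg["To"] = "you@example.com"
    body = "The webcam saw movement."
    if image_path:
        body += " Snapshot: %s" % image_path
    msg.set_content(body)
    return msg

def main():
    image_path = sys.argv[1] if len(sys.argv) > 1 else None
    msg = build_alert(image_path)
    logging.info("sending alert: %s", msg["Subject"])
    with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP server
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```

Swapping smtplib for an HTTP call to an SMS gateway gives the Twilio variant mentioned above; the hook and arguments stay the same.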
https://medium.com/@Cvrsor/how-to-make-a-diy-home-alarm-system-with-a-raspberry-pi-and-a-webcam-2d5a2d61da3d
CC-MAIN-2016-22
refinedweb
419
59.74
ePubGenerator instructions yield another error - City Sue, Jan 11, 2011 4:34 PM

In RH8, I am trying an epub output for the first time. I find that the instructions by Ankur Janin assume that all readers will be geeks and understand the nuances that are not explained. r_mobile_d.html). Then I downloaded and unzipped 7za.exe. Where was I supposed to put this? Does that matter? Is the example path given in Step 4 the correct format? (I understand the \\ are wrong and should be a single \.) This is from the script:

var project; var strProjectName =3D ""; // -- modify the path as per the = location=20 of 7zip on your local machine -- // var str7zipLocation =3D=20 "C:\Program _Files\Adobe\Adobe RoboHelp8\RoboHelp\7za.exe"; if(RoboHelp =3D=3D null) = alert("Please=20 launch RoboHelp HTML"); else { project =3D RoboHelp.getCurrentProject(); =

Should there be something after the = sign? Should there be a closing } to pair the opening one on the last line? My confidence in ePub has plummeted.

1. Re: ePubGenerator instructions yield another error - Jeff_Coatsworth, Jan 12, 2011 7:02 AM (in response to City Sue)

Don't take it out on ePUB - this is a case of RH not being really ready to produce ePUB format help output. The 7z program is a type of zip file creator, like WinZip, that ePUB docs use in their creation. It should get installed in some c:\program files\ location other than your Adobe RoboHelp folder. Then you edit the javascript to let RH know where the 7z program is located. If this is too much hassle, you may want to get the new RH9 - it supposedly has ePUB as an SSL option.

2. Re: ePubGenerator instructions yield another error - City Sue, Jan 12, 2011 9:39 AM (in response to Jeff_Coatsworth)

Thanks for explaining about the 7z and where it needs to reside! Important information. Perhaps a wrong location was the reason for my error. I will test this out and let you know if it resolves the issue.

3.
Re: ePubGenerator instructions yield another error - City Sue, Jan 14, 2011 11:24 AM (in response to Jeff_Coatsworth)

Jeff, I tried placing the 7za exe in another location, but I am still getting the RoboHelp HTML error saying, rather cryptically: Expected: }. I have no clue where this } would be missing in the script file. Do you think it has anything to do with the fact that I changed the script file name from ePubGenerator._jsx.mht (as it downloaded) to ePubGenerator.jsx? If not, I guess I will have to recommend that we upgrade to RH9. Thanks,

4. Re: ePubGenerator instructions yield another error - Captiv8r, Jan 14, 2011 11:44 AM (in response to City Sue)

Hi Sue. Perhaps the video linked below will help. Give that a look and see if it helps... Rick

5. Re: ePubGenerator instructions yield another error - Jeff_Coatsworth, Jan 14, 2011 11:59 AM (in response to City Sue)

Yes, it sounds like you are missing something in the script. Try opening it up through RH > Tools > Scripts > New Scripts. On my TCS2.5 install it launches the ExtendScript Toolkit CS5, so you can have a look at the syntax.

6. Re: ePubGenerator instructions yield another error - A25CharacterScreenName, Jan 14, 2011 1:13 PM (in response to City Sue)

This may or may not be a part of the problem. Many years ago Microsoft more or less invented the "Web Archive" file, which is a collection of one or more HTML (and other) files collected into a single file in MIME format; they applied the ".mht" extension to this file. Apparently, you downloaded this file using Internet Explorer (which, AFAIK, is the only browser that creates this format), and selected "Web Archive" as the "Save As" file type. IIRC, a "Web Archive" which consists of a single file is nothing more than the file itself, but my recollection is fuzzy on that.
If you want to eliminate the possibility, download the script again, but tell IE to save it as a text file - then you really can remove the extra ".txt" extension and be sure you have the unaltered script file.

Then I downloaded and unzipped 7za.exe. Where was I supposed to put this? Does that matter? Is the example path given in Step 4 the correct format? (I understand the \\ are wrong and should be a single \).

7za.exe is simply a .zip file manager (and a pretty good one at that). As others have explained, just put it wherever you put your collection of other utility programs. The script simply needs to know where to run it from. The double back slashes are, in fact, not wrong. What 'var str7zipLocation="C:\\etc.' is doing is creating a string variable. Inside literal strings the back slash is an escape character which is used to encode characters that would otherwise be invisible. Because of this, if you really want to use a back slash as a back slash, and not as a way to encode the following character, you must use two slashes. It should also work if you replaced each double back slash with a single forward slash, e.g. "C:/Program Files/7-Zip/7za.exe"; M$Windows accepts either one. This may be the cause of your problem, as \r is the code for a carriage return (start a new line) and \[d] (where [d] is a digit) signals the beginning of an octal representation of a character; one of these problems might be causing a syntax error in the script. All curly braces in Javascript must be balanced. The message you are getting seems to be saying that the script engine encountered an opening brace but not a closing one; an indication that your file is corrupted or incomplete.

[snip] Should there be something after the = sign? Should there be a closing } to pair the opening one on the last line?

Yes, and yes.
The script file I downloaded is 551 lines long, and ends with:

if(deleteRootFolder == true) folder.remove(); if(listOfFiles != null) delete listOfFiles; } }

If this does not match your file, then you probably have a bad download. Start by downloading the script again, as I suggested above. Open it in any text editor and make sure it matches the clues above (551 lines, ending as indicated). Do not make any changes to it, and import it into the Script Explorer pod. Run it as is; you should see a dialog appear asking where to place the output file, then one or more error dialogs, perhaps suggesting that you haven't generated XML output (an undocumented prerequisite) or that it cannot execute 7Za. You will now know that the script is good. Now edit the script to point to your 7Za location and try again (the script really should allow you to browse for it, but this is, after all, a first attempt). The script still may not behave correctly, but at least you can move forward in resolving your site-specific problems.

My confidence in ePub has plummeted.

As others have suggested, don't blame the format (which was developed by a group independent of Adobe) for what may be a flawed attempt to generate it. And I would strongly suggest you don't use Adobe Digital Editions to view the resulting .epub when you finally get one; Calibre is probably the tool you will want to use with .epub files.

7. Re: ePubGenerator instructions yield another error - Captiv8r, Jan 14, 2011 1:17 PM (in response to A25CharacterScreenName)

Hi Lee. Can you please enlighten us on what your reasons are for suggesting that folks avoid using Adobe Digital Editions? Thanks... Rick

8. Re: ePubGenerator instructions yield another error - City Sue, Jan 18, 2011 12:51 PM (in response to Captiv8r)

Thanks to Jeff and Rick for your responses. I was able to run the script, and am at the point of trying to read the output.
Impressed by you taking the time to make a video for me, Rick - that is indeed service! But I have just been laid off along with lots of other people, so I will try to follow the steps in your video for my own curiosity only. Then I will be looking for a new job!!
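The escape-character pitfall discussed in this thread is not specific to ExtendScript; the same rules apply to Python string literals, for example. A small demonstration (the path is made up):

```python
# In a normal string literal, "\r" becomes a carriage return and "\7" an
# octal escape (the BEL character), silently corrupting the path.
broken = "C:\raw_tools\7za.exe"
assert "\r" in broken and "\x07" in broken
assert "\\" not in broken  # no real backslashes survived parsing

# Doubling each backslash keeps them literal, as does using forward slashes.
ok = "C:\\raw_tools\\7za.exe"
assert ok == "C:/raw_tools/7za.exe".replace("/", "\\")
```

This is exactly why the script's path uses "\\": a single backslash before r or a digit is interpreted as an escape, not as a path separator.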
https://forums.adobe.com/message/3410806?tstart=0
CC-MAIN-2018-34
refinedweb
1,383
71.24
eups distrib install lsst_sims -t sims will now get you version 2.3.5 of the lsst_sims stack.

Changes since version 2.3.4:

- sims_movingObjects is no longer a part of lsst_sims. If you want to install sims_movingObjects, you will have to run eups distrib install sims_movingObjects -t sims. This means that lsst_sims no longer depends on oorb or gfortran.
- sims_catUtils now includes a model for M, L, and T dwarf flares. This is accessible from the VariabilityStars mixin in sims_catUtils/python/lsst/sims/catUtils/mixins/VariabilityMixin.py, imported with from lsst.sims.catUtils.mixins import VariabilityStars.
- Various plot- and movie-generating scripts in sims_maf are now Python 3 compatible.

Note: You still need to make sure that your ~/.astropy/config/astropy.cfg file has log_to_file=False in order for the build to pass. This will hopefully be fixed in the next release.

This version is built against the weekly tag w.2017.18 of the DM stack. Note that this distribution is now compatible with numpy 1.12 (while v13.0 was not).

An updated version 2.3.5.1 has been issued. The only change is that one of the unit tests in sims_alertsim, which suffers from a race condition, has been marked as an expectedFailure. If you have successfully installed version 2.3.5, you do not need to do anything. If your installation failed on sims_alertsim, try eups distrib install lsst_sims -t sims again.
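Marking a flaky test as an expectedFailure, as was done for sims_alertsim, works like this in plain Python unittest; the test case below is a made-up stand-in, not the actual sims_alertsim test:

```python
import io
import unittest

class RaceConditionDemo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_flaky(self):
        # Stand-in for the racy sims_alertsim assertion.
        self.assertEqual(1, 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RaceConditionDemo)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

# The failure is recorded as expected, so the run still counts as successful.
print(result.wasSuccessful(), len(result.expectedFailures))
```

This is why a build with such a marked test can pass: the failure is recorded, but does not fail the suite.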
https://community.lsst.org/t/lsst-sims-version-2-3-5-available-via-eups-distrib-install/1843
CC-MAIN-2017-43
refinedweb
258
61.93
I'm working on a game with a 2D pixelated art style, of course there are pixelated fonts out there and I found a nice one for english, but I was hoping for unicode support, specifically for japanese kana and kanji (so if there's a free to use font somewhere then that solves my problem easier, but I've looked around somewhat none seem to include kanji). So I have been trying to manipulate the GUIStyle, GUIText, and 3D text. I've had the most luck with the 3D Text, since it using a texture, however the filtering remains on so it doesn't look clean.. is there a way to set the 3D Text's filter mode to point or even better, do this through the GUI functions? This picture is a comparison of the 3d Text and the pixelated font I'm using through the GUI Answer by Eric5h5 · Jun 18, 2013 at 02:46 PM Click the gear icon on the font and select "create editable copy"; then you can change the filter mode of the font texture to point instead of bilinear. By the way, GUIText and 3DText both use the same method. You'd normally be better off with GUIText unless you actually want the text to exist in 3D space. Thanks! I'll go try this out now! Answer by oidberg · Nov 25, 2015 at 09:21 AM I did this for another purpose, not sure if it helps here: public class SetFontNearest : MonoBehaviour { [SerializeField] Font[] fonts; void Start () { foreach (Font font in fonts) { var mat = font.material; var txtr = mat.mainTexture; txtr.filterMode = FilterMode. Making text look clear 2 Answers Does 3dText support Unicode Texts? 1 Answer Why is setting the parent of a text transform alters its world coordinates? 0 Answers Why aren't asian fonts rendering dynamically? 0 Answers Big text field GUI. 1 Answer
https://answers.unity.com/questions/476659/i-want-pixelated-unicode-text-need-suggestions.html?sort=oldest
CC-MAIN-2020-45
refinedweb
315
66.67
Dynatrace SaaS release notes version 1.194 - Resolved issues

Cluster SaaS (Build 1.194.56)

18 total resolved issues

Cluster

- Dashboard API now returns an indicative error message when a tile bound is missing. (APM-236693)
- Empty "Monitoring off/on" column now hidden on custom extensions settings page. (APM-236485)
- Fixed unwanted renaming of web applications when creating new application detection rules. (APM-235097)
- Valid regex input in method condition again enables "Add" button to save input. (APM-238358)
- Error no longer thrown during validation when putting an empty object. (APM-234065)
- Solved a problem where the waterfall screen did not use the selected end-timeframe when using the new timeframe selector. (APM-236307)
- Invalid metric condition input is no longer possible in the UI. (APM-237535)
- Charts created from MDA, custom service metrics, etc., no longer have any dimensions split by default. (APM-234724)
- Fixed problem with host group settings not being refreshed when editing another host group. (APM-235791)
- Timing-based filters no longer lead to misconfigured calculated service metrics; conditions are now displayed correctly. (APM-237616)
- Metrics in browser monitor events table are now harmonized with the corresponding metrics in expanded view. (APM-233075)
- Service metrics preview works for regex conditions. (APM-238057)
- Increased the cloud application namespace limit from 200 to 5000. (APM-237639)
- Fixed mismatch between reporting and enforcing of DEM units (external synthetic part). (APM-233979)

Code level analytics

- Resolved issue where user could save empty tag condition if dropdown menu for optional tags was empty. (APM-236376)

User Interface

- User Details page no longer crashes when deleting an unfilled filter. (APM-234052)
- Session List behavior improved for mobile view. (APM-228426)
- Sessions that did not convert or bounce no longer drop below zero.
(APM-233226) Update 70 (1.194.70) This is a cumulative update that includes all previously released updates for the 1.194 release. Cluster - Negated metric conditions (e.g., "not contains") no longer match falsely. (APM-240650) Update 72 (1.194.72) This is a cumulative update that includes all previously released updates for the 1.194 release. Cluster - Fixed EC2 metrics with time-series alerting. (APM-241478) - Fixed issue leading to missing events for some VM-specific entities. (APM-236262) - Fixed the problem manager outage caused by problems with high number of findings. (APM-240812)
https://www.dynatrace.com/support/help/whats-new/release-notes/saas/sprint-194/
Posts tagged Silverlight:

- Silverlight 5 Multi Column and Linked Text
- Learn WCF RIA Service: Day 3
- Learn WCF RIA Service: Day 2 — "Building a simple line-of-business application using RIA Services. Last day we had enough of theory on RIA Services. Today let us get our hands dirty and launch Visual Studio. We will keep it simple and easy in flow, since it is our first exercise. Essentially we will …"
- Learn WCF RIA Service: Day 1
- Working with JavaScript in Silverlight 4.0 — "In this post I will show you how to call a JavaScript function from Silverlight. It is very simple and straightforward. Assume you have a JavaScript function on an aspx page (SilverlightTestPage.aspx). To call this JavaScript function on page load of the Silverlight page, you will have to add a namespace and …"
https://debugmode.net/tag/silverlight/
TypeScript 3.8 was released on February 20th, 2020. This version includes changes to the compiler, performance, and editor. In this post, I'm going to review five important changes to the compiler:

- Type-only imports and exports
- export * as ns syntax
- ES2020 private fields
- Top-level await
- JSDoc property modifiers

At the time of this writing, version 3.8.3 is already out. So first, upgrade to the latest version:

- You can upgrade by using NPM, with commands like npm install [email protected] or npm -g upgrade typescript (add the -g option if you're using it globally)
- If you're using Visual Studio, you can do so by downloading it here
- If you're using Visual Studio Code, you can upgrade by modifying either the user settings or the workspace settings
- If you're using Sublime Text, you can upgrade via Package Control

In a terminal window, you can use the following command to confirm you're using the latest version:

tsc --version

Now let's start by reviewing the type-only import/export feature.

Type-only imports and exports

This feature brings new syntax to import and export declarations, along with a new compiler option, importsNotUsedAsValues:

import type { MyType } from "./my-module.js";
// ...
export type { MyType };

This gives you more control over how import statements are handled in the output, which is particularly useful when compiling with the --isolatedModules option, the transpileModule API, or Babel.

By default, TypeScript drops import statements when the imports are only used as types. For example, consider the following:

// lion.ts
export default class Lion {}

// zoo.ts
import Lion from './lion';
let myLion: Lion;

This will be the output when you compile zoo.ts:

// zoo.js
let myLion;

Usually, this won't be a problem. But what if there's a side effect in the Lion module?

// lion.ts
export default class Lion {}
console.log("Here's an important message about lions: ...");

In this case, if the import statement is dropped from the output, the console.log statement will never be executed. The import statement will also be dropped if you declare it like this:

import { TypeA, Type2 } from "./my-module";

However, it will be kept in the output if you declare it like this:

import "./my-module";

A bit confusing, right? Here's another problem. In the following code, can you tell if X is a value or a type?

import { X } from "./my-module.js";
export { X };

Knowing this might be important: Babel and TypeScript's transpileModule API will output code that doesn't work correctly if X is only a type, and TypeScript's isolatedModules flag will generate a warning. We can have a similar problem with exports, where a re-export of a type should be omitted, but the compiler can't tell that we're just re-exporting a type during single-file transpilation.

For TypeScript 3.8, import type and export type make explicit the importing and exporting of types. These are some valid ways of using them:

import type MyType from './my-module';
import type { MyTypeA, MyTypeB } from './my-module';
import type * as Types from './my-module';
export type { MyType };
export type { MyType } from './module';

And here are some invalid ways:

import { type MyType } from './my-module';
import type MyType, { FunctionA } from './my-module';
export { FunctionA, type MyType } from './my-module';

Keep in mind that if the type is used as a value, the compiler will mark this as an error:

import type Lion from './lion';

let myLion: Lion; // Valid

// Invalid: 'Lion' cannot be used as a value because it was imported using 'import type'.
myLion = new Lion();

When using import type, the behavior is to drop the import declaration from the JavaScript file, as usual.
But in TypeScript 3.8, when using a plain import, the behavior can be controlled with the compiler option importsNotUsedAsValues, which can take these values:

- remove (the default), to omit the import declaration
- preserve, to keep the import declaration, useful to execute side effects
- error, like preserve, but adds an error whenever an import could be written as an import type

This way, by adding the option "importsNotUsedAsValues": "preserve" to the tsconfig.json file:

// tsconfig.json
{
  "compilerOptions": {
    // ...
    "importsNotUsedAsValues": "preserve"
  },
  // ...
}

This TypeScript code:

import Lion from './lion';
let myLion: Lion;

Compiles to this JavaScript code:

import './lion';
let myLion;

Export * as ns syntax

TypeScript supports some of the newer ECMAScript 2020 features, such as export * as namespace declarations. Sometimes, it's useful to have something like this:

import * as animals from "./animals.js";
export { animals };

Which exposes all the members of another module as a single member. In ES2020, this can be expressed as one statement:

export * as animals from "./animals";

TypeScript 3.8 supports this syntax. If you configure the module for ES2020:

// tsconfig.json
{
  "compilerOptions": {
    "module": "ES2020",
    // ...
  }
}

TypeScript will output the statement without modifications:

// allAnimals.ts
export * as animals from "./animals";

// allAnimals.js
export * as animals from "./animals";

But if you configure the module with something earlier, for example:

// tsconfig.json
{
  "compilerOptions": {
    "module": "ES2015",
    // ...
  }
}

TypeScript will output these two declarations:

import * as animals_1 from "./animals";
export { animals_1 as animals };

ES2020 private fields

ES2020 also brings a new syntax for private fields. Here's an example:

class Lion {
  #age: number;

  constructor(age: number) {
    this.#age = age;
  }

  getAge() {
    return this.#age;
  }
}

Private fields start with the # character, and just like fields marked with the private keyword, they are scoped to their containing class.
Why the # character? Well, apparently all the other cool characters were already taken or could lead to invalid code. However, there are some rules.

First of all, you cannot use the private modifier and the # character on the same field at the same time (nor the public modifier, although that combination wouldn't make sense anyway). Does this mean that the private modifier is going to disappear eventually? At the time of this writing, there's an open discussion about this, but the current plan is to leave it as it is.

So, which one should you use? Well, it depends on how strict you want to be about privacy. The thing with the private modifier is that it's only recognized by TypeScript, which means that the access restriction is only enforced at compile time; the private constraint is erased from the generated JavaScript code, where the private field can be accessed without problems. On the other hand, the # character will be preserved in the JavaScript code, making the field completely inaccessible outside of the class.

Another rule when using # is that private fields always have to be declared before they're used:

class Lion {
  constructor(age: number) {
    // Error: Property '#age' does not exist on type 'Lion'.
    this.#age = age;
  }
}

Also, notice how you have to reference the private field with this; otherwise, an error will be marked:

class Lion {
  #age: number;
  // ...
  getAge() {
    // The following line throws two errors:
    // 1. Private identifiers are not allowed outside class bodies.
    // 2. Cannot find name '#age'
    return #age;
  }
}

In order to use fields marked with #, you must target ECMAScript 2015 (ES6) or higher:

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES6",
    // ...
  }
}

The reason is that the implementation used to enforce privacy relies on WeakMaps, which can't be polyfilled in a way that doesn't cause memory leaks, and not all runtimes optimize the use of WeakMaps. In contrast, fields with the private modifier work with all targets and are faster.
For example, this TypeScript class:

// lion.ts
class Lion {
  #age: number;

  constructor(age: number) {
    this.#age = age;
  }

  getAge() {
    return this.#age;
  }
}

Outputs this:

// lion.js
var _age;
class Lion {
  constructor(age) {
    _age.set(this, void 0);
    __classPrivateFieldSet(this, _age, age);
  }
  getAge() {
    return __classPrivateFieldGet(this, _age);
  }
}
_age = new WeakMap();

And from the output of the following code, you can see the age property cannot be seen directly outside its class:

const lion = new Lion(5);
console.log(lion);              // 'Lion {}'
console.log(Object.keys(lion)); // '[]'
console.log(lion.getAge());     // 5

Top-level await

In JavaScript, we can use async functions with await expressions to perform asynchronous operations:

async function getLionInformation() {
  let response = await fetch('/animals/lion.json');
  let lion = await response.json();
  return lion;
}
getLionInformation().then((value) => console.log(value));

But await expressions are only allowed in the body of async functions, so we cannot use them in top-level code (which can be useful when using the developer console in Chrome, for example):

// Syntax error
let response = await fetch('/animals/lion.json');
let lion = await response.json();
console.log(lion);

However, top-level await (at this time a Stage 3 proposal for ECMAScript) allows us to use await directly at the top level of a module or a script.
TypeScript 3.8 supports top-level await, and since files with import and export expressions are considered modules, even a simple export {} would be enough to make this syntax work:

let response = await fetch('/animals/lion.json');
let lion = await response.json();
console.log(lion);

export {}

The only restrictions in TypeScript are that:

- The target compiler option must be es2017 or above
- The module compiler option must be esnext or system

JSDoc property modifiers

JSDoc allows us to add documentation comments directly to JavaScript source code so the JSDoc tool can scan the code and generate an HTML documentation website. But more than for documentation purposes, TypeScript uses JSDoc for type-checking JavaScript files. This is possible due to two compiler options:

- allowJs, which allows TypeScript to compile JavaScript files
- checkJs, which is used in conjunction with the option above and allows TypeScript to report errors in JavaScript files

TypeScript 3.8 adds support for three accessibility modifiers:

- @public, which means that the property can be used from anywhere (the default behavior)
- @private, which means that a property can only be used within the class that defines it
- @protected, which means that a property can only be used within the class that defines it and all the derived subclasses

For example:

// lion.js
// @ts-check
class Lion {
  constructor() {
    /** @private */
    this.age = 5;
  }
}

// Error: Property 'age' is private and only accessible within class 'Lion'.
console.log(new Lion().age);

And the @readonly modifier, which ensures that a property is only ever assigned a value during initialization:

// lion.js
// @ts-check
class Lion {
  constructor(ageParam) {
    /** @readonly */
    this.age = ageParam;
  }

  setAge(ageParam) {
    // Error: Cannot assign to 'age' because it is a read-only property
    this.age = ageParam;
  }
}

You can learn more about type-checking JavaScript files here, along with the supported JSDoc tags.
Conclusion

In this post, you have learned about five new features in TypeScript 3.8: type-only imports and exports, the export * as ns syntax, ES2020 private fields, top-level await, and JSDoc property modifiers.

Of course, there are more new features. For the compiler:

- Better directory watching on Linux and watchOptions (more info here)
- "Fast and Loose" incremental checking (more info here)

For the editor (more information here):

- Refactor string concatenations (see this and this other issue)
- Show call hierarchies (see this issue and this pull request)

And some breaking changes:

- Stricter assignability checks to unions with index signatures
- Optional arguments with no inferences are correctly marked as implicitly any
- object in JSDoc is no longer any under noImplicitAny

You can find more information about these breaking changes here.
https://blog.logrocket.com/whats-new-in-typescript-3-8/
ImportError: libboost_python-py26.so.1.40.0: cannot open shared object file: No such file or directory

Question: I have to use Ubuntu 11.04, and on it I have developed Python code to display videos. When I run my Python code it throws the error below:

Traceback (most recent call last):
  File "display.py", line 3, in <module>
    from libavg import avg, anim, button
  File "/usr/local/
    from avg import *
ImportError: libboost_

What should I do to resolve this problem?

(Question status: Answered. Last query and last reply: 2012-03-22.)

Follow-up: Sorry to ask this, but I am really unable to understand what you are saying because I am new to this. Please tell me where to include this line: export LD_LIBRARY_

Reply: Go through this, it will be helpful. Thanks.

Answer: To set LD_LIBRARY_PATH, use one of the following, ideally in your ~/.bashrc or equivalent file:

export LD_LIBRARY_PATH=/usr/local/lib

or

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

Use the first form if the variable is empty (equivalent to the empty string, or not present at all), and the second form if it isn't. Note the use of export.
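The answer's two export forms boil down to a small rule for building the new LD_LIBRARY_PATH value. As a sketch, assuming nothing beyond the Python standard library (the helper name extend_ld_library_path is mine, not part of the original answer):

```python
import os

def extend_ld_library_path(new_dir, current=None):
    """Build a new LD_LIBRARY_PATH value, mirroring the shell advice above:
    use the bare path when the variable is empty or unset, otherwise
    prepend new_dir with a ':' separator."""
    if current is None:
        current = os.environ.get("LD_LIBRARY_PATH", "")
    if not current:
        return new_dir                   # first form: LD_LIBRARY_PATH=/usr/local/lib
    return new_dir + ":" + current       # second form: prepend to the existing value

# The answer's two cases:
print(extend_ld_library_path("/usr/local/lib", ""))          # -> /usr/local/lib
print(extend_ld_library_path("/usr/local/lib", "/opt/lib"))  # -> /usr/local/lib:/opt/lib
```

Putting the export line in ~/.bashrc, as the answer suggests, makes the setting apply to every new shell.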
https://answers.launchpad.net/ubuntu/+source/python-defaults/+question/191380
RAM prerequisites

Single-user prerequisites

3 GB of RAM is required for single-user Che on OpenShift. Single-user Che uses RAM in this distribution:

- Che server pod uses up to 1 GB of RAM. The initial request for RAM is 256 MB. The Che server pod rarely uses more than 800 MB RAM.
- Workspaces use 2 GB of RAM.

Multi-user prerequisites

You must have at least 5 GB of RAM to run multi-user Che. The Keycloak authorization server and PostgreSQL database require the extra RAM. Multi-user Che uses RAM in this distribution:

- Che server: approximately 750 MB
- Keycloak: approximately 1 GB
- PostgreSQL: approximately 515 MB
- Workspaces: 2 GB of RAM per workspace. The total workspace RAM depends on the size of the workspace runtime(s) and the number of concurrent workspace pods.

Setting default workspace RAM limits

The default workspace RAM limit and the RAM allocation request can be configured by passing the CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB and CHE_WORKSPACE_DEFAULT__MEMORY__REQUEST__MB parameters to the Che deployment.

Requirements for resource allocation and quotas

Workspace pods are created in the account of the user who deploys Che. The user needs enough quota for RAM, CPU, and storage to create the pods.

Setting up the project workspace

Workspace objects are created differently depending on the configuration. Eclipse Che currently supports two different configurations:

- Single OpenShift project
- Multi OpenShift project

Setting up a single OpenShift project

To set up a single OpenShift project, define the service account used to create workspace objects with the CHE_OPENSHIFT_SERVICEACCOUNTNAME variable. To ensure this service account is visible to the Che server, put the service account in the Che namespace.

Setting up a multi OpenShift project

To create workspace objects in different namespaces for each user, set the CHE_INFRA_OPENSHIFT_PROJECT variable to NULL.
To create resources on behalf of the currently logged-in user, use the user's OpenShift tokens.

How the Che server uses PVCs and PVs for storage

Che server, Keycloak and PostgreSQL pods, and workspace pods use Persistent Volume Claims (PVCs), which are bound to the physical Persistent Volumes (PVs) with ReadWriteOnce access mode. When the deployment YAML files run, they define the Che PVCs. You can configure workspace PVC access mode and claim size with Che deployment environment variables.

Storage requirements for Che infrastructure

- Che server: 1 GB to store logs and initial workspace stacks.
- Keycloak: 2 PVCs, 1 GB each, to store logs and Keycloak data.
- PostgreSQL: 1 GB PVC to store the database.

Storage strategies for Che workspaces

The workspace PVC strategy is configurable.

Unique PVC strategy

How the unique PVC strategy works

Every Che Volume of a workspace gets its own PVC, which means workspace PVCs are created when a workspace starts for the first time. Workspace PVCs are deleted when the corresponding workspace is deleted. User-defined PVCs are created with a few modifications:

- They are provisioned with generated names to guarantee that they do not conflict with other PVCs in the namespace.
- Subpaths of mount volumes that reference user-defined PVCs are prefixed with {workspace id}/{PVC name}. This is done to have the same data structure on the PV across the different PVC strategies.

Enabling the unique strategy

If you have already deployed Che with another strategy, set the CHE_INFRA_KUBERNETES_PVC_STRATEGY variable to unique in dc/che. Note that existing workspace data will not be migrated; workspaces will use a new unique PVC per Che Volume without cleaning up existing PVCs. If applying the che-server-template.yaml configuration, pass -p CHE_INFRA_KUBERNETES_PVC_STRATEGY=unique to the oc new-app command.
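The subpath prefixing described above ('{workspace id}/{PVC name}') can be illustrated in a few lines of Python. This is only a sketch of the naming scheme, not actual Che code; the function name is mine:

```python
def volume_subpath(workspace_id, pvc_name, subpath=""):
    """Sketch of the PVC mount-subpath scheme described above: subpaths are
    prefixed with '{workspace id}/{PVC name}' so that the data layout on the
    PV stays the same across PVC strategies."""
    prefix = "%s/%s" % (workspace_id, pvc_name)
    return "%s/%s" % (prefix, subpath) if subpath else prefix

print(volume_subpath("workspaceid1", "projects"))         # -> workspaceid1/projects
print(volume_subpath("workspaceid1", "che-logs", "run"))  # -> workspaceid1/che-logs/run
```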
Common PVC strategy

How the common PVC strategy works

All workspaces (within one OpenShift project) use the same PVC to store data declared in their volumes (projects and workspace logs by default, plus whatever additional volumes a user defines). User-defined PVCs are ignored, and volumes that reference PVCs are replaced with a volume that references the common PVC. The corresponding container volume mounts are relinked to the common volume, and subpaths are prefixed with '{workspaceId}/{originalPVCName}'. The user-defined PVC name is used as the Che Volume name. This means that if a machine is configured to use a Che Volume with the same name as a user-defined PVC, they will use the same shared folder in the common PVC.

A PV that is bound to the PVC che-claim-workspace will have the following structure:

pv0001
  workspaceid1
  workspaceid2
  workspaceidn
    che-logs
    projects
    <volume1>
    <volume2>
    <User-defined PVC name 1 | volume 3>
    ...

Volumes can be anything that a user defines as volumes for workspace machines. The volume name is equal to the directory name in ${PV}/${ws-id}. When a workspace is deleted, the corresponding subdirectory (${ws-id}) is deleted in the PV directory.

Enabling the common strategy

If you have already deployed Che with another strategy, set the CHE_INFRA_KUBERNETES_PVC_STRATEGY variable to common in dc/che. Note that existing workspace data will not be migrated; workspaces will use the common PVC without cleaning up existing PVCs. If applying the che-server-template.yaml configuration, pass -p CHE_INFRA_KUBERNETES_PVC_STRATEGY=common to the oc new-app command.

This can be set either when initially deploying Che or through a Che deployment update. Another restriction is that only pods in the same namespace can use the same PVC. The CHE_INFRA_KUBERNETES_PROJECT environment variable should not be empty.
It should be either the Che server namespace, where objects can be created with the Che service account (SA), or a dedicated namespace, where a token or a user name and password need to be used.

Per-workspace PVC strategy

How the per-workspace PVC strategy works

Enabling the per-workspace strategy

If you have already deployed Che with another strategy, set the CHE_INFRA_KUBERNETES_PVC_STRATEGY variable to per-workspace in dc/che. Note that existing workspace data will not be migrated; workspaces will use a common PVC per workspace without cleaning up existing PVCs. If applying the che-server-template.yaml configuration, pass -p CHE_INFRA_KUBERNETES_PVC_STRATEGY=per-workspace to the oc new-app command.

Updating your Che deployment

To update a Che deployment, see the Che GitHub page. To change the pull policy (optional), do one of the following:

- Add --set cheImagePullPolicy=IfNotPresent to the Che deployment.
- Manually edit dc/che after deployment.

The default pull policy is Always. The default tag is nightly. This tag sets the image pull policy to Always and triggers a new deployment with a newer image, if available.

Scalability

To run more workspaces, add more nodes to your OpenShift cluster. An error message is returned when the system is out of resources.

GDPR

To delete data or request the administrator to delete data, run this command with the user or administrator token:

$ curl -X DELETE{id}

Debug mode

To run the Che server in debug mode, set the following environment variable in the Che deployment to true (default is false):

CHE_DEBUG_SERVER=true

Private Docker registries

Che server logs

Logs are persisted in a PV. The PVC che-data-volume is created and bound to a PV after Che is deployed. Without a bound PV, the Che server will not be able to write logs to a file. In the OpenShift web console, select Pods > che-pod > Logs.

It is also possible to configure the Che master not to store logs, but to produce JSON-encoded logs to output instead. This may be used to collect logs with systems such as Logstash.
To configure JSON logging instead of plain text, the environment variable CHE_LOGS_APPENDERS_IMPL should have the value json. See the logging docs for more details.

Workspace logs

Workspace logs are stored in a PV bound to the che-claim-workspace PVC. Workspace logs include logs from the workspace agent, the bootstrapper, and other agents if applicable.

Che master states

Auto-stopping a workspace when its pods are removed

Without user intervention, the Che server cannot interact with the Kubernetes API. The job cannot function with the following Che server configuration:

- Che server communicates with the Kubernetes API using a token from the OAuth provider.

The job can function with the following Che server configurations:

- Workspace objects are created in the same namespace where the Che server is located.
- The cluster-admin service account token is mounted to the Che server pod.

To enable the job, set the CHE_INFRA_KUBERNETES_RUNTIMES__CONSISTENCY__CHECK__PERIOD__MIN environment variable to a value greater than 0. The value is the time period in minutes between checks for runtimes without pods.

Updating Che without stopping active workspaces

The differences between a Recreate update and a Rolling update:

Known issues

Workspaces may fall back to the stopped state when they are started five to thirty seconds before the network traffic is switched to the new pod. This happens when the bootstrappers use the Che server route URL for notifying the Che server that bootstrapping is done. Since traffic is already switched to the new Che server, the old Che server cannot get the bootstrapper's report and the workspace fails to start after the waiting timeout is reached.

Updating with database migrations or API incompatibility

Deleting deployments

The fastest way to completely delete Che and its infrastructure components is to delete the project and namespace.
To delete Che and its components:

$ oc delete namespace che

You can use selectors to delete particular deployments and associated objects.

Monitoring the Che master server

The master server emits metrics in Prometheus format by default on port 8087 of the Che server host (this can be customized by the che.metrics.port configuration property). You can configure your own Prometheus deployment to scrape the metrics (as per convention, the metrics are published on the <CHE_HOST>:8087/metrics endpoint).

Che's Helm chart can optionally install Prometheus and Grafana servers preconfigured to collect the metrics of the Che server; set the global.metricsEnabled value to true when installing Che. You can log in to the Grafana server using the predefined username admin with the default password admin.

Creating workspace objects in personal namespaces

You can register the OpenShift server as an identity provider when Che is installed in multi-user mode. This allows you to create workspace objects in the OpenShift namespace of the user that is logged in to Che through Keycloak. On Che server shutdown, the dedicated OpenShift account configured for the Kubernetes infrastructure is used. See Setting up the project workspace for more information. To easily install Che on OpenShift with this feature enabled, see the sections on Minishift and OCP OpenShift identity provider registration.

Configuring Che
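As a rough companion to the Prometheus metrics endpoint mentioned above (<CHE_HOST>:8087/metrics), here is a minimal parser for lines in the Prometheus text exposition format. It is a sketch only — it ignores escaping, timestamps, and HELP/TYPE metadata — and the function name and sample metric name are invented for illustration:

```python
def parse_prom_line(line):
    """Parse one line of Prometheus text format into (name, labels, value).
    Returns None for comments and blank lines. Minimal sketch: no escape
    handling, no timestamps, label values must not contain commas or spaces."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    metric, value = line.rsplit(" ", 1)
    labels = {}
    if "{" in metric:
        name, rest = metric.split("{", 1)
        for pair in rest.rstrip("}").split(","):
            if pair:
                key, val = pair.split("=", 1)
                labels[key] = val.strip('"')
    else:
        name = metric
    return name, labels, float(value)

# A hypothetical scraped line:
print(parse_prom_line('che_sample_metric{result="success"} 42'))
# -> ('che_sample_metric', {'result': 'success'}, 42.0)
```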
https://www.eclipse.org/che/docs/che-6/openshift-admin-guide.html
Raspberry Pi GPIO tools: an extension of RPi.GPIO to easily handle interrupts, and a command-line multitool.

Project description

RPIO is a Raspberry Pi GPIO toolbox, consisting of two main parts:

- rpio, a command-line multitool for inspecting and manipulating GPIOs
- RPIO.py, a module which extends RPi.GPIO with interrupt handling and other good stuff

Installation

The easiest way to install/update RPIO on a Raspberry Pi is with either easy_install or pip:

$ sudo easy_install -U RPIO
$ sudo pip install -U RPIO

Another way to get RPIO is directly from the Github repository:

$ git clone
$ cd RPIO
$ sudo python setup.py install

After the installation you can use import RPIO as well as the command-line tool rpio.

rpio, the command-line tool

rpio allows you to inspect and manipulate GPIOs system wide, including those used by other processes. rpio needs to run with superuser privileges (root), else it will restart using sudo. The BCM GPIO numbering scheme is used by default. Here are a few examples of using rpio:

Show the help page:

$ rpio -h

Inspect the function and state of gpios (with -i/--inspect):

$ rpio -i 7
$ rpio -i 7,8,9
$ rpio -i 1-9

# Example output for `rpio -i 1-9` (non-existing are omitted)
GPIO 2: ALT0   (1)
GPIO 3: ALT0   (1)
GPIO 4: INPUT  (0)
GPIO 7: OUTPUT (0)
GPIO 8: INPUT  (1)
GPIO 9: INPUT  (0)

Inspect all GPIOs on this board (with -I/--inspect-all):

$ rpio -I

Set GPIO 7 to `1` (or `0`) (with -s/--set):

$ rpio -s 7:1

You can only write to pins that have been set up as OUTPUT. You can set this yourself with `--setoutput <gpio-id>`.
Show interrupt events on GPIOs (with -w/--wait_for_interrupts; default edge='both'):

$ rpio -w 7
$ rpio -w 7:rising,8:falling,9
$ rpio -w 1-9

Setup a pin as INPUT (optionally with a pull-up or pull-down resistor):

$ rpio --setinput 7
$ rpio --setinput 7:pullup
$ rpio --setinput 7:pulldown

Setup a pin as OUTPUT:

$ rpio --setoutput 8

Show Raspberry Pi system info:

$ rpio --sysinfo
# Example output:
# Model B, Revision 2.0, RAM: 256 MB, Maker: Sony

You can update RPIO to the latest version with --update-rpio:

$ rpio --update-rpio

rpio can install (and update) its manpage:

$ rpio --update-man
$ man rpio

rpio was introduced in version 0.5.1.

RPIO.py, the Python module

RPIO extends RPi.GPIO with interrupt handling and a few other goodies. Interrupts are used to receive notifications from the kernel when GPIO state changes occur. Advantages include minimized CPU consumption, very fast notification times, and the ability to trigger on specific edge transitions ('rising'|'falling'|'both'). RPIO uses the BCM GPIO numbering scheme by default.

This is an example of how to use RPIO to react to events on 3 pins by using interrupts, each with different edge detection:

# Setup logging
import logging
log_format = '%(levelname)s | %(asctime)-15s | %(message)s'
logging.basicConfig(format=log_format, level=logging.DEBUG)

# Get started
import RPIO

def do_something(gpio_id, value):
    logging.info("New value for GPIO %s: %s" % (gpio_id, value))

RPIO.add_interrupt_callback(7, do_something, edge='rising')
RPIO.add_interrupt_callback(8, do_something, edge='falling')
RPIO.add_interrupt_callback(9, do_something, edge='both')
RPIO.wait_for_interrupts()

If you want to receive a callback inside a thread (which won't block anything else on the system), set threaded_callback to True when adding an interrupt callback.
Here is an example:

RPIO.add_interrupt_callback(7, do_something, edge='rising', threaded_callback=True)

Make sure to double-check the value returned from the interrupt, since it does not necessarily correspond to the edge (e.g. 0 may come in as the value, even if edge='rising').

To remove all callbacks from a certain gpio pin, use RPIO.del_interrupt_callback(gpio_id). To stop the wait_for_interrupts() loop you can call RPIO.stop_waiting_for_interrupts().

Besides the interrupt handling, you can use RPIO just as RPi.GPIO:

import RPIO

# set up input channel without pull-up
RPIO.setup(7, RPIO.IN)

# set up input channel with pull-up control
# (pull_up_down can be PUD_OFF, PUD_UP or PUD_DOWN, default PUD_OFF)
RPIO.setup(8, RPIO.IN, pull_up_down=RPIO.PUD_UP)

# set up output channel with an initial state
RPIO.setup(18, RPIO.OUT, initial=RPIO.LOW)

# change to BOARD numbering schema (interrupts will still use BCM though)
RPIO.setmode(RPIO.BOARD)

# reset every channel that has been set up by this program,
# and unexport gpio interfaces
RPIO.cleanup()

You can use RPIO as a drop-in replacement for RPi.GPIO in your existing code like this (if you've used the BCM gpio numbering scheme):

import RPIO as GPIO  # (if you've previously used `import RPi.GPIO as GPIO`)

Feedback

Chris Hager ([email protected])

If you've encountered a bug, please let me know via Github.

License

RPIO is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

RPIO is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Updates

- v0.6.4: Python 3 bugfix in rpio

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
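The documentation above warns that the value delivered to an interrupt callback does not necessarily match the requested edge (a 'rising' interrupt may still deliver 0). One way to honor that advice is to double-check the value inside a small wrapper. The sketch below is pure Python so it can be shown without GPIO hardware; the wrapper name edge_filtered is mine, and the resulting callable would be passed to RPIO.add_interrupt_callback:

```python
def edge_filtered(expected_value, callback):
    """Wrap an interrupt callback so it only fires when the delivered GPIO
    value matches the expected one (1 for 'rising', 0 for 'falling'),
    per the double-check advice above."""
    def wrapper(gpio_id, value):
        if value == expected_value:
            callback(gpio_id, value)
    return wrapper

# Simulate what RPIO would deliver:
hits = []
cb = edge_filtered(1, lambda gpio_id, value: hits.append((gpio_id, value)))
cb(7, 0)  # spurious value on a 'rising' interrupt: filtered out
cb(7, 1)  # genuine rising edge: passed through
print(hits)  # -> [(7, 1)]
```

In real use this would look like RPIO.add_interrupt_callback(7, edge_filtered(1, do_something), edge='rising'), with pin 7 and do_something taken from the example above.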
https://pypi.org/project/RPIO/0.6.4/
Editor's note: The following post was written by Microsoft Integration MVP Leonid Ganeline

BizTalk Integration Development Architecture: Artifact Composition

This is the second in a series of articles. The first article is "BizTalk Integration Development Architecture". Artifact composition describes several aspects of the BizTalk Integration architecture:

- Naming conventions
- Folder structure
- Shared artifacts

Change Management: the Update Rule

Integration is a never-ending endeavor. The BizTalk Integration embraces many unrelated projects, created by different developer teams with different skill sets and with different requirements. It is almost impossible to create architecture rules that cover all of them, but we can state a rule about updates.

Naming conventions

Naming conventions are placed in a separate document. Please use these naming conventions as source information.

Folder structure

The folder structure copycats the artifact hierarchy. Because the namespaces also copycat the artifact hierarchy, the folder structure looks like the namespace structure. The namespace is the better formal representation of the hierarchy, so I copy the folder structure from the namespaces, not the opposite way.

The main rule is: the folder structure and folder names should be a copy of a namespace or of the namespace pieces [a copy of the artifacts saved into the folders]. It seems complex, but it is simpler if we use pictures to demonstrate this rule.

The name of the project could be just Schemas, but the full project namespace as a name is better. Here two rules are "fighting" for dominance. One rule is: the names of a project, a project assembly, a project namespace, and a project folder should be the same. The second rule is: the folder name should be equal to the correspondent part of the namespace. The first rule is the winner. So we use the folder names GLD, Samples, and Template for all projects whose namespace starts with "GLD.Samples.Template".
But for the Schemas project folder we use "GLD.Samples.Template.Schemas", not "Schemas". If we group several projects inside a single solution, we follow the same rule for the additional group folder names. For example, we group the projects inside the Shared.Schemas solution by system name (CRM, Internal, SAP, or Shipping), so we create a subfolder for each group. Do not forget to apply the same rule to the solution folders inside Visual Studio.

This rule is very important for a big BizTalk integration. Imagine a hundred applications and a hundred Visual Studio solutions. You are going to investigate and fix an error, and all you know is the assembly name: GLD.Samples.Shared.Classes.Shipping. If you are a developer, you instantly know that you have to open the GLD.Samples.Shared.Classes Visual Studio solution to look at the code. If you are an administrator, you instantly know that you have to check the GLD.Samples.Shared.Classes BizTalk application. If you are the source control manager, you instantly know where this code lives in TFS or Git. If you have ever searched for an assembly named GLD.Samples.Shared.Classes.Shipping and finally found its code in the GLD.Samples.Sap project inside the GLD.Shipping solution, you know how much time such a mismatch wastes. If you have to investigate assembly relations in such a mess, you are completely doomed.

Shared Artifacts

To share or not to share? We consider this question from several points of view: development, deployment, and operations. Too many shared artifacts result in overly complex relations between assemblies, projects, and applications. Artifacts should be shared only for a good reason. BizTalk Server enforces very strict rules on the relations between assemblies. One of these rules is: if we want to redeploy a shared assembly, we first have to undeploy all assemblies that reference it. That rule is extremely important in a big BizTalk integration.
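The undeploy-before-redeploy rule is mechanical enough to sketch. The helper below computes the transitive dependents of a shared assembly from a reference map; it is purely an illustration, not a BizTalk tool, and the assembly names are the hypothetical ones used in the example later in this article:

```python
def assemblies_to_undeploy(target, references):
    """Return every assembly that directly or transitively references
    `target`; all of them must be undeployed before `target` can be
    redeployed."""
    dependents = set()
    changed = True
    while changed:
        changed = False
        for assembly, refs in references.items():
            # An assembly depends on `target` if it references it directly,
            # or references something already known to depend on it.
            if assembly not in dependents and (target in refs or refs & dependents):
                dependents.add(assembly)
                changed = True
    return dependents

# Hypothetical reference map: both applications reference the shared schema.
references = {
    "AtoB.Process": {"Shared.Schemas.A"},
    "AtoC.Process": {"Shared.Schemas.A"},
    "Shared.Schemas.A": set(),
}
```

Redeploying Shared.Schemas.A here would force both applications out first, which is exactly why the article urges sharing only for good reason.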
Development usually pushes us to share artifacts, to keep the code compact and organized. Deployment, on the other hand, pushes us to denormalize code, because shared code complicates deployment and redeployment. Please consider both sides before implementing a shared component. If we share, we reuse, which couples code: changes in a shared artifact can trigger changes in every artifact that uses it. If we don't share, we break the relation between artifacts: changes in isolated artifacts do not change anything outside those artifacts. It is possible to share without creating relations between artifacts, if we share a template, a pattern, or an idea rather than the code or the artifact itself.

Schemas

As discussed, a schema is a very special artifact: it plays the contract role in BizTalk applications. Services in a SOA architecture do not share code, they share contracts. For BizTalk development this means we share schemas between systems and between applications. Schemas are good candidates for sharing; only a few schemas are designed not to be shared and are used purely for internal purposes. So if schemas belong to an external system and are generated by an adapter wizard, they will be shared in most cases; place them in a Shared.Schemas application under a separate project. If schemas are canonical schemas, place them in a Shared.Schemas application in the Canonical project group under a separate project. If schemas are shared only inside a single application, they can be placed in a separate project of the current solution. If all application artifacts are placed in one project, the schemas can also be placed in that project. Let's see a real-life example.

Shared Schemas Architecture Example

Application AtoB was created first; it transfers data between systems A and B. Next came the AtoC application, which exchanges data between systems A and C. Both applications use the Sch_A schema.
BizTalk registers schemas in a global namespace, not per application. That means BizTalk cannot resolve a schema if it is deployed in several applications: a schema should be deployed only once if we want to use it in ports. This rule protects us from severe design errors, but it also forces us to share schemas. For our example it means we cannot deploy Sch_A into both applications. The naive approach is to reference the AtoB.Process assembly from AtoC.Process; I indicated this by placing Sch_B in brackets in the picture. So far so good. Then we find a bug in the AtoB application, not in Sch_A but in Orch_1. We fix it and want to deploy a new AtoB.Process assembly. Not so good: first we have to undeploy the AtoC application, because it references AtoB. That is not good. Now AtoC is coupled to any modification in AtoB, not only to Sch_A. Let's fix it. We extract the Sch_A schema into a separate project/assembly. Now we don't have to redeploy AtoC.Process when we change something in Sch_B or in Orch_1, only when we change Sch_A. But it still doesn't look right. From the design standpoint, Sch_A does not belong to the AtoB application, nor to the AtoC application. Both applications use it, but it is independent of them. Only system A dictates what the Sch_A schema looks like; the owners of that system can change it, not the owners or developers of the AtoB and AtoC applications. Let's change our design to show this ownership. A new Shared.Schemas application is created, which holds all schemas that belong to the integrated systems. Our Sch_A is placed into a separate, independent project. Now AtoB and AtoC both reference this Shared.Schemas.A assembly. Isn't something wrong here? How do these changes simplify our development, deployment, and operations? It looks like we now have one more application and one more reference (from AtoB to Shared.Schemas.A). How could it be simpler?
The key word here is "changes". If our applications were never going to change, we wouldn't need these "improvements"; actually, we wouldn't need AtoB and AtoC at all, just one application holding all the artifacts. But when we start to modify our applications, we immediately start to understand that "loose coupling" is not just a buzzword. The next step is to ask: why is the Sch_A schema so special, and not the Sch_B and Sch_C schemas? All those schemas belong to the integrated systems, not to the integrating applications. Let's change our design to fix this. Now the schemas of all systems are placed into separate assemblies. Moreover, we found that these systems have (or could have) several interfaces, not just one, so we get a separate assembly per interface. I use the term "interface" here to mean a separate data contract/schema. Of course, for a two-way interface (such as request-response), both the request and response schemas belong to the same assembly. Again, the resulting design looks more complex, but it is more appropriate for real life, and it models the real relations. The next step in our design is to note that we integrated two systems, B and C, with one system, A. What happens if we add one more system, or replace one of the systems? The canonical data model seems to fit here perfectly. Let's create a canonical schema for this interface (the I_1 interface) and link the systems through this canonical schema. Now each application deals with exactly one system, and with the implementation details of that single system only. Changes in one system will not force us to modify other applications. Previously, if system A changed its interface, we had to modify both the AtoB and AtoC applications; now we change only X_to_Canonical1. I intentionally renamed the A application to X to show that we can easily add systems Y, Z, and so on, without changing any other application. The canonical data model is not a universal, mandatory design pattern.
It works well when we have one-to-many or many-to-many integration interfaces between systems; it doesn't make sense for a one-to-one interface. Here is a template Shared.Schemas application code which can save us some precious development time.

.NET Classes

.NET classes can be shared between BizTalk applications. Sharing them does not create the "redeployment hell" that can happen when sharing other BizTalk artifacts (schemas, maps, orchestrations, pipelines, etc.). If we need to modify a shared .NET assembly, we just re-GAC it; this does not require undeploying all related assemblies. One recommendation: do not share a .NET class right away. Try it in one application as a local class, then try it in the next application. When the class has stabilized, when you feel it will not change much in the future, extract it into a shared project. Usually those shared assemblies are placed inside one Shared.Classes application. Here is a template Shared.Classes application code which can save us some precious development time.

Maps, Orchestrations, Pipelines, etc.

BizTalk artifacts other than schemas are not good candidates for sharing. We share pipelines, maps, orchestrations, and rules only on very special occasions, usually as components of a shared BizTalk infrastructure. One example is a notification service that manages notifications: it formats them, filters them, and sends them as emails, SMS messages, tweets, and so on. Always consider sharing artifacts as a service instead. Maps are usually entirely local to an application; sharing maps between applications is a really bad idea from the design point of view. If you want to share an orchestration, do not share the orchestration assembly between applications; share it as a service. If you call or start an orchestration from another application, you have to reference the orchestration assembly, so consider another architecture pattern instead, such as direct binding.
That means the calling orchestration just publishes (sends) messages and the called orchestrations subscribe to those messages. If you want to pass additional parameters with a message, use the message context properties. The message context can also be used to create a "custom binding", where subscriptions carry additional predicates that match, for example, the originator orchestration name. Pipelines likewise should not be shared between applications, but pipeline components can be.

About the Author

With 9+ years of BizTalk Server experience, Leo works as a BizTalk developer, architect, and system integrator. He has received the Microsoft Most Valuable Professional (MVP) Award in BizTalk Server for 2007 through 2012, and the MVP Award in Microsoft Integration for 2013. Leo is a moderator of the BizTalk Server General forum on the Microsoft MSDN site, a blogger, and an author of Microsoft TechNet articles and MSDN Gallery samples.
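Returning to the direct-binding pattern described above (publishers send messages, subscribers filter on context properties such as the originator name): the routing idea can be sketched outside BizTalk in a few lines. This toy Python bus is illustrative only; the names and API are invented, not BizTalk code:

```python
class MessageBus:
    """Toy publish/subscribe bus: subscriptions are predicates over the
    message context, mimicking context-property-based routing."""

    def __init__(self):
        self.subscriptions = []

    def subscribe(self, predicate, handler):
        # A subscription pairs a context predicate with a handler.
        self.subscriptions.append((predicate, handler))

    def publish(self, body, **context):
        # Deliver to every subscription whose predicate accepts the context.
        delivered = 0
        for predicate, handler in self.subscriptions:
            if predicate(context):
                handler(body, context)
                delivered += 1
        return delivered


bus = MessageBus()
received = []

# Subscribe only to messages originating from a specific orchestration,
# i.e. the "custom binding" predicate described above.
bus.subscribe(lambda ctx: ctx.get("originator") == "Orch_1",
              lambda body, ctx: received.append(body))

bus.publish("order-123", originator="Orch_1")   # matched
bus.publish("order-456", originator="Orch_2")   # filtered out
```

The point of the sketch is that the publisher never references the subscriber: coupling lives only in the subscription predicate, which is the property that makes direct binding attractive for decoupled redeployment.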
https://blogs.msdn.microsoft.com/mvpawardprogram/2013/10/28/biztalk-integration-development-architecture-artifact-composition/
JavaFX forum: JavaFX, Scene Builder, and basic issues with threads/tasks

L Purcell (Greenhorn, Joined: Feb 19, 2012, Posts: 12) posted Jan 17, 2013 19:53:37

I've read and read, but I'm stumped in learning how to set up/use threads in FX. I set up a Task in the controller which just sleeps a bit and then does an updateMessage("Done!"). The initialize method just displays the class name. In a button handler I set up a thread (Thread initThread = new Thread(task)) and start it. I successfully read/display that the thread is "alive" (hooray!), but I CANNOT getMessage. If I can't do THIS simple task, it's hopeless to go further. The rather excellent tutorials on concurrency in FX don't show the OTHER parts of the program, just the Tasks. I know I'm missing something basic, but again, I'm stumped. Help? (Additional question: the Task has a "return"; where does a returned value go, if I were to have one? Mine is null for now.) Thanks. Here is the code after the imports:

    public class TestPhidgetv2Controller implements Initializable {

        @FXML private Button btnStop;
        @FXML private Button btnStart;
        @FXML TextArea txaInfo;
        @FXML Label lblInfo;
        String msg;

        @Override
        public void initialize(URL url, ResourceBundle rb) {
            txaInfo.appendText(this.getClass().getSimpleName() + "\n");
        }

        @FXML
        private void handleBtnStopAction(ActionEvent event) {
            Platform.exit();
        }

        @FXML
        private void handleBtnStartAction(ActionEvent event) {
            Thread initThread = new Thread(task);
            initThread.setDaemon(false); // true?? false??
            initThread.start();
            String thr;
            Boolean thrAlive = initThread.isAlive();
            if (thrAlive == true) {
                thr = initThread.toString();
                txaInfo.appendText("Alive! " + thr + "\n"); // Display thread-name IF alive.
            } else {
                txaInfo.appendText("Not alive! \n");
            }
            thr = task.getMessage(); // Try to get task-message (???)
            txaInfo.appendText("First message: " + thr + "\n"); // Display task-message (???)
            try {
                Thread.sleep(5000); // Pause for 5 sec.
            } catch (InterruptedException ex) {
                Logger.getLogger(TestPhidgetv2Controller.class.getName()).log(Level.SEVERE, null, ex);
            }
            msg = task.getMessage(); // Try to get message again.
            txaInfo.appendText("Second message: " + thr + "\n"); // Display message
        }

        Task<Void> task = new Task<Void>() {
            @Override
            protected Void call() throws Exception {
                try {
                    Thread.sleep(3000); // Sleep 3 sec
                } catch (InterruptedException ex) {
                    Logger.getLogger(TestPhidgetv2Controller.class.getName()).log(Level.SEVERE, null, ex);
                }
                updateMessage("Done!"); // Try to set msg
                return null;
            }
        };
    }

Results of the run:

    TestPhidgetv2Controller          (<-- This is displayed by the initialize method.)
    Alive! Thread[Thread-3,5,main]   (<-- This and below are displayed after clicking the Start button.)
    First message:
    Second message:

Manuel Petermann (Ranch Hand, Joined: Jul 19, 2011, Posts: 177) posted Jan 17, 2013 23:44:07

    Service<Void> service = new Service<Void>() {
        @Override
        protected Task<Void> createTask() {
            return new Task<Void>() {
                @Override
                protected Void call() throws Exception {
                    // Do long-running work here <<<
                    return null;
                }
            };
        }

        @Override
        protected void succeeded() {
            // Called when finished without exception
        }
    };
    service.start(); // starts the thread

You may want to change the type parameter. For anything else, have a look at the javadoc. Edit: It just came to mind that the javadoc is not very clear about which thread the succeeded method is invoked on. Please correct my English.

Manuel Petermann (Ranch Hand, Joined: Jul 19, 2011, Posts: 177) posted Jan 18, 2013 00:41:49

I should have read your question and code more carefully... I think you got the wrong idea about what your code does. You are starting your thread. All good. Then you are building a bomb. You are sending your thread to rest for 3 seconds and you are sending the application thread to sleep for 5 seconds! Never sleep the application thread!
The updateMessage method is trying to set the new message inside the application thread, which is asleep. I highly doubt that the developers of JavaFX made precautions for this kind of thing, so your "Done!" string never reaches its property. In addition to my last answer, you could override the done method in your Task class as well. Edit: Initializable is superseded by the FXML loader. You should read about it in the javadocs.

L Purcell (Greenhorn, Joined: Feb 19, 2012, Posts: 12) posted Jan 18, 2013 22:59:10

Manuel, thanks. And if this msg shows up twice, it's because it disappeared the first time! Anyway, I have had some success, but I still cannot figure out how to use getMessage. Inspired by your suggestions, I switched to a Service instead of a direct Task, and after a lot of trial and error, it worked -- in the sense that I could use the setOnSucceeded method (based on the concurrency tutorial)! However, though I studied the FX API on Service, I still don't understand how to use getMessage. Any ideas would be most welcome! (I did remove initialize, which I had gotten from a template and which is still in some tutorials and examples!) Here is my modified code:

    public class TestPhidgetv2Controller {

        @FXML private Button btnStop;
        @FXML private Button btnStart;
        @FXML TextArea txaInfo;
        @FXML Label lblInfo;

        @FXML
        private void handleBtnStopAction(ActionEvent event) {
            Platform.exit();
        }

        @FXML
        private void handleBtnStartAction(ActionEvent event) {
            txaInfo.appendText(this.getClass().getSimpleName() + " started!\n");
            PhService phService = new PhService();
            phService.setOnSucceeded(new EventHandler<WorkerStateEvent>() {
                @Override
                public void handle(WorkerStateEvent wse) {
                    txaInfo.appendText("On Succeeded: " + wse.getSource().getValue() + " \n");
                }
            });
            phService.start();
            String msg = phService.getMessage();
            txaInfo.appendText("Get message: " + msg + "\n"); // Display task-message (???)
        }

        public static class PhService extends Service<String> {
            @Override
            protected Task createTask() {
                return new Task<String>() {
                    @Override
                    protected String call() throws Exception {
                        String result = "Return value.";
                        updateMessage("Running...");
                        return result;
                    }
                };
            }
        }
    }

Output:

    TestPhidgetv2Controller started!
    Get message:
    On Succeeded: Return value.

Manuel Petermann (Ranch Hand, Joined: Jul 19, 2011, Posts: 177) posted Jan 19, 2013 02:42:38

To your first question: I really don't know. The javadocs don't say anything to support that. JavaFX is event driven. Every event needs to have a stub somewhere, unless you want to create an infinite loop or something like that. The button click is one event, which triggers your handleBtnStartAction. The success of the service triggers another event, namely your handle method. I am not really certain that you understand that concept. To understand what you want to do, I need to ask what you expected from

    String msg = phService.getMessage();
    txaInfo.appendText("Get message: " + msg + "\n");

Those are called directly after you started the thread, and they do not care whether the thread has finished or not. For testing purposes you might want to sleep your created thread in your service for a while. To get the message you might want to call wse.getSource().getMessage() in your handle method.

L Purcell (Greenhorn, Joined: Feb 19, 2012, Posts: 12) posted Jan 19, 2013 08:58:49

Manuel, thanks again! Re the getMessage command, I just wanted to test sending info from the Task thread to the app thread. (Eventually I will be dealing with a USB device, a Phidget, and its API -- which works well, by the way -- by putting the interactions in threads, as I should, and sending info back to the app thread.) So far, I have only gotten to first base with this! Re the getMessage method, I have a suspicion that maybe it needs a handler, too! And I'll try your suggestion in that regard as well.
I was thinking last night that I really need to dig deeper into handlers and listeners, etc.; your note prompts me to do just that! Thanks again! (I'll look at the docs, tutorials, etc., but if you know of any good discussion of handlers et al. for FX, I'd appreciate knowing.)

Edit: I tried your suggestion of adding wse.getSource().getMessage() in the handle, and it worked! Thanks! Now I'm studying WHY it worked by going through the Handling JavaFX Events tutorial.
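The race this thread wrestles with (reading a background task's message before the task has published it) is independent of JavaFX. The sketch below reproduces it with plain java.util.concurrent; the class and method names are invented for illustration, and an AtomicReference stands in for the Task's message property:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class MessageRace {

    // Stand-in for the Task's message property: the worker publishes a
    // status string, and the caller reads it.
    static String runDemo() {
        AtomicReference<String> message = new AtomicReference<>("");
        CountDownLatch mayFinish = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            try {
                mayFinish.await();      // simulate work still in progress
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            message.set("Done!");       // like updateMessage("Done!")
        });
        worker.start();

        String early = message.get();   // read immediately after start: still ""
        mayFinish.countDown();          // let the "work" complete
        try {
            worker.join();              // wait for completion instead of sleeping blindly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        String late = message.get();    // the update is now visible
        return early + "|" + late;
    }

    public static void main(String[] args) {
        System.out.println(runDemo());
    }
}
```

The early read is empty for exactly the reason Manuel gives: the caller checks the message before the worker has set it. The fix is never to block the caller, but to react to a completion event (join here, setOnSucceeded or a messageProperty listener in JavaFX).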
http://www.coderanch.com/t/602815/JavaFX/java/JavaFX-Scene-Builder-basic-issues
Automation of routine activities

Richard Leeke, Jan 25, 2010 2:09 AM

I've seen (and added to) a few postings on this forum suggesting some sort of automation capability to help with routine tasks. I think that would be really useful, but until something like that makes it to the top of the priority list I thought I'd have a go at seeing what can be achieved with a simple keystroke-simulation approach. I've managed to get something going for a few cases that take a lot of my time (refreshing large extracts, publishing PDFs from selected sheets, etc.). I think this is going to be a real time-saver for me. But it's very clunky - this sort of approach to automation is notoriously difficult to make robust. The thing I had most problems with was detecting when Tableau had finished processing the latest request. If you send keystrokes too early they can get "lost" and nothing happens. It seems that when a sheet is refreshing there are a few places where it may start accepting keystrokes for a moment and then stop again - so my first approach of just repeatedly sending keys until a window popped up wasn't quite good enough, and I ended up with various timeouts and retries and suchlike. It ended up taking much longer than I had hoped to get something working. Having some sort of indication of whether Tableau is actively processing something would make this sort of automation much easier and allow it to be much more robust. It wouldn't even have to be visible on the screen - a hidden "busy" control that could be detected by a Windows automation tool would be all that was needed. There are a few other things which could also help - but I'd rather think of this as a stop-gap until a proper automation facility comes along, so I'll not even mention those (for now).

1. Re: Automation of routine activities
Chris Gerrard, Jan 27, 2010 5:33 AM (in response to Richard Leeke)

I'm also keenly interested in automating Workbook operations.
My client has a very specific need to generate very high quality PDFs, including 14 separate PDFs of a worksheet, one for each specific value of a filter. In this situation generating the PDFs using Tableau Server's automation isn't really an option; if nothing else, the different visual renderings between the desktop and server make it ponderous. I've recommended the keystroke record-and-playback approach, and agree with Richard that it's a stopgap measure. Tableau's undo-redo capability suggests that there's a command-processing mechanism that could be leveraged for implementing at least those operations that can be undone/redone.

2. Re: Automation of routine activities
Richard Leeke, Jan 27, 2010 4:01 PM (in response to Richard Leeke)

If you do go down the record/playback route I'm happy to share what I've found - there are a few obstacles that make it tricky to get it reliable. I can even let you have a copy of the utility I've put together if you want - though where I've got to wouldn't fully handle your need, as I haven't found a generic way to drive the shelf controls - so it wouldn't handle changing filter values. That's possible - but I can't see how to avoid making that very specific to a particular worksheet - and the result probably wouldn't be very resilient in the face of sheet changes. What I've done uses a freeware tool called AutoIt, which has a fairly complete scripting language and lets you build standalone executables. So I have a command-line tool that supports a few basic operations (open workbook, close workbook, select sheet, refresh sheet, refresh extract, clone workbook, print PDF, etc.) - but it's not 100% reliable, because of the issues I mentioned in my original posting. I suspect some of the commercial automation tools will do a better job of recognising GUI objects and GUI states - I had a quick go with one of the high-end automated testing tools just now and it seemed a bit more promising.
I just tried to do it with a free tool so I could use it at different client sites. I'm definitely going to persevere with this - even at its current level of reliability it can be a big time-saver. I set up a script to refresh a couple of extracts and 10 or so worksheets in a workbook yesterday. That takes about an hour and a half, and needs a few mouse clicks every 5 or 10 minutes. Just being able to run a script before going to a meeting and come back and it's done is great.

3. Re: Automation of routine activities
Chris Gerrard, Jan 27, 2010 9:26 PM (in response to Richard Leeke)

Thanks, Richard. I'll look into AutoIt and see what it offers.

4. Re: Automation of routine activities
Peter Cuttance, Apr 28, 2010 7:14 PM (in response to Richard Leeke)

Hi Chris - did you have any success with AutoIt?

5. Re: Automation of routine activities
Richard Leeke, Apr 29, 2010 4:41 AM (in response to Richard Leeke)

One other pitfall with this which I didn't mention in my original posting is that the scripts are quite susceptible to breakage with new versions of Tableau. I originally scripted this with 5.0 and had to make quite a few (very minor) changes when 5.1 came out. Some of these were user-visible changes in the interface (a couple of menu options changed name), and the internal names of a few Windows controls also changed. So only go this way if you're prepared to do a bit of maintenance to keep it working as new releases come out.

6. Re: Automation of routine activities
guest contributor, May 4, 2010 2:51 PM (in response to Richard Leeke)

Re: ...
detecting when Tableau had finished processing the latest request

If using AutoIt together with the Python Imaging Library, the following code snippet may be useful:

    from win32api import GetSystemMetrics

    width = GetSystemMetrics(0)
    height = GetSystemMetrics(1)

    from PIL import Image, ImageGrab
    import time

    def wait_stale_screenshot(tick=2, timeout=30):
        """Take a sequence of screenshots separated by tick (seconds) and
        wait for two identical screenshots.  Return the number of
        screenshots taken (less by one) or 0 in case of timeout."""
        time_start = time.time()
        h0 = ImageGrab.grab((0, 0, width, height)).histogram()
        tick_count = 0
        while time.time() < time_start + timeout:
            time.sleep(tick)
            tick_count += 1
            h = ImageGrab.grab((0, 0, width, height)).histogram()
            if h == h0:
                return tick_count
            else:
                h0 = h
        return 0

This works fairly reliably vs. the rotating wait cursor. With carefully chosen parameters it should also work for various progress indicators.

Vladimir

7. Re: Automation of routine activities
Richard Leeke, May 4, 2010 3:54 PM (in response to Richard Leeke)

Wonderful - I'll try to figure out how to use that. If I can get that going I'll happily share the little command-line automation tool I've put together - I've been a bit hesitant to post it so far because of the trouble with trying to make it robust. Any pointers on how to go about using the Python Imaging Library with AutoIt, Vladimir? That's new territory for me.

8. Re: Automation of routine activities
guest contributor, Oct 13, 2010 2:31 PM (in response to Richard Leeke)

We have also built an automation app to meet our needs. It might be interesting to have this group discuss what we are doing and what we would like to be able to do. I am more than willing to facilitate that conversation if anyone is interested. I have also been bitten by the new-Tableau-version bug between 5.0 and 5.1, and am going to face the same issue with the move to 6.0. It is not a lot of fun...

9.
Re: Automation of routine activities
Richard Leeke, Oct 13, 2010 6:49 PM (in response to Richard Leeke)

I'm certainly interested to hear how others have tackled it, what you've managed to achieve, what issues you've hit, etc. I'm also very happy to share what I've done. I would just reiterate my earlier comments that this sort of approach really is a band-aid - so I'd encourage people not to invest too much time and effort in something which will keep going out of date as new releases come out. There are often other ways of coming at things within the supported capabilities, so I tend to view it as a last-resort approach.

10. Re: Automation of routine activities
tobyerkson, Mar 23, 2011 7:59 AM (in response to Richard Leeke)

Casting my vote for automation features or language. Even command-line parameters would be a good start.

11. Re: Automation of routine activities
[email protected], Mar 23, 2011 8:38 AM (in response to Richard Leeke)

+1 from me too. A proper API would be even nicer.

12. Re: Automation of routine activities
guest contributor, May 18, 2011 12:54 AM (in response to Richard Leeke)

Hi everybody, I'm a new user of Tableau, working in a French company. I'm also interested in a way to automate refreshing and PDF printing with Tableau. I looked for ways to do it on the Internet, but nothing is posted about it. I'd be happy if one of you could give me some tips on this. Regards, Antoine

13. Re: Automation of routine activities
Steve Wexler, Jun 10, 2011 10:40 AM (in response to Richard Leeke)

I've been asking for this since version 2.0 (five years ago). This type of automation component was crucial to Microsoft's success with Excel and Word (and it was WordBasic that did a lot to help Word supplant WordPerfect as the preeminent word processing program). In any case, I'd love to see this.

14. Re: Automation of routine activities
tommyodell, Jun 11, 2011 5:33 PM (in response to Richard Leeke)

I can see this being very useful to my organisation.
The example that Chris Gerrard mentions, printing a number of graphs based on different filter selections, would be useful to me straight away.
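Vladimir's screenshot-stability trick earlier in the thread generalizes to any observable state: poll until two consecutive samples are identical, then presume the application is idle. Here is a dependency-free sketch of that polling contract (the sampler below is a stand-in for a real screenshot or window-state probe):

```python
import time

def wait_until_stable(sample, tick=0.01, timeout=1.0):
    """Poll sample() until two consecutive samples compare equal (the
    application is presumed idle), or until timeout seconds elapse.
    Returns the number of polls taken, or 0 on timeout, mirroring the
    contract of the screenshot version in the thread."""
    deadline = time.time() + timeout
    previous = sample()
    polls = 0
    while time.time() < deadline:
        time.sleep(tick)
        polls += 1
        current = sample()
        if current == previous:
            return polls
        previous = current
    return 0

# Stand-in sampler: the observed state changes for a few reads
# (the app is "busy"), then settles on a final value ("idle").
readings = iter([1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4])
```

In a real automation script, `sample` would be something like a screenshot histogram or a control's text; the tick and timeout would be tuned to how slowly the target UI repaints.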
https://community.tableau.com/thread/105075
Welcome to Invoke!

This website covers project information for Invoke such as the changelog, contribution guidelines, development roadmap, news/blog, and so forth. Detailed usage and API documentation can be found at our code documentation site, docs.pyinvoke.org. Please see below for a high-level intro, or the navigation on the left for the rest of the site content.

What is Invoke?

Invoke is a Python (2.7 and 3.4+) task execution tool and library. Tasks are defined in a tasks.py file:

    from invoke import task

    @task
    def clean(c, docs=False, bytecode=False, extra=''):
        patterns = ['build']
        if docs:
            patterns.append('docs/_build')
        if bytecode:
            patterns.append('**/*.pyc')
        if extra:
            patterns.append(extra)
        for pattern in patterns:
            c.run("rm -rf {}".format(pattern))

    @task
    def build(c, docs=False):
        c.run("python setup.py build")
        if docs:
            c.run("sphinx-build docs docs/_build")
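To show the registration idea behind the @task decorator, here is a toy, stdlib-only sketch. It is not Invoke's actual implementation (which also handles argument parsing, contexts, and namespaces); the names task, TASKS, and invoke_task are invented:

```python
# Registry mapping task names to callables, populated by the decorator.
TASKS = {}

def task(fn):
    """Toy stand-in for invoke's @task: register fn under its name."""
    TASKS[fn.__name__] = fn
    return fn

@task
def clean(c, docs=False):
    # A real task would shell out via c.run(); here we just report
    # what would have been cleaned.
    return "clean" + ("+docs" if docs else "")

def invoke_task(name, c=None, **kwargs):
    """Look up a registered task by name and call it, CLI-style."""
    return TASKS[name](c, **kwargs)
```

The decorator adds the function to a registry and returns it unchanged, which is why a tasks.py file is just ordinary Python: importing it is enough for the tool to discover which tasks exist.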
http://www.pyinvoke.org/
This is the mail archive of the [email protected] mailing list for the libstdc++ project.

> Really? I thought this was one of those odd things in the standard
> where you could put an extern "C" function in a namespace, and it
> semantically was there, but if you actually had two such functions you
> got undefined behavior.

7.5p6 says #. So multiple *declarations* of an extern "C" function in different namespaces are fine; they all declare the same function. The note following this text says that two *definitions* of such a function are a violation of the ODR. 3.2p1 says that violations of the ODR in a single translation unit must be diagnosed; 3.2p3 says that violations of the ODR across translation units need not be diagnosed - i.e. you'd get undefined behaviour.

Regards, Martin
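Martin's distinction (many declarations, one definition) can be made concrete. In the sketch below the namespace and function names are invented for illustration; both qualified names refer to the one extern "C" function:

```cpp
// extern "C" gives a function C language linkage.  Declaring it in two
// different namespaces still declares the *same* function (7.5p6), so
// the program may contain only one definition of it.
namespace one {
    extern "C" int bump_counter();   // declaration in namespace one
}
namespace two {
    extern "C" int bump_counter();   // redeclaration of the same function
}

static int counter = 0;

// The single definition, reachable through both qualified names above.
// A second definition anywhere in the program would violate the ODR.
extern "C" int bump_counter() { return ++counter; }
```

Calling one::bump_counter() and two::bump_counter() increments the same counter, which is the observable sense in which the declarations name a single entity.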
http://gcc.gnu.org/ml/libstdc++/2000-09/msg00117.html
Chris Oliver's Weblog
Friday Jan 02, 2009

Not terribly surprising, considering JavaFX is compiled to bytecode (with some variable overhead due to binding that is probably not horrible).
Posted by Jose on January 02, 2009 at 06:40 PM PST #

Yet another completely useless microbenchmark.
Posted by 74.46.34.253 on January 02, 2009 at 07:19 PM PST #

In JavaFX 1.0 doesn't JavaFX Script run exclusively in the EDT? Why would I want to spend 10 seconds in the EDT freezing the GUI to calculate numbers?
Posted by Danno Ferrin on January 02, 2009 at 07:29 PM PST #

@Danno Ferrin: Takeuchi is simply a function-call performance benchmark. And FYI, JavaFX Script itself knows nothing of the so-called EDT. That is simply an artifact of the current desktop runtime implementation of the scene graph APIs. Your link indicates you have some personal attachment to Groovy. In spite of whatever your personal feelings are, it's quite undeniable that Groovy and JRuby sacrifice an order of magnitude or more performance-wise compared to Java and JavaFX Script. In my opinion, that's poor and unacceptable.
Posted by Chris Oliver on January 02, 2009 at 08:03 PM PST #

First, the comparison is between Groovy and Ruby. JRuby happens to be one of the many implementations of Ruby (Ruby is open source soup to nuts, so such details are permitted by its license). Second, this is just the age-old comparison between runtime dispatch and compile-time dispatch. The reason that Ruby and Groovy do it has to do with what they aim to accomplish. Take GORM for example: you can throw any random query method at an object that follows a particular pattern and have it resolved at runtime. Can't do that in FX Script (or Java, or C). There are gigabytes relating to the dynamic vs. static discussion on the Internet and Usenet forums, so enough said. To criticize a language without taking its aims into consideration is, in my opinion, poor and unacceptable.
Posted by Danno Ferrin on January 02, 2009 at 08:26 PM PST #

It's not really a comparison of apples to apples here:

* The JavaFX version is probably using primitives. Groovy and JRuby are using boxed numbers. So this becomes more of a boxed-number allocation/GC benchmark than anything else.
* The JavaFX version is doing direct static dispatch. Groovy and JRuby are doing dynamic dispatch. Groovy may also be using reflection and boxing the argument list.

Those same features that damage numeric performance also enable features Java and JavaFX don't have. And let's also not forget the relative sizes of the JRuby and JavaFX teams over the past couple of years. We've done a lot, considering. Ultimately these kinds of numeric algorithm comparisons are just noise. There's plenty of other comparisons that could go to JRuby or Groovy, and there's features each language has that the others don't. So what? Unless you're trying to outright insult the folks who have put a lot of work into JRuby and Groovy, or the people who find their performance acceptable for the applications they write, there's little point in publishing a post like this. Spend your effort continuing to improve JavaFX, help JRuby and Groovy improve their performance, or help ongoing JVM work that will aid us all. Sabre-rattling is unbecoming.

Posted by Charles Nutter on January 02, 2009 at 09:06 PM PST #

JavaFX Script may know nothing of the EDT by itself, but surely its interpreter does, or how would you explain this?

[aalmiray@localhost tmp]$ cat Tak.fx
import java.lang.System;
import javax.swing.SwingUtilities;

function tak(x:Number, y:Number, z:Number): Number {
    if (y >= x) z
    else tak(tak(x-1, y, z), tak(y-1, z, x), tak(z-1, x, y));
}

System.out.println("Am I running on EDT?? {SwingUtilities.isEventDispatchThread()}");
for (i in [1..1000]) { tak(24, 16, 8); }

[aalmiray@localhost tmp]$ javafxc Tak.fx
[aalmiray@localhost tmp]$ time javafx -server -cp . Tak
Am I running on EDT?? true
12.11user 0.09system 0:12.27elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1major+10207minor)pagefaults 0swaps
[aalmiray@localhost tmp]$

Posted by Andres Almiray on January 02, 2009 at 09:46 PM PST #

How does JavaFX compare to JavaScript running on Google's Chrome or Apple's Safari? How does it compare to ActionScript in Flash? Or Microsoft's Silverlight? I posit that these are your real competitors, not Groovy or JRuby. You also chose to write an incredibly recursive function to spend a huge amount of time dispatching function calls with stack frames, whereas I suggest that a "typical" program might "do useful stuff" instead (e.g. most Rails programs, most ActionScript programs, etc.). But congrats on delivering a clean, elegant language to finally give Java attractive interfaces that most people can create and enjoy. JavaFX is definitely a client-side improvement; it needs to be pushed hard before it loses to Silverlight and AIR. I hope you're able to make it work on cell phones in H1 2009. Personally, I use JavaScript on the client and Perl CGI on the server because they are always there for me, they do what I want them to do, and they perform well. Each to his own, right?

Posted by Kevin Hutchinson on January 02, 2009 at 10:14 PM PST #

Just for kicks I did a comparison with Scala too: . Scala was faster on this benchmark than JavaFX. Not surprising really, just like it wasn't surprising to see JavaFX faster than the dynamic languages.

Posted by Michael Galpin on January 02, 2009 at 10:59 PM PST #

Chris, personally I feel it's valuable, on the one hand, to demonstrate the hard work the JFX compiler team has put into performance. It certainly seems from benchmarks like these that the investments in the compiler have paid off. On the other hand, I think both the title of this posting, as well as the tone, are not welcome. It really feels like you're picking a fight.
I'm sure you could approach this issue (which you feel is important) more tactfully and therefore avoid burning bridges with the wider languages community. It is true that some members of other language communities have been launching potshots at JFX for many months now, declaring it irrelevant and showing what they can accomplish using e.g. Groovy, JRuby, etc. I think the tone and attitude have been pretty poor all around. Personally, I'm less concerned with raw performance on these kinds of micro-benchmarks than I am with the performance of the scene graph and rendering engine you're using. Outside of the video demos (which are impressive!), I haven't been blown away by any of the demos for JFX 1.0 that have been posted, when looking at performance. In the end, I'll be won over to your cause as I see better, higher-quality, more performant demos of what your software can _do_. You've been a good advocate for F3 and JFX in the past; please avoid the food fights, it doesn't suit you or your cause. Cheers! Patrick

Posted by Patrick Wright on January 03, 2009 at 04:59 AM PST #

JavaFX is neat and all, but where is Rails for JavaFX? You give a performance benchmark, but you don't compare against Java. In Groovy (not sure about JRuby) I could easily rewrite the above code as a Java class, *if* /real/ performance testing showed that that section of my application code was CPU bound, and easily resolve any performance problems. Which I have time to do, thanks to all the time I saved using Groovy (and Grails). New languages without many users really shouldn't throw stones.

Posted by noah on January 03, 2009 at 06:56 AM PST #

I personally prefer lower performance to bad language design. Performance can be fixed; language design cannot. I have yet to find a reason to use JavaFX as a language. The bind mechanism is not that great when it comes to something more than the school examples.
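To show what I mean, here is a toy bind sketched in plain Ruby (invented API, purely illustrative, and nothing to do with how JavaFX actually implements bind): an eager implementation recomputes every dependent value on every single write.

```ruby
# Toy *eager* binding sketch. Illustrative only.
class Var
  attr_reader :value

  def initialize(value)
    @value = value
    @listeners = []
  end

  def on_change(&blk)
    @listeners << blk
  end

  def value=(v)
    @value = v
    @listeners.each(&:call)  # eager: recompute dependents on every write
  end
end

# bind(source) { |v| ... } keeps the returned Var in sync with source.
def bind(source, &compute)
  dependent = Var.new(compute.call(source.value))
  source.on_change { dependent.value = compute.call(source.value) }
  dependent
end

x        = Var.new(1)
doubled  = bind(x)       { |v| v * 2 }  # recomputed on every write to x
plus_one = bind(doubled) { |v| v + 1 }  # recomputed whenever doubled changes

x.value = 10  # one write to x triggers two downstream updates
puts plus_one.value  # prints 21
```

A lazy variant would only mark dependents dirty on write and recompute on read, so a burst of writes collapses into one recomputation.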
I also consider the bind mechanism a potential bottleneck, as a single change of a variable can cause many update calls. You guys need to look at a lazy implementation of bind; the current one is simply too eager. This is where you should focus your effort, not cheap benchmarks like the one above.

Posted by John Silver on January 03, 2009 at 08:40 AM PST #

The rationale underlying this post (and others on this blog) is simply the following: one of the most significant features of the Java platform and factors in its success (if not the most significant, tbh) is the performance of the Hotspot VM. This fact is ignored at your peril. But, as the comments to this post show, it's nevertheless commonly and easily glossed over without justification. The performance differences between JVM languages like Java, Scala, and JavaFX Script, which are able to exploit Hotspot, and those that currently cannot, are real and quite extraordinary (by any measure), and not simply to be ignored. And the same can be said (and is said elsewhere on this blog) of other languages that target lesser VMs. If you're observant of this blog, you'll also be aware that graphics and animation performance depends on additional factors beyond VM performance, namely GPU hardware acceleration, which equally is not to be ignored - and isn't - at least here.

Posted by Chris Oliver on January 03, 2009 at 09:28 AM PST #

@andres almiray As I already said, that's simply an artifact of the current desktop runtime implementation - which also explains the excessive startup time for non-graphical JavaFX code such as this benchmark - i.e. class loading and shared-library loading of the AWT. This is also why a pure Java or Scala implementation of this benchmark is slightly faster. It's possible to configure the JavaFX Script runtime to avoid this - the code is open source - take a look yourself, if you like.
Posted by Christopher Oliver on January 03, 2009 at 10:45 AM PST #

The fact that Java, JavaFX, and Scala benefit from Hotspot is obvious. But you're completely ignoring the fact that Groovy and JRuby and others also take great advantage of Hotspot. In JRuby's case, we are now the fastest Ruby implementation, with a lot of runway to continue improving. And we work every day to make Ruby code more optimizable by Hotspot. We are not, as you put it, ignoring Hotspot... we are actively exploiting it, and performance improves with every release. I also recall a time when JavaFX had absolutely dismal performance. The fact that it required a very large team of developers working for over a year to bring it to this level of performance says a lot about the amount of effort required. A similar amount of effort expended on other JVM languages could produce similar results. The performance challenge facing an implementation like JRuby is completely different from that facing languages like Scala or JavaFX, both of which largely conform to the JVM rather than push it in new directions. If this platform is to survive, it needs to be able to host languages that may not have been explicitly designed for it, and enhancing the JVM to support such languages (through such work as JSR-292) will better the platform for us all. Either you have not considered this aspect, or you're willfully ignoring it. I hope it's the former. With so many VMs jockeying for developers, your post does little more than alienate the same people working side-by-side with you to make Hotspot and OpenJDK the premier language platform. You insult the efforts of language implementers that don't subscribe to your worldview. You ignore the enormous potential of the JVM as a host for many languages, including languages exposing current weaknesses in the platform. And perhaps worst of all, you frustrate the efforts of many Sun colleagues trying to reach out to language communities by posting a petty, offensive blog post. You're right... we have an amazing platform and an amazing VM here. Perhaps we should work together to ensure it succeeds, rather than kicking each other in the teeth.

Posted by Charles Nutter on January 03, 2009 at 11:36 AM PST #

@Charles Nutter I'm well aware of the technical reasons for the performance discrepancies mentioned - however, those facts do not change the current reality from a user's point of view. You can take it as insult if you must, but it's not my doing that you feel that way. The simple fact is that in the problem domain JavaFX is targeting (world-class multimedia), C and C++ are still the real (and very tough) competitors. Current incarnations of Ruby, Groovy, JavaScript and similar languages are simply not even close to viable in this problem domain. Making JavaFX Script and the Java platform viable is itself no small task. From some 10,000-foot view, it's easy to say one can use "any programming language", "any platform", "any vm", "any graphics stack" - and this is heard all the time. However, the factual reality is that in today's day and age that's not actually the case. At the end of the day it's the performance observed by the end user that separates the "wheat" from the "chaff" when it comes to software platforms. I'm not sure if you've heard of him, but in the domain of graphics and animation our real boss is a dude named "Mr. Frame Rate" - and he's a very harsh master. Do a couple of things wrong and he will deal you a very harsh penalty, which is immediately observable to every single one of your users. It's "his" opinion that matters - not mine.

Posted by Christopher Oliver on January 03, 2009 at 12:14 PM PST #

So you are right when you are saying that frame rate is the king for your field of application, but...
this benchmark is not about rendering ;-) I follow the (amazing) efforts of both JRuby and JavaFX, and I think that these two projects do not have the same use cases. JavaFX competes against Flash and Silverlight, right? So its problem is to have an acceptable frame rate for graphics-intensive applications, on the client. On the other hand, JRuby is working well on the server, or is perfectly suitable for configuration or various scripting purposes (just to name some examples). I really value JRuby, but I would really never use it for graphics-intensive computations. On the other hand, I would never use JavaFX for the server side (at least for the moment). Please note that I'm not pro-JRuby or pro-JavaFX; I want (and now I have) a fast and scalable open-source implementation of the Ruby language, and I want a fast and scalable open-source RIA-enabling framework. Maybe I'm close to having it now ;-)

Posted by Hervé on January 03, 2009 at 02:02 PM PST #

Ahhh, Chris Oliver. I see you are still the opinionated, obnoxious jackass I remember from the SeeBeyond days. Some things never change. I, for one, will NEVER even explore JavaFX because of your affiliation with it. Good luck.

Posted by Hank on January 03, 2009 at 02:04 PM PST #

Hope you got written permission from Sun before publishing your benchmarks, as per the JavaFX licence.

Posted by Richard Osbaldeston on January 03, 2009 at 02:59 PM PST #

@Hank Tbh, I don't remember you. Yes, I'm opinionated; however, I haven't made any personal attacks here. Yes, I can dish it out, but I can also take it - which is why I'll leave your troll for posterity. JavaFX Script, JRuby, and Groovy will ultimately stand on their own merits, and the use cases they can carry out on behalf of their users. In spite of the reactions to this post, I have no interest in diminishing either JRuby or Groovy - I wish them well, and the more they can do for their users, the more power to them. Indeed, if they were suitable to solve the problems I am trying to solve, I wouldn't have bothered with JavaFX Script. OTOH, pointing out some of the reasons why they may be unsuitable helps explain the reason for being of JavaFX Script - which, from my observations, is still in question to many readers.

Posted by Christopher Oliver on January 03, 2009 at 04:05 PM PST #

On behalf of the JRuby community, I appreciate your well-wishing. Yes, we have challenges ahead of us, but the Ruby community has grown and prospered with slower implementations than JRuby. So I think we'll remain successful. And I remain excited about the possibilities of JavaFX, and I hope to see JavaFX Script promoted as a general-purpose language for the JVM in addition to a language for multimedia applications. The more the merrier, for sure, and JavaFX fills a lot of gaps in the current set of JVM languages. I would, however, recommend you refrain from future comparisons of JavaFX to languages like Groovy or Ruby. They're entirely different classes of languages suitable for entirely different use cases, and I'd be very surprised if anyone claimed Ruby was fast enough for high-performance graphics work on its own. So your comparison of JavaFX performance to JRuby and Groovy is akin to racing a Ferrari against an 18-wheeler; you're comparing their performance in the Ferrari's domain. It doesn't help your case against JavaFX's real competitors.

Posted by Charles Nutter on January 03, 2009 at 05:31 PM PST #

"Indeed, if they were suitable to solve the problems I am trying to solve, I wouldn't have bothered with JavaFX script." Can you detail what problems you're trying to solve specifically, and why you feel JRuby and Groovy are unsuitable? You talk about graphics rendering... is there a specific reason you don't want to use Java to handle that? I'm looking at JavaFX and I can't figure out exactly what problem domain it's trying to fit into.
As I understand it, it's generally targeted at the same domains as Flash/ActionScript, which take a more declarative approach to graphics manipulation, which ensures that the performance of the scripting language itself is in no way critical to the graphics rendering. On the other hand, I see JavaFX is statically typed, which is great from a performance standpoint, but I really wonder how much a statically typed language will appeal to the problem domain that Flash/ActionScript is targeted at. I see either JRuby or Groovy as better suited for this problem domain, and personally I would kill for a well-supported Flash-like environment with Ruby scripting.

Posted by Tony Arcieri on January 03, 2009 at 05:58 PM PST #

@Charles Nutter Thanks, and you're welcome. However, the comment immediately following yours shows pretty conclusively that more explanation is yet required (sigh) @Tony Arcieri

Posted by Christopher Oliver on January 03, 2009 at 06:19 PM PST #

@Tony: as far as it has been disclosed in blog posts, the JavaFX rendering pipeline is based on Java code (project SceneGraph), so any JVM language can take advantage of it. Chris, if that continues to be the case, what then makes JavaFX Script better suited for media handling than other existing JVM languages? (Yes, I'm broadening the scope beyond JRuby/Groovy/Scala.) I must say that JavaFX Script's binding mechanism is refreshing for joining UI elements and data, but it is hardly a unique feature that another language can't pick up, as the Groovy language has demonstrated (it didn't require a grammar change). I'm sure other languages (especially those with meta-programming facilities) can create something similar; yes, JRuby can have binding too.

Posted by Andres Almiray on January 03, 2009 at 06:51 PM PST #

Let me see if I understand this correctly. (I have no affiliation with JRuby, FX Script or Groovy, and don't use them.) Sorry if this comment is longer than your whole post.
1) JRuby and Groovy don't fit your needs for "world-class media" because of their performance (because the language features are enough, it seems), so instead of helping them achieve what is needed you go out of your way to create a new language and runtime. Fair enough. Java doesn't fit your needs either, so you go out of your way to create a new language and runtime that is, by your own admission, *slower*? I'm not sure your boss Mr. Framerate would appreciate this very much. He might say that was poor and unacceptable. If Java lacked the features you needed (which are in those other languages you mention), which seem to be binding and some limited type inference, could you have made libs and APIs (like JSR what's-his-face - beans binding - or any of the 3 or 4 different binding libraries) or, say, used the, albeit very limited, generics type inference, or pushed for further inference or whatever you needed in Java, and sooner than Java 7? I think it would be possible to have had APIs, just because it's already the case: the FX Script feature set is compiled into Java source code. Just like Microsoft invented a new language and runtime to create Silverlight - oh wait, they didn't.

2) Are you honestly comparing the work of a team (even if the JFX compiler guys are not that many) paid by a big company, to the open-source work people do for free in their spare time (or used to, until pretty recently), when some of them are even your own coworkers? Are you comparing the perf of a language no one (maybe not even including you) has the right to publish a benchmark of (btw, that Scala/FX comparison post is illegal; I don't make the rules)?

3) If you think C/C++ is FX Script's competitor, and not Microsoft or Adobe, why are you publishing benchmarks on ActionScript on Tamarin, JRuby and Groovy and not, you guessed it, C/C++?

4) When FX Script has the same number of users, or more, as Rails, Ruby, JRuby, Groovy and Grails, or C/C++ in the gaming/CAD world, then you could say it's fair to talk smack. I really look forward to that point.

5) "It's possible to configure the JavaFX script runtime to avoid this - the code is open source - take a look yourself, if you like." This is where I lose it. There is nothing, mark my words, *nothing* that is open source about the JavaFX runtime. I know by heart every piece (or close to it) of open-source code the FX team has produced (that's sad, but what can you do), and there's nothing from the runtime. One of the smartest moves Sun has made in this whole OpenJFX endeavor, if not the smartest, is making the building blocks of the JavaFX runtime usable outside of the said runtime. The fact that they (the open-source versions) are pretty much unusable, for the reasons I'll expose, is on the opposite end one of the dumbest. Scenario and Decora are at the core of the runtime. Scene-graph-based UIs/libs have been at the forefront of HCI research for a long time (and in game engines, etc.), because of the features they provide. (Just providing context for people who might not know about them.) Scenario and Decora, as open-source projects, are dead. The 0.6 release and the last svn activity were a long time ago; no responses on the mailing lists or the java.net forums. (Chris is pretty much a no-show, and I hope he hasn't left like Chet, Hans, Shannon(?), the other Scott, Tom, and so on. The fact that he has written one of the only two JavaFX samples (and apps) that doesn't downright suck reassures me he hasn't, but you never know.) And that, even though they are still being developed in the FX runtime project. Never mind the fact that those buggy, outdated, dead versions are released under the restrictive GPL.
On the opposite end, Scenario and Decora 1.0, which come with the runtime, are released under the equally restrictive JavaFX license (but for totally opposite reasons). Then there's the JavaCSS project, which does CSS styling for Swing and FX (and is totally undocumented, for good reasons: there are *no* controls to skin - okay, actually one, or two). This one is dead as well, and was really short-lived, something like two months or less, and has more bugs than Scenario (many fixed in 1.0 and more filed in JIRA). As well here, GPL for the 0.3, and the FX license for 1.0. I won't go into more details about the FX license itself, not now, but just the fact that you have to have the whole JavaFX runtime as a dependency, and that you can't distribute the whole or any part of it, makes even Scenario/Decora/JavaCSS 1.0 unusable as well. Sun has changed its mind about them being usable as open-source standalone libraries; even your own employees don't know if this will change again in the future, but they do know a lot of changes are going into those core libraries (just for mobile support, for instance). To sum things up: the Scenario/Decora/JavaCSS open-source versions are dead, JMC is not open source either, and I believe those round up the ranks of the building blocks. The rest is just code on top. So, are you actually telling Andres to go look in your open-source compiler code, targeting your closed-source runtime, in order to avoid executing on the EDT? That would be bad etiquette, so I guess I might be mistaken. What open-source runtime are you referring to?

Posted by Rémy Rakic on January 04, 2009 at 01:42 AM PST #

@Remy Not sure what you hope to gain by trolling here, tbh - and yes, I'm aware you're a troll when it comes to JavaFX. Sorry, I won't feed you; however, I will answer the concrete question you asked. The open source runtime referred to is:

Posted by Chris Oliver on January 04, 2009 at 07:59 AM PST #

First, let me point out that I am a minor Groovy contributor, so you may view my opinion as skewed. It's really not, but I'll let you be the judge. Don't take this as criticism. I took some time to evaluate the possibility of using JavaFX as a scripting language in our server setup, and found it to be severely lacking. But that's not really what it is designed for, is it? JavaFX, even at the time of the 1.0 release, appears to be aimed squarely at Silverlight and Flash. The lack of support for Java generics and annotations speaks to the fact that if Sun is looking to move JavaFX into server applications, it's something for the future. Right now, it's the front end they are aiming at. So I think we can agree: JavaFX is not suitable as a back-end scripting language. Now how about the speed differential shown by this benchmark? Sun has made a commitment to dynamic languages, including both JRuby and Groovy. Both are supported in the latest version of NetBeans, and I think both are here to stay. Both languages have "dynamic" calling mechanisms. That is, they decide how to call a method at runtime rather than at compile time. Dynamic method calls will probably always be slower than static method calls. Fact of life. There is hope, however. There are (rumored) features in Java 7 that are specifically targeted toward speeding up dynamic languages, specifically around this sort of thing. I think JavaFX will eventually succeed. I hope it does. It has a lot of good points. It's not a dynamic language, and this limits how expressive it can be. It is also a very simple, targeted language - in some cases maybe oversimplified. Within these limits, it can be as fast as Java (since it is easy to compile JavaFX Script right to bytecode). I also think that JRuby will succeed.
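For reference, the function behind this particular benchmark is tiny. In plain Ruby it looks roughly like this (a sketch of the sort of code the dynamic-language timings exercised; every call below is resolved at runtime):

```ruby
# Takeuchi benchmark: almost nothing but function calls, so it measures
# call/dispatch overhead. In Ruby each call is resolved at runtime,
# where a statically typed compiler can resolve them at compile time.
def tak(x, y, z)
  return z if y >= x
  tak(tak(x - 1, y, z),
      tak(y - 1, z, x),
      tak(z - 1, x, y))
end

# The snippet earlier in the thread ran tak(24, 16, 8) a thousand times;
# smaller arguments finish quickly while exercising the same dispatch.
puts tak(12, 8, 4)
```

Roughly speaking, JRuby compiles a method like this down to JVM bytecode and lets Hotspot optimize what it can, which is part of why JRuby's numbers keep improving.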
It lets Ruby programmers get the best of both worlds without leaving one behind. I also think Groovy will succeed. Groovy is a language that has evolved inside the Java ecosystem to meet certain hard-to-meet needs. And in that ecosystem, it works better than anything else can for some problems. It is - hands down - the most compatible language with Java. JavaFX includes some design decisions that disturb me a bit when it comes to Java compatibility (using custom/incompatible collections, for example, when java.util.ArrayList would have been fine), plus no annotations, no generics, no side-by-side compile. Groovy is not particularly suited for building applets, but it works great everywhere else. It's supported in every major IDE. It plugs into Maven seamlessly. It is great for running unit tests. You can use it in a server-side project with other Java classes, and consumers won't even know they are working with compiled Groovy. Okay, I could go on, but that should be enough. So I see JavaFX as fast but rather limited in scope. Great for applets (small, fast, and doesn't break the applet security model), great for stuff on the client. You know, that is a pretty good niche. It won't replace Java, JRuby, or Groovy for what those languages are best at. At least not any time soon.

Posted by Jason Smith on January 04, 2009 at 09:12 AM PST #

This is one of the most screwed-up chains of comments I've ever had the displeasure to see. The original post just shows some facts. Those facts are relevant to you or they are not. But then there are complaints and counter-shots and... People need to stop taking comparisons like these so personally. That said, saying that "Groovy and JRuby sacrifice an order of magnitude or more performance-wise...[i]n my opinion, that's poor and unacceptable" is unnecessarily incendiary. Why not "for my needs that's unacceptable, but for yours that might be okay"? Charles Nutter complains that this post is racing a Ferrari against a big rig. It's a good analogy as far as it goes, but it ignores the fact that people may not know a priori that one language implementation is in the league of Ferraris rather than big rigs. Comparisons like these help educate people. And besides, there's nothing inherently wrong with making a speed comparison between Ferraris and big rigs. Nothing prevents others from making an alternative comparison where you try to get a Ferrari to haul a few thousand pounds of cargo up a hill - you see the JRuby and Groovy advocates making the equivalent of that comparison all the time by showing how awkward it is to do any form of metaprogramming in Java. JavaFX's type system is wimpy compared to Scala's, and its dynamic metaprogramming facilities are underpowered compared to Groovy's and JRuby's. Clojure beats them all for syntactic metaprogramming. And none of these languages has the level of tool support that Java has. Fine, we're all supposed to be professional engineers. Let's use all these facts and others to make rational decisions, instead of taking it personally whenever our religion^h^h^h^h^h^h^h^h language of choice comes out the lesser in a comparison. At the same time, let's recognize that our needs are not the world's needs. I've done hardware-level coding and I've done web apps and I've done language processing and I've done image processing. There is not, nor will there ever likely be, a single language that fits all those niches well.

Posted by James Iry on January 04, 2009 at 09:41 AM PST #

@James Iry You're right. Thanks, and my bad.

Posted by Chris Oliver on January 04, 2009 at 10:11 AM PST #

Chris, I just would like some answers to the questions we've asked for a long time, that's all. You know, there hasn't been a lot of information coming from you guys about something that ultimately impacts the client-side folks a whole lot.
We've asked and waited; I've been nice, friendly, insightful, angry, sad, helpful, cheerfully inquiring, etc., so I thought I'd try being antagonizing a little :) If by trolling you mean pointing out the things that are bad and the ones that are good, then so be it. The fact is I've been waiting for the runtime, to see what we could do with it, for a little over two years - since you first started blogging, actually. When the runtime building blocks came out, I noticed they were really something to watch. The FX runtime has real, great potential and will surely become good enough for people to carefully evaluate the options they have when they're doing client-side or RIA apps. Changing one's mind about the libraries, closing the runtime and using such a restrictive license surely has sensible reasons, but the lack of communication about the whole project in general, and that point in particular, means that we don't know about them, and when we do ask, there's no answer. There's a lot of ignoring people or antagonizing them, as we've seen here. I think it'll be great when people can use the FX runtime in any JVM language, regardless of its performance or a particular language feature it has. Everyone will benefit from it. I'd really like to help make that a reality, and make client-side Java an even better platform. Actions speak louder than words, and I'm doing my part, but no one from the outside can do a whole lot to help with the current restrictions and lack of communication, and it'd be nice to finally know why, or when/if the situation will change. Ultimately, I'd just like some info about those topics. I'd like OpenJFX to be open.

Posted by Rémy Rakic on January 04, 2009 at 11:18 AM PST #

Chris, since, as you acknowledged, JavaFX is mostly targeting the high-FPS/rich-media "arena", I found an interesting FPS stress-test example that is implemented both in Flash and Silverlight. So having a JavaFX version of it, I think, will add some real meat here. Please take a look and consider coming up with a JavaFX killer implementation of it!

Posted by El Cy on January 04, 2009 at 01:53 PM PST #

Benchmarks are always welcome! It is important to know the times various systems can produce. I suspect that most of us, most of the time, have no idea which system can do what in a given period of time. So please, no hurt feelings. Linux speed was improved significantly AFTER an "embarrassing" comparison to Windows a few years ago. So hiding slow numbers slows down progress. I know Groovy is slower in many ways than Java, but I still like and use them both. That reminds me of fast, cheap and good: choose any two.

Posted by Tom Hite on January 05, 2009 at 09:58 AM PST #

I'm not a huge fan of JavaFX, but I don't get why Chris got such a beating for posting some benchmark numbers?! Groovy and Ruby are always going to be slower than something that is statically typed. No amount of infrastructure investment in the (complex) dispatch mechanisms of those two languages will alter that fact. We've had the Smalltalk people claiming for 20 years or more that dynamic dispatch will one day be faster than static, and it is still nowhere near. So, what's wrong with highlighting the performance advantages of JavaFX's language model? Sure, it doesn't have a MOP etc., but it doesn't need one for its domain. Andrew

Posted by Andrew McVeigh on January 06, 2009 at 04:28 AM PST #

What I find here is just a "healthy" dose of showing off certain advantages of a technology, in terms of performance and features, that might seem offensive to some when a comparison is made. I feel it is a good thing, albeit with the insensitivity that might occur. After all, great nerds and developers are known to be very passionate about their technology, to the point that they might not be aware of others' feelings. They just simply speak their mind. But that is just the person's personality, and I don't think that it is personal at all.
In the past, great developers like JRuby's Charles Nutter and Grails/Groovy's Graeme Rocher have had their debates over language/performance issues too, but I really learned from them when they exchanged points of view countering each other. It's probably the same thing I see here. It's strange but true that I learn a lot when people engage in this kind of discussion/disagreement. Oliver, keep up the good work! JavaFX is surely one outstanding technology, alongside Groovy and JRuby, that I am learning... Posted by GeekyCoder on January 06, 2009 at 08:45 AM PST # For the record, my first comment (which sent things downhill) was really a dig at the architecture of JavaFX being the real impediment to performance, and wasn't directed at a particular language. Limiting yourself to one core will do more harm than dynamic dispatch ever could. It turns out the unthreaded approach does make Mr. Framerate very unhappy. Note that Flex does just as badly as JavaFX, and it also shares with JavaFX the distinction of being the only other framework that doesn't allow calculations to occur off the rendering thread. Really, Sun should (and could if they wanted to) fix this in the next major release of JavaFX. It doesn't have to be easy and can be verbose if needed, but they need some way to execute JavaFX Script in parallel with rendering; otherwise JavaFX Script will be just another DSL to draw pretty pictures (a waste of its potential). Posted by Danno Ferrin on January 16, 2009 at 07:12 AM PST #
Chris has presented a very targeted, specific benchmark. He feels that performance is the most important factor and we should all defer to Mr. Frame Rate. However, I prefer to look at Mr. License Agreement. I wish I could take advantage of the promises in your benchmarks, Mr. Oliver. Unfortunately, JavaFX is a botched project because it is too commercially encumbered. My benchmarks show about *zero* frames per second for the JavaFX app I never wrote, because the runtime remains closed. Andres, Rémy, and Charles are all working hard to make their efforts open. Perhaps you could be as free with your platform as you are with your performance comparisons? Posted by Karlin Fox on June 12, 2009 at 06:36 AM PDT #
People, including me, know there is something in Python called __future__, and it appears in quite a few modules I read. But people like me don't know why it's there, or how and when to use it, even after reading Python's __future__ doc. So can anyone explain it, with examples to demonstrate it?

I got a few answers quickly, which all look correct in terms of the basic usage. However, for a further understanding of how __future__ works, I just realized the key thing that was confusing me: how can a current Python release include something that will only appear in a future release? And how can a program that uses a new feature from a future Python release be compiled successfully by the current Python release? So I guess the current release already packages some potential features that will be included in future releases — is that right? And these features are available only via __future__, because they have not become standard yet — am I right?

Answer: By importing from the __future__ module, you can get used gradually to incompatible changes, or to changes that introduce new keywords and operators. Python does not allow new operators or keywords to be enabled in any other way than through __future__. You import a future feature as follows:

    from __future__ import with_statement

An example that illustrates the concept, run under Python 2:

    from __future__ import division
    print(8/7)   # prints 1.1428571428571428
    print(8//7)  # prints 1

So why are we using the __future__ module here? Without it, both divisions would have used Python 2's integer division, and both statements would have printed 1. In Python 3, true division is already the default (and print() is a function), so no future import is needed to get the desired output:

    print(8/7)   # prints 1.1428571428571428
    print(8//7)  # prints 1
How to Fetch Data from a Third-party API with Deno In this article, we'll explore Deno, a relatively new tool built as a competitor/replacement for Node.js that offers a more secure environment and comes with TypeScript support out of the box. We'll use Deno to build a command-line tool to make requests to a third-party API — the Star Wars API — and see what features Deno provides, how it differs from Node, and what it's like to work with. Deno is a more opinionated runtime that's written in TypeScript, includes its own code formatter (deno fmt), and uses ES Modules — with no CommonJS require statements in sight. It's also extremely secure by default: you have to explicitly give your code permission to make network requests, or read files from disk, which is something Node allows programs to do by default. In this article, we'll cover installing Deno, setting up our environment, and building a simple command-line application to make API requests. As ever, you can find the code to accompany this article on GitHub. Installing Deno You can check the Deno website for the full instructions. If you're on macOS or Linux, you can copy this command into your terminal:

curl -fsSL | sh

You'll also need to add the install directory to your $PATH. Don't worry if you're on Windows, as you can install Deno via package managers such as Chocolatey:

choco install deno

If Chocolatey isn't for you, deno_install lists a variety of installation methods, so pick the one that suits you best. You can check Deno is installed by running the following command:

deno -V

This should output the Deno version. At the time of writing, the latest version is 1.7.5, which is what I'm using. If you're using VS Code, I highly recommend installing the Deno VS Code plugin. If you use another editor, check the Deno documentation to find the right plugin. Note that, if you're using VS Code, by default the Deno plugin isn't enabled when you load up a project.
You should create a .vscode/settings.json file in your repository and add the following to enable the plugin:

{
  "deno.enable": true
}

Again, if you're not a VS Code user, check the manual above to find the right setup for your editor of choice. Writing Our First Script Let's make sure we have Deno up and running. Create index.ts and put the following inside:

console.log("hello world!");

We can run this with deno run index.ts:

$ deno run index.ts
Check
hello world!

Note that we might see a TypeScript error in our editor: 'index.ts' cannot be compiled under '--isolatedModules' because it is considered a global script file. Add an import, export, or an empty 'export {}' statement to make it a module.ts(1208) This error happens because TypeScript doesn't know that this file is going to use ES Module imports. It will soon, because we're going to add imports, but in the meantime, if we want to remove the error, we can add an empty export statement to the bottom of the script:

export {}

This will convince the TypeScript compiler that we're using ES Modules and get rid of the error. I won't include this in any code samples in the blog post, but it won't change anything if we add it other than to remove the TypeScript noise. Fetching in Deno Deno implements support for the same Fetch API that we're used to using in the browser. It comes built into Deno — which means there's no package to install or configure. Let's see how it works by making our first request to the API we're going to use here, the Star Wars API (or SWAPI). Making a request for Luke Skywalker's record will give us back all the data we need. Let's update our index.ts file to make that request.
Update index.ts to look like so:

const json = fetch("");

json.then((response) => {
  return response.json();
}).then((data) => {
  console.log(data);
});

Try and run this in your terminal with deno run:

$ deno run index.ts
Check
error: Uncaught (in promise) PermissionDenied: network access to "swapi.dev", run again with the --allow-net flag
throw new ErrorClass(res.err.message);

Deno is secure by default, which means scripts need permission to do anything that could be considered dangerous — such as reading/writing to the filesystem and making network requests. We have to give Deno scripts permissions when they run to allow them to perform such actions. We can enable ours with the --allow-net flag:

$ deno run --allow-net index.ts
Check
{ name: "Luke Skywalker", ...(data snipped to save space)... }

But this flag has given the script permission to access any URL. We can be a bit more explicit and allow our script only to access URLs that we add to an allowlist:

$ deno run --allow-net=swapi.dev index.ts

If we're running scripts that we're authoring ourselves, we can trust that they won't do anything they shouldn't. But it's good to know that, by default, any Deno script we execute can't do anything too damaging without us first allowing it permission. From now on, whenever I talk about running our script in this article, this is the command I'm running:

$ deno run --allow-net=swapi.dev index.ts

We can also write this script slightly differently using top-level await, which lets us use the await keyword rather than deal with promises:

const response = await fetch("");
const data = await response.json();
console.log(data);

This is the style I prefer and will use for this article, but if you'd rather stick to promises, feel free. Installing Third-party Dependencies Now that we can make requests to the Star Wars API, let's start thinking about how we want to allow our users to use this API.
We’ll provide command-line flags to let them specify what resource to query (such as people, films, or planets) and a query to filter them by. So a call to our command-line tool might look like so: $ deno run --allow-net=swapi.dev index.ts --resource=people --query=luke We could parse those extra command-line arguments manually, or we could use a third-party library. In Node.js, the best solution for this is Yargs, and Yargs also supports Deno, so we can use Yargs to parse and deal with the command-line flags we want to support. However, there’s no package manager for Deno. We don’t create a package.json and install a dependency. Instead, we import from URLs. The best source of Deno packages is the Deno package repository, where you can search for a package you’re after. Most popular npm packages now also support Deno, so there’s usually a good amount of choice on there and a high likelihood that you’ll find what you’re after. At the time of writing, searching for yargs on the Deno repository gives me yargs 16.2.0. To use it locally, we have to import it from its URL: import yargs from ""; When we now run our script, we’ll first see a lot of output: $ deno run --allow-net=swapi.dev index.ts Download Warning Implicitly using latest version (v16.2.0-deno) for Download Download Download Download Download ...(more output removed to save space) The first time Deno sees that we’re using a new module, it will download and cache it locally so that we don’t have to download it every time we use that module and run our script. Notice this line from the above output: Warning Implicitly using latest version (v16.2.0-deno) for This is Deno telling us that we didn’t specify a particular version when we imported Yargs, so it just downloaded the latest one. That’s probably fine for quick side projects, but generally it’s good practice to pin our import to the version we’d like to use. 
We can do this by updating the URL:

import yargs from "";

It took me a moment to figure out that URL. I found it by recognizing the URL I'm taken to when I search for "yargs" on the Deno repository. I then looked back at the console output and realized that Deno had actually given me the exact path:

Warning Implicitly using latest version (v16.2.0-deno) for
Download

I highly recommend pinning your version numbers like this. It will save you from a surprising issue one day when you happen to run the script after a new release of a dependency. deno fmt A quick aside before we continue building our command-line tool. Deno comes with a built-in formatter, deno fmt, which automatically formats code to a consistent style. Think of it like Prettier, but specifically for Deno, and built in. This is another reason I'm drawn to Deno; I love tools that provide all this out of the box for you without needing to configure anything. We can run the formatter locally with this:

$ deno fmt

This will format all JS and TS files in the current directory, or we can give it a filename to format:

$ deno fmt index.ts

Or, if we've got the VS Code extension, we can instead go into .vscode/settings.json, where we enabled the Deno plugin earlier, and add these two lines:

{
  "deno.enable": true,
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "denoland.vscode-deno"
}

This configures VS Code to run deno fmt automatically when we save a file. Perfect!
Using Yargs I won’t be going into the full details of Yargs (you can read the docs if you’d like to get familiar with all it can do), but here’s how we declare that we’d like to take two command-line arguments that are required: --resource and --query: import yargs from ""; const userArguments: { query: string; resource: "films" | "people" | "planets"; } = yargs(Deno.args) .describe("resource", "the type of resource from SWAPI to query for") .choices("resource", ["people", "films", "planets"]) .describe("query", "the search term to query the SWAPI for") .demandOption(["resource", "query"]) .argv; console.log(userArguments); Note: now that we have an import statement, we no longer need the export {} to silence that TypeScript error. Unfortunately, at the time of writing TypeScript doesn’t seem to pick up all the type definitions: the return type of yargs(Deno.args) is set to {}, so let’s tidy that up a bit. We can define our own TypeScript interface that covers all the parts of the Yargs API we’re relying; } Here I declare the functions we’re using, and that they return the same Yargs interface (this’s what lets us chain calls). I also take a generic type, ArgvReturnType, which denotes the structure of the arguments that we get back after Yargs has processed them. That means I can declare a UserArguments type and cast the result of yargs(Deno.argv) to; } interface UserArguments { query: string; resource: "films" | "people" | "planets"; } const userArguments = (yargs(Deno.args) as Yargs<UserArguments>) .describe("resource", "the type of resource from SWAPI to query for") .choices("resource", ["people", "films", "planets"]) .describe("query", "the search term to query the SWAPI for") .demandOption(["resource", "query"]) .argv; I’m sure in the future Yargs may provide these types out of the box, so it’s worth checking if you’re on a newer version of Yargs than 16.2.0. 
Querying the Star Wars API Now that we have a method of accepting the user's input, let's write a function that takes what was entered and queries the Star Wars API correctly:

async function queryStarWarsAPI(
  resource: "films" | "people" | "planets",
  query: string,
): Promise<{
  count: number;
  results: object[];
}> {
  const url = `${resource}/?search=${query}`;
  const response = await fetch(url);
  const data = await response.json();
  return data;
}

We'll take two arguments: the resource to search for and then the search term itself. The result that the Star Wars API gives back will be an object including a count (the number of results) and a results array, which is an array of all the matching resources from our API query. We'll look at improving the type safety of this later in the article, but for now I've gone for object to get us started. It's not a great type to use, as it's very liberal, but sometimes I prefer to get something working and then improve the types later on. Now we have this function, we can take the arguments parsed by Yargs and fetch some data!

const result = await queryStarWarsAPI(
  userArguments.resource,
  userArguments.query,
);
console.log(`${result.count} results`);

Now let's run this:

$ deno run --allow-net=swapi.dev index.ts --resource films --query phantom
Check
1 results

We see that we get one result (we'll work on the incorrect plural there shortly!). Let's do some work to get nicer output depending on the resource the user searched for. Firstly, I'm going to do some TypeScript work to improve that return type so we get better support from TypeScript in our editor. The first thing to do is create a new type representing the resources we let the user query for:

type StarWarsResource = "films" | "people" | "planets";

We can then use this type rather than duplicate it, first when we pass it into Yargs, and the second time when we define the queryStarWarsAPI function:

interface UserArguments {
  query: string;
  resource: StarWarsResource;
}

// ...
async function queryStarWarsAPI(
  resource: StarWarsResource,
  query: string,
): Promise<{
  count: number;
  results: object[];
}> { ... }

Next up, let's take a look at the Star Wars API and create interfaces representing what we'll get back for different resources. These types aren't exhaustive (the API returns more). I've just picked a few items for each resource:

interface Person {
  name: string;
  films: string[];
  height: string;
  mass: string;
  homeworld: string;
}

interface Film {
  title: string;
  episode_id: number;
  director: string;
  release_date: string;
}

interface Planet {
  name: string;
  terrain: string;
  population: string;
}

Once we have these types, we can create a function to process the results for each type, and then call it. We can use a typecast to tell TypeScript that result.results (which it thinks is object[]) is actually one of our interface types:

console.log(`${result.count} results`);

switch (userArguments.resource) {
  case "films": {
    logFilms(result.results as Film[]);
    break;
  }
  case "people": {
    logPeople(result.results as Person[]);
    break;
  }
  case "planets": {
    logPlanets(result.results as Planet[]);
    break;
  }
}

function logFilms(films: Film[]): void { ... }
function logPeople(people: Person[]): void { ... }
function logPlanets(planets: Planet[]): void { ... }

Once we fill these functions out with a bit of logging, our CLI tool is complete!
function logFilms(films: Film[]): void {
  films.forEach((film) => {
    console.log(film.title);
    console.log(`=> Directed by ${film.director}`);
    console.log(`=> Released on ${film.release_date}`);
  });
}

function logPeople(people: Person[]): void {
  people.forEach((person) => {
    console.log(person.name);
    console.log(`=> Height: ${person.height}`);
    console.log(`=> Mass: ${person.mass}`);
  });
}

function logPlanets(planets: Planet[]): void {
  planets.forEach((planet) => {
    console.log(planet.name);
    console.log(`=> Terrain: ${planet.terrain}`);
    console.log(`=> Population: ${planet.population}`);
  });
}

Let's finally fix up the fact that it outputs 1 results rather than 1 result:

function pluralise(singular: string, plural: string, count: number): string {
  return `${count} ${count === 1 ? singular : plural}`;
}

console.log(pluralise("result", "results", result.count));

And now our CLI's output is looking good!

$ deno run --allow-net=swapi.dev index.ts --resource planets --query tat
Check
1 result
Tatooine
=> Terrain: desert
=> Population: 200000

Tidying Up Right now, all our code is in one large index.ts file. Let's create an api.ts file and move most of the API logic into it.
Don’t forget to add export to the front of all the types, interfaces and functions in this file, as we’ll need to import them in index.ts: // api.ts export type StarWarsResource = "films" | "people" | "planets"; export interface Person { name: string; films: string[]; height: string; mass: string; homeworld: string; } export interface Film { title: string; episode_id: number; director: string; release_date: string; } export interface Planet { name: string; terrain: string; population: string; } export async function queryStarWarsAPI( resource: StarWarsResource, query: string, ): Promise<{ count: number; results: object[]; }> { const url = `{resource}/?search=${query}`; const response = await fetch(url); const data = await response.json(); return data; } And then we can import them from index.ts: import { Film, Person, Planet, queryStarWarsAPI, StarWarsResource, } from "./api.ts" Now our index.ts is looking much cleaner, and we’ve moved all the details of the API to a separate module. Distributing Let’s say we now want to distribute this script to a friend. We could share the entire repository with them, but that’s overkill if all they want to do is run the script. We can use deno bundle to bundle all our code into one JavaScript file, with all the dependencies installed. That way, sharing the script is a case of sharing one file: $ deno bundle index.ts out.js And we can pass this script to deno.run, just as before. The difference now is that Deno doesn’t have to do any type checking, or install any dependencies, because it’s all been put into out.js for us. This means running a bundled script like this will likely be quicker than running from the TypeScript source code: $ deno run --allow-net=swapi.dev out.js --resource films --query phantom 1 result The Phantom Menace => Directed by George Lucas => Released on 1999-05-19 Another option we have is to generate a single executable file using deno compile. 
Note that, at the time of writing, this is considered experimental, so tread carefully, but I want to include it here as I expect it will become stable and more common in the future. We can run deno compile --unstable --allow-net=swapi.dev index.ts to ask Deno to build a self-contained executable for us. The --unstable flag is required because this feature is experimental, though in the future it shouldn't be. What's great about this is that we pass in the security flags at compile time — in our case allowing access to the Star Wars API. This means that, if we give this executable to a user, they won't have to know about configuring the flags:

$ deno compile --unstable --allow-net=swapi.dev index.ts
Check
Bundle
Compile
Emit deno-star-wars-api

And we can now run this executable directly:

$ ./deno-star-wars-api --resource people --query "jar jar"
1 result
Jar Jar Binks
=> Height: 196
=> Mass: 66

I suspect that in the future this will become the main way to distribute command-line tools written in Deno, and hopefully it's not too long before it loses its experimental status. Conclusion In this article, through building a CLI tool, we've learned how to use Deno to fetch data from a third-party API and display the results. We saw how Deno implements support for the same Fetch API that we're accustomed to using in the browser, how fetch is built into the Deno standard library, and how we can use await at the top level of our program without having to wrap everything in an IIFE. I hope you'll agree with me that there's a lot to love about Deno. It provides a very productive environment out of the box, complete with TypeScript and a formatter. It's great to not have the overhead of a package manager, particularly when writing small helper tools, and the ability to compile into one executable means sharing those tools with your colleagues and friends is really easy.
I tried to use a different forum but I was completely out of my league with the responses. I have to use this algorithm:

for each i from range(n)
    for each j from range(n-i)
        if A[j-1] < A[j]
            swap(A[j], A[j-1])

Here's what I've been playing with:

def main():
    try:
        file = open(input("Please enter the name of the file you wish to open: "))
        A = file.read().split()
        n = len(A)
        print("These following", n, "numbers are in the inputted file:\n", A)
        new_list = []
        while A:
            minimum = A[1]
            for i in range(n):
                for j in range(n-1):
                    if A[1] < A[2]:
                        minimum = A[1]
                        new_list.append(minimum)
                        A.remove(minimum)
            minimum = A[2]
            for i in range(n):
                for j in range(1, 3):
                    if A[2] < A[1]:
                        minimum = A[2]
                        new_list.append(minimum)
                        A.remove(minimum)
            minimum = A[2]
            for i in range(n):
                for j in range(1, 2):
                    if A[3] < A[4]:
                        minimum = A[3]
                        new_list.append(minimum)
                        A.remove(minimum)
            minimum = A[3]
            for i in range(n):
                for j in range(1, 1):
                    if A[4] < A[0]:
                        minimum = A[4]
                        new_list.append(minimum)
                        A.remove(minimum)
            #minimum = A[4]
            #for i in range(n):
                #for j in range(n-i):
                    #if A[4] < A[0]:
                        #minimum = A[4]
                        #new_list.append(A[4])
            #if A < minimum:
            #    minimum = A
            print(new_list)
            break
        file.close()
    except IOError as e:
        print("({})".format(e))

    #new_list = []
    #while A:
        #minimum = A[0]
        #for x in A:
            #if x < minimum:
                #minimum = x
        #new_list.append(minimum)
        #data_list.remove(minimum)
    #print(new_list)

main()
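For reference, the algorithm quoted at the top is a bubble sort, and it can be translated to Python almost line for line. A sketch (note that the `A[j-1] < A[j]` comparison, as given, sorts in descending order; use `>` for ascending, as below):

```python
def bubble_sort(a):
    """In-place bubble sort following the question's nested loops."""
    n = len(a)
    for i in range(n):
        # Each outer pass bubbles the largest remaining value to the
        # end, so the inner range shrinks by i every time.
        for j in range(1, n - i):
            if a[j - 1] > a[j]:                    # '>' sorts ascending
                a[j - 1], a[j] = a[j], a[j - 1]    # swap(A[j], A[j-1])
    return a

print(bubble_sort([31, 4, 15, 9, 2]))  # [2, 4, 9, 15, 31]
```

The while loop, the hard-coded indices (A[1], A[2], ...) and the repeated blocks in the attempt above aren't needed: the two for loops plus the swap are the whole algorithm.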
28 May 2010 12:18 [Source: ICIS news] SINGAPORE (ICIS news)--The Indian toluene market is on a downtrend for the second consecutive week as traders have been quickly liquidating cargoes due to a recent fall in regional prices, high stocks in Kandla and weak domestic demand, traders said on Friday. Ex-tank values in the western Indian ports of Kandla and Mumbai were reported at Indian rupees (Rs) 39-40/kg ($0.84-0.86/kg), Rs1.5-2/kg lower than on 21 May, according to global chemical market intelligence service ICIS pricing. "These prices are the result of panic, [as] two-tier traders are unable to hold the product," said a key importer. Stock levels in Kandla had risen in the past few weeks due to high imports, with most traders estimating stocks at above 20,000 tonnes. Due to the high stocks, demand in the domestic market was sluggish, and buyers were also cautious following the volatility in energy and regional aromatics markets, according to traders. Spot toluene prices in Asia rose on Friday and were hovering at $755-765/tonne FOB (free on board) Korea, after falling below $700/tonne FOB Korea on 25 May due to weak crude values, according to ICIS pricing. ($1 = Rs46.61)
Actually, it was George who pointed out to me that the qualified and unqualified forms are different as far as XML is concerned. I think ideally, WSS4J would need to be configurable by the service provider as to what should happen when the wsse:Type attribute is present. If you don't need/want to accommodate VS 2008/WCF clients then the current behaviour is correct. However, from a practical point of view, it would be nice to be able to configure WSS4J to accept wsse:Type when Type is absent. Naturally, this would imply that there couldn't be application specific semantics to a wsse:Type attribute. Perhaps it would make sense to implement this as a distinct UsernameTokenProcessor so as to not contaminate the current one with deviations from the spec? I wouldn't expect the default behaviour to accept wsse:Type, but a simple FAQ entry could refer to this other UsernameTokenProcessor and point people to the appropriate place to bug MS to fix WCF. M. On Sat, May 30, 2009 at 3:01 AM, Werner Dittmann < [email protected]> wrote: > Just some info about "MS compatibility modes": some time ago (in > WSS4J 1.0) we had built in a specific MS proprietary mode for > password handling. This mode caused several problems later about > interoperability with other, non-MS implementations. > > Implementing a MS-specific handling could also lead to interop > problems with other implementations that are compliant to the spec. > > As Marc said "wsse:Type" and "Type" and different entities. The > spec allows to specify _additional_ implementation specific > attributes. If an implementation now adds such an attribute and uses > "wsse:Type" for its purposes - what should WSS4J then do? Is it > a "MS misbehaviour" and interpret it as the standard password type > or leave interpretation and handling to the specific implementation? > > That's why XML has name spaces and why implementation must use > name spaces in the correct way. 
> > Best Regards, > Werner > > > Marc Tremblay schrieb: > > Interesting. Not what I'd expect, but I'm sure there's a reason for it. > > > > It's really too bad then that for having been involved in drafting the > > UsernameToken Profile 1.1 spec that MS would mess up their implementation > in > > WCF. > > > > So what would make the most sense as an approach to accommodate the > > qualified Type attribute? An allowQualifiedPasswordType field, or > something > > more general perhaps? > > > > M. > > > > On Fri, May 29, 2009 at 4:54 PM, George Stanchev <[email protected] > >wrote: > > > >> It is not redundant. Read the XML specs. Password is an element. While > >> non-qualified subelements of a qualified element do inherit the > qualified's > >> parent namespace, this is not true for attributes. Attributes that are > not > >> explictly namespace-qualified do not inherit automatically the namespace > of > >> the element they are declared in. No namspace means excacly this - no > >> namespace, not implictly inherit the namespace of its element [1]. > According > >> to the specs, attributes "wsse:Type" and "Type" are two different > entities. > >> > >> George > >> > >> > >> [1] > >> > >> ------------------------------ > >> *From:* Marc Tremblay [mailto:[email protected]] > >> *Sent:* Friday, May 29, 2009 2:24 PM > >> *To:* George Stanchev > >> *Cc:* [email protected] > >> *Subject:* Re: WSS-148 WCF interop issue: Namespace not honored incase > of > >> attributes. > >> > >> I agree that's how the spec is written, but qualifying Type with the > same > >> namespace as Password, while redundant, doesn't change the semantics as > far > >> as XML is concerned. As the spec doesn't specifically forbid namespace > >> qualifying Type, I would expect non-qualified and redundantly qualified > >> forms to be treated as equivalent. > >> > >> Or am I failing to understand how XML works? > >> > >> M. 
> >> > >> On Fri, May 29, 2009 at 3:47 PM, George Stanchev <[email protected] > >wrote: > >> > >>> > >>> > >> > > > >
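George's point about attribute namespaces is easy to verify with any namespace-aware parser. Below is a small illustration using Python's standard xml.etree.ElementTree (the namespace URI is made up for the example; the real WS-Security URI differs):

```python
import xml.etree.ElementTree as ET

# One qualified attribute (wsse:Type) and one unqualified (Type) on the
# same element. Per the XML Namespaces rules, the unqualified attribute
# does NOT inherit the namespace of the element that carries it.
doc = ('<wsse:Password xmlns:wsse="http://example.com/wsse" '
       'wsse:Type="qualified" Type="unqualified">secret</wsse:Password>')
elem = ET.fromstring(doc)

# ElementTree keys namespaced attributes as "{uri}local", so the two
# attributes really are distinct entities, exactly as the spec says:
print(elem.attrib['{http://example.com/wsse}Type'])  # qualified
print(elem.attrib['Type'])                           # unqualified
```

This is why a processor that looks up the plain Type attribute will simply not see a wsse:Type attribute sent by a WCF client.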
Hello all -- found myself stuck on a certain bit of code dealing with a number pattern I put together. I've written out most of the code, but I'm stuck in two areas -- thought maybe someone could help shed some light. It seems like it should be very simple, but I just can't get it to work right yet, whatever I code in. Here's the output I'm trying to achieve:

12345
23451
34512
45123
51234

This is what I've written out so far (stuck spots marked with "?"):

public class NumPattern {
    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++) {
            int next = i;
            for (?) {
                System.out.print(next++);
                if (next > 5)
                    next = 1;
            }
            System.out.print(?);
        }
    }
}

Many thanks in advance!
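The shape of the missing pieces can be sketched in Python (a hypothetical translation, just to show the structure: the inner loop runs exactly five times, and the second print only needs to end the line):

```python
n = 5
for i in range(1, n + 1):
    nxt = i                  # each row starts one higher than the last
    row = ""
    for _ in range(n):       # the inner loop always runs n times
        row += str(nxt)
        nxt += 1
        if nxt > n:          # wrap 6 back around to 1
            nxt = 1
    print(row)
# 12345
# 23451
# 34512
# 45123
# 51234
```

The same two ideas carry over directly: a counting condition on the inner loop, and a line break after it.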
https://www.daniweb.com/programming/software-development/threads/98947/number-pattern-output
CC-MAIN-2017-51
refinedweb
129
88.16
An Introduction to the Helm Tool

Helm is a tool that makes the installation and management of Kubernetes applications efficient. Helm helps you manage Kubernetes Charts. Charts are a collection of information and files needed to create an instance of a running Kubernetes application. There are three main concepts in Helm: the chart (the package of files that defines the application), the repository (the place where charts are collected and shared) and the release (an instance of a chart running in a Kubernetes cluster). Helm has two main components: the Helm client, a command-line tool, and the Tiller server, the in-cluster component that Helm v2 uses to apply charts.

Why you need Helm Charts

The manual deployment of a Kubernetes application, which may have many resources, can be prone to errors such as failure to deploy a resource or typing a wrong input when issuing the `kubectl` command(s). You can avoid these problems by automating the steps in a script. However, the problem with the home-grown automation script is that the logic of the script cannot be easily transferred to a Kubernetes cluster.

Introducing the Redis Enterprise Kubernetes Release

Redis Labs, home of Redis, has been working on a Kubernetes-based deployment of Redis Enterprise for the last few months. We have written our own Kubernetes controller which deploys a Redis Enterprise database service on a Kubernetes cluster. The Redis Enterprise release is made up of many Kubernetes resources, such as service, deployment, StatefulSet and Kubernetes secrets.

How Helm Charts improve the Redis Enterprise Kubernetes Release

During the beta period of product development, we used to deploy all the required Kubernetes resources manually, which was error prone. Synchronizing yaml files between Kubernetes clusters and managing configuration versions started to become a challenge. Helm Charts allow us to deploy the Redis Enterprise service using a single command to a Kubernetes namespace of your choice:

helm install --namespace redis -n 'production' ./redis-enterprise

How do you get started?

It's really that simple!

What's next?

If you would like to start experimenting with our Kubernetes generally available release, please contact [email protected] so that we can help you with your Redis needs.
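For orientation, a chart is just a directory of templates plus a small manifest. A minimal Chart.yaml in the Helm v2 format might look like the following (illustrative values only, not the actual Redis Enterprise chart):

```yaml
apiVersion: v1              # chart API version used by Helm v2
name: redis-enterprise
version: 0.1.0
description: An illustrative chart manifest
```

Everything else (Kubernetes resource templates, default values) lives alongside this file in the chart directory.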
https://redis.com/blog/redis-enterprise-release-using-helm-charts/
CC-MAIN-2022-27
refinedweb
344
52.49
Ok, I have test.py in C:\Documents and Settings\[user]\Application Data\Sublime Text\Packages\User. Contents of this test.py.

hi, you should expect to see hello world inside your edited file. for example, if the cursor was at the end of the last line of code, you should expect: Hello, World!

do you see it? the "Hello, World!"? if it doesn't work for you, you should check the following:
1. open the console
2. do a little change on your script (add an empty line for example)
3. now save the file
4. look at the console

do you see errors? if the python parser gives errors at this stage, the script won't even run, so later nothing will happen. if you want (for example for debug) to print to the console, use

print "Hello, World!\n"

this example was constructed that way, because most plugins are there to help you on your current edited file: do some actions, parsing etc. good luck

You should be fine if you do view.runCommand('test1') rather than view.runCommand('Test1')

Thanks for your replies.

jps - As I said in my original post, I tried view.runCommand('test1') but that didn't work either.

vim - The file I was editing (which was the script itself) remained unchanged. I can change the script so it errors on compilation. Can I turn on enhanced logging so I can see exactly what happens when I call view.runCommand?

Ah, I missed that part. There's no reason the plugin you posted shouldn't work - certainly, it does for me. Just to narrow things down, can you use this instead:

import sublime, sublimeplugin

print "plugin loaded"

class Test1Command(sublimeplugin.TextCommand):
    def run(self, view, args):
        print "command run"

And see what gets printed in the console. Do you see "plugin loaded" when saving the file?

Brilliant. That works exactly as you say it should. (I'm new to Python too so didn't know I could put print commands all over the place!) Also, and quite bizarrely, my other plugin seems to work too.
Now I have two files in C:\Documents and Settings\[user]\Application Data\Sublime Text\Packages\User with a class called Test1Command (jps.py and test.py). How does Sublime know which I want? Also, how do I use all the plugins in the other subdirectories of C:\Documents and Settings\[user]\Application Data\Sublime Text\Packages?

All plugins are put into the same namespace, so you can run them all in the same manner. e.g., you can run the command in Packages/Default/GotoSymbol.py by doing view.runCommand('gotoSymbol'). As to which of your two plugins will be run, I believe it'll just be whichever one is loaded last.

I worked for hours trying to get the most basic plugins, like this one, to work yesterday. I'm using ST3, so maybe there are differences in plugin development between that and ST2. But here are the things I did that seem to have worked. Note that as I tried many different permutations of these, I don't exactly know which ones are the real fix. But if these get you working, then maybe you can start making incremental changes and see when it breaks.

Moved all *.py files that were my plugins into the "Packages" folder. (I read many posts that said that plugin files in the "Packages\User" or "Packages\NameOfPlugin" folder should be read, but I haven't tried them yet. I put it in "Packages" to make sure that finding the files wasn't a problem.)

Restart Sublime after every change. (The docs say you don't have to, but if you have errors in your plugins, it seems that the auto-reload feature doesn't work. Which kinda makes sense. Once I got the errors out of all my plugins, the auto-reload feature started working correctly.)

A few of the simple/tutorial plugins I copied off the web referenced/imported "sublimePlugin", which errored on my ST3 Win7 instance. Apparently the correct library to import is "sublime_plugin". The two places this is used, in basic plugins, are at the very top in the import section, as well as in the class line, which I'll illustrate below...
[code]
# didn't work for me...
class ExampleCommand(sublimePlugin.TextCommand):

# works on ST3:
class ExampleCommand(sublime_plugin.TextCommand):
[/code]

This thread is from 2009, and deals with Sublime Text 1, when the API was very different. I recommend not referencing it. Maybe check out docs.sublimetext.info/en/latest/ ... ugins.html ?
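The naming convention being discussed (class Test1Command invoked as test1, GotoSymbol.py run as gotoSymbol) amounts to stripping the Command suffix and lower-casing the first letter. A tiny sketch of that rule as a standalone helper (hypothetical, not part of the Sublime API; ST3 later switched to snake_case names):

```python
def command_name(class_name):
    # Strip the trailing "Command" and lower-case the first letter,
    # mirroring the old Sublime Text 1 convention described in the thread.
    if class_name.endswith("Command"):
        class_name = class_name[:-len("Command")]
    return class_name[0].lower() + class_name[1:]

print(command_name("Test1Command"))       # test1
print(command_name("GotoSymbolCommand"))  # gotoSymbol
```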
https://forum.sublimetext.com/t/plugins-dont-work-or-im-stupid/431/1
CC-MAIN-2016-44
refinedweb
757
67.35
Member 12 Points Apr 10, 2012 02:16 PM|maddyrafi1987|LINK

in my project directory inside the bin folder i check whether the GoogleTranslator dll is present or not, i check add reference but GoogleTranslator dll is not present in my ajax toolkit. here i want to add a namespace for RavSoft and the GoogleTranslator dll - how?

Error 1: The type or namespace name 'RavSoft' could not be found (are you missing a using directive or an assembly reference?)
Error 2: The type or namespace name 'GoogleTranslator' could not be found (are you missing a using directive or an assembly reference?)

public void translate()
{
    using (RavSoft.GoogleTranslator t = new GoogleTranslator())
    {
        t.SourceLanguage = "English";
        t.TargetLanguage = "Tamil";
        t.SourceText = TextBox1.Text;
    }
}

All-Star 40501 Points Apr 10, 2012 02:21 PM|Rajneesh Verma|LINK

Do the same as given below:

Apr 10, 2012 02:22 PM|bbcompent1|LINK

Friend, I think that this translator is only for win forms. I just went to and it shows win forms and not asp.net.

Apr 10, 2012 02:23 PM|bbcompent1|LINK

That sample is a console application, not asp.net

Member 12 Points Apr 10, 2012 02:30 PM|bbcompent1|LINK

I would suggest you look at this link:

5 replies Last post Apr 10, 2012 02:30 PM by bbcompent1
http://forums.asp.net/p/1791213/4925043.aspx?Re+GoogleTranslator+Dll
CC-MAIN-2014-10
refinedweb
216
62.17
Seam+Enunciate? Aaron Siri Dec 7, 2008 3:24 PM

Hello, I'm trying to get a pure WS environment using Seam and Enunciate. In doing this I'm trying to strip out all of the JSF stuff and then get Enunciate to insert all of the WS stuff. I got the WS parts to work. I was able to make a very simple Web Service object (for both SOAP and REST requests) and was able to call it successfully. However, the Seam parts don't seem to be working well. I made my simple service object a Seam-managed component, and according to the logs Seam detected it and registered it. The problem I'm seeing is that none of the bijection appears to be working. The result is always a null reference. My simple web service is below:

@WebService
@Path("/webservice")
@Name("WebServiceTest")
public class WebServiceTest {

    @In
    CurrentTime currentTime;

    @Logger
    private Log log;

    @GET
    @Path("/result/{name}")
    public WebServiceResult getResult(@PathParam("name") String name) {
        log.info("Hi there", null);
        Date time = currentTime.getCurrentTime();
        WebServiceResult result = new WebServiceResult();
        result.setName(name);
        result.setDate(time);
        return result;
    }
}

In this case both currentTime and log are null. I've tried various combinations of filters/servlets in web.xml and am wondering what is the minimum needed to get this to work. My goal is to keep it as simple as possible. No JSF but have support for the Seam lifecycle, AOP/annotations, EL expressions, etc. Thanks for any assistance. -Aaron

1. Re: Seam+Enunciate? Aaron Siri Dec 7, 2008 3:25 PM (in response to Aaron Siri)
Forgot to mention that I'm using Seam 2.1.1 and Enunciate 1.8. Again, thanks. -Aaron

2. Re: Seam+Enunciate? Scott Basinger Dec 8, 2008 2:42 PM (in response to Aaron Siri)
Did you add <web:context-filter regex-url-pattern="/webservice"/> to your components.xml? I believe that it's required to get Seam components installed for the deployed webservices.

3.
Re: Seam+Enunciate? Charles Akalugwu Aug 30, 2009 8:30 PM (in response to Aaron Siri)
Hi Aaron, I am wondering, did you use eclipse to set up your enunciate? I am having problems getting enunciate to install. I am using eclipse 3.5, jbossas 5.0.0, jbossws 3.2 and richfaces 3.3.1. If you have any pointers on how I can configure my web service project to use enunciate, that would be stellar! cheers. Charlie.
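For reference, the context filter suggested earlier in the thread would typically sit in components.xml roughly as follows. This is a sketch only: the namespace declarations shown are the usual Seam 2 ones, and the pattern value is taken from the reply above:

```xml
<components xmlns="http://jboss.com/products/seam/components"
            xmlns:web="http://jboss.com/products/seam/web">
  <!-- Wraps matching requests in Seam contexts so bijection works -->
  <web:context-filter regex-url-pattern="/webservice"/>
</components>
```

Without Seam's contexts active for the request, @In and @Logger injection points stay null, which matches the symptom described in the original post.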
https://developer.jboss.org/thread/185327
CC-MAIN-2018-17
refinedweb
408
66.94
NAME

Shell - run shell commands transparently within perl

SYNOPSIS

    use Shell qw(cat ps cp);
    $passwd = cat('</etc/passwd');
    @pslines = ps('-ww'),
    cp("/etc/passwd", "/tmp/passwd");

    # object oriented
    my $sh = Shell->new;
    print $sh->ls('-l');

DESCRIPTION

Caveats

This package is included as a show case, illustrating a few Perl features. It shouldn't be used for production programs. Although it does provide a simple interface for obtaining the standard output of arbitrary commands, there may be better ways of achieving what you need. Running shell commands while obtaining standard output can be done with the qx/STRING/ operator, or by calling open with a filename expression that ends with '|'. Note that there are several built-in functions and library packages providing portable implementations of functions operating on files, such as: glob, link and unlink, mkdir and rmdir, rename, File::Compare, File::Copy, File::Find etc.

Using Shell.pm while importing foo creates a subroutine foo in the namespace of the importing package. Calling foo with arguments arg1, arg2,... results in a shell command foo arg1 arg2..., where the function name and the arguments are joined with a blank. (See the subsection on Escaping magic characters.) Since the result is essentially a command line to be passed to the shell, your notion of arguments to the Perl function is not necessarily identical to what the shell treats as a command line token, to be passed as an individual argument to the program. Furthermore, note that this implies that foo is callable by file name only, which frequently depends on the setting of the program's environment.

Creating a Shell object gives you the opportunity to call any command in the usual OO notation without requiring you to announce it in the use Shell statement.
Don't assume any additional semantics being associated with a Shell object: in no way is it similar to a shell process with its environment or current working directory or any other setting.

    cp("/etc/passwd", "/etc/passwd.orig");

That's maybe too gonzo. It actually exports an AUTOLOAD to the current package (and uncovered a bug in Beta 3, by the way). Maybe the usual usage should be

    use Shell qw(echo cat ps cp);

AUTHOR

Larry Wall

Changes by. Rewritten to use closures rather than eval "string" by Adriano Ferreira.
https://metacpan.org/pod/release/RGARCIA/perl-5.9.4/lib/Shell.pm
CC-MAIN-2015-32
refinedweb
381
50.06
#include <CMatchManager.h>

The CMatchManager class is used to manage matches in a raw way. It also provides a higher-level class which can be used to easily run a match. Some methods accept a parameter named osn. This parameter can be used to forward a push notification to the users who are not active at the moment. It is typically a JSON made of attributes which represent language -> message pairs. Here is an example: {"en": "Help me!", "fr": "Aidez moi!"}.

Creates a match, available for joining by other players.

Deletes a match. Only works if you are the one who created it and it is already finished.

Dismisses a pending invitation on a given match. Fails if the currently logged in player was not invited.

Draws one or more randomized items from the shoe (see CreateMatch). lastEventId for next requests.

Fetches the latest info about a match.

Finishes a match. Only works if you are the one who created it in the first place. Other players will be notified in the form of an event of type 'match.finish'. delete attribute was passed.

Destroys a match object (not actually deleting the match or anything, just freeing the resource associated with the match).

Allows to invite a player to join a match. You need to be part of the match to send an invitation. This can be used for private matches as described in the chapter Working with matches.

Joins a match. Other players will be notified in the form of an event of type 'match.join'. lastEventId contained.

Leaves a match. Only works if you have joined it prior to calling this. Other players will be notified in the form of an event of type 'match.leave'.

Lists the matches available to join.

Posts a move in the match, notifying other players in the form of an event of type 'match.move'. lastEventId contained.
http://cloudbuilder.clanofthecloud.mobi/doc/struct_cloud_builder_1_1_c_match_manager.html
CC-MAIN-2018-51
refinedweb
316
77.84
Functional Description

The following figure and the table below describe the key components, interfaces, and controls of the ESP32-PICO-KIT board. Below is the description of the items identified in the figure starting from the top left corner and going clockwise, including the I/O headers on both sides of the board (J2, J3).

Testing

Add the ESP32 package using the Arduino IDE as per the steps below. Starting with 1.6.4, Arduino allows installation of third-party platform packages using Boards Manager. We have packages available for Windows, Mac OS, and Linux (32, 64 bit and ARM).

- Install the current upstream Arduino IDE at the 1.8 level or later. The current version is at the Arduino website.
- Start Arduino and open the Preferences window.
- Enter one of the release links above into the Additional Board Manager URLs field. You can add multiple URLs, separating them with commas.
- Open Boards Manager from the Tools > Board menu and install the esp32 platform (and don't forget to select your ESP32 board from the Tools > Board menu after installation).

You will see the ESP32 Pico Kit listed in the ESP32 Arduino list.

I used the WifiScan example; this is a hello-world type example for detecting WiFi networks nearby.

#include "WiFi.h"

void setup()
{
    Serial.begin(115200);

    // Set WiFi to station mode and disconnect from an AP if it was previously connected
    WiFi.mode(WIFI_STA);
    WiFi.disconnect();
    delay(100);

    Serial.println("Setup done");
}

void loop()
{
    Serial.println("scan start");

    // WiFi.scanNetworks will return the number of networks found
    int n = WiFi.scanNetworks();
    Serial.println("scan done");
    if (n == 0) {
        Serial.println("no networks found");
    } else {
        Serial.print(n);
        Serial.println(" networks found");
        for (int i = 0; i < n; ++i) {
            // Print SSID and RSSI for each network found
            Serial.print(i + 1);
            Serial.print(": ");
            Serial.print(WiFi.SSID(i));
            Serial.print(" (");
            Serial.print(WiFi.RSSI(i));
            Serial.print(")");
            Serial.println((WiFi.encryptionType(i) == WIFI_AUTH_OPEN) ? " " : "*");
            delay(10);
        }
    }
    Serial.println("");

    // Wait a bit before scanning again
    delay(5000);
}

The output was

scan start
scan done
2 networks found
1: mynetwork1(-63)*
2: mynetwork2(-74)*

Parts List

The board comes in at under $14.

ESP32-PICO-KIT - ESP32 SiP development board with PICO-D4, male/female headers
https://www.esp32learning.com/hardware/a-look-at-the-esp32-pico-kit-development-board.php
CC-MAIN-2021-39
refinedweb
254
62.58
#include <vtkMPIGroup.h> This class has been deprecated in VTK 5.2. Use vtkProcessGroup instead. Definition at line 35 of file vtkMPIGroup.h. Reimplemented from vtkObject. Definition at line 40 of file vtkMPIGroup.h. Construct a vtkMPIGroup with the following initial state: Processes = 0, MaximumNumberOfProcesses = 0. Reimplemented from vtkObject. Allocate memory for N process ids where N = controller->NumberOfProcesses Add a process id to the end of the list (if it is not already in the group). Returns non-zero on success. This will not add a process id >= MaximumNumberOfProcessIds. Remove the given process id from the list and shift all ids, starting from the position of the removed id, left by one. Find the location of a process id in the group. Returns -1 if the process id is not on the list. Get the process id at position pos. Returns -1 if pos >= max. available pos. Copy the process ids from a given group. This will copy N ids, where N is the smallest MaximumNumberOfProcessIds. Returns the number of ids currently stored. This method can be used to copy the MPIGroup into a vtkProcessGroup, which is the successor to this class. Copies all the information from group, erasing previously stored data. Similar to copy constructor Allocate memory for numProcIds process ids Definition at line 84 of file vtkMPIGroup.h. Definition at line 97 of file vtkMPIGroup.h. Definition at line 98 of file vtkMPIGroup.h. Definition at line 99 of file vtkMPIGroup.h. Definition at line 100 of file vtkMPIGroup.h.
http://www.vtk.org/doc/release/5.4/html/a01053.html
crawl-003
refinedweb
253
70.09
After the launch of v2.0 of my current project (TabMerger), I decided to learn/integrate a few items that really pushed my skills to the next level. Best of all, adding these to my projects made me very excited to work on new projects and/or refactor existing ones. Here is TabMerger's repository (lbragile/TabMerger) which you can view to get ideas about how to add any of the features discussed below.

Here are the concepts I urge you to learn as they will hopefully bring the same excitement into your coding life - don't fall into the trap of pushing them off/procrastinating.

Table of Contents 📑
- Testing
- Linting - Static Testing
- TypeScript
- Module Aliasing
- Documentation
- Conclusion

1. Testing 🧪

I highly recommend Jest as it is available right out of the box when you use React (CRA), but you could also use other test runners like Mocha, Karma, etc.

Why?
Do you want to manually test every little feature of your code every time you change/update/add something? Yeah, no thanks, I would rather have a testing script that automates this for me. Plus it is super rewarding once you understand the main concepts. This is probably the most time consuming of all the items listed here.

- Start with the basics - Unit Testing.
- Look into Mutation Testing - this is insanely amazing once you understand how to use it! Stryker is the way to go here.
- Once you understand your coverage reports from Jest & Stryker, add Integration Tests and E2E Tests with Jest Puppeteer, which is another easy-to-integrate module with React. Disclaimer: I haven't done this step yet for TabMerger but experimented with this in the past and it is very fun - feel free to contribute 😊. This should be simpler than unit testing as it is "Black Box" (you only care about input and output) rather than a unit test's "White Box" approach.

TabMerger Testing

Here is a brief snapshot of TabMerger's current testing performance:

As you can see, with these test scripts, I can check the logic of all the files in my application with the help of around 250 tests in less than 20 seconds. This gives me a great deal of confidence that new features do not break existing code. There is still some room for improvement (uncovered lines and not exactly 100%), but the current state lets me easily add new features without endlessly pursuing a 100% coverage report - after all 99.5% rounds up 😉.

You can use npm run test:all to get these results.

TabMerger also uses mutation testing and currently scores above 95% (only 67/1499 mutants are undetected across all files). I've parallelized the mutation testing scripts with a matrix build in GitHub to speed up the lengthy execution - from 12 hours to 5 hours.

As you can see from the below post, testing is a relatively "hidden" gem that many developers are not aware of or simply need the reason to get started.
Additionally, almost all experienced testers recommended Stryker for mutation testing!

2. Linting - Static Testing 📐

You must have heard about linting by now and how amazing it is, but never wanted to delve into it since it sounds too complicated for little to no benefit. I felt exactly the same way until I started using it - and let me tell you, linting is beyond amazing.

Source Code Linting

Add ESLint to your project (even if you plan to use TypeScript). Imagine writing a very long essay/thesis in a Word document without grammar highlighting - do you think you will be flawless? Isn't it nice to be warned of any inaccuracies/errors you made right away? That's exactly ESLint's purpose inside your VSCode IDE (assuming everything is set up right). You can configure it to follow specific rules according to your liking.

So far, this fixed a lot of issues in my code - from small to large - and even allowed me to learn new Javascript concepts. For example, I learned that const means constant reference rather than simply constant value, so you could actually have a const array whose elements can be changed, added or removed. The same is not true for a const variable. That is,

const arr: number[] = [];
arr.push(1); // valid
console.log(arr); // [1]

const val = 5;
val = 1; // error

Here is an example of what ESLint looks like:

As you can see, the instructions are very clear and you are even provided with quick actions from VSCode which can be accessed with ctrl + .

Style Sheet Linting

Now that you have ESLint working, you should consider StyleLint for your styling files (CSS, SASS, LESS, etc.). This helps reduce duplicates that are scattered across your many files on large projects.
StyleLint also enforces best standards such as the following:
- Spacing Errors
- Unit Errors
- Duplicates

Additionally, StyleLint detects when you forget to add blank lines between blocks of styles and/or if you have an extra space in a block comment like:

/* <- space
 * comment
 */

TabMerger Linting

TabMerger uses both linting types dynamically (through the use of IDE extensions: ESLint & stylelint) and manually:

npm run lint → ESLint
npm run lint:style → StyleLint

Manually linting will produce a report in the command line that will outline all the errors across all files so that you can quickly find them (rather than opening each file one by one). Here is an example:
It will take longer to get used to TypeScript than linting but it should not take too long before you start seeing how useful TS is. 4. Module Aliasing 💥 Tired of looking up your directory tree to know the relative path of your import? This can certainly slow down your workflow and is not very practical when you consider the fact that users/contributors do not want to look up your structure just to use your module. It is very common for npm/yarn packages to have their module paths aliased to simpler names that are mapped to the correct paths. To do this in TS, you can add the baseURL and paths options to your TS configuration file. If done right, this allows you to import { A } from @A/A instead of import { A } from ../components/A/A. Example from one of TabMerger's files: Without Aliasing With Aliasing Unfortunately, React's build scripts prevent the paths option in the tsconfig.json, so a work around is needed to get this working properly: npm i -D react-app-rewired - Add config-overrides.jsto root (see TabMerger's file) - Ensure aliasobject matches your aliases as shown in the file from the previous step - change start: react-scripts startto start: react-app-rewired startand the same for the build script (see TabMerger's file) You will also need to adjust your jest.config.js by adding the alias' and their corresponding true paths to the moduleNameMapper property. Note that you can use RegExp variables to shorten these key/value pairs. 5. Documentation 📚 By now, you might have noticed that the functions I posted in some of the above images have a specific comment syntax. Something like: This is done on purpose to generate good looking documentation as seen here. The main modules which generate these documentation reports are jsDoc (Javascript) and typeDoc (TypeScript). Commenting your code like this will make it much easier to follow for anyone who visits it for the first time. It might also allow you to remember that hard to understand part in your code. 
The added bonus of using such comments for documentation is that it makes the transition from JS to TS much smoother as you can "infer" types from the comments using VS Code to automatically type your function arguments and return values. You can see that only specific modules are shown on the documentation's main page. This can be controlled by a configuration file and by adding the following to the top of your respective files: /** * @module MODULE_NAME */ TabMerger Documentation Generation In TabMerger, you can use the following commands to generate documentation reports: npm run jsdoc(JavaScript) npm run typedoc(TypeScript) Conclusion I hope my suggestions are useful to some of you. I urge you to take action and add these items to your project as soon as possible. I can almost guarantee that you will instantly be more excited to work on your project(s) (granted everything works as expected). Worst thing that can happen is you cannot get one of the items to work properly or just don't find it that useful. In that case, you can simply revert back to the good old days 😊. I am actually in the process of looking for work so have a "lot" of spare time to help anyone who is confused or stuck on any of these items. Cheers 🥂 Discussion (2) The module aliases can come back to haunt you though because it makes it easy to accidentally create circular dependencies. I use them aswllu but with the rule of thumb to not use aliased imports within the module. So if I'm working on a component which imports another component, I'll use the relative path for that import. Great point! As long as you provide aliases to the main modules of your project (not their internals), circular dependencies should not happen. Yes, you could use relative imports within a module without any problems, but I think it leads to slower programming as you have to determine the correct relative path each time - unless your IDE handles this for you. 
In my case aliasing greatly increased my workflow speed - but your comment brought forward a valid point that I previously did not consider.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/lbragile/5-things-to-include-in-your-project-asap-2447
CC-MAIN-2021-21
refinedweb
2,009
57.4
I am trying to write a program that reads in a line of text and counts the number of words and the number of instances of each letter. After you type in a line of text you hit return and it should give your output. Sometimes if there are some small words, three or less letters, you will need to hit return a second time. It doesn't count all small words. Also the letter count is always the same:

The number of each letter you typed in is:
b 134520536
f 4
h 134520536
l 134520536
p 134520536
q 10
r 4
x 134520536

I had it print out 'c' right after the cin.get statement and it seems to be reading in only every other letter. Any help will be greatly appreciated. Thank you in advance.

Code:
#include <iostream>
#include <cctype>
using namespace std;

void introduction();
//Explains what the program does.

void input_count_output(int& num_words);
//Asks user to input text from the keyboard and then counts
//and outputs to the screen the number of words and the number
//of each letter typed in.

int main()
{
    int num_words = 0;
    introduction();
    input_count_output(num_words);
    return 0;
}

//uses iostream
void introduction()
{
    cout << "This program will ask the user to input a line of text\n"
         << "and then will output the number of words and the number of\n"
         << "each type of letter.\n" << endl;
}

//uses iostream
//uses string
//uses cctype
void input_count_output(int& num_words)
{
    char c;
    int letter_count[26];
    cout << "Please enter a line of text, then press return:\n";
    do
    {
        cin.get(c);
        if (!cin.get(c)) //breaks if character is not able to be read in
            break;
        if (isalpha(c))
        {
            if (isupper(c))
                c = tolower(c);
        }
        ++letter_count[c - 'a'];
        if (c == '\n' || c == '\t' || c == '"' || c == ',' || c == ';'
            || c == ':' || c == '.' || c == '?' || c == '!' || c == ' ')
            ++num_words;
    } while (c != '\n');

    //now for output
    cout << "The number of words you typed in is " << num_words << endl
         << "The number of each letter you typed in is:\n";
    for (int i = 0; i < 26; i++)
    {
        if (letter_count[i] > 0)
        {
            c = 'a' + i;
            cout << c << " " << letter_count[i] << endl;
        }
    }
}
https://cboard.cprogramming.com/cplusplus-programming/88340-help-strings.html
CC-MAIN-2017-17
refinedweb
349
74.83
Fill between two vertical lines in matplotlib

I went through the examples in the matplotlib documentation, but it wasn't clear to me how I can make a plot that fills the area between two specific vertical lines. For example, say I want to create a plot between x=0.2 and x=4 (for the full y range of the plot). Should I use fill_between, fill or fill_betweenx? Can I use the where condition for this?
https://python-decompiler.com/article/2014-04/fill-between-two-vertical-lines-in-matplotlib
Getting Started

We're going to start off with 2 psd's I made and get those working in an iPhone page. I am using images for the background and header, although you could use just straight colors instead of images. The plus side to not using images is that it obviously loads faster, but also when switching between landscape and portrait the images take a moment to load, depending on how large they are. You can find the source psd files here or you can make your own.

Something to keep in mind is that we are building a page specifically for the iPhone or iTouch. If you do not have the device yourself, you can download the iPhone SDK freely from Apple and it includes an iPhone simulator.

If you would like to detect the iPhone on your standard browser page and either load the iPhone css and html through conditional statements or send the user to a different page entirely, use the following code:

<script type="text/javascript">
var browser = navigator.userAgent.toLowerCase();
var users_browser = (browser.indexOf('iphone') != -1);
if (users_browser)
{
document.location.href='';
}
</script>

The code above explained:

- Line 2: Create a variable that holds the user's type of browser (among other things), lowercased so the test below is case-insensitive.
- Line 3: Set users_browser to true if the iPhone browser is present (we test for 'iphone' because the user-agent string has been lowercased).
- Line 4 - 8: An if statement that redirects the user to an "iPhone formatted page" if users_browser is true (meaning the user is using an iPhone or iTouch to view the current page).

Below the code will use html conditional statements to hide the code from a regular browser.

<!--#if expr="(${HTTP_USER_AGENT} = /iPhone/)"-->
<!-- place iPhone code in here -->
<!--#else -->
<!-- place standard code to be used by non iphone browser. -->
<!--#endif -->

Step 1: The HTML

So we now know how to point the user to your iPhone page if they are on an iPhone or iTouch device.
Now, we will start working on the iPhone HTML page; the code below has some key differences from a regular XHTML transitional document.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" xml:
<head>
<meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;">
<title>My iPhone Page</title>
<link rel="apple-touch-icon" href="images/myiphone_ico.png"/>
<link rel="StyleSheet" href="css/iphone_portrait.css" type="text/css" media="screen" id="orient_css">

The code above explained line by line:

- Line 1 - 5: This is a standard XHTML 1.0 Transitional Doctype. Nothing special yet.
- Line 6: This line is iPhone and iTouch specific. It sets initial values for the viewport in the device's browser. width=device-width states the width of the page to be the same as the width of the device. initial-scale sets the starting point for the zoom of the page; maximum-scale is how much the page can be scaled up.
- Line 9: This link element points to the web page's icon. This is used when a user saves the page to their "Home Screen".
- Line 10: A link element points to the iPhone style sheet. This element has the id orient_css assigned to it. This is so that we can point to it with javascript to change the css file it points to when it comes to adjusting the layout for the orientation of the device.

Step 2: Laying Out The Divs

We now continue with the rest of the html before we add any javascript functions for orientation detection. Start with ending the head and then start the body. In the body element we add onorientationchange="orient();". So I just lied, that is a bit of javascript, but this is needed to call our "orient" function (we'll go over this in a bit) whenever the device detects a different orientation.

</head>
<body onorientationchange="orient();">
<div id="wrap">
<div id="header">
</div>
<div id="content">
<p>This is the main content area of the page.
</p>
<p>Using css and javascript we can manipulate any of these divs using an alternate css file. The css files in this project are for landscape and portrait views.</p>
<p>Some more filler text here to demonstrate the page.</p>
</div>
<div id="bottom">
</div>
</div>
</body>
</html>

Step 3: The Orientation Javascript

In the head of the page you will want to place the code seen below.

<script type="text/javascript">
function orient()
{
switch(window.orientation){
case 0:
document.getElementById("orient_css").href = "css/iphone_portrait.css";
break;
case -90:
document.getElementById("orient_css").href = "css/iphone_landscape.css";
break;
case 90:
document.getElementById("orient_css").href = "css/iphone_landscape.css";
break;
}
}
window.onload = orient;
</script>

switch(window.orientation) works off of the onorientationchange attribute in the body element. This will check to see if the current rotation is equal to the "case value"; if it matches, it will execute what is after the colon. After an orientation has been matched it breaks out of orient(). window.onload = orient; registers the orient function to run when the page first finishes loading. After each case (value): we have javascript pointing to the link element's id that our css file is attached to. Depending on the case value, 0, 90 or -90 (there is also 180, but it is not supported on the iPhone at this time), the portrait or landscape css file is attached to the href attribute in the link element. 0 is upright (portrait), 90 is landscape turned counter clockwise, -90 is landscape turned clockwise, and 180, although not supported yet, would represent the device being upside down.

Step 4: Implementing The CSS

Even with all of this code, the page doesn't do much. That's because we need to add background images and style it all. We will create 2 css files, one called iphone_portrait.css and another called iphone_landscape.css. We will place the portrait css file into the link element as the default css file to use.
body { background-color:#333; margin-top:-0px; margin-left:-0px; }
#wrap { overflow:auto; width:320px; height:480px; }
#header { background:url(../images/header.jpg); background-repeat:no-repeat; height:149px; }
#content { background:url(../images/middle.jpg); background-repeat:repeat-y; margin-top:-5px; }
p { margin:5px; padding-left:25px; width:270px; font-size:10px; font-family:arial,sans-serif; }
#bottom { background:url(../images/bottom_corners.jpg); background-repeat:no-repeat; height:31px; margin-top:-5px; }

The above code is for the iphone_portrait.css file and is rather straightforward. Some things to note are:

- In the wrap style description, overflow:auto makes sure floated items are kept inside the wrap div to keep the page nice and tidy.
- The dimensions for the page are 320px wide by 480px tall. Be sure to state this in the wrap div.

Below is the code to be placed inside the iphone_landscape.css file. The only differences between the portrait and landscape css files are the background images, the wrap dimensions are reversed and the margins are adjusted accordingly.

body { background-color:#333; margin-top:-0px; margin-left:-0px; }
#wrap { overflow:auto; width:480px; height:320px; }
#header { background:url(../images/l_header.jpg); background-repeat:no-repeat; height:120px; }
#content { background:url(../images/l_middle.jpg); background-repeat:repeat-y; margin-top:-5px; }
p { margin:5px; padding-left:25px; width:370px; font-size:10px; font-family:arial,sans-serif; }
#bottom { background:url(../images/l_bottom_corners.jpg); background-repeat:no-repeat; height:37px; margin-top:-5px; }

If you are using my sliced background images, your page should now look like the image below when in portrait mode. Or, in landscape mode?

Where To Go From Here?

So now that you have a page formatted and styled for the iPhone and iTouch, what else can you do?
Well, if your page is meant to be more of a web app you may want to check out the IUI by Joe Hewitt, which is a framework that makes your pages look like native iPhone or iTouch apps. Also keep in mind that you can set 3 specific css files; so you can have one css file that styles the page if it's turned clockwise to landscape and a different file again for when it's turned counter clockwise to landscape. This will allow for some interesting outcomes. Good luck!
https://code.tutsplus.com/tutorials/learn-how-to-develop-for-the-iphone--net-443
How can I delete the temporary files from my temp directory via python scripting? My stream executes 1600 iterations per day and includes Text Analytics, so the overhead of temp files is nearly 80 GB of disk space per day! I tried the code below, but SPSS Modeler doesn't like the "os" methods.

import os

try:
    tmpdir = "C:/Users/******/**********/*******/Modeler_TMP_Files"
    filelist = [f for f in os.listdir(tmpdir)]
    for f in filelist:
        os.remove(os.path.join(tmpdir, f))
except modeler.api.ModelerException, e:
    print "Deleting the temp files failed:", e.getMessage()

Do you have any idea? Thanks!

Answer by Dominic ST (28) | Dec 17, 2018 at 08:49 AM

You have to set your own temp_directory, "C:/*****/*****/*****/*******/Modeler_TMP_Files", in options.cfg of SPSS Modeler. Then the code below does the trick.

import shutil
import os

path = os.path.normcase(r"C:/*******/*********/********/**********/Modeler_TMP_Files/")
if os.path.exists(path):
    for folderName, subfolders, fileNames in os.walk(path):
        for subfolder in subfolders:
            subPath = os.path.join(folderName, subfolder)  # join the names; concatenating them drops the path separator
            shutil.rmtree(subPath)

Notice: The rmtree method deletes everything in each subfolder, so the Modeler_TMP_Files folder is empty after executing the code snippet!
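A variant of the accepted answer that also removes loose files (not just subfolders) can be written with a single listdir pass. This is a sketch demonstrated on a throwaway directory; the temp directory name is a stand-in for whatever temp_directory you set in options.cfg:

```python
import os
import shutil
import tempfile

def empty_directory(path):
    """Delete every file and subfolder inside `path`, keeping `path` itself."""
    if not os.path.isdir(path):
        return
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            shutil.rmtree(full)   # remove subfolder and all its contents
        else:
            os.remove(full)       # remove a plain file

# Demonstration on a throwaway directory standing in for Modeler_TMP_Files
demo = tempfile.mkdtemp()
os.mkdir(os.path.join(demo, "sub"))
open(os.path.join(demo, "scratch.tmp"), "w").close()
empty_directory(demo)
print(os.listdir(demo))  # → []
```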
https://developer.ibm.com/answers/questions/484954/spss-modeler-delete-temporary-files-with-python/
The Go (Golang) programming language comes with a tool called go fmt. It's a code formatter, which formats your code automagically (alignments, alphabetic sorting, tabbing, spacing, idioms...). It's really awesome. So I've found this little autocommand which utilizes it in Vim, each time the buffer is saved to file.

au FileType go au BufWritePre <buffer> Fmt

Fmt is a function that comes with the Go vim plugin. This is really great, but it has 1 problem. Each time the formatter writes to the buffer, it creates a jump in undo/redo history. Which becomes very painful when trying to undo/redo changes, since every 2nd change is the formatter (making the cursor jump to line 1). So I am wondering, is there any way to discard the latest change from undo/redo history after triggering Fmt?

EDIT: Ok, so far I have:

au FileType go au BufWritePre <buffer> undojoin | Fmt

But it's not all good yet. According to :h undojoin, undojoin is not allowed after undo. And sure enough, it fires an error when I try to :w after an undo. So how do I achieve something like this pseudo-code:

if lastAction != undo then
    au FileType go au BufWritePre <buffer> undojoin | Fmt
end

If I get this last bit figured out, I think I have a solution.

I attempted to use @pepper_chino's answer but ran into issues where if fmt errors then vim would undo the last change prior to running GoFmt. I worked around this in a long and slightly convoluted way:

" Fmt calls 'go fmt' to convert the file to go's format standards. This being
" run often makes the undo buffer long and difficult to use. This function
" wraps the Fmt function causing it to join the format with the last action.
" This has to have a try/catch since you can't undojoin if the previous
" command was itself an undo.
function! GoFmt()
" Save cursor/view info.
let view = winsaveview()
" Check if Fmt will succeed or not. If it will fail run again to populate location window. If it succeeds then we call it with an undojoin.
" Copy the file to a temp file and attempt to run gofmt on it let TempFile = tempname() let SaveModified = &modified exe 'w ' . TempFile let &modified = SaveModified silent exe '! ' . g:gofmt_command . ' ' . TempFile call delete(TempFile) if v:shell_error " Execute Fmt to populate the location window silent Fmt else " Now that we know Fmt will succeed we can now run Fmt with its undo " joined to the previous edit in the current buffer try silent undojoin | silent Fmt catch endtry endif " Restore the saved cursor/view info. call winrestview(view) endfunction command! GoFmt call GoFmt()
https://www.dowemo.com/article/70428/how-to-remove-history-records-from-the-golang-formatter
My previous blog of the Jetson TX2/Pixhawk 2 build includes the use of an RFDesign RFD 868x telemetry unit. Here's the detail of how I set it up. As usual, the post can be found at.

Companion computer was a Raspberry Pi running Dronekit and a standard USB webcam. All programming in Python. More details on my blogsite here.

Here's a quick technical post for anyone attempting to harness the capabilities of a Realsense D435 camera on a Jetson TX2. For me, this is about getting usable depth perception on a UAV, but it has proved more problematic than I originally anticipated. The Intel Realsense D435 depthcam is small enough to mount on a UAV and promises enough range for object detection and avoidance at reasonable velocities. My intention is to couple it with the machine learning capabilities of the Jetson TX2 to improve autonomous flight decision making. The problem is that the Intel Realsense SDK2 does not apparently support the ARM processor of the Jetson TX2, as I write. This post links to my blog article which aims to provide some simple installation instructions that now work for me, but took a long time to find out! (Full blog article link is).

In common with last year, the competition is to build a drone to autonomously navigate around a circuit, marked out by a red line. However, the event was altogether bigger this year with three flight arenas, many activities organised by sponsors, FPV racing and even a full two-seat glider flight simulator. It was great to see this event continue to build on last year's strengths and to become more popular still. It has also attracted international attention, with one team coming all the way from Moscow (more of them in my blog!).
The idea was to produce a competitive drone equipped with camera, optical flow, Pixhawk flight control unit and enough code to get drone-coding newbies off the ground, so to speak. The code was to be implemented in Python on a Raspberry Pi companion computer. So it wasn't particularly expected that the students would compete in the main competition, more that they would learn from a real world experience. Who would have known that our newbie team would come 2nd?

I have documented all the code on GitHub and put together a full blog with links at. I hope it is of interest!

The test took place in early morning with glancing sunlight on dew-soaked webbing - great for walking the dog but not so good for computer-vision. Image recognition is by OpenCV on a Raspberry Pi 3 as explained in previous blogs. Frame rates of over 40 fps are achieved by:

The reality is that the PiCam is only delivering 30fps, so many frames get processed twice. The position and bearing of the line is calculated in NED space from the orientation of the UAV and the pitch of the camera. A velocity vector is calculated to partly follow the line and also to 'crab' over it. The velocity vector is sent to the Pixhawk via Dronekit, set to a fixed magnitude. The Pixhawk is running Arducopter in Guided flight mode. In these videos, the velocity is set at 1 m/s, 1.2 m/s, 1.4 m/s and finally 1.75 m/s.

To get lower and faster, we need to find a way to keep the track in the field of view. Some possibilities are:

So back to the drawing board, but in the next field test maybe we'll target a velocity of 2m/s at 1.5m altitude - so faster and lower. More information on the Groundhog at
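The 'partly follow the line, partly crab over it' velocity calculation described above can be sketched as below. The sign convention and crab gain are illustrative assumptions, not the Groundhog's actual values; only the fixed-magnitude NED output mirrors the post:

```python
import math

def follow_velocity(bearing_rad, cross_track_m, speed=1.0, crab_gain=0.5):
    """Blend 'fly along the line' with 'crab back over it', then rescale so
    the commanded speed is a fixed magnitude. Gains and signs here are
    illustrative assumptions, not tuned Groundhog values."""
    # Unit vector along the detected line, in NED (North, East)
    along_n, along_e = math.cos(bearing_rad), math.sin(bearing_rad)
    # Perpendicular unit vector, used to crab back towards the line
    perp_n, perp_e = -along_e, along_n
    vn = along_n - crab_gain * cross_track_m * perp_n
    ve = along_e - crab_gain * cross_track_m * perp_e
    norm = math.hypot(vn, ve)
    # Fixed-magnitude NED velocity; down component zero for level flight
    return speed * vn / norm, speed * ve / norm, 0.0

# Line bearing due north, vehicle displaced 2 m from the line
vn, ve, vd = follow_velocity(0.0, 2.0)
```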
Time and weather have eventually converged to permit testing for the oval track - in this case a 50m oval comprising 50mm wide red webbing. The test turned out to be quite successful, with following speeds of 1.5m/s achieved under autonomous control provided by an on-board Raspberry Pi 3. This is significantly faster than the winning UAV in MAAXX Europe this year, which is quite pleasing! The YouTube video shows both on-board and off-board camera footage, the former demonstrating the roaming regions of interest used by OpenCV to maintain a lock under varying lighting conditions. The next steps are to increase the speed still further and control the altitude more precisely. More information on the Groundhog can be found at. Several lessons were identified here from the entry of The Groundhog hexacopter in the MAAXX Europe competition earlier this year. Current developments are around correcting the issues so that we get a UAV successfully lapping the oval track at a minimum average speed of 1m/s. A number of changes in approach have been made from that previously blogged. Recall the platform is based on a combination of Pixhawk/Raspberry Pi3/OpenCV/Dronekit. Image analysis: Control algorithms: As in MAAXX Europe, it makes sense to initially test on a straight line. Initial testing was conducted outdoors using red-seatbelt webbing for the line. It was not possible to fly below about 2m as the propwash blew the line away (will sort that next time!). Initial Testing (Links to YouTube Video). In this last post of the series I shall overview the main program including the control algorithms for the Groundhog. Code is written in Python, using Dronekit and OpenCV all running on a Raspberry Pi 3. As we are flying indoors without GPS and also without optical flow, we are using quaternions to control the vehicle in the GUIDED_NOGPS flight mode of ArduCopter. To be honest, I've not come across anyone else doing this before, so it must be a good idea... 
There are many references to this - check out Wikipedia for starters. However, a quaternion is just another way for specifying the attitude of a body in a frame of reference, other than the traditional yaw, pitch, roll. Whilst it's slightly harder to get your head around, quaternions are fundamentally (and by that I mean mathematically) more sound. For example, working with pitch, roll and yaw can lead to gimbal lock, in which your use of trig functions can cause /div zero errors at the extremes (like multiples of 90 degrees). With a quaternion, we specify the new attitude as a specified rotation around a vector - an axis of rotation. Think about the vector describing the current attitude. We can map that vector to one describing the target attitude by specifying a rotation (of so many degrees) around an axis of rotation. (More generally in robotics, we add a translation as well, but that's another story). I am a visual learner, so after doing the maths I really understood it using two cable ties (start and finish attitude vectors) and a barbecue stick (axis of rotation). Try it. So in short, we are going to send a stream of quaternions to the Pixhawk to tell it how to change attitude. Fortunately some very clever people have written some functions for this... All code is posted to my GitHub repository. There are just three program files: The main program is invoked to either connect directly to the Pixhawk or to connect to the software in the loop simulator (see previous post). Navigate to the local library on the RPi3 and open a console: In the latter instance, xxx is the address of the RPi on the local network. Several standard libraries are used, some which need to be specifically loaded using pip. In particular, note imutils from Adrian Rosebrock's excellent blog at pyimagesearch.com. I preferred to keep the VL6180X in the local folder rather than fully import it via pip. 
# import the necessary packages
from dronekit import connect, VehicleMode, LocationGlobalRelative
from pymavlink import mavutil # Needed for command message definitions
import time
import numpy as np
from pyquaternion import Quaternion
from PiVideoStream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import cv2
import sys
from ST_VL6180X import VL6180X
from datetime import datetime, timedelta

Optionally, a connection string is used when the program is invoked. The presence of a connection string is tested at several points in the program to decide, for example, whether to command a take-off if using the SITL (and obviously not if really flying!). Note also the serial port specified on the RPi3 - "/dev/serial0". This works as the bluetooth has been disabled as per my previous post. The baud rate also has to be set in Mission Planner for the Pixhawk to connect at the same speed.

#--------------------------SET UP CONNECTION TO VEHICLE----------------------------------
# Parse the arguments
parser = argparse.ArgumentParser(description='Commands vehicle using vehicle.simple_goto.')
parser.add_argument('--connect',
                    help="Vehicle connection target string. If not specified, SITL automatically started and used.")
args = parser.parse_args()
connection_string = args.connect

# Connect to the physical UAV or to the simulator on the network
if not connection_string:
    print ('Connecting to pixhawk.')
    vehicle = connect('/dev/serial0', baud=57600, wait_ready=True)
else:
    print ('Connecting to vehicle on: %s' % connection_string)
    vehicle = connect(connection_string, wait_ready=True)

Start the video thread. This is straight out of pyimagesearch.com. It is essential to use a separate thread to capture video on the RPi to get any useful performance, otherwise the main thread is held up by the slow camera IO. Here, we are capturing and processing at around 20fps.
I decided to offload the image processing onto the video thread as well, just to compartmentalise all image operations off of the main thread. You should also note the RPi doesn't really parallel process - but using a separate thread allows the main thread to get on with it while the video thread is hanging around for IO (in this case).

#--------------------------SET UP VIDEO THREAD ----------------------------------
# create a *threaded* video stream, allow the camera sensor to warm up,
# and start the FPS counter
print('[INFO] sampling THREADED frames from `picamera` module...')
vs = PiVideoStream().start()

So we have attached the VL6180X sensor to a rear arm with a view to keeping the Groundhog around 15cm from the floor. The rangefinder is connected to the RPi directly using i2c - NOT the Pixhawk. So the RPi will sense and control the altitude directly.

#--------------------------SET UP TOF RANGE SENSOR ----------------------------------
tof_address = 0x29
tof_sensor = VL6180X(address=tof_address, debug=False)
tof_sensor.get_identification()
if tof_sensor.idModel != 0xB4:
    print "Not a valid sensor id: %X" % tof_sensor.idModel
else:
    print "Sensor model: %X" % tof_sensor.idModel
    print "Sensor model rev.: %d.%d" % \
        (tof_sensor.idModelRevMajor, tof_sensor.idModelRevMinor)
    print "Sensor module rev.: %d.%d" % \
        (tof_sensor.idModuleRevMajor, tof_sensor.idModuleRevMinor)
    print "Sensor date/time: %X/%X" % (tof_sensor.idDate, tof_sensor.idTime)
tof_sensor.default_settings()
tof_sensor.change_address(0x29, 0x80)
time.sleep(1.0)

Control of the Pixhawk is effected using Dronekit (with python). As well as having its own set of commands (API), it provides an interface which encodes more directly to messages using the mavlink protocol. We are using the set_attitude_target message, which is almost the only method we have of controlling the Pixhawk indoors, without GPS or optical flow. This allows us to encode a quaternion in the local frame to request a change in attitude.
Here's the low level function into which we feed the quaternion describing the change in attitude required. Some understanding of how it works is necessary. As I could find no documentation detailing the function, much of this has been gained by trial and error and may not be complete.

- w, x, y, z: q, the normalised quaternion (so that 1 = w² + x² + y² + z²).
- thrust: 0.5 to stay put, higher to go up, lower to go down (max 1).
- body roll and pitch rate: set to 1 to match the default setting in the Pixhawk.
- body yaw rate: made equal to the requested yaw, otherwise generally no yaw was evident (no idea as to why).

#--------------------------FUNCTION DEFINITION FOR SET_ATTITUDE MESSAGE MODE--------------------
# Define set_attitude message
def set_att_msg_mode(w,x,y,z,thrust):
    msg = vehicle.message_factory.set_attitude_target_encode(
        0,
        0,          #target system
        0,          #target component
        0b0000000,  #type mask
        [w,x,y,z],  #q
        1,          #body roll rate
        1,          #body pitch rate
        z,          #body yaw rate
        thrust)     #thrust
    vehicle.send_mavlink(msg)

The quaternion itself was calculated from a separate function, below. This allowed for the more usual change in roll, pitch and yaw to be converted. Some useful observations were made using the SITL beforehand (running ArduCopter 3.4).
#--------------------------FUNCTION DEFINITION FOR SET_ATTITUDE MESSAGE --------------------
def set_attitude (pitch, roll, yaw, thrust):
    # The parameters are passed in degrees
    # Convert degrees to radians
    degrees = (2*np.pi)/360
    yaw = yaw * degrees
    pitch = pitch * degrees
    roll = roll * degrees

    # Now calculate the quaternion in preparation to command the change in attitude
    # q for yaw is rotation about z axis
    qyaw = Quaternion (axis = [0, 0, 1], angle = yaw )
    qpitch = Quaternion (axis = [0, 1, 0], angle = pitch )
    qroll = Quaternion (axis = [1, 0, 0], angle = roll )

    # We have components, now to combine them into one quaternion
    q = qyaw * qpitch * qroll
    a = q.elements
    set_att_msg_mode(a[0], a[1], a[2], a[3], thrust)

This function is straight out of the Dronekit examples and is required only if testing in the SITL.

#-------------- FUNCTION DEFINITION TO ARM AND TAKE OFF TO GIVEN ALTITUDE ---------------
def arm_and_takeoff(aTargetAltitude):
    """
    Arms vehicle and fly to aTargetAltitude.
    """
    print ('Taking off!')
    vehicle.simple_takeoff(aTargetAltitude) # Take off to target altitude

    while True:
        # print "Global Location (relative altitude): %s" % vehicle.location.global_relative_frame
        if vehicle.location.global_relative_frame.alt >= aTargetAltitude*0.95:
            break
        time.sleep(1)

The main program operates a very simple finite state machine with three states: tracking, following and lost. Recall the code is all in my github repository. Therefore, I will only work through the 'following' function here to avoid repetition.

#-------------- FUNCTION DEFINITION TO FLY IN VEHICLE STATE FOLLOWING---------------------
def following (vstate):
    print vstate
    # The vehicle processes images and uses all data to fly in the following state.
    # It sends attitude messages until manual control is resumed.

    red1Good = red2Good = False # Set True when returned target offset is reliable.
yaw = roll = 0 target = None # Initialise tuple returned from video stream #altitude = vehicle.location.global_relative_frame.alt # Initialise the FPS counter. # fps = FPS().start() while vstate =="following": # grab the frame from the threaded video stream and return left line offset # We do this to know if we have a 'lock' (goodTarget) as we come off of manual control. target = vs.read() yaw = target[0] red1Good = target[1] roll = target[2] red2Good = target[3] # update the FPS counter # fps.update() # Get the altitude information. tofHeight = tof_sensor.get_distance() # print "Measured distance is : %d mm" % tofHeight # adjust thrust towards target if tofHeight > 200: thrust = 0.45 elif tofHeight < 160: thrust = 0.55 else: thrust = 0.5 # Check if operator has transferred to autopilot using TX switch. if vehicle.mode == "GUIDED_NOGPS": # print "In Guided mode..." # print "Global Location (relative altitude): %s" % vehicle.location.global_relative_frame if (red1Good or red2Good) : yaw = yaw * 100 # Set maximum yaw in degrees either side roll = roll * 20 # Set maximum roll in degrees either side pitch = -4 #print pitch, yaw, roll, thrust set_attitude (pitch, roll, yaw, thrust) else: vstate = "lost" else: # print "Exited GUIDED mode, setting tracking from following..." vstate = "tracking" We keep looping around this finite state machine in the main program. # MAIN PROGRAM vstate = "tracking" # Set the vehicle state to tracking in the finite state machine. # If on simulator, arm and take off. if connection_string:) # Get airborne and hover arm_and_takeoff(10) print "Reached target altitude - currently in Guided mode on altitude hold" vehicle.mode = VehicleMode("GUIDED_NOGPS") while True : if vstate == "tracking": # Enter tracking state vstate = tracking(vstate) #print "Leaving tracking..." 
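The yaw and roll commands in following() are a straight proportional scaling of the image offsets, with no damping term. A minimal PID controller for the yaw channel might look like the sketch below - placeholder gains, not tuned Groundhog values:

```python
class PID(object):
    """Minimal PID controller of the kind the yaw channel needed;
    the gains are placeholders, not values tuned for the Groundhog."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error +
                self.ki * self.integral +
                self.kd * derivative)

# The D term opposes the swing: with the error closing fast, the
# commanded yaw is smaller than the P term alone would give.
pid = PID(kp=1.0, ki=0.0, kd=0.5)
first = pid.update(10.0, dt=0.1)   # 10.0 (no derivative on the first sample)
second = pid.update(5.0, dt=0.1)   # 5.0 + 0.5 * (5 - 10) / 0.1 = -20.0
```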
elif vstate == "following": # Enter following state vstate = following(vstate) #print "Leaving following" else: # Enter lost state vstate = lost(vstate) #print "Leaving lost" The short range ToF sensor worked remarkably well and the height was able to be maintained at the 20cm mark or so. However, the turbulance resulting from the ground effect complicated issues we were having with tracking. Therefore, we decided to run at more normal altitude of 1.5 m using the Pixhawk barometer. This also worked surprisingly well. Once the control and tracking is working satisfactorily, I would have no hesitation in using the VL6180X ToF sensor again. The image analysis and tracking worked very well. It was clear that the PiCam was able to distibguish the red line more easily than many competitors in the early stages. No adjustments were made to the image analysis code during the competition, except use the 'lost' mode to make the field of view as wide as possible to help find the red line. So this is where it didn't go so well - but we learned alot of lessons! Firstly, we were initially much too conservative with the maximum limits of pitch, roll and yaw. This meant that there was little control at all and the turbulence from the ground effect simply sent the Groundhog off the line. Our conservative approach was understandable as the Groundhog is quite a formidable machine indoors and safety was paramount. At least we had replaced the 12" carbon fibre props with 10" plastic ones! It took the first day of the competition to become confident to raise the attitude limits and realise we had to get off the ground - at least to start with. The really big bug-bear was yaw control. We had assumed that the quaternion function would use the Pixhawk control algorithms to set the yaw accurately. This turned out not to be the case. Once we had increased the sensitivity for yaw, we managed a full lap of the track albeit rather gingerly. That at least put us in the top four teams! 
However, we soon found the Groundhog would overshoot the target yaw and soon start an oscillation completely characteristic of a system with insufficient damping. Recall the Groundhog weighs in at 3Kg? So we were now paying for that big time! Of course what we needed was a PID control loop at least for the yaw control which was the major problem. It was very frustrating watching the perfectly tracked line disappear increasingly off the side of the screen with every swing of the pendulum. Unfortunately, we simply ran out of time to introduce the necessary code in way that felt safe. On the plus side: On the minus side: The date for the next MAAXX Europe event has already been set for March 2018 and Groundhog will be there. As a minimum, it will be equipped with necessary PID controllers and will at last be able to fly at its cruising altitude of 20cm, hopefully for 30 minutes at a time! It might also have some other tricks up its sleeve by then... Maybe see you there. hoping that by publishing my own step by step checklist, it may help others save a little time. All code is posted to my GitHub repository. I have previously blogged on how to connect a Pixhawk (running arducopter) to a Raspberry Pi 2 using the UART interface. The hardware connection is identical for the Raspberry Pi 3. However, there are some critical differences in setting up the Pi and in the code required to make the connection. Here is my installation checklist for installing Dronekit on the Pi and making the connection between the Pi and Pixhawk. 
Start with Pixel installed on the Pi3 and do basic Pi configuration and updates:

Install Jessie Pixel
sudo raspi-config
  Expand filesystem
  Disable serial for OS
Reboot
sudo apt-get update

Now start to set up the development environment:

sudo easy_install pip
sudo apt-get install python-dev
sudo apt-get install screen python-wxgtk2.8 python-matplotlib python-opencv python-numpy libxml2-dev libxslt-dev

Install pymavlink and mavproxy to provide comms protocols between the Pi and Pixhawk:

sudo pip install pymavlink
sudo pip install mavproxy

Install Dronekit and libraries to simplify control of the Pixhawk:

sudo pip install dronekit
git clone

Now sort out the clash on the RPi3 between our need to use the UART and its use by the Bluetooth interface. There are several versions of this config change about, but this is the one that worked for me. On the RPi, add 2 lines at the end of /boot/config.txt (it will disable Bluetooth and you can use /dev/ttyAMA0 to make the connection):

enable_uart=1
dtoverlay=pi3-disable-bt

So now we switch to the Pixhawk, connected to a PC using Mission Planner. The firmware version needs to be at least 3.4.0, which allows for GUIDED_NOGPS mode. We also need to make sure Telem2 is properly configured, which we do from the Full Parameters list.

Check Pixhawk firmware is 3.4.0
Check Serial 2 parameters:
  SERIAL2_PROTOCOL '1'
  SERIAL2_BAUD '57'
  BRD_SER2_RTSCTS '0'

Now we can make the physical connection between the Pi and Pixhawk and test it out. Note on power: at this point I had the RPi and Pixhawk independently powered, so the 5V connection between the two units was disconnected. As long as they have a common ground, all is good.

Make physical connection
Test connection.
On the Pi:

sudo -s
mavproxy.py --master=/dev/ttyAMA0 --baudrate 57600 --aircraft MyCopter
param show ARMING_CHECK
watch HEARTBEAT
(You should see mavlink heartbeat messages going both ways, every second)
Ctrl-C
Reboot RPi

If you can see heartbeat messages during the test, like those below, all is good!

> HEARTBEAT {type : 6, autopilot : 8, base_mode : 0, custom_mode : 0, system_status : 0, mavlink_version : 3}
< HEARTBEAT {type : 6, autopilot : 8, base_mode : 0, custom_mode : 0, system_status : 0, mavlink_version : 3}
< HEARTBEAT {type : 13, autopilot : 3, base_mode : 81, custom_mode : 0, system_status : 3, mavlink_version : 3}

Beyond making the basic connection work, I found the SITL crucial in allowing the developing software to be tested frequently. Basically, the RPi needs something to connect to on the desk, and that cannot be the Pixhawk itself. As a rudimentary test environment to make sure the control commands are in the right ball-park, I found the SITL to be great. Expecting any more than that might be a little optimistic. I confess there is much configuration possible of the SITL itself which I have not played with (that's my excuse!). But, for example, don't expect the SITL to replicate anything approaching the inertia of a 3 kg hexacopter travelling at 40mph.

I installed the SITL on a PC, connected to the Pi3 through a standard home network. The ardupilot site has instructions for the build and installation here, which worked after several attempts. However, all the issues were down to me not following the instructions precisely.

See more posts at mikeisted.com

In this short blog series I'm outlining the hardware and software of The Groundhog, my entry into the recent MAAXX-Europe autonomous drone competition held at the University of the West of England, Bristol. In this post I shall overview the approach taken to the image recognition system used to track the line being followed around the track.
Remember the line is red, about 50mm across, and forms an oval track 20m by 6m. We are attempting to race around as fast as we can, avoiding other UAVs if necessary.

Please note this blog is not a line by line treatment of the code; indeed the code provided is neither tidied nor prepared as if for 'instruction'. In fact it is the product of much trial and change over the competition weekend! Nevertheless, I hope it provides some useful pointers for the key ideas developed.

Image recognition is undertaken by the Raspberry Pi 3 companion computer using Python 2.7 and OpenCV 3.0. I am indebted to Adrian Rosebrock of pyimagesearch for his many excellent blog posts on setting up and using OpenCV.

So we are trying to follow a red line which forms an oval track about 20m x 6m. The line is about 50mm across. However, we have limited processing power with the on-board Raspberry Pi, so we need to minimise the load. Here's the approach:

When following the line: If the line is lost: Additional features:

The approach worked well during the competition. In fact, it was clear that the ability of the Groundhog to detect and measure the line exceeded that of most other competitors. Other notable points were: However:

Image Sequence

The unedited image of a test line on my desk. Obviously the parallel lines run towards a vanishing point. This view is after the image has been warped to get the 'top-down' view. It also shows the upper and lower regions of interest, with a successful lock of the line in each.

Image Processing Code

Please see my website post here. The code is also to be placed shortly on GitHub and I'll edit this post as soon as that happens.

The Groundhog was at least twice as big and probably three times as heavy as many other competitors. Why? Because it is built for endurance (flight time 35mins+) and also because it's what I have as my development platform. It normally flies outdoors of course...
Ah, so that means no GPS, and flying less than 30cm from the ground also rules out an optical flow camera (they can't focus that close). So how to control this thing? The answer (as you will see) is using the GUIDED_NOGPS mode now available in Arducopter but, I suspect, little used. This is because almost the only control available is to set the pose (pitch, roll, yaw) using quaternions. Not velocity, not even altitude. It's going to be a little like controlling a 3 kg ball bearing sat on top of a football. Fun. But we are getting ahead of ourselves. Back to hardware...

The hexacopter Tarot 680 Pro is my general purpose airframe for just about everything. Here the props are turned downward facing to get them closer to the ground to increase ground effect. Smaller 10 inch props were fitted to bring the overall size down to less than 1m, as required by the competition. This meant the motors were running faster than usual, reducing efficiency. Hey ho.

The two 4s batteries are configured to run in series, so 8s is provided to the high efficiency motors which are designed for endurance. The power module for the Pixhawk was taken from just one battery - making sure that they had a common earth! Power for the RPi and servo gimbal is provided by 2 x SBECs, taking power from across both batteries. The feed for the SBECs is taken from across the supply of one of the speed controllers.

Key lessons: Flight control is provided by the Hobbyking version of a standard Pixhawk. It is attached upside-down to the underside of the top plate (underneath the RPi). Don't forget to reconfigure Arducopter in Mission Planner to tell it the Pixhawk is upside-down (mode 12).

In earlier blogs I had overclocked the RPi 2 to beef up performance. This is not required for the RPi 3, which is a big improvement and is used out of the box with the Pixel operating system.
However, configuring the serial port (for connection to the Pixhawk) correctly took ages, as it is also used by the Bluetooth interface. Pay close attention in the forthcoming post to the change in the config file required, and then the correct syntax to open the serial port, and it will be easy. I'll put this in the post on Control, but it took weeks and weeks to eventually sort out!

The RPi is connected to the Pixhawk using the serial cable in precisely the same way as detailed in one of my previous posts here.

A standard PiCam 2 is controlled for pitch and roll using a gimbal controlled from the Pixhawk. This gives a steady and level view of the track to reduce the image processing burden. It worked beautifully and, incidentally, is exactly the same strategy used by the 2nd place team (so I know it's a good idea!). Also, we want a forward looking camera so we can see further and go faster - not just looking down.

For low level flight, a VL6180 time of flight range finder sensor, more usually used for gesture recognition, is deployed and strapped to one of the front arms. This can give altitude to mm resolution between 10 and 20 cm. This is connected to the I2C interface on the RPi as there is no provision for this in ArduCopter for the Pixhawk.

That's it for now (I'll discuss the Maxbotix later). Coming next: Image recognition using Python and OpenCV!
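Ahead of that post, the core tracking idea — warp to a top-down view, then locate the line in an upper and a lower region of interest — can be illustrated in pure Python on a toy 'image'. This is an illustration of the idea only (the frame and ROI rows are made up), not the actual OpenCV pipeline:

```python
# Toy illustration of the two-ROI centroid idea (not the real OpenCV code).
# The "image" is a grid of 0/1 values where 1 marks a red-line pixel.
def roi_centroid(image, row_start, row_end):
    """Mean column index of line pixels within the given rows, or None."""
    cols = [x for y in range(row_start, row_end)
              for x, pixel in enumerate(image[y]) if pixel]
    return sum(cols) / len(cols) if cols else None

# A 6x9 top-down frame with the line leaning right as it recedes.
frame = [
    [0, 0, 0, 0, 0, 1, 0, 0, 0],   # top rows: upper ROI
    [0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0, 0],   # bottom rows: lower ROI
    [0, 0, 0, 1, 0, 0, 0, 0, 0],
]
upper = roi_centroid(frame, 0, 2)   # 5.0
lower = roi_centroid(frame, 4, 6)   # 3.0
offset = lower - len(frame[0]) / 2  # lateral error relative to image centre
lean = upper - lower                # direction of the line ahead
print(upper, lower, offset, lean)
```

The lower centroid gives a lateral offset to steer out, and the upper-minus-lower difference gives the direction the line is heading — two small numbers per frame, which keeps the load on the Pi light.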
Hyperledger Fabric supports LevelDB and CouchDB as state databases. LevelDB is the default state database embedded in the peer process and stores chaincode data as key-value pairs. CouchDB is an optional alternative external state database that provides additional query support when your chaincode data is modelled as JSON, permitting rich queries of the JSON content.

If you're using CouchDB as your state database, you can use its built-in administration interface to see the records: in your browser, try to access it on port 5984, the default port for CouchDB.

More resources: Ledger; CouchDB as the State Database; Using CouchDB
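As a hedged illustration of what "rich queries of the JSON content" means in practice: with CouchDB, each channel and chaincode pair gets its own database, and a rich query is a JSON "Mango" selector POSTed to that database's _find endpoint. The names below (channel, chaincode, field value) are made up for the example:

```python
import json

# Hypothetical names: channel "mychannel" and chaincode "marbles" map to a
# CouchDB database called "mychannel_marbles"; "owner" is a made-up JSON field.
couchdb = "http://localhost:5984"          # 5984 is CouchDB's default port
database = "mychannel_marbles"
selector = {"selector": {"owner": "tom"}}  # a Mango rich-query selector

url = f"{couchdb}/{database}/_find"        # POST the selector here as JSON
body = json.dumps(selector)
print(url)
print(body)
```

The same selector syntax is what chaincode passes to GetQueryResult when CouchDB is the state database.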
Created on 2010-06-08 19:10 by terry.reedy, last changed 2010-06-09 00:17 by rhettinger. This issue is now closed.

In 2.6, the requirement for **kwds keyword argument expansion in calls (LangRef 5.3.4. Calls) is relaxed from "(subclass of) dictionary" (2.5) to "mapping". The requirement in this context for 'mapping' is not specified. LRef 3.2 merely says "The subscript notation a[k] selects the item indexed by k from the mapping a;". Here, .keys seems to be needed in addition to .__getitem__. (.items alone does not make an object a mapping.)

In the python-list thread "Which objects are expanded by double-star ** operator?", Peter Otten posted 2.6 results for

class A(object):
    def keys(self):
        return list("ab")
    def __getitem__(self, key):
        return 42

class B(dict):
    def keys(self):
        return list("ab")
    def __getitem__(self, key):
        return 42

def f(**kw):
    print(kw)

f(**A()) # {'a': 42, 'b': 42}
b = B(); print(b['a'], b['b']) # I added this
# 42, 42
f(**b) # {}

I get the same with 3.1. It appears .keys() is called in the first case but not the second, possibly due to an internal optimization. The difference in outcome seems like a bug, though one could argue that the doc is so vague that it makes no promise to be broken.

This falls under the usual category of dict subclasses not having their methods called. Especially since the B dict doesn't actually contain anything.

Might that 'usual' fact be somehow documented? I am hard put to suggest anything since it is a fact I, like others, am not aware of. Is this a general rule for subclasses of builtins? Or specific to dicts?

Somewhere, we should document the facts-of-life for subclassing builtins.

1) For the most part, C code has the pattern:

if isinstance(obj, some_builtin_type):
    call the built-in type's methods directly using slotted methods
otherwise:
    use slower getattribute style calls

2) A subclasser of a dict needs to actually populate the dict with the values they want used.
The built-in dict class is "open for extension and closed for modification" -- the open/closed principle. This is necessary or else a subclasser could easily break the built-in type's invariants and crash python. 3) For the most part, only something like subclassing UserDict gives you full control.
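The points above are easy to demonstrate in current Python 3. A plain object implementing the mapping protocol (or a UserDict) has its methods consulted by ** expansion, while a dict subclass is read through the fast path, so it must actually be populated:

```python
from collections import UserDict

def f(**kw):
    return kw

class A:                       # plain object implementing the mapping protocol
    def keys(self):
        return list("ab")
    def __getitem__(self, key):
        return 42

class C(UserDict):             # UserDict stores items in a plain-dict attribute,
    def __init__(self):        # and ** expansion goes through the mapping protocol
        super().__init__()
        self.data.update(a=42, b=42)

print(f(**A()))   # {'a': 42, 'b': 42} -- keys()/__getitem__ are called

class B(dict):                 # dict subclass: ** reads the underlying dict directly
    def keys(self):
        return list("ab")
    def __getitem__(self, key):
        return 42

print(f(**B()))            # {} -- the dict itself was never populated
print(f(**B(a=42, b=42)))  # {'a': 42, 'b': 42} -- populate it and it works
```

In other words, overriding keys() and __getitem__ on a dict subclass does not change what ** expansion sees; populating the dict (or subclassing UserDict) does.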
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 12 results of 12 >>>>> "Charles" == Charles Twardy <ctwardy@...> writes: Charles> I couldn't see any way to do legends, so I hacked Charles> together a routine that worked for me. However, I don't Charles> know how to handle fonts properly (ie, find out how much Charles> plotting space they really take up), so someone might Charles> want to fix the two lines marked "#Hack" and maybe the Charles> related row spacing. Thanks for the script. I've been meaning to add legends for some time and you gave me the push I needed. To do it right (account for font size) is a little more difficult so I've been putting it off, but it's done in CVS now and tested with the 3 backends. I added the legend functionality to the Axes class, which has the advantage that you don't need to specify the line styles, colors etc... since the axes contains the lines and can get them from there. Also, I decided not to go with a whole new legend axes, but rather added a legend patch, legend lines and legend text to the current axis. Changes to axes lines with handle graphics or Line2D API calls are reflected in the legend text. Below is your script which works with the CVS version. Do you mind if I add it to the examples dir in the matplotlib distro? JDH # Thanks to Charles Twardy from matplotlib.matlab import * a = arange(0,3,.02) b = arange(0,3,.02) c=exp(a) d=c.tolist() d.reverse() d = array(d) ax = subplot(111) plot(a,c,'k--',a,d,'k:',a,c+d,'k') legend(('Model length', 'Data length', 'Total message length'), 'upper right') ax.set_ylim([-1,20]) ax.grid(0) xlabel('Model complexity --->') ylabel('Message length --->') title('Minimum Message Length') set(gca(), 'yticklabels', []) set(gca(), 'xticklabels', []) savefig('mml') show() >>>>> "Flavio" == Flavio Coelho <fccoelho@...> writes: Flavio> Hi, does anyone know why matplotlib crashes wxbased apps? 
Flavio> (Pycrust for instance?) is there any way around this? I have never used matplotlib with wx but I suspect the problem is that by default matplotlib enters the gtk mainloop, which is not compatible with other GUIs that do the same. Generally, one has to hack a shell to use matplotlib interactively -- you can read about two such shells on. I suspect the same can be done for pycrust, but I haven't any experience with it. The best thing to do would be to port to a matplotlib backend to wx and use it natively. That's what I really want to do, because wx comes with enthought python, which will make it easy for win32 users to use. John Hunter Hi, does anyone know why matplotlib crashes wxbased apps? (Pycrust for instance?) is there any way around this? thanks, Fl=E1vio I couldn't see any way to do legends, so I hacked together a routine that worked for me. However, I don't know how to handle fonts properly (ie, find out how much plotting space they really take up), so someone might want to fix the two lines marked "#Hack" and maybe the related row spacing. Here's a simple demo that includes the legend() function. Attached, I hope. -C -- Charles R. Twardy Monash University, School of CSSE ctwardy at alumni indiana edu +61(3) 9905 5823 (w) 5146 (fax) ~^~ "eloquence ought to be banish'd out of all civil Societies as a thing fatal to Peace and good Manners..." ~Sprat 1667 I ran across PyX () yesterday. It handles eps and postscript beautifully! Combine with this snippet of code: import pygtk; pygtk.require("2.0") import sys import gtk import bonobo import bonobo.ui win = gtk.Window() win.connect("delete-event", gtk.mainquit) win.show() container = bonobo.ui.Container() control = bonobo.ui.Widget("";, container.corba_objref()) # A control widget is just like any other GtkWidget. control.show() win.add(control) gtk.main() And you have a simple plotting system with wysiwyg. Should Matplotlib move towards using PyX? 
-- njh Hi, I'm glad to find this project and planning to use this wonderful matplotlib package in my project "PAIDA". I think the log scaling capability is relatively important in scientific analysis but the matplotlib does not support currently. While I know this capability will be supported in next release, would you tell me if it will be 0.3 release or more later? K. KISHIMOTO Over) >>>>> "Jean-Baptiste" == Jean-Baptiste Cazier <Jean-Baptiste.cazier@...> writes: Jean-Baptiste> If on top of that I could use the pixmap as a Jean-Baptiste> background of other plot, (plot (pixmap, x,y) it Jean-Baptiste> would be even better Jean-Baptiste> Any idea on how this could be done ? Yes, this should be fairly easy to implement, since all the drawing commands draw to a gtk.gdk.Drawable, of which pixmap is a derived class. When I make the hardcopy (png or tiff) I am already using a blank pixmap as the background, so it should be fairly easy to add a command to the Figure API, which is set_background_pixmap, and then allow you to plot on top of it (or just use the pixmap alone). I will take at look at this and get back to you. Thanks for the suggestion, John Hunter S=E6ll ! I recently discovered the matplotlib project with great pleasure. It is already fulling many of my wishes linked to python development However there is a function I would love to have, and it should be straight= forward to implement eventhough it is not in the matlab spirit: - I would like to be able to get a plot based on an existing pixmap - This way I would be able - to get the nice tools provided by the figure,= axes, zooming, etc.. - keep a consistent presentation of my many plots - As figures in matlibplot are supposed to be pixmaps in axes this should b= e fairly easy to implement instead of havinf plot(x,y) it would be plot(pixmap) If on top of that I could use the pixmap as a background of other plot, (pl= ot (pixmap, x,y) it would be even better Any idea on how this could be done ? 
Takk Kve=F0ja Jean-Baptiste --=20 ----------------------------- Jean-Baptiste.Cazier@... Department of Statistics deCODE genetics Sturlugata,8 570 2993 101 Reykjav=EDk >>>>> "Charles" == Charles R Twardy <ctwardy@...> writes: Charles> It seems I'm running the new version: * line 1702 of Charles> figure.py reads as you asked * I just tried another CVS Charles> get and got nothing * I just did another run of setup.py Charles> and nothing happened Charles> I still get the crash, and can reproduce with a simple Charles> 3-line script, attached. Charles> }Monday. If you'd like to send a script, I can add one Charles> of them to the }screenshots section of the home page. My Charles> script is an ugly beast that parses an ugly dataset. But Charles> I'll see about getting a demo version. -C Thanks for the update. I'm knee deep in getting a postscript backend working so I'll take a look at this in the next few days. This is proving to be very helpful because I've abstracted all the drawing operations away from gtk which means it will be relatively easy to port the lib to new output drivers or GUI toolkits. As for the bug, I can replicate the bug on my system with your example script, so it shouldn't be too hard to find and fix. I'll let you know when I get an updated CVS. JDH
RLP has always been the best for optimizing things but, with TinyCLR, our team managed to make major improvements in this area. With the way you add native interops now, you can easily access the entire API. Everything is exposed in the TinyCLR.h file.

As a quick example, we are trying to refresh a SPI display. This display is 16bpp only but, to save on RAM, we will draw in 8bpp. Our flush method will convert the 8-bit color space to 16-bit. Here is the flush code in C#. This takes about 10 seconds to update the display!!

public void Flush(byte[] data) {
    SetClip(0, 0, Width, Height);
    WriteCommand(0x2C);
    controlPin.Write(GpioPinValue.High);

    for (int i = 0; i < Width * Height; i++) {
        // blue
        int blue = (data[i] & 3) << 3;
        int red = data[i] & (7 << (2 + 3));
        int green = (data[i] & (7 << 2)) >> 2;

        if (data[i] != 0) {
            buffer2[0] = (byte)(red | green);
            buffer2[1] = (byte)blue;
        }
        else {
            buffer2[0] = 0x00;
            buffer2[1] = 0x00;
        }

        spi.Write(buffer2);
    }
}

I then took that deep loop and converted it to an interop (native call). Here is the first part in C#:

[MethodImpl(MethodImplOptions.InternalCall)]
private extern void NativeFlushHelper(byte[] data);

public void NativeFlush(byte[] data) {
    SetClip(0, 0, Width, Height);
    WriteCommand(0x2C);
    controlPin.Write(GpioPinValue.High);
    NativeFlushHelper(data);
}

and here is the C++ part:

#include "AdafruitDisplayShield.h"

TinyCLR_Result Interop_AdafruitDisplayShield_GHIElectronics_TinyCLR_ST7735_ST7735::NativeFlushHelper___VOID__SZARRAY_U1(const TinyCLR_Interop_MethodData md) {
    auto ip = reinterpret_cast<const TinyCLR_Interop_Provider*>(md.ApiProvider.FindDefault(&md.ApiProvider, TinyCLR_Api_Type::InteropProvider));

    TinyCLR_Interop_ClrValue arg1;
    ip->GetArgument(ip, md.Stack, 1, arg1);

    uint8_t* data = reinterpret_cast<uint8_t*>(arg1.Data.SzArray.Data);
    uint8_t buffer2[2];

    auto spiProvider = (const TinyCLR_Spi_Provider*)md.ApiProvider.FindByIndex(&md.ApiProvider, "GHIElectronics.TinyCLR.NativeApis.STM32F4.SpiProvider", 0, TinyCLR_Api_Type::SpiProvider);

    if (spiProvider == nullptr)
        return TinyCLR_Result::ArgumentNull;

    for (int i = 0; i < 160 * 128; i++) {
        int blue = (data[i] & 3) << 3;
        int red = data[i] & (7 << (2 + 3));
        int green = (data[i] & (7 << 2)) >> 2;

        if (data[i] != 0) {
            buffer2[0] = (uint8_t)(red | green);
            buffer2[1] = (uint8_t)blue;
        }
        else {
            buffer2[0] = 0x00;
            buffer2[1] = 0x00;
        }

        size_t sz = 2;
        if (spiProvider->Write(spiProvider, 0, buffer2, sz) != TinyCLR_Result::Success)
            return TinyCLR_Result::InvalidOperation;
    }

    return TinyCLR_Result::Success;
}

The display now updates in half a second! Yes, from almost 10 seconds to half a second!

There are still major improvements happening in this coming release to make this even easier, so stay tuned. And YES, this works on any open source device running TinyCLR. This test was done with FEZ holding the Adafruit 1.8" color display shield. Everything is open source!
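The per-pixel bit manipulation is identical in the managed and native versions; as a sanity check, the mapping from one 8bpp RRRGGGBB byte to the two bytes written over SPI can be modelled in a few lines of Python (a model for checking only — the device code above is C#/C++):

```python
def convert(pixel):
    """Model of the per-pixel conversion used in both Flush versions."""
    if pixel == 0:
        return (0x00, 0x00)
    blue = (pixel & 3) << 3           # low 2 bits -> blue
    red = pixel & (7 << 5)            # top 3 bits -> red, kept in place
    green = (pixel & (7 << 2)) >> 2   # middle 3 bits -> green
    return (red | green, blue)        # first byte, second byte sent over SPI

print([hex(b) for b in convert(0xFF)])   # ['0xe7', '0x18']
print(convert(0x00))                      # (0, 0)
```

Running each candidate pixel value through a model like this on a PC is a cheap way to confirm the C# and C++ loops agree before flashing the board.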
Consider the following Taylor series for sin(θ/7) and the following two functions based on the series. One takes only the first non-zero term:

def short_series(x):
    return 0.14285714*x

and a second takes three non-zero terms:

def long_series(x):
    return 0.1425714*x - 4.85908649e-04*x**3 + 4.9582515e-07*x**5

Which is more accurate? Let's make a couple of plots to see. First, here are the results on the linear scale. Note that the short series is far more accurate than the long series! The differences are more dramatic on the log scale. There you can see that you get more correct significant figures from the short series as the angle approaches zero.

What's going on? Shouldn't you get more accuracy from a longer Taylor series approximation? Yes, but there's an error in our code. The leading coefficient in long_series is wrong in the 4th decimal place. That small error in the most important term outweighs the benefit of adding more terms to the series. The simpler code, implemented correctly, is better than the more complicated code with a small error.

The moral of the story is to focus on the leading term. Until it's right, the rest of the terms don't matter.
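The conclusion of the plots is easy to check numerically with the same two functions: near zero, the one-term series with the correct leading coefficient beats the three-term series with the bad one.

```python
import math

def short_series(x):
    return 0.14285714*x

def long_series(x):
    return 0.1425714*x - 4.85908649e-04*x**3 + 4.9582515e-07*x**5

# Compare both approximations against the true value at a small angle.
x = 0.1
true = math.sin(x/7)
print(abs(short_series(x) - true) < abs(long_series(x) - true))  # True
```

Here the short series is off by roughly 5e-7 while the long series is off by roughly 3e-5 — the typo in the leading coefficient dominates everything the extra terms contribute.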
Related post: Life lessons from differential equations

5 thoughts on "Focus on the most important terms"

"there's an error in our code. The leading coefficient in long_series is wrong in the 4th decimal place"

Saying errors in coding result in output errors is a tautology and a sleight of hand in this post. You're pretending the simple code is better because it's simple, but it's better because it's coded correctly.

I'm not just saying that incorrect code produces incorrect results. That is a tautology. Neither of these functions is exactly correct. They're both approximations. And there are two sources of approximation: truncating the Taylor series, and representing real numbers as floating point numbers. In this particular example, the latter is more important.

On a separate note, simpler code is more likely to be correct, all other things being equal, because one's attention is spread over a smaller area. If the function short_series had a typo, there's only one place to look, but there are three places to look in long_series.

I deliberately introduced a typo in the function long_series, but ironically I also had an unintentional error in the second term while writing the post. I corrected it before publishing, but it hardly made any difference, which is consistent with the theme of the post. The error in the most important term drowned out the error in the second most important term.

I feel like this is a really important idea, but maybe not the best way to present it? It feels very bait-and-switch this way.
I am using Django and I am trying to have a script run in the background which updates some values in my MySQL database. However, I am having some trouble making it work, even when I try to run my script manually. It seems like the main issue is importing the models that are to be updated. For instance, when I format the import in the following way

from .models import bokningsData

I get the error

File "/home/skallars/mysite/webapp/updateAll.py", line 2, in <module>
    from .models import bokningsData
ModuleNotFoundError: No module named '__main__'.models; '__main__' is not a package

The above formatting works, however, in VSCode. When I format as

from models import bokningsData

I get the error

django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.

If I try to run it as a background task I get the error

ValueError: Attempted relative import in non-package

If I just copy the code in my updateAll.py file to views.py and run it manually, I get the first two errors depending on the formatting. However, when views.py is run by getting triggered when I go to the webpage, it works and the database gets updated. It also works when I update the database manually via the shell. What am I missing here?
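A sketch of the usual bootstrap for a standalone script that imports Django models — the module names here ("mysite.settings", "webapp") are guesses based on the paths in the error messages, not confirmed project layout:

```python
# updateAll.py run as a standalone script -- configuration sketch only.
import os
import django

# Point Django at the project settings before touching any models.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()  # must run before any model import

# Use an absolute import; relative imports like "from .models import ..."
# fail when the file is executed directly as __main__.
from webapp.models import bokningsData
```

The script would also typically need the project root on sys.path (or be launched from it) so that "mysite" and "webapp" are importable.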
Review Questions for Exam 1 Solutions This review sheet is intended to give you some practice questions to use in preparing for our first midterm. It is not necessarily complete. The first exam covers the reading assignments, programming projects and class/discussion material through Friday, October 1. 1. What is a CPU? See lecture 1 notes. 2. Give one example of an input device and one example of an output device. See Lecture 1 notes. 3. What is a compiler? See class notes. 4. Write Java code that displays the sum of all numbers which are multiples of 3 between 1 and 1000. int sum = 0; for(int i = 3; i <= 1000; i+=3) sum += i; System.out.println("Sum: " + sum); 5. Write a line of code that creates an object that may be used to read a line of text from the keyboard. Scanner reader = new Scanner(System.in); 6. Using your object in #5, write a line of Java code that reads a floating point number from the keyboard. double x = reader.nextDouble(); 7. Assume that val is an initialized variable of type double. Write a Java statement that prints val to the console window. System.out.print(val); 8. Write a complete Java program (including any necessary import statements) that reads 100 floating-point numbers from the user, and prints the max, min and average value. 
import java.util.*; public class Number8 { public static void main(String[] args) { Scanner scan = new Scanner(System.in); // Read the first value, and initialize the min and max to that first value System.out.print("Enter the first number: "); double value = scan.nextDouble(); double min = value; double max = value; double sum = value; // read remaining 99 floating point numbers for(int i = 0; i < 99; i++) { System.out.print("Enter the next number: "); value = scan.nextDouble(); min = Math.min(min, value); max = Math.max(max, value); sum += value; } // Display average, minimum, and maximum System.out.println("Average: " + (sum/100)); System.out.println("Minimum: " + min); System.out.println("Maximum: " + max); } } 9. Write a Java program that reads 20 lines of text from the user and prints the number of lines that contain the phrase "happy day" (disregard capitalization). import java.util.*; public class Question9 { public static void main(String[] args) { Scanner scan = new Scanner(System.in); int numLines = 0; // number of lines that contain "happy day" - initially zero System.out.print("Enter 20 lines of text: "); for(int i = 0; i < 20; i++) { String line = scan.nextLine().toLowerCase(); if(line.indexOf("happy day") >= 0) numLines++; } System.out.println("Number of lines containing \"happy day\": " + numLines); } } 10. Write a Java program that reads a line of text from the keyboard and prints its reverse to the screen. import java.util.*; public class Question10 { public static void main(String[] args) { Scanner scan = new Scanner(System.in); System.out.print("Enter a line of text: "); String s = scan.nextLine(); for(int i = 0; i < s.length(); i++) { System.out.print(s.charAt(i)); } } } 11. Write Java code that prints the string referenced by variable myString with the first letter capitalized. String up = myString.toUpperCase(); System.out.print(up.charAt(0) + myString.substring(1)); 12. What is the value and type of the result? a. 17/8 2 b. 24%5 4 c. 
14.0/4 3.5 d. Math.pow(3, 5) 243 e. Character.isLetter('+') false f. Math.round(12.56999) g. 15 + 2 + " hellos" "17 hellos" h. 15 * 2 / 4 7 i. (5 < 4) || Character.isDigit('2') true 13. Declare and initialize a class constant that represents the number of days in March. public static final int NUM_DAYS_MARCH = 31; 14. Give an example of an explicit cast, and explain when the use of an explicit cast is, and is not, necessary. We haven't covered this yet. 15. Write Java code that creates an object of type Random. Random generator = new Random(); 16. What is the difference in comparing Strings with the equals() method vs. the = = operator? Using == to compare two String reference variables compares the addresses in the reference variables. The equals() method actually compares the Strings. 17. Give examples to show how the methods indexOf(), replace(), length(), substring(), equals(), equalsIgnoreCase(), toUpperCase() and charAt() work. (These are all methods in the String class). See class notes, Java documentation, and textbook. 18. What is a constructor? It's a method that is called when a new object is created, and it is used to perform any necessary initializations for the new object. The constructor has the same name as the class. 19. Write a if-else if statement that prints a message indicating whether the value stored in int variable n is 0, 1, 2, 3 or none of these. if( (n >= 0) && (n <=3)) System.out.println("Between 0 and 3"); else System.out.println("Not between 0 and 3"); 20. Write code that prints the squares of the numbers between 1 and n, where n is a value entered by the user. If the user enters an integer less than 1, print an error message. if(n >=1) { for(int i = 1; i <= n; i++) System.out.println(Math.pow(i, 2)); } else System.out.println("Error"); 21. Do exercise 21, chapter 2, in the Stepp and Reges book. 22. Use loops to print the following: * *** ***** ... 
***********

The first line in the triangle contains 1 *, the second line contains 3 *'s, ... and the last line contains 11 *'s.

for(int i = 1; i <= 6; i++) {
    // print a line with 6-i spaces followed by 2i-1 *'s
    for(int j = 1; j <= 6 - i; j++)
        System.out.print(" ");
    for(int k = 1; k <= 2 * i - 1; k++)
        System.out.print("*");
    System.out.println();   // go to the beginning of the next line
}

23. Write Java code that simulates the throw of two dice and prints the value rolled.

Random gen = new Random();
int throw1 = gen.nextInt(6) + 1;
int throw2 = gen.nextInt(6) + 1;
System.out.println("Throw total: " + (throw1 + throw2));

24. Write a program that uses a Graphics object to draw 10 concentric circles.

We did this in class.
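For question 24, here is a minimal sketch of one approach. It draws into an offscreen BufferedImage rather than a window so it can run without a display; the class name, image size, and radii are made up for illustration:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

public class ConcentricCircles {
    public static void main(String[] args) {
        int size = 220;
        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();

        // White background, black circles
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, size, size);
        g.setColor(Color.BLACK);

        int cx = size / 2, cy = size / 2;   // common center of all circles
        for (int i = 1; i <= 10; i++) {
            int r = 10 * i;                 // radii 10, 20, ..., 100
            // drawOval takes the bounding box: top-left corner, then width and height
            g.drawOval(cx - r, cy - r, 2 * r, 2 * r);
        }
        g.dispose();

        // Confirm that at least one black pixel was actually drawn
        boolean drewSomething = false;
        for (int x = 0; x < size && !drewSomething; x++)
            for (int y = 0; y < size; y++)
                if (img.getRGB(x, y) == Color.BLACK.getRGB()) { drewSomething = true; break; }

        System.out.println("circles: 10");
        System.out.println("drew pixels: " + drewSomething);
    }
}
```

The same drawOval loop works inside the paintComponent(Graphics g) method of a JPanel if you want the circles shown in a window, which is presumably how it was done in class.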
http://www.cs.utexas.edu/~eberlein/cs312/mid1reviewSol.html
User’s Guide
Scyld ClusterWare Release 5.11.2-5112g0000
January 20, 2015

User’s Guide: Scyld ClusterWare Release 5.11.2-5112g0000; January 20, 2015
Revised Edition
Published January 20, 2015
Copyright © 1999 - 2015 Penguin Computing, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of Penguin Computing, Inc. The software described in this document is "commercial computer software" provided with restricted rights (except as to included open/free source). Use beyond license provisions is a violation of worldwide intellectual property laws, treaties, and conventions.

Scyld ClusterWare, the Highly Scyld logo, and the Penguin Computing logo are trademarks of Penguin Computing, Inc. Intel is a registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. Infiniband is a trademark of the InfiniBand Trade Association. Linux is a registered trademark of Linus Torvalds. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. All other trademarks and copyrights referred to are the property of their respective owners.

Table of Contents

Preface
    Feedback
1. Scyld ClusterWare Overview
    What Is a Beowulf Cluster?
        A Brief History of the Beowulf
        First-Generation Beowulf Clusters
        Scyld ClusterWare: A New Generation of Beowulf
    Scyld ClusterWare Technical Summary
        Top-Level Features of Scyld ClusterWare
        Process Space Migration Technology
        Compute Node Provisioning
        Compute Node Categories
        Compute Node States
        Major Software Components
    Typical Applications of Scyld ClusterWare
2. Interacting With the System
    Verifying the Availability of Nodes
    Monitoring Node Status
        The BeoStatus GUI Tool
            BeoStatus Node Information
            BeoStatus Update Intervals
            BeoStatus in Text Mode
        The bpstat Command Line Tool
        The beostat Command Line Tool
    Issuing Commands
        Commands on the Master Node
        Commands on the Compute Node
            Examples for Using bpsh
            Formatting bpsh Output
            bpsh and Shell Interaction
    Copying Data to the Compute Nodes
        Sharing Data via NFS
        Copying Data via bpcp
        Programmatic Data Transfer
        Data Transfer by Migration
    Monitoring and Controlling Processes
3. Running Programs
    Program Execution Concepts
        Stand-Alone Computer vs. Scyld Cluster
        Traditional Beowulf Cluster vs. Scyld Cluster
        Program Execution Examples
    Environment Modules
    Running Programs That Are Not Parallelized
        Starting and Migrating Programs to Compute Nodes (bpsh)
        Copying Information to Compute Nodes (bpcp)
    Running Parallel Programs
        An Introduction to Parallel Programming APIs
            MPI
            PVM
            Custom APIs
        Mapping Jobs to Compute Nodes
        Running MPICH and MVAPICH Programs
            mpirun
            Setting Mapping Parameters from Within a Program
            Examples
        Running OpenMPI Programs
            Pre-Requisites to Running OpenMPI
            Using OpenMPI
        Running MPICH2 and MVAPICH2 Programs
            Pre-Requisites to Running MPICH2/MVAPICH2
            Using MPICH2
            Using MVAPICH2
        Running PVM-Aware Programs
        Porting Other Parallelized Programs
    Running Serial Programs in Parallel
        mpprun
            Options
            Examples
        beorun
            Options
            Examples
    Job Batching
        Job Batching Options for ClusterWare
        Job Batching with TORQUE
            Running a Job
            Checking Job Status
            Finding Out Which Nodes Are Running a Job
            Finding Job Output
        Job Batching with POD Tools
    File Systems
    Sample Programs Included with Scyld ClusterWare
        linpack
A. Glossary of Parallel Computing Terms
B. TORQUE and Maui Release Information
    TORQUE Release Notes
    TORQUE README.array_changes
    TORQUE Change Log
    Maui Change Log
C. OpenMPI Release Information
D. MPICH2 Release Information
E. MVAPICH2 Release Information
F. MPICH-3 Release Information
    CHANGELOG
    Release Notes

Preface

Welcome to the Scyld ClusterWare HPC User’s Guide. This manual is for those who will use ClusterWare to run applications, so it presents the basics of ClusterWare parallel computing — what ClusterWare is, what you can do with it, and how you can use it. The manual covers the ClusterWare architecture and discusses the unique features of Scyld ClusterWare HPC. It will show you how to navigate the ClusterWare environment, how to run programs, and how to monitor their performance.

Because this manual is for the user accessing a ClusterWare system that has already been configured, it does not cover how to install, configure, or administer your Scyld cluster. You should refer to other parts of the Scyld documentation set for additional information, specifically:

• Visit the Penguin Computing Support Portal to find the latest documentation.
• If you have not yet built your cluster or installed Scyld ClusterWare HPC, refer to the latest Release Notes and the Installation Guide.
• If you are looking for information on how to administer your cluster, refer to the Administrator’s Guide.
• If you plan to write programs to use on your Scyld cluster, refer to the Programmer’s Guide.

Also not covered is use of the Linux operating system, on which Scyld ClusterWare is based. Some of the basics are presented here, but if you have not used Linux or Unix before, a book or online resource will be helpful. Books by O’Reilly and Associates are good sources of information.

This manual will provide you with information about the basic functionality of the utilities needed to start being productive with Scyld ClusterWare.

Feedback

We welcome any reports on errors or difficulties that you may find. We also would like your suggestions on improving this document. Please direct all comments and problems to [email protected]. When writing your email, please be as specific as possible, especially with errors in the text. Please include the chapter and section information. Also, please mention in which version of the manual you found the error. This version is Scyld ClusterWare HPC, Revised Edition, published January 20, 2015.

Chapter 1. Scyld ClusterWare Overview

Scyld ClusterWare is a Linux-based high-performance computing system. It solves many of the problems long associated with Linux Beowulf-class cluster computing, while simultaneously reducing the costs of system installation, administration, and maintenance. With Scyld ClusterWare, the cluster is presented to the user as a single, large-scale parallel computer.

This chapter presents a high-level overview of Scyld ClusterWare. It begins with a brief history of Beowulf clusters, and discusses the differences between the first-generation Beowulf clusters and a Scyld cluster. A high-level technical summary of Scyld ClusterWare is then presented, covering the top-level features and major software components of Scyld. Finally, typical applications of Scyld ClusterWare are discussed. Additional details are provided throughout the Scyld ClusterWare HPC documentation set.
What Is a Beowulf Cluster?

The term "Beowulf" refers to a multi-computer architecture designed for executing parallel computations. A "Beowulf cluster" is a parallel computer system conforming to the Beowulf architecture, which consists of a collection of commodity off-the-shelf (COTS) computers (referred to as "nodes") connected via a private network running an open-source operating system. Each node, typically running Linux, has its own processor(s), memory, storage, and I/O interfaces. The nodes communicate with each other through a private network, such as Ethernet or Infiniband, using standard network adapters. The nodes usually do not contain any custom hardware components, and are trivially reproducible.

One of these nodes, designated as the "master node", is usually attached to both the private and public networks, and is the cluster’s administration console. The remaining nodes are commonly referred to as "compute nodes". The master node is responsible for controlling the entire cluster and for serving parallel jobs and their required files to the compute nodes. In most cases, the compute nodes are configured and controlled by the master node. Typically, the compute nodes require neither keyboards nor monitors; they are accessed solely through the master node. From the viewpoint of the master node, the compute nodes are simply additional processor and memory resources.

In conclusion, Beowulf is a technology for networking Linux computers together to create a parallel, virtual supercomputer. The collection as a whole is known as a "Beowulf cluster". While early Linux-based Beowulf clusters provided a cost-effective hardware alternative to the supercomputers of the day, allowing users to execute high-performance computing applications, the original software implementations were not without their problems. Scyld ClusterWare addresses — and solves — many of these problems.

A Brief History of the Beowulf

Cluster computer architectures have a long history.
The early network-of-workstations (NOW) architecture used a group of standalone processors connected through a typical office network, their idle cycles harnessed by a small piece of special software, as shown below.

Figure 1-1. Network-of-Workstations Architecture

The NOW concept evolved to the Pile-of-PCs architecture, with one master PC connected to the public network, and the remaining PCs in the cluster connected to each other and to the master through a private network, as shown in the following figure. Over time, this concept solidified into the Beowulf architecture.

Figure 1-2. A Basic Beowulf Cluster

For a cluster to be properly termed a "Beowulf", it must adhere to the "Beowulf philosophy", which requires:

• Scalable performance
• The use of commodity off-the-shelf (COTS) hardware
• The use of an open-source operating system, typically Linux

Use of commodity hardware allows Beowulf clusters to take advantage of the economies of scale in the larger computing markets. In this way, Beowulf clusters can always take advantage of the fastest processors developed for high-end workstations, the fastest networks developed for backbone network providers, and so on. The progress of Beowulf clustering technology is not governed by any one company’s development decisions, resources, or schedule.

First-Generation Beowulf Clusters

The original Beowulf software environments were implemented as downloadable add-ons to commercially-available Linux distributions. These distributions included all of the software needed for a networked workstation: the kernel, various utilities, and many add-on packages. The downloadable Beowulf add-ons included several programming environments and development libraries as individually-installable packages.
With this first-generation Beowulf scheme, every node in the cluster required a full Linux installation and was responsible for running its own copy of the kernel. This requirement created many administrative headaches for the maintainers of Beowulf-class clusters. For this reason, early Beowulf systems tended to be deployed by the software application developers themselves (and required detailed knowledge to install and use). Scyld ClusterWare reduces and/or eliminates these and other problems associated with the original Beowulf-class clusters.

Scyld ClusterWare: A New Generation of Beowulf

Scyld ClusterWare streamlines the process of configuring, administering, running, and maintaining a Beowulf-class cluster computer. It was developed with the goal of providing the software infrastructure for commercial production cluster solutions. Scyld ClusterWare was designed with the differences between master and compute nodes in mind; it runs only the appropriate software components on each compute node. Instead of having a collection of computers each running its own fully-installed operating system, Scyld creates one large distributed computer. The user of a Scyld cluster will never log into one of the compute nodes nor worry about which compute node is which. To the user, the master node is the computer, and the compute nodes appear merely as attached processors capable of providing computing resources.

With Scyld ClusterWare, the cluster appears to the user as a single computer. Specifically:

• The compute nodes appear as attached processor and memory resources
• All jobs start on the master node, and are migrated to the compute nodes at runtime
• All compute nodes are managed and administered collectively via the master node

The Scyld ClusterWare architecture simplifies cluster setup and node integration, requires minimal system administration, provides tools for easy administration where necessary, and increases cluster reliability through seamless scalability.
In addition to its technical advances, Scyld ClusterWare provides a standard, stable, commercially-supported platform for deploying advanced clustering systems. See the next section for a technical summary of Scyld ClusterWare.

Scyld ClusterWare Technical Summary

Scyld ClusterWare presents a more uniform system view of the entire cluster to both users and applications through extensions to the kernel. A guiding principle of these extensions is to have little increase in both kernel size and complexity and, more importantly, negligible impact on individual processor performance.

In addition to its enhanced Linux kernel, Scyld ClusterWare includes libraries and utilities specifically improved for high-performance computing applications. For information on the Scyld libraries, see the Reference Guide. Information on using the Scyld utilities to run and monitor jobs is provided in Chapter 2 and Chapter 3. If you need to use the Scyld utilities to configure and administer your cluster, see the Administrator’s Guide.

Top-Level Features of Scyld ClusterWare

The following list summarizes the top-level features of Scyld ClusterWare.

Security and Authentication

With Scyld ClusterWare, the master node is a single point of security administration and authentication. The authentication envelope is drawn around the entire cluster and its private network. This obviates the need to manage copies or caches of credentials on compute nodes or to add the overhead of networked authentication. Scyld ClusterWare provides simple permissions on compute nodes, similar to Unix file permissions, allowing their use to be administered without additional overhead.

Easy Installation

Scyld ClusterWare is designed to augment a full Linux distribution, such as Red Hat Enterprise Linux (RHEL) or CentOS. The installer used to initiate the installation on the master node is provided on an auto-run CD-ROM.
You can install from scratch and have a running Linux HPC cluster in less than an hour. See the Installation Guide for full details.

Install Once, Execute Everywhere

A full installation of Scyld ClusterWare is required only on the master node. Compute nodes are provisioned from the master node during their boot process, and they dynamically cache any additional parts of the system during process migration or at first reference.

Single System Image

Scyld ClusterWare makes a cluster appear as a multi-processor parallel computer. The master node maintains (and presents to the user) a single process space for the entire cluster, known as the BProc Distributed Process Space. BProc is described briefly later in this chapter, and more details are provided in the Administrator’s Guide.

Execution Time Process Migration

Scyld ClusterWare stores applications on the master node. At execution time, BProc migrates processes from the master to the compute nodes. This approach virtually eliminates both the risk of version skew and the need for hard disks on the compute nodes. More information is provided in the section on process space migration later in this chapter. Also refer to the BProc discussion in the Administrator’s Guide.

Seamless Cluster Scalability

Scyld ClusterWare seamlessly supports the dynamic addition and deletion of compute nodes without modification to existing source code or configuration files. See the chapter on the BeoSetup utility in the Administrator’s Guide.

Administration Tools

Scyld ClusterWare includes simplified tools for performing cluster administration and maintenance. Both graphical user interface (GUI) and command line interface (CLI) tools are supplied. See the Administrator’s Guide for more information.

Web-Based Administration Tools

Scyld ClusterWare includes web-based tools for remote administration, job execution, and monitoring of the cluster. See the Administrator’s Guide for more information.
Additional Features

Additional features of Scyld ClusterWare include support for cluster power management (IPMI and Wake-on-LAN, easily extensible to other out-of-band management protocols); runtime and development support for MPI and PVM; and support for the LFS and NFS3 file systems.

Fully-Supported

Scyld ClusterWare is fully-supported by Penguin Computing, Inc.

Process Space Migration Technology

Scyld ClusterWare is able to provide a single system image through its use of the BProc Distributed Process Space, the Beowulf process space management kernel enhancement. BProc enables the processes running on compute nodes to be visible and managed on the master node. All processes appear in the master node’s process table, from which they are migrated to the appropriate compute node by BProc. Both process parent-child relationships and Unix job-control information are maintained with the migrated jobs. The stdout and stderr streams are redirected to the user’s ssh or terminal session on the master node across the network.

The BProc mechanism is one of the primary features that makes Scyld ClusterWare different from traditional Beowulf clusters. For more information, see the system design description in the Administrator’s Guide.

Compute Node Provisioning

Scyld ClusterWare utilizes light-weight provisioning of compute nodes from the master node’s kernel and Linux distribution. For Scyld Series 30 and Scyld ClusterWare HPC, PXE is the supported method for booting nodes into the cluster; the 2-phase boot sequence of earlier Scyld distributions is no longer used. The master node is the DHCP server serving the cluster private network. PXE booting across the private network ensures that the compute node boot package is version-synchronized for all nodes within the cluster. This boot package consists of the kernel, initrd, and rootfs.
If desired, the boot package can be customized per node in the Beowulf configuration file /etc/beowulf/config, which also includes the kernel command line parameters for the boot package. For a detailed description of the compute node boot procedure, see the system design description in the Administrator’s Guide. Also refer to the chapter on compute node boot options in that document.

Compute Node Categories

Compute nodes seen by the master over the private network are classified into one of three categories by the master node, as follows:

• Unknown — A node not formally recognized by the cluster as being either a Configured or Ignored node. When bringing a new compute node online, or after replacing an existing node’s network interface card, the node will be classified as unknown.

• Ignored — Nodes which, for one reason or another, you’d like the master node to ignore. These are not considered part of the cluster, nor will they receive a response from the master node during their boot process.

• Configured — Those nodes listed in the cluster configuration file using the "node" tag. These are formally part of the cluster, recognized as such by the master node, and used as computational resources by the cluster.

For more information on compute node categories, see the system design description in the Administrator’s Guide.

Compute Node States

BProc maintains the current condition or "node state" of each configured compute node in the cluster. The compute node states are defined as follows:

• down — Not communicating with the master, and its previous state was either down, up, error, unavailable, or boot.

• unavailable — Node has been marked unavailable or "off-line" by the cluster administrator; typically used when performing maintenance activities. The node is useable only by the user root.

• error — Node encountered an error during its initialization; this state may also be set manually by the cluster administrator.
The node is usable only by the user root.

• up — Node completed its initialization without error; node is online and operating normally. This is the only state in which non-root users may access the node.

• reboot — Node has been commanded to reboot itself; node will remain in this state until it reaches the boot state, as described below.

• halt — Node has been commanded to halt itself; node will remain in this state until it is reset (or powered back on) and reaches the boot state, as described below.

• pwroff — Node has been commanded to power itself off; node will remain in this state until it is powered back on and reaches the boot state, as described below.

• boot — Node has completed its stage 2 boot but is still initializing. After the node finishes booting, its next state will be either up or error.

For more information on compute node states, see the system design description in the Administrator’s Guide.

Major Software Components

The following is a list of the major software components included with Scyld ClusterWare HPC. For more information, see the relevant sections of the Scyld ClusterWare HPC documentation set, including the Installation Guide, Administrator’s Guide, User’s Guide, Reference Guide, and Programmer’s Guide.

• BProc — The process migration technology; an integral part of Scyld ClusterWare.

• BeoSetup — A GUI for configuring the cluster.

• BeoStatus — A GUI for monitoring cluster status.

• beostat — A text-based tool for monitoring cluster status.

• beoboot — A set of utilities for booting the compute nodes.

• beofdisk — A utility for remote partitioning of hard disks on the compute nodes.

• beoserv — The cluster’s DHCP, PXE, and dynamic provisioning server; it responds to compute nodes and serves the boot image.

• BPmaster — The BProc master daemon; it runs on the master node.

• BPslave — The BProc compute daemon; it runs on each of the compute nodes.
• bpstat — A BProc utility that reports status information for all nodes in the cluster.

• bpctl — A BProc command line interface for controlling the nodes.

• bpsh — A BProc utility intended as a replacement for rsh (remote shell).

• bpcp — A BProc utility for copying files between nodes, similar to rcp (remote copy).

• MPI — The Message Passing Interface, optimized for use with Scyld ClusterWare.

• PVM — The Parallel Virtual Machine, optimized for use with Scyld ClusterWare.

• mpprun — A parallel job-creation package for Scyld ClusterWare.

Typical Applications of Scyld ClusterWare

Scyld clustering provides an easy-to-use solution for anyone executing jobs that involve either a large number of computations or large amounts of data (or both). It is ideal both for large, monolithic, parallel jobs and for many normal-sized jobs run many times (such as Monte Carlo type analysis).

The increased computational resource needs of modern applications are frequently being met by Scyld clusters in a number of domains, including:

• Computationally-Intensive Activities — Optimization problems, stock trend analysis, financial analysis, complex pattern matching, medical research, genetics research, image rendering

• Scientific Computing / Research — Engineering simulations, 3D modeling, finite element analysis, computational fluid dynamics, computational drug development, seismic data analysis, PCB / ASIC routing

• Large-Scale Data Processing — Data mining, complex data searches and results generation, manipulating large amounts of data, data archival and sorting

• Web / Internet Uses — Web farms, application serving, transaction serving, data serving

These types of jobs can be performed many times faster on a Scyld cluster than on a single computer. Increased speed depends on the application code, the number of nodes in the cluster, and the type of equipment used in the cluster.
All of these can be easily tailored and optimized to suit the needs of your applications.

Chapter 2. Interacting With the System

This chapter discusses how to verify the availability of the nodes in your cluster, how to monitor node status, how to issue commands and copy data to the compute nodes, and how to monitor and control processes. For information on running programs across the cluster, see Chapter 3.

Verifying the Availability of Nodes

In order to use a Scyld cluster for computation, at least one node must be available or up. Thus, the first priority when interacting with a cluster is ascertaining the availability of nodes. Unlike traditional Beowulf clusters, Scyld ClusterWare provides rich reporting about the availability of the nodes.

You can use either the BeoStatus GUI tool or the bpstat command to determine the availability of nodes in your cluster. These tools, which can also be used to monitor node status, are described in the next section. If fewer nodes are up than you think should be, or some nodes report an error, check with your Cluster Administrator.

Monitoring Node Status

You can monitor the status of nodes in your cluster with the BeoStatus GUI tool or with either of two command line tools, bpstat and beostat. These tools are described in the sections that follow. Also see the Reference Guide for information on the various options and flags supported for these tools.

The BeoStatus GUI Tool

The BeoStatus graphical user interface (GUI) tool is the best way to check the status of the cluster, including which nodes are available or up. There are two ways to open the BeoStatus GUI as a Gnome X window: click the BeoStatus icon in the tool tray or in the applications pulldown, or type the command beostatus in a terminal window on the master node. You do not need to be a privileged user to use this command.
The default BeoStatus GUI mode is a tabular format known as the "Classic" display (shown in the following figure). You can select different display options from the Mode menu.

Figure 2-1. BeoStatus in the "Classic" Display Mode

BeoStatus Node Information

Each row in the BeoStatus display reports information for a single node, including the following:

• Node — The node’s assigned node number, starting at zero. Node -1, if shown, is the master node. The total number of node entries shown is set by the "iprange" or "nodes" keywords in the file /etc/beowulf/config, rather than the number of detected nodes. The entry for an inactive node displays the last reported data in a grayed-out row.

• Up — A graphical representation of the node’s status. A green checkmark is shown if the node is up and available. Otherwise, a red "X" is shown.

• State — The node’s last known state. This should agree with the state reported by both the bpstat command and the BeoSetup window.

• CPU "X" — The CPU loads for the node’s processors; at minimum, this indicates the CPU load for the first processor in each node. Since it is possible to mix uni-processor and multi-processor machines in a Scyld cluster, the number of CPU load columns is equal to the maximum number of processors for any node in your cluster. The label "N/A" will be shown for nodes with fewer than the maximum number of processors.

• Memory — The node’s current memory usage.

• Swap — The node’s current swap space (virtual memory) usage.

• Disk — The node’s hard disk usage. If a RAM disk is used, the maximum value shown is one-half the amount of physical memory. As the RAM disk competes with the kernel and application processes for memory, not all the RAM may be available.

• Network — The node’s network bandwidth usage. The total amount of bandwidth available is the sum of all network interfaces for that node.
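The memory and load figures that BeoStatus displays come from standard Linux kernel interfaces on each node (the beostat dumps later in this chapter show the same /proc files). As a rough illustration, not a documented BeoStatus internal, the sketch below derives a memory-usage figure like the one in the Memory column by reading /proc/meminfo on the local machine; on a cluster you would prefix the same commands with bpsh to sample a compute node instead.

```shell
# Derive a memory-usage figure similar to the BeoStatus "Memory" column
# from /proc/meminfo. This reads the local machine; on a Scyld cluster
# you would wrap it with "bpsh <node>" to sample a compute node.
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
mem_used_kb=$((mem_total_kb - mem_free_kb))
echo "memory: ${mem_used_kb} kB used of ${mem_total_kb} kB"
```

The same pattern extends to the Swap and Disk columns by reading SwapTotal/SwapFree or the output of df.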
BeoStatus Update Intervals

Once running, BeoStatus is non-interactive; the user simply monitors the reported information. The display is updated at 4-second intervals by default. You can modify this default using the command beostatus -u secs (where secs is the number of seconds) in a terminal window or an ssh session to the master node with X-forwarding enabled.

Tip: Each update places load on the master and compute nodes, as well as the interconnection network. Too-frequent updates can degrade the overall system performance.

BeoStatus in Text Mode

In environments where use of the Gnome X window system is undesirable or impractical, such as when accessing the master node through a slow remote network connection, you can view the status of the cluster as curses text output (shown in the following figure). To do this, enter the command beostatus -c in a terminal window on the master node or an ssh session to the master node.

BeoStatus in text mode reports the same node information as reported by the "Classic" display, except for the graphical indicator of node up (green checkmark) or node down (red X). The data in the text display is updated at 4-second intervals by default.

Figure 2-2. BeoStatus in Text Mode

The bpstat Command Line Tool

You can also check node status with the bpstat command. When run at a shell prompt on the master node without options, bpstat prints out a listing of all nodes in the cluster and their current status. You do not need to be a privileged user to use this command. Following is an example of the output from bpstat for a cluster with 10 compute nodes.

[user@cluster user] $ bpstat
Node(s)  Status  Mode        User   Group
5-9      down    ----------  root   root
4        up      ---x--x--x  any    any
0-3      up      ---x--x--x  root   root

bpstat will show one of the following indicators in the "Status" column:

• A node marked up is available to run jobs. This status is the equivalent of the green checkmark in the BeoStatus GUI.
• Nodes that have not yet been configured are marked as down. This status is the equivalent of the red X in the BeoStatus GUI.

• Nodes currently booting are temporarily shown with a status of boot. Wait 10-15 seconds and try again.

• The "error" status indicates a node initialization problem. Check with your Cluster Administrator.

For additional information on bpstat, see the section on monitoring and controlling processes later in this chapter. Also see the Reference Guide for details on using bpstat and its command line options.

The beostat Command Line Tool

You can use the beostat command to display raw status data for cluster nodes. When run at a shell prompt on the master node without options, beostat prints out a listing of stats for all nodes in the cluster, including the master node. You do not need to be a privileged user to use this command. The following example shows the beostat output for the master node and one compute node:

[user@cluster user] $ beostat
model           : 5
model name      : AMD Opteron(tm) Processor 248
stepping        : 10
cpu MHz         : 2211.352
cache size      : 1024 KB
fdiv_bug        : no
hlt_bug         : no
sep_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
bogomips        : 4422.05

*** /proc/meminfo *** Sun Sep 17 10:46:33 2006
        total:      used:       free:      shared:   buffers:  cached:
Mem:  4217454592  318734336  3898720256         0    60628992        0
Swap: 2089209856          0  2089209856
MemTotal:  4118608 kB
MemFree:   3807344 kB
MemShared:       0 kB
Buffers:     59208 kB
Cached:          0 kB
SwapTotal: 2040244 kB
SwapFree:  2040244 kB

*** /proc/loadavg *** Sun Sep 17 10:46:33 2006
3.00 2.28 1.09 178/178 0

*** /proc/net/dev *** Sun Sep 17 10:46:33 2006
Inter-|   Receive                                              | Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo co
 eth0:   85209660   615362 0 0 0 0 0 0  703311290   559376
 eth1: 4576500575 13507271 0 0 0 0 0 0 9430333982 13220730
 sit0:          0        0 0 0 0 0 0 0          0        0

*** /proc/stat *** Sun Sep 17 10:46:33 2006
cpu0 15040 0  466102 25629625
cpu1 17404 0 1328475 24751544

*** statfs ("/") *** Sun Sep 17 10:46:33 2006
path:      /
f_type:    0xef53
f_bsize:   4096
f_blocks:  48500104
f_bfree:   41439879
f_bavail:  38976212
f_files:   24641536
f_ffree:   24191647
f_fsid:    000000 000000
f_namelen: 255

============== Node: .0 (index 0) ==================

*** /proc/cpuinfo *** Sun Sep 17 10:46:34 2006
num processors  : 2
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 5
model name      : AMD Opteron(tm) Processor 248
stepping        : 10
cpu MHz         : 2211.386
cache size      : 1024 KB
fdiv_bug        : no
hlt_bug         : no
sep_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
bogomips        : 4422.04

*** /proc/meminfo *** Sun Sep 17 10:46:34 2006
        total:      used:      free:       shared:  buffers:  cached:
Mem:  4216762368   99139584  4117622784        0          0        0
Swap:          0          0           0
MemTotal:  4117932 kB
MemFree:   4021116 kB
MemShared:       0 kB
Buffers:         0 kB
Cached:          0 kB
SwapTotal:       0 kB
SwapFree:        0 kB

*** /proc/loadavg *** Sun Sep 17 10:46:34 2006
0.99 0.75 0.54 36/36 0

*** /proc/net/dev *** Sun Sep 17 10:46:34 2006
Inter-|   Receive                                              | Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo co
 eth0: 312353878 430256 0 0 0 0 0 0 246128779 541105
 eth1:         0      0 0 0 0 0 0 0         0      0

*** /proc/stat *** Sun Sep 17 10:46:34 2006
cpu0  29984 0  1629 15340009
cpu1 189495 0 11131 15170565

*** statfs ("/") *** Sun Sep 17 10:46:34 2006
path:      /
f_type:    0x1021994
f_bsize:   4096
f_blocks:  514741
f_bfree:   492803
f_bavail:  492803
f_files:   514741
f_ffree:   514588
f_fsid:    000000 000000
f_namelen: 255

The Reference Guide provides details for using beostat and its command line options.
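Because beostat emits plain text, its dumps can be post-processed with standard tools. The sketch below extracts the 1-minute load average for each node section from a captured dump; the sample text is abridged from the listing above, so beostat itself is not invoked here (on a live cluster you would pipe beostat directly into awk).

```shell
# Pull the 1-minute load average out of each /proc/loadavg block in a
# captured beostat dump. The sample is abridged from the output above.
beostat_sample='*** /proc/loadavg *** Sun Sep 17 10:46:33 2006
3.00 2.28 1.09 178/178 0
============== Node: .0 (index 0) ==================
*** /proc/loadavg *** Sun Sep 17 10:46:34 2006
0.99 0.75 0.54 36/36 0'

# For every loadavg header line, print the first field of the next line.
loads=$(printf '%s\n' "$beostat_sample" | awk '/loadavg/ {getline; print $1}')
printf '%s\n' "$loads"
```

Run against this sample, the script prints 3.00 and 0.99, the 1-minute figures for the master node and node 0.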
Issuing Commands

Commands on the Master Node

When you log into the cluster, you are actually logging into the master node, and the commands you enter on the command line will execute on the master node. The only exception is when you use special commands for interacting with the compute nodes, as described in the next section.

Commands on the Compute Node

Scyld ClusterWare provides the bpsh command for running jobs on the compute nodes. bpsh is a replacement for the traditional Unix utility rsh, used to run a job on a remote computer. Like rsh, bpsh takes as arguments the node on which to run the command and the command itself. bpsh allows you to run a command on more than one node without having to type the command once for each node, but it does not provide an interactive shell on the remote node like rsh does.

bpsh is primarily intended for running utilities and maintenance tasks on a single node or a range of nodes, rather than for running parallel programs. For information on running parallel programs with Scyld ClusterWare, see Chapter 3.

bpsh provides a convenient yet powerful interface for manipulating all (or a subset of) the cluster’s nodes simultaneously. It gives you the flexibility to access a compute node individually, but removes the requirement to access each node individually when a collective operation is desired. A number of examples and options are discussed in the sections that follow. For a complete reference to all the options available for bpsh, see the Reference Guide.

Examples for Using bpsh

Example 2-1. Checking for a File

You can use bpsh to check for specific files on a compute node. For example, to check for a file named output in the /tmp directory of node 3, you would run the following command on the master node:

[user@cluster user] $ bpsh 3 ls /tmp/output

The command output would appear on the master node terminal where you issued the command.

Example 2-2.
Running a Command on a Range of Nodes

You can run the same command on a range of nodes using bpsh. For example, to check for a file named output in the /tmp directory of nodes 3 through 5, you would run the following command on the master node:

[user@cluster user] $ bpsh 3,4,5 ls /tmp/output

Example 2-3. Running a Command on All Available Nodes

Use the -a flag to indicate to bpsh that you wish to run a command on all available nodes. For example, to check for a file named output in the /tmp directory of all nodes currently active in your cluster, you would run the following command on the master node:

[user@cluster user] $ bpsh -a ls /tmp/output

Note that when using the -a flag, the results are sorted by the response speed of the compute nodes, and are returned without node identifiers. Because this command produces output for every currently active node, the output may be hard to read on a large cluster. For example, if you ran the above command on a 64-node cluster in which half of the nodes have the file being requested, the results returned would be 32 lines of /tmp/output and another 32 lines of ls: /tmp/output: No such file or directory. Without node identifiers, it is impossible to ascertain the existence of the target file on a particular node. See the next section for bpsh options that enable you to format the results for easier reading.

Formatting bpsh Output

The bpsh command has a number of options for formatting its output to make it more useful for the user, including the following:

• The -L option makes bpsh wait for a full line from a compute node before it prints out the line. Without this option, the output from your command could include half a line from node 0 with a line from node 1 tacked onto the end, followed by the rest of the line from node 0.

• The -p option prefixes each line of output with the node number of the compute node that produced it.
This option causes the functionality of -L to be used, even if -L is not explicitly specified.

• The -s option forces the output of each compute node to be printed in sorted numerical order, rather than by the response speed of the compute nodes. With this option, all the output for node 0 will appear before any of the output for node 1. To add a divider between the output from each node, use the -d option.

• The -d option generates a divider between the output from each node. This option causes the functionality of -s to be used, even if -s is not explicitly specified.

For example, if you run the command bpsh -a -d -p ls /tmp/output on an 8-node cluster, the output makes it clear which nodes do and do not have the file output in the /tmp directory:

0 ---------------------------------------------------------------------
0: /tmp/output
1 ---------------------------------------------------------------------
1: ls: /tmp/output: No such file or directory
2 ---------------------------------------------------------------------
2: ls: /tmp/output: No such file or directory
3 ---------------------------------------------------------------------
3: /tmp/output
4 ---------------------------------------------------------------------
4: /tmp/output
5 ---------------------------------------------------------------------
5: /tmp/output
6 ---------------------------------------------------------------------
6: ls: /tmp/output: No such file or directory
7 ---------------------------------------------------------------------
7: ls: /tmp/output: No such file or directory

bpsh and Shell Interaction

Special shell features, such as piping and input/output redirection, are available to advanced users. This section provides several examples of shell interaction, using the following conventions:

• The command being run will be cmda.

• If it is piped to anything, it will be piped to cmdb.

• If an input file is used, it will be /tmp/input.
• If an output file is used, it will be /tmp/output.

• The node used will always be node 0.

Example 2-4. Command on Compute Node, Output on Master Node

The easiest case is running a command on a compute node and doing something with its output on the master node, or giving it input from the master. Following are a few examples:

[user@cluster user] $ bpsh 0 cmda | cmdb
[user@cluster user] $ bpsh 0 cmda > /tmp/output
[user@cluster user] $ bpsh 0 cmda < /tmp/input

Example 2-5. Command on Compute Node, Output on Compute Node

A somewhat more complex situation is to run the command on the compute node and do something with its input or output on that same compute node. There are two ways to accomplish this.

The first solution requires that all the programs you run be on the compute node. For this to work, you must first copy the cmda and cmdb executable binaries to the compute node. Then you would use the following commands:

[user@cluster user] $ bpsh 0 sh -c "cmda | cmdb"
[user@cluster user] $ bpsh 0 sh -c "cmda > /tmp/output"
[user@cluster user] $ bpsh 0 sh -c "cmda < /tmp/input"

The second solution does not require any of the programs to be on the compute node. However, it uses a lot of network bandwidth, as it takes the output and sends it to the master node, then sends it right back to the compute node. The appropriate commands are as follows:

[user@cluster user] $ bpsh 0 cmda | bpsh 0 cmdb
[user@cluster user] $ bpsh 0 cmda | bpsh 0 dd of=/tmp/output
[user@cluster user] $ bpsh 0 cat /tmp/input | bpsh 0 cmda

Example 2-6. Command on Master Node, Output on Compute Node

You can also run a command on the master node and do something with its input or output on the compute nodes.
The appropriate commands are as follows:

[user@cluster user] $ cmda | bpsh 0 cmdb
[user@cluster user] $ cmda | bpsh 0 dd of=/tmp/output
[user@cluster user] $ bpsh 0 cat /tmp/input | cmda

Copying Data to the Compute Nodes

There are several ways to get data from the master node to the compute nodes. This section describes using NFS to share data, using the Scyld ClusterWare command bpcp to copy data, and using programmatic methods for data transfer.

Sharing Data via NFS

Files on NFS-mounted file systems are visible to the compute nodes directly; the file /etc/beowulf/fstab specifies which file systems are NFS-mounted on each compute node by default.

Copying Data via bpcp

For example, to copy the file /tmp/foo from the master node to the /tmp directory on node 1:

[user@cluster user] $ bpcp /tmp/foo 1:/tmp/foo

Programmatic Data Transfer

The third method for transferring data is to do it programmatically. This is a bit more complex than the methods described in the previous sections, and will be described here only conceptually. If you are using an MPI job, you can have your Rank 0 process on the master node read in the data, then use MPI’s message passing capabilities to send the data over to a compute node. If you are writing a program that uses BProc functions directly, you can have the process first read the data while it is on the master node. When the process is moved over to the compute node, it should still be able to access the data read in while on the master node.

Data Transfer by Migration

Another programmatic method for file transfer is to read a file into memory prior to calling BProc to migrate the process to another node. This technique is especially useful for parameter and configuration files, or files containing the intermediate state of a computation. See the Reference Guide for a description of the BProc system calls.

Monitoring and Controlling Processes

One of the features of Scyld ClusterWare that is not provided in traditional Beowulf clusters is the BProc Distributed Process Space. BProc presents a single unified process space for the entire cluster, run from the master node, where you can see and control jobs running on the compute nodes.
This process space allows you to use standard Unix tools, such as top, ps, and kill. See the Administrator’s Guide for more details on BProc.

Scyld ClusterWare also includes a tool called bpstat that can be used to determine which node is running a process. Using the command option bpstat -p will list all processes currently running by process ID (PID), with the number of the node running each process. The following output is an example:

[user@cluster user] $ bpstat -p
PID   Node
6301  0
6302  1
6303  0
6304  2
6305  1
6313  2
6314  3
6321  3

Using the command option bpstat -P (with an uppercase "P" instead of a lowercase "p") tells bpstat to take the output of ps and reformat it, pre-pending a column showing the node number. The following two examples show the difference in the outputs from ps and from bpstat -P.

Example output from ps:

[user@cluster user] $ ps xf
  PID TTY   STAT  TIME COMMAND
 6503 pts/2 S     0:00 bash
 6665 pts/2 R     0:00 ps xf
 6471 pts/3 S     0:00
 6538 pts/3 S     0:00
 6553 pts/3 S     0:00
 6654 pts/3 R     0:03
 6655 pts/3 S     0:00
 6656 pts/3 RW    0:01
 6658 pts/3 SW    0:00
 6657 pts/3 RW    0:01
 6660 pts/3 SW    0:00
 6659 pts/3 RW    0:01
 6662 pts/3 SW    0:00
 6661 pts/3 SW    0:00
 6663 pts/3 SW    0:00

Example of the same ps output when run through bpstat -P instead:

[user@cluster user] $ ps xf | bpstat -P
NODE   PID TTY   STAT  TIME COMMAND
      6503 pts/2 S     0:00 bash
      6666 pts/2 R     0:00 ps xf
      6667 pts/2 R     0:00 bpstat -P
      6471 pts/3 S     0:00
      6538 pts/3 S     0:00
      6553 pts/3 S     0:00
      6654 pts/3 R     0:06
      6655 pts/3 S     0:00
   0  6656 pts/3 RW    0:06
   0  6658 pts/3 SW    0:00
   1  6657 pts/3 RW    0:06
   1  6660 pts/3 SW    0:00
   2  6659 pts/3 RW    0:06
   2  6662 pts/3 SW    0:00
   3  6661 pts/3 SW    0:00
   3  6663 pts/3 SW    0:00

For additional information on bpstat, see the section on monitoring node status earlier in this chapter. For information on the bpstat command line options, see the Reference Guide.

Chapter 3.
Running Programs

This chapter describes how to run both serial and parallel jobs with Scyld ClusterWare, and how to monitor the status of the cluster once your applications are running. It begins with a brief discussion of program execution concepts, including some examples. The discussion then covers running programs that are not parallelized, running parallel programs (including MPI-aware and PVM-aware programs), running serial programs in parallel, job batching, and file systems. Finally, the chapter covers the sample linpack program included with Scyld ClusterWare.

Program Execution Concepts

This section compares program execution on a stand-alone computer and a Scyld cluster. It also discusses the differences between running programs on a traditional Beowulf cluster and a Scyld cluster. Finally, it provides some examples of program execution on a Scyld cluster.

Stand-Alone Computer vs. Scyld Cluster

On a stand-alone computer running Linux, Unix, or most other operating systems, executing a program is a very simple process. For example, to generate a list of the files in the current working directory, you open a terminal window and type the command ls followed by the [return] key. Typing the [return] key causes the command shell — a program that listens to and interprets commands entered in the terminal window — to start the ls program (stored at /bin/ls). The output is captured and directed to the standard output stream, which also appears in the same window where you typed the command.

A Scyld cluster is not simply a group of networked stand-alone computers. Only the master node resembles the computing system with which you are familiar. The compute nodes have only the minimal software components necessary to support an application initiated from the master node. So, for instance, running the ls command on the master node causes the same series of actions as described above for a stand-alone computer, and the output is for the master node only.
However, running ls on a compute node involves a very different series of actions. Remember that a Scyld cluster has no resident applications on the compute nodes; applications reside only on the master node. So, for instance, to run the ls command on compute node 1, you would enter the command bpsh 1 ls on the master node. This command sends ls to compute node 1 via Scyld’s BProc software, and the output stream is directed to the terminal window on the master node, where you typed the command.

Some brief examples of program execution are provided in the last section of this chapter. Both BProc and bpsh are covered in more detail in the Administrator’s Guide.

Traditional Beowulf Cluster vs. Scyld Cluster

A job on a Beowulf cluster is actually a collection of processes running on the compute nodes. In traditional clusters of computers, and even on earlier Beowulf clusters, getting these processes started and running together was a complicated task. Typically, the cluster administrator would need to do all of the following:

• Ensure that the user had an account on all the target nodes, either manually or via a script.

• Ensure that the user could spawn jobs on all the target nodes. This typically entailed configuring a hosts.allow file on each machine, creating a specialized PAM module (a Linux authentication mechanism), or creating a server daemon on each node to spawn jobs on the user’s behalf.

• Copy the program binary to each node, either manually, with a script, or through a network file system.

• Ensure that each node had available identical copies of all the dependencies (such as libraries) needed to run the program.

• Provide knowledge of the state of the system to the application manually, through a configuration file, or through some add-on scheduling software.

With Scyld ClusterWare, most of these steps are removed. Jobs are started on the master node and are migrated out to the compute nodes via BProc.
A cluster architecture where jobs may be initiated only from the master node via BProc provides the following advantages:

• Users no longer need accounts on remote nodes.

• Users no longer need authorization to spawn jobs on remote nodes.

• Neither binaries nor libraries need to be available on the remote nodes.

• The BProc system provides a consistent view of all jobs running on the system.

With all these complications removed, program execution on the compute nodes becomes a simple matter of letting BProc know about your job when you start it. The method for doing so depends on whether you are launching a parallel program (for example, an MPI job or PVM job) or any other kind of program. See the sections on running parallel programs and running non-parallelized programs later in this chapter.

Program Execution Examples

This section provides a few examples of program execution with Scyld ClusterWare. Additional examples are provided in the sections on running parallel programs and running non-parallelized programs later in this chapter.

Example 3-1. Directed Execution with bpsh

In the directed execution mode, the user explicitly defines which node (or nodes) will run a particular job. This mode is invoked using the bpsh command, the ClusterWare shell command analogous in functionality to both the rsh (remote shell) and ssh (secure shell) commands. Following are two examples of using bpsh.

The first example runs hostname on compute node 0 and writes the output back from the node to the user’s screen:

[user@cluster user] $ bpsh 0 /bin/hostname
n0

If /bin is in the user’s $PATH, then bpsh does not need the full pathname:

[user@cluster user] $ bpsh 0 hostname
n0

The second example runs the /usr/bin/uptime utility on node 1, assuming /usr/bin is in the user’s $PATH:

[user@cluster user] $ bpsh 1 uptime
12:56:44 up 4:57, 5 users, load average: 0.06, 0.09, 0.03

Example 3-2.
Dynamic Execution with beorun and mpprun

In the dynamic execution mode, Scyld decides which node is the most capable of executing the job at that moment in time. Scyld includes two parallel execution tools that dynamically select nodes: beorun and mpprun. They differ only in that beorun runs the job concurrently on the selected nodes, while mpprun runs the job sequentially on one node at a time.

The following example shows the difference in the elapsed time to run a command with beorun vs. mpprun:

[user@cluster user] $ date;beorun -np 8 sleep 1;date
Fri Aug 18 11:48:30 PDT 2006
Fri Aug 18 11:48:31 PDT 2006
[user@cluster user] $ date;mpprun -np 8 sleep 1;date
Fri Aug 18 11:48:46 PDT 2006
Fri Aug 18 11:48:54 PDT 2006

Example 3-3. Binary Pre-Staged on Compute Node

A needed binary can be "pre-staged" by copying it to a compute node prior to execution of a shell script. In the following example, the shell script is in a file called test.sh:

######
#! /bin/bash
hostname.local
######

[user@cluster user] $ bpsh 1 mkdir -p /usr/local/bin
[user@cluster user] $ bpcp /bin/hostname 1:/usr/local/bin/hostname.local
[user@cluster user] $ bpsh 1 ./test.sh
n1

This makes the hostname binary available on compute node 1 as /usr/local/bin/hostname.local before the script is executed. The shell’s $PATH contains /usr/local/bin, so the compute node searches locally for hostname.local in $PATH, finds it, and executes it. Note that copying files to a compute node generally puts the files into the RAM filesystem on the node, thus reducing main memory that might otherwise be available for programs, libraries, and data on the node.

Example 3-4. Binary Migrated to Compute Node

If a binary is not "pre-staged" on a compute node, the full path to the binary must be included in the script in order for it to execute properly. In the following example, the master node starts the process (in this case, a shell) and moves it to node 1, then continues execution of the script.
However, when it comes to the hostname.local2 command, the process fails:

######
#! /bin/bash
hostname.local2
#######

[user@cluster user] $ bpsh 1 ./test.sh
./test.sh: line 2: hostname.local2: command not found

Since the compute node does not have hostname.local2 locally, the shell attempts to resolve the binary by requesting it from the master. The problem is that the master has no idea which binary to give back to the node, hence the failure. Because there is no way for BProc to know which binaries may be needed by the shell, hostname.local2 is not migrated along with the shell during the initial startup. Therefore, it is important to provide the compute node with a full path to the binary:

######
#! /bin/bash
/tmp/hostname.local2
#######

[user@cluster user] $ cp /bin/hostname /tmp/hostname.local2
[user@cluster user] $ bpsh 1 ./test.sh
n1

With a full path to the binary, the compute node can construct a proper request for the master, and the master knows exactly which binary to return to the compute node for proper execution.

Example 3-5. Process Data Files

Files that are opened by a process (including files on disk, sockets, or named pipes) are not automatically migrated to compute nodes. Suppose the application BOB needs the data file 1.dat:

[user@cluster user] $ bpsh 1 /usr/local/BOB/bin/BOB 1.dat

1.dat must either be pre-staged to the compute node, e.g., using bpcp to copy it there, or be accessible on an NFS-mounted file system. The file /etc/beowulf/fstab (or a node-specific fstab.nodeNumber) specifies which filesystems are NFS-mounted on each compute node by default.

Example 3-6. Installing Commercial Applications

Through the course of its execution, the application BOB in the example above does some work with the data file 1.dat, and then later attempts to call /usr/local/BOB/bin/BOB.helper.bin and /usr/local/BOB/bin/BOB.cleanup.bin.
If these binaries are not in the memory space of the process during migration, the calls to these binaries will fail. Therefore, /usr/local/BOB should be NFS-mounted to all of the compute nodes, or the binaries should be pre-staged using bpcp to copy them by hand to the compute nodes. The binaries will stay on each compute node until that node is rebooted.

Generally for commercial applications, the administrator should have $APP_HOME NFS-mounted on the compute nodes that will be involved in execution. A general best practice is to mount a general directory such as /opt, and install all of the applications into /opt.

Environment Modules

The ClusterWare env-modules package provides for the dynamic modification of a user's environment via modulefiles. Each modulefile contains the information needed to configure the shell for an application, allowing a user to easily switch between applications with a simple module switch command that resets environment variables like PATH and LD_LIBRARY_PATH.

A number of modules are already installed that configure application builds and execution with OpenMPI, MPICH2, and MVAPICH2. Execute the command module avail to see a list of available modules. See specific sections, below, for examples of how to use modules. For more information about creating your own modules, view the manpages man module and man modulefile.

Running Programs That Are Not Parallelized

Starting and Migrating Programs to Compute Nodes (bpsh)

There are no executable programs (binaries) on the file system of the compute nodes. This means that there is no getty, no login, and no shells on the compute nodes. Instead of the remote shell (rsh) and secure shell (ssh) commands that are available on networked stand-alone computers (each of which has its own collection of binaries), Scyld ClusterWare has the bpsh command.
The following example shows the standard ls command running on node 2 using bpsh:

[user@cluster user] $ bpsh 2 ls -FC /
bin/ bpfs/ dev/ etc/ home/ lib/ lib64/ opt/ proc/ sbin/ sys/ tmp/ usr/ var/

At startup time, by default Scyld ClusterWare exports various directories, e.g., /bin and /usr/bin, on the master node, and those directories are NFS-mounted by compute nodes. However, an NFS-accessible /bin/ls is not a requirement for bpsh 2 ls to work.

Note that the /sbin directory also exists on the compute node. It is not exported by the master node by default, and thus it exists locally on a compute node in the RAM-based filesystem. bpsh 2 ls /sbin usually shows an empty directory. Nonetheless, bpsh 2 modprobe bproc executes successfully, even though which modprobe shows that the command resides in /sbin/modprobe, and bpsh 2 which modprobe fails to find the command on the compute node because its /sbin does not contain modprobe.

bpsh 2 modprobe bproc works because bpsh initiates a modprobe process on the master node, then forms a process memory image that includes the command's binary and references to all its dynamically linked libraries. This process memory image is then copied (migrated) to the compute node, and there the references to dynamic libraries are remapped in the process address space. Only then does the modprobe command begin real execution.

bpsh is not a special version of sh, but a special way of handling execution. This process works with any program. Be aware of the following:

• All three standard I/O streams — stdin, stdout, and stderr — are forwarded to the master node. Since some programs need to read standard input and will stop working if they're run in the background, be sure to close standard input at invocation by using the bpsh -n flag when you run a program in the background on a compute node.
• Because shell scripts expect executables to be present, and compute nodes do not meet this requirement, shell scripts should run on the master node and be modified to invoke bpsh for the commands that must execute on the compute nodes.
• The dynamic libraries are cached separately from the process memory image, and are copied to the compute node only if they are not already there. This saves time and network bandwidth. After the process completes, the dynamic libraries are unloaded from memory, but they remain in the local cache on the compute node, so they won't need to be copied if needed again.

For additional information on the BProc Distributed Process Space and how processes are migrated to compute nodes, see the Administrator's Guide.

Copying Information to Compute Nodes (bpcp)

Just as traditional Unix has copy (cp), remote copy (rcp), and secure copy (scp) to move files to and from networked machines, Scyld ClusterWare has the bpcp command.

Although the default sharing of the master node's home directories via NFS is useful for sharing small files, it is not a good solution for large data files. Having the compute nodes read large data files served via NFS from the master node will result in major network congestion, or even an overload and shutdown of the NFS server. In these cases, staging data files on compute nodes using the bpcp command is an alternate solution. Other solutions include using dedicated NFS servers or NAS appliances, and using cluster file systems.

Following are some examples of using bpcp. This example shows the use of bpcp to copy a data file named foo2.dat from the current directory to the /tmp directory on node 6:

[user@cluster user] $ bpcp foo2.dat 6:/tmp

The default directory on the compute node is the current directory on the master node. The current directory on the compute node may already be NFS-mounted from the master node, but it may not exist.
The example above works, since /tmp exists on the compute node, but it will fail if the destination does not exist. To avoid this problem, you can create the necessary destination directory on the compute node before copying the file, as shown in the next example.

In this example, we change to the /tmp/foo directory on the master, use bpsh to create the same directory on node 6, then copy foo2.dat to the node:

[user@cluster user] $ cd /tmp/foo
[user@cluster user] $ bpsh 6 mkdir /tmp/foo
[user@cluster user] $ bpcp foo2.dat 6:

This example copies foo2.dat from node 2 to node 3 directly, without the data being stored on the master node. As in the first example, this works because /tmp exists:

[user@cluster user] $ bpcp 2:/tmp/foo2.dat 3:/tmp

Running Parallel Programs

An Introduction to Parallel Programming APIs

Programmers are generally familiar with serial, or sequential, programs. Simple programs — like "Hello World" and the basic suite of searching and sorting programs — are typical of sequential programs. They have a beginning, an execution sequence, and an end; at any time during the run, the program is executing only at a single point.

A thread is similar to a sequential program, in that it also has a beginning, an execution sequence, and an end. At any time while a thread is running, there is a single point of execution. A thread differs in that it isn't a stand-alone program; it runs within a program. The concept of threads becomes important when a program has multiple threads running at the same time and performing different tasks. To run in parallel means that more than one thread of execution is running at the same time, often on different processors of one computer; in the case of a cluster, the threads are running on different computers.
A few things are required to make parallelism work and be useful: the program must migrate to another computer or computers and get started, and at some point the data upon which the program is working must be exchanged between the processes.

The simplest case is when the same single-process program is run with different input parameters on all the nodes, and the results are gathered at the end of the run. Using a cluster to get faster results of the same non-parallel program with different inputs is called parametric execution.

A much more complicated example is a simulation, where each process represents some number of elements in the system. Every few time steps, all the elements need to exchange data across boundaries to synchronize the simulation. This situation requires a message passing interface, or MPI.

To solve these two problems — program startup and message passing — you can develop your own code using POSIX interfaces. Alternatively, you could utilize an existing parallel application programming interface (API), such as the Message Passing Interface (MPI) or the Parallel Virtual Machine (PVM). These are discussed in the sections that follow.

MPI

The Message Passing Interface (MPI) application programming interface is currently the most popular choice for writing parallel programs. The MPI standard leaves implementation details to the system vendors (like Scyld). This is useful because they can make appropriate implementation choices without adversely affecting the output of the program.

A program that uses MPI is automatically started a number of times and is allowed to ask two questions: How many of us (size) are there, and which one am I (rank)? Then a number of conditionals are evaluated to determine the actions of each process. Messages may be sent and received between processes.
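The size/rank pattern just described can be sketched in a few lines of C. This is a generic illustration rather than a Scyld-specific example; it assumes an MPI toolchain (for example, a compiler wrapper such as mpicc) and would be launched with mpirun:

```
/* size_rank.c — minimal MPI size/rank sketch (assumes an MPI environment) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, rank;

    MPI_Init(&argc, &argv);                 /* start up MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us are there? */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which one am I? */

    if (rank == 0)
        printf("coordinator: %d processes total\n", size);
    else
        printf("worker: rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Every process runs the same binary; the conditional on rank is what gives each process its distinct role.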
The advantages of MPI are that the programmer:

• Doesn't have to worry about how the program gets started on all the machines
• Has a simplified interface for inter-process messages
• Doesn't have to worry about mapping processes to nodes
• Abstracts the network details, resulting in more portable, hardware-agnostic software

Also see the section on running MPI-aware programs later in this chapter. Scyld ClusterWare includes several implementations of MPI:

MPICH

Scyld ClusterWare includes MPICH, a freely-available implementation of the MPI standard managed by Argonne National Laboratory. Visit the project website for more information. Scyld MPICH is modified to use BProc and Scyld job mapping support; see the section on job mapping later in this chapter.

MVAPICH

MVAPICH is an implementation of MPICH for Infiniband interconnects. Visit the project website for more information. Scyld MVAPICH is modified to use BProc and Scyld job mapping support; see the section on job mapping later in this chapter.

MPICH2

Scyld ClusterWare includes MPICH2, a second-generation MPICH. Visit the project website for more information. Scyld MPICH2 is customized to use environment modules. See the Section called Running MPICH2 and MVAPICH2 Programs for details.

MVAPICH2

MVAPICH2 is a second-generation MVAPICH. Visit the project website for more information. Scyld MVAPICH2 is customized to use environment modules. See the Section called Running MPICH2 and MVAPICH2 Programs for details.

OpenMPI

OpenMPI is an open-source implementation of the Message Passing Interface 2 (MPI-2) specification. The OpenMPI implementation is an optimized combination of several other MPI implementations, and is likely to perform better than MPICH or MVAPICH. Visit the project website for more information. Also see the Section called Running OpenMPI Programs for details.

Other MPI Implementations

Various commercial MPI implementations run on Scyld ClusterWare. Visit the Penguin Computing Support Portal for more information.
You can also download and build your own version of MPI, and configure it to run on Scyld ClusterWare.

PVM

Parallel Virtual Machine (PVM) was an earlier parallel programming interface. Unlike MPI, it is not a specification but a single set of source code distributed on the Internet. PVM reveals much more about the details of starting your job on remote nodes. However, it fails to abstract implementation details as well as MPI does.

PVM is deprecated, but is still in use by legacy code. We generally advise against writing new programs in PVM, but some of the unique features of PVM may suggest its use. Also see the section on running PVM-aware programs later in this chapter.

Custom APIs

As mentioned earlier, you can develop your own parallel API by using various Unix and TCP/IP standards. In terms of starting a remote program, there are programs written:

• Using the rexec function call
• To use the rexec or rsh program to invoke a sub-program
• To use Remote Procedure Call (RPC)
• To invoke another sub-program using the inetd super server

These solutions come with their own problems, particularly in the implementation details. What are the network addresses? What is the path to the program? What is the account name on each of the computers? How is one going to load-balance the cluster? Scyld ClusterWare, which doesn't have binaries installed on the cluster nodes, may not lend itself to these techniques. We recommend you write your parallel code in MPI. That said, Scyld has some experience with getting rexec() calls to work, and calls to rsh can simply be replaced with the more cluster-friendly bpsh.

Mapping Jobs to Compute Nodes

Running programs specifically designed to execute in parallel across a cluster requires at least the knowledge of the number of processes to be used. Scyld ClusterWare uses the NP environment variable to determine this.
The following example will use 4 processes to run an MPI-aware program called a.out, which is located in the current directory:

[user@cluster user] $ NP=4 ./a.out

Note that each kind of shell has its own syntax for setting environment variables; the example above uses the syntax of the Bourne shell (/bin/sh or /bin/bash).

What the example above does not specify is which specific nodes will execute the processes; this is the job of the mapper. Mapping determines which node will execute each process. While this seems simple, it can get complex as various requirements are added. The mapper scans available resources at the time of job submission to decide which processors to use. Scyld ClusterWare includes beomap, a mapping API (documented in the Programmer's Guide, with details for writing your own mapper).

The mapper's default behavior is controlled by the following environment variables:

• NP — The number of processes requested, but not the number of processors. As in the example earlier in this section, NP=4 ./a.out will run the MPI program a.out with 4 processes.
• ALL_CPUS — Set the number of processes to the number of CPUs available to the current user. Similar to the example above, ALL_CPUS=1 ./a.out would run the MPI program a.out on all available CPUs.
• ALL_NODES — Set the number of processes to the number of nodes available to the current user. Similar to the ALL_CPUS variable, but you get a maximum of one CPU per node. This is useful for running a job per node instead of per CPU.
• ALL_LOCAL — Run every process on the master node; used for debugging purposes.
• NO_LOCAL — Don't run any processes on the master node.
• EXCLUDE — A colon-delimited list of nodes to be avoided during node assignment.
• BEOWULF_JOB_MAP — A colon-delimited list of nodes. The first node listed will be the first process (MPI Rank 0) and so on.
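These mapper variables are ordinary environment variables, so in a Bourne-style shell they can be set per command. The sketch below substitutes echo for a real MPI binary simply to show what the launched program would see in its environment; the node numbers are illustrative:

```shell
# Stand-in for an MPI program: print the mapper variables it inherits
NP=6 NO_LOCAL=1 EXCLUDE=2:4:5 sh -c 'echo "NP=$NP NO_LOCAL=$NO_LOCAL EXCLUDE=$EXCLUDE"'
# prints: NP=6 NO_LOCAL=1 EXCLUDE=2:4:5

# Pin ranks explicitly: rank 0 on node 3, rank 1 on node 1, rank 2 on node 0
BEOWULF_JOB_MAP=3:1:0 NP=3 sh -c 'echo "map=$BEOWULF_JOB_MAP"'
# prints: map=3:1:0
```

With a real MPI-aware binary in place of the echo command, the mapper reads these same variables at job startup.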
You can use the beomap program to display the current mapping for the current user in the current environment with the current resources at the current time. See the Reference Guide for a detailed description of beomap and its options, as well as examples for using it.

Running MPICH and MVAPICH Programs

MPI-aware programs are those written to the MPI specification and linked with Scyld MPI libraries. Applications that use MPICH (Ethernet "p4") or MVAPICH (Infiniband "vapi") are compiled and linked with common MPICH/MVAPICH implementation libraries, plus specific compiler family (e.g., gnu, Intel, PGI) libraries. The same application binary can execute either in an Ethernet interconnection environment or an Infiniband interconnection environment that is specified at run time. This section discusses how to run these programs and how to set mapping parameters from within such programs. For information on building MPICH/MVAPICH programs, see the Programmer's Guide.

mpirun

Almost all implementations of MPI have an mpirun program, which shares the syntax of mpprun but boasts additional features for MPI-aware programs. In the Scyld implementation of mpirun, all of the options available via environment variables or flags through directed execution are available as flags to mpirun, and can be used with properly compiled MPI jobs. For example, the command for running a hypothetical program named my-mpi-prog with 16 processes:

[user@cluster user] $ mpirun -np 16 my-mpi-prog arg1 arg2

is equivalent to running the following commands in the Bourne shell:

[user@cluster user] $ export NP=16
[user@cluster user] $ my-mpi-prog arg1 arg2

Setting Mapping Parameters from Within a Program

A program can be designed to set all the required parameters itself. This makes it possible to create programs in which the parallel execution is completely transparent.
However, it should be noted that this will work only with Scyld ClusterWare, while the rest of your MPI program should work on any MPI platform. Use of this feature differs from the command line approach, in that all options that need to be set on the command line can be set from within the program. This feature may be used only with programs specifically designed to take advantage of it, rather than any arbitrary MPI program. However, this option makes it possible to produce turn-key applications and parallel library functions in which the parallelism is completely hidden.

Following is a brief example of the necessary source code to invoke mpirun with the -np 16 option from within a program, to run the program with 16 processes:

/* Standard MPI include file */
#include <mpi.h>
#include <stdlib.h> /* for setenv() */

int main(int argc, char **argv)
{
    setenv("NP", "16", 1); /* set up mpirun env vars before MPI_Init */
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}

More details for setting mapping parameters within a program are provided in the Programmer's Guide.

Examples

The examples in this section illustrate certain aspects of running a hypothetical MPI-aware program named my-mpi-prog.

Example 3-7. Specifying the Number of Processes

This example shows a cluster execution of a hypothetical program named my-mpi-prog run with 4 processes:

[user@cluster user] $ NP=4 ./my-mpi-prog

An alternative syntax is as follows:

[user@cluster user] $ NP=4
[user@cluster user] $ export NP
[user@cluster user] $ ./my-mpi-prog

Note that the user specified neither the nodes to be used nor a mechanism for migrating the program to the nodes. The mapper does these tasks, and jobs are run on the nodes with the lowest CPU utilization.

Example 3-8. Excluding Specific Resources

In addition to specifying the number of processes to create, you can also exclude specific nodes as computing resources.
In this example, we run my-mpi-prog again, but this time we not only specify the number of processes to be used (NP=6), but we also exclude the master node (NO_LOCAL=1) and some cluster nodes (EXCLUDE=2:4:5) as computing resources:

[user@cluster user] $ NP=6 NO_LOCAL=1 EXCLUDE=2:4:5 ./my-mpi-prog

Running OpenMPI Programs

OpenMPI programs are those written to the MPI-2 specification. This section provides information needed to use programs with OpenMPI as implemented in Scyld ClusterWare.

Pre-Requisites to Running OpenMPI

A number of commands, such as mpirun, are duplicated between OpenMPI and other MPI implementations. Scyld ClusterWare provides the env-modules package, which gives users a convenient way to switch between the various implementations. Be sure to load an OpenMPI module to favor the OpenMPI commands and libraries, located in /opt/scyld/openmpi/, over the MPICH commands and libraries, which are located in /usr/.

Each module bundles together various compiler-specific environment variables to configure your shell for building and running your application, and for accessing compiler-specific manpages. Be sure that you are loading the proper module to match the compiler that built the application you wish to run. For example, to load the OpenMPI module for use with the Intel compiler, do the following:

[user@cluster user] $ module load openmpi/intel

Currently, there are modules for the GNU, Intel, and PGI compilers. To see a list of all of the available modules:

[user@cluster user] $ module avail openmpi
------------------------------- /opt/modulefiles -------------------------------
openmpi/gnu/1.5.3    openmpi/intel/1.5.3    openmpi/pgi/1.5.3

For more information about creating your own modules, view the manpages man module and man modulefile.

Using OpenMPI

Unlike the Scyld ClusterWare MPICH implementation, OpenMPI does not honor the Scyld Beowulf job mapping environment variables. You must either specify the list of hosts on the command line or in a hostfile.
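A hostfile is a plain text file naming one host per line; an optional slots count caps the number of processes placed on that host. The following is a minimal sketch (the filename and slot counts are illustrative, not part of any default configuration):

```
# hosts.txt — hypothetical OpenMPI hostfile
n0 slots=2
n1 slots=2
```

Such a file would be passed to mpirun with the --hostfile option, for example: mpirun --hostfile hosts.txt -np 4 ./mpiprog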
To specify the list of hosts on the command line, use the -H option. The argument following -H is a comma-separated list of hostnames, not node numbers. For example, to run a two-process job, with one process running on node 0 and one on node 1:

[user@cluster user] $ mpirun -H n0,n1 -np 2 ./mpiprog

Support for running jobs over Infiniband using the OpenIB transport is included with the OpenMPI distributed with Scyld ClusterWare. Much like running a job with MPICH over Infiniband, one must specifically request the use of OpenIB. For example:

[user@cluster user] $ mpirun --mca btl openib,sm,self -H n0,n1 -np 2 ./myprog

Read the OpenMPI mpirun man page for more information about using a hostfile and other tunable options available through mpirun.

Running MPICH2 and MVAPICH2 Programs

MPICH2 and MVAPICH2 programs are those written to the MPI-2 specification. This section provides information needed to use programs with MPICH2 or MVAPICH2 as implemented in Scyld ClusterWare.

Pre-Requisites to Running MPICH2/MVAPICH2

As with Scyld OpenMPI, the Scyld MPICH2 and MVAPICH2 distributions are repackaged Open Source MPICH2 and MVAPICH2 that utilize environment modules to build and to execute applications. Each module bundles together various compiler-specific environment variables to configure your shell for building and running your application, and for accessing implementation- and compiler-specific manpages. You must use the same module both to build the application and to execute it. For example, to load the MPICH2 module for use with the Intel compiler, do the following:

[user@cluster user] $ module load mpich2/intel

Currently, there are modules for the GNU, Intel, and PGI compilers.
To see a list of all of the available modules:

[user@cluster user] $ module avail mpich2 mvapich2
------------------------------- /opt/modulefiles -------------------------------
mpich2/gnu/1.3.2    mpich2/intel/1.3.2    mpich2/pgi/1.3.2
------------------------------- /opt/modulefiles -------------------------------
mvapich2/gnu/1.6    mvapich2/intel/1.6    mvapich2/pgi/1.6

For more information about creating your own modules, view the manpages man module and man modulefile.

Using MPICH2

Unlike the Scyld ClusterWare MPICH implementation, MPICH2 does not honor the Scyld Beowulf job mapping environment variables. Use mpiexec to execute MPICH2 applications. After loading an mpich2 module, see the mpiexec manpage (man mpiexec) for specifics, and visit the project website for full documentation.

Using MVAPICH2

Unlike the Scyld ClusterWare MVAPICH implementation, MVAPICH2 does not honor the Scyld Beowulf job mapping environment variables. Use mpirun_rsh to execute MVAPICH2 applications. After loading an mvapich2 module, use mpirun_rsh --help to see specifics, and visit the project website for full documentation.

Running PVM-Aware Programs

Parallel Virtual Machine (PVM) is an application programming interface for writing parallel applications, enabling a collection of heterogeneous computers to be used as a coherent and flexible concurrent computational resource. Scyld has developed the Scyld PVM library, specifically tailored to allow PVM to take advantage of the technologies used in Scyld ClusterWare. A PVM-aware program is one that has been written to the PVM specification and linked against the Scyld PVM library.

A complete discussion of cluster configuration for PVM is beyond the scope of this document. However, a brief introduction is provided here, with the assumption that the reader has some background knowledge on using PVM.

You can start the master PVM daemon on the master node using the PVM console, pvm.
To add a compute node to the virtual machine, issue an add .# command, where # is replaced by a node's assigned number in the cluster.

Tip: You can generate a list of node numbers using either the beosetup or bpstat command.

Alternately, you can start the PVM console with a hostfile filename on the command line. The hostfile should contain a .# for each compute node you want as part of the virtual machine. As with standard PVM, this method automatically spawns PVM slave daemons to the specified compute nodes in the cluster. From within the PVM console, use the conf command to list your virtual machine's configuration; the output will include a separate line for each node being used.

Once your virtual machine has been configured, you can run your PVM applications as you normally would.

Porting Other Parallelized Programs

Programs written for use on other types of clusters may require various levels of change to function with Scyld ClusterWare. For instance:

• Scripts or programs that invoke rsh can instead call bpsh.
• Scripts or programs that invoke rcp can instead call bpcp.
• beomap can be used with any script to load balance programs that are to be dispatched to the compute nodes.

For more information on porting applications, see the Programmer's Guide.

Running Serial Programs in Parallel

For jobs that are not "MPI-aware" or "PVM-aware", but need to be started in parallel, Scyld ClusterWare provides the parallel execution utilities mpprun and beorun. These utilities are more sophisticated than bpsh, in that they can automatically select ranges of nodes on which to start your program, run tasks on the master node, determine the number of CPUs on a node, and start a copy on each CPU. Thus, mpprun and beorun provide you with true "dynamic execution" capabilities, whereas bpsh provides "directed execution" only.

mpprun and beorun are very similar, and have similar parameters.
They differ only in that mpprun runs jobs sequentially on the selected processors, while beorun runs jobs concurrently on the selected processors.

mpprun

mpprun is intended for applications rather than utilities, and runs them sequentially on the selected nodes. The basic syntax of mpprun is as follows:

[user@cluster user] $ mpprun [options] app arg1 arg2...

where app is the application program you wish to run; it need not be a parallel program. The arg arguments are the values passed to each copy of the program being run.

Options

mpprun includes options for controlling various aspects of the job, including the ability to:

• Specify the number of processors on which to start copies of the program

Examples

Run 16 tasks of program app:

[user@cluster user] $ mpprun -np 16 app infile outfile

Run 16 tasks of program app on any available nodes except nodes 2 and 3:

[user@cluster user] $ mpprun -np 16 --exclude 2:3 app infile outfile

Run 4 tasks of program app with task 0 on node 4, task 1 on node 2, task 2 on node 1, and task 3 on node 5:

[user@cluster user] $ mpprun --map 4:2:1:5 app infile outfile

beorun

beorun is intended for applications rather than utilities, and runs them concurrently on the selected nodes. The basic syntax of beorun is as follows:

[user@cluster user] $ beorun [options] app arg1 arg2...

where app is the application program you wish to run; it need not be a parallel program. The arg arguments are the values passed to each copy of the program being run.

Options

beorun includes options for controlling various aspects of the job, including the ability to:

• Specify the number of processors on which to start copies of the program
Examples

Run 16 tasks of program app:

[user@cluster user] $ beorun -np 16 app infile outfile

Run 16 tasks of program app on any available nodes except nodes 2 and 3:

[user@cluster user] $ beorun -np 16 --exclude 2:3 app infile outfile

Run 4 tasks of program app with task 0 on node 4, task 1 on node 2, task 2 on node 1, and task 3 on node 5:

[user@cluster user] $ beorun --map 4:2:1:5 app infile outfile

Job Batching

Job Batching Options for ClusterWare

For Scyld ClusterWare HPC, the default installation includes the TORQUE resource manager, providing users an intuitive interface for remotely initiating and managing batch jobs on distributed compute nodes. TORQUE is an open source tool based on standard OpenPBS. Basic instructions for using TORQUE are provided in the next section. For more general product information, see the TORQUE information page sponsored by Cluster Resources, Inc. (CRI). (Note that TORQUE is not included in the default installation of Scyld Beowulf Series 30.)

Scyld also offers the Scyld TaskMaster Suite for clusters running Scyld Beowulf Series 30, Scyld ClusterWare HPC, and upgrades to these products. TaskMaster is a Scyld-branded and supported commercial scheduler and resource manager, developed jointly with Cluster Resources. For information on TaskMaster, see the Scyld TaskMaster Suite page in the HPC Clustering area of the Penguin website, or contact Scyld Customer Support.

In addition, Scyld provides support for most popular open source and commercial schedulers and resource managers, including SGE, LSF, PBSPro, Maui and MOAB. For the latest information, visit the Penguin Computing Support Portal.

Job Batching with TORQUE

The default installation is configured as a simple job serializer with a single queue named batch.

You can use the TORQUE resource manager to run jobs, check job status, find out which nodes are running your job, and find job output.
Running a Job

To run a job with TORQUE, you can put the commands you would normally use into a job script, and then submit the job script to the cluster using qsub. The qsub program has a number of options that may be supplied on the command line or as special directives inside the job script. For the most part, these options behave exactly the same whether given in a job script or on the command line, but job scripts make it easier to manage your actions and their results.

Following are some examples of running a job using qsub. For more detailed information on qsub, see the qsub man page.

Example 3-9. Starting a Job with a Job Script Using One Node

The following script declares a job with the name "myjob", to be run using one node. The script uses the PBS -N directive, launches the job, and finally sends the current date and working directory to standard output.

#!/bin/sh
## Set the job name
#PBS -N myjob
#PBS -l nodes=1
# Run my job
/path/to/myjob
echo Date: $(date)
echo Dir: $PWD

You would submit "myjob" as follows:

[bjosh@iceberg]$ qsub -l nodes=1 myjob
15.iceberg

Example 3-10. Starting a Job from the Command Line

This example provides the command line equivalent of the job run in the example above. We enter all of the qsub options on the initial command line. Then qsub reads the job commands line-by-line until we type ^D, the end-of-file character. At that point, qsub queues the job and returns the Job ID.

[bjosh@iceberg]$ qsub -N myjob -l nodes=1:ppn=1 -j oe
cd $PBS_O_WORKDIR
echo Date: $(date)
echo Dir: $PWD
^D
16.iceberg

Example 3-11. Starting an MPI Job with a Job Script

The following script declares an MPI job named "mpijob". The script uses the PBS -N directive, launches the job using mpirun, and finally prints out the current date and working directory. When submitting MPI jobs using TORQUE, it is recommended to simply call mpirun without any arguments.
mpirun will detect that it is being launched from within TORQUE and assure that the job will be properly started on the nodes TORQUE has assigned to the job. In this case, TORQUE will properly manage and track resources used by the job.

#!/bin/sh
## Set the job name
#PBS -N mpijob
# Run my job
mpirun /path/to/mpijob
echo Date: $(date)
echo Dir: $PWD

To request 8 total processors to run "mpijob", you would submit the job as follows:

[bjosh@iceberg]$ qsub -l nodes=8 mpijob
17.iceberg

To request 8 total processors, using 4 nodes, each with 2 processors per node, you would submit the job as follows:

[bjosh@iceberg]$ qsub -l nodes=4:ppn=2 mpijob
18.iceberg

Checking Job Status

You can check the status of your job using qstat. The command line option qstat -n will display the status of queued jobs. To watch the progression of events, use the watch command to execute qstat -n every 2 seconds (the default interval); type [CTRL]-C to interrupt watch when needed.

Example 3-12. Checking Job Status

This example shows how to check the status of the job named "myjob", which we ran on 1 node in the first example above, using the option to watch the progression of events.

[bjosh@iceberg]$ qsub myjob && watch qstat -n
iceberg:
JobID       Username  Queue    Jobname  SessID  NDS  TSK  ReqdMemory  ReqdTime  S  ElapTime
15.iceberg  bjosh     default  myjob    --      1    --   --          00:01     Q  --

Table 3-1. Useful Job Status Commands

Command               Purpose
ps -ef | bpstat -P    Display all running jobs, with node number for each
qstat -Q              Display status of all queues
qstat -n              Display status of queued jobs
qstat -f JOBID        Display very detailed information about Job ID
pbsnodes -a           Display status of all nodes

Finding Out Which Nodes Are Running a Job

To find out which nodes are running your job, use the following commands:

• To find your Job IDs: qstat -an

• To find the Process IDs of your jobs: qstat -f <jobid>
• To find the number of the node running your job: ps -ef | bpstat -P | grep <yourname>

The number of the node running your job will be displayed in the first column of output.

Finding Job Output

When your job terminates, TORQUE will store its output and error streams in files in the script's working directory.

• Default output file: <jobname>.o<jobid>

You can override the default using qsub with the -o <path> option on the command line, or use the #PBS -o <path> directive in your job script.

• Default error file: <jobname>.e<jobid>

You can override the default using qsub with the -e <path> option on the command line, or use the #PBS -e <path> directive in your job script.

• To join the output and error streams into a single file, use qsub with the -j oe option on the command line, or use the #PBS -j oe directive in your job script.

Job Batching with POD Tools

POD Tools is a collection of tools for submitting TORQUE jobs to a remote cluster and for monitoring them. POD Tools is useful for, but not limited to, submitting and monitoring jobs on a remote Penguin On Demand cluster. POD Tools executes on both Scyld and non-Scyld client machines, and the Tools communicate with the beoweb service that must be executing on the target cluster.

The primary tool in POD Tools is POD Shell (podsh), which is a command-line interface that allows for remote job submission and monitoring. POD Shell is largely self-documented. Enter podsh --help for a list of possible commands and their formats. The general usage is:

podsh <action> [OPTIONS] [FILE/ID]

The action specifies what type of action to perform, such as submit (for submitting a new job) or status (for collecting status on all jobs or a specific job).

POD Shell can upload a TORQUE job script to the target cluster, where it will be added to the job queue. Additionally, POD Shell can be used to stage data in and out of the target cluster. Staging data in (i.e.
copying data to the cluster) is performed across an unencrypted TCP socket. Staging data out (i.e. from the cluster back to the client machine) is performed using scp from the cluster to the client. In order for this transfer to be successful, password-less authentication must be in place using SSH keys between the cluster's master node and the client.

POD Shell uses a configuration file that supports both site-wide and user-local values. Site-wide values are stored in entries in /etc/podtools.conf. These settings can be overridden by values in a user's ~/.podtools/podtools.conf file. These values can again be overridden by command-line arguments passed to podsh. The template for podtools.conf is found at /opt/scyld/podtools/podtools.conf.template.

File Systems

Data files used by the applications processed on the cluster may be stored in a variety of locations, including:

• On the local disk of each node

• On the master node's disk, shared with the nodes through a network file system

• On disks on multiple nodes, shared with all nodes through the use of a parallel file system

The simplest approach is to store all files on the master node, as with the standard Network File System. Any files in your /home directory are shared via NFS with all the nodes in your cluster. This makes management of the files very simple, but in larger clusters the performance of NFS on the master node can become a bottleneck for I/O-intensive applications. If you are planning a large cluster, you should include disk drives that are separate from the system disk to contain your shared files; for example, place /home on a separate pair of RAID1 disks in the master node. A more scalable solution is to utilize a dedicated NFS server with a properly configured storage system for all shared files and programs, or a high performance NAS appliance.
Storing files on the local disk of each node removes the performance problem, but makes it difficult to share data between tasks on different nodes. Input files for programs must be distributed manually to each of the nodes, and output files from the nodes must be manually collected back on the master node. This mode of operation can still be useful for temporary files created by a process and then later reused on that same node.

Sample Programs Included with Scyld ClusterWare

linpack

The Linpack benchmark suite, used to evaluate computer performance, stresses a cluster by solving a random dense linear system, maximizing your CPU and network usage. Administrators use Linpack to evaluate cluster fitness. For information on Linpack, see the Top 500 page.

The linpack shell script provided with Scyld ClusterWare is a portable, non-optimized version of the High Performance Linpack (HPL) benchmark. It is intended for verification purposes only, and the results should not be used for performance characterization. Running the linpack shell script starts xhpl after creating a configuration/input file. If linpack doesn't run to completion or takes too long to run, check for network problems, such as a bad switch or incorrect switch configuration.

Tip: The linpack default settings are too general to result in good performance on clusters larger than a few nodes; consult the file /usr/share/doc/hpl-1.0/TUNING for tuning tips appropriate to your cluster. A first step is to increase the problem size, set around line 15 to a default value of 3000. If this value is set too high, it will cause failure by memory starvation.

The following figure illustrates example output from linpack.

Figure 3-1. Testing Your Cluster with linpack

Appendix A. Glossary of Parallel Computing Terms

Bandwidth

A measure of the total amount of information delivered by a network.
This metric is typically expressed in millions of bits per second (Mbps) for the data rate on the physical communication media, or megabytes per second (MBps) for the performance seen by the application.

Backplane Bandwidth

The total amount of data that a switch can move through it in a given time; typically much higher than the bandwidth delivered to a single node.

Bisection Bandwidth

The amount of data that can be delivered from one half of a network to the other half in a given time, through the least favorable halving of the network fabric.

Boot Image

The file system and kernel seen by a compute node at boot time; contains enough drivers and information to get the system up and running on the network.

Cluster

A collection of nodes, usually dedicated to a single purpose.

Compute Node

Nodes attached to the master through an interconnection network, used as dedicated attached processors. With Scyld, users never need to directly log into compute nodes.

Data Parallel

A style of programming in which multiple copies of a single program run on each node, performing the same instructions while operating on different data.

Efficiency

The ratio of a program's actual speed-up to its theoretical maximum.

FLOPS

Floating-point operations per second, a key measure of performance for many scientific and numerical applications.

Grain Size, Granularity

A measure of the amount of computation a node can perform in a given problem between communications with other nodes, typically defined as "coarse" (large amount of computation) or "fine" (small amount of computation). Granularity is a key factor in determining the performance of a particular process on a particular cluster.

High Availability

Refers to a level of reliability; usually implies some level of fault tolerance (the ability to operate in the presence of a hardware failure).

Hub

A device for connecting the NICs in an interconnection network.
Only one pair of ports (a bus) can be active at any time. Modern interconnections utilize switches, not hubs.

Isoefficiency

The ability of a process to maintain a constant efficiency if the size of the process scales with the size of the machine.

Jobs

In traditional computing, a job is a single task. A parallel job can be a collection of tasks, all working on the same problem but running on different nodes.

Kernel

The core of the operating system, the kernel is responsible for processing all system calls and managing the system's physical resources.

Latency

The length of time from when a bit is sent across the network until the same bit is received. Can be measured for just the network hardware (wire latency) or application-to-application (which includes software overhead).

Local Area Network (LAN)

An interconnection scheme designed for short physical distances and high bandwidth, usually self-contained behind a single router.

MAC Address

On an Ethernet NIC, the hardware address of the card. MAC addresses are unique to the specific NIC, and are useful for identifying specific nodes.

Master Node

The node responsible for interacting with users, connected to both the public network and the interconnection network. The master node controls the compute nodes.

Message Passing

Exchanging information between processes, frequently on separate nodes.

Middleware

A layer of software between the user's application and the operating system.

MPI

The Message Passing Interface, the standard for producing message passing libraries.

MPICH

A commonly used MPI implementation, built on the chameleon communications layer.

Network Interface Card (NIC)

The device through which a node connects to the interconnection network. The performance of the NIC and the network it attaches to limit the amount of communication that can be done by a parallel program.
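The Efficiency entry above relates a program's measured speed-up to the processor count. A minimal sketch (the function names and timings are illustrative, not from the glossary):

```python
# Illustrative only: speedup and efficiency as defined in this glossary.
def speedup(t_serial, t_parallel):
    # Improvement in execution time on a parallel computer vs. a serial one.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    # Ratio of the actual speed-up to its theoretical maximum (n_procs).
    return speedup(t_serial, t_parallel) / n_procs

# A job taking 100 s serially and 25 s on 8 processors achieves a 4x
# speedup, so its efficiency is 4/8 = 0.5.
print(speedup(100.0, 25.0), efficiency(100.0, 25.0, 8))
```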
Node

A single computer system (motherboard, one or more processors, memory, possibly a disk, network interface).

Parallel Programming

The art of writing programs that are capable of being executed on many processors simultaneously.

Process

An instance of a running program.

Process Migration

Moving a process from one computer to another after the process begins execution.

PVM

The Parallel Virtual Machine, a common message passing library that predates MPI.

Scalability

The ability of a process to maintain efficiency as the number of processors in the parallel machine increases.

Single System Image

All nodes in the system see identical system files, including the same kernel, libraries, header files, etc. This guarantees that a program that will run on one node will run on all nodes.

Socket

A low-level construct for creating a connection between processes on a remote system.

Speedup

A measure of the improvement in the execution time of a program on a parallel computer vs. a serial computer.

Switch

A device for connecting the NICs in an interconnection network so that all pairs of ports can communicate simultaneously.

Version Skew

The problem of having more than one version of software or files (kernel, tools, shared libraries, header files) on different nodes.

Appendix B. TORQUE and Maui Release Information

The following is reproduced essentially verbatim from files contained within the TORQUE and Maui tarballs downloaded from Adaptive Computing.

TORQUE Release Notes

Software Version 4.2.9

=== What's New in TORQUE v4.2 ===

pbs_server is now multi-threaded so that it will respond to user commands much faster, and most importantly, one user command doesn't tie up the system and block other things.

TORQUE no longer uses rpp to communicate - tcp is used in all cases. rpp allows the network to drop its packets when it is at a high load. tcp does not allow this in the protocol, making it more reliable.
TORQUE now has the option of using a mom hierarchy to specify how the moms report to the server. If not specified, each mom will report directly to the server as in previous versions, but if specified, direct traffic on the server can be greatly reduced. Details on providing a mom hierarchy file are found in the TORQUE documentation, section 1.3.2 (1.0 Installation and configuration > 1.3 Advanced configuration > 1.3.2 Server configuration).

pbs_iff has been replaced with a daemon that needs to be running on any submit host (under the condition that you are using iff, not munge or unix sockets). To start the iff replacement, run trqauthd as root.

TORQUE now includes a job_radix option (-W job_radix=X) to specify how many moms each mom should communicate with for that job. Moms are then an n-branch tree communications-wise.

TORQUE README.array_changes

This file contains information concerning the use of the new job array features in TORQUE 2.5.

--- WARNING ---

TORQUE 2.5 uses a new format for job arrays. It is not backwards compatible with job arrays from version 2.3 or 2.4. Therefore, it is imperative that the system be drained of any job arrays BEFORE upgrading. Upgrading with job arrays queued or running may cause data loss, crashes, etc., and is not supported.

COMMAND UPDATES FOR ARRAYS
--------------------------

The commands qalter, qdel, qhold, and qrls now all support TORQUE arrays and will have to be updated. The general command syntax is:

command <array_name> [-t array_range] [other command options]

The array ranges accepted by -t here are exactly the same as the array ranges that can be specified in qsub.

SLOT LIMITS
--------------------------

It is now possible to limit the number of jobs that can run concurrently in a job array. This is called a slot limit, and the default is unlimited. The slot limit can be set in two ways.
The first method can be done at job submission:

qsub script.sh -t 0-299%5

This sets the slot limit to 5, meaning only 5 jobs from this array can be running at the same time.

The second method can be done on a server-wide basis using the server parameter max_slot_limit. Since administrators are in many cases more likely than users to be concerned with limiting arrays, the max_slot_limit parameter is a convenient way to set a global policy. If max_slot_limit is not set, then the default limit is unlimited. To set max_slot_limit you can use the following queue manager command:

qmgr -c 'set server max_slot_limit=10'

This means that no array can request a slot limit greater than 10, and any array not requesting a slot limit will receive a slot limit of 10. If a user requests a slot limit greater than 10, the job will be rejected with the message:

Requested slot limit is too large, limit is X.

In this case, X would be 10.

It is recommended that if you are using TORQUE with a scheduler like Moab or Maui, you also set the server parameter moab_array_compatible=true. Setting moab_array_compatible will put all jobs over the slot limit on hold so the scheduler will not try to schedule jobs above the slot limit.

JOB ARRAY DEPENDENCIES
--------------------------

The following dependencies can now be used for job arrays:

afterstartarray
afterokarray
afternotokarray
afteranyarray
beforestartarray
beforeokarray
beforenotokarray
beforeanyarray

The general syntax is:

qsub script.sh -W depend=dependtype:array_name[num_jobs]

The suffix [num_jobs] should appear exactly as above, although the number of jobs is optional. If it isn't specified, the dependency will assume that it is the entire array, for example:

qsub script.sh -W depend=afterokarray:427[]

will assume every job in array 427[] has to finish successfully for the dependency to be satisfied.
The submission:

qsub script.sh -W depend=afterokarray:427[][5]

means that 5 of the jobs in array 427 have to successfully finish in order for the dependency to be satisfied.

NOTE: It is important to remember that the "[]" is part of the array name.

QSTAT FOR JOB ARRAYS
--------------------------

Normal qstat output will display a summary of the array instead of displaying the entire array, job for job. qstat -t will expand the output to display the entire array.

ARRAY NAMING CONVENTION
--------------------------

Arrays are now named with brackets following the array name, for example:

dbeer@napali:~/dev/torque/array_changes$ echo sleep 20 | qsub -t 0-299
189[].napali

Individual jobs in the array are now also noted using square brackets instead of dashes. For example, here is part of the output of qstat -t for the above array:

189[287].napali   STDIN[287]   dbeer   0   Q   batch
189[288].napali   STDIN[288]   dbeer   0   Q   batch
189[289].napali   STDIN[289]   dbeer   0   Q   batch
189[290].napali   STDIN[290]   dbeer   0   Q   batch
189[291].napali   STDIN[291]   dbeer   0   Q   batch
189[292].napali   STDIN[292]   dbeer   0   Q   batch
189[293].napali   STDIN[293]   dbeer   0   Q   batch
189[294].napali   STDIN[294]   dbeer   0   Q   batch
189[295].napali   STDIN[295]   dbeer   0   Q   batch
189[296].napali   STDIN[296]   dbeer   0   Q   batch
189[297].napali   STDIN[297]   dbeer   0   Q   batch
189[298].napali   STDIN[298]   dbeer   0   Q   batch
189[299].napali   STDIN[299]   dbeer   0   Q   batch

TORQUE Change Log

c - crash
b - bug fix
e - enhancement
f - new feature
n - note

4.2.9

f - A new qmgr option: set server copy_on_rerun=[True|False] is available. When set to True, TORQUE will copy the OU, ER files over to the user-specified directory when the qrerun command is executed (i.e., a job preemption). This setting requires a pbs_server restart for the new value to take effect. Note that the MOMs and the pbs_server must be updated to this version before setting copy_on_rerun=True will behave as expected.
f - A new qmgr option: job_exclusive_on_use=[True|False] is available. When set to True, pbsnodes will report job-exclusive anytime 1 or more processors are in use. This resolves discrepancies between Moab and TORQUE node reports in cases where Moab is configured with a SINGLEJOB policy.
e - Improved performance by moving scan_for_terminated to its own thread.
e - Two new fields were added to the accounting file for completed jobs: total_execution_slots and unique_node_count. total_execution_slots should be 20 for a job that requests nodes=2:ppn=10. unique_node_count should be the number of unique hosts the job occupied.
b - TRQ-2410. Improved qstat behavior in cases where bad job IDs were referenced in the command.
b - TRQ-2632. pbsdsh required FQDN even if other elements didn't. pbsdsh no longer requires FQDN.
b - TRQ-2692. Corrected mismatched <Job_Id> XML tags in the job log.
b - TRQ-2367. Fixed bug related to some jobs missing accounting records on large systems.
b - TRQ-2732. Fixed bug where OU files were being left in spool when a job was preempted or requeued.
b - TRQ-2828. Fixed bug where 'momctl -q clearmsg' didn't properly clear error messages.
b - TRQ-2795. Fixed bug where jobs were rejected due to the max_user_queuable limit being reached, yet there were no jobs in the queue.
b - TRQ-2646. Fixed bug where qsub did not process args correctly when using a submit filter.
b - TRQ-2759. Fixed bug where reported cput was incorrect.
b - TRQ-2707. Fixed bug with submit filter arguments not being parsed during interactive jobs.
b - TRQ-2653. Fixed build bug related to newer Intel MIC libraries installing in different locations.
b - TRQ-2730. Make nvml and numa-support configurations work together. The admin must now specify which gpus are on which node board the same way it is done with mic co-processors, adding gpu=X[-Y] to the mom.layout line for that node board. A sample mom.layout file might look like:
nodes=0 gpu=0
nodes=1 gpu=1

This only works if you use nvml. The nvidia-smi command is not supported.
b - TRQ-2411. Fixed output format bug in cases where multiple job IDs are passed into qstat.
c - TRQ-2787. Crash on start up when reading an empty array file. Fixed a start-up bug related to empty job array (.AR) files.
n - Some limitations exist in the way that pbsdsh can be used. Please note the following situations are not currently supported:
* Running multiple instances of pbsdsh concurrently within a single job. (TRQ-2851)
* Using the -o and -s options concurrently; although requesting these options together is permitted, only the output from the first node is displayed rather than output from every node in the chain. (TRQ-2690)

4.2.8

b - TRQ-2501. Fix the total number of execution slots having a count that is off by one for every Cray compute node.
b - TRQ-2498. Fixed a memory leak when using qrun -a (asynchronous). Also fixed a write-after-free error that could lead to memory corruption.
b - Fixed the thread pool manager so it would free idle nodes. Also changed the default thread stack sizes to a maximum of 8 MB and a minimum of 1 MB.
b - TRQ-2647. Fixed a problem where the gpu status was displayed only at pbs_mom startup.

4.2.7

b - TRQ-2329. Fix a problem where nodes could be allocated to array subjobs even after the job was deleted.
b - TRQ-2351. Fix an issue where moms older than 4.2.6 can't run jobs if the server is 4.2.6.
e - Made it so trqauthd cannot be loaded more than once. trqauthd opens a UNIX domain name file to do its communication with client commands. If the UNIX domain name file exists, trqauthd will not load. By default this file is /tmp/trqauthd-unix. It can be configured to point to a different directory. If trqauthd will not start and you know there are no other instances of trqauthd running, you should delete the UNIX domain file and try again.
b - TRQ-2373.
Fix login nodes restricting the number of jobs to the number specified by np=X.
b - TRQ-2354. Fix an issue with potential overflow in user job counts. Also fix a user being considered different if from a different submit host.
b - TRQ-2369. Fix a problem with pbs_mom recovering which cpu indices were in use for jobs that were running at shutdown and still running at the time the mom restarted.
b - TRQ-2377. Jobs with future start dates were being placed in queued after being deleted, if they were deleted before their start date and keep_completed kept them around long enough. Fix this.
c - TRQ-2347. Fix a segfault around re-sending batch requests.
b - TRQ-2270. Fix some problems with TORQUE continuing to have nodes in a free state when the host is down.
b - TRQ-2395. Fix a problem when running jobs on non-Cray nodes reporting to a pbs_server running in Cray-enabled mode.
n - TRQ-2299. Make it so that the reporter mom doesn't fork to send its update.

4.2.6

b - TRQ-2273. The prolog timeout was hard coded to 5 minutes. If the prolog takes longer than that to run, the job will be requeued without killing the prolog. This change now sets the prolog timeout to less than 5 minutes.
b - TRQ-2111. Fix a rare case of running jobs being deleted without having their
Make it so that the proper errno is stored for non-blocking sockets at connect time. b - TRQ-2111. Make queued jobs never hold node resources. c - TRQ-2155. Fix a crash in trqauthd. e - TRQ-2058. Add the option of having the pbs_mom daemon read the mom hierarchy file instead of having to get it from pbs_server. To do this, copy the hierarchy to mom_priv/mom_hierarchy. e - TRQ-2058. Add the -n option to pbs_server, telling pbs_server not to send a hierarchy over the network unless it is requested by pbs_mom. e - TRQ-2020. Add the option of setting properties (features) for cray compute nodes in the nodes file. Syntax: node_id cray_compute property_name. 4.2.4 b - TRQ-1802. Make the environment variable $PBS_NUM_NODES accurate for multi-req jobs. e - TRQ-1832. Add the ability to add a login_property to a job at the queue level by setting required_login_property on the queue. e - TRQ-1925. Make pbs_mom smart enough to reserved extra memory nodes for non-numa configured TORQUE when more memory is requested than reserved. e - TRQ-1923. Make job aborts for a mother superior not recognizing the job a bit more intelligent - if the job has been reported in the last 180 seconds in the mom’s status update don’t abort it. b - TRQ-1934. Ask for canonical hostnames on the default address family without specifying for uniformity in the code. b - TRQ-2003. For cray fix a miscalculation of nppn and width when mppdepth is provided for the job. e - TRQ-1833. Optimize starting jobs by not internally tracking the jobid for each execution slot used by the job. Reduce string buildup and manipulation in other internal places as well. Job start for large jobs has been optimized to be up to 150X faster according to internal testing. b - TRQ-2030. Fix an ALPS 1.2 bug with labels on nodes. In 1.2 labels would be repeated like this: labelnamelabelname... Cray only. b - TRQ-1914. Fix after type dependencies not being removed from arrays. b - TRQ-2015. 
Fix a problem where pbs_mom processes get stuck in a defunct state when doing a qrerun on a job. qrerun is not required to make this happen; just the action of requeueing a running job on the mom causes this to happen.

4.2.3

b - TRQ-1653. Arrays depending on non-array jobs was broken. Fix this.
b - Add retries on transient failures to setuid and seteuid calls. TRQ-1541.
e - Add support for qstat -f -u <user>. This results in qstat -f output for only the specified user.
e - TRQ-1798. Make pbs_server calculate mppmaxnodect more accurately for Cray.
e - Add a timeout for mother superior when cleaning up a job. Instead of waiting infinitely for sisters to confirm that a job has exited, consider the job dead after 10 minutes. This time can be adjusted by setting $job_exit_wait_time in the mom's config file (time in seconds). This prevents jobs from being stuck infinitely if a compute node crashes or if a mom daemon becomes unresponsive. TRQ-1776.
e - Add the parameter default_features to queues. TRQ-1794. The other way of adding a feature to all jobs in a queue (setting resources_default.neednodes) is circumvented if a user requests a feature in the nodes request. Setting default_features overcomes this issue.
b - If privileged ports are disabled, make pbs_moms not check if incoming connections from mother superior are on privileged ports. TRQ-1669.
e - Add two mom config parameters: max_join_job_wait_time and resend_join_job_wait_time. The first specifies how long pbs_mom should wait before deciding that join jobs will never be received, and defaults to 10 minutes. The latter specifies how long pbs_mom should wait before attempting to resend join jobs to moms that it hasn't received replies from, and this defaults to 5 minutes. Both are specified in seconds. Prior to this functionality, mother superior would wait indefinitely for the join job replies.
Please carefully consider what these values should be for your site and set them appropriately. TRQ-1790.
e - If an error happens communicating with one MIC, attempt to communicate with the others instead of failing the entire routine.
e - Reintroduced the procct resource for queues, which allows jobs to be managed based on the number of procs requested. TRQ-1623.
b - TRQ-1709. Fix -l gpus=X,other_things being parsed incorrectly.
b - TRQ-1639. Gpu status information wasn't being displayed correctly.
b - TRQ-1826. mppdepth is now passed correctly to the ALPS reservation.

4.2.2

b - Make job_starter work for parallel jobs as well as serial. (TRQ-1577 - thanks to NERSC for the patch)
b - Fix one issue with being able to submit jobs to the Cray while offline. TRQ-1595.
e - Make the abort and email messages for jobs more specific when they are killed for going over a limit. TRQ-1076.
e - Add mom parameter mom_oom_immunize, making the mom immune to being killed in out-of-memory conditions. Default is now true. (thanks to Lukasz Flis for this work)
b - Don't count completed jobs against max_user_queuable. TRQ-1420.
e - For MICs, set the variable $OFFLOAD_DEVICES with a list of MICs to use for the job.
b - Make pbs_track compatible with display_job_server_suffix = false. The user has to set NO_SERVER_SUFFIX in the environment. TRQ-1389.
b - Fix the way we monitor if a thread is active. Before, we used the id, but if the thread has exited, the id is no longer valid and this will cause a crash. Use pthread_cleanup functionality instead. TRQ-1745.
b - TRQ-1751. Add some code to handle a corrupted job file where the job file says it is running but there is no exec host list. These jobs now will receive a system hold.
b - Fixed problem where max_queuable and max_user_queuable would fail incorrectly. TRQ-1494.
b - Cray: nppn wasn't being specified in reservations. Fix this. TRQ-1660.
4.2.1

b - Fix a deadlock when submitting two large arrays consecutively, the second depending on the first. TRQ-1646 (reported by Jorg Blank).

4.2.0

f - Support the MIC architecture. This was co-developed with Doug Johnson at Ohio Supercomputer Center (OSC) and provides support for the Intel MIC architecture similar to GPU support in TORQUE.
b - Fix a queue deadlock. TRQ-1435.
b - Fix an issue with multi-node jobs not reporting resources completely. TRQ-1222.
b - Make the API not retry for 5 consecutive timeouts. TRQ-1425.
b - Fix a deadlock when no files can be copied from compute nodes to pbs_server. TRQ-1447.
b - Don't strip quotes from values in scripts before specific processing. TRQ-1632.

4.1.6

b - Make job_starter work for parallel jobs as well as serial. (TRQ-1577 - thanks to NERSC for the patch, backported from 4.2.2)

4.1.5

b - For cray: make sure that reservations are released when jobs are requeued. TRQ-1572.
b - For cray: support the mppdepth directive. Bugzilla #225.
c - If the job is no longer valid after attempting to lock the array in get_jobs_array(), make sure the array is valid before attempting to unlock it. TRQ-1598.
e - For cray: make it so you can continue to submit jobs to pbs_server even if you have restarted it while the cray is offline. TRQ-1595.
b - Don't log an invalid connection message when close_conn() is called on 65535 (PBS_LOCAL_CONNECTION). TRQ-1557.

4.1.4

e - When in cray mode, write physmem and availmem in addition to totmem so that Moab correctly reads memory info.
e - Specifying size, nodes, and mppwidth are all mutually exclusive, so reject job submissions that attempt to specify more than one of these. TRQ-1185.
b - Merged changes for revision 7000 by hand because the merge was not clean. Fixes problems with a deadlock when doing job dependencies using synccount/syncwith. TRQ-1374.
b - Fix a segfault in req_jobobit due to an off-by-one error. TRQ-1361.
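The qstat -f -u <user> support added in the 4.2.3 entries above combines the full-display and user-filter options that previously had to be used separately. A usage sketch, with the user name illustrative:

```shell
# Full job status (-f) restricted to jobs owned by user "alice":
qstat -f -u alice

# Before 4.2.3, -u only filtered the one-line summary display:
qstat -u alice
```

Both invocations require a reachable pbs_server, so they are shown here only as an illustration.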
e - Add the svn revision to --version outputs. TRQ-1357.
b - Fix a race condition in mom hierarchy reporting. TRQ-1378.
b - Fixed pbs_mom so epilogue will only run once. TRQ-1134.
b - Fix some debug output escaping into job output. TRQ-1360.
b - Fixed a problem where server threads all get stuck in a poll. The problem was an infinite loop created in socket_wait_for_read if poll returns -1. TRQ-1382.
b - Fix a Cray-mode bug with jobs ending immediately when spanning nodes of different proc counts when specifying -l procs. TRQ-1365.
b - Don't fail to make the tmpdir for sister moms. bugzilla #220, TRQ-1403.
c - Fix crashes due to unprotected array accesses. TRQ-1395.
b - Fixed a deadlock in get_parent_dest_queues when the queue_parent_name and queue_dest_name are the same. TRQ-1413.

11/7/12

b - Fixed segfault in req_movejob where the job ji_qhdr was NULL. TRQ-1416.
b - Fix a conflict in the code for heterogeneous jobs and regular jobs.
b - For alps jobs, use the login nodes evenly even when one goes down. TRQ-1317.
b - Display the correct 'Assigned Cpu Count' in momctl output. TRQ-1307.
b - Make pbs_original_connect() no longer hang if the host is down. TRQ-1388.
b - Make epilogues run only once and be executed by the child and not the main pbs_mom process. TRQ-937.
b - Reduce the error messages in HA mode from moms. They now only log errors if no server could be contacted. TRQ-1385.
b - Fixed a seg-fault in send_depend_req. Also fixed a deadlock in depend_on_term. TRQ-1430 and TRQ-1436.
b - Fixed a null pointer dereference seg-fault when checking for disallowed types. TRQ-1408.
b - Fixed a problem where qsub was not applying the submit filter when given in the torque.cfg file. TRQ-1446.
e - When the mom has no jobs, check the aux path to make sure it is clean and that we aren't leaving any files there. TRQ-1240.
b - Made it so that threads taken up by poll job tasks cannot consume all available threads in the thread pool. This makes it so other work can continue if poll jobs get stuck for whatever reason, and the server will recover. TRQ-1433.
b - Fix a deadlock when recording alps reservations. TRQ-1421.
b - Fixed a segfault in req_jobobit caused by NULL pointer assignment to variable pa. TRQ-1467.
b - Fixed deadlock in remove_array. remove_array was calling get_array with allarrays_mutex locked. TRQ-1466.
b - Fixed a problem with an end of file error when running momctl -dx. TRQ-1432.
b - Fix a deadlock in rare cases on job insertion. TRQ-1472.
b - Fix a deadlock after restarting pbs_server when it was SIGKILL'd before a job array was done cloning. TRQ-1474.
b - Fix a Cray-related deadlock. Always lock the reporter mom before a compute node. TRQ-1445.
b - Additional fix for TRQ-1472. In rm_request on the mom, pbs_tcp_timeout was getting set to 0, which made it so the mom would fail reading incoming data if it had not already arrived. This would cause momctl to fail with an end of file message.
e - Add a safety net to resend any obits for exiting jobs on the mom that still haven't cleaned up after five minutes. TRQ-1458.
b - Fix cray running jobs being cancelled after a restart due to jobs not being set to the login nodes. TRQ-1482.
b - Fix a bug where using -V got rid of -v. TRQ-1457.
b - Make qsub -I -x work again. TRQ-1483.
c - Fix a potential crash when getting the status of a login node in cray mode. TRQ-1491.

4.1.3

b - fix a security loophole that potentially allowed an interactive job to run as root due to not resetting a value when $attempt_to_make_dir and $tmpdir are set. TRQ-1078.
b - fix down_on_error for the server. TRQ-1074.
b - prevent pbs_server from spinning in select due to sockets in CLOSE_WAIT. TRQ-1161.
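The $attempt_to_make_dir and $tmpdir parameters named in the 4.1.3 security fix above are pbs_mom config options. A sketch of how they are typically set together, with the path illustrative:

```shell
# mom_priv/config (path illustrative)
$attempt_to_make_dir true    # mom creates missing directories for job output paths
$tmpdir /scratch/tmp         # root under which per-job temporary directories are made
```

This is a configuration fragment; it is read by pbs_mom at startup or on SIGHUP.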
e - Have pbs_server save the queues each time before exiting so that legacy formats are converted to xml after upgrading. TRQ-1120.
b - Fix phantom jobs being left on the pbs_moms and blocking jobs for Cray hardware. TRQ-1162. (Thanks Matt Ezell)
b - Fix a race condition on free'd memory when checking for orphaned alps reservations. TRQ-1181. (Thanks Matt Ezell)
b - If interrupted when reading the terminal type for an interactive job, continue trying to read instead of giving up. TRQ-1091.
b - Fix displaying elapsed time for a job. TRQ-1133.
b - Make offlining nodes persistent after shutting down. TRQ-1087.
b - Fixed a memory leak when calling net_move. net_move allocates memory for args and starts a thread on send_job; however, args were not getting released in send_job. TRQ-1199.
b - Changed pbs_connect to check for a server name. If one is passed in, only that server name is tried for a connection. If no server name is given, then the default list is used. The previous behavior was to try the name passed in and the default server list. This would lead to confusion in utilities like qstat when querying for a specific server: if the specified server was not available, information from the remaining list would still be returned. TRQ-1143.
e - Make issue_Drequest wait for the reply and have functions continue processing immediately after, instead of the added overhead of using the threadpool.
c - tm_adopt() calls caused pbs_mom to crash. Fix this. TRQ-1210.
b - Array element 0 wasn't showing up in qstat -t output. TRQ-1155.
b - Cores with multiple processing units were being incorrectly assigned in cpusets. Additionally, multi-node jobs were getting the cpu list from each node in each cpuset, also causing problems. TRQ-1202.
b - Finding subjobs (for heterogeneous jobs) wasn't compatible with hostnames that have dashes. TRQ-1229.
b - Removed the call to wait_request from the main_loop of pbs_server.
All of our communication is handled directly and there is no longer a need to wait for an out-of-band reply from a client. TRQ-1161.
e - Modified output for qstat -r. Expanded Req'd Time to include seconds and centered Elap Time over its column.
b - Fixed a bug found at Univ. of Michigan where a corrupt .JB file would cause pbs_server to seg-fault and restart.
b - Don't leave quotes on any arguments passed to the resource list. TRQ-1209.
b - Fix a race condition that causes deadlock when two threads are routing the same job.
b - Fixed a bug with qsub where environment variables were not getting populated with the -v option. TRQ-1228.
b - This time for sure. TRQ-1228. When max_queuable or max_user_queuable were set it was still possible to go over the limit. This was because a job is qualified in the call to req_quejob but does not get inserted into the queue until svr_enquejob is called in req_commit, four network requests later. In a multi-threaded environment this allowed several jobs to be qualified and put in the pipeline before they were actually committed to a queue.
b - If max_user_queuable or max_queuable were set on a queue, TORQUE would not honor the limit when filling those queues from a routing queue. This has now been fixed. TRQ-1088.
b - Fixed seg-fault when running jobs asynchronously. TRQ-1252.
b - Job dependencies didn't work with display_server_suffix=false. Fixed. TRQ-1255.
b - Don't report alps reservation ids if a node is in interactive mode. TRQ-1251.
b - Only attempt to cancel an orphaned alps reservation a maximum of one time per iteration. TRQ-1251.
b - Fixed a bug with SIGHUP to pbs_server. The signal handler (change_logs()) does file I/O, which is not allowed for signal interruption. This caused pbs_server to be up but unresponsive to any commands. TRQ-1250 and TRQ-1224.
b - Fix a deadlock when recording an alps reservation on the server side. Cray only. TRQ-1272.
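max_queuable and max_user_queuable, whose enforcement the fixes above tighten, are queue attributes set through qmgr. A sketch, with the queue name and limits illustrative:

```shell
# Cap on jobs the hypothetical "batch" queue will hold in total,
# and a per-user cap within that queue:
qmgr -c 'set queue batch max_queuable = 1000'
qmgr -c 'set queue batch max_user_queuable = 100'
```

These commands must run against a live pbs_server with manager privileges; they are shown only as an illustration of the attribute syntax.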
c - Fix mismanagement of the ji_globid. TRQ-1262.
c - Setting display_job_server_suffix=false crashed with job arrays. Fixed. bugzilla #216.
b - Restore the asynchronous functionality. TRQ-1284.
e - Made it so pbs_server will come up even if a job cannot recover because of a missing job dependency. TRQ-1287.
b - Fixed a segfault in the path from do_tcp to tm_request to tm_eof. In this path we freed the tcp channel three times. The call to DIS_tcp_cleanup was removed from tm_eof and tm_request. TRQ-1232.
b - Fixed a deadlock which occurs when there is a job with a dependency that is being moved from a routing queue to an execution queue. TRQ-1294.
b - Fix a deadlock in logging when the machine is out of disk space. TRQ-1302.
e - Retry cleanup with the mom every 20 seconds for jobs that are stuck in an exiting state. TRQ-1299.
b - Enabled qsub filters to be accessed from a non-default location. TRQ-1127.
b - Added the ability to write the resources_used data to the accounting logs. This was in 4.1.1 and 4.1.2 but failed to make it into 4.1.3. TRQ-1329.
c - Fix a double free if the same chan is stored on two tasks for a job. TRQ-1299.
b - Changed pbs_original_connect to retry a failed connect attempt MAX_RETRIES (5) times before returning failure. This will reduce the number of client commands that fail due to a connection failure. TRQ-1355.
b - Fix the proliferation of "Non-digit found where a digit was expected" messages, due to an off-by-one error. TRQ-1230.
b - Fixed a deadlock caused by a queue not getting released when jobs are aborted while moving jobs from a routing queue to an execution queue. TRQ-1344.

4.1.2

e - Add the ability to run a single job partially on CRAY hardware and partially on hardware external to the CRAY in order to allow visualization of large simulations.

4.1.1

e - pbs_server will now detect and release orphaned ALPS reservations.
b - Fixed a deadlock with nodes in stream_eof after call to svr_connect.
b - resources_used information now appears in the accounting log again. TRQ-1083 and bugzilla 198.
b - Fixed a seg-fault found at LBNL where freeaddrinfo would crash because of uninitialized memory.
b - Fixed a deadlock in handle_complete_second_time. We were not unlocking when exiting svr_job_purge.
e - Added the wrappers lock_ji_mutex and unlock_ji_mutex to do the mutex locking for all job->ji_mutex locks.
e - Admins can now set the global max_user_queuable limit using qmgr. TRQ-978.
b - No longer make multiple alps reservation parameters for each alps reservation. This creates problems for the aprun -B command.
b - Fix a problem running extremely large jobs with alps 1.1 and 1.2. Reservations weren't correctly created in the past. TRQ-1092.
b - Fixed a deadlock with a queue mutex caused by calling qstat -a <queue1> <queue2>.
b - Fixed a memory corruption bug, double free in check_if_orphaned. To fix this, issue_Drequest was modified to always free the batch request regardless of any errors.
b - Fix a potential segfault when using munge but not having set authorized users. TRQ-1102.
b - Added a modified version of a patch submitted by Matt Ezell for Bugzilla 207. This fixes a seg-fault in qsub if Moab passes an environment variable without a value.
b - fix an error in parsing environment variables with commas, newlines, etc. TRQ-1113.
b - fixed a deadlock with array jobs running simultaneously with qstat.
b - Fixed qsub -v option. Variable list was not getting passed in to job environment. TRQ-1128.
b - TRQ-1116. Mail is now sent on job start again.
b - TRQ-1118. Cray jobs are now recovered correctly after a restart.
b - TRQ-1109. Fixed x11 forwarding for interactive jobs (qsub -I -X). Previous to this fix, interactive jobs would not run any x applications such as xterm, xclock, etc.
b - TRQ-1161. Fixes a problem where TORQUE gets into a high CPU utilization condition.
The problem was that in the function process_pbs_server_port no error was returned if the call to getpeername() failed in the default case.
b - TRQ-1161. This fixes another case that would cause a thread to spin on poll in start_process_pbs_server_port. A call to the dis function would return an error, but the code would close the connection and return the error code, which was a value less than 20. start_process_pbs_server_port did not recognize the low error code value and would keep calling into process_pbs_server_port.
b - qdel'ing a running job in the cray environment was trying to communicate with the cray compute node instead of the login node. This is now fixed. TRQ-1184.
b - TRQ-1161. Fixed a problem in stream_eof where svr_connect was used to connect to a MOM to see if it was still there. On successful connection the connection is closed, but the wrong function (close_conn) with the wrong argument (the handle returned by svr_connect()) was used. Replaced with svr_disconnect.
b - Make it so that procct is never shown to Moab or users. TRQ-872.
b - TRQ-1182. Fixed a problem where jobs with dependencies were deleted on the restart of pbs_server.
b - TRQ-1199. Fixed memory leaks found by Valgrind. Fixed a leak when routing jobs to a remote server, a memory leak with procct, a memory leak creating queues, a memory leak with mom_server_valid_message_source, and a memory leak in req_track.

4.1.0

e - make free_nodes() only look at nodes in the exec_host list and not examine all nodes to check if the job at hand was there. This should greatly speed up freeing nodes.
f - add the server parameter interactive_jobs_can_roam (Cray only). When set to true, interactive jobs can have any login as mother superior, but by default all interactive jobs will have their submit_host as mother superior.
b - Fixed TRQ-696. Jobs get stuck in running state.
b - Fixed a problem where interactive jobs using X-forwarding would fail because TORQUE thought DISPLAY was not set.
The problem was that DISPLAY was set using lowercase internally. TRQ-1010.

4.0.3

b - fix qdel -p all - was performing a qdel all. TRQ-947
b - fix some memory leaks in 4.0.2 on the mom and server TRQ-944
c - TRQ-973. Fix a possibility of a segfault in netcounter_incr()
b - removed memory manager from alloc_br and free_br to solve a memory leak
b - fixes to communications between pbs_sched and pbs_server. TRQ-884
b - fix server crash caused by gpu mode not being right after gpus=x:. TRQ-948.
b - fix logic in torque.setup so it does not say successfully started when trqauthd failed to start. TRQ-938.
b - fix segfaults on job deletes, dependencies, and cases where a batch request is held in multiple places. TRQ-933, 988, 990
e - TRQ-961/bugzilla-176 - add the configure option --with-hwloc-path=PATH to allow installing hwloc to a non-default location.
c - fix a crash when using job dependencies that fail - TRQ-990
e - Cache addresses and names to prevent calling getnameinfo() and getaddrinfo() too often. TRQ-993
c - fix a crash around re-running jobs
e - change so some Moab environment variables will be put into environment for the prologue and epilogue scripts. TRQ-967.
b - make command line arguments override the job script arguments. TRQ-1033.
b - fix a pbs_mom crash when using blcr. TRQ-1020.
e - Added patch to buildutils/pbs_mkdirs.in which enables pbs_mkdirs to run silently. Patch submitted by Bas van der Vlies. Bugzilla 199.

4.0.2

e - Change so init.d script variables get set based on the configure command. TRQ-789, TRQ-792.
b - Fix so qrun jobid[] does not cause pbs_server segfault. TRQ-865.
b - Fix to validate qsub -l nodes=x against resources_max.nodes the same as v2.4. TRQ-897.
b - bugzilla #185. Empty arrays should no longer be loaded, and now when qdel'ed they will be deleted.
b - bugzilla #182. The serverdb will now correctly write out memory allocated.
b - bugzilla #188.
The deadlock when using job logging is resolved.
b - bugzilla #184. pbs_server will no longer log an erroneous error when the 12th job array is submitted.
e - Allow pbs_mom to change the user's group on stderr/stdout files. Enabled by configuring TORQUE with CFLAGS='-DRESETGROUP'. TRQ-908.
e - Have the parent intermediate mom process wait for the child to open the demux before moving on, for more precise synchronization for radix jobs.
e - Changed the way jobs queued in a routing queue are updated. A thread is now launched at startup and by default checks every 10 seconds to see if there are jobs in the routing queues that can be promoted to execution queues.
b - Fix so pbs_mom will compile when configured with --with-nvml-lib=/usr/lib and --with-nvml-include. TRQ-926.
b - fix pbs_track to add its process to the cpuset as well. TRQ-925.
b - Fix so gpu count gets written out to server nodes file when using --enable-nvidia-gpus. TRQ-927.
b - change pbs_server to listen on all interfaces. TRQ-923.
b - Fix so "pbs_server --ha" does not fail when checking path for server.lock file. TRQ-907.
b - Fixed a problem in qmgr where only 9 commands could be completed before a failure. Bugzilla 192 and TRQ-931.
b - Fix to prevent deadlock on server restart with completed job that had a dependency. TRQ-936.
b - prevent TORQUE from losing connectivity with Moab when starting jobs asynchronously. TRQ-918.
b - prevent the API from segfaulting when passed a negative socket descriptor.
b - don't allow pbs_tcp_timeout to ever be less than 5 minutes - may be temporary.
b - fix pbs_server so it fails if another instance of pbs_server is already running on same port. TRQ-914.

4.0.1

b - Fix trqauthd init scripts to use correct path to trqauthd.
b - fix so multiple stage in/out files can again be used with qsub -W.
b - fix so comma separated file list can be used with qsub -W stagein/stageout. Matches qsub documentation again.
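The --with-hwloc-path=PATH option added in 4.0.3 above points configure at an hwloc installed outside the default search path. A build sketch, with the install prefix illustrative:

```shell
# hwloc installed under /opt/hwloc rather than a system prefix:
./configure --with-hwloc-path=/opt/hwloc
make
make install
```

This assumes hwloc's headers and libraries live under the given prefix in the usual include/ and lib/ layout.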
b - Only seed the random number generator once.
b - The code to run the epilogue set of scripts was removed when refactoring the obit code. The epilogues are now run as part of post_epilogue. preobit_reply is no longer used.
b - if using a default hierarchy and moms on non-default ports, pass that information along in the hierarchy.
e - Make pbs_server contact pbs_moms in the order in which they appear in the hierarchy in order to reduce errors on start-up of a large cluster.
b - fix another possibility for deadlock with routing queues.
e - move some of the main loop functionality to the threadpool in order to increase responsiveness.
e - Enabled the configuration to be able to write the path of the library directory to /etc/ld.so.conf.d in a file named libtorque.conf. The file will be created by default during make install. The configuration can be made to not install this file by using the configure option --without-loadlibfile.
b - Fixed a bug where Moab was using the option SYNCJOBID=TRUE, which allows Moab to create the job ids in TORQUE. With this in place, if TORQUE were terminated it would delete all jobs submitted through msub when pbs_server was restarted. This fix recovers all jobs, whether submitted with msub or qsub, when pbs_server restarts.
b - fix for where pbsnodes displays outdated gpu_status information.
b - fix problem with '+' and segfault when using multiple node gpu requests.
b - Fixed a bug in svr_connect. If the value for func were null then the newly created connection was not added to the svr_conn table. This was not right. We now always add the new connection to svr_conn.
b - fix problem with mom segfault when using 8 or more gpus on mom node.
b - Fix so child pbs_mom does not remain running after qdel on slow starting job. TRQ-860.
b - Made it so the MOM will let pbs_server know it is down after momctl -s is invoked.
e - Made it so localhost is no longer hard coded. The string comes from getnameinfo.
b - fix a mom hierarchy error for running the moms on non-default ports.
b - Fix server segfault for where mom in nodes file is not in mom_hierarchy. TRQ-873.
b - Fix so pbs_mom won't segfault after a qdel is done for a job that is still running the prologue. TRQ-832.
b - Fix for segfault when using routing queues in pbs_server. TRQ-808.
b - Fix so epilogue.precancel runs only once and only for canceled jobs. TRQ-831.
b - Added a close socket to validate_socket to properly terminate the connection. Moved the free of the incoming variable sock in process_svr_conn from the beginning of the function to the end. This fixed a problem where the client would always get a RST when trying to close its end of the connection.
b - routing to a routing queue now works again. TRQ-905, bugzilla 186.
b - Fix server segfaults that happened doing qhold for blcr job. TRQ-900.
n - TORQUE 4.0.1 released 5/3/2012.
b - Made changes to IM protocol where commands were either not waiting for a reply or not sending a reply. Also made changes to close connections that were left open.
b - Fix for where qmgr record_job_info is True and server hangs on startup.

3.0.5

b - fix for writing too much data when job_script is saved to job log.
b - fix for where pbs_mom would not automatically set gpu mode.
b - fix for aligning qstat -r output when configured with -DTXT.
e - Change size of transfer block used on job rerun from 4k to 64k.
e - change to allow pbs_mom to run if configured with --enable-nvidia-gpus but installed on a node without Nvidia gpus.

3.0.4

c - fix a buffer being overrun with nvidia gpus enabled.
b - no longer leave zombie processes when munge authenticating.
b - no longer reject procs if it is the second argument to -l.
b - when having pbs_mom re-read the config file, old servers were kept, and pbs_mom
3.0.3

b - fix for bugzilla #141 - qsub was overwriting the path variable in PBSD_authenticate.
e - automatically create and mount /dev/cpuset when TORQUE is configured but the cpuset directory isn't there.
b - fix a bug where node lines past 256 characters were rejected. This buffer has been made much larger (8192 characters).
b - clear out exec_gpus as needed.
b - fix for bugzilla #147 - recreate $PBS_NODESFILE file when restarting a blcr checkpointed job.
b - Applied patch submitted by Eric Roman for resmom/Makefile.am (Bugzilla #147).
b - Fix for adding -lcr for BLCR makefiles (Bugzilla #146).
c - fix a potential segfault when using asynchronous runjob with an array slot limit.
b - fix bugzilla #135, stagein was deleting directory instead of file.
b - fix bugzilla #133, qsub submit filter, the -W arguments are not all there.
e - add a mom config option - $attempt_to_make_dir - to give the user the option to have TORQUE attempt to create the directories for their output file if they don't exist.
b - Fixed momctl to return an error on failure. Prior to this fix momctl always returned 0 regardless of success or failure.
e - Change to allow qsub -l ncpus=x:gpus=x, which adds a resource list entry for both.
b - fix so user epilogues are run as user instead of root.
b - No longer report a completion code if a job is pre-empted using qrerun.
c - Fix a crash in record_jobinfo() - this is fixed by backporting dynamic strings from 4.0.0 so that all of the resizing is done in a central location, fixing the crash.
b - No longer count down walltime for jobs that are suspended or have stopped running for any other reasons.
e - add a mom config option - $ext_pwd_retry - to specify # of retries on checking for password validity.

3.0.2

c - check if the file pointer to /dev/console can be opened.
If not, don't attempt to write it.
b - fix a potential buffer overflow security issue in job names and host address names.
b - restore += functionality for nodes when using qmgr. It was overwriting old properties.
b - fix bugzilla #134, qmgr -= was deleting all entries.
e - added the ability in qsub to submit jobs requesting total gpus for the job instead of gpus per node: -l ncpus=X,gpus=Y.
b - do not prepend ${HOME} with the current dir for -o and -e in qsub.
e - allow an administrator using the proxy user submission to also set the job id to be used in TORQUE. This makes TORQUE easier to use in grid configurations.
b - fix jobs named with -J not always having the server name appended correctly.
b - make it so that jobs named like arrays via -J have legal output and error file names.
b - make a fix for ATTR_node_exclusive - qsub wasn't accepting -n as a valid argument.

3.0.1

e - updated qsub's man page to include ATTR_node_exclusive.
b - when updating the nodes file, write out the ports for the mom if needed.
b - fix a bug for non-NUMA systems that was continuously increasing memory values.
e - the queue files are now stored as XML, just like the serverdb.
e - Added code from 2.5-fixes which will try and find nodes that did not resolve when pbs_server started up. This is in reference to Bugzilla bug 110.
e - make gpus compatible with NUMA systems, and add the node attribute numa_gpu_node_str for an additional way to specify gpus on node boards.
e - Add code to verify the group list as well when VALIDATEGROUPS is set in torque.cfg.
b - Fix a bug where if geometry requests are enabled and cpusets are enabled, the cpuset wasn't deleted unless a geometry request was made.
b - Fix a race condition for pbs_mom -q: exitstatus was getting overwritten, and as a result jobs weren't always re-queued by pbs_server but were being deleted instead.
e - Add a configure option --with-tcp-retry-limit to prevent potential 4+ hour hangs on pbs_server.
We recommend --with-tcp-retry-limit=2.
n - Changing the way to set ATTR_node_exclusive from -E to -n, in order to continue compatibility with Moab.
b - preserve the order on array strings in TORQUE, like the route_destinations for a routing queue.
b - fix bugzilla #111, multi-line environment variables causing errors in TORQUE.
b - allow apostrophes in Mail_Users attributes, as apostrophes are rare but legal email characters.
b - restored functionality for -W umask as reported in bugzilla 115.
b - Updated torque.spec.in to be able to handle the snapshot names of builds.
b - fix pbs_mom -q to work with parallel jobs.
b - Added code to free the mom.lock file during MOM shutdown.
e - Added new MOM configure option job_starter. This option will execute the script submitted in qsub with the executable or script provided.
b - fixed a bug in set_resources that prevented the last resource in a list from being checked. As a result the last item in the list would always be added without regard to previous entries.
e - altered the prologue/epilogue code to allow root squashing.
f - added the mom config parameter $reduce_prolog_checks. This makes it so TORQUE only checks to verify that the file is a regular file and is executable.
e - allow more than 5 concurrent connections to TORQUE using pbsD_connect. Increase it to 10.
b - fix a segfault when receiving an obit for a job that no longer exists.
e - Added options to conditionally build munge, BLCR, high-availability, cpusets, and spooling. Also allows customization of the sendmail path and allows for optional XML conversion to serverdb.
b - also remove the procct resource when it is applied because of a default.
c - fix a segfault when queue has acl_group_enable and acl_group_sloppy set true and no acl_groups are defined.

3.0.0

e - serverdb is now stored as xml; this is no longer configurable.
f - added --enable-numa-support for supporting NUMA-type architectures. We have tested this build on UV and Altix machines.
The server treats the mom as a node with several special numa nodes embedded, and the pbs_mom reports on these numa nodes instead of itself as a whole.
f - for numa configurations, pbs_mom creates cpusets for memory as well as cpus.
e - adapted the task manager interface to interact properly with NUMA systems, including tm_adopt.
e - Added autogen.sh to make life easier in a Makefile.in-less world.
e - Modified buildutils/pbs_mkdirs.in to create server_priv/nodes file at install time. The file only shows examples and a link to the TORQUE documentation.
f - added ATTR_node_exclusive to allow a job to have a node exclusively.
f - added --enable-memacct to use an extra protocol in order to accurately track jobs that exceed their memory limits and kill them.
e - when ATTR_node_exclusive is set, reserve the entire node (or entire numa node if applicable) in the cpuset.
n - Changed the protocol versions for all client-to-server, mom-to-server and mom-to-mom protocols from 1 to 2. The changes to the protocol in this version of TORQUE will make it incompatible with previous versions.
e - when a select statement is used, tally up the memory requests and mark the total in the resource list. This allows memory enforcement for NUMA jobs, but doesn't affect others, as memory isn't enforced for multinode jobs.
e - add an asynchronous option to qdel.
b - do not reply when an asynchronous reply has already been sent.
e - make the mem, vmem, and cput usage available on a per-mom basis using momctl -d2. (Dr. Bernd Kallies)
e - move the memory monitor functionality to linux/mom_mach.c in order to store the more accurate statistics for usage, and still use it for applying limits. (Dr. Bernd Kallies)
e - when pbs_mom is compiled to use cpusets, instead of looking at all processes, only examine the ones in cpuset task files.
For busy machines (especially large systems like UVs) this can exponentially reduce job monitoring/harvesting times. (Dr. Bernd Kallies)
e - when cpusets are configured and memory pressure enabled, add the ability to check memory pressure for a job. Using $memory_pressure_threshold and $memory_pressure_duration in the mom's config, the admin sets a threshold at which a job becomes a problem. If duration is set, the job will be killed if it exceeds the threshold for the configured number of checks. If duration isn't set, then an error is logged. (Dr. Bernd Kallies)
e - change pbs_track to look for the executable in the existing path so it doesn't always need a complete path. (Dr. Bernd Kallies)
e - report sessions on a per numa node basis when NUMA is enabled. (Dr. Bernd Kallies)
b - Merged revision 4325 from 2.5-fixes. Fixed a problem where the -m n (request no mail on qsub) option was not always being recognized.
e - Merged buildutils/torque.spec.in from 2.4-fixes.

2.5.10

b - Fixed a problem where pbs_mom will crash if check_pwd returns NULL. This could happen, for example, if LDAP was down and getpwnam returns NULL.
received from the server on the MOM. This fix allows the MOM to delete the job and free up resources even if the server for some reason does not send the delete job request.
c - TRQ-608: Removed code to check for blocking mode in write_nonblocking_socket(). Fixes problem with interactive jobs (qsub -I) exiting prematurely.
b - fix a buffer being overrun with nvidia gpus enabled (backported from 3.0.4).
b - no longer leave zombie processes when munge authenticating. (backported from 3.0.4)

2.5.9

e - change mom to only log "cannot find nvidia-smi in PATH" once when built with --enable-nvidia-gpus and running on a node that does not have Nvidia drivers installed.
b - Change so gpu states get set/unset correctly.
Fixes problems with multiple exclusive jobs being assigned to the same gpu and where the next job gets rejected because the gpu state was not reset after the last shared gpu job finished.
e - Added a 1 millisecond sleep to src/lib/Libnet/net_client.c client_to_svr() if connect fails with the EADDRINUSE, EINVAL or EADDRNOTAVAIL case. For these cases TORQUE will retry the connect again. This fix increases the chance of success on the next iteration.
b - Changes to decrease some gpu error messages and to detect unusual gpu drivers and configurations.
b - Change so a user cannot impersonate a different user when using munge.
e - Added a new option to torque.cfg named TRQ_IFNAME. This allows the user to designate a preferred outbound interface for TORQUE requests. The interface is the name of the NIC interface, for example eth0.
e - Added instructions concerning the server parameter moab_array_compatible to the README.array_changes file.
b - Fixed a problem where pbs_server would seg-fault if munged was not running. It would also seg-fault if an invalid credential were sent from a client. The seg-fault occurred in the same place for both cases.
b - Fixed a problem where jobs dependent on an array using afteranyarray would not start when a job element of the array completed.
b - Fixed a bug where an array job's .AZ file would not be deleted when the array job was done.
e - Modified qsub so that it will set PBS_O_HOST on the server from the incoming interface. (With this fix QSUBHOST from torque.cfg will no longer work. Do we need to make it override the host name?)
b - fix so user epilogues are run as the user instead of root (backported from 3.0.3)
b - fix to prevent pbs_server from hanging when doing server-to-server job moves (backported from 3.0.3)
b - Fixed a problem where array jobs would always lose their state when pbs_server was restarted. Array jobs now retain their previous state between restarts of the server the same as non-array jobs.
This fix takes care of a problem where Moab and TORQUE would get out of sync on jobs because of this discrepancy between states.
b - Made a fix related to procct. If no resources are requested on the qsub line, previous versions of TORQUE did not create a Resource_List attribute, specifically a nodes and nodect element for Resource_List. Adding this broke some applications. I made it so that if no nodes or procs resources are requested, procct is set to 1 without creating the nodes element.
e - Changed enable-job-create to with-job-create with an optional CFLAG argument: --with-job-create=<CFLAG options>
e - Changed qstat.c to display 6 instead of 5 digits for Req'd Memory for a qstat -a.

2.5.8
e - added util function getpwnam_ext() that has retry and errno logging capability for calls to getpwnam().
c - fix a potential segfault when using asynchronous runjob with an array slot limit (backported from 3.0.3)
b - In pbs_original_connect() only the first NCONNECT entries of the connection table were checked for availability. NCONNECT is defined as 10. However, the connection table is PBS_NET_MAX_CONNECTIONS in size. PBS_NET_MAX_CONNECTIONS is 10240. NCONNECT is now defined as PBS_NET_MAX_CONNECTIONS.
b - fix bugzilla #135, stagein was deleting a directory instead of a file (backported from 3.0.3)
b - If the resources nodes or procs are not submitted on the qsub command line then the nodes attribute does not get set. This causes a problem if procct is set on queues, because there is no proc count available to evaluate. This fix sets a default nodes value of 1 if the nodes or procs resources are not requested.
e - Change so Nvidia drivers 260, 270 and above are recognized.
e - Added server attribute no_mail_force which, when set True, eliminates all e-mail when the job's mail_points is set to "n".

2.5.7
e - Added a new qsub argument, -F. This argument takes a quoted string as an argument.
The string is a list of space-separated command-line arguments which are made available to the job script.
b - Fixed a potential buffer overflow problem in the src/resmom/checkpoint.c function mom_checkpoint_recover. I modified the code to change strcpy and strcat to strncpy and strncat.
b - dependency so dependent jobs would get stuck on hold if the current server was not the first server in the server_name file.

2.5.6
b - Made changes to record_jobinfo and supporting functions to be able to use dynamically allocated buffers for data. This fixed a problem where incoming data overran fixed-size buffers.
b - Fixed a problem with minimum sizes in queues. Minimum sizes were not getting enforced because the logic checking the queue against the user request used an && when it needed a || in the comparison.
c - fix a segfault when using --enable-nvidia-gpus and pbs_mom has an Nvidia driver older than 260 that still has the nvidia-smi command
e - Added capability to automatically set the mode on Nvidia gpus. Added support for the gpu reseterr option on qsub. The nodes file will be updated with the Nvidia gpu count when the --enable-nvidia-gpus configure option is used. Moved some code out of job_purge_thread to prevent a segfault on the mom.
e - Applied a patch submitted by Eric Roman. This patch addresses some build issues with BLCR, and fixes an error where BLCR would report -ENOSUPPORT when trying to checkpoint a parallel job. The patch adds a --with-blcr option to configure to find the path to the BLCR libraries. There are --with-blcr-include, --with-blcr-lib and --with-blcr-bin to override the search paths, if necessary. The last option, --with-blcr-bin, is used to generate contrib/blcr/checkpoint_script and contrib/blcr/restart_script from the information supplied at configure time.
b - Fixed a problem where calling qstat with a non-existent job id would hang the qstat command. This was only a problem when configured with MUNGE.
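The qsub -F argument added in 2.5.7 above passes a quoted, space-separated string through to the job script as positional parameters. A usage sketch (the script name and argument values are illustrative, and a TORQUE installation is required to actually run it):

```shell
# Inside myjob.sh the arguments arrive as the usual positional
# parameters: $1 = input.dat, $2 = 42.
qsub -F "input.dat 42" myjob.sh
```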
b - fix a potential buffer overflow security issue in job names and host address names

2.5.5
b - change so gpus get written back to the nodes file
e - make it so that even if an array request has multiple consecutive '%' the slot limit will be set correctly
b - Fixed a bug in job_log_open where the global variable logpath was freed instead of joblogpath.
b - Fixed a memory leak in the function procs_requested.
b - Validated incoming data for escape_xml to prevent a seg-fault with incoming null pointers
e - Added submit_host and init_work_dir as job attributes. These two values are now displayed with a qstat -f. The submit_host is the name of the host from where the job was submitted. init_work_dir is the working directory, as in PBS_O_WORKDIR.
e - change so blcr checkpoint jobs can restart on a different node. Use configure --enable-blcr to allow this.
b - remove the use of a GNU-specific function, and fix an error for Solaris builds
b - Updated PBS_License.txt to remove the implication that the software is not freely redistributable.
b - remove the $PBS_GPUFILE when the job is done on the mom
b - fix a race condition when issuing a qrerun followed by a qdel that sometimes caused the job to be queued instead of deleted.
e - Implemented Bugzilla Bug 110. If a host in the nodes file cannot be resolved at startup, the server will try once every 5 minutes until the node resolves, at which point it will be added to the nodes list.
e - Added a "create" method to the pbs_server init.d script so a serverdb file can be created if it does not exist at startup time. This is an enhancement in reference to Bugzilla bug 90.
b - Fixed a problem in parse_node_token where the local static variable pt would be advanced past the end of the line input if there is no newline character at the end of the nodes file.

2.5.4
f - added the ability to track gpus.
Users set gpus=X in the nodes file for the relevant node, and then request gpus in the nodes request: -l nodes=X[:ppn=Y][:gpus=Z]. The gpus appear in $PBS_GPUFILE, a new environment variable, in the form <hostname>-gpu<index>, and in a new job attribute exec_gpus: <hostname>-gpu/<index>[+<hostname>-gpu/<index>...]
b - clean up the job's mom checkpoint directory on checkpoint failure
e - Bugzilla bug 91. Check the status before the service is actually started. (Steve Traylen - CERN)
e - Bugzilla bug 89. Only touch lock/subsys files if the service actually starts. (Steve Traylen - CERN)
c - when using job_force_cancel_time, fix a crash in rare cases
e - add server parameter moab_array_compatible. When set to true, this parameter places a limit hold on jobs past the slot limit. Once one of the unheld jobs completes or is deleted, one of the held jobs is freed.
b - fix a potential memory corruption for walltime remaining for jobs (Vikentsi Lapa)
b - fix a potential buffer overrun in pbs_sched (Bugzilla #98, patch from Stephen Usher @ University of Oxford)
e - check if a process still exists before killing it and sleeping. This speeds up the time for killing a task exponentially; this will show mostly on SMP/NUMA systems, but it will help everywhere. (Dr. Bernd Kallies)
b - Fix for requeue failures on the mom. A forked pbs_mom would silently segfault and the job was left in the Exiting state.
b - change so "mom_checkpoint_job_has_checkpoint" and "execing command" log messages do not always get logged

2.5.3
b - stop reporting errors on success when modifying array ranges
b - don't try to set the user id multiple times
b - added some retrying to get a connection, and changed some log messages when doing a pbs_alterjob after a checkpoint
c - fix segfault in tracejob.
It wasn't malloc'ing space for the null terminator.
e - add the variables PBS_NUM_NODES and PBS_NUM_PPN to the job environment (TRQ-6)
e - be able to append to the job's variable_list through the API (TRQ-5)
e - Added support for munge authentication. This is an alternative to the default ruserok remote authentication and pbs_iff. This is a compile-time option; the configure option to use is --enable-munge-auth. Ken Nielson (TRQ-7) September 15, 2010.
b - fix the dependency hold for arrays. They were accidentally cleared before (RT 8593)
e - add a logging statement if sendto fails at any point in rpp_send_out
b - Applied a patch submitted by Will Nolan to fix bug 76, "blocking read does not time out using signal handler"
b - fix a bug in the $spool_as_final_name code if HAVE_WORDEXP is undefined
b - Bugzilla bug 84. Security bug in the way checkpoint is being handled. (Robin R. - Miami Univ. of Ohio)
e - Now saving serverdb as an XML file instead of a byte-dump, thus allowing canned installations without qmgr scripts, as well as more portability. Able to upgrade automatically from 2.1, 2.3, and 2.4.
b - fix to clean up job files on the mom after a BLCR job is checkpointed and held
b - make the tcp reading buffer able to grow dynamically to read larger values in order to avoid "invalid protocol" messages
e - change so checkpoint files are transferred as the user, not as root.
f - Added configure option --with-servchkptdir which allows specifying the path for the server's checkpoint files
b - could not set the server HA parameters lock_file_update_time and lock_file_check_time previously. Fixed.
e - qpeek now has the options --ssh, --rsh, --spool, --host, -o, and -e. It can now output both the STDOUT and STDERR files. Eliminated numlines, which didn't work.
b - fix to prevent a possible segfault when using checkpointing.
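The HA parameters lock_file_update_time and lock_file_check_time mentioned above are ordinary server attributes once this fix is in place; a minimal sketch of setting them via qmgr (the values are illustrative, not recommendations):

```shell
qmgr -c 'set server lock_file_update_time = 10'
qmgr -c 'set server lock_file_check_time = 15'
```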
e - add --enable-top-tempdir-only to only create the top directory of the job's temporary directory when configured
b - make the code for reconnecting to the server more robust, and remove elements of not connecting if a job isn't running
e - allow input of walltime in the format [DD]:HH:MM:SS
b - Fix so BLCR checkpoint files get copied to the server on qchkpt and periodic checkpoints
c - corrected a segfault when display_job_server_suffix is set to false and job_suffix_alias was unset.

2.5.1
b - modified Makefile.in and Makefile.am at the root to include contrib/AddPrivileges

2.5.0
e - Added a new server config option, alias_server_name. This option allows the MOM to add an additional server name to the list of trusted addresses. The point of this is to be able to handle alias IP addresses. UDP requests that come into an aliased IP address are returned through the primary IP address in TORQUE. Because the address of the reply packet from the server is not the same address the MOM sent its HELLO1 request to, the MOM drops the packet and the MOM cannot be added to the server.
e - auto_node_np will now adjust np values down as well as up.
e - Enabled TORQUE to be able to parse the -l procs=x node spec. Previously TORQUE simply recorded the value of x for procs in Resources_List. It now takes that value and allocates x processors packed on any available node. (Ken Nielson, Adaptive Computing, June 17, 2010)
f - added full support (server-scheduler-mom) for Cygwin (UIIP NAS of Belarus, uiip.bas-net.by)
b - fixed EINPROGRESS in net_client.c. This error appears every time a connection is made and requires individual processing. The old erroneous processing brought a large network delay, especially on Cygwin.
e - improved signal processing after connecting in client_to_svr and added our own implementation of bindresvport for OSes which lack it (Igor Ilyenko, UIIP Minsk)
f - created permission checking of Windows (Cygwin) users, using mkpasswd, mkgroup and our own functions IamRoot, IamUser (Yauheni Charniauski, UIIP Minsk)
f - created permission checking of submitted jobs (Vikentsi Lapa, UIIP Minsk)
f - Added the --disable-daemons configure option for starting server, sched and mom as Windows services; cygrunsrv.exe puts them into the background independently.
e - Adapted output of Cygwin's diagnostic information (Yauheni Charniauski, UIIP Minsk)
e - Changed pbsd_main to call daemonize_server early only if high_availability_mode is set.
e - added new qmgr server attributes (clone_batch_size, clone_batch_delay) for controlling job cloning (Bugzilla #4)
e - added a new qmgr attribute (checkpoint_defaults) for setting default checkpoint values on Execution queues (Bugzilla #1)
e - print a more informative error if pbs_iff isn't found when trying to authenticate a client
e - added qmgr server attribute job_start_timeout, which specifies the timeout to be used for sending a job to the mom. If not set, tcp_timeout is used.
e - added -DUSESAVEDRESOURCES code that uses the server's saved resources-used for the accounting end record instead of the current resources used, for jobs that stopped running while the mom was not up.
e - TORQUE job arrays now use arrays to hold the job pointers and not linked lists (allows constant lookup).
f - Allow users to delete a range of jobs from the job array (qdel -t)
f - Added a slot limit to the job arrays - this restricts the number of jobs that can concurrently run from one job array.
f - added support for holding ranges of jobs from an array with a single qhold (using the -t option).
f - now ranges of jobs in an array can be modified through qalter (using the -t option).
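The array slot limit and the -t range operations listed above combine roughly as follows. The job ID, the ranges, and the %5 slot-limit suffix are illustrative assumptions, not taken from this appendix:

```shell
qsub -t 0-99%5 array_job.sh   # 100 sub-jobs, at most 5 running concurrently
qhold -t 50-99 1234[]         # hold a range of sub-jobs
qrls  -t 50-99 1234[]         # release them again
qdel  -t 10-19 1234[]         # delete a range
```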
f - jobs can now depend on arrays using these dependencies: afterstartarray, afterokarray, afternotokarray, afteranyarray
f - added support for using qrls on arrays with the -t option
e - complete overhaul of the job array submission code
e - by default show only a single entry in qstat output for the whole array (qstat -t expands the job array)
f - server parameter max_job_array_size limits the number of jobs allowed in an array
b - job arrays can no longer circumvent max_user_queuable
b - job arrays can no longer circumvent max_queuable
f - added server parameter max_slot_limit to restrict slot limits
e - changed array names from jobid-index to jobid[index] for consistency

2.4.13
b - Merged revisions 4555, 4556 and 4557 from the 2.5-fixes branch. These revisions fix problems in high availability mode and also a problem where the MOM was not releasing the lock on mom.lock on exit.
b - fix pbs_mom -q to work with parallel jobs (backported from 3.0.1)
b - fixed a bug in set_resources that prevented the last resource in a list from being checked. As a result the last item in the list would always be added without regard to previous entries.
b - Fixed a problem with minimum sizes in queues. Minimum sizes were not getting enforced because the logic checking the queue against the user request used an && when it needed a || in the comparison.
c - fix a segfault when a queue has acl_group_enable and acl_group_sloppy set true and no acl_groups are defined (backported from 3.0.1)
e - Updated Makefile.in, configure, etc. to reflect a change in configure.ac to add libpthread to the build. This was done for the fix for Bugzilla Bug 121.

2.4.12
b - Bugzilla bug 84. Security bug in the way checkpoint is being handled. (Robin R. - Miami Univ.
of Ohio, back-ported from 2.5.3)
b - make the tcp reading buffer able to grow dynamically to read larger values in order to avoid "invalid protocol" messages (backported from 2.5.3)
b - could not set the server HA parameters lock_file_update_time and lock_file_check_time previously. Fixed. (backported from 2.5.3)
e - qpeek now has the options --ssh, --rsh, --spool, --host, -o, and -e. It can now output both the STDOUT and STDERR files. Eliminated numlines, which didn't work. (backported from 2.5.3)
b - Modified the pbs_server startup routine to skip unknown hosts in the nodes file instead of terminating the server startup.
b - fix to prevent a possible segfault when using checkpointing (back-ported from 2.5.3)
b - fix to clean up job files on the mom after a BLCR job is checkpointed and held (back-ported from 2.5.3)
c - when using job_force_cancel_time, fix a crash in rare cases (backported from 2.5.4)
b - fix a potential memory corruption for walltime remaining for jobs (Vikentsi Lapa, backported from 2.5.4)
b - fix a potential buffer overrun in pbs_sched (Bugzilla #98, patch from Stephen Usher @ University of Oxford, backported from 2.5.4)
e - check if a process still exists before killing it and sleeping. This speeds up the time for killing a task exponentially; this will show mostly on SMP/NUMA systems, but it will help everywhere. (backported from 2.5.4) (Dr. Bernd Kallies)
b - Merged revision 4325 from 2.5-fixes. Fixed a problem where -m n (request no mail on qsub) was not always being recognized.
b - Fix for requeue failures on the mom. A forked pbs_mom would silently segfault and the job was left in the Exiting state. (backported from 2.5.4)
b - prevent the nodes file from being overwritten when running make packages
b - change so "mom_checkpoint_job_has_checkpoint" and "execing command" log messages do not always get logged (back-ported from 2.5.4)
b - remove the use of a GNU-specific function.
(back-ported from 2.5.5)

2.4.11
b - changed the type cast for the calloc of ioenv from sizeof(char) to sizeof(char *) in pbsdsh.c. This fixes bug 79.
b - Added a patch to fix bug 76, "blocking read does not time out using signal handler."
b - Modified the pbs_server startup routine to skip unknown hosts in the nodes file instead of terminating the server startup.

2.4.10
b - fix for bug 61. The fix takes care of a problem where pbs_mom under some situations will change the mode and permissions of /dev/null.

2.4.9

2.4.8
e - Bugzilla bug 22. HIGH_PRECISION_FAIRSHARE for fifo scheduling.
c - no longer sigabrt with "running" jobs not in an execution queue; log an error instead.
c - fixed a segfault for when TORQUE thinks there's a nanny but there isn't
e - mapped 'qsub -P user:group' to qsub -P user -W group_list=group
b - reverted to the old behavior where interactive scripts are checked for directives and not run without a parameter.
e - setting a queue's resource_max.nodes now actually restricts things, although so far it only limits based on the number of nodes (i.e. not ppn)
f - added QSUBSENDGROUPLIST to qsub. This allows the server to know the correct group name when disable_server_id_check is set to true and the user doesn't exist on the server.
e - Bugzilla bug 54. Patch submitted by Bas van der Vlies to make pbs_mkdirs more robust, provide a help function and a new option -C <chk_tree_location>

2.4.7
b - fixed a bug for when a resource_list has been set but isn't completely initialized, causing a segfault
b - stop counting down walltime remaining after a job is completed
b - correctly display the number of tasks as used in TORQUE in qstat -a output
b - no longer ignoring fread return values in the Linux cpuset code (gcc 4.3.3)
b - fixed a bug where a job was added to the obit retry list multiple times, causing a segfault
b - Fix for Bugzilla bug 43.
"configure ignores with-modulefiles=no"
b - no longer try to decide when to start with -t create in init.d scripts; -t create should be done manually by the user
f - added -P to qsub. When submitting a job as root, the root user may add -P <username> to submit the job as the proxy user specified by <username>

2.4.6
f - added an asynchronous option for qsig, specified with -a.
b - fix to clean up a job that is left in the running state after a mom restart
f - added two server parameters: display_job_server_suffix and job_suffix_alias. The first defaults to true and controls whether or not jobs should be appended by .server_name. The second defaults to NULL, but if it is defined it will be appended at the end of the jobid, i.e. jobid.job_suffix_alias.
f - added the -l option to qstat so that it will display a server name and an alias if both are used. If these aren't used, -l has no effect.
e - qstat -f now includes an extra field "Walltime Remaining" that tells the remaining walltime in seconds. This field does not account for weighted walltime.
b - fixed open_std_file to setegid as well; this caused a problem with epilogue.user scripts.
e - qsub's -W can now parse attributes with quoted lists, for example: qsub script -W attr="foo,foo1,foo2,foo3" will set foo,foo1,foo2,foo3 as attr's value.
b - split Cray job library and CSA functionality, since CSA is dependent on the job library but the job library is not dependent on CSA

2.4.5
b - epilogue.user scripts were being run with prologue arguments. Fixed a bug in run_pelog() to include PE_EPILOGUSER so epilogue arguments get passed to the epilogue.user script.
b - Ticket 6665. pbs_mom and job recovery. Fixed a bug where the -q option would terminate running processes as well as requeue jobs. This made the -q option the same as the -r option for pbs_mom. -q will now only requeue jobs and will not attempt to kill running processes. I also added a -P option to start pbs_mom.
This is similar to the -p option, except that the -P option will only delete any leftover jobs from the queue and will not attempt to adopt any running processes.
e - Modified the man page for pbs_mom. Added the new -P option, plus edited the -p, -q and -r options to hopefully make them more understandable.
n - 01/15/2010 created snapshot torque-2.4.5-snap201001151416.tar.gz.
b - now checks secondary groups (as well as primary) for creating a file when spooling. Before, it wouldn't create the spool file if a user had permission through a secondary group.
n - 01/18/2010. Items above this point merged into trunk.
b - fixed a file descriptor error with high availability. Before, it was possible to try to regain a file descriptor which was never held; now this is fixed.
b - No longer overwrites the user's environment when spoolasfinalname is set. Now the environment is handled correctly.
b - No longer will segfault if pbs_mom restarts in a bad state (user environment not initialized)
e - Changing MAXNOTDEFAULT behavior. Now, by default, max is not default, and max can be configured as default with --enable-maxdefault.

2.4.4
b - fixed contrib/init.d/pbs_mom so that it doesn't overwrite $args defined in /etc/sysconfig/pbs_mom
b - when spool_as_final_name is configured for the mom, no longer send email messages about not being able to copy the spool file
b - when spool_as_final_name is configured for the mom, correctly substitute job environment variables
f - added logging for email events; allows the admin to check if emails are being sent correctly
b - Made a fix to svr_get_privilege(). On some architectures a non-root user name would be set to null after the line "host_no_port[num_host_chars] = 0;" because num_host_chars was 1024, which was the size of host_no_port. The null termination needed to happen at 1023. There were other problems with this function, so code was added to validate the incoming variables before they were used.
The symptom of this bug was that non-root managers and operators could not perform operations where they should have had rights.
b - Missed a format statement in an sprintf statement for the bug fix above.
b - Fixed a way that a file descriptor (for the server lockfile) could be used without initialization. RT 6756

2.4.3
b - fix PBSD_authenticate so it correctly splits PATH with : instead of ; (bugzilla #33)
b - pbs_mom now sets resource limits for tasks started with tm_spawn (Chris Samuel, VPAC)
c - fix an assumption about the size of unsocname.sun_path in Libnet/net_server.c
b - Fix for Bugzilla bug 34, "torque 2.4.X breaks OSC's mpiexec". Fix in src/server/stat_job.c, revision 3268.
b - Fix for Bugzilla bug 35 - printing the wrong pid (normal mode) and not printing any pid for high availability mode.
f - added a diagnostic script (contrib/diag/tdiag.sh). This script grabs the log files for the server and the mom, records the output of qmgr -c 'p s' and the nodefile, and creates a tarfile containing these.
b - Changed momctl -s to use exit(EXIT_FAILURE) instead of return(-1) if a mom is not running.
b - Fix for Bugzilla bug 36, "qsub crashes with long dependency list".
b - Fix for Bugzilla bug 41, "tracejob creates a file in the local directory".

2.4.2
b - Changed the predicate in pbsd_main.c for the two locations where daemonize_server is called to check the value of high_availability_mode to determine when to put the server process in the background.
c - keep pbs_server from trying to free empty attrlist after receiving bad request (Michael Meier, University of Erlangen-Nurnberg) (merged from 2.3.8)
f - new fifo scheduler config option. ignore_queue: queue_name allows the scheduler to be instructed to ignore up to 16 queues on the server
(Simon Toth, CESNET z.s.p.o.)
f - new boolean queue attribute "is_transit" that allows jobs to exceed server resource limits (queue limits are respected). This allows routing queues to route jobs that would be rejected for exceeding local resources even when the job won't be run locally. (Simon Toth, CESNET z.s.p.o)
f - (-f to qsub) this attribute indicates that a job can survive the loss of a sister mom; also added corresponding fault_tolerant and fault_intolerant types to the "disallowed_types" queue attribute
b - fixes for pbs_mom's updating of comment and checkpoint name and time
e - change so we can reject hold requests on running jobs that do not have checkpoint enabled if the system was configured with --enable-blcr
e - change to qsub so only the host name can be specified on the -e/-o options
e - added -w option to qsub that allows setting of PBS_O_WORKDIR

2.3.8
c - keep pbs_server from trying to free empty attrlist after receiving bad request (Michael Meier, University of Erlangen-Nurnberg)
e - moving jobs can now trigger a scheduling iteration
b - fix how qsub sets PBS_O_HOST and PBS_SERVER (Eirikur Hjartarson, deCODE genetics)
f - add qpool.gz to the contrib directory
b - fix the return value of cpuset_delete() for Linux (Chris Samuel - VPAC)
e - Set PBS_MAXUSER to 32 from 16 in order to accommodate systems that use a 32-bit user name. (Ken Nielson, Cluster Resources)
c - modified acct_job in server/accounting.c to dynamically allocate memory to accommodate strings larger than PBS_ACCT_MAX_RCD. (Ken Nielson, Cluster Resources)
e - allow the user to turn off credential lifetimes so they don't have to lose iterations while credentials are renewed.
e - added OS-independent resending of failed job obits (from D Beer); also removed the OS-specific CACHEOBITFAILURES code.
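The "is_transit" attribute above is set per queue; a sketch of marking a routing queue as transit so it can route jobs that exceed local server limits (the queue name is illustrative):

```shell
qmgr -c 'create queue route_q queue_type = route'
qmgr -c 'set queue route_q is_transit = true'
```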
b - fix so after* dependencies are handled correctly for exiting / completed jobs

2.3.7
b - fixed a bug where UNIX domain socket communication was failing when "--disable-privports" was used.
e - add the job exit status as the 10th argument to the epilogue script
b - fix truncated output in qmgr (peter h IPSec+jan n NANCO)
b - change so set_jobexid() gets called if JOB_ATR_egroup is not set
e - pbs_mom sisters can now tolerate an explicit group ID instead of only a valid group name. This helps TORQUE be more robust to group lookup failures.

2.3.6
e - in Linux, a pbs_mom will now "kill" a job's task even if that task can no longer be found in the OS process table. This prevents jobs from getting "stuck" when the PID vanishes in some rare cases.

2.3.5
e - added new init.d scripts for Debian/Ubuntu systems
b - fixed a bug where TORQUE's exponential backoff for sending messages to the MOM could overflow

2.3.4
c - fixed a segfault when loading array files of an older/incompatible version
b - fixed a bug where, if an attempt to send a job to a pbs_mom failed due to a timeout, the job would indefinitely remain in the 'R' state
b - fixed a bug preventing multiple TORQUE servers and TORQUE MOMs from operating properly all from the same host
e - fixed several compiler errors and warnings for AIX 5.2 systems
b - fixed a bug with "max_report" where jobs not in the Q state were not always being reported to the scheduler

2.3.3
(ALT_CLSTR_ADDR)

2.3.2
e - added --disable-posixmemlock to force the mom not to use POSIX MEMLOCK.
b - fix a potential buffer overrun in qsub
e - added new values to TJobAttr so we don't have a mismatch with job.h values.
e - added $umask to the pbs_mom config, used for generated output files.
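The $umask parameter above goes in the mom's config file (conventionally mom_priv/config); a minimal fragment with an illustrative value:

```
# mom_priv/config
$umask 022
```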
e - minor pbsnodes overhaul
b - fixed a memory leak in pbs_server

2.2.2
b - correctly parse /proc/pid/stat that contains parens (Meier)
b - prevent runaway hellos being sent to the server when the mom's node is removed from the server's node list
b - fix qdel of entire job arrays for non operator/managers
b - fix a problem where job array .AR files are not saved to disk
b - fixed a problem with tracking job memory usage on OS X
b - fix a memory leak in the server and mom with MoveJobFile requests (backported from 2.3.1)
b - pbs_server doesn't try to "upgrade" .JB files if they have a newer version of the job_qs struct

2.2.1
b - fix a bug where dependent jobs get put on hold when the previous job has completed but its state is still available for the life of keep_completed
b - fixed a bug where pbs_server never deleted files from the "jobs" directory
b - fixed a bug where compute nodes were being put in an indefinite "down" state
e - added the job_array_size attribute to the pbs_submit documentation

2.2.0
e - improve RPP logging for corruption issues
f - dynamic resources
e - use mlockall() in pbs_mom if _POSIX_MEMLOCK
f - consumable resource "tokens" support (Harte-Hanks)
e - build process sets the default submit filter path to ${libexecdir}/qsub_filter; we fall back to /usr/local/sbin/torque_submitfilter to maintain compatibility
e - allow long job names when not using -N
f - new MOM $varattr config
e - daemons are no longer installed 700
e - tighten directory path checks
f - new mom configs: $auto_ideal_load and $auto_max_load
e - pbs_mom on Darwin (OS X) no longer depends on libkvm (now works on all versions without need to re-enable /dev/kmem on newer PPC or all x86 versions)
e - added PBS_SERVER env variable for job scripts
e - add --about support to daemons and client commands
f - added qsub -t (primitive job array)
e - add PBS_RESOURCE_GRES to the prolog/epilog environment
e - add -h hostname to pbs_mom (NCIFCRF)
e - filesec enhancements (StockholmU)
e - added ERS and IDS documentation
e - allow export of specific
variables into prolog/epilog environment change fclose to pclose to close submit filter pipe (ABCC) add support for Cray XT size and larger qstat task reporting (ORNL) pbs_demux is now built with pbs_mom instead of with clients epilogue will only run if job is still valid on exec node add qnodes, qnoded, qserverd, and qschedd symlinks enable DEFAULTCKPT torque.cfg parameter allow compute host and submit host suffix with nodefile_suffix Appendix B. TORQUE and Maui Release Information f - add --with-modulefiles=[DIR] support b - be more careful about broken tclx installs 2.1.11 b - nqs2pbs is now a generated script b - correct handling of priv job attr b - change font selectors in manpages to bold b - on pbs_server startup, don’t skip job-exclusive nodes on initial MOM scan b - pbs_server should not connect to "down" MOMs for any job operation b - use alarm() around writing to job’s stdio incase it happens to be a stopped tty 2.1.10 b - fix buffer overflow in rm_request, fix 2 printf that should be sprintf (Umea University) b - correct updating trusted client list (Yahoo) b - Catch newlines in log messages, split messages text (Eygene Ryabinkin) e - pbs_mom remote reconfig pbs_mom now disabled by default use $remote_reconfig to enable it b - fix pam configure (Adrian Knoth) b - handle /dev/null correctly when job rerun 2.1.9 f - new queue attribute disallowed_types, currently recognized types: interactive, batch, rerunable, and nonrerunable e - refine "node note" feature with pbsnodes -N e - bypass pbs_server’s uid 0 check on cygwin e - update suse initscripts b - fix mom memory locking b - fix sum buffer length checks in pbs_mom b - fix memory leak in fifo scheduler b - fix nonstandard usage of ’tail’ in tpackage b - fix aliasing error with brp_txtlen f - allow manager to set "next job number" via hidden qmgr attribute next_job_number 2.1.8 b b b b e b b b e - stop possible memory corruption with an invalid request type (StockholmU) add node name to pbsnodes XML 
output (NCIFCRF) correct Resource_list in qstat XML output (NCIFCRF) pam_authuser fixes from uam.es allow ’pbsnodes -l’ to work with a node spec clear exec_host and session_id on job requeue fix mom child segfault when a user env var has a ’%’ correct buggy logging in chk_job_request() (StockholmU) pbs_mom shouldn’t require server_name file unless it is actually going to be read (StockholmU) f - "node notes" with pbsnodes -n (sandia) 2.1.7 b b b f f e - fix bison syntax error in Parser.y fix 2.1.4 regression with spool file group owner on freebsd don’t exit if mlockall sets errno ENOSYS qalter -v variable_list MOMSLEEPTIME env delays pbs_mom initialization minor log message fixups 81 Appendix B. TORQUE and Maui Release Information e - enable node-reuse in qsub eval if server resources_available.nodect is set e - pbs_mom and pbs_server can now use PBS_MOM_SERVER_PORT, PBS_BATCH_SERVICE_PORT, and PBS_MANAGER_SERVICE_PORT env vars. e - pbs_server can also use PBS_SCHEDULER_SERVICE_PORT env var. 
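The 2.1.7 entries above add daemon port environment variables, and 2.2.0 adds `qsub -t` primitive job arrays. A minimal sketch of how these might be used together in a site script; the port numbers and script name here are illustrative, not defaults documented in the notes:

```shell
# Hypothetical non-default ports; the variable names are the ones the
# 2.1.7 notes introduce, the values are made up for illustration.
export PBS_BATCH_SERVICE_PORT=15001
export PBS_MOM_SERVER_PORT=15002
export PBS_MANAGER_SERVICE_PORT=15003
export PBS_SCHEDULER_SERVICE_PORT=15004

# A 2.2.0-style primitive job array submission. The command is only
# assembled and echoed here, since no pbs_server is assumed to be running.
submit_cmd="qsub -t 0-9 array_job.sh"
echo "$submit_cmd"
```

With a live server, running the echoed command would submit ten array sub-jobs (indices 0 through 9) of the hypothetical `array_job.sh` script.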
e - add "other" resource to pelog's 5th argument

2.1.6

b - freebsd5 build fix
b - fix 2.1.4 regression with TM on single-node jobs
b - fix 2.1.4 regression with rerunning jobs
b - additional spool handling security fixes

2.1.5

b - fix 2.1.4 regression with -o/dev/null

2.1.4

b - fix cput job status
b - Fix "Spool Job Race condition"

2.1.3

b - correct run-time symbol in pam module on RHEL4
b - some minor hpux11 build fixes (PACCAR)
b - fix bug with log roll and automatic log filenames
b - compile error with size_fs() on digitalunix
e - pbs_server will now print build details with --about
e - new freebsd5 mom arch for Freebsd 5.x and 6.x (trasz)
f - backported new queue attribute "max_user_queuable"
e - optimize acl_group_sloppy
e - fix "list_head" symbol clash on Solaris 10
e - allow pam_pbssimpleauth to be built on OSX and Solaris
b - networking fixes for HPUX, fixes pbs_iff (PACCAR)
e - allow long job names when not using -N
c - using depend=syncwith crashed pbs_server
c - races with down nodes and purging jobs crashed pbs_server
b - staged out files will retain proper permission bits
f - may now specify umask to use while creating stderr and stdout spools, e.g. qsub -W umask=22
b - correct some fast startup behaviour
e - queue attribute max_queuable accounts for C jobs

2.1.2

b - fix momctl queries with multiple hosts
b - don't fail make install if --without-sched
b - correct MOM compile error with atol()
f - qsub will now retry connecting to pbs_server (see manpage)
f - X11 forwarding for single-node, interactive jobs with qsub -X
f - new pam_pbssimpleauth PAM module, requires --with-pam=DIR
e - add logging for node state adjustment
f - correctly track node state and allocation based for suspended jobs
e - entries can always be deleted from manager ACL, even if ACL contains host(s) that no longer exist
e - more informative error message when modifying manager ACL
f - all queue create, set, and unset operations now set a queue mtime
f - added support for log rolling to libtorque
f - pbs_server and pbs_mom have two new attributes, log_file_max_size and log_file_roll_depth
e - support installing client libs and cmds on unsupported OSes (like cygwin)
b - fix subnode allocation with pbs_sched
b - fix node allocation with suspend-resume
b - fix stale job-exclusive state when restarting pbs_server
b - don't fall over when duplicate subnodes are assigned after suspend-resume
b - handle suspended jobs correctly when restarting pbs_server
b - allow long host lists in runjob request
b - fix truncated XML output in qstat and pbsnodes
b - typo broke compile on irix6array and unicos8
e - momctl now skips down nodes when selecting by property
f - added submit_args job attribute

2.1.1

c - fix mom_sync_job code that crashes pbs_server (USC)
b - checking disk space in $PBS_SERVER_HOME was mistakenly disabled (USC)
e - node's np now accessible in qmgr (USC)
f - add ":ALL" as a special node selection when stat'ing nodes (USC)
f - momctl can now use :property node selection (USC)
f - send cluster addrs to all nodes when a node is created in qmgr (USC)
    - new nodes are marked offline
    - all nodes get new cluster ipaddr list
    - new nodes are cleared of offline bit
f - set a node's np from the status' ncpus (only if ncpus > np) (USC)
    - controlled by new server attribute "auto_node_np"
c - fix possible pbs_server crash when nodes are deleted in qmgr (USC)
e - avoid dup streams with nodes for quicker pbs_server startup (USC)
b - configure program prefix/suffix will now work correctly (USC)
b - handle shared libs in tpackages (USC)
f - qstat's -1 option can now be used with -f for easier parsing (USC)
b - fix broken TM on OSX (USC)
f - add "version" and "configversion" RM requests (USC)
b - in pbs-config --libs, don't print rpath if libdir is in the sys dlsearch path (USC)
e - don't reject job submits if nodes are temporarily down (USC)
e - if MOM can't resolve $pbsserver at startup, try again later (USC)
    - $pbsclient still suffers this problem
c - fix nd_addrs usage in bad_node_warning() after deleting nodes (MSIC)
b - enable build of xpbsmom on darwin systems (JAX)
e - run-time config of MOM's rcp cmd (see pbs_mom(8)) (USC)
e - momctl can now accept query strings with spaces, multiple -q opts (USC)
b - fix linking order for single-pass linkers like IRIX (ncifcrf)
b - fix mom compile on solaris with statfs (USC)
b - memory corruption on job exit causing cpu0 to be allocated more than once (USC)
e - add increased verbosity to tracejob and added '-q' commandline option
e - support larger values in qstat output (might break scripts!) (USC)
e - make 'qterm -t quick' shutdown pbs_server

2.0.0p8

- really fix torque.cfg parsing (USC)
- fix possible overlapping memcpy in ACL parsing (USC)
- fix rare self-inflicted sigkill in MOM (USC)

2.0.0p7

- fixed pbs_mom SEGV in req_stat_job()
- fixed torque.cfg parameter handling
- fixed qmgr memory leak

2.0.0p6

2.0.0p5

2.0.0p4

- fix up socklen_t issues
- fixed epilog to report total job resource utilization
- improved RPM spec (USC)
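The 2.1.1 entry above notes that qstat's -1 option can be combined with -f "for easier parsing", i.e. each attribute is kept on a single line instead of being wrapped. A minimal sketch of parsing that shape with awk; the sample output below is fabricated for illustration, not captured from a real server:

```shell
# Fabricated sample in the one-attribute-per-line shape that
# "qstat -f -1 <jobid>" is meant to produce.
qstat_out='Job Id: 123.server
    Job_Name = myjob
    job_state = R'

# Extract Job_Name by splitting on the " = " separator.
job_name=$(printf '%s\n' "$qstat_out" | awk -F' = ' '/Job_Name/ {print $2}')
echo "$job_name"
```

Without -1, a long attribute value may be wrapped across continuation lines, which is why line-oriented tools like awk benefit from the single-line form.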
- add experimental server "mom_job_sync" (USC)
- export PBS_SCHED_HINT to pelog

1.2.0p6

- enabled opsys mom config (USC)

- vacation responses (VPAC)
- added multiple changes to address gcc warnings (USC)
- enabled auto-sizing of 'qstat -Q' columns
- purge DOS EOL characters from submit scripts

- added support for node specification w/pbsnodes -a
- added hstfile support to momctl
- added chroot (-D) support (SRCE)
- fixed mom chdir pjob check (SRCE)
- added MOM HELLO initialization procedure
- added momctl diagnostic/admin command (shutdown, reconfig, query, diagnose)
- added mom job abort bailout to prevent infinite loops
- added network reinitialization when socket failure detected
- added mom-to-scheduler reporting when existing job detected

- added support for pbs_server health check and routing to scheduler
- added support for specification of more than one clienthost parameter
- added PW unused-tcp-interrupt patch
- added PW mom-file-descriptor-leak patch
- added PW prologue-bounce patch
- added PW mlockall patch (release mlock for mom children)
- added support for job names up to 256 chars in length

Patches incorporated prior to patch 2:

HPUX superdome support - Oct 2003 (NOR)
    add proper tracking of HP resources
is_status memory leak patches - Oct 2003 (CRI)
    corrects various memory leaks
Bash test - Sep 2003 (FHCRC)
    allows support for linked shells at configure time
AIXv5 support - Sep 2003 (CRI)
    allows support for AIX 5.x systems
OSC Meminfo -- Dec 2001 (P. Wycoff)
    corrects how pbs_mom figures out how much physical memory each node has under Linux
Sandia CPlant Fault Tolerance I (w/OSC enhancements) -- Dec 2001 (L. Fisk/P. Wycoff)
    handles server-MOM hangs
OSC Timeout I -- Dec 2001 (P. Wycoff)
    enables longer inter daemon timeouts
OSC Prologue Env I -- Jan 2002 (P. Wycoff)
    add support for env variable PBS_RESOURCE_NODES in job prolog
OSC Doc/Install I -- Dec 2001 (P. Wycoff)
    fix to the pbsnodes man page
    Configuration information for Linux on the IA64 architecture
    fix the build process to make it clean out the documentation directories during a "make distclean"
    fix the installation process to keep it from overwriting ${PBS_HOME}/server_name if it already exists
    correct code creating compile time warnings
    allow PBS to compile on Linux systems which do not have the Linux kernel source installed
Maui RM Extension -- Dec 2002 (CRI)
    enable Maui resource manager extensions including QOS, reservations, etc
NCSA Scaling I -- Mar 2001 (G. Arnold)
    increase number of nodes supported by PBS to 512
NCSA No Spool -- Apr 2001 (G. Arnold)
    support $HOME/.pbs_spool for large jobs
NCSA MOM Pin
    pin PBS MOM into memory to keep it from getting swapped
ANL RPP Tuning -- Sep 2000 (J Navarro)
    tuning RPP for large systems
WGR Server Node Allocation -- Jul 2000 (B Webb)
    addresses issue where PBS server incorrectly claims insufficient nodes
WGR MOM Soft Kill -- May 2002 (B Webb)
    processes are killed with SIGTERM followed by SIGKILL
PNNL SSS Patch -- Jun 2002 (Skousen)
    improves server-mom and server-scheduler communication
CRI Job Init Patch -- Jul 2003 (CRI)
    correctly initializes new jobs, eliminating unpredictable behavior and crashes
VPAC Crash Trap -- Jul 2003 (VPAC)
    supports PBSCOREDUMP env variable
CRI Node Init Patch -- Aug 2003 (CRI)
    correctly initializes new nodes, eliminating unpredictable behavior and crashes
SDSC Log Buffer Patch -- Aug 2003 (SDSC)
    addresses log message overruns

Maui Change Log

Maui 3.3

- Fixed configure script. Was putting RMCFG[name] TYPE=PBS@RMNMHOST@.

Maui 3.2.6p21

- Added RMCFG[] ASYNCJOBSTART=TRUE for asynchronous job starts in pbs. (Thanks to Bas van der Vlies and the community)
- Added StartTime and CompletionTime to Gold Charge.
- Fixed backfill issue with SINGLEUSER NODEACCESSPOLICY. (Thanks goes to Roy Dragseth)
- N->{A|C}Res.Swap is overcommitted with N->CRes.Swap instead of N->CRes.Mem. (Thanks goes to Roy Dragseth)
- Fixed a node's configured swap from changing in a maui+pbs setup. (Thanks goes to Gareth Williams of CSIRO)
- Fixed CHECKSUM authentication for maui + slurm. Thanks goes to Eygene Ryabinkin.
- Fixed 64bit issue. Maui assumed ints were always 8 bytes for 64bit systems even though x86_64 ints are still 4 bytes. This led to aliasing of large indexed node properties to smaller indexed properties. Maui now triggers off of sizeof(int). Thanks goes to Alexis Cousein.
- Fixed an optimization issue with x86_64 systems. -O2 was optimizing out parts of the communication strings.

Maui 3.2.6p20

- Fixed a potential security issue when Maui is used with some PBS configurations.
- Fixed a bug pertaining to Maui's resource policy ExtendedViolation time (thanks goes to Nick Sonneveld).
- Fixed a bug with generic consumable floating resources which prevented them from working when a job also requested a 'mem' resource (thanks to Jeffrey Reed for the fix).
- Fixed typos in MPBSI.c that may have caused undesirable side-effects (thanks to Jeffrey Reed for the fix).
- Fixed a bug where group fairshare components were being miscalculated when percent based fairshare is enabled (thanks goes to Steve Traylen for the fix).
- Implemented FSSECONDARYGROUPS to map unix groups to fair share groups. Contributed by Bas van der Vlies (SARA).
- Implemented IGNPBSGROUPLIST to ignore 'qsub -W group_list=<value>'. Contributed by Bas van der Vlies (SARA).
- Note for FSSECONDARYGROUPS and IGNPBSGROUPLIST: {{{ }}}
- Implemented fitting of jobs into available partitions. Contributed by Eygene Ryabinkin, Russian Research Centre "Kurchatov Institute". e.g. with USERCFG[user] PLIST=par1,par2,par3 PDEF=par2, the user's job will first try to fit in par2. If par2 doesn't work, then par1 and par3 will be tried.
- Added DBG(3,fALL) to lone DPrint statements in src/moab/MRes.c because it was flooding the log file for Michael Barnes when working with 32k jobs.
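The partition-fitting and group-list entries above each correspond to a maui.cfg parameter. A sketch of how they might appear together in a configuration file; the `base` index and partition names are illustrative, and only the parameter spellings come from the notes above:

```
# maui.cfg fragment (illustrative values; parameter names from the notes above)
RMCFG[base]     TYPE=PBS ASYNCJOBSTART=TRUE    # 3.2.6p21: asynchronous job starts
IGNPBSGROUPLIST TRUE                           # ignore 'qsub -W group_list=<value>'
USERCFG[user]   PLIST=par1,par2,par3 PDEF=par2 # try par2 first, then par1/par3
```

As described in the notes, with this USERCFG line a job from `user` is first fitted into the default partition par2, falling back to par1 and par3 only if par2 cannot take it.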
Maui 3.2.6p19

- Implemented fixes for a segfault, FSPOLICY loading, and workload traces submitted by Ake Sandgren
- Implemented patch in MJobProcessExtensionString to avoid dropping first token (from Peter Gardfall)
- Added SUSE service startup script (suse.maui.d) to "contrib" directory (thanks goes to Stephen Cary)

Maui 3.2.6p18

- Implemented NODEALLOCMAXPS patch from Bas van der Vlies
- Implemented minor bug patch from Ake Sandgren
- Implemented NODECFG[] MAXPROC[CLASS] patch from Yaroslav Halchenko

Maui 3.2.6p16

- Maui's 'configure' script now compatible with latest TORQUE versions

Maui 3.2.6p15

- Various enhancements and bug fixes

Maui 3.2.6p14

Features

- Corrected logging in earliest start time evaluation handling
- Fixed buffer overflow in mdiag -p
- Fixed bug with floating generic resource tracking
- Added support for NODEMEMOVERCOMMITFACTOR on TORQUE
- Integrated latest DOE S3 communications library
- Cleaned up showconfig output
- Corrected multiple memory errors
- Fixed security initialization
- Fixed feature based reservation checkpoint recovery
- Improved command line arg handling

Maui 3.2.6p11

Accounting Notes

Appendix C. OpenMPI Release Information

The following is reproduced essentially verbatim from files contained within the OpenMPI tarball.

Copyright (c) 2004-2014 Cisco Systems, Inc. All rights reserved.
Copyright (c) 2006 Voltaire, Inc. All rights reserved.
Copyright (c) 2006 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms.
Copyright (c) 2006-2014 Los Alamos National Security, LLC. All rights reserved.
Copyright (c) 2010-2012 IBM Corporation. All rights reserved.
Copyright (c) 2012 Oak Ridge National Labs. All rights reserved.
Copyright (c) 2012 Sandia National Laboratories. All rights reserved.
Copyright (c) 2012 University of Houston. All rights reserved.
Copyright (c) 2013 NVIDIA Corporation. All rights reserved.
Copyright (c) 2013-2014 Intel, Inc. All rights reserved.

1.8.4
-----

- Fix MPI_SIZEOF; now available in mpif.h for modern Fortran compilers

1.8.1
-----

- Fix for critical bug: mpirun removed files (but not directories) from / when run as root. Thanks to Jay Fenlason and Orion Poplawski for bringing the issue to our attention and helping identify the fix.

1.8
---

- Fix some compiler warnings.
- Ensure that ORTE daemons are not bound to a single processor if TaskAffinity is set on by default in Slurm. Thanks to Artem Polyakov for identifying the problem and providing a patch

1.7.5
-----

1.7.4
-----

**********************************************************************
* CRITICAL CHANGE
*
* As of release 1.7.4, OpenMPI's default mapping, ranking, and binding
* settings have changed:
*
* Mapping:
*   if #procs <= 2, default to map-by core
*   if #procs > 2, default to map-by socket
* Ranking:
*   if default mapping is used, then default to rank-by slot
*   if map-by <obj> is given, then default to rank-by <obj>,
*   where <obj> is whatever object we mapped against
* Binding:
*   default to bind-to core
*
* Users can override any of these settings individually using the
* corresponding MCA parameter. Note that multi-threaded applications
* in particular may want to override at least the binding default
* to allow threads to use multiple cores.
**********************************************************************

- Restore version number output in "ompi_info --all".
- Various bug fixes for the mpi_f08 Fortran bindings.
- Fix ROMIO compile error with Lustre 2.4. Thanks to Adam Moody for reporting the issue.
- Various fixes for 32 bit platforms.
- Add ability to selectively disable building the mpi or mpi_f08 module. See the README file for details.
- Fix MX MTL finalization issue.
- Fix ROMIO issue when opening a file with MPI_MODE_EXCL.
- Fix PowerPC and MIPS assembly issues.
- Various fixes to the hcoll and FCA collective offload modules.
- Prevent integer overflow when creating datatypes. Thanks to original patch from Gilles Gouaillardet.
- Port some upstream hwloc fixes to Open MPI's embedded copy for working around buggy NUMA node cpusets and including missing header files. Thanks to Jeff Becker and Paul Hargrove for reporting the issues.
- Fix recursive invocation issues in the MXM MTL.
- Various bug fixes to the new MCA parameter back-end system.
- Have the posix fbtl module link against -laio on NetBSD platforms. Thanks to Paul Hargrove for noticing the issue.
- Various updates and fixes to network filesystem detection to support more operating systems.
- Add gfortran v4.9 "ignore TKR" syntax to the mpi Fortran module.
- Various compiler fixes for several BSD-based platforms. Thanks to Paul Hargrove for reporting the issues.
- Fix when MPI_COMM_SPAWN[_MULTIPLE] is used on oversubscribed systems.
- Change the output from --report-bindings to simply state that a process is not bound, instead of reporting that it is bound to all processors.
- Per MPI-3.0 guidance, remove support for all MPI subroutines with choice buffers from the TKR-based mpi Fortran module. Thanks to Jed Brown for raising the issue.
- Only allow the usnic BTL to build on 64 bit platforms.
- Various bug fixes to SLURM support, to include ensuring proper exiting on abnormal termination.
- Ensure that MPI_COMM_SPAWN[_MULTIPLE] jobs get the same mapping directives that were used with mpirun.
- Fixed the application of TCP_NODELAY.
- Change the TCP BTL to not warn if a non-existent interface is ignored.
- Restored the "--bycore" mpirun option for backwards compatibility.
- Fixed debugger attach functionality. Thanks to Ashley Pittman for reporting the issue and suggesting the fix.
- Fixed faulty MPI_IBCAST when invoked on a communicator with only one process.
- Add new Mellanox device IDs to the openib BTL.
- Progress towards cleaning up various internal memory leaks as reported by Valgrind.
- Fixed some annoying flex-generated warnings that have been there for years. Thanks to Tom Fogal for the initial patch.
- Support user-provided environment variables via the "env" info key to MPI_COMM_SPAWN[_MULTIPLE]. Thanks to Tom Fogal for the feature request.
- Fix uninitialized variable in MPI_DIST_GRAPH_CREATE.
- Fix a variety of memory errors on SPARC platforms. Thanks to Siegmar Gross for reporting and testing all the issues.
- Remove Solaris threads support. When building on Solaris, pthreads will be used.
- Correctly handle the convertor internal stack for persistent receives. Thanks to Guillaume Gouaillardet for identifying the problem.
- Add support for using an external libevent via --with-libevent. See the README for more details.
- Various OMPIO updates and fixes.
- Add support for the MPIEXEC_TIMEOUT environment variable. If set, mpirun will terminate the job after this many seconds.
- Update the internal copy of ROMIO to that which shipped in MPICH 3.0.4.
- Various performance tweaks and improvements in the usnic BTL, including now reporting MPI_T performance variables for each usnic device.
- Fix to not access send datatypes for non-root processes with MPI_ISCATTER[V] and MPI_IGATHER[V]. Thanks to Pierre Jolivet for supplying the initial patch.
- Update VampirTrace to 5.14.4.9.
- Fix ptmalloc2 hook disable when used with ummunotify.
- Change the default connection manager for the openib BTL to be based on UD verbs data exchanges instead of ORTE OOB data exchanges.
- Fix Fortran compile error when compiling with 8-byte INTEGERs and 4-byte ints.
- Fix C++11 issue identified by Jeremiah Willcock.
- Many changes, updates, and bug fixes to the ORTE run-time layer.
- Correctly handle MPI_REDUCE_SCATTER with recvcounts of 0.
- Update man pages for MPI-3, and add some missing man pages for MPI-2.x functions.
- Updated mpi_f08 module in accordance with post-MPI-3.0 errata which basically removed BIND(C) from all interfaces.
- Fixed MPI_IN_PLACE detection for MPI_SCATTER[V] in Fortran routines. Thanks to Charles Gerlach for identifying the issue.
- Added support for routable RoCE to the openib BTL.
- Update embedded hwloc to v1.7.2.
- ErrMgr framework redesigned to better support fault tolerance development activities. See the following RFC for details:
- Added database framework to OPAL and changed all modex operations to flow thru it; also included additional system info in the available data
- Added staged state machine to support sequential work flows
- Added distributed file system support for accessing files across nodes that do not have networked file systems
- Extended filem framework to support scalable pre-positioning of files for use by applications, adding new "raw" component that transmits files across the daemon network
- Native Windows support has been removed. A cygwin package is available from that group for Windows-based use.
- Added new MPI Java bindings. See the Javadocs for more details on the API.
- Wrapper compilers now add rpath support by default to generated executables on systems that support it. This behavior can be disabled via --disable-wrapper-rpath. See note in README about ABI issues when using rpath in MPI applications.
- Added a new parallel I/O component and multiple new frameworks to support parallel I/O operations.
- Add support for Intel Phi SCIF transport.
- For CUDA-aware MPI configured with CUDA 6.0, use new pointer attribute to avoid extra synchronization in stream 0 when using CUDA IPC between GPUs on the same node.
- For CUDA-aware MPI configured with CUDA 6.0, compile in support of GPU Direct RDMA in openib BTL to improve small message latency.
- Updated ROMIO from MPICH v3.0.4.
- MPI-3: Added support for remaining non-blocking collectives.
- MPI-3: Added support for neighborhood collectives.
- MPI-3: Updated C bindings with consistent use of [].
- MPI-3: Added the const keyword to read-only buffers.
- MPI-3: Added support for non-blocking communicator duplication.
- MPI-3: Added support for non-collective communicator creation.

1.7.3
-----

- Make CUDA-aware support dynamically load libcuda.so so CUDA-aware MPI library can run on systems without CUDA software.
- Fix various issues with dynamic processes and intercommunicator operations under Torque. Thanks to Suraj Prabhakaran for reporting the problem.
- Enable support for the Mellanox MXM2 library by default.
- Improve support for Portals 4.
- Various Solaris fixes. Many thanks to Siegmar Gross for his incredible patience in reporting all the issues.
- MPI-2.2: Add reduction support for MPI_C_*COMPLEX and MPI::*COMPLEX.
- Fixed internal accounting when openpty() fails. Thanks to Michal Peclo for reporting the issue and providing a patch.
- Fixed too-large memory consumption in XRC mode of the openib BTL. Thanks to Alexey Ryzhikh for the patch.
- Add bozo check for negative np values to mpirun to prevent a deadlock. Thanks to Upinder Malhi for identifying the issue.
- Fixed MPI_IS_THREAD_MAIN behavior. Thanks to Lisandro Dalcin for pointing out the problem.
- Various rankfile fixes.
- Fix functionality over iWARP devices.
- Various memory and performance optimizations and tweaks.
- Fix MPI_Cancel issue identified by Fujitsu.
- Add missing support for MPI_Get_address in the "use mpi" TKR implementation. Thanks to Hugo Gagnon for identifying the issue.
- MPI-3: Add support for MPI_Count.
- MPI-2.2: Add missing MPI_IN_PLACE support for MPI_ALLTOALL.
- Added new usnic BTL to support the Cisco usNIC device.
- Minor VampirTrace update to 5.14.4.4.
- Removed support for ancient OS X systems (i.e., prior to 10.5).
- Fixed obscure packing/unpacking datatype bug. Thanks to Takahiro Kawashima for identifying the issue.
- Add run-time support for PMI2 environments.
- Update openib BTL default parameters to include support for Mellanox ConnectX3-Pro devices.
- Update libevent to v2.0.21.
- "ompi_info --param TYPE PLUGIN" now only shows a small number of MCA parameters by default. Add "--level 9" or "--all" to see *all* MCA parameters. See README for more details.
- Add support for asynchronous CUDA-aware copies.
- Add support for Mellanox MPI collective operation offload via the "hcoll" library.
- MPI-3: Add support for the MPI_T interface. Open MPI's MCA parameters are now accessible via the MPI_T control variable interface. Support has been added for a small number of MPI_T performance variables.
- Add Gentoo memory hooks override. Thanks to Justin Bronder for the patch.
- Added new "mindist" process mapper, allowing placement of processes via PCI locality information reported by the BIOS.
- MPI-2.2: Add support for MPI_Dist_graph functionality.
- Enable generic, client-side support for PMI2 implementations. Can be leveraged by any resource manager that implements PMI2; e.g. SLURM, versions 2.6 and higher.

1.7.2
-----

- Add a distance-based mapping component to find the socket "closest" to the PCI bus.

1.7.1
-----

- Fixed compile error when --without-memory-manager was specified on Linux
- Fixed XRC compile issue in Open Fabrics support.

1.7
---

1.6.5
-----

1.6.4
-----

- Restore "use mpi" ABI compatibility with the rest of the 1.5/1.6 series (except for v1.6.3, where it was accidentally broken).
- Fix a very old error in opal_path_access(). Thanks to Marco Atzeri for chasing it down.

1.6.3
-----

1.6.2
-----

1.6.1
-----

- A bunch of changes to eliminate hangs on OpenFabrics-based networks.

1.6
---

- Fix some process affinity issues. When binding a process, Open MPI will now bind to all available hyperthreads in a core (or socket, depending on the binding options specified).
  --> Note that "mpirun --bind-to-socket ..." does not work on POWER6

1.5.5
-----

1.5.4
-----

1.5.3
-----

- Add missing "affinity" MPI extension (i.e., the OMPI_Affinity_str() API) that was accidentally left out of the 1.5.2 release.

1.5.2
-----

1.5
---

- Added "knem" support: direct process-to-process copying for shared memory message passing.

1.4.5
-----

1.4.4
-----

- Increased rdmacm address resolution timeout from 1s to 30s & updated.

1.4.3
-----

- Fixed various MPI_THREAD_MULTIPLE race conditions.
- Fixed an issue with an undeclared variable from ptmalloc2 munmap on BSD systems.
- Fixes for BSD interface detection.
- Various other BSD fixes. Thanks to Kevin Buckley for helping to track all of this down.
- Fixed issues with the use of the -nper* mpirun command line arguments.
- Fixed an issue with coll tuned dynamic rules.
- Fixed an issue with the use of OPAL_DESTDIR being applied too aggressively.
- Fixed an issue with one-sided xfers when the displacement exceeds 2GBytes.
- Change to ensure TotalView works properly on Darwin.
- Added support for Visual Studio 2010.
- Fix to ensure proper placement of VampirTrace header files. - Needed to add volatile keyword to a varialbe used in debugging (MPIR_being_debugged). - Fixed a bug in inter-allgather. - Fixed malloc(0) warnings. - Corrected a typo the MPI_Comm_size man page (intra -> inter). Thanks to Simon number.cruncher for pointing this out. - Fixed a SegV in orted when given more than 127 app_contexts. - Removed xgrid source code from the 1.4 branch since it is no longer supported in the 1.4 series. - Removed the --enable-opal-progress-threads config option since opal progress thread support does not work in 1.4.x. - Fixed a defect in VampirTrace’s vtfilter. - Fixed wrong Windows path in hnp_contact. - Removed the requirement for a paffinity component. - Removed a hardcoded limit of 64 interconnected jobs. - Fix to allow singletons to use ompi-server for rendezvous. - Fixed bug in output-filename option. - Fix to correctly handle failures in mx_init(). - Fixed a potential Fortran memory leak. - Fixed an incorrect branch in some ppc32 assembly code. Thanks to Matthew Clark for this fix. - Remove use of undocumented AS_VAR_GET macro during configuration. - Fixed an issue with VampirTrace’s wrapper for MPI_init_thread. - Updated mca-btl-openib-device-params.ini file with various new vendor id’s. - Configuration fixes to ensure CPPFLAGS in handled properly if a non-standard valgrind location was specified. - Various man page updates 123 Appendix C. OpenMPI Release Information 1.4. 124 Appendix C. OpenMPI Release Information -. 1.4.1 -----). 1.4 -. 1.3.4 ----- Fix some issues in OMPI’s SRPM with regard to shell_scripts_basename and its use with mpi-selector. Thanks to Bill Johnstone for pointing out the problem. - Added many new MPI job process affinity options to mpirun. See the 125 Appendix C. OpenMPI Release Information. 1.3.3 ----- 126 Appendix C. 
- ... shared memory progression rules and by enabling the "sync" collective to barrier every 1,000th collective.
- Various fixes for the IBM XL C/C++ v10.1 compiler.
- Allow explicit disabling of ptmalloc2 hooks at runtime.
- Added Microsoft Windows support. See README.WINDOWS file for details.

1.3.1
-----

1.3
---
- ... for reporting the issue. See ticket #1603.
- ... MCA parameters for including/excluding comma-delimited lists of HCAs and ports.
- Added RDMA CM support, including "btl_openib_cpc_[in|ex]clude".
- ... libc-provided allocator instead of Open MPI's ptmalloc2. This change may be overridden with the configure option enable-ptmalloc2-internal.

1.2.9 (unreleased)
------------------

- Fix a regression introduced in 1.2.6 for the IBM eHCA. See ticket #1526.

1.2.7
-----
- Add some Sun HCA vendor IDs. See ticket #1461.
- Fixed a memory leak in MPI_Alltoallw when called from Fortran. Thanks to Dave Grote for the bug report. See ticket #1457.
- Only link in libutil when it is needed/desired. Thanks to Brian Barrett for diagnosing and fixing the problem. See ticket #1455.
- Update some QLogic HCA vendor IDs. See ticket #1453.
- Fix F90 binding for MPI_CART_GET. Thanks to Scott Beardsley for bringing it to our attention. See ticket #1429.
- Remove a spurious warning message generated in/by ROMIO. See ticket #1421.
- Fix a bug where command-line MCA parameters were not overriding MCA parameters set from environment variables. See ticket #1380.
- Fix a bug in the AMD64 atomics assembly. Thanks to Gabriele Fatigati for the bug report and bugfix. See ticket #1351.
- Fix a gather and scatter bug on intercommunicators when the datatype being moved is 0 bytes.
See ticket #1331.
- Some more man page fixes from the Debian maintainers. See tickets #1324 and #1329.
- Have openib BTL (OpenFabrics support) check for the presence of /sys/class/infiniband before allowing itself to be used. This check prevents spurious "OMPI did not find RDMA hardware!" notices on systems that have the software drivers installed, but no corresponding hardware. See tickets #1321 and #1305.
- Added vendor IDs for some ConnectX openib HCAs. See ticket #1311.
- Fix some RPM specfile inconsistencies. See ticket #1308. Thanks to Jim Kusznir for noticing the problem.
- Removed an unused function prototype that caused warnings on some systems (e.g., OS X). See ticket #1274.
- Fix a deadlock in inter-communicator scatter/gather operations. Thanks to Martin Audet for the bug report. See ticket #1268.

1.2.6
-----
- Fix a bug in the inter-allgather for asymmetric inter-communicators. Thanks to Martin Audet.

1.2.5
-----
- Fixed compile issue with open() on Fedora 8 (and newer) platforms. Thanks to Sebastian Schmitzdorff for noticing the problem.
- Added run-time warnings during MPI_INIT when MPI_THREAD_MULTIPLE and/or progression threads are used (the OMPI v1.2 series does not support these well at all).
- Better handling of ECONNABORTED from connect on Linux. Thanks to Bob Soliday for noticing the problem; thanks to Brian Barrett for submitting a patch.
- Reduce extraneous output from OOB when TCP connections must be retried. Thanks to Brian Barrett for submitting a patch.
- Fix for ConnectX devices and OFED 1.3. See ticket #1190.
- Fixed a configure problem for Fortran 90 on Cray systems. See ticket #1189.
- Fix an uninitialized variable in the error case in opal_init.c. Thanks to Ake Sandgren for pointing out the mistake.
- Fixed a hang in configure if $USER was not defined. Thanks to Darrell Kresge for noticing the problem. See ticket #900.
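The $USER configure hang above is an instance of a general unset-variable pitfall in shell scripts. A minimal defensive sketch (illustrative only; this is not Open MPI's actual configure code):

```shell
# Illustrative only: scripts that consume $USER can fall back to `id -un`,
# which asks the system for the current user name instead of trusting the
# environment. This avoids the class of bug fixed in ticket #900.
user="${USER:-$(id -un)}"
echo "current user: $user"
```

The `${VAR:-default}` expansion substitutes the fallback whenever the variable is unset or empty, so the script never blocks or misbehaves on a bare environment.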
- Added support for parallel debuggers even when we have an optimized build. See ticket #1178.
- Worked around a bus error in the Mac OS X 10.5.X (Leopard) linker when compiling Open MPI with -g. See ticket #1179.
- Removed some warnings about 'rm' from Mac OS X 10.5 (Leopard) builds.
- Fix the handling of mx_finalize(). See ticket #1177. Thanks to Ake Sandgren for bringing this issue to our attention.
- Fixed minor file descriptor leak in the Altix timer code. Thanks to Paul Hargrove for noticing the problem and supplying the fix.
- Fix a problem when using a different compiler for C and Objective C. See ticket #1153.
- Fix segfault in MPI_COMM_SPAWN when the user specified a working directory. Thanks to Murat Knecht for reporting this and suggesting a fix.
- A few manpage fixes from the Debian Open MPI maintainers. Thanks to Tilman Koschnick, Sylvestre Ledru, and Dirk Eddelbuettel.
- Fixed issue with pthread detection when compilers are not all from the same vendor. Thanks to Ake Sandgren for the bug report. See ticket #1150.
- Fixed vector collectives in the self module. See ticket #1166.
- Fixed some data-type engine bugs: an indexing bug, and an alignment bug. See ticket #1165.
- Only set the MPI_APPNUM attribute if it is defined. See ticket #1164.

1.2
---
- ... out-of-band messaging system.
- Fixed a memory leak when using mpi_comm calls. Thanks to Bas van der Vlies for reporting the problem.
- Fixed various memory leaks in OPAL and ORTE.

1.1.5
-----
- Implement MPI_TYPE_CREATE_DARRAY function.
- Fix race condition in shared memory BTL startup that could cause MPI applications to hang in MPI_INIT.
- Fix syntax error in a corner case of the event library.

1.1.3
-----
- Add some missing Fortran MPI-2 IO constants.

1.1.1
-----

1.1
---
- ... messages.

1.0.3 (unreleased; all fixes included in 1.1)
---------------------------------------------

1.0.2
-----

===============================================================================

Appendix D. MPICH2 Release Information

The following is reproduced essentially verbatim from files contained within the MPICH2 tarball.

NOTE: MPICH-2 has been effectively deprecated by the Open Source Community in favor of MPICH-3, which Scyld ClusterWare distributes as a set of mpich-scyld RPMs. Scyld ClusterWare continues to distribute mpich2-scyld, although we encourage users to migrate to MPICH-3, which enjoys active support by the Community.

===============================================================================

mpich2-1.5

# PM/PMI: Initial support for the PBS launcher.
# Several other minor bug fixes, memory leak fixes, and code cleanup. A full list of changes is available using:
    svn log -r8675:HEAD \
or at the following link:
    mpich2-1.4.1?action=follow_copy&rev=HEAD&stop_rev=8675&mode=follow_copy

===============================================================================

mpich2-1.4?action=follow_copy&rev=HEAD&stop_rev=7838&mode=follow_copy

---------------------------------------------------------------------
KNOWN ISSUES
---------------------------------------------------------------------

### Known runtime failures

* MPI_Alltoall might fail in some cases because of the newly added fault-tolerance features.
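A sketch of the documented mitigation for such collective failures: disabling the fault-tolerance return codes through the environment before launching the job. The mpiexec line and program name below are placeholders, not part of the original notes:

```shell
# Workaround from the MPICH2 known issues: turn off the collective
# fault-tolerance return codes that can cause MPI_Alltoall to fail.
export MPICH_ENABLE_COLL_FT_RET=0

# The job is then launched normally, e.g. (illustrative command):
#   mpiexec -n 64 ./my_mpi_app
echo "MPICH_ENABLE_COLL_FT_RET=$MPICH_ENABLE_COLL_FT_RET"
```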
If you are seeing this error, try setting the environment variable MPICH_ENABLE_COLL_FT_RET=0.

### Threads

* ch3:sock does not (and will not) support fine-grained threading.
* ... affects the MPI_Pack_external and MPI_Unpack_external routines, as well as the external data representation capabilities of ROMIO.
* ch3 has known problems in some cases when threading and dynamic processes are used together on communicators of size greater than one.

### Build Platforms

* Builds using the native "make" program on OpenSolaris fail for unknown reasons. A workaround is to use GNU Make instead. See the following ticket for more information.
* Build fails with the Intel compiler suite 13.0 because of weak-symbol issues in the compiler. A workaround is to disable weak symbol support by passing --disable-weak-symbols to configure. See the following ticket for more information.
* The sctp channel is fully supported for FreeBSD and Mac OS X. As of the time of this release, bugs existed in the Linux kernel's SCTP stack that will hopefully soon be resolved. It is known to not work under Solaris and Windows. For Solaris, the SCTP API available in the kernel of standard Solaris 10 is a subset of the standard API used by the sctp channel. Cooperation with the Sun SCTP developers to support ch3:sctp under Solaris for future releases is ongoing. For Windows, no known kernel-based SCTP stack currently exists.

### Process Managers

* The MPD process manager can only handle relatively small amounts of data on stdin and may also have problems if there is data on stdin that is not consumed by the program.
* The SMPD process manager does not work reliably with threaded MPI processes. MPI_Comm_spawn() does not currently work for >= 256 arguments with smpd.

### C++ Binding

* The MPI datatypes corresponding to Fortran datatypes are not available (e.g., no MPI::DOUBLE_PRECISION).
* The C++ binding does not implement a separate profiling interface, as allowed by the MPI-2 Standard (Section 10.1.10 Profiling).
* MPI::ERRORS_RETURN may still throw exceptions in the event of an error rather than silently returning.

Appendix E. MVAPICH2 Release Information

The following is reproduced essentially verbatim from files contained within the MVAPICH2 tarball. The MVAPICH2 2.0 User Guide is also available.

MVAPICH2 Changelog
------------------
This file briefly describes the changes to the MVAPICH2 software package. The logs are arranged in the "most recent first" order.

MVAPICH2-2.0b (11/08/2013)

* Features and Enhancements (since 2.0a):
- Based on MPICH-3.1b1
- Multi-rail support for GPU communication
- Non-blocking streams in asynchronous CUDA transfers for better overlap
- Initialize GPU resources only when used by MPI transfer
- Extended support for MPI-3 RMA in OFA-IB-CH3, OFA-IWARP-CH3, and OFA-RoCE-CH3
- Additional MPIT counters and performance variables
- Updated compiler wrappers to remove application dependency on network and other extra libraries
- Thanks to Adam Moody from LLNL for the suggestion
- Capability to checkpoint CH3 channel using the Hydra process manager
- Optimized support for broadcast, reduce and other collectives
- Tuning for IvyBridge architecture
- Improved launch time for large-scale mpirun_rsh jobs
- Introduced retry mechanism in mpirun_rsh for socket binding
- Updated hwloc to version 1.7.2

* Bug-Fixes (since 2.0a):
- Consider list provided by MV2_IBA_HCA when scanning device list
- Fix issues in Nemesis interface with --with-ch3-rank-bits=32
- Better cleanup of XRC files in corner cases
- Initialize using better defaults for ibv_modify_qp (initial ring)
- Add unconditional check and addition of pthread library
- MPI_Get_library_version updated with proper MVAPICH2 branding
- Thanks to Jerome Vienne from the TACC for the report

MVAPICH2-2.0a (08/24/2013)

* Features and Enhancements (since 1.9):
- Based on MPICH-3.0.4
- Dynamic CUDA initialization. Support GPU device selection after MPI_Init
- Support for running on heterogeneous clusters with GPU and non-GPU nodes
- Supporting MPI-3 RMA atomic operations and flush operations with CH3-Gen2 interface
- Exposing internal performance variables to MPI-3 Tools information interface (MPIT)
- Enhanced MPI_Bcast performance
- Enhanced performance for large message MPI_Scatter and MPI_Gather
- Enhanced intra-node SMP performance
- Tuned SMP eager threshold parameters
- Reduced memory footprint
- Improved job-startup performance
- Warn and continue when ptmalloc fails to initialize
- Enable hierarchical SSH-based startup with Checkpoint-Restart
- Enable the use of Hydra launcher with Checkpoint-Restart

* Bug-Fixes (since 1.9):
- Fix data validation issue with MPI_Bcast
- Thanks to Claudio J. Margulis from University of Iowa for the report
- Fix buffer alignment for large message shared memory transfers
- Fix a bug in One-Sided shared memory backed windows
- Fix a flow-control bug in UD transport
- Thanks to Benjamin M. Auer from NASA for the report
- Fix bugs with MPI-3 RMA in Nemesis IB interface
- Fix issue with very large message (>2GB) MPI_Bcast
- Thanks to Lu Qiyue for the report
- Handle case where $HOME is not set during search for MV2 user config file
- Thanks to Adam Moody from LLNL for the patch
- Fix a hang in connection setup with RDMA-CM

MVAPICH2-1.9

* Features and Enhancements (since 1.9rc1):
- Updated to hwloc v1.7
- Tuned Reduce, AllReduce, Scatter, Reduce-Scatter and Allgatherv collectives

* Bug-Fixes (since 1.9rc1):
- Fix cuda context issue with async progress thread
- Thanks to Osuna Escamilla Carlos from env.ethz.ch for the report
- Overwrite pre-existing PSM environment variables
- Thanks to Adam Moody from LLNL for the patch
- Fix several warnings
- Thanks to Adam Moody from LLNL for some of the patches

MVAPICH2-1.9rc1 (04/16/2013)

* Features and Enhancements (since 1.9b):
- Based on MPICH-3.0.3
- Updated SCR to version 1.1.8
- Install utility scripts included with SCR
- Support for automatic detection of path to utilities used by mpirun_rsh during configuration
- Utilities supported: rsh, ssh, xterm, totalview
- Support for launching jobs on heterogeneous networks with mpirun_rsh
- Tuned Bcast, Reduce, Scatter collectives
- Tuned MPI performance on Kepler GPUs
- Introduced MV2_RDMA_CM_CONF_FILE_PATH parameter which specifies path to mv2.conf

* Bug-Fixes (since 1.9b):
- Fix autoconf issue with LiMIC2 source-code
- Thanks to Doug Johnson from OH-TECH for the report
- Fix build errors with --enable-thread-cs=per-object and --enable-refcount=lock-free
- Thanks to Marcin Zalewski from Indiana University for the report
- Fix MPI_Scatter failure with MPI_IN_PLACE
- Thanks to Mellanox for the report
- Fix MPI_Scatter failure with cyclic host files
- Fix deadlocks in PSM interface for multi-threaded jobs
- Thanks to Marcin Zalewski from Indiana University for the report
- Fix MPI_Bcast failures in SCALAPACK
- Thanks to Jerome Vienne from TACC for the report
- Fix build
errors with newer Ekopath compiler
- Fix a bug with shmem collectives in PSM interface
- Fix memory corruption when more entries specified in mv2.conf than the requested number of rails
- Thanks to Akihiro Nomura from Tokyo Institute of Technology for the report
- Fix memory corruption with CR configuration in Nemesis interface

MVAPICH2-1.9b (02/28/2013)

* Features and Enhancements (since 1.9a2):
- Based on MPICH-3.0.2
- Support for all MPI-3 features
- Support for single copy intra-node communication using Linux supported CMA (Cross Memory Attach)
- Provides flexibility for intra-node communication: shared memory, LiMIC2, and CMA
- Checkpoint/Restart using LLNL's Scalable Checkpoint/Restart Library (SCR)
- Support for application-level checkpointing
- Support for hierarchical system-level checkpointing
- Improved job startup time
- Provided a new runtime variable MV2_HOMOGENEOUS_CLUSTER for optimized startup on homogeneous clusters
- New version of LiMIC2 (v0.5.6)
- Provides support for unlocked ioctl calls
- Tuned Reduce, Allgather, Reduce_Scatter, Allgatherv collectives
- Introduced option to export environment variables automatically with mpirun_rsh
- Updated to HWLOC v1.6.1
- Provided option to use CUDA library call instead of CUDA driver to check buffer pointer type
- Thanks to Christian Robert from Sandia for the suggestion
- Improved debug messages and error reporting

* Bug-Fixes (since 1.9a2):
- Fix page fault with memory access violation with LiMIC2 exposed by newer Linux kernels
- Thanks to Karl Schulz from TACC for the report
- Fix a failure when lazy memory registration is disabled and CUDA is enabled
- Thanks to Jens Glaser from University of Minnesota for the report
- Fix an issue with variable initialization related to DPM support
- Rename a few internal variables to avoid name conflicts with external applications
- Thanks to Adam Moody from LLNL for the report
- Check for libattr during configuration when
Checkpoint/Restart and Process Migration are requested
- Thanks to John Gilmore from Vastech for the report
- Fix build issue with --disable-cxx
- Set intra-node eager threshold correctly when configured with LiMIC2
- Fix an issue with MV2_DEFAULT_PKEY in partitioned InfiniBand network
- Thanks to Jesper Larsen from FCOO for the report
- Improve makefile rules to use automake macros
- Thanks to Carmelo Ponti from CSCS for the report
- Fix configure error with automake conditionals
- Thanks to Evren Yurtesen from Abo Akademi for the report
- Fix a few memory leaks and warnings
- Properly cleanup shared memory files (used by XRC) when applications fail

MVAPICH2-1.9a2 (11/08/2012)

* Features and Enhancements (since 1.9a):
- Based on MPICH2-1.5
- Initial support for MPI-3 (available for all interfaces: OFA-IB-CH3, OFA-IWARP-CH3, OFA-RoCE-CH3, uDAPL-CH3, OFA-IB-Nemesis, PSM-CH3):
- Nonblocking collective functions available as "MPIX_" functions (e.g., "MPIX_Ibcast")
- Neighborhood collective routines available as "MPIX_" functions (e.g., "MPIX_Neighbor_allgather")
- MPI_Comm_split_type function available as an "MPIX_" function
- Support for MPIX_Type_create_hindexed_block
- Nonblocking communicator duplication routine MPIX_Comm_idup (will only work for single-threaded programs)
- MPIX_Comm_create_group support
- Support for matched probe functionality (e.g., MPIX_Mprobe, MPIX_Improbe, MPIX_Mrecv, and MPIX_Imrecv) (not available for PSM)
- Support for "Const" (disabled by default)
- Efficient vector, hindexed datatype processing on GPU buffers
- Tuned alltoall, Scatter and Allreduce collectives
- Support for Mellanox Connect-IB HCA
- Adaptive number of registration cache entries based on job size
- Revamped build system:
- Uses automake instead of simplemake
- Allows for parallel builds ("make -j8" and similar)

* Bug-Fixes (since 1.9a):
- CPU frequency mismatch warning shown under debug
- Fix issue with MPI_IN_PLACE buffers
with CUDA
- Fix ptmalloc initialization issue due to compiler optimization
- Thanks to Kyle Sheumaker from ACT for the report
- Adjustable MAX_NUM_PORTS at build time to support more than two ports
- Fix issue with MPI_Allreduce with MPI_IN_PLACE send buffer
- Fix memleak in MPI_Cancel with PSM interface
- Thanks to Andrew Friedley from LLNL for the report

MVAPICH2-1.9a (09/07/2012)

* Features and Enhancements (since 1.8):
- Support for InfiniBand hardware UD-multicast
- UD-multicast-based designs for collectives (Bcast, Allreduce and Scatter)
- Enhanced Bcast and Reduce collectives with pt-to-pt communication
- LiMIC-based design for Gather collective
- Improved performance for shared-memory-aware collectives
- Improved intra-node communication performance with GPU buffers using pipelined design
- Improved inter-node communication performance with GPU buffers with non-blocking CUDA copies
- Improved small message communication performance with GPU buffers using CUDA IPC design
- Improved automatic GPU device selection and CUDA context management
- Optimal communication channel selection for different GPU communication modes (DD, DH and HD) in different configurations (intra-IOH and inter-IOH)
- Removed libibumad dependency for building the library
- Option for selecting non-default gid-index in a loss-less fabric setup in RoCE mode
- Option to disable signal handler setup
- Tuned thresholds for various architectures
- Set DAPL-2.0 as the default version for the uDAPL interface
- Updated to hwloc v1.5
- Option to use IP address as a fallback if hostname cannot be resolved
- Improved error reporting

* Bug-Fixes (since 1.8):
- Fix issue in intra-node knomial bcast
- Handle gethostbyname return values gracefully
- Fix corner case issue in two-level gather code path
- Fix bug in CUDA events/streams pool management
- Fix ptmalloc initialization issue when MALLOC_CHECK_ is defined in the environment
- Thanks to Mehmet Belgin from Georgia Institute of Technology for the report
- Fix memory corruption and handle heterogeneous architectures in gather collective
- Fix issue in detecting the correct HCA type
- Fix issue in ring start-up to select correct HCA when MV2_IBA_HCA is specified
- Fix SEGFAULT in MPI_Finalize when IB loop-back is used
- Fix memory corruption on nodes with 64-cores
- Thanks to M Xie for the report
- Fix hang in MPI_Finalize with Nemesis interface when ptmalloc initialization fails
- Thanks to Carson Holt from OICR for the report
- Fix memory corruption in shared memory communication
- Thanks to Craig Tierney from NOAA for the report and testing the patch
- Fix issue in IB ring start-up selection with mpiexec.hydra
- Fix issue in selecting CUDA run-time variables when running on single node in SMP only mode
- Fix few memory leaks and warnings

MVAPICH2-1.8 (04/30/2012)

* Features and Enhancements (since 1.8rc1):
- Introduced a unified run time parameter MV2_USE_ONLY_UD to enable UD only mode
- Enhanced designs for Alltoall and Allgather collective communication from GPU device buffers
- Tuned collective communication from GPU device buffers
- Tuned Gather collective
- Introduced a run time parameter MV2_SHOW_CPU_BINDING to show current CPU bindings
- Updated to hwloc
v1.4.1
- Remove dependency on LEX and YACC

* Bug-Fixes (since 1.8rc1):
- Fix hang with multiple GPU configuration
- Thanks to Jens Glaser from University of Minnesota for the report
- Fix buffer alignment issues to improve intra-node performance
- Fix a DPM multispawn behavior
- Enhanced error reporting in DPM functionality
- Quote environment variables in job startup to protect from shell
- Fix hang when LIMIC is enabled
- Fix hang in environments with heterogeneous HCAs
- Fix issue when using multiple HCA ports in RDMA_CM mode
- Thanks to Steve Wise from Open Grid Computing for the report
- Fix hang during MPI_Finalize in Nemesis IB netmod
- Fix for a start-up issue in Nemesis with heterogeneous architectures
- Fix few memory leaks and warnings

MVAPICH2-1.8rc1 (03/22/2012)

* Features and Enhancements (since 1.8a2):
- New design for intra-node communication from GPU Device buffers using CUDA IPC for better performance and correctness
- Thanks to Joel Scherpelz from NVIDIA for his suggestions
- Enabled shared memory communication for host transfers when CUDA is enabled
- Optimized and tuned collectives for GPU device buffers
- Enhanced pipelined inter-node device transfers
- Enhanced shared memory design for GPU device transfers for large messages
- Enhanced support for CPU binding with socket and numanode level granularity
- Support suspend/resume functionality with mpirun_rsh
- Exporting local rank, local size, global rank and global size through environment variables (both mpirun_rsh and hydra)
- Update to hwloc v1.4
- Checkpoint-Restart support in OFA-IB-Nemesis interface
- Enabling run-through stabilization support to handle process failures in OFA-IB-Nemesis interface
- Enhancing OFA-IB-Nemesis interface to handle IB errors gracefully
- Performance tuning on various architecture clusters
- Support for Mellanox IB FDR adapter

* Bug-Fixes (since 1.8a2):
- Fix a hang issue on InfiniHost SDR/DDR cards
- Thanks to Nirmal Seenu from
Fermilab for the report
- Fix an issue with runtime parameter MV2_USE_COALESCE usage
- Fix an issue with LiMIC2 when CUDA is enabled
- Fix an issue with intra-node communication using datatypes and GPU device buffers
- Fix an issue with Dynamic Process Management when launching processes on multiple nodes
- Thanks to Rutger Hofman from VU Amsterdam for the report
- Fix build issue in hwloc source with mcmodel=medium flags
- Thanks to Nirmal Seenu from Fermilab for the report
- Fix a build issue in hwloc with --disable-shared or --disabled-static options
- Use portable stdout and stderr redirection
- Thanks to Dr. Axel Philipp from *MTU* Aero Engines for the patch
- Fix a build issue with PGI 12.2
- Thanks to Thomas Rothrock from U.S. Army SMDC for the patch
- Fix an issue with send message queue in OFA-IB-Nemesis interface
- Fix a process cleanup issue in Hydra when MPI_ABORT is called (upstream MPICH2 patch)
- Fix an issue with non-contiguous datatypes in MPI_Gather
- Fix a few memory leaks and warnings

MVAPICH2-1.8a2 (02/02/2012)

* Features and Enhancements (since 1.8a1p1):
- Support for collective communication from GPU buffers
- Non-contiguous datatype support in point-to-point and collective communication from GPU buffers
- Efficient GPU-GPU transfers within a node using CUDA IPC (for CUDA 4.1)
- Alternate synchronization mechanism using CUDA Events for pipelined device data transfers
- Exporting processes local rank in a node through environment variable
- Adjust shared-memory communication block size at runtime
- Enable XRC by default at configure time
- New shared memory design for enhanced intra-node small message performance
- Tuned inter-node and intra-node performance on different cluster architectures
- Update to hwloc v1.3.1
- Support for fallback to R3 rendezvous protocol if RGET fails
- SLURM integration with mpiexec.mpirun_rsh to use SLURM allocated hosts without specifying a hostfile
- Support added to
automatically use PBS_NODEFILE in Torque and PBS environments
- Enable signal-triggered (SIGUSR2) migration

* Bug Fixes (since 1.8a1p1):
- Set process affinity independently of SMP enable/disable to control the affinity in loopback mode
- Report error and exit if user requests MV2_USE_CUDA=1 in non-CUDA configuration
- Fix for data validation error with GPU buffers
- Updated WRAPPER_CPPFLAGS when using --with-cuda. Users should not have to explicitly specify CPPFLAGS or LDFLAGS to build applications
- Fix for several compilation warnings
- Report an error message if user requests MV2_USE_XRC=1 in non-XRC configuration
- Remove debug prints in regular code path with MV2_USE_BLOCKING=1
- Thanks to Vaibhav Dutt for the report
- Handling shared memory collective buffers in a dynamic manner to eliminate static setting of maximum CPU core count
- Fix for validation issue in MPICH2 strided_get_indexed.c
- Fix a bug in packetized transfers on heterogeneous clusters
- Fix for deadlock between psm_ep_connect and PMGR_COLLECTIVE calls on QLogic systems
- Thanks to Adam T. Moody for the patch
- Fix a bug in MPI_Allocate_mem when it is called with size 0
- Thanks to Michele De Stefano for reporting this issue
- Create vendor for Open64 compilers and add rpath for unknown compilers
- Thanks to Martin Hilgemen from Dell Inc. for the initial patch
- Fix issue due to overlapping buffers with sprintf
- Thanks to Mark Debbage from QLogic for reporting this issue
- Fallback to using GNU options for unknown f90 compilers
- Fix hang in PMI_Barrier due to incorrect handling of the socket return values in mpirun_rsh
- Unify the redundant FTB events used to initiate a migration
- Fix memory leaks when mpirun_rsh reads hostfiles
- Fix a bug where library attempts to use in-active rail in multi-rail scenario

MVAPICH2-1.8a1p1 (11/14/2011)

* Bug Fixes (since 1.8a1):
- Fix for a data validation issue in GPU transfers
- Thanks to Massimiliano Fatica, NVIDIA, for reporting this issue
- Tuned CUDA block size to 256K for better performance
- Enhanced error checking for CUDA library calls
- Fix for mpirun_rsh issue while launching applications on Linux Kernels (3.x)

MVAPICH2-1.8a1 (11/09/2011)

* Features and Enhancements (since 1.7):
- Support for MPI communication from NVIDIA GPU device memory
- High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
- High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
- Communication with contiguous datatype
- Reduced memory footprint of the library
- Enhanced one-sided communication design with reduced memory requirement
- Enhancements and tuned collectives (Bcast and Alltoallv)
- Update to hwloc v1.3.0
- Flexible HCA selection with Nemesis interface
- Thanks to Grigori Inozemtsev, Queens University
- Support iWARP interoperability between Intel NE020 and Chelsio T4 Adapters
- RoCE enable environment variable name is changed from MV2_USE_RDMAOE to MV2_USE_RoCE

* Bug Fixes (since 1.7):
- Fix for a bug in mpirun_rsh while doing process clean-up in abort and other error scenarios
- Fixes for code compilation warnings
- Fix for memory leaks in RDMA CM code path

MVAPICH2-1.7 (10/14/2011)

* Features and Enhancements (since 1.7rc2):
- Support SHMEM collectives up to 64 cores/node
- Update to hwloc v1.2.2
- Enhancement and tuned collective (GatherV)

* Bug Fixes:
- Fixes for
code compilation warnings
- Fix job clean-up issues with mpirun_rsh
- Fix a hang with RDMA CM

MVAPICH2-1.7rc2 (09/19/2011)

* Features and Enhancements (since 1.7rc1):
- Based on MPICH2-1.4.1p1
- Integrated Hybrid (UD-RC/XRC) design to get best performance on large-scale systems with reduced/constant memory footprint
- Shared memory backed Windows for One-Sided Communication
- Support for truly passive locking for intra-node RMA in shared memory and LIMIC based windows
- Integrated with Portable Hardware Locality (hwloc v1.2.1)
- Integrated with latest OSU Micro-Benchmarks (3.4)
- Enhancements and tuned collectives (Allreduce and Allgatherv)
- MPI_THREAD_SINGLE provided by default and MPI_THREAD_MULTIPLE as an option
- Enabling Checkpoint/Restart support in pure SMP mode
- Optimization for QDR cards
- On-demand connection management support with IB CM (RoCE interface)
- Optimization to limit number of RDMA Fast Path connections for very large clusters (Nemesis interface)
- Multi-core-aware collective support (QLogic PSM interface)

* Bug Fixes:
- Fixes for code compilation warnings
- Compiler preference lists reordered to avoid mixing GCC and Intel compilers if both are found by configure
- Fix a bug in transferring very large messages (>2GB)
- Thanks to Tibor Pausz from Univ.
of Frankfurt for reporting it
- Fix a hang with One-Sided Put operation
- Fix a bug in ptmalloc integration
- Avoid double-free crash with mpispawn
- Avoid crash and print an error message in mpirun_rsh when the hostfile is empty
- Checking for error codes in PMI design
- Verify programs can link with LiMIC2 at runtime
- Fix for compilation issue when BLCR or FTB installed in non-system paths
- Fix an issue with RDMA-Migration
- Fix for memory leaks
- Fix an issue in supporting RoCE with second port available on HCA
- Thanks to Jeffrey Konz from HP for reporting it
- Fix for a hang with passive RMA tests (QLogic PSM interface)

MVAPICH2-1.7rc1 (07/20/2011)

* Features and Enhancements (since 1.7a2):
- Based on MPICH2-1.4
- CH3 shared memory channel for standalone hosts (including laptops) without any InfiniBand adapters
- HugePage support
- Improved on-demand InfiniBand connection setup
- Optimized Fence synchronization (with and without LIMIC2 support)
- Enhanced mpirun_rsh design to avoid race conditions and support for improved debug messages
- Optimized design for collectives (Bcast and Reduce)
- Improved performance for medium size messages for QLogic PSM
- Support for Ekopath Compiler
MVAPICH2 Release Information * Bug - Fixes Fixes in Dynamic Process Management (DPM) support Fixes in Checkpoint/Restart and Migration support Fix Restart when using automatic checkpoint - Thanks to Alexandr for reporting this Compilation warnings fixes Handling very large one-sided transfers using RDMA Fixes for memory leaks Graceful handling of unknown HCAs Better handling of shmem file creation errors Fix for a hang in intra-node transfer Fix for a build error with --disable-weak-symbols - Thanks to Peter Willis for reporting this issue Fixes for one-sided communication with passive target synchronization Proper error reporting when a program is linked with both static and shared MVAPICH2 libraries MVAPICH2-1.7a2 (06/03/2011) * Features and Enhancements (Since 1.7a) - Improved intra-node shared memory communication performance - Tuned RDMA Fast Path Buffer size to get better performance with less memory footprint (CH3 and Nemesis) - Fast process migration using RDMA - Automatic inter-node communication parameter tuning based on platform and adapter detection (Nemesis) - Automatic intra-node communication parameter tuning based on platform - Efficient connection set-up for multi-core systems - Enhancements for collectives (barrier, gather and allgather) - Compact and shorthand way to specify blocks of processes on the same host with mpirun_rsh - Support for latest stable version of HWLOC v1.2 - Improved debug message output in process management and fault tolerance functionality - Better handling of process signals and error management in mpispawn - Performance tuning for pt-to-pt and several collective operations * Bug - fixes Fixes for memory leaks Fixes in CR/migration Better handling of memory allocation and registration failures Fixes for compilation warnings Fix a bug that disallows ’=’ from mpirun_rsh arguments Handling of non-contiguous transfer in Nemesis interface Bug fix in gather collective when ranks are in cyclic order Fix for the ignore_locks bug 
in MPI-IO with Lustre MVAPICH2-1.7a (04/19/2011) * Features and Enhancements - Based on MPICH2-1.3.2p1 - Integrated with Portable Hardware Locality (hwloc v1.1.1) 165 Appendix E. MVAPICH2 Release Information - Supporting Large Data transfers (>2GB) - Integrated with Enhanced LiMIC2 (v0.5.5) to support Intra-node large message (>2GB) transfers - Optimized and tuned algorithm for AlltoAll - Enhanced debugging config options to generate core files and back-traces - Support for Chelsio’s T4 Adapter MVAPICH2-1.6 (03/09/2011) * Features and Enhancements (since 1.6-RC3) - Improved configure help for MVAPICH2 features - Updated Hydra launcher with MPICH2-1.3.3 Hydra process manager - Building and installation of OSU micro benchmarks during default MVAPICH2 installation - Hydra is the default mpiexec process manager * Bug - fixes (since 1.6-RC3) Fix hang issues in RMA Fix memory leaks Fix in RDMA_FP MVAPICH2-1.6-RC3 (02/15/2011) * Features and Enhancements - Support for 3D torus topology with appropriate SL settings - For both CH3 and Nemesis interfaces - Thanks to Jim Schutt, Marcus Epperson and John Nagle from Sandia for the initial patch - Quality of Service (QoS) support with multiple InfiniBand SL - For both CH3 and Nemesis interfaces - Configuration file support (similar to the one available in MVAPICH). Provides a convenient method for handling all runtime variables through a configuration file. 
- Improved job-startup performance on large-scale systems
- Optimization in MPI_Finalize
- Improved pt-to-pt communication performance for small and medium messages
- Optimized and tuned algorithms for Gather and Scatter collective operations
- Optimized thresholds for one-sided RMA operations
- User-friendly configuration options to enable/disable various checkpoint/restart and migration features
- Enabled ROMIO’s auto detection scheme for filetypes on Lustre file system
- Improved error checking for system and BLCR calls in checkpoint-restart and migration codepath
- Enhanced OSU Micro-benchmarks suite (version 3.3)

* Bug Fixes:
- Fix in aggregate ADIO alignment
- Fix for an issue with LiMIC2 header
- XRC connection management
- Fixes in registration cache
- IB card detection with MV2_IBA_HCA runtime option in multi-rail design
- Fix for a bug in multi-rail design while opening multiple HCAs
- Fixes for multiple memory leaks
- Fix for a bug in mpirun_rsh
- Checks before enabling aggregation and migration
- Fixing the build errors with --disable-cxx - Thanks to Bright Yang for reporting this issue
- Fixing the build errors related to "pthread_spinlock_t" seen on RHEL systems

MVAPICH2-1.6-RC2 (12/22/2010)

* Features and Enhancements:
- Optimization and enhanced performance for clusters with nVIDIA GPU adapters (with and without GPUDirect technology)
- Enhanced R3 rendezvous protocol - For both CH3 and Nemesis interfaces
- Robust RDMA Fast Path setup to avoid memory allocation failures - For both CH3 and Nemesis interfaces
- Multiple design enhancements for better performance of medium sized messages
- Enhancements and optimizations for one sided Put and Get operations
- Enhancements and tuning of Allgather for small and medium sized messages
- Optimization of AllReduce
- Enhancements to Multi-rail Design and features including striping of one-sided messages
- Enhancements to mpirun_rsh job start-up scheme
- Enhanced designs for automatic detection of various architectures and adapters

* Bug fixes:
- Fix a bug in Post-Wait/Start-Complete path for one-sided operations
- Resolving a hang in mpirun_rsh termination when CR is enabled
- Fixing issue in MPI_Allreduce and Reduce when called with MPI_IN_PLACE - Thanks to the initial patch by Alexander Alekhin
- Fix for an issue in rail selection for small RMA messages
- Fix for threading related errors with comm_dup
- Fix for alignment issues in RDMA Fast Path
- Fix for extra memcpy in header caching
- Fix for an issue to use correct HCA when process to rail binding scheme used in combination with XRC
- Fix for an RMA issue when configured with enable-g=meminit - Thanks to James Dinan of Argonne for reporting this issue
- Only set FC and F77 if gfortran is executable

MVAPICH2-1.6RC1 (11/12/2010)

* Features and Enhancements:
- Using LiMIC2 for efficient intra-node RMA transfer to avoid extra memory copies
- Upgraded to LiMIC2 version 0.5.4
- Removing the limitation on number of concurrent windows in RMA operations
- Support for InfiniBand Quality of Service (QoS) with multiple lanes
- Enhanced support for multi-threaded applications
- Fast Checkpoint-Restart support with aggregation scheme
- Job Pause-Migration-Restart Framework for Pro-active Fault-Tolerance
- Support for new standardized Fault Tolerant Backplane (FTB) Events for Checkpoint-Restart and Job Pause-Migration-Restart Framework
- Dynamic detection of multiple InfiniBand adapters and using these by default in multi-rail configurations (OLA)
- Enhanced and optimized algorithms for MPI_Reduce and MPI_AllReduce operations for small and medium message sizes.
- XRC support with Hydra Process Manager
- Improved usability of process to CPU mapping with support of delimiters (’,’ , ’-’) in CPU listing - Thanks to Gilles Civario for the initial patch
- Use of gfortran as the default F77 compiler
- Support of Shared-Memory-Nemesis interface on multi-core platforms requiring intra-node communication only (SMP-only systems, laptops, etc.)

* Bug fixes:
- Fix for memory leak in one-sided code with --enable-g=all --enable-error-messages=all
- Fix for memory leak in getting the context of intra-communicator
- Fix for shmat() return code check
- Fix for issues with inter-communicator collectives in Nemesis
- KNEM patch for osu_bibw issue with KNEM version 0.9.2
- Fix for osu_bibw error with Shared-memory-Nemesis interface
- Fix for Win_test error for one-sided RDMA
- Fix for a hang in collective when thread level is set to multiple
- Fix for intel test errors with rsend, bsend and ssend operations in Nemesis
- Fix for memory free issue when it allocated by scandir
- Fix for a hang in Finalize
- Fix for issue with MPIU_Find_local_and_external when it is called from MPIDI_CH3I_comm_create
- Fix for handling CPPFLGS values with spaces
- Dynamic Process Management to work with XRC support
- Fix related to disabling CPU affinity when shared memory is turned off at run time

MVAPICH2-1.5.1 (09/14/10)

* Features and Enhancements:
- Significantly reduce memory footprint on some systems by changing the stack size setting for multi-rail configurations
- Optimization to the number of RDMA Fast Path connections
- Performance improvements in Scatterv and Gatherv collectives for CH3 interface (Thanks to Dan Kokran and Max Suarez of NASA for identifying the issue)
- Tuning of Broadcast Collective
- Support for tuning of eager thresholds based on both adapter and platform type
- Environment variables for message sizes can now be expressed in short form K=Kilobytes and M=Megabytes (e.g. MV2_IBA_EAGER_THRESHOLD=12K)
- Ability to selectively use some or all HCAs using colon separated lists, e.g. MV2_IBA_HCA=mlx4_0:mlx4_1
- Improved Bunch/Scatter mapping for process binding with HWLOC and SMT support (Thanks to Dr. Bernd Kallies of ZIB for ideas and suggestions)
- Update to Hydra code from MPICH2-1.3b1
- Auto-detection of various iWARP adapters - Specifying MV2_USE_IWARP=1 is no longer needed when using iWARP
- Changing automatic eager threshold selection and tuning for iWARP adapters based on number of nodes in the system instead of the number of processes
- PSM progress loop optimization for QLogic Adapters (Thanks to Dr. Avneesh Pant of QLogic for the patch)

* Bug Fixes:
- Fix memory leak in registration cache with --enable-g=all
- Fix memory leak in operations using datatype modules
- Fix for rdma_cross_connect issue for RDMA CM. The server is prevented from initiating a connection.
- Don’t fail during build if RDMA CM is unavailable
- Various mpirun_rsh bug fixes for CH3, Nemesis and uDAPL interfaces
- ROMIO panfs build fix
- Update panfs for not-so-new ADIO file function pointers
- Shared libraries can be generated with unknown compilers
- Explicitly link against DL library to prevent build error due to DSO link change in Fedora 13 (introduced with gcc-4.4.3-5.fc13)
- Fix regression that prevents the proper use of our internal HWLOC component
- Remove spurious debug flags when certain options are selected at build time
- Error code added for situation when received eager SMP message is larger than receive buffer
- Fix for Gather and GatherV back-to-back hang problem with LiMIC2
- Fix for packetized send in Nemesis
- Fix related to eager threshold in nemesis ib-netmod
- Fix initialization parameter for Nemesis based on adapter type
- Fix for uDAPL one sided operations (Thanks to Jakub Fedoruk from Intel for reporting this)
- Fix an issue with out-of-order message handling for iWARP
- Fixes for memory leak and Shared context Handling in PSM for QLogic Adapters (Thanks to Dr. Avneesh Pant of QLogic for the patch)

MVAPICH2-1.5 (07/09/10)

* Features and Enhancements (since 1.5-RC2):
- SRQ turned on by default for Nemesis interface
- Performance tuning - adjusted eager thresholds for variety of architectures, vbuf size based on adapter types and vbuf pool sizes
- Tuning for Intel iWARP NE020 adapter, thanks to Harry Cropper of Intel
- Introduction of a retry mechanism for RDMA_CM connection establishment

* Bug fixes (since 1.5-RC2):
- Fix in build process with hwloc (for some Distros)
- Fix for memory leak (Nemesis interface)

MVAPICH2-1.5-RC2 (06/21/10)

* Features and Enhancements (since 1.5-RC1):
- Support for hwloc library (1.0.1) for defining CPU affinity
- Deprecating the PLPA support for defining CPU affinity
- Efficient CPU affinity policies (bunch and scatter) to specify CPU affinity

* Bug fixes (since 1.5-RC1):
- Compilation issue with the ROMIO adio-lustre driver, thanks to Adam Moody of LLNL for reporting the issue
- Allowing checkpoint-restart for large-scale systems
- Correcting a bug in clear_kvc function. Thanks to T J (Chris) Ward, IBM Research, for reporting and providing the resolving patch
- Shared lock operations with RMA with scatter process distribution. Thanks to Pavan Balaji of Argonne for reporting this issue
- Fix a bug during window creation in uDAPL
- Compilation issue with --enable-alloca, Thanks to E. Borisch, for reporting and providing the patch
- Improved error message for ibv_poll_cq failures
- Fix an issue that prevents mpirun_rsh to execute programs without specifying the path from directories in PATH
- Fix an issue of mpirun_rsh with Dynamic Process Migration (DPM)
- Fix for memory leaks (both CH3 and Nemesis interfaces)
- Updatefiles correctly update LiMIC2
- Several fixes to the registration cache (CH3, Nemesis and uDAPL interfaces)
- Fix to multi-rail communication
- Fix to Shared Memory communication Progress Engine
- Fix to all-to-all collective for large number of processes

MVAPICH2-1.5-RC1 (05/04/10)

* Features and Enhancements:
- MPI 2.2 compliant
- Based on MPICH2-1.2.1p1
- OFA-IB-Nemesis interface design
- OpenFabrics InfiniBand network module support for MPICH2 Nemesis modular design
- Support for high-performance intra-node shared memory communication provided by the Nemesis design
- Adaptive RDMA Fastpath with Polling Set for high-performance inter-node communication
- Shared Receive Queue (SRQ) support with flow control, uses significantly less memory for MPI library
- Header caching
- Advanced AVL tree-based Resource-aware registration cache
- Memory Hook Support provided by integration with ptmalloc2 library. This provides safe release of memory to the Operating System and is expected to benefit the memory usage of applications that heavily use malloc and free operations.
- Support for TotalView debugger
- Shared Library Support for existing binary MPI application programs to run
- ROMIO Support for MPI-IO
- Support for additional features (such as hwloc, hierarchical collectives, one-sided, multithreading, etc.), as included in the MPICH2 1.2.1p1 Nemesis channel
- Flexible process manager support
- mpirun_rsh to work with any of the eight interfaces (CH3 and Nemesis channel-based) including OFA-IB-Nemesis, TCP/IP-CH3 and TCP/IP-Nemesis
- Hydra process manager to work with any of the eight interfaces (CH3 and Nemesis channel-based) including OFA-IB-CH3, OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
- MPIEXEC_TIMEOUT is honored by mpirun_rsh

* Bug fixes since 1.4.1:
- Fix compilation error when configured with ‘--enable-thread-funneled’
- Fix MPE functionality, thanks to Anthony Chan for reporting and providing the resolving patch
- Cleanup after a failure in the init phase is handled better by mpirun_rsh
- Path determination is correctly handled by mpirun_rsh when DPM is used
- Shared libraries are correctly built (again)

MVAPICH2-1.4.1

* Enhancements since mvapich2-1.4:
- MPMD launch capability to mpirun_rsh
- Portable Hardware Locality (hwloc) support, patch suggested by Dr. Bernd Kallies <[email protected]>
- Multi-port support for iWARP
- Enhanced iWARP design for scalability to higher process count
- Ring based startup support for RDMAoE

* Bug fixes since mvapich2-1.4:
- Fixes for MPE and other profiling tools as suggested by Anthony Chan ([email protected])
- Fixes for finalization issue with dynamic process management
- Removed overrides to PSM_SHAREDCONTEXT, PSM_SHAREDCONTEXTS_MAX variables. Suggested by Ben Truscott <[email protected]>.
- Fixing the error check for buffer aliasing in MPI_Reduce as suggested by Dr. Rajeev Thakur <[email protected]>
- Fix Totalview integration for RHEL5
- Update simplemake to handle build timestamp issues
- Fixes for --enable-g={mem, meminit}
- Improved logic to control the receive and send requests to handle the limitation of CQ Depth on iWARP
- Fixing assertion failures with IMB-EXT tests
- VBUF size for very small iWARP clusters bumped up to 33K
- Replace internal mallocs with MPIU_Malloc uniformly for correct tracing with --enable-g=mem
- Fixing multi-port for iWARP
- Fix memory leaks
- Shared-memory reduce fixes for MPI_Reduce invoked with MPI_IN_PLACE
- Handling RDMA_CM_EVENT_TIMEWAIT_EXIT event
- Fix for threaded-ctxdup mpich2 test
- Detecting spawn errors, patch contributed by Dr. Bernd Kallies <[email protected]>
- IMB-EXT fixes reported by Yutaka from Cray Japan
- Fix alltoall assertion error when limic is used

MVAPICH2-1.4

* Enhancements since mvapich2-1.4rc2:
- Efficient runtime CPU binding
- Add an environment variable for controlling the use of multiple cq’s for iWARP interface.
- Add environmental variables to disable registration cache for All-to-All on large systems.
- Performance tune for pt-to-pt Intra-node communication with LiMIC2
- Performance tune for MPI_Broadcast

* Bug fixes since mvapich2-1.4rc2:
- Fix the reading error in lock_get_response by adding initialization to req->mrail.protocol
- Fix mpirun_rsh scalability issue with hierarchical ssh scheme when launching greater than 8K processes.
- Add mvapich_ prefix to yacc functions. This can avoid some namespace issues when linking with other libraries. Thanks to Manhui Wang <[email protected]> for contributing the patch.

MVAPICH2-1.4-rc2

* Enhancements since mvapich2-1.4rc1:
- Added Feature: Check-point Restart with Fault-Tolerant Backplane Support (FTB_CR)
- Added Feature: Multiple CQ-based design for Chelsio iWARP
- Distribute LiMIC2-0.5.2 with MVAPICH2. Added flexibility for selecting and using a pre-existing installation of LiMIC2
- Increase the amount of command line that mpirun_rsh can handle (Thanks for the suggestion by Bill Barth @ TACC)

* Bug fixes since mvapich2-1.4rc1:
- Fix for hang with packetized send using RDMA Fast path
- Fix for allowing to use user specified P_Key’s (Thanks to Mike Heinz @ QLogic)
- Fix for allowing mpirun_rsh to accept parameters through the parameters file (Thanks to Mike Heinz @ QLogic)
- Modify the default value of shmem_bcast_leaders to 4K
- Fix for one-sided with XRC support
- Fix hang with XRC
- Fix to always enabling MVAPICH2_Sync_Checkpoint functionality
- Fix build error on RHEL 4 systems (Reported by Nathan Baca and Jonathan Atencio)
- Fix issue with PGI compilation for PSM interface
- Fix for one-sided accumulate function with user-defined contiguous datatypes
- Fix linear/hierarchical switching logic and reduce threshold for the enhanced mpirun_rsh framework.
- Clean up intra-node connection management code for iWARP
- Fix --enable-g=all issue with uDAPL interface
- Fix one sided operation with on demand CM.
- Fix VPATH build

MVAPICH2-1.4-rc1

* Bugs fixed since MVAPICH2-1.2p1:
- Changed parameters for iWARP for increased scalability
- Fix error with derived datatypes and Put and Accumulate operations - Request was being marked complete before data transfer had actually taken place when MV_RNDV_PROTOCOL=R3 was used
- Unregister stale memory registrations earlier to prevent malloc failures
- Fix for compilation issues with --enable-g=mem and --enable-g=all
- Change dapl_prepost_noop_extra value from 5 to 8 to prevent credit flow issues.
- Re-enable RGET (RDMA Read) functionality
- Fix SRQ Finalize error - Make sure that finalize does not hang when the srq_post_cond is being waited on.
- Fix a multi-rail one-sided error when multiple QPs are used
- PMI Lookup name failure with SLURM
- Port auto-detection failure when the 1st HCA did not have an active port
- Change default small message scheduling for multirail for higher performance
- MPE support for shared memory collectives now available

MVAPICH2-1.2p1 (11/11/2008)

* Changes since MVAPICH2-1.2:
- Fix shared-memory communication issue for AMD Barcelona systems.

MVAPICH2-1.2 (11/06/2008)

* Bugs fixed since MVAPICH2-1.2-rc2:
- Ignore the last bit of the pkey and remove the pkey_ix option since the index can be different on different machines. Thanks to Pasha@Mellanox for the patch.
- Fix data types for memory allocations. Thanks to Dr. Bill Barth from TACC for the patches.
- Fix a bug when MV2_NUM_HCAS is larger than the number of active HCAs.
- Allow builds on architectures for which tuning parameters do not exist.

* Changes related to the mpirun_rsh framework:
- Always build and install mpirun_rsh in addition to the process manager(s) selected through the --with-pm mechanism.
- Cleaner job abort handling
- Ability to detect the path to mpispawn if the Linux proc filesystem is available.
- Added Totalview debugger support
- Stdin is only available to rank 0. Other ranks get /dev/null.

* Other miscellaneous changes:
- Add sequence numbers for RPUT and RGET finish packets.
- Increase the number of allowed nodes for shared memory broadcast to 4K.
- Use /dev/shm on Linux as the default temporary file path for shared memory communication. Thanks to Doug Johnson@OSC for the patch.
- MV2_DEFAULT_MAX_WQE has been replaced with MV2_DEFAULT_MAX_SEND_WQE and MV2_DEFAULT_MAX_RECV_WQE for send and recv wqes, respectively.
- Fix compilation warnings.

MVAPICH2-1.2-RC2 (08/20/2008)

* Following bugs are fixed in RC2:
- Properly handle the scenario in shared memory broadcast code when the datatypes of different processes taking part in broadcast are different.
- Fix a bug in Checkpoint-Restart code to determine whether a connection is a shared memory connection or a network connection.
- Support non-standard path for BLCR header files.
- Increase the maximum heap size to avoid race condition in realloc().
- Use int32_t for rank for larger jobs with 32k processes or more.
- Improve mvapich2-1.2 bandwidth to the same level of mvapich2-1.0.3.
- An error handling patch for uDAPL interface. Thanks to Nilesh Awate for the patch.
- Explicitly set some of the EP attributes when on demand connection is used in uDAPL interface.

MVAPICH2-1.2-RC1 (07/02/08)

* Following features are added for this new mvapich2-1.2 release:
- Based on MPICH2 1.0.7
- Scalable and robust daemon-less job startup
-- Enhanced and robust mpirun_rsh framework (non-MPD-based) to provide scalable job launching on multi-thousand core clusters
-- Available for OpenFabrics (IB and iWARP) and uDAPL interfaces (including Solaris)
- Adding support for intra-node shared memory communication with Checkpoint-restart
-- Allows best performance and scalability with fault-tolerance support
- Enhancement to software installation
-- Change to full autoconf-based configuration
-- Adding an application (mpiname) for querying the MVAPICH2 library version and configuration information
- Enhanced processor affinity using PLPA for multi-core architectures - Allows user-defined flexible processor affinity
- Enhanced scalability for RDMA-based direct one-sided communication with less communication resource
- Shared memory optimized MPI_Bcast operations
- Optimized and tuned MPI_Alltoall

MVAPICH2-1.0.2 (02/20/08)

* Change the default MV2_DAPL_PROVIDER to OpenIB-cma
* Remove extraneous parameter is_blocking from the gen2 interface for MPIDI_CH3I_MRAILI_Get_next_vbuf
* Explicitly name unions in struct ibv_wr_descriptor and reference the members in the code properly.
* Change "inline" functions to "static inline" properly.
* Increase the maximum number of buffer allocations for communication intensive applications
* Corrections for warnings from the Sun Studio 12 compiler.
* If malloc hook initialization fails, then turn off registration cache
* Add MV_R3_THRESHOLD and MV_R3_NOCACHE_THRESHOLD which allows R3 to be used for smaller messages instead of registering the buffer and using a zero-copy protocol.
* Fixed an error in message coalescing.
* Setting application initiated checkpoint as default if CR is turned on.

MVAPICH2-1.0.1 (10/29/07)

* Enhance udapl initialization, set all ep_attr fields properly. Thanks to Kanoj Sarcar from NetXen for the patch.
* Fixing a bug that miscalculates the receive size in case of complex datatype is used. Thanks to Patrice Martinez from Bull for reporting this problem.
* Minor patches for fixing (i) NBO for rdma-cm ports and (ii) rank variable usage in DEBUG_PRINT in rdma-cm.c. Thanks to Steve Wise for reporting these.

MVAPICH2-1.0 (09/14/07)

* Following features and bug fixes are added in this new MVAPICH2-1.0 release:
- Message coalescing support to enable reduction of per Queue-pair send queues for reduction in memory requirement on large scale clusters. This design also increases the small message messaging rate significantly. Available for Open Fabrics Gen2-IB.
- Hot-Spot Avoidance Mechanism (HSAM) for alleviating network congestion in large scale clusters. Available for Open Fabrics Gen2-IB.
- RDMA CM based on-demand connection management for large scale clusters. Available for OpenFabrics Gen2-IB and Gen2-iWARP.
- uDAPL on-demand connection management for large scale clusters. Available for uDAPL interface (including Solaris IB implementation).
- RDMA Read support for increased overlap of computation and communication. Available for OpenFabrics Gen2-IB and Gen2-iWARP.
- Application-initiated system-level (synchronous) checkpointing in addition to the user-transparent checkpointing. User application can now request a whole program checkpoint synchronously with BLCR by calling special functions within the application. Available for OpenFabrics Gen2-IB.
- Network-Level fault tolerance with Automatic Path Migration (APM) for tolerating intermittent network failures over InfiniBand. Available for OpenFabrics Gen2-IB.
- Integrated multi-rail communication support for OpenFabrics Gen2-iWARP.
- Blocking mode of communication progress. Available for OpenFabrics Gen2-IB.
- Based on MPICH2 1.0.5p4.

* Fix for hang while using IMB with -multi option. Thanks to Pasha (Mellanox) for reporting this.
* Fix for hang in memory allocations > 2^31 - 1. Thanks to Bryan Putnam (Purdue) for reporting this.
* Fix for RDMA_CM finalize rdma_destroy_id failure. Added Timeout env variable for RDMA_CM ARP. Thanks to Steve Wise for suggesting these.
* Fix for RDMA_CM invalid event in finalize. Thanks to Steve Wise and Sean Hefty.
* Fix for shmem memory collectives related memory leaks
* Updated src/mpi/romio/adio/ad_panfs/Makefile.in include path to find mpi.h. Contributed by David Gunter, Los Alamos National Laboratory.
* Fixed header caching error on handling datatype messages with small vector sizes.
* Change the finalization protocol for UD connection manager.
* Fix for the "command line too long" problem. Contributed by Xavier Bru <[email protected]> from Bull
* Change the CKPT handling to invalidate all unused registration cache.
* Added ofed 1.2 interface change patch for iwarp/rdma_cm from Steve Wise.
* Fix for rdma_cm_get_event err in finalize. Reported by Steve Wise.
* Fix for when MV2_IBA_HCA is used. Contributed by Michael Schwind of Technical Univ. of Chemnitz (Germany).
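Several of the items above are runtime environment variables rather than build options (MV2_DAPL_PROVIDER, whose default became OpenIB-cma in 1.0.2, and MV2_IBA_HCA, whose colon-separated list form is described in the 1.5.1 notes). As a minimal sketch of how such variables are set before launching a job — the variable names and values come from these release notes, while the host file and program name are placeholders — a job script might look like:

```shell
# Sketch only: variable names/values are taken from the release notes above;
# the mpirun_rsh command is commented out because it needs an actual cluster.
MV2_DAPL_PROVIDER=OpenIB-cma    # uDAPL provider (default since MVAPICH2-1.0.2)
MV2_IBA_HCA=mlx4_0:mlx4_1       # colon-separated HCA list (see the 1.5.1 notes)
export MV2_DAPL_PROVIDER MV2_IBA_HCA

echo "provider=$MV2_DAPL_PROVIDER hcas=$MV2_IBA_HCA"
# mpirun_rsh -np 4 -hostfile ./hosts ./a.out
```

Whether a given variable is honored depends on the interface the library was built with; the runtime-parameter sections of the user guide are authoritative.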
MVAPICH2-0.9.8 (11/10/06)

* Following features are added in this new MVAPICH2-0.9.8 release:
- BLCR based Checkpoint/Restart support
- iWARP support: tested with Chelsio and Ammasso adapters and OpenFabrics/Gen2 stack
- RDMA CM connection management support
- Shared memory optimizations for collective communication operations
- uDAPL support for NetEffect 10GigE adapter.

MVAPICH2-0.9.6 (10/22/06)

* Following features and bug fixes are added in this new MVAPICH2-0.9.6 release:
- Added on demand connection management.
- Enhance shared memory communication support.
- Added ptmalloc memory hook support.
- Runtime selection for most configuration options.

MVAPICH2-0.9.5 (08/30/06)

* Following features and bug fixes are added in this new MVAPICH2-0.9.5 release:
- Added multi-rail support for both point to point and direct one side operations.
- Added adaptive RDMA fast path.
- Added shared receive queue support.
- Added TotalView debugger support
* Optimization of SMP startup information exchange for USE_MPD_RING to enhance performance for SLURM. Thanks to Don and team members from Bull and folks from LLNL for their feedbacks and comments.
* Added uDAPL build script functionality to set DAPL_DEFAULT_PROVIDER explicitly with default suggestions. Thanks to Harvey Richardson from Sun for suggesting this feature.

MVAPICH2-0.9.3 (05/20/06)

* Following features are added in this new MVAPICH2-0.9.3 release:
- Multi-threading support
- Integrated with MPICH2 1.0.3 stack
- Advanced AVL tree-based Resource-aware registration cache
- Tuning and Optimization of various collective algorithms
- Processor affinity for intra-node shared memory communication
- Auto-detection of InfiniBand adapters for Gen2

MVAPICH2-0.9.2 (01/15/06)

* Following features are added in this new MVAPICH2-0.9.2 release:
- InfiniBand support for OpenIB/Gen2
- High-performance and optimized support for many MPI-2 functionalities (one-sided, collectives, datatype)
- Support for other MPI-2 functionalities (as provided by MPICH2 1.0.2p1)
- High-performance and optimized support for all MPI-1 functionalities

MVAPICH2-0.9.0 (11/01/05)

* Following features are added in this new MVAPICH2-0.9.0 release:
- Optimized two-sided operations with RDMA support
- Efficient memory registration/de-registration schemes for RDMA operations
- Optimized intra-node shared memory support (bus-based and NUMA)
- Shared library support
- ROMIO support
- Support for multiple compilers (gcc, icc, and pgi)

MVAPICH2-0.6.5 (07/02/05)

* Following features are added in this new MVAPICH2-0.6.5 release:
- uDAPL support (tested for InfiniBand, Myrinet, and Ammasso GigE)

MVAPICH2-0.6.0 (11/04/04)

* Following features are added in this new MVAPICH2-0.6.0 release:
- MPI-2 functionalities (one-sided, collectives, datatype)
- All MPI-1 functionalities
- Optimized one-sided operations (Get, Put, and Accumulate)
- Support for active and passive synchronization
- Optimized two-sided operations
- Scalable job start-up
- Optimized and tuned for the above platforms and different network interfaces (PCI-X and PCI-Express)
- Memory efficient scaling modes for medium and large clusters

Appendix F. MPICH-3 Release Information

The following is reproduced essentially verbatim from files contained within the MPICH-3 tarball downloaded from. See for various user guides.

CHANGELOG

===============================================================================
Changes in 3.1.3
===============================================================================

# Several enhancements to Portals4 support.

# Several enhancements to PAMI (thanks to IBM for the code contribution).

# Several enhancements to the CH3 RMA implementation.

# Several enhancements to ROMIO.

# Fixed deadlock in multi-threaded MPI_Comm_idup.

# Several other minor bug fixes, memory leak fixes, and code cleanup. A full list of changes is available at the following link: A full list of bugs that have been fixed is available at the following link:

===============================================================================
. Now all MPICH F90 tests have been ported to F08.

# Updated weak alias support to align with gcc-4.x

# Minor enhancements to the CH3 RMA implementation.

# Better implementation of MPI_Allreduce for intercommunicator.

===============================================================================

# Blue Gene/Q implementation supports MPI-3. This release contains a functional and compliant Blue Gene/Q implementation of the MPI-3 standard. Instructions to build on Blue Gene/Q are on the mpich.org wiki:

# Fortran 2008 bindings (experimental). Build with --enable-fortran=all. Must have a Fortran 2008 + TS 29113 capable compiler.

# Significant rework of MPICH library management and which symbols go into which libraries. Also updated MPICH library names to make them consistent with Intel MPI, Cray MPI and IBM PE MPI. Backward compatibility links are provided for older mpich-based build systems.

# The ROMIO "Blue Gene" driver has seen significant rework.
We have separated "file system" features from "platform" features, since GPFS shows up in more places than just Blue Gene.
# New ROMIO options for aggregator selection and placement on Blue Gene.
# Optional new ROMIO two-phase algorithm requiring less communication for certain workloads.
# The old ROMIO optimization "deferred open" either stopped working or was disabled on several platforms.
# Added support for the powerpcle compiler. Patched libtool in MPICH to support little-endian powerpc Linux hosts.
# Fixed the prototype of the Reduce_local C++ binding. The previous prototype was completely incorrect. Thanks to Jeff Squyres for reporting the issue.
# The mpd process manager, which was deprecated and unsupported for the past four major release series (1.3.x till 3.1), has now been deleted. RIP.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available at the following link:

A full list of bugs that have been fixed is available at the following link:

===============================================================================
Changes in 3.1
===============================================================================

# Implement runtime compatibility with MPICH-derived implementations as per the ABI Compatibility Initiative (see for more information).
# Integrated the MPICH-PAMI code base for Blue Gene/Q and other IBM platforms.
# Several improvements to the SCIF netmod (code contribution from Intel).
# Major revamp of the MPI_T interface added in MPI-3.
# Added environment variables to control a lot more capabilities for collectives. See the README.envvar file for more information.
# Allow non-blocking collectives and fault tolerance at the same time. The option MPIR_PARAM_ENABLE_COLL_FT_RET has been deprecated as it is no longer necessary.
# Improvements to MPI_WIN_ALLOCATE to internally allocate shared memory between processes on the same node.
# Performance improvements for MPI RMA operations on shared memory for MPI_WIN_ALLOCATE and MPI_WIN_ALLOCATE_SHARED.
# Enable shared library builds by default.
# Upgraded hwloc to 1.8.
# Several improvements to the Hydra-SLURM integration.
# Several improvements to the Hydra process binding code. See the Hydra wiki page for more information:
# MPICH now supports operations on very large datatypes (those that describe more than 32 bits of data). This work also allows MPICH to fully support MPI-3's introduction of MPI_Count.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available at the following link:

A full list of bugs that have been fixed is available at the following link:

===============================================================================
Changes in 3.0.4
===============================================================================

# BUILD SYSTEM: Reordered the default compiler search to prefer Intel and PG compilers over GNU compilers because of the performance difference. WARNING: If you do not explicitly specify the compiler you want through CC and friends, this might break ABI for you relative to the previous 3.0.x release.
# OVERALL: Added support to manage per-communicator eager-rendezvous thresholds.
# PM/PMI: Performance improvements to the Hydra process manager on large-scale systems by allowing for key/value caching.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available at the following link:

===============================================================================
Changes in 3.0.3
===============================================================================

# RMA: Added a new mechanism for piggybacking RMA synchronization operations, which improves the performance of several synchronization operations, including Flush.
# RMA: Added an optimization to utilize the MPI_MODE_NOCHECK assertion in passive target RMA to improve performance by eliminating a lock request message.
# RMA: Added a default implementation of shared memory windows to CH3. This adds support for this MPI 3.0 feature to the ch3:sock device.
# RMA: Fix a bug that resulted in an error when RMA operation request handles were completed outside of a synchronization epoch.
# PM/PMI: Upgraded to hwloc-1.6.2rc1. This version uses libpciaccess instead of libpci, to work around the GPL license used by libpci.
# PM/PMI: Added support for the Cobalt process manager.
# BUILD SYSTEM: Allow MPI_LONG_DOUBLE_SUPPORT to be disabled with a configure option.
# FORTRAN: Fix MPI_WEIGHTS_EMPTY in the Fortran bindings.
# MISC: Fix a bug in MPI_Get_elements where it could return incorrect values.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available at the following link:

===============================================================================
Changes in 3.0.2
===============================================================================

# PM/PMI: Upgrade to hwloc-1.6.1.
# RMA: Performance enhancements for shared memory windows.
# COMPILER INTEGRATION: Minor improvements and fixes to the clang static type checking annotation macros.
# MPI-IO (ROMIO): Improved error checking for user errors, contributed by IBM.
# MPI-3 TOOLS INTERFACE: New MPI_T performance variables providing information about nemesis communication behavior and CH3 message matching queues.
# TEST SUITE: "make testing" now also outputs a "summary.tap" file that can be interpreted with standard TAP consumer libraries and tools. The "summary.xml" format remains unchanged.
# GIT: This is the first release built from the new git repository at git.mpich.org. A few build system mechanisms have changed because of this switch.
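The Hydra process-binding improvements noted under "Changes in 3.1" are driven from the mpiexec command line. A hedged sketch of such an invocation (the application name is hypothetical; check `mpiexec -help` for the binding options your Hydra build actually supports):

```shell
# Launch 4 ranks, binding each rank to a CPU core via Hydra's -bind-to option.
# "./my_app" is a placeholder for your MPI executable.
mpiexec -n 4 -bind-to core ./my_app
```

This is an invocation sketch only; on clusters using a different launcher (e.g. SLURM's srun), binding is controlled through that launcher instead.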
# BUG FIX: Resolved a compilation error related to LLONG_MAX that affected several users (ticket #1776).
# BUG FIX: Nonblocking collectives now properly make progress when MPICH is configured with the ch3:sock channel (ticket #1785).
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available at the following link:

===============================================================================
Changes in 3.0.1
===============================================================================

# PM/PMI: Critical bug-fix in Hydra to work correctly in multi-node tests.

# A full list of changes is available using: svn log -r10790:HEAD ... or at the following link:

===============================================================================
Changes in 1.4
===============================================================================

# OVERALL: Improvements to fault tolerance for collective operations.

===============================================================================
===============================================================================
Changes in 1.3
===============================================================================

# OVERALL: Initial support for fine-grained threading in ch3:nemesis:tcp.
# OVERALL: Support for Asynchronous Communication Progress.
# OVERALL: The ssm and shm channels have been removed.

===============================================================================
Changes in 1.2.1
===============================================================================

# OVERALL: Improved support for fine-grained multithreading.
# OVERALL: Improved integration with Valgrind for debugging builds of MPICH2.
# PM/PMI: Initial support for the hwloc process-core binding library in Hydra.
# PM/PMI: Updates to the PMI-2 code to match the PMI-2 API and wire-protocol draft.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available using: svn log -r5425:HEAD ... or at the following link:

===============================================================================
Changes in 1.2
===============================================================================

# OVERALL: Support for MPI-2.2.
# OVERALL: Several fixes to Nemesis/MX.
# WINDOWS: Performance improvements to Nemesis/Windows.
# PM/PMI: Scalability and performance improvements to Hydra using PMI-1.1 process-mapping features.
# PM/PMI: Support for process-binding for hyperthreading-enabled systems in Hydra.
# PM/PMI: Initial support for PBS as a resource management kernel in Hydra.
# PM/PMI: PMI2 client code is now officially included in the release.
# TEST SUITE: Support to run the MPICH2 test suite through valgrind.
# Several other minor bug fixes, memory leak fixes, and code cleanup.

A full list of changes is available using: svn log -r5025:HEAD ... or at the following link:

===============================================================================
Changes in 1.1.1p1
===============================================================================

- OVERALL: Fixed an invalid read in the dataloop code for zero count types.
- OVERALL: Fixed several bugs in ch3:nemesis:mx (tickets #744, #760; also change r5126).
- BUILD SYSTEM: Several fixes for functionality broken in the 1.1.1 release, including MPICH2LIB_xFLAGS and extra libraries living in $LIBS instead of $LDFLAGS. Also, '-lpthread' should no longer be duplicated in link lines.
- BUILD SYSTEM: MPICH2 shared libraries are now compatible with glibc versioned symbols on Linux, such as those present in the MX shared libraries.
- BUILD SYSTEM: Minor tweaks to improve compilation under the nvcc CUDA compiler.
- PM/PMI: Fix mpd incompatibility with python2.3 introduced in mpich2-1.1.1.
- PM/PMI: Several fixes to hydra, including memory leak fixes and process binding issues.
- TEST SUITE: Correct invalid arguments in the coll2 and coll3 tests.
- Several other minor bug fixes, memory leak fixes, and code cleanup. A full list of changes is available using: svn log -r5032:HEAD ... or at the following link:

===============================================================================
Changes in 1.1.1
===============================================================================

# OVERALL: Improved support for Boost MPI.
# PM/PMI: Significantly improved time taken by MPI_Init with Nemesis and MPD on large numbers of processes.
# PM/PMI: Improved support for hybrid MPI-UPC program launching with Hydra.
# PM/PMI: Improved support for process-core binding with Hydra.
# PM/PMI: Preliminary support for PMI-2. Currently supported only with Hydra.
# Many other bug fixes, memory leak fixes and code cleanup. A full list of changes is available using: svn log -r4655:HEAD ... or at the following link:

===============================================================================
Changes in 1.1
===============================================================================

- OVERALL: Added MPI 2.1 support.
- OVERALL: Nemesis is now the default configuration channel with a completely new TCP communication module.
- OVERALL: Windows support for nemesis.
- OVERALL: Added a new Myrinet MX network module for nemesis.
- OVERALL: Initial support for shared-memory aware collective communication operations. Currently MPI_Bcast, MPI_Reduce, MPI_Allreduce, and MPI_Scan.
- OVERALL: Improved handling of MPI attributes.
- OVERALL: Support for BlueGene/P through the DCMF library (thanks to IBM for the patch).
- OVERALL: Experimental support for fine-grained multithreading.
- OVERALL: Added dynamic processes support for Nemesis.
- OVERALL: Added automatic as well as statically runtime-configurable receive timeout variation for MPD (thanks to OSU for the patch).
- OVERALL: Improved performance for MPI_Allgatherv, MPI_Gatherv, and MPI_Alltoall.
- PM/PMI: Initial support for the new Hydra process management framework (current support is for ssh, rsh, fork, and a preliminary version of slurm).
- ROMIO: Added support for MPI_Type_create_resized and MPI_Type_create_indexed_block datatypes in ROMIO.
- ROMIO: Optimized Lustre ADIO driver (thanks to Weikuan Yu for initial work and Sun for further improvements).
- Many other bug fixes, memory leak fixes and code cleanup. A full list of changes is available using: svn log -r813:HEAD ... or at the following link:
===============================================================================
Changes in 1.0.7
===============================================================================

- OVERALL: Initial ROMIO device for BlueGene/P (the ADI device is also added but is not configurable at this time).
- OVERALL: Major clean-up for the propagation of user-defined and other MPICH2 flags throughout the code.
- OVERALL: Support for the STI Cell Broadband Engine.
- OVERALL: Added datatype free hooks to be used by devices independently.
- OVERALL: Added device-specific timer support.
- OVERALL: make uninstall works cleanly now.
- ROMIO: Support to take hints from a config file.
- ROMIO: More tests and bug fixes for nonblocking I/O.
- PM/PMI: Added support to use PMI Clique functionality for process managers that support it.
- PM/PMI: Added SLURM support to configure to make it transparent to users.
- PM/PMI: SMPD Singleton Init support.
- WINDOWS: Fortran 90 support added.
- SCTP: Added MPICH_SCTP_NAGLE_ON support.
- MPE: Updated the MPE logging API so that it is thread-safe (through a global mutex).
- MPE: Added infrastructure to piggyback argument data to MPI states.
- DOCS: Documentation creation now works correctly for VPATH builds.
- Many other bug fixes, memory leak fixes and code cleanup. A full list of changes is available using: svn log -r100:HEAD

===============================================================================
Changes in 1.0.6
===============================================================================

- Updates to the ch3:nemesis channel, including preliminary support for thread safety.
- Preliminary support for dynamic loading of ch3 channels (sock, ssm, shm). See the README file for details.
- Singleton init now works with the MPD process manager.
- Fixes in MPD related to MPI-2 connect-accept.
- Improved support for MPI-2 generalized requests that allows true nonblocking I/O in ROMIO.
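The ROMIO hints-from-a-config-file support noted under "Changes in 1.0.7" can be sketched as follows. The hint names below are standard ROMIO hints and `ROMIO_HINTS` is ROMIO's documented environment variable for locating a hints file, but the file name and the specific values are hypothetical examples:

```shell
# Write a hints file: one "key value" pair per line (values are examples only).
cat > romio_hints.txt <<'EOF'
romio_cb_read enable
cb_buffer_size 16777216
EOF

# Point ROMIO at the file; it is consulted when files are opened via MPI-IO.
export ROMIO_HINTS="$PWD/romio_hints.txt"
cat "$ROMIO_HINTS"
```

Hints set this way apply to every MPI-IO open in the job, whereas hints passed through an MPI_Info object apply per file.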
- MPE changes:
  * Enabled thread-safe MPI logging through a global mutex.
  * Enhanced Jumpshot to be more thread-friendly, and added simple statistics in the Legend windows.
  * Added backtrace support to MPE on Solaris and glibc-based systems, e.g. Linux. This improves the output error message from the Collective/Datatype checking library.
  * Fixed the CLOG2 format so it can be used in serial (non-MPI) logging.
- Performance improvements for derived datatypes (including packing and communication) through in-built loop-unrolling and buffer alignment.
- Performance improvements for MPI_Gather when non-power-of-two processes are used, and when a non-zero ranked root is performing the gather.
- MPI_Comm_create works for intercommunicators.
- Enabled -O2 and equivalent compiler optimizations for supported compilers by default (including GNU, Intel, Portland, Sun, Absoft, IBM).
- Many other bug fixes, memory leak fixes and code cleanup. A full list of changes is available at.

===============================================================================
Changes in 1.0.5
===============================================================================

- An SCTP channel has been added to the CH3 device. This was implemented by Brad Penoff and Mike Tsai, Univ. of British Columbia. Their group's webpage is located at .
- Bugs related to dynamic processes have been fixed.
- Performance-related fixes have been added to derived datatypes and collective communication.
- Updates to the Nemesis channel.
- Fixes to thread safety for the ch3:sock channel.
- Many other bug fixes and code cleanup. A full list of changes is available at .

===============================================================================
Changes in 1.0.4
===============================================================================

- For the ch3:sock channel, the default build of MPICH2 supports thread safety. A separate build is not needed as before.
However, thread safety is enabled only if the user calls MPI_Init_thread with MPI_THREAD_MULTIPLE. If not, no thread locks are called, so there is no penalty.
- A new low-latency channel called Nemesis has been added. It can be selected by specifying the option --with-device=ch3:nemesis. Nemesis uses shared memory for intranode communication and various networks for internode communication. Currently available networks are TCP, GM and MX. Nemesis is still a work in progress. See the README for more information about the channel.
- Support has been added for providing message queues to debuggers. Configure with --enable-debuginfo to make this information available. This is still a "beta" test version and has not been extensively tested.
- For systems with firewalls, the environment variable MPICH_PORT_RANGE can be used to restrict the range of ports used by MPICH2. See the documentation for more details.
- Withdrew obsolete modules, including the ib and rdma communication layers. For Infiniband and MPICH2, please see For other interconnects, please contact us at [email protected].
- Numerous bug fixes and code cleanup. A full list of changes is available at .
- Numerous new tests in the MPICH2 test suite.
- For developers, the way in which information is passed between the top-level configure and the configures in the device, process management, and related modules has been cleaned up. See the comments at the beginning of the top-level configure.in for details. This change makes it easier to interface other modules to MPICH2.

===============================================================================
Changes in 1.0.3
===============================================================================

- There are major changes to the ch3 device implementation. Old and unsupported channels (essm, rdma) have been removed.
The internal interface between ch3 and the channels has been improved to simplify the process of adding a new channel (sharing existing code where possible) and to improve performance. Further changes in this internal interface are expected.
- Numerous bug fixes and code cleanup.
- Creation of intercommunicators and intracommunicators from the intercommunicators created with Spawn and Connect/Accept.
- The computation of the alignment and padding of items within structures now handles additional cases, including systems where the alignment and padding depend on the type of the first item in the structure.
- MPD recognizes the wdir info keyword.
- gforker's mpiexec supports -env and -genv arguments for controlling which environment variables are delivered to created processes.
- While not a bug, to aid in the use of memory trace packages, MPICH2 tries to free all allocated data no later than when MPI_Finalize returns.
- Support for DESTDIR in install targets.
- Enhancements to SMPD.
- In order to support special compiler flags for users that may be different from those used to build MPICH2, the environment variables MPI_CFLAGS, MPI_FFLAGS, MPI_CXXFLAGS, and MPI_F90FLAGS may be used to specify the flags used in mpicc, mpif77, mpicxx, and mpif90 respectively. The flags CFLAGS, FFLAGS, CXXFLAGS, and F90FLAGS are used in the building of MPICH2.
- Many enhancements to MPE.
- Enhanced support for features and idiosyncrasies of Fortran 77 and Fortran 90 compilers, including gfortran, g95, and xlf.
- Enhanced support for C++ compilers that do not fully support abstract base classes.
- Additional tests in mpich2/tests/mpi.
- New FAQ included (also available at).
- Man pages for mpiexec and mpif90.
- Enhancements for developers, including a more flexible and general mechanism for inserting logging and information messages, controllable with --mpich-dbg-xxx command line arguments or MPICH_DBG_XXX environment variables.
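The MPICH_PORT_RANGE variable described under "Changes in 1.0.4" above restricts listener sockets to a firewall-friendly window. A minimal sketch, with hypothetical port numbers (pick a range your firewall actually permits):

```shell
# Restrict MPICH2's listening sockets to ports 50000-51000 (example values).
# The variable must be present in the environment of each launched process.
export MPICH_PORT_RANGE=50000:51000
echo "$MPICH_PORT_RANGE"   # prints 50000:51000
```

With the variable unset, MPICH2 lets the operating system choose ephemeral ports.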
- Note to developers: This release contains many changes to the structure of the CH3 device implementation (in src/mpid/ch3), including significant reworking of the files (many files have been combined into fewer files representing logical groupings of functions). The next release of MPICH2 will contain even more significant changes to the device structure as we introduce a new communication implementation.

===============================================================================
Changes in 1.0.2
===============================================================================

- Optimizations to the MPI-2 one-sided communication functions for the sshm (scalable shared memory) channel when window memory is allocated with MPI_Alloc_mem (for all three synchronization methods).
- Numerous bug fixes and code cleanup.
- Fixed memory leaks.
- Fixed shared library builds.
- Fixed performance problems with MPI_Type_create_subarray/darray.
- The following changes have been made to MPE2:
  - MPE2 now builds the MPI collective and datatype checking library by default.
  - The SLOG-2 format has been upgraded to 2.0.6, which supports event drawables and provides a count of real drawables in preview drawables.
  - New slog2 tools, slog2filter and slog2updater, which are both logfile format convertors. slog2filter removes undesirable categories of drawables as well as alters the slog2 file structure. slog2updater is a slog2filter that reads in the older logfile format, 2.0.5, and writes out the latest format, 2.0.6.
- The following changes have been made to MPD:
  - Nearly all code has been replaced by new code that follows a more object-oriented approach than before. This has not changed any fundamental behavior or interfaces.
  - There is info support in spawn and spawn_multiple for providing parts of the environment for spawned processes, such as search-path and current working directory. See the Standard for the required fields.
  - mpdcheck has been enhanced to help users debug their cluster and network configurations.
  - CPickle has replaced marshal as the source module for dumps and loads.
  - The mpigdb command has been replaced by mpiexec -gdb.
  - Alternate interfaces can be used. See the Installer's Guide.

===============================================================================
Changes in 1.0.1
===============================================================================

- Copyright statements have been added to all code files, clearly identifying that all code in the distribution is covered by the extremely flexible copyright described in the COPYRIGHT file.
- The MPICH2 test suite (mpich2/test) can now be run against any MPI implementation, not just MPICH2.
- The send and receive socket buffer sizes may now be changed by setting MPICH_SOCKET_BUFFER_SIZE. Note: the operating system may impose a maximum socket buffer size that prohibits MPICH2 from increasing the buffers to the desired size. To raise the maximum allowable buffer size, please contact your system administrator.
- Error handling throughout the MPI routines has been improved. The error handling in some internal routines has been simplified as well, making the routines easier to read.
- MPE (Jumpshot and CLOG logging) is now supported on Microsoft Windows.
- C applications built for Microsoft Windows may select the desired channels at runtime.
- A program not started with mpiexec may become an MPI program by calling MPI_Init. It will have an MPI_COMM_WORLD of size one. It may then call other MPI routines, including MPI_COMM_SPAWN, to become a truly parallel program. At present, the use of MPI_COMM_SPAWN and MPI_COMM_SPAWN_MULTIPLE by such a process is only supported by the MPD process manager.
- Memory leaks in communicator allocation and the C++ binding have been fixed.
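The MPICH_SOCKET_BUFFER_SIZE setting mentioned above is a plain environment variable; a minimal sketch with an arbitrary example value (the OS may silently clamp the request to its own maximum):

```shell
# Request 256 KiB send/receive socket buffers (value in bytes; example only).
export MPICH_SOCKET_BUFFER_SIZE=262144
echo "$MPICH_SOCKET_BUFFER_SIZE"   # prints 262144
```

Larger socket buffers mainly help long-haul or high-bandwidth TCP links where the default buffer limits throughput.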
- Following GNU guidelines, the parts of the install step that checked the installation have been moved to an installcheck target. Much of the installation now supports the DESTDIR prefix.
- Microsoft Visual Studio projects have been added to make it possible to build an x86-64 version.
- Problems with compilers and linkers that do not support weak symbols, which are used to support the PMPI profiling interface, have been corrected.
- Handling of Fortran 77 and Fortran 90 compilers has been improved, including support for g95.
- The Fortran stdcall interface on Microsoft Windows now supports character*.
- A bug in the OS X implementation of poll() caused the sock channel to hang. A workaround has been put in place.
- Problems with installation under OS X are now detected and corrected. (Install breaks libraries that are more than 10 seconds old!)
- The following changes have been made to MPD:
  - Sending a SIGINT to mpiexec/mpdrun, such as by typing control-C, now causes SIGINT to be sent to the processes within the job. Previously, SIGKILL was sent to the processes, preventing applications from catching the signal and performing their own signal processing.
  - The process for merging output has been improved.
  - A new option, -ifhn, has been added to the machine file, allowing the user to select the destination interface to be used for TCP communication. See the User's Manual for details.
  - The user may now select, via the "-s" option to mpiexec/mpdrun, which processes receive input through stdin. stdin is immediately closed for all processes not in the set receiving input. This prevents processes not in the set from hanging should they attempt to read from stdin.
  - The MPICH2 Installer's Guide now contains an appendix on troubleshooting problems with MPD.
- The following changes have been made to SMPD:
  - On Windows machines, passwordless authentication (via SSPI) can now be used to start processes on machines within a domain.
This feature is a recent addition, and should be considered experimental.
  - On Windows machines, the -localroot option was added to mpiexec, allowing processes on the local machines to perform GUI operations on the local desktop.
  - On Windows machines, network drive mapping is now supported via the -map option to mpiexec.
  - Three new GUI tools have been added for Microsoft Windows. These tools are wrappers to the command line tools, mpiexec.exe and smpd.exe. wmpiexec allows the user to run a job much in the way they would with mpiexec. wmpiconfig provides a means of setting various global options to the SMPD process manager environment. wmpiregister encrypts the user's credentials and saves them to the Windows Registry.
- The following changes have been made to MPE2:
  - MPE2 no longer attempts to compile or link code during 'make install' to validate the installation. Instead, 'make installcheck' may now be used to verify the MPE installation.
  - MPE2 now supports DESTDIR.
- The sock channel now has preliminary support for MPI_THREAD_SERIALIZED and MPI_THREAD_MULTIPLE on both UNIX and Microsoft Windows. We have performed rudimentary testing; and while overall the results were very positive, known issues do exist. ROMIO in particular experiences hangs in several places. We plan to correct that in the next release. As always, please report any difficulties you encounter.
- Another channel capable of communicating both over sockets and shared memory has been added. Unlike the ssm channel, which waits for new data to arrive by continuously polling the system in a busy loop, the essm channel waits by blocking on an operating system event object. This channel is experimental, and is only available for Microsoft Windows.
- The topology routines have been modified to allow the device to override the default implementation.
This allows the device to export knowledge of the underlying physical topology to the MPI routines (Dims_create and the reorder == true cases in Cart_create and Graph_create).
- New memory allocation macros, MPIU_CHK[PL]MEM_*(), have been added to help prevent memory leaks. See mpich2/src/include/mpimem.h.
- New error reporting macros, MPIU_ERR_*, have been added to simplify the error handling throughout the code, making the code easier to read. See mpich2/src/include/mpierrs.h.
- Interprocess communication using the Sock interface (sock and ssm channels) may now be bound to a particular destination interface using the environment variable MPICH_INTERFACE_HOSTNAME. The variable needs to be set for each process for which the destination interface is not the default interface. (Other mechanisms for destination interface selection will be provided in future releases.) Both MPD and SMPD provide a simpler mechanism for specifying the interface. See the user documentation.
- Too many bug fixes to describe. Much thanks goes to the users who reported bugs. Their patience and understanding as we attempted to recreate the problems and solve them is greatly appreciated.

===============================================================================
Changes in 1.0
===============================================================================

- MPICH2 now works on Solaris.
- The User's Guide has been expanded considerably. The Installation Guide has been expanded some as well.
- MPI_COMM_JOIN has been implemented; although, like the other dynamic process routines, it is only supported by the Sock channel.
- MPI_COMM_CONNECT and MPI_COMM_ACCEPT are now allowed to connect with a remote process to which they are already connected.
- Shared libraries can now be built (and used) on IA32 Linux with the GNU compilers (--enable-sharedlibs=gcc), and on Solaris with the native Sun Workshop compilers (--enable-sharedlibs=solaris).
They may also work on other operating systems with GCC, but that has not been tested. Previous restrictions disallowing C++ and Fortran bindings when building shared libraries have been removed.
- The dataloop and datatype contents code has been improved to address alignment issues on all platforms.
- A bug in the datatype code, which handled zero block length cases incorrectly, has been fixed.
- A segmentation fault in the datatype memory management, resulting from freeing memory twice, has been fixed.
- The following changes were made to the MPD process manager:
  - MPI_SPAWN_MULTIPLE now works with MPD.
  - The arguments to the 'mpiexec' command supplied by the MPD have changed. First, the -default option has been removed. Second, more flexible ways to pass environment variables have been added.
  - The commands 'mpdcheck' and 'testconfig' have been added to installations using MPD. These commands test the setup of the machines on which you wish to run MPICH2 jobs. They help to identify misconfiguration, firewall issues, and other communication problems.
  - Support for MPI_APPNUM and MPI_UNIVERSE_SIZE has been added to the Simple implementation of PMI and the MPD process manager.
  - In general, error detection and recovery in MPD has improved.
- A new process manager, gforker, is now available. Like the forker process manager, gforker spawns processes using fork(), and thus is quite useful on SMP machines. However, unlike forker, gforker supports all of the features of a standard mpiexec, plus some. Therefore, it should be used in place of the previous forker process manager, which is now deprecated.
- The following changes were made to ROMIO:
  - The amount of duplicated ROMIO code in the close, resize, preallocate, read, write, asynchronous I/O, and sync routines has been substantially reduced.
  - A bug in the flattening code, triggered by nested datatypes, has been fixed.
  - Some small memory leaks have been fixed.
  - The error handling has been abstracted, allowing different MPI implementations to handle and report error conditions in their own way. Using this abstraction, the error handling routines have been made consistent with the rest of MPICH2.
  - AIO support has been cleaned up and unified. It now works correctly on Linux, and is properly detected on old versions of AIX.
  - A bug in the MPI_File_seek code, and underlying support code, has been fixed.
  - Support for PVFS2 has improved.
  - Several dead file systems have been removed. Others, including HFS, SFS, PIOFS, and Paragon, have been deprecated.
- MPE and CLOG have been updated to version 2.1. For more details, please see src/mpe2/README.
- New macros for memory management were added to support function local allocations (alloca), to roll back pending allocations when error conditions are detected (avoiding memory leaks), and to improve the conciseness of code performing memory allocations.
- New error handling macros were added to make internal error handling code more concise.
===============================================================================
Changes in 0.971
===============================================================================
- Code restricted by copyrights less flexible than the one described in the COPYRIGHT file has been removed.
- Installation and User Guides have been added.
- The SMPD PMI Wire Protocol Reference Manual has been updated.
- To eliminate portability problems, common blocks in mpif.h that spanned multiple lines were broken up into multiple common blocks, each described on a single line.
- A new command, mpich2version, was added to allow the user to obtain information about the MPICH2 installation. This command is currently a simple shell script. We anticipate that the mpich2version command will eventually provide additional information such as the patches applied and the date of the release.
- The following changes were made to MPD2:
  - Support was added for MPI's "singleton init", in which a single process started in the normal way (i.e., not by mpiexec or mpirun) becomes an MPI process with an MPI_COMM_WORLD of size one by calling MPI_Init. After this the process can call other MPI functions, including MPI_Comm_spawn.
  - The format for some of the arguments to mpiexec has changed, especially for passing environment variables to MPI processes.
  - In addition to miscellaneous hardening, better error checking and messages have been added.
  - The install process has been improved. In particular, configure has been updated to check for a working install program and supply its own installation script (install.sh) if necessary.
  - A new program, mpdcheck, has been added to help diagnose machine configurations that might be erroneous or at least confusing to mpd.
  - Runtime version checking has been added to ensure that the Simple implementation of PMI linked into the application and the MPD process manager being used to run that application are compatible.
  - Minor improvements have been made to mpdboot.
  - Support for the (now deprecated) BNR interface has been added to allow MPICH1 programs to also be run via MPD2.
- Shared libraries are now supported on Linux systems using the GNU compilers, with the caveat that C++ support must be disabled (--disable-cxx).
- The CH3 interface and device now provide a mechanism for using RDMA (remote direct memory access) to transfer data between processes.
- Logging capabilities for MPI and internal routines have been re-added. See the documentation in doc/logging for details.
- A "meminit" option was added to --enable-g to force all bytes associated with a structure or union to be initialized prior to use. This prevents programs like Valgrind from complaining about uninitialized accesses.
- The dist-with-version and snap targets in the top-level Makefile.sm now properly produce mpich2-<ver>/maint/Version instead of mpich2-<ver>/Version. In addition, they now properly update the VERSION variable in Makefile.sm without clobbering the sed line that performs the update.
- The dist and snap targets in the top-level Makefile.sm now both use the dist-with-version target to avoid inconsistencies.
- The following changes were made to simplemake:
  - The environment variables DEBUG, DEBUG_DIRS, and DEBUG_CONFDIR can now be used to control debugging output.
  - Many fixes were made to simplemake so that it runs cleanly with perl -w.
  - Installation of *all* files from a directory is now possible (for example, installing all of the man pages).
  - The clean targets now remove the cache files produced by newer versions of autoconf.
  - For files that are created by configure, the determination of the location of that configure has been improved, so that make of those files (e.g., make Makefile) is more likely to work. There is still more to do here.
  - Short loops over subdirectories are now unrolled.
  - The maintainerclean target has been renamed to maintainer-clean to match GNU guidelines.
  - The distclean and maintainer-clean targets have been improved.
  - An option was added to perform one ar command per directory instead of one per file when creating the profiling version of routines (needed only for systems that do not support weak symbols).
===============================================================================
Changes in 0.97
===============================================================================
- MPI-2 one-sided communication has been implemented in the CH3 device.
- mpigdb works as a simple parallel debugger for MPI programs started with mpd. New since MPICH1 is the ability to attach to running parallel programs. See the README in mpich2/src/pm/mpd for details.
- MPI_Type_create_darray() and MPI_Type_create_subarray() have been implemented, including the right contents and envelope data.
- ROMIO flattening code now supports subarray and darray combiners.
- Improved scalability and performance of some ROMIO PVFS and PVFS2 routines.
- An error message string parameter was added to MPID_Abort(). If the parameter is non-NULL, this string will be used as the message with the abort output. Otherwise, the output message will be based on the error message associated with the mpi_errno parameter.
- MPID_Segment_init() now takes an additional boolean parameter that specifies whether the segment processing code is to produce/consume homogeneous (FALSE) or heterogeneous (TRUE) data.
- The definitions of MPID_VCR and MPID_VCRT are now defined by the device.
- The semantics of MPID_Progress_{Start,Wait,End}() have changed. A typical blocking progress loop now looks like the following:

    if (req->cc != 0)
    {
        MPID_Progress_state progress_state;

        MPID_Progress_start(&progress_state);
        while (req->cc != 0)
        {
            mpi_errno = MPID_Progress_wait(&progress_state);
            if (mpi_errno != MPI_SUCCESS)
            {
                /* --BEGIN ERROR HANDLING-- */
                MPID_Progress_end(&progress_state);
                goto fn_fail;
                /* --END ERROR HANDLING-- */
            }
        }
        MPID_Progress_end(&progress_state);
    }

  NOTE: each of these routines now takes a single parameter, a pointer to a thread-local state variable.
- The CH3 device and interface have been modified to better support MPI_COMM_{SPAWN,SPAWN_MULTIPLE,CONNECT,ACCEPT,DISCONNECT}. Channel writers will notice the following. (This is still a work in progress. See the note below.)
  - The introduction of a process group object (MPIDI_PG_t) and a new set of routines to manipulate that object.
  - The renaming of the MPIDI_VC object to MPIDI_VC_t to make it more consistent with the naming of other objects in the device.
  - The process group information in the MPIDI_VC_t moved from the channel-specific portion to the device layer.
  - MPIDI_CH3_Connection_terminate() was added to the CH3 interface to allow the channel to properly shut down a connection before the device deletes all associated data structures.
  - A new upcall routine, MPIDI_CH3_Handle_connection(), was added to allow the channel to notify the device when a connection-related event has completed. At present the only event is MPIDI_CH3_VC_EVENT_TERMINATED, which notifies the device that the underlying connection associated with a VC has been properly shut down. For every call to MPIDI_CH3_Connection_terminate() that the device makes, the channel must make a corresponding upcall to MPIDI_CH3_Handle_connection(). MPID_Finalize() will likely hang if this rule is not followed.
  - MPIDI_CH3_Get_parent_port() was added to provide MPID_Init() with the port name of the parent (spawner). This port name is used by MPID_Init() and MPID_Comm_connect() to create an intercommunicator between the parent (spawner) and child (spawnee). Eventually, MPID_Comm_spawn_multiple() will be updated to perform the reverse logic; however, the logic is presently still in the sock channel.
  Note: the changes noted are relatively fresh and are the beginning of a set of future changes. The goal is to minimize the amount of code required by a channel to support MPI dynamic process functionality. As such, portions of the device will change dramatically in a future release. A few more changes to the CH3 interface are also quite likely.
- MPIDI_CH3_{iRead,iWrite}() have been removed from the CH3 interface. MPIDI_CH3U_Handle_recv_pkt() now returns a receive request with a populated iovec to receive data associated with the request. MPIDU_CH3U_Handle_{recv,send}_req() reload the iovec in the request and set the complete argument to TRUE if more data is to be read or written. If data transfer for the request is complete, the complete argument must be set to FALSE.
===============================================================================
Changes in 0.96p2
===============================================================================
The shm and ssm channels have been added back into the distribution. Officially, these channels are supported only on x86 platforms using the gcc compiler. The necessary assembly instructions to guarantee proper ordering of memory operations are lacking for other platforms and compilers. That said, we have seen a high success rate when testing these channels on unsupported systems.
This patch release also includes a new unsupported channel. The scalable shared memory, or sshm, channel is similar to the shm channel except that it allocates shared memory communication queues only when necessary instead of preallocating N-squared queues.
===============================================================================
Changes in 0.96p1
===============================================================================
This patch release fixes a problem with building MPICH2 on Microsoft Windows platforms. It also corrects a serious bug in the poll implementation of the Sock interface.
===============================================================================
Changes in 0.96
===============================================================================
The 0.96 distribution is largely a bug fix release. In addition to the many bug fixes, major improvements have been made to the code that supports the dynamic process management routines (MPI_Comm_{connect,accept,spawn,...}()). Additional changes are still required to support MPI_Comm_disconnect().
We also added an experimental (and thus completely unsupported) rdma device. The internal interface is similar to the CH3 interface except that it contains a couple of extra routines to inform the device about data transfers using the rendezvous protocol. The channel can use this extra information to pin memory and perform a zero-copy transfer.
If all goes well, the results will be rolled back into the CH3 device.
Due to last minute difficulties, this release does not contain the shm or ssm channels. These channels will be included in a subsequent patch release.
===============================================================================
Changes in 0.94
===============================================================================
Active target one-sided communication is now available for the ch3:sock channel. This new functionality has undergone some correctness testing but has not been optimized in terms of performance. Future releases will include performance enhancements, passive target communication, and availability in channels other than just ch3:sock.
The shared memory channel (ch3:shm), which performs communication using shared memory on a single machine, is now complete and has been extensively tested. At present, this channel only supports IA32 based machines (excluding the Pentium Pro, which has a memory ordering bug). In addition, this channel must be compiled with gcc. Future releases will support additional architectures and compilers.
A new channel has been added that performs inter-node communication using sockets (TCP/IP) and intra-node communication using shared memory. This channel, ch3:ssm, is ideal for clusters of SMPs. Like the shared memory channel (ch3:shm), this channel only supports IA32 based machines and must be compiled with gcc. In future releases, the ch3:ssm channel will support additional architectures and compilers.
The two channels that perform communication using shared memory, ch3:shm and ch3:ssm, now support the allocation of shared memory using both the POSIX and System V interfaces. The POSIX interface will be used if available; otherwise, the System V interface is used.
In the interest of increasing portability, many enhancements have been made to both the code and the configure scripts.
And, as always, many bugs have been fixed :-).
***** INTERFACE CHANGES *****
The parameters to MPID_Abort() have changed. MPID_Abort() now takes a pointer to a communicator object, an MPI error code, and an exit code.
MPIDI_CH3_Progress() has been split into two functions: MPIDI_CH3_Progress_wait() and MPIDI_CH3_Progress_test().
===============================================================================
Changes in 0.93
===============================================================================
Version 0.93 has undergone extensive changes to provide better error reporting. Part of these changes involved modifications to the ADI3 and CH3 interfaces. The following routines now return MPI error codes:

    MPID_Cancel_send()
    MPID_Cancel_recv()
    MPID_Progress_poke()
    MPID_Progress_test()
    MPID_Progress_wait()
    MPIDI_CH3_Cancel_send()
    MPIDI_CH3_Progress()
    MPIDI_CH3_Progress_poke()
    MPIDI_CH3_iRead()
    MPIDI_CH3_iSend()
    MPIDI_CH3_iSendv()
    MPIDI_CH3_iStartmsg()
    MPIDI_CH3_iStartmsgv()
    MPIDI_CH3_iWrite()
    MPIDI_CH3U_Handle_recv_pkt()
    MPIDI_CH3U_Handle_recv_req()
    MPIDI_CH3U_Handle_send_req()

*******************************************************************************
Of special note are MPID_Progress_test(), MPID_Progress_wait() and MPIDI_CH3_Progress(), which previously returned an integer value indicating whether one or more requests had completed. They no longer return this value and instead return an MPI error code (also an integer). The implication is that while the semantics changed, the type signatures did not.
*******************************************************************************
The function used to create error codes, MPIR_Err_create_code(), has also changed. It now takes additional parameters, allowing it to create a stack of errors and making it possible for the reporting function to indicate in which function and on which line the error occurred. It also allows an error to be designated as fatal or recoverable.
Fatal errors always result in program termination, regardless of the error handler installed by the application.
An RDMA channel has been added and includes communication methods for shared memory and shmem. This is a recent development, and the RDMA interface is still in flux.
Release Notes
----------------------------------------------------------------------
KNOWN ISSUES
----------------------------------------------------------------------
### Fine-grained thread safety
* ch3:sock does not (and will not) support fine-grained threading.
* Support for the "external32" data representation is incomplete. This affects the MPI_Pack_external and MPI_Unpack_external routines, as well as the external data representation capabilities of ROMIO. In particular: noncontiguous user buffers could consume egregious amounts of memory in the MPI library, and any types which vary in width between the native representation and the external32 representation will likely cause corruption. The following ticket contains some additional information:
* ch3 has known problems in some cases when threading and dynamic processes are used together on communicators of size greater than one.
### Process Managers
* Hydra has a bug related to stdin handling:
I like the idea, and particularly the fact that the example covers the distinction between strictness in the value, tuple or state. If nothing else, I could bundle this up into a documentation patch for mtl, but the example probably belongs in transformers with the data types, which Ross still maintains. You may have better luck emailing him directly. Sent from my iPad On Sep 21, 2011, at 11:03 AM, Henning Thielemann <lemming at henning-thielemann.de> wrote: > > The confusion about the kind of strictness in lazy and strict State monad pops up again and again. How about illustrating the difference using these examples: > > Control.Monad.Trans.State.Lazy> evalState (mapM (\n -> modify (n+) >> get) [1::Integer ..]) 0 > [1,3,6,10,15,21,28,36,45,55,66,78,91,105,120 ... > > Control.Monad.Trans.State.Strict> evalState (mapM (\n -> modify (n+) >> get) [1::Integer ..]) 0 > <<wait infinitely>> > > ? > > > Now, often I like to have a kind of strictness where lists can be processed lazily but the state is updated strictly between the list cells. > > > E.g. the following expression cannot be computed both in lazy and in strict State monad: > > evalState (mapM (\n -> modify (n+) >> get) [1::Integer ..]) 0 !! 10000000 > > > Since the list shall still be processed lazily, we must stay in the lazy State monad. But the state shall be updated strictly for every list cell. I can achieve this for instance by replacing 'mapM' by the following 'mapS': > > mapS :: (a -> State s b) -> [a] -> State s [b] > mapS _ [] = gets (\s -> seq s []) > mapS f (x:xs) = do y <- f x; s <- get; ys <- mapS f xs; return (seq s (y:ys)) > > > Now > > evalState (mapS (\n -> modify (n+) >> get) [1::Integer ..]) 0 !! 10000000 > > can be computed with constant memory consumption. > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org >
A string is a one-dimensional character array that is terminated by a null character. Concatenation of two strings joins them to form a new string. For example:

String 1: Mangoes are
String 2: tasty
Concatenation of the 2 strings: Mangoes are tasty

A program to concatenate two strings is given as follows.

#include <iostream>
using namespace std;
int main() {
   char str1[100] = "Hi...";
   char str2[100] = "How are you";
   int i, j;
   cout<<"String 1: "<<str1<<endl;
   cout<<"String 2: "<<str2<<endl;
   for(i = 0; str1[i] != '\0'; ++i);
   j = 0;
   while(str2[j] != '\0') {
      str1[i] = str2[j];
      i++;
      j++;
   }
   str1[i] = '\0';
   cout<<"String after concatenation: "<<str1;
   return 0;
}

String 1: Hi...
String 2: How are you
String after concatenation: Hi...How are you

In the above program, there are two strings, str1 and str2. A for loop is used to reach the end of str1. At the end of the for loop, i contains the index of the null character in str1. The following code snippet demonstrates this.

for(i = 0; str1[i] != '\0'; ++i);

A while loop is then used to copy str2 into str1. Variable j starts from 0, and the character str2[j] is copied into str1 at the position pointed to by i. This loop runs as long as str2[j] is not null. This is demonstrated as follows.

j = 0;
while(str2[j] != '\0') {
   str1[i] = str2[j];
   i++;
   j++;
}

After the strings are concatenated in str1, a null character is added to the end. Then the concatenated string is displayed. The code snippet for this is as follows −

str1[i] = '\0';
cout<<"String after concatenation: "<<str1;
twisted.mail.imap4.IMAP4Server(basic.LineReceiver, policies.TimeoutMixin) class documentation
twisted.mail.imap4

Implements interfaces: twisted.mail.interfaces.IMailboxIMAPListener

- Called when the connection times out. Override to define behavior other than dropping the connection.
- Override this for when raw data is received.
- Override this for when each line is received.
- Parse an astring from line that represents a command's final argument. This special case exists to enable parsing empty string literals.
- Parse an astring from the line; return (arg, rest), possibly via a deferred (to handle literals).
- Lookup the account associated with the given parameters.
- Override this method to define the desired authentication behavior. The default behavior is to defer authentication to self.portal.
- Apply the search filter to a set of messages.
- Send the response to the client.
- Pop search terms from the beginning of query until there are none left and apply them to the given message.
- Pop one search term from the beginning of query (possibly more than one element) and return whether it matches the given message.
- Returns True if the message matches the ALL search key (always).
- Returns True if the message has a BCC address matching the query.
- Returns True if the message does not match the query.
- Returns True if the message matches any of the first two query items.
- Returns True if the message date is earlier than the query date.
- Returns True if the message date is the same as the query date.
- Returns True if the message date is later than the query date.
- Returns True if the message UID is in the range defined by the search query.
- Indicates that the write status of a mailbox has changed.
- Indicates that the flags of one or more messages have changed.
- Indicates that the number of messages in a mailbox has changed.
In my last few posts we've been building a simple Office Business Application (OBA) for the new Northwind Traders. If you missed them:

- OBA Part 1 – Exposing Line-of-Business Data
- OBA Part 2 – Building an Outlook Client against LOB Data

In this post I'm going to talk about how we can create a purchase order in Word 2007 that contains data about the items being purchased, and how we can query that data and place it into our database. We'll use this code as a basis for our SharePoint Workflow, which we will build out in the next post. If you recall, our architecture diagram of our Northwind Traders OBA involved our Sales Reps submitting purchase orders as Word 2007 documents up to SharePoint, which kicked off a workflow to parse the document and update the database with the order data through our data service. This allows us to store the unstructured document on SharePoint and the structured order data in our database. However, before we build out the SharePoint Workflow we need a clean way to store and then retrieve the structured order data inside the Word 2007 document. Since Word 2007 documents are Open XML, we can use the Open XML SDK and LINQ to XML to easily parse the document. (I've talked about how to manipulate documents with the Open XML SDK before here.)

Content Controls

One way to store data in a Word 2007 document is by using content controls. These allow you to define specific data areas/fields in the document which are then bound to XML that is placed inside the document. When users enter data into these areas of the document, the data is stored as a CustomXML part inside the document. You can use Visual Studio to create content controls and map them to XML, or you can use Word itself. There's also a nifty tool called the Word 2007 Content Control Toolkit that makes the mapping more visual. I'd also highly recommend installing the VSTO Power Tools, which include a VS Add-In for manipulating Open XML documents.
This allows you to look inside the document easily to inspect all the parts directly within Visual Studio. So the first thing to do is to create a purchase order template and lay out the content controls on the document. We'll create something very simple using Microsoft Word 2007. On the Developer tab you will see the Controls section. There you can choose which types of controls to lay out on the document. Click the Properties button to assign a friendly title and tag to the control. Here I've laid out the minimum information we'll need to submit an order to the system:

Users can write anything else around the content controls, but the system only cares about capturing the data we've specified. This gives us the ability to store structured and unstructured data completely inside the .docx file.

Creating and Mapping the XML

Now we are ready to map the values of the content controls to some custom XML. The XML document for our order looks like this. (Note that there are 10 <OrderDetail> elements; I just snipped them for brevity):

<OrderEntry xmlns="urn:microsoft:examples:oba">
  <CustomerID />
  <OrderDate />
  <Shipper />
  <OrderDetails>
    <OrderDetail>
      <ProductName />
      <Quantity />
    </OrderDetail>
    <OrderDetail>...
  </OrderDetails>
</OrderEntry>

Now open up the Word 2007 Content Control Toolkit and open the OrderEntryTemplate.docx. Under Actions select "Create a new Custom XML Part", switch to edit view, and then paste in the XML:

Next switch to Bind View and then drag the elements onto the content controls on the left. Make sure you select the element first and then drag it. Once you're done, save the document, and then you can open it in Visual Studio if you've loaded the VSTO Power Tools. This will show the Open XML parts of the document, and you can expand the customXml folder and see that our XML has been added to the document.
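Once the controls are bound, Word persists whatever the user types into them inside this same custom XML part. For illustration, a filled-in part might look like the following (the customer, date, and product values here are invented for the example; the real part contains whatever the sales rep entered):

```xml
<OrderEntry xmlns="urn:microsoft:examples:oba">
  <CustomerID>ALFKI</CustomerID>
  <OrderDate>6/15/2009</OrderDate>
  <Shipper>Speedy Express</Shipper>
  <OrderDetails>
    <OrderDetail>
      <ProductName>Chai</ProductName>
      <Quantity>12</Quantity>
    </OrderDetail>
  </OrderDetails>
</OrderEntry>
```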
If you go back into Word and fill out the content controls and then view the document in Visual Studio again, you will see that the item1.xml custom XML part will contain the data we entered. Now that we have a purchase order template, we can give this to our sales reps, who can collaborate with our high-volume customers via email to fill it out. They can then submit the purchase orders to a SharePoint list that can run a workflow to extract the order data and update the database through the data service.

Using the Open XML SDK to Retrieve the Order Data

The easiest way to programmatically manipulate Office 2007 Open XML documents is by using the Open XML SDK. Once you install it you can then Add Reference to the DocumentFormat.OpenXML assembly. In order to use LINQ to XML you'll also need references to System.Core and System.Xml.Linq. These are imported automatically when you create a new project in Visual Studio 2008. You'll also need to add a Service Reference to the ADO.NET Data Service like I've shown before. So let's start simple and just create a console application for now called NorthwindOrderDocParser. Later we'll talk about moving this to a SharePoint workflow. Before we start parsing the document, let's create a couple of simple classes that store the data we're extracting from our document.

''' <summary>
''' These classes represent the order data that is inside the Word document.
''' </summary> ''' <remarks></remarks> Public Class DocumentOrderData Sub New(ByVal customerID As String, ByVal orderDate As Date, ByVal shipperName As String) _CustomerID = customerID _OrderDate = orderDate _Shipper = shipperName End Sub Private _CustomerID As String Public Property CustomerID() As String Get Return _CustomerID End Get Set(ByVal value As String) _CustomerID = value End Set End Property Private _OrderDate As Date Public Property OrderDate() As Date Get Return _OrderDate End Get Set(ByVal value As Date) _OrderDate = value End Set End Property Private _Shipper As String Public Property Shipper() As String Get Return _Shipper End Get Set(ByVal value As String) If value Is Nothing OrElse value.Trim = "" Then value = "Speedy Express" End If _Shipper = value End Set End Property Private _details As New List(Of Detail) Public ReadOnly Property Details() As List(Of Detail) Get Return _details End Get End Property Public Class Detail Sub New(ByVal productName As String, ByVal quantity As Short) _ProductName = productName _Quantity = quantity End Sub Private _ProductName As String Public Property ProductName() As String Get Return _ProductName End Get Set(ByVal value As String) _ProductName = value End Set End Property Private _Quantity As Short Public Property Quantity() As Short Get Return _Quantity End Get Set(ByVal value As Short) _Quantity = value End Set End Property End Class End Class Next, let’s add a schema for the OrderEntry XML data that is contained in the document. This will give us IntelliSense on our XML when we’re using LINQ to XML. We can just open the document in Visual Studio like before and copy the OrderEntry XML data into the clipboard. Then we can Add a new XML to Schema Item and paste into the Wizard’s dialog box. This will infer the schema and place the XSD file into the project automatically for us. Notice that I specified a namespace on our OrderEntry XML data. 
We can now import this namespace into our main program, along with a few other .NET namespaces we'll need:

'Reference to our data service and data entities:
Imports NorthwindOrderDocParser.NorthwindService
'Open XML SDK:
Imports DocumentFormat.OpenXml.Packaging
Imports System.IO
'Default XML Namespace:
Imports <xmlns="urn:microsoft:examples:oba">

We are almost ready to start writing our main program to parse the purchase order. First we need a test document. For this test I filled out the following information in a document called MyTestOrder.docx. Now we can write our main program:

Module Module1

    Sub Main()
        Try
            Dim docFile = My.Computer.FileSystem.GetFileInfo("MyTestOrder.docx")
            Dim docData As DocumentOrderData

            Using sr = docFile.OpenRead()
                'Attempt to parse the document for order data
                docData = ParseOrderDocument(sr)
                sr.Close()
            End Using

            If docData IsNot Nothing Then
                Dim employeeEmail = "[email protected]"
                'Attempt to add the order data through the service
                AddNewOrder(docData, employeeEmail)
                Console.WriteLine("Order saved successfully.")
            Else
                Console.WriteLine("No order data was found in the document.")
            End If

        Catch ex As Exception
            Console.WriteLine("Order could not be processed." & vbCrLf & ex.ToString())
        End Try
        Console.ReadLine()
    End Sub

The ParseOrderDocument method is going to need to grab the XML data from our Custom XML parts as we iterate over the part collection. It's a collection because there can actually be many Custom XML definitions in our document. In order to make grabbing the XML data from the parts easier, let's create an Extension method that extends the OpenXmlPart type.
I like to place Extension methods in a separate file called Extensions.vb:

Imports DocumentFormat.OpenXml.Packaging
Imports System.IO
Imports System.Xml

Module Extensions

    ' Create an extension method so we can easily access the part XML
    <System.Runtime.CompilerServices.Extension()> _
    Function GetXDocument(ByVal part As OpenXmlPart) As XDocument
        Dim xdoc As XDocument
        Using sr As New StreamReader(part.GetStream())
            xdoc = XDocument.Load(XmlReader.Create(sr))
            sr.Close()
        End Using
        Return xdoc
    End Function

End Module

Now we can go back to our main Module1 and add the ParseOrderDocument method. Notice that I'm using the Extension method we created in the For Each part… loop to return the custom XML as an XDocument. Then I use the child axis property <OrderEntry> (displayed in IntelliSense as I type the query) to see if the element exists. Also notice that since I imported our XML namespace at the top of the file, it will only return an <OrderEntry> element in that namespace. So we're safe from clashing with other custom XML that may be added to the document by other processes.

''' <summary>
''' Attempts to parse the Word document for order data and returns an order
''' object with all the required data.
''' The document must have a customXML part
''' that adheres to the OrderEntry.xsd
''' </summary>
''' <param name="docStream">The document to parse</param>
''' <returns>The order data contained in the document</returns>
''' <remarks></remarks>
Function ParseOrderDocument(ByVal docStream As Stream) As DocumentOrderData
    Dim orderData As DocumentOrderData = Nothing
    Try
        'Use the Open XML SDK to open the document and access parts easily
        Dim wordDoc = WordprocessingDocument.Open(docStream, False)
        Using wordDoc
            'Get the main document part (document.xml)
            Dim mainPart = wordDoc.MainDocumentPart
            Dim docXML As XElement = Nothing
            'Find the order data custom XML part
            For Each part In mainPart.CustomXmlParts
                docXML = part.GetXDocument.<OrderEntry>.FirstOrDefault()
                If docXML IsNot Nothing Then
                    Exit For
                End If
            Next
            If docXML Is Nothing Then
                Throw New InvalidOperationException("This document does not contain order entry data.")
            End If
            'Grab the order data fields from the XML
            Dim customerID = docXML.<CustomerID>.Value.Trim()
            Dim orderDate = docXML.<OrderDate>.Value.Trim()
            Dim shipper = docXML.<Shipper>.Value.Trim()
            If customerID <> "" AndAlso IsDate(orderDate) Then
                'Create and fill the DocumentOrderData
                orderData = New DocumentOrderData(customerID, CDate(orderDate), shipper)
                For Each item In docXML.<OrderDetails>.<OrderDetail>
                    'Grab order details data fields
                    Dim product = item.<ProductName>.Value.Trim()
                    Dim quantity = item.<Quantity>.Value.Trim()
                    If product <> "" AndAlso IsNumeric(quantity) Then
                        'Add a new DocumentOrderData.Detail for each product found
                        orderData.Details.Add(New DocumentOrderData.Detail(product, CShort(quantity)))
                    End If
                Next
            End If
            wordDoc.Close()
        End Using
    Catch ex As Exception
        Throw New InvalidOperationException("Could not process this document.", ex)
    End Try
    Return orderData
End Function

Updating the Database through the Data Service

Now that we have our document parsed we're just left with adding the data through our data service.
What we need to do is query the reference data (entities) that we'll need to properly associate on our Order. For instance, Order will need a reference to the Customer, the Employee and the Shipper. Then each Order_Detail will need a reference to the Product entity. Notice that we're passing the employee email address into this method so that we can associate the sales rep with the order. If you recall, we had to add this field to the Customer and Employee tables in Northwind. (For this test program I'm hard-coding the value but later we'll get this information from the Outlook client when it submits the order to SharePoint.) Once we have these entities queried and returned from the service we can link them up properly and add our new Order and Order_Details to the data service. For more information on updating data and setting proper linkage to entities returned from an ADO.NET data service read this post and this one.

''' <summary>
''' Adds a new order through the ADO.NET Data service and sets up all the required
''' associations to related entities.
''' </summary>
''' <param name="docData">The order data</param>
''' <param name="employeeEmail">Email address of the sales representative</param>
''' <remarks></remarks>
Private Sub AddNewOrder(ByVal docData As DocumentOrderData, ByVal employeeEmail As String)
    Dim ctx As New NorthwindEntities(New Uri(""))
    Dim cust As Customer
    Try
        'Try to retrieve the customer
        cust = (From c In ctx.Customers _
                Where c.CustomerID = docData.CustomerID).FirstOrDefault()
    Catch ex As Exception
        Throw New InvalidOperationException("Invalid customer ID.")
    End Try
    If cust IsNot Nothing Then
        Dim ship = (From s In ctx.Shippers _
                    Where s.CompanyName = docData.Shipper).FirstOrDefault()
        'Email Address will come from our Outlook client/sales person
        Dim emp = (From e In ctx.Employees _
                   Where e.EmailAddress = employeeEmail).FirstOrDefault()
        Dim o As New Order()
        o.OrderDate = docData.OrderDate
        o.RequiredDate = Now.AddDays(2)
        o.ShipAddress = cust.Address
        o.ShipCity = cust.City
        o.ShipCountry = cust.Country
        o.ShipName = cust.ContactName
        o.ShipPostalCode = cust.PostalCode
        o.ShipRegion = cust.Region
        o.Freight = 25
        ctx.AddToOrders(o)
        o.Customer = cust
        ctx.SetLink(o, "Customer", cust)
        If ship IsNot Nothing Then
            o.Shipper = ship
            ctx.SetLink(o, "Shipper", ship)
        End If
        If emp IsNot Nothing Then
            o.Employee = emp
            ctx.SetLink(o, "Employee", emp)
        End If
        o.Order_Details = New System.Collections.ObjectModel.Collection(Of Order_Detail)
        For Each item In docData.Details
            Dim productName = item.ProductName.ToLower()
            Dim product = (From p In ctx.Products _
                           Where p.ProductName.ToLower() = productName).FirstOrDefault()
            If product IsNot Nothing Then
                'Create a detail for each product being ordered
                Dim detail As New Order_Detail()
                o.Order_Details.Add(detail)
                detail.Quantity = item.Quantity
                detail.UnitPrice = If(product.UnitPrice.HasValue, _
                                      product.UnitPrice.Value, 1D)
                ctx.AddToOrder_Details(detail)
                detail.Product = product
                ctx.SetLink(detail, "Product", product)
                detail.Order = o
                ctx.SetLink(detail, "Order", o)
                ctx.AddLink(o,
                            "Order_Details", detail)
            End If
        Next
        'Saving in Batch mode will update the data inside a database transaction
        'This will throw an exception if the service can't save the Order
        ctx.SaveChanges(Services.Client.SaveChangesOptions.Batch)
    End If
End Sub
End Module

When we run this program we will see that Customer ALFKI now has a new Order and 4 Order Details entered into the database. Since we're sending the updates in Batch mode, our order data will be properly wrapped in a database transaction. Next post we'll talk about how we can create a SharePoint workflow to run this code when order documents are added to a SharePoint list. However, if SharePoint is not a requirement of your system (maybe you have no need to collaborate on documents or store this unstructured data) you could easily add this code directly to the Outlook client we built in the previous post. I updated the sample on Code Gallery with this project so have a look. Enjoy!

Comment: Thanks for the description. In the past I've worked on Invantive Composition, which fills Word templates with data from Access or SQL Server, and we have used the custom XML parts as a local repository to contain functional specifications of the template model and even (huge!) DLL libraries. Although you can't see things such as Custom XML parts from outside of Word, I think it proves that alternatives such as Office for the Mac are less usable in a corporate environment.
https://blogs.msdn.microsoft.com/bethmassi/2009/02/12/oba-part-3-storing-and-reading-data-in-word-documents/
A Polyglot's Guide to Multiple Dispatch – Part 1

What is multiple dispatch and what problems does it solve?

This is the first article in a series dedicated to multiple dispatch, an advanced abstraction technique available to programmers out-of-the-box in some languages, and implementable in others. This first post in the series presents the technique and explains the problem it intends to solve. It uses C++ as the presentation language because C++ does not support multiple dispatch directly, but can be used to implement it in various ways. Showing how multiple dispatch is implemented in a language that doesn't support it natively is important, in my opinion, as it lets us understand the issue on a deeper level.

Follow-up articles will keep focusing on multiple dispatch using other programming languages: Part 2 will show how to implement multiple dispatch in Python; Part 3 will use Common Lisp, where multiple dispatch comes built-in as part of a large and powerful object-oriented system called CLOS; Part 4 will use Clojure, a more modern attempt at a Lisp, where multiple dispatch is also built-in, but works somewhat differently.

Polymorphism, Single Dispatch, Multiple Dispatch

There are many kinds of polymorphism in programming. The kind we're talking about here is runtime subtype-based polymorphism, where behavior is chosen dynamically based on the runtime types of objects. More specifically, multiple dispatch is all about the runtime types of more than one object.

The best way to understand multiple dispatch is to first think about single dispatch. Single dispatch is what we usually refer to as "runtime polymorphism" in languages like C++ and Java [1]. We have an object on which we call a method, and the actual method being called at runtime depends on the runtime type of the object.
In C++ this is done with virtual functions:

class Shape {
public:
  virtual void ComputeArea() const = 0;
};

class Rectangle : public Shape {
public:
  virtual void ComputeArea() const {
    std::cout << "Rectangle: width times height\n";
  }
};

class Ellipse : public Shape {
public:
  virtual void ComputeArea() const {
    std::cout << "Ellipse: width times height times pi/4\n";
  }
};

int main(int argc, const char** argv) {
  std::unique_ptr<Shape> pr(new Rectangle);
  std::unique_ptr<Shape> pe(new Ellipse);

  pr->ComputeArea();    // invokes Rectangle::ComputeArea
  pe->ComputeArea();    // invokes Ellipse::ComputeArea
  return 0;
}

Even though both pr and pe are pointers to a Shape as far as the C++ compiler is concerned, the two calls to ComputeArea get dispatched to different methods at runtime due to C++'s implementation of runtime polymorphism via virtual functions.

Now, spend a few seconds thinking about the question: "What is the dispatch done upon in the code sample above?" It's fairly obvious that the entity we dispatch upon is a pointer to Shape. We have pr and we call a method on it. The C++ compiler emits code for this call such that at runtime the right function is invoked. The decision which function to invoke is based upon examining a single object - what pr points to. Hence single dispatch.

A natural extension of this idea is multiple dispatch, wherein the decision which function to call is based on the runtime types of multiple objects. Why is this useful? It's not a tool programmers reach for very often, but when it is appropriate, alternatives tend to be cumbersome and repetitive. A telling sign that multiple dispatch may be in order is when you have some operation that involves more than one class and there is no single obvious class where this operation belongs. Think of simulating a sound when a drumstick hits a drum. There are many kinds of drumsticks, and many kinds of drums; their combinations produce different sounds.
Say we want to write a function (or family of functions) that determines which sound is produced. Should this function be a method of the Drum class or the DrumStick class? Forcing this decision is one of the follies of classical OOP, and multiple dispatch helps us solve it naturally without adding a kludge into our design [2].

A simpler and more canonical example is computing intersections of shapes — maybe for computer graphics, or for simulation, or other use cases. A generic shape intersection computation can be complex to implement, but in many specific cases it's easy. For example, computing intersections of rectangles with rectangles is trivial; same for circles and ellipses; rectangles with triangles may be a tiny bit harder, but still much simpler than arbitrary polygons, and so on.

How do we write code to handle all these cases? All in all, we just need an intersect function that takes two shapes and computes an intersection. This function may have a whole bunch of special cases inside for different combinations of shapes it knows how to do easily, before it resorts to some heavy-handed generic polygon intersection approach. Such code, however, would be gross to develop and maintain. Wouldn't it be nice if we could have:

void Intersect(const Rectangle* r, const Ellipse* e) {
  // implement intersection of rectangle with ellipse
}

void Intersect(const Rectangle* r1, const Rectangle* r2) {
  // implement intersection of rectangle with another rectangle
}

void Intersect(const Shape* s1, const Shape* s2) {
  // implement intersection of two generic shapes
}

And then the call Intersect(some_shape, other_shape) would just magically dispatch to the right function? This capability is what's most often referred to by multiple dispatch in programming language parlance [3].
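To make the drumstick-and-drum example concrete, here is a minimal sketch of the signatures we would like to write: free functions chosen by the runtime types of both arguments. All class names here (DrumStick, Brush, Mallet, Drum, Snare, Tom) are hypothetical, not from the article, and plain overloading like this only picks a function by static type, as the next section demonstrates:

```cpp
#include <string>

// Hypothetical hierarchy for the drum example; names are illustrative only.
struct DrumStick { virtual ~DrumStick() = default; };
struct Brush : DrumStick {};
struct Mallet : DrumStick {};

struct Drum { virtual ~Drum() = default; };
struct Snare : Drum {};
struct Tom : Drum {};

// The sound depends on the combination of both types, and the function
// belongs to neither class -- a natural fit for multiple dispatch.
std::string HitSound(const Brush&, const Snare&) { return "swish"; }
std::string HitSound(const Mallet&, const Tom&) { return "boom"; }

// Generic fallback for combinations without a specialized sound.
std::string HitSound(const DrumStick&, const Drum&) { return "thud"; }
```

With concrete static types the right overload is chosen, but through DrumStick& and Drum& references every call lands in the fallback, which is exactly the failure mode the next section walks through for shapes.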
A Failed Attempt in C++

You may be tempted to come up with the following "trivial" solution in C++:

class Shape {
public:
  virtual std::string name() const {
    return typeid(*this).name();
  }
};

class Rectangle : public Shape {};
class Ellipse : public Shape {};
class Triangle : public Shape {};

// Overloaded Intersect methods.
void Intersect(const Rectangle* r, const Ellipse* e) {
  std::cout << "Rectangle x Ellipse [names r=" << r->name()
            << ", e=" << e->name() << "]\n";
}

void Intersect(const Rectangle* r1, const Rectangle* r2) {
  std::cout << "Rectangle x Rectangle [names r1=" << r1->name()
            << ", r2=" << r2->name() << "]\n";
}

// Fallback to shapes
void Intersect(const Shape* s1, const Shape* s2) {
  std::cout << "Shape x Shape [names s1=" << s1->name()
            << ", s2=" << s2->name() << "]\n";
}

Now in main:

Rectangle r1, r2;
Ellipse e;
Triangle t;

std::cout << "Static type dispatch\n";
Intersect(&r1, &e);
Intersect(&r1, &r2);
Intersect(&r1, &t);

We'll see:

Static type dispatch
Rectangle x Ellipse [names r=9Rectangle, e=7Ellipse]
Rectangle x Rectangle [names r1=9Rectangle, r2=9Rectangle]
Shape x Shape [names s1=9Rectangle, s2=8Triangle]

Note how the intersections get dispatched to specialized functions when these exist and to a generic catch-all Shape x Shape handler when there is no specialized function. So that's it, multiple dispatch works out of the box? Not so fast... what we see here is just C++ function overloading in action. The compiler knows the static, compile-time types of the pointers passed to the Intersect calls, so it just emits the right call. Function overloading is great and useful, but this is not the general problem we're trying to solve. In a realistic code-base, you won't be passing pointers to concrete subclasses of Shape around. You are almost certainly going to be dealing with pointers to the Shape base class.
Let's try to see how the code in the previous sample works with dynamic types:

std::unique_ptr<Shape> pr1(new Rectangle);
std::unique_ptr<Shape> pr2(new Rectangle);
std::unique_ptr<Shape> pe(new Ellipse);
std::unique_ptr<Shape> pt(new Triangle);

std::cout << "Dynamic type dispatch\n";
Intersect(pr1.get(), pe.get());
Intersect(pr1.get(), pr2.get());
Intersect(pr1.get(), pt.get());

Prints:

Dynamic type dispatch
Shape x Shape [names s1=9Rectangle, s2=7Ellipse]
Shape x Shape [names s1=9Rectangle, s2=9Rectangle]
Shape x Shape [names s1=9Rectangle, s2=8Triangle]

Yeah... that's not good. All calls were dispatched to the generic Shape x Shape handler, even though the runtime types of the objects are different (see the names gathered from typeid). This is hardly surprising, because when the compiler sees Intersect(pr1.get(), pr2.get()), the static types for the two arguments are Shape* and Shape*. You could be forgiven for thinking that the compiler may invoke virtual dispatch here, but virtual dispatch in C++ doesn't work this way. It only works when a virtual method is called on a pointer to a base object, which is not what's happening here.

Multiple Dispatch in C++ With the Visitor Pattern

I'll admit I'm calling this approach "the visitor pattern" only because this is how it's called elsewhere and because I don't have a better name for it. In fact, it's probably closer to an "inverted" visitor pattern, and in general the pattern name may obscure the code more than help. So forget about the name, and just study the code.

The last paragraph of the previous section ended with an important observation: virtual dispatch in C++ kicks in only when a virtual method is called on a pointer to a base object. Let's leverage this idea to simulate double dispatch on our hierarchy of shapes. The plan is to arrange Intersect to hop through virtual dispatches on both its arguments to get to the right method for their runtime types.
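The asymmetry this plan exploits can be shown in isolation: a method call dispatches on the dynamic type of its receiver, while overload resolution for arguments uses only static types. Here is a minimal sketch (the names Base, Derived, and DescribeOverload are mine, not from the article):

```cpp
#include <string>

struct Base {
  virtual ~Base() = default;
  // Virtual: resolved at runtime by the receiver's dynamic type.
  virtual std::string Who() const { return "Base"; }
};

struct Derived : Base {
  std::string Who() const override { return "Derived"; }
};

// Overloads: chosen at compile time by the argument's static type.
std::string DescribeOverload(const Base&) { return "Base overload"; }
std::string DescribeOverload(const Derived&) { return "Derived overload"; }
```

Calling Who() through a Base& still reaches Derived::Who (dynamic dispatch), while passing that same reference to DescribeOverload selects the Base overload (static resolution). The visitor scheme chains the former kind of call twice to recover the effect of the latter.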
We'll start by defining Shape like this:

class Shape {
public:
  virtual std::string name() const {
    return typeid(*this).name();
  }

  // Dispatcher that should be called by clients to intersect different shapes.
  virtual void Intersect(const Shape*) const = 0;

  // Specific intersection methods implemented by subclasses. If subclass A
  // has a special way to intersect with subclass B, it should implement
  // IntersectWith(const B*).
  virtual void IntersectWith(const Shape*) const {}
  virtual void IntersectWith(const Rectangle*) const {}
  virtual void IntersectWith(const Ellipse*) const {}
};

The Intersect method is what the users of the code will invoke. To be able to make use of virtual dispatches, we are forced to turn a two-argument call Intersect(A*, B*) to a method call A->Intersect(B). The IntersectWith methods are concrete implementations of intersections the code will dispatch to and should be implemented by subclasses on a case-per-case basis.

class Rectangle : public Shape {
public:
  virtual void Intersect(const Shape* s) const {
    s->IntersectWith(this);
  }

  virtual void IntersectWith(const Shape* s) const {
    std::cout << "Rectangle x Shape [names this=" << this->name()
              << ", s=" << s->name() << "]\n";
  }

  virtual void IntersectWith(const Rectangle* r) const {
    std::cout << "Rectangle x Rectangle [names this=" << this->name()
              << ", r=" << r->name() << "]\n";
  }
};

class Ellipse : public Shape {
public:
  virtual void Intersect(const Shape* s) const {
    s->IntersectWith(this);
  }

  virtual void IntersectWith(const Rectangle* r) const {
    std::cout << "Ellipse x Rectangle [names this=" << this->name()
              << ", r=" << r->name() << "]\n";
  }
};

std::unique_ptr<Shape> pr1(new Rectangle);
std::unique_ptr<Shape> pr2(new Rectangle);
std::unique_ptr<Shape> pe(new Ellipse);

std::cout << "Dynamic type dispatch\n";
pr1->Intersect(pe.get());
pr1->Intersect(pr2.get());

Will now print:

Dynamic type dispatch
Ellipse x Rectangle [names this=7Ellipse, r=9Rectangle]
Rectangle x Rectangle [names
this=9Rectangle, r=9Rectangle]

Success! Even though we're dealing solely in pointers to Shape, the right intersections are computed. Why does this work? As I've mentioned before, the key here is to use C++'s virtual function dispatch capability twice. Let's trace through one execution to see what's going on. We have:

pr1->Intersect(pe.get());

pr1 is a pointer to Shape, and Intersect is a virtual method. Therefore, the runtime type's Intersect is called here, which is Rectangle::Intersect. The argument passed into the method is another pointer to Shape which at runtime points to an Ellipse (pe). Rectangle::Intersect calls s->IntersectWith(this). The compiler sees that s is a Shape*, and IntersectWith is a virtual method, so this is another virtual dispatch. What gets called is Ellipse::IntersectWith. But which overload of this method is called? This is an extremely crucial point in the explanation, so please focus :-)

Here is Rectangle::Intersect again:

virtual void Intersect(const Shape* s) const {
  s->IntersectWith(this);
}

s->IntersectWith is called with this, which the compiler knows is a pointer to Rectangle, statically. If you wondered why I define Intersect in each subclass rather than doing it once in Shape, even though its code is exactly the same for each subclass, this is the reason. Had I defined it in Shape, the compiler would think the type of this is Shape* and would always dispatch to the IntersectWith(const Shape*) overload. Defining this method in each subclass helps the compiler leverage overloading to call the right method.

What happens eventually is that the call pr1->Intersect(pe.get()) gets routed to Ellipse::IntersectWith(const Rectangle*), thanks to two virtual dispatches and one use of method overloading. The end result is double dispatch! [4]

But wait a second, how did we end up with Ellipse::IntersectWith(Rectangle)? Shouldn't pr1->Intersect(pe.get()) go to Rectangle::IntersectWith(Ellipse) instead? Well, yes and no.
Yes because this is what you'd expect from how the call is syntactically structured. No because you almost certainly want double dispatches to be symmetric. I'll discuss this and other related issues in the next section.

Symmetry and Base-class Defaults

When we come up with ways to do multiple dispatch, whether in C++ or in other languages, there are two aspects of the solution we should always keep in mind:

- Does it permit symmetry? In other words, does the order of objects dispatched upon matter? And if it doesn't, how much extra code is needed to express this fact.
- Does base-class default dispatch work as expected? Suppose we create a new subclass of Rectangle, called Square, and we don't explicitly create an IntersectWith method for Square and Ellipse. Will the right thing happen and the intersection between a Rectangle and Ellipse be invoked when we ask for Square x Ellipse? This is the right thing because this is what we've come to expect from class hierarchies in object-oriented languages.

In the visitor-based solution presented above, both aspects will work, though symmetry needs a bit of extra code. The full code sample is available here (and the accompanying .cpp file). It's conceptually similar to the code shown above, but with a bit more details. In particular, it implements symmetry between rectangle and ellipse intersections as follows:

namespace {

// All intersections between rectangles and ellipses dispatch here.
void SymmetricIntersectRectangleEllipse(const Rectangle* r, const Ellipse* e) {
  std::cout << "IntersectRectangleEllipse [names r=" << r->name()
            << ", e=" << e->name() << "]\n";
}

}

void Rectangle::IntersectWith(const Ellipse* e) const {
  SymmetricIntersectRectangleEllipse(this, e);
}

void Ellipse::IntersectWith(const Rectangle* r) const {
  SymmetricIntersectRectangleEllipse(r, this);
}

This ensures that both rectangle->Intersect(ellipse) and ellipse->Intersect(rectangle) end up in the same function.
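Both aspects can be exercised with a trimmed-down variant of the scheme. The following is my own sketch, not the article's full sample: handlers return strings instead of printing, references replace pointers, and Square is a hypothetical subclass added to show the base-class default at work:

```cpp
#include <string>

struct Rectangle;
struct Ellipse;

struct Shape {
  virtual ~Shape() = default;
  virtual std::string Intersect(const Shape& other) const = 0;
  // Generic fallback plus per-subclass hooks; the hooks default to the
  // generic handler unless a subclass overrides them.
  virtual std::string IntersectWith(const Shape&) const { return "Shape x Shape"; }
  virtual std::string IntersectWith(const Rectangle& r) const;
  virtual std::string IntersectWith(const Ellipse& e) const;
};

struct Rectangle : Shape {
  std::string Intersect(const Shape& other) const override {
    return other.IntersectWith(*this);  // *this is statically a Rectangle
  }
  std::string IntersectWith(const Ellipse&) const override {
    return "Rectangle x Ellipse";  // symmetric: same result either way
  }
};

struct Ellipse : Shape {
  std::string Intersect(const Shape& other) const override {
    return other.IntersectWith(*this);  // *this is statically an Ellipse
  }
  std::string IntersectWith(const Rectangle&) const override {
    return "Rectangle x Ellipse";  // symmetric: same result either way
  }
};

// Defaults route unhandled combinations back to the generic handler.
std::string Shape::IntersectWith(const Rectangle& r) const {
  return IntersectWith(static_cast<const Shape&>(r));
}
std::string Shape::IntersectWith(const Ellipse& e) const {
  return IntersectWith(static_cast<const Shape&>(e));
}

// Square adds no IntersectWith overloads of its own; it inherits
// Rectangle's Intersect, so Square x Ellipse falls back to the
// Rectangle x Ellipse handler -- the base-class default we want.
struct Square : Rectangle {};
```

Because Square reuses Rectangle::Intersect, the *this inside that method still has static type Rectangle, which is precisely why the fallback to the Rectangle handlers works without any extra code.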
As far as I know there's no way to do this automatically in the visitor approach, so a bit of extra coding is due when symmetry between subclasses is desired. Note also that this method doesn't force symmetry either. If some form of dispatch is order-dependent, it's easy to express.

The Problem With the Visitor-based Approach

Although the visitor-based approach works, enables fairly clean client code, and is efficient (constant time - two virtual calls), there's a glaring issue with it that's apparent with the most cursory look at the code: it's very intrusive, and hence hard to maintain.

Imagine we want to add a new kind of shape - a HyperFrob. Suppose also that there's an efficient algorithm for intersecting a HyperFrob with an Ellipse. Ideally, we'd only have to write code for the new functionality:

- Define the new HyperFrob class deriving from Shape.
- Implement the generic HyperFrob x Shape intersection algorithm.
- Implement the specific HyperFrob x Ellipse algorithm.

But in reality, we're forced to modify the definition of the base class Shape to add an overload of IntersectWith for HyperFrob. Moreover, if we want intersections between HyperFrob and Ellipse to be symmetric (which we almost certainly do), we'll have to modify Ellipse as well to add the same overload. If we don't control the Shape base class at all, we're in real trouble.

This is an instance of the expression problem. I'll have more to say about the expression problem in a future post, but for now the Wikipedia link will have to do. It's not an easy problem to solve in C++, and the approaches to implement multiple dispatch should be judged by how flexible they are in this respect, along with the other considerations.

Multiple-dispatch in C++ By Brute-force

The visitor-based approach is kind-of clever, leveraging single virtual dispatch multiple times to simulate multiple dispatch.
But if we go back to first principles for a moment, it becomes clear that there's a much more obvious solution to the problem - brute-force if-else checks. I mentioned this possibility early in the article and called it "gross to develop and maintain", but it makes sense to at least get a feel for how it would look:

class Shape {
public:
  virtual std::string name() const {
    return typeid(*this).name();
  }
};

class Rectangle : public Shape {};
class Ellipse : public Shape {};
class Triangle : public Shape {};

void Intersect(const Shape* s1, const Shape* s2) {
  if (const Rectangle* r1 = dynamic_cast<const Rectangle*>(s1)) {
    if (const Rectangle* r2 = dynamic_cast<const Rectangle*>(s2)) {
      std::cout << "Rectangle x Rectangle [names r1=" << r1->name()
                << ", r2=" << r2->name() << "]\n";
    } else if (const Ellipse* e2 = dynamic_cast<const Ellipse*>(s2)) {
      std::cout << "Rectangle x Ellipse [names r1=" << r1->name()
                << ", e2=" << e2->name() << "]\n";
    } else {
      std::cout << "Rectangle x Shape [names r1=" << r1->name()
                << ", s2=" << s2->name() << "]\n";
    }
  } else if (const Ellipse* e1 = dynamic_cast<const Ellipse*>(s1)) {
    if (const Ellipse* e2 = dynamic_cast<const Ellipse*>(s2)) {
      std::cout << "Ellipse x Ellipse [names e1=" << e1->name()
                << ", e2=" << e2->name() << "]\n";
    } else {
      // Handle other Ellipse x ... dispatches.
    }
  } else {
    // Handle Triangle s1
  }
}

One thing is immediately noticeable: the intrusiveness issue of the visitor-based approach is completely solved. Obliterated! Intersect is now a stand-alone function that encapsulates the dispatch. If we add new kinds of shape, we only have to change Intersect, nothing else. Perfect... or is it? The other immediately noticeable fact about this code is: holy cow, how long it is. I'm only showing a small snippet here, but the number of these if clauses grows as the square of the number of subclasses. Imagine how this could look for 20 kinds of shapes. Moreover, Intersect is just one algorithm.
We may have other "multi methods" - this travesty would have to be repeated for each algorithm. Another, less obvious problem is that the code is somewhat brittle. Given a non-trivial inheritance hierarchy, we have to be very careful about the order of the if clauses, lest a parent class "shadows" all its subclasses by coming before them in the chain.

It's no wonder that one would be very reluctant to write all this code. In fact, smart folks came up with all kinds of ways to automate such if chains. If you're thinking - "hey I could just store pairs of typeids in a map and dispatch upon that" - congrats, you're in the right direction. One of the most notable experts to tackle the beast is Andrei Alexandrescu, who dedicated chapter 11 of "Modern C++ Design" to this problem, implementing all kinds of automated solutions based on heavy template metaprogramming. It's a fairly impressive piece of work, presenting multiple approaches with different tradeoffs in terms of performance and intrusiveness. If you Google for Loki (his C++ template library) and look into the MultiMethods.h header you'll see it in all its glory - complete with type lists, traits, policies, and template templates. This is C++, and these are the abstractions the language provides for meta-programming - so take it or leave it :-) If you are seriously considering using multiple dispatch in your C++ code, Loki is well worth a look.

An Attempt for Standardization

By far the most interesting attempt to solve this problem came from Bjarne Stroustrup himself, who co-authored a paper with two of his students named "Open Multi-Methods for C++" [5]. In this paper, the authors thoroughly review the problem and propose a C++ language extension that will implement it efficiently in the compiler. The main idea is to let function arguments be potentially virtual, meaning that they perform dynamic dispatch and not just static overloading.
So we could implement our intersection problem as follows:

// This is not real C++: the syntax is based on the paper
// "Open Multi-Methods for C++" and was only implemented experimentally.

// Generic Shape x Shape intersection.
void Intersect(virtual const Shape*, virtual const Shape*);

// Intersection for Rectangle x Ellipse.
void Intersect(virtual const Rectangle*, virtual const Ellipse*);

Note how similar this is to the failed attempt to leverage overloading for multiple dispatch in the beginning of this article. All we add is the virtual keyword for arguments, and the dispatch turns from static to dynamic. Unfortunately, the proposal never made it into the standard (it was proposed as document number N2216).

Conclusions and Next Steps

This part in the series presented the multiple dispatch problem and demonstrated possible solutions in C++. Each solution has its advantages and issues, and choosing one depends on the exact needs of your project. C++ presents unique challenges in designing such high-level abstractions, because it's comparatively rigid and statically typed. Abstractions in C++ also tend to strive to be as cheap as possible in terms of runtime performance and memory consumption, which adds another dimension of complexity to the problem. In the following parts of the series we'll examine how the same problem is solved in other, more dynamic and structurally flexible programming languages.

[1] As opposed to "compile-time" polymorphism which in C++ is done with overloaded functions and templates.

[2] More examples: You may have multiple event types handled by multiple handlers - mixing and matching them boils down to the same problem. Or in game code, you may have collision detection between different kinds of objects; or completely different battle scenarios depending on two kinds of units - knight vs. mage, mage vs. mage, knight vs. elf, or whatever.
These examples sound like toys, but this is because realistic examples are often much more boring and more difficult to explain. Battles between mages and knights are more reasonable to discuss in an introductory article than different kinds of mathematical transforms applied to different kinds of nodes in a dataflow graph.

[3] To be more precise, this is a special case - double dispatch, where dispatch is done on two objects. I will mostly focus on double dispatch in this series, even though some of the languages and techniques presented support an arbitrary number of objects. In my experience, in 99% of the cases where multiple dispatch is useful, two objects are sufficient.

[4] I'll lament again that the "visitor" pattern is not a great name to apply here. An alternative way to talk about this approach is "partial application". With double dispatch, we route the call through two virtual method calls. The first of these can be seen to create a partially applied method that knows the dynamic type of one of its arguments, and what remains is to grab the other. This idea also extends naturally to multiple dispatch with more than 2 objects. As an exercise, try to figure out how to do triple dispatch using this technique.

[5] The paper is available from Stroustrup's home page.

Published at DZone with permission of Eli Bendersky. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/a-polyglots-guide-to-multiple-dispatch?fromrel=true
Optimizing React Performance with Stateless Components

By Peter Bengtsson

Editor's Note: We're trying out CodeSandbox for the demos in this article. Let us know what you think!

If you run this, you'll notice that our little component gets re-rendered even though nothing has changed! It's not a big deal right now, but in a real application components tend to grow and grow in complexity and each unnecessary re-render causes the site to be slower. If you were to debug this app now with react-addons-perf I'm sure you'd find that time is wasted rendering Users->User. Oh no! What to do?!

Everything seems to point to the fact that we need to use shouldComponentUpdate to override how React considers the props to be different when we're certain they're not. To add a React life cycle hook, the component needs to be a class. Sigh. So we go back to the original class-based implementation and add the new lifecycle hook method:

Back to Being a Class Component

import React, { Component } from 'react'

class User extends Component {
  shouldComponentUpdate(nextProps) {
    // Because we KNOW that only these props would change the output
    // of this component.
    return nextProps.name !== this.props.name ||
      nextProps.highlighted !== this.props.highlighted
  }

  render() {
    const { name, highlighted, userSelected } = this.props
    console.log('Hey User is being rendered for', [name, highlighted])
    return <div>
      <h3
        style={{fontStyle: highlighted ? 'italic' : 'normal'}}
        onClick={event => {
          userSelected()
        }}
      >{name}</h3>
    </div>
  }
}

Note the new addition of the shouldComponentUpdate method. This is kinda ugly. Not only can we no longer use a function, we also have to manually list the props that could change. This involves a bold assumption that the userSelected function prop doesn't change. It's unlikely, but something to watch out for. But do note that this only renders once! Even after the containing App component re-renders. So, that's good for performance. But can we do it better?
What About React.PureComponent?

As of React 15.3, there's a new base class for components. It's called PureComponent and it has a built-in shouldComponentUpdate method that does a "shallow equal" comparison of every prop. Great! If we use this we can throw away our custom shouldComponentUpdate method, which had to list specific props.

import React, { PureComponent } from 'react'

class User extends PureComponent {
  render() {
    const { name, highlighted, userSelected } = this.props
    console.log('Hey User is being rendered for', [name, highlighted])
    return <div>
      <h3
        style={{fontStyle: highlighted ? 'italic' : 'normal'}}
        onClick={event => { userSelected() }}
      >{name}</h3>
    </div>
  }
}

Try it out and you'll be disappointed. It re-renders every time. Why?! The answer is that the function userSelected is recreated every time in App's render method. That means that when the PureComponent-based component calls its own shouldComponentUpdate(), it returns true, because the function prop is a different object on every render.

Generally the solution to that is to bind the function in the containing component's constructor. But if we were to do that, we'd have to type the method name 5 times (whereas before it was once):

- this.userSelected = this.userSelected.bind(this) (in the constructor)
- userSelected() { (as the method definition itself)
- <User userSelected={this.userSelected} ... (when defining where to render the User component)

Another problem is that, as you can see, when actually executing that userSelected method it relies on a closure. In particular, it relies on the scoped variable user from the this.state.users.map() iterator. Admittedly, there is a solution to that: first bind the userSelected method to this, and then, when calling that method from within the child component, pass the user (or its name) back. Here is one such solution.

recompose to the Rescue!
First, to reiterate what we want:

- Writing functional components feels nicer because they're functions. That immediately tells the code-reader that they don't hold any state. They're easy to reason about from a unit-testing point of view, and they feel less verbose and more like pure JavaScript (with JSX, of course).
- We're too lazy to bind all the methods that get passed into child components. Granted, if the methods are complex it might be nice to refactor them out instead of creating them on the fly. Creating methods on the fly means we can write their code right near where they get used, and we don't have to give them a name and mention them 5 times in 3 different places.
- The child components should never re-render unless the props passed to them change. It might not matter for tiny snappy ones, but in real-world applications, when you have lots and lots of these, all that excess rendering burns CPU that could be saved. (Actually, what we ideally want is that components are only rendered once. Why can't React solve this for us? Then there'd be 90% fewer blog posts about "How To Make React Fast".)

recompose is "a React utility belt for function components and higher-order components. Think of it like lodash for React." according to the documentation. There's a lot to explore in this library, but right now we want to render our functional components without them being re-rendered when props don't change.

Our first attempt at rewriting it back to a functional component, but with recompose.pure, looks like this:

import React from 'react'
import { pure } from 'recompose'

const User = pure(({ name, highlighted, userSelected }) => {
  console.log('Hey User is being rendered for', [name, highlighted])
  return <div>
    <h3
      style={{fontStyle: highlighted ? 'italic' : 'normal'}}
      onClick={event => { userSelected() }}
    >{name}</h3>
  </div>
})

export default User

As you might notice if you run this, the User component still re-renders even though the props (the name and highlighted keys) don't change.

Let's take it up one notch. Instead of using recompose.pure we'll use recompose.onlyUpdateForKeys, which is a version of recompose.pure where you specify the prop keys to focus on explicitly:

import React from 'react'
import { onlyUpdateForKeys } from 'recompose'

const User = onlyUpdateForKeys(['name', 'highlighted'])(({ name, highlighted, userSelected }) => {
  console.log('Hey User is being rendered for', [name, highlighted])
  return <div>
    <h3
      style={{fontStyle: highlighted ? 'italic' : 'normal'}}
      onClick={event => { userSelected() }}
    >{name}</h3>
  </div>
})

export default User

When you run that, you'll notice that it only ever updates if the props name or highlighted change. If the parent component re-renders, the User component doesn't. Hurrah! We have found the gold!

Discussion

First of all, ask yourself if it's worth performance-optimizing your components. Perhaps it's more work than it's worth. Your components should be light anyway, and perhaps you can move any expensive computation out of components into memoizable functions outside, or reorganize your components so that you don't waste time rendering components when certain data isn't available yet. For example, in this case, you might not want to render the User component until after the fetch has finished.

It's not a bad approach to write code the way that's most convenient for you, launch your thing, and then iterate from there to make it more performant. In this case, to make things performant you need to rewrite the functional component definition from:

const MyComp = (arg1, arg2) => { ... }

…to…

const MyComp = pure((arg1, arg2) => { ...
})

Ideally, instead of showing ways to hack around things, the best solution to all of this would be a patch to React that vastly improves shallowEqual, so that it can "automagically" decipher that what's being compared is a function, and recognize that just because it's not referentially equal doesn't mean it's actually different.

Admission! There is a middle-ground alternative to messing with binding methods in constructors and inline functions that are re-created every time: Public Class Fields. It's a stage-2 feature in Babel, so it's very likely your setup supports it. For example, here's a fork using it, which is not only shorter but also means we no longer need to manually list all non-function props. This solution has to forgo the closure, though.

Still, it's good to understand and be aware of recompose.onlyUpdateForKeys when the need calls.

For more on React, check out our course React The ES6 Way.

This article was peer reviewed by Jack Franklin. Thanks to all of SitePoint's peer reviewers for making SitePoint content the best it can be!
https://www.sitepoint.com/optimizing-react-performance-stateless-components/?utm_source=reactnl&utm_medium=email
Slinky: doing React the Scala way

Motivation

I just wanted to know what React is all about. So I did the really good intro tutorial from React. The main reason to try React was that there are at least two libraries that allow you to write React apps in Scala. So what are the differences compared with using React in JavaScript? Is it worth adding another abstraction?

Idea

Just follow the tutorial, translate it to Slinky / Scala, and write up the findings in a blog post. You can find the source for slinky-react-tutorial on GitHub. I made a commit for each chapter of the tutorial, so you can follow the React tutorial and implement it with Slinky, or just look at the differences.

Setup the Project

First create a Slinky project:

sbt new shadaj/create-react-scala-app.g8

Now we adjust our project to get to the starting point of the tutorial. The main part is App.scala, where the code for the tutorial lives.

Comparing a component in React with the same component in Slinky, my first impression was that there is more information and still less code in Slinky. How is this possible?

- Slinky uses a macro annotation, @react, that reduces the boilerplate. See the Technical Overview.
- Props and state must be defined in Slinky with types or case classes. We will see some examples along the way. In the JavaScript version, you have to check the whole component to figure out which props are needed.

Another point is that Slinky uses its own Tag API, so there is always a translation involved when coming from HTML. Here is the only adjustment I made, as I am a lazy person. I changed the board rendering in my Scala code to:

for (r <- 0 to 2) yield
  div(className := "board-row")(
    for (c <- 0 to 2) yield renderSquare(r * 3 + c)
  )

See all changes in the Commit.

Passing Data Through Props

As mentioned above, we need to define our Props class:

@react class Square extends StatelessComponent {
  case class Props(value: Int)
  ...
Thanks to the macro annotation, creating the component looks natural (a Props instance doesn't have to be constructed explicitly):

Square(squareValue)

This actually looks better than using JSX (IMHO):

<Square value={i} />

See all changes in the Commit.

Making an Interactive Component

Let's start with the constructor of the React version:

constructor(props) {
  super(props);
  this.state = {
    value: null,
  };
}

Quite some code, with not so much information:

- What are the props?
- What is the type of value?

In Slinky there is a bit more code involved, but also a lot more information. The StatelessComponent becomes a Component, which requires the initialState function. All the information missing from the JavaScript version is here. Again, some magic is hidden by the @react annotation.

<button
  className="square"
  onClick={() => this.setState({value: 'X'})}
>
  {this.state.value}
</button>

This mix of JSX tags and JavaScript code makes it a bit harder to read.

button(
  className := "square",
  onClick := (_ => setState(State("X")))
)(state.value)

With Slinky you always have to replace the whole State object. The child value sits in its own attribute list, which is a bit strange in the context of HTML. The cluttering this is not required in Scala.

See all changes in the Commit.

Lifting State Up

this.state = {
  squares: Array(9).fill(null),
};

Ok, null in Scala? That is not an option 😏. Let's just use an empty String for now (spoiler: an Option is in the air):

case class State(squares: Seq[String])

def initialState: State = State(List.fill(9)(""))

Also interesting is the onClick function. In JavaScript you do not have to care about the types. In Scala you do, because you have to define them in the Props class. You must try until the compiler is happy 😬:

case class Props(value: String, onClick: () => Unit)

See all changes in the Commit.

Why Immutability Is Important

As a Scala developer you are most certainly already convinced that immutability is a good thing, as it is an important Scala idiom. Let's see how this affects the code.
An example from the last chapter:

const squares = this.state.squares.slice();
squares[i] = 'X';
this.setState({squares: squares});

In Scala the immutable collections API provides functions that return an updated copy directly, so you can skip the step of making a copy first (and why is it called slice, anyway?):

val squares = state.squares.updated(squareIndex, "X")
setState(State(squares))

So again: clearer to read, less code, and, not to forget, type safe. For example, this.setState({square: squares}); would not tell you that you have missed an 's'. It just silently fails to update the state of the squares!

Taking Turns

Not much new stuff to discuss here.

this.state.xIsNext ? 'X' : 'O';

Well, this exact construct does not exist in Scala, but you can write it in this (more readable) way:

if (state.xIsNext) "X" else "O"

See all changes in the Commit.

Declaring a Winner

function calculateWinner(squares) {
  ...
  for (let i = 0; i < lines.length; i++) {
    const [a, b, c] = lines[i];
    if (squares[a] && squares[a] === squares[b] && squares[a] === squares[c]) {
      return squares[a];
    }
  }
  return null;
}

Here we find two points that are not really Scala-like:

- null again!
- Two return statements.

Ok, now is definitely the time to introduce Option to our model. Our State now looks like this:

case class State(squares: Seq[Option[Char]], xIsNext: Boolean)

def initialState: State = State(List.fill(9)(None), xIsNext = true)

This gives us the following calculation:

private def calculateWinner(): Option[Char] = {
  val lines = List(
    (0, 1, 2), ... , (2, 4, 6)
  )
  val squares = state.squares
  lines.collectFirst {
    case (a, b, c)
      if squares(a).nonEmpty &&
         squares(a) == squares(b) &&
         squares(a) == squares(c) => squares(a).get
  }
}

Here again the collections API shines, as it provides the perfect function for every scenario:

"Finds the first element of the collection for which the given partial function is defined, and applies the partial function to it." (Scala Doc)

With Option in your model we can now simplify our code.
For example, this JavaScript:

const winner = calculateWinner(this.state.squares);
let status;
if (winner) {
  status = 'Winner: ' + winner;
} else {
  status = 'Next player: ' + (this.state.xIsNext ? 'X' : 'O');
}

becomes this Scala:

def status = calculateWinner()
  .map("Winner: " + _)
  .getOrElse(s"Next player: ${nextPlayer.mkString}")

See all changes in the Commit.

Lifting State Up, Again

In JavaScript everything is a "JSON" data structure:

history = [
  // Before first move
  { squares: [null, null, null, null, null, null, null, null, null] },
  // After first move
  { squares: [..] },
  // After second move
  // ...
]

So you have maps and arrays of simple types. In Scala, the best practice for structuring data is algebraic data types (ADTs). So let's make the history more concrete:

case class HistoryEntry(
  squares: Seq[Option[Char]] = List.fill(9)(None))
...
case class State(history: Seq[HistoryEntry], xIsNext: Boolean)

def initialState: State = State(Seq(HistoryEntry()), xIsNext = true)

The default for a HistoryEntry is nine squares that are not set (None). In general, lifting up the state with Slinky was easy, as you have great support from the IDE (IntelliJ in my case).

See all changes in the Commit.

Showing the Past Moves

const moves = history.map((step, move) => {
  const desc = move ?
    'Go to move #' + move :
    'Go to game start';
  return (
    <li>
      <button onClick={() => this.jumpTo(move)}>{desc}</button>
    </li>
  );
});

This code snippet raises quite a few questions (for someone who isn't a regular JavaScript developer):

- history is an array of squares; what, then, are step and move? step is one entry, move is its index.
- When is a number falsy? Easy: 0 is falsy, the rest are truthy. How did I know? I didn't; it's just what it does 😱.
- step is not used, so why is it there? No better alternative, I assume.
This function in Scala:

val moves = history.indices.map(move =>
  li(
    button(onClick := { () => jumpTo(move) })(
      if (move > 0) s"Go to move # $move"
      else "Go to game start"
    )
  )
)

Not so much magic here: you need the indices, but then it is straightforward. No shortcuts, but simple, readable code.

See all changes in the Commit.

Picking a Key

"It's strongly recommended that you assign proper keys whenever you build dynamic lists."

Well, this chapter explains the warning I had gotten from the beginning. This is the code causing the warning:

for (r <- 0 to 2) yield
  div(className := "board-row")(
    for (c <- 0 to 2) yield renderSquare(r * 3 + c)
  )

For a div we can simply add the key as an attribute:

div(key := s"row_$r", className := "board-row")( ...)

But what about our own components? Slinky provides the withKey function for this:

Square(props.squares(squareIndex), () => props.onClick(squareIndex))
  .withKey(s"square_$squareIndex")

See all changes in the Commit.

Implementing Time Travel

Nothing new here; we are done. See all changes in the Commit.

Conclusion

It was quite easy to translate the React "Tic Tac Toe" tutorial to Slinky. I think a JavaScript React developer would be productive pretty fast, delivering more robust and easier-to-maintain code. Whether the same holds for Scala developers who are just starting with React is a question I hope to answer in a future blog 🙏. You should definitely check out Slinky if you work with Scala and/or React.

Pros

- Type safety.
- More information, through defined types.
- Less boilerplate code, thanks to macros.
- Real immutability with case classes and immutable collections.
- Powerful collections API.
- If your backend is in Scala, you can reuse your models and algorithms on the client. ✌️

Cons

- You have to learn an additional abstraction of React. Quite an intuitive and easy one, I have to admit.
- There is a lot of great learning material and lots of examples for React, so you need to translate them to Slinky.
- The hot loading is really good, but compared with pure React... well, it's still Scala 😊.

References

I hopefully linked everything in the text above. Here are just the important ones:

- pme123/slinky-react-tutorial (github.com)
- Slinky: "Write React apps in Scala just like ES6. Just like React, Slinky components implement a render() method that returns what to display based on the input data." (slinky.dev)

Let me know what you think of Slinky and/or this blog.
https://pme123.medium.com/slinky-doing-react-the-scala-way-f78ccf42bf8f
LANGUAGES: C# ASP.NET
VERSIONS: 2.0

This two-part series guides you through the process of creating a SharePoint Web part. We began with the out-of-the-box configurations required to ensure that search is successful inside your Microsoft Office SharePoint Server portals, and explored where the search result and click-through data is kept (see Part I). Now the challenge is to use this knowledge to build our own custom Web part.

Building Your Query Report Control

With the system set up, you need end users to perform searches. As those searches are performed, you'll need a customized way to display them sorted by popularity. Luckily, this information can be extracted and displayed on a Web page or in a Web part. Because the information is simply in the database, there are a few options for accessing the data. You could access the data by connecting directly to the database, you could write a Web service, or you could even use the SharePoint classes to help build a component. It's important to keep in mind that these internal SharePoint stored procedures could change with any hot fix or service pack, and are not documented as interfaces to the system. As we'll briefly explore, each one of these options offers advantages and disadvantages.

If you connect to these stored procedures directly from code, you must write your own user and role security and database connection methods, provide the correct stored procedure, and provide the correct parameters. This can be a viable option, but you'll need to grant your data access code rights to the SharePoint database. If the SharePoint machine and the Web server on which you are working are in the same environment, this can usually be worked out with administration settings. The second option, using Web services, works best with applications that need remote connections. For instance, you could display this information to end users in a smart client (like Microsoft Word or your own Windows Forms application).
Although this helps distribute the data, the Web service itself would still need to make a connection to the database (either with direct access to SQL Server or through another method, such as using the SharePoint object model). The third option, and the one we're going to take, is to use SharePoint's own classes to pull information from the search query results. This method requires having SharePoint installed on one of the machines, but does not require you to write the database calls or the security permission code.

To build this component, start by looking at the class hierarchy of Microsoft.SharePoint.Portal.Analytics.UI. Figure 1 shows how the classes are inherited. These are the classes that power the usage reports screens and provide the fundamental classes used for the component. You'll be creating a custom report control that would be at home on any of the usage report pages; however, this control repurposes the data the report would use for display purposes. Having done this, you can take that data and use it in any Web page using simple ASP.NET controls.

Figure 1: Report control hierarchy.

The component will derive from QueryTopLargeListReportControl to get the functionality provided to all report controls based on the search queries. When implementing QueryTopLargeListReportControl, you are required to implement certain abstract properties; these properties are shown in Figure 2.

Figure 2: Property definitions for QueryTopLargeListReportControl.

From your previous research, you might recall that the stored procedure used for this control is proc_MSS_QLog_TopQueries. This sample control has the string built into the control, but you could easily make it a property that can be modified through the user interface in order to create a more generic TopQueries control. The RdlFileName will be left blank because you won't actually be using the report control portion of this component.
If you want to use this control as a report control, or for testing purposes, simply set the RdlFileName to the report you want to use. The RdlFileName could be one of the existing Microsoft reports, or even a custom one you write. Once you've implemented the class, your code will start to look like that shown in Figure 3.

public class DevCowTopQueries : QueryTopLargeListReportControl
{
    protected override string StoredProcedureName
    {
        get { return "proc_MSS_QLog_TopQueries"; }
    }

    protected override string RdlFileName
    {
        get { throw new Exception("The method or operation is not implemented."); }
    }
}

Figure 3: New control DevCowTopQueries.

Even though you are only required to implement two properties, the control also needs two other properties that are specific to this control: TitleText and StoredProcedureParameters (see Figure 4). The TitleText property is part of the reporting control and is only set in cases where the report control is used; the StoredProcedureParameters property is critical to the stored procedure you are using. From looking at the database stored procedure, we can tell there are three parameters required, one of which controls the number of results returned. The values used are inherited from the base classes and only need to be implemented if you want to override them. For instance, the base value for TopResultCount is 10; if you want to make this configurable, or if you want to increase this number, you can override your own property and set the value required.

protected override string TitleText
{
    get { return "DevCow Query Control"; }
}

protected override Collection<SqlParameter> StoredProcedureParameters
{
    get
    {
        Collection<SqlParameter> parameters = new Collection<SqlParameter>();
        Guid guid = SPControl.GetContextWeb(this.Context).Site.ID;
        // ...
    }
}

Figure 4: DevCowTopQueries properties.

Here's where your control begins to differ from the standard Microsoft controls. The Microsoft controls keep the report data internal to the control, but the entire idea of this control is to use that data on other Web pages.
Luckily, one of the inherited methods is LoadReportData, which doesn't take any parameters and returns a System.Data.DataTable. To allow this data to be exposed to the end user, you must create a new method, GetData:

public DataTable GetData()
{
    return this.LoadReportData();
}

You can see that the method has the identical signature to the LoadReportData method and is, in fact, simply a wrapper method that allows other applications to access otherwise private data. Keep in mind that this class does not perform any security checks or audit logging in this method, but such checks or audits might be required, depending on the environment. Now that you're returning the data in a DataTable to other Web parts, Web pages, or controls, there's no more mystery for the experienced developer using the control to take advantage of search query data.

Display Top Search Terms

Now that the search data is being retrieved from the SharePoint analytics classes, you need a way to display this data on a Web page. Because you are already using SharePoint to pull the data back, creating a few Web parts is the best option, both to display the results and to provide the functionality to perform your own search from them. To set the values of the dropdown list, you first must know the names of each column of the DataTable. You could look at the return types of the stored procedure, but in some cases that's not obvious from the T-SQL. You could run the stored procedure and see the values, or you could opt to create a GridView-based Web part to dump the raw data to the screen. Here, a GridView Web part is a valuable exercise, perhaps to display the results for administrators, as illustrated in Figure 5. Administrators may want to see the entire view of the data that will be displayed to the end users in another format.

Figure 5: TopSearchTermsGrid Web part.

You can see that all the columns are information from the search pages discussed when you set up the page.
These include the query string, the search scope, and the results URL, to name a few. Additionally, you can see information such as the number of times the query has been sought. Although this is a simple Web part, it can be placed on any page to see the results and what users are searching for. Once you know what the users are looking for, you can add best bets and improve the relevance of information that should show up based on actual user search patterns.

With all this information, you can now build a Web part that has a dropdown list of the top search queries run on the system. Remember, you are building components and Web parts to help the end users know the most popular search phrases on the site, and this is one way to help them expand their knowledge of the site without generating a lot of IT support calls. The salient field available for use is the queryString field, which is used for both the displayed text and the value for searching. Figure 6 shows the call to the control you just built to get the data, as well as the use of the queryString column to populate the dropdown list values.

ddlTopChoicesClient = new DropDownList();
DevCowTopQueries tdd = new DevCowTopQueries();
ddlTopChoicesClient.DataSource = tdd.GetData();
ddlTopChoicesClient.DataTextField = "queryString";
ddlTopChoicesClient.DataBind();
this.Controls.Add(ddlTopChoicesClient);

Figure 6: Binding to usage data.

The dropdown list allows the Web part to present the user a pre-defined list of query string selections and also prevents them from modifying the list. This user experience lends itself to the collaborative nature of the Web part's intention. If the user wants to search for information not directly related to the available search options, they'll have to perform that search using the default search input control.

Build a GO Button

The GO button is the button that will perform the actual search action. There are many ways you can perform a search with SharePoint.
You can type in the search URL with a query string, call the Web services API, use the object model to perform the search, or use the built-in JavaScript. For the purpose of demonstration, this Web part implements a pair of GO buttons, one using the JavaScript method and the other using the URL redirection method (both are valid methods that are easy to use for searching). The other methods might be used if you were writing your own search results page or wanted to change the way search results are returned.

With client-side JavaScript, you can use the built-in functions provided with the SharePoint JavaScript library. The function we want to use, GoSearch, is located in the search.js library. There are a number of parameters you can pass to the function; the only one the GO button uses is the queryString value retrieved from the option selected in the dropdown list. As you can see in Figure 7, there are a few hard-coded assumptions, such as the location of the Search Center page and that the desired search scope is "All Sites".

searchResultsButton = new Button();
searchResultsButton.Text = " GO ClientSide";
searchResultsButton.OnClientClick = "GoSearch(null,'"
    + ddlTopChoicesClient.UniqueID
    + "',null,true,false,null,'null',null"
    + ",null,'\u002fsearchcenter\u002fPages\u002fResults.aspx'"
    + ", 'This Site','This List', 'This Folder', 'Related Sites'"
    + ", '\u002f_layouts\u002fOSSSearchResults.aspx');";
this.Controls.Add(searchResultsButton);

Figure 7: Create a client-side search button.

Once the JavaScript function call is constructed, attach the string to the OnClientClick event of the button. This will perform the query search with a client-side redirect and will not require a server postback for every search request. The second GO button performs the search on the server. Possible reasons for using this method would be to modify the URL or to log event information every time a user uses the Top Search Query functionality.
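Either way, the hand-off to the search page ultimately reduces to composing a results URL with a query string. A minimal sketch in plain JavaScript, where the results-page path is a placeholder and the k (query) and s (scope) parameter names follow the convention used by the SharePoint search results page described in this article:

```javascript
// Builds a SharePoint-style search results URL.
// resultsUrl is a placeholder path; k = query string, s = scope.
function buildSearchUrl(resultsUrl, queryString, scope) {
  return resultsUrl
    + '?k=' + encodeURIComponent(queryString)
    + '&s=' + encodeURIComponent(scope)
}

const url = buildSearchUrl(
  '/searchcenter/Pages/Results.aspx',
  'expense reports',
  'All Sites'
)

console.log(url)
// → /searchcenter/Pages/Results.aspx?k=expense%20reports&s=All%20Sites
```

Using encodeURIComponent keeps spaces and special characters in the user's query from breaking the query string, something the hard-coded GoSearch string above does not have to worry about because the page handles it.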
To perform the server-side URL redirect, you must create a new dropdown list that has the URL pre-built from the data of the TopSearch control. To create the new dropdown control, loop through each row of the DataTable and construct the URL as {resultsUrl}?k={queryString}&s={scope}. The query string and scope are then passed as parameters to the results URL for processing. As you can see, a single letter is used for each parameter: k is for queryString; s is for scope. Now when the user presses the button, a server-side request is made, any processing can be performed, and the redirect to the URL occurs.

Go Directly to BestBet

We're sure that everyone has seen the functionality on Google that lets you go directly to the most relevant search result for the keyword entered. What if the same ability existed in a SharePoint site? Now it can! Using the search results, keywords, and best bets, here's a look at how to create just such a Web part for your users. The first step is connecting to the SharePoint object model and getting a reference to the Keywords class. The Keywords class contains a field named Term that can be matched to the search query. Note that with this method, all the words in the search string must also be in the Term value of the Keyword. Once you have a reference to the Keywords for the site, you can then perform standard operations like Add, View, Create, and Delete. Every Keyword can have a set of objects, called BestBets, associated with the Keyword term. These best bets provide a title and URL that can be used to navigate based on the Keyword term. This section of the article describes how to navigate directly to the top best bet when someone clicks the "I might be Lucky!" button. Creating the button is straightforward: create a new button with a server-side event, and when the user clicks it, search through the keywords to find the right set and redirect the user to the top best bet.
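The selection rule just described, take the first best bet found across the matching keywords, can be sketched independently of the SharePoint object model. Here it is in plain JavaScript with hypothetical in-memory data (the keyword and best-bet shapes are made up for illustration and do not mirror the real SharePoint classes):

```javascript
// Hypothetical stand-in for a filtered keyword collection:
// each keyword has a term and zero or more best bets.
const keywords = [
  { term: 'expense reports', bestBets: [] },
  { term: 'expense reports',
    bestBets: [{ title: 'Expenses', url: '/sites/finance/expenses.aspx' }] },
]

// Mirrors the nested loop logic: return the first best-bet URL found
// across the keywords, or null if no best bet is defined.
function firstBestBetUrl(filteredKeywords) {
  for (const kw of filteredKeywords) {
    for (const bb of kw.bestBets) {
      return bb.url // this is where the redirect would happen
    }
  }
  return null // "No BestBet defined for search query."
}

console.log(firstBestBetUrl(keywords)) // → /sites/finance/expenses.aspx
console.log(firstBestBetUrl([]))       // → null
```

The early return on the first hit is the whole trick: a keyword with no best bets is simply skipped, and only when every keyword is exhausted does the "no best bet" path run.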
Keyword terms are not based on search scopes like the other query string results; for that reason, use the first dropdown list you created, which has the query string as its value, to make look-ups easy. Once you have the query string, there are built-in functions to help you narrow the collection of keyword results. The Keywords class has a function named GetFilteredKeywords that takes three parameters: the first two are enumerations controlling how the results are returned, and the last one is the keyword term you are looking for; in this case, the query string being searched. Now that you have the collection of keywords that matched the search, you can loop through each keyword. Each keyword may or may not have a set of best bets associated with it, so you might have to look at more than one keyword to find the first best bet. Once you find a keyword with a best bet, get the URL from the best bet and direct the user to that location. If you don't find a best bet, you can take the user to another page or simply notify the user that no best bets are available for that term. Figure 8 shows the code that performs the look-up of the best bet.

void bestBetButton_Click(object sender, EventArgs e)
{
    SearchContext searchContext =
        SearchContext.GetContext(SPContext.Current.Site);
    Keywords keywords = new Keywords(searchContext,
        new Uri(SPContext.Current.Site.Url));
    KeywordCollection kwc = keywords.GetFilteredKeywords(
        KeywordView.AllKeywords,
        KeywordFilter.Keyword,
        ddlTopChoicesClient.SelectedValue);

    foreach (Keyword kw in kwc)
    {
        foreach (BestBet bb in kw.BestBets)
        {
            Page.Response.Redirect(bb.Url.ToString());
            return;
        }
    }

    lblUserMsg.Text = "No BestBet defined for search query.";
}

Figure 8: Redirect to SharePoint BestBet.

Conclusion

There are many ways to extend search with SharePoint.
You can create custom search results with XSLT, connect to many other systems with the Business Data Catalog (BDC), and search through files in your enterprise, such as file shares. But there is one constant: users need a way to help them navigate through a system. Building tools that use the abilities of enterprise search can help direct users to the right information on your site, making it more valuable as a tool to all users.

Building tools that use the usage data and search information provides a start to what you can add for your users. Using additional capabilities like keywords and best bets will help you, as content owner, direct your users to relevant information; now you can even see the areas that people are using the most. Search doesn't have to be just building pages that return results, although the Web services provided by Microsoft will allow you to create these pages. The real advantage is using all the tools, such as JavaScript, URL navigation, the SharePoint object model, and more, to create the solution that is right for your organization. You can even perform these actions from other pages or applications by creating an interface, such as a Web service. Be sure to configure security requirements on any data that you expose!

Building controls that take advantage of the work that has been done by the framework can help you make an impact quickly, but keep in mind that undocumented features can change in the future. For this reason, make sure to test your customizations when you apply service packs and perform upgrades.

This concludes the exploration of SharePoint Search. This series has demonstrated how to correctly configure SharePoint Search on a MOSS 2007 portal with a variety of content sources and search scopes. You've taken extra steps to ensure that your users are finding the most relevant details by identifying best-bet results for certain search terms.
You know where to look to gain insight into how search is being used through a variety of built-in reporting pages. To further your understanding of how search data is stored and tracked, you've used Reflector to dive into the guts of SharePoint source code to discover its hidden secrets. Finally, you've applied this knowledge to create a custom Web part, which uses the portal itself to keep its contents fresh and relevant. The source code accompanying this series is available for download.

Matthew S. Ranlett, a senior consultant with Intellinet's Information Worker team, is based out of Atlanta. A Microsoft SQL Server MVP, Matt is co-author of Professional SharePoint 2007 Development, and co-founder of the Atlanta .NET Regular Guys, hosted at DevCow ().

Brendon Schwartz is a principal consultant with Slalom Consulting in Atlanta specializing in SharePoint 2007. Brendon is a Microsoft MVP, co-author of Professional SharePoint 2007 Development, and co-founder of the Atlanta .NET Regular Guys, hosted at DevCow ().
https://www.itprotoday.com/file-sharing-and-management/search-part-ii
One of the flags to the QueueUserWorkItem function is WT_EXECUTELONGFUNCTION. The documentation for that flag reads: "The callback function can perform a long wait. This flag helps the system to decide if it should create a new thread."

As noted in the documentation, the thread pool uses this flag to decide whether it should create a new thread or wait for an existing work item to finish. If all the current thread pool threads are busy running work items and there is another work item to dispatch, it will tend to wait for one of the existing work items to complete if they are "short", because the expectation is that some work item will finish quickly and its thread will become available to run a new work item. On the other hand, if the work items are marked WT_EXECUTELONGFUNCTION, then the thread pool knows that waiting for the running work item to complete is not going to be very productive, so it is more likely to create a new thread.

If you fail to mark a long work item with the WT_EXECUTELONGFUNCTION flag, then the thread pool ends up waiting for that work item to complete, when it really should be kicking off a new thread. Eventually, the thread pool gets impatient and figures out that you lied to it, and it creates a new thread anyway. But it often takes a while before the thread pool realizes that it's been waiting in vain. Let's illustrate this with a simple console program.
#include <windows.h>
#include <stdio.h>

DWORD g_dwLastTick;

void CALLBACK Tick(void *, BOOLEAN)
{
  DWORD dwTick = GetTickCount();
  printf("%5d\n", dwTick - g_dwLastTick);
}

DWORD CALLBACK Clog(void *)
{
  Sleep(4000);
  return 0;
}

int __cdecl main(int argc, char* argv[])
{
  g_dwLastTick = GetTickCount();
  switch (argc) {
  case 2: QueueUserWorkItem(Clog, NULL, 0); break;
  case 3: QueueUserWorkItem(Clog, NULL, WT_EXECUTELONGFUNCTION); break;
  }
  HANDLE hTimer;
  CreateTimerQueueTimer(&hTimer, NULL, Tick, NULL, 250, 250, 0);
  Sleep(INFINITE);
  return 0;
}

This program creates a periodic thread pool work item that fires every 250ms, and which merely prints how much time has elapsed since the timer was started. As a baseline, run the program with no parameters, and observe that the callbacks occur at roughly 250ms intervals, as expected.

  251
  501
  751
 1012
^C

Next, run the program with a single command line parameter. This causes the "case 2" to be taken, where the "Clog" work item is queued. The "Clog" does what its name says: it clogs up the work item queue by taking a long time (four seconds) to complete. Notice that the first callback doesn't occur for a whole second.

 1001
 1011
 1021
 1021
 1252
 1502
 1752
^C

That's because we queued the "Clog" work item without the WT_EXECUTELONGFUNCTION flag. In other words, we told the thread pool, "Oh, don't worry about this guy, he'll be finished soon." The thread pool wanted to run the Tick event, and since the Clog work item was marked as "fast", the thread pool decided to wait for it and recycle its thread rather than create a new one. After about a second, the thread pool got impatient and spun up a new thread to service the now-long-overdue Tick events. Notice that as soon as the first Tick event was processed, three more were fired in rapid succession. That's because the thread pool realized that it had fallen four events behind (thanks to the clog) and had to fire the next three immediately just to clear its backlog.
The fifth and subsequent events fire roughly on time because the thread pool has figured out that the Clog really is a clog and should be treated as a long-running event.

Finally, run the program with two command line parameters. This causes the "case 3" to be taken, where we queue up the Clog but also pass the WT_EXECUTELONGFUNCTION flag.

  251
  511
  761
 1012
^C

Notice that with this hint, the thread pool no longer gets fooled by the Clog and knows to spin up a new thread to handle the Tick events.

Moral of the story: If you're going to go wading into the thread pool, make sure you play friendly with other kids and let the thread pool know ahead of time whether you're going to take a long time. This allows the thread pool to keep the number of worker threads low (thus reaping the benefits of thread pooling) while still creating enough threads to keep the events flowing smoothly.

Exercise: What are the consequences to the thread pool if you create a thread pool timer whose callback takes longer to complete than its timer period?

When I've looked at any of these thread pool API's, I've wondered how is the thread pool created, who creates it and how is it maintained? Are there any API's that allow you to look at or interact with the thread pool?

John: describes when the queue is created. I don't know how you can query or cancel work items in the queue. However, one redneck way to count the number of work items in the queue is to simply increment a variable when you call QueueUserWorkItem(), and decrement it when your job runs. Similarly, you could implement cancelling jobs by reserving a bit in the data structure you use to initialize each work item that indicates if the work item should simply return. Anyone wanting to cancel that work item simply sets that bit, and when the work item notices it goes away. Of course all this requires that you program your stuff well.
;-)

I'm just curious – in this sample code – why TCHARs or _tmain are not used – being that seems to be all the rage in windows literature … Maybe Raymond addressed this in an earlier post? Doesn't really complicate the snippet does it? Especially when I notice __cdecl and the windows specific types? Just curious. I'm a huge fan of the blog. I'm probably reading too much into the snippet. Admittedly, I'm a weekend windows programmer – so I tend to read from gurus like a bit too closely. I guess I'm slightly wondering if Raymond knows something I've not realized or not been introduced to yet. Like, the OS isn't natively UNICODE – something that'd astound me … maybe he's just writing to be pre win2k compatible? At any rate, reminds me of advanced programming books using for(int i = 0; i < x; i++) when they could just as easily have written it as for(int i = 0; i < x; ++i) Many thanks … not at all trying to be offensive in this post. Just a newbie to Raymond.

Since the code doesn't manipulate file names, receive input, and is hard-coded English, adding Unicode support would just distract from the point of the article. (I don't see why the difference between i++ and ++i is important in the example above; they both compile to the same object code.)

Thanks. That answers my question. My (++i) vs (i++) example was more about semantics and intention than actual object code. Just getting a feel for your style. Again, thanks and please keep up the great insights. -Luther

nobody can solve the exercise??

++i vs i++ is all about the final object code. If you don't understand the difference between the two, you will not understand when ++i is better than i++.

I am trying to compile this with VS2003, and I get "error C3861: 'QueueUserWorkItem': identifier not found, even with argument-dependent lookup". Perhaps I am not config right for multi-thread? Do I need ATL? If I hover over the call, I get a parm list, so something is finding it.
:) OK – OK, Norman … turns out to be a bad example. My apologies as it sounds like I’ve offended you. My question boiled down to why Raymond was using chars in a clearly win32 program … WHICH!! … Raymond answered for me. I wasn’t sure if it was intentionally or just for didactic purposes. There is no need to expand on his answer. Thanks "What are the consequences to the thread pool if you create a thread pool timer whose callback takes longer to complete than its timer period?" In .Net, the timer function will be called successively. If the timer function uses only local variables, or if it’s properly synchronized (which it ought to be), then at least you won’t trash anything. However, what you will do (if the timer handler is always longer than the interval) is exhaust the thread pool, at which point your timer function will essentially run repeatedly, forever, as new calls are getting queued up faster than existing calls get completed. This can have really unfortunate effects on the rest of the application, since there are then no pool threads for anything else, either. (The program will appear to hang, only to process those events a random period of time later.) Monday, July 25, 2005 2:46 PM by Luther > My apologies as it sounds like I’ve offended > you. It looks like there was some misunderstanding involved. In the past there really have been religious arguments over choices of prefix vs. postfix operators in cases where it didn’t matter. Sometimes agnostics can figure out why religious arguments arise but in this case I couldn’t even figure out why. Anyway it looked like you were bringing it here, so I balanced it. Rest assured that if you had posted for the opposite side then I would have posted for the opposite’s opposite in exactly the same way ^_^ No problem … My thought was that by definition, the postfix operator returned the old value and that in general, if I didn’t need the old value – I ought to use the prefix operator. Period. 
If nothing else, I thought this was sort of a "self-commenting" idiom. I do realize that the compiler is "smart" and may do the optimum thing in either case, but I considered it better form, explicitly clearer … I consider that the more hints I can include in my code, all the better … for both the person reading my code and the compiler compiling my code. "Yes, I choose prefix notation here bcs I absolutely do not need the old value." For me, prefix notation here was about keeping the code TEXT more closely consistent with the intended result in the object code. But it is clear that practically – it makes no difference. It is also clear that many developers would have no problem understanding intention with either notation (given the context). What really helped nail the coffin was this comment from Kernighan and Ritchie: "Section 2.8 … In a context where no value is wanted, just the incrementing effect, as in if (c == '\n') nl++; prefix and postfix are the same." So for my benefit, even from a strictly "language semantic" standpoint, there is no difference. "for(…i++)" is not only optimized by the compiler, but by language definition (or at least, K&R's suggestion), it is the same operation as "for(…++i)". Many thanks to all who made this revelation possible ;-) I may continue to use prefix notation in for loops but I realize that it isn't always necessary – so I will no longer tease my coworkers ;-)

Raymond, do you know why the .NET equivalent of QueueUserWorkItem does not support a WT_EXECUTELONGFUNCTION flag? Also, do you know why the .NET FCL bothers to expose UnsafeQueueUserWorkItem? The documentation describes the security risks of doing this, but not the benefit (of which there presumably is some).

Um, read the subtitle of the blog again? Try asking somebody who works on .NET.
Owen — the documentation for UnsafeQueueUserWorkItem says that the difference is that QueueUserWorkItem’s worker thread "inherits" the stack of the caller (the caller of QueueUserWorkItem, that is) when the thread starts executing the work-item. The Unsafe version does not "inherit" the stack. ("… does not propagate the calling stack onto the worker thread…") This only matters when the code has security requirements set, and it’s doing full stack walks instead of just link checks. (Of course, "you can’t trust the return address" anyway, but apparently the .net people didn’t figure that out, or maybe there’s a reason that the issues mentioned in Raymond’s "you can’t trust the return address" blog entry don’t apply to .net.) As for why it doesn’t support the long-function flag… I don’t know, I’m not on the design team. I just use it. ;-) Note that WT_EXECUTELONGFUNCTION won’t really do much of anything in Longhorn. It turned out that a lot of people were just using it to spin up threads faster, since the threadpool was throttling pretty aggressively. So now it just spins up threads very quickly all the time; it tries hard to fully utilize the available processor bandwidth.
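Editorial aside: the clog behavior Raymond demonstrates is not specific to the Win32 thread pool. As a rough cross-language illustration (not part of the original post), Python's concurrent.futures shows the same effect when a pool has no spare workers — a long task that was queued without warning delays everything behind it:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A pool with a single worker stands in for a throttled thread pool.
pool = ThreadPoolExecutor(max_workers=1)

def clog():
    # A long-running work item that was not announced as long-running.
    time.sleep(1.0)

def tick(start):
    # The "timer" event: report how long it sat in the queue.
    return time.monotonic() - start

start = time.monotonic()
pool.submit(clog)                  # the clog is dispatched first
future = pool.submit(tick, start)  # the tick queues up behind it
elapsed = future.result()
print(f"tick ran after {elapsed:.2f}s")  # roughly one second late: it waited out the clog
pool.shutdown()
```

Unlike the Win32 pool, ThreadPoolExecutor never grows past max_workers, so the only remedy is sizing the pool for long tasks up front — which is essentially the information WT_EXECUTELONGFUNCTION communicates.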
https://blogs.msdn.microsoft.com/oldnewthing/20050722-15/?p=34843
Jan 07, 2008 03:00 PM

As an Ant task, JtestR supports Ant, buildr and Maven2 integration.

import java.util.HashMap

describe "An empty", HashMap do
  before :each do
    @hash_map = HashMap.new
  end

  it "should be able to add an entry to it" do
    @hash_map.put "foo", "bar"
    @hash_map.get("foo").should == "bar"
  end

  it "should return a keyset iterator that throws an exception on next" do
    proc do
      @hash_map.key_set.iterator.next
    end.should raise_error(java.util.NoSuchElementException)
  end
end
http://www.infoq.com/news/2008/01/boost-java-test
Flir A310 and A320 Thermal Camera

mohitsharma44: Hello everyone, I have a flir a310 and a320 camera. I was following tomas123 and others on the exiftool forums and eevblog (for quite some time now). I am working on obtaining temperature from raw pixel values of the camera (just like the script for the e4). So far I am following tomas123's php script (that I converted to python with a few other things), however there is a small problem: The Flir A310 is giving the "AtmosphericTemperature" as -273.15C |O because of which my raw atm temp is the same as the -ve plank_o value. I proceeded with the calculation and the max and min temp look awfully close to Flir Tools's reading. Flir tech denied helping in this case and all their replies concentrated on forcing me to use Flir Tools + ??? (which I have, but for some applications it is not enough). Did anyone have any success in using the a310 and/or a320 and obtaining the correct temperature? I am open to suggestions and pointers in the right direction.

mohitsharma44: In case anyone would like to play with the files, I am attaching:

* A310's radiometric jpeg image ()
* header ()
* ir image ()
* csv file ()

all created using a python script (I am using the exact same formula as tomas123's php script, except that for converting the ir image to temperature I am using numpy). I am attaching the snippet for that below as well. In case anyone needs it, here is the snippet for reading test_ir.jpg, converting pixel values to temperature, and saving it as a csv file:

--- Code: ---
import numpy as np
from scipy.misc import imread

im = imread('test_ir.jpg')
raw_temp_pix = (im - (1 - tau) * raw_atm - (1 - emmissivity) * tau * raw_refl) / emmissivity / tau
t_im = (b / np.log(r1 / (r2 * (raw_temp_pix[:] + o)) + f) - 273.15)
np.savetxt(csv_fname, t_im, delimiter=',')
--- End code ---

I'll be happy to share the bitbucket repo if anyone needs it.
mohitsharma44: Hate to say it, but I solved it using a brute force technique (still don't understand the reason). Using the above formula, I checked that I am off by ~0.07 deg C when compared with FLIR Tools's output. Modifying it the following way, I get the exact same values that FLIR Tools outputs to csv:

--- Code: ---
raw_temp_pix = (im - (1 - emmissivity) * raw_refl) / emmissivity / tau
--- End code ---

This is kind of confusing. It should either be that I do not consider tau at all, and it should just be

--- Code: ---
(raw_val - (1 - emissivity) * raw_refl) / emissivity
--- End code ---

Why do I have to divide the whole result by tau?

ps: I am not an expert in thermography.

tomas123: The calculation of atmosphere transmissivity is more complex. Load my excel sheet from this post and the flir document from the following post and compare it with your script:

tomas123: your equation is right

--- Code: ---
raw_temp_pix = (im - (1 - tau) * raw_atm - (1 - emmissivity) * tau * raw_refl) / emmissivity / tau
--- End code ---

goes to

--- Code: ---
im = raw_temp_pix * emmissivity * tau + (1 - emmissivity) * tau * raw_refl + (1 - tau) * raw_atm
--- End code ---

and this is identical with equation 3. I think you have some troubles with parameter tau. Please feed my excel sheet with the exiftool flir tags and compare the results with your csv values.
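For readers following along at home, the point that the compensation and its inverse are exact algebraic counterparts can be checked numerically. In the sketch below, the Planck constants (r1, r2, b, o, f), the emissivity/tau values, and the 20 °C reflected temperature are illustrative placeholders, not values read from mohitsharma44's A310 metadata:

```python
import numpy as np

# Illustrative Planck calibration constants (each camera has its own set;
# the real ones are read from the radiometric JPEG with exiftool).
r1, r2, b, o, f = 14906.216, 0.010956882, 1396.5, -7261.0, 1.0
emissivity, tau = 0.95, 0.99

def celsius_to_raw_obj(t):
    # Planck law in FLIR's parameterization: temperature -> object raw value
    return r1 / (r2 * (np.exp(b / (t + 273.15)) - f)) - o

raw_refl = celsius_to_raw_obj(20.0)  # assumed reflected apparent temperature of 20 C

def raw_to_celsius(raw):
    # Undo the emissivity/reflection compensation (the simplified form
    # discussed above, without the atmosphere term), then invert Planck.
    raw_obj = (raw - (1 - emissivity) * raw_refl) / emissivity / tau
    return b / np.log(r1 / (r2 * (raw_obj + o)) + f) - 273.15

def celsius_to_raw(t):
    # Forward direction: exact inverse of the compensation above.
    return celsius_to_raw_obj(t) * emissivity * tau + (1 - emissivity) * raw_refl

print(round(raw_to_celsius(celsius_to_raw(30.0)), 6))  # 30.0
```

A 30 °C scene converted to a raw value and back recovers 30 °C, confirming the two forms invert each other under these assumptions.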
https://www.eevblog.com/forum/thermal-imaging/flir-a310-and-a320-thermal-camera/?wap2;PHPSESSID=2m3vr1ch52simajluk15lofms7
. mailto scheme, it's possible to link to other applications by using other.., and on mobile you want that link to open your app. iOS terms this concept "universal links" and Android calls it "deep links" (unfortunate naming, since deep links can also refer to the topic above). Expo supports these links on both platforms (with some configuration). Expo also supports deferred deep links with Branch.

<a href="">, instead we have to use Linking.openURL.

import * as Linking from 'expo-linking';

Linking.openURL('');

Anchor component that will open a URL when it is pressed.

import { Text } from 'react-native';
import * as Linking from 'expo-linking';

WebBrowser.openBrowserAsync and React Native's Linking.openURL. Often WebBrowser is a better option because it's a modal within your app and users can easily close out of it and return to your app.

Install expo-web-browser like expo install expo-web-browser and use it like this:

import React, { Component } from 'react';
import { Button, Linking, View, StyleSheet } from 'react-native';
import * as WebBrowser from 'expo-web-browser';
import Constants from 'expo-constants';

export default class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Button
          title="Open URL with ReactNative.Linking"
          onPress={this._handleOpenWithLinking}
          style={styles.button}
        />
        <Button
          title="Open URL with Expo.WebBrowser"
          onPress={this._handleOpenWithWebBrowser}
          style={styles.button}
        />
      </View>
    );
  }

  _handleOpenWithLinking = () => {
    Linking.openURL('');
  };

  _handleOpenWithWebBrowser = () => {
    WebBrowser.openBrowserAsync('');
  };
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
    paddingTop: Constants.statusBarHeight,
    backgroundColor: '#ecf0f1',
  },
  button: {
    marginVertical: 10,
  },
});

(use only lower case):

{
  "expo": {
    "scheme": "myapp"
  }
}

myapp://.

app.json, changing the scheme key after your app is already ejected will not have the desired effect.
If you'd like to change the deep link scheme in your bare app, you'll need to replace the existing scheme with the new one in the following locations:

scheme in app.json
CFBundleURLSchemes in ios/<your-project-name>/Supporting/Info.plist
the data android:scheme tag in android/app/src/main/AndroidManifest.xml

Linking module. When you want to provide a service with a url that it needs to redirect back into your app, you can call Linking.makeUrl() and it will resolve to the following:

exp://exp.host/@community/with-webbrowser-redirect
myapp://
exp://localhost:19000

Linking.makeUrl(). These will be used by your app to receive data, which we will talk about in the next section.

Linking.addEventListener('url', callback).

Linking.getInitialURL -- it returns a Promise that resolves to the url, if there is one.

Linking.makeUrl(path, queryParams) will construct a working url automatically for you. You can use it like this:

let redirectUrl = Linking.makeUrl('path/into/app', { hello: 'world', goodbye: 'now' });

myapp:///path/into/app?hello=world&goodbye=now for a standalone app.

Linking.parse() to get back the path and query parameters you passed in.

_handleUrl = url => {
  this.setState({ url });
  let { path, queryParams } =.

/.well-known/apple-app-site-association (with no extension). The AASA contains JSON which specifies your Apple app ID and a list of paths on your domain that should be handled by your mobile app. For example, if you want links of the format be opened by your mobile app, your AASA would have the following contents:

{
  "applinks": {
    "apps": [],
    "details": [{
      "appID": "LKWJEF.io.myapp.example",
      "paths": ["/records/*"]
    }]
  }
}

* (with wildcard matching for the record ID) should be opened directly by your mobile app. See Apple's documentation for further details on the format of the AASA. Branch provides an AASA validator which can help you confirm that your AASA is correctly deployed and has a valid format.
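A quick way to sanity-check which appID an AASA document claims for a given path is to run the patterns through a matcher. The helper below is hypothetical (not part of Expo), and Python's fnmatch only approximates Apple's actual pattern rules (which also support "NOT" prefixes and "?" single-character wildcards):

```python
import fnmatch
import json

# The example AASA document from above.
aasa = json.loads("""
{
  "applinks": {
    "apps": [],
    "details": [{
      "appID": "LKWJEF.io.myapp.example",
      "paths": ["/records/*"]
    }]
  }
}
""")

def app_for_path(doc, path):
    # Return the first appID whose path patterns match, else None.
    for detail in doc["applinks"]["details"]:
        if any(fnmatch.fnmatch(path, pattern) for pattern in detail["paths"]):
            return detail["appID"]
    return None

print(app_for_path(aasa, "/records/123"))  # LKWJEF.io.myapp.example
print(app_for_path(aasa, "/about"))        # None
```

Here /records/123 matches the /records/* pattern and resolves to the app, while /about matches nothing and would stay in the browser.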
associatedDomains configuration to your app.json (make sure to follow Apple's specified format). Second, you need to edit your App ID on the Apple developer portal and enable the "Associated Domains" application service. To do so, go to the App IDs section and click on your App ID. Select Edit, check the Associated Domains checkbox and click Done. You will also need to regenerate your provisioning profile after adding the service to the App ID. This can be done by running expo build:ios --clear-provisioning-profile inside of your app directory. Next time you build your app, it will prompt you to create a new one.

Add intentFilters to the Android section of your app.json. The following basic configuration will cause your app to be presented in the standard Android dialog as an option for handling any record links to myapp.io:

"intentFilters": [
  {
    "action": "VIEW",
    "data": [
      {
        "scheme": "https",
        "host": "*.myapp.io",
        "pathPrefix": "/records"
      }
    ],
    "category": ["BROWSABLE", "DEFAULT"]
  }
]

/.well-known/assetlinks.json specifying your app ID and which links should be opened by your app. See Android's documentation for details about formatting this file. Second, add "autoVerify": true to the intent filter in your app.json; this tells Android to check for your assetlinks.json on your server and register your app as the automatic handler for the specified paths:

"intentFilters": [
  {
    "action": "VIEW",
    "autoVerify": true,
    "data": [
      {
        "scheme": "https",
        "host": "*.myapp.io",
        "pathPrefix": "/records"
      }
    ],
    "category": ["BROWSABLE", "DEFAULT"]
  }
]

exp://exp.host/@community/native-component-list might just show up as plain text in your browser rather than as a link (exp://exp.host/@community/native-component-list).

<script>window.location.replace("example://path/into/app");</script>
https://docs.expo.io/workflow/linking/
Port scanning may be defined as a surveillance technique used to locate the open ports available on a particular host. A network administrator, penetration tester or hacker can use this technique. We can configure the port scanner according to our requirements to get maximum information from the target system.

Now, consider the information we can get after running the port scan −

Information about open ports.
Information about the services running on each port.
Information about the OS and MAC address of the target host.

Port scanning is just like a thief who wants to enter a house by checking every door and window to see which ones are open.

As discussed earlier, the TCP/IP protocol suite, used for communication over the internet, is made up of two protocols, namely TCP and UDP. Both of the protocols have ports 0 to 65535. As it is always advisable to close unnecessary ports on our system, essentially there are more than 65000 doors (ports) to lock. These 65535 ports can be divided into the following three ranges −

System or well-known ports: from 0 to 1023
User or registered ports: from 1024 to 49151
Dynamic or private ports: all > 49151

In our previous chapter, we discussed what a socket is. Now, we will build a simple port scanner using socket. Following is a Python script for a port scanner using socket −

from socket import *
import time

startTime = time.time()

if __name__ == '__main__':
   target = input('Enter the host to be scanned: ')
   t_IP = gethostbyname(target)
   print ('Starting scan on host: ', t_IP)
   for i in range(50, 500):
      s = socket(AF_INET, SOCK_STREAM)
      conn = s.connect_ex((t_IP, i))
      if (conn == 0):
         print ('Port %d: OPEN' % (i,))
      s.close()
   print ('Time taken:', time.time() - startTime)

When we run the above script, it will prompt for the hostname; you can provide any hostname, like the name of any website, but be careful because port scanning can be seen as, or construed as, a crime.
We should never execute a port scanner against any website or IP address without explicit, written permission from the owner of the server or computer that we are targeting. Port scanning is akin to going to someone's house and checking their doors and windows. That is why it is advisable to use the port scanner on localhost or your own website (if any).

The above script generates the following output −

Enter the host to be scanned: localhost
Starting scan on host: 127.0.0.1
Port 135: OPEN
Port 445: OPEN
Time taken: 452.3990001678467

The output shows that in the range of 50 to 500 (as provided in the script), this port scanner found two ports — port 135 and 445 — open. We can change this range and check for other ports.

ICMP is not a port scan, but it is used to ping the remote host to check if the host is up. This scan is useful when we have to check a number of live hosts in a network. It involves sending an ICMP ECHO Request to a host; if that host is live, it will return an ICMP ECHO Reply.

The above process of sending an ICMP request is also called a ping scan, which is provided by the operating system's ping command.

Actually, in one sense or another, a ping scan is also known as a ping sweep. The only difference is that a ping sweep is the procedure to find the availability of more than one machine in a specific network range.

For example, suppose we want to test a full list of IP addresses. By using the ping scan, i.e., the ping command of the operating system, it would be very time consuming to scan IP addresses one by one. That is why we need to use a ping sweep script. Following is a Python script for finding live hosts by using the ping sweep −

import os
import platform
from datetime import datetime

net = input("Enter the Network Address: ")
net1 = net.split('.')
a = '.'
net2 = net1[0] + a + net1[1] + a + net1[2] + a
st1 = int(input("Enter the Starting Number: "))
en1 = int(input("Enter the Last Number: "))
en1 = en1 + 1
oper = platform.system()

if (oper == "Windows"):
   ping1 = "ping -n 1 "
elif (oper == "Linux"):
   ping1 = "ping -c 1 "
else:
   ping1 = "ping -c 1 "

t1 = datetime.now()
print ("Scanning in Progress:")

for ip in range(st1, en1):
   addr = net2 + str(ip)
   comm = ping1 + addr
   response = os.popen(comm)
   for line in response.readlines():
      if (line.count("TTL")):
         break
   if (line.count("TTL")):
      print (addr, "--> Live")

t2 = datetime.now()
total = t2 - t1
print ("Scanning completed in: ", total)

The above script works in three parts. It first selects the range of IP addresses to ping sweep scan by splitting the network address into parts. This is followed by selecting the ping command according to the operating system. Finally, it reports the live hosts and the time taken to complete the scanning process.

The above script generates the following output −

Enter the Network Address: 127.0.0.1
Enter the Starting Number: 1
Enter the Last Number: 100
Scanning in Progress:
Scanning completed in: 0:00:02.711155

The above output shows no live hosts because the firewall is on and ICMP inbound settings are disabled. After changing these settings, we can get the list of live hosts in the range from 1 to 100 in the output.

To establish a TCP connection, the host must perform a three-way handshake. Follow these steps to perform the action −

Step 1 − Packet with SYN flag set

In this step, the system that is trying to initiate a connection starts with a packet that has the SYN flag set.

Step 2 − Packet with SYN-ACK flag set

In this step, the target system returns a packet with the SYN and ACK flags set.

Step 3 − Packet with ACK flag set

At last, the initiating system will return a packet to the original target system with the ACK flag set.

Nevertheless, the question that arises here is: if we can do port scanning using the ICMP echo request and reply method (ping sweep scanner), then why do we need a TCP scan?
The main reason behind it is that suppose we turn off the ICMP ECHO reply feature or use a firewall to block ICMP packets; then the ping sweep scanner will not work and we need a TCP scan.

import socket
from datetime import datetime

net = input("Enter the IP address: ")
net1 = net.split('.')
a = '.'
net2 = net1[0] + a + net1[1] + a + net1[2] + a
st1 = int(input("Enter the Starting Number: "))
en1 = int(input("Enter the Last Number: "))
en1 = en1 + 1
t1 = datetime.now()

def scan(addr):
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   socket.setdefaulttimeout(1)
   result = s.connect_ex((addr, 135))
   if result == 0:
      return 1
   else:
      return 0

def run1():
   for ip in range(st1, en1):
      addr = net2 + str(ip)
      if (scan(addr)):
         print (addr, "is live")

run1()
t2 = datetime.now()
total = t2 - t1
print ("Scanning completed in: ", total)

The above script works in three parts. It selects the range of IP addresses for the scan by splitting the network address into parts. This is followed by a function for scanning each address, which uses a socket. Finally, it reports the live hosts and the time taken to complete the scanning process.

The result = s.connect_ex((addr, 135)) statement returns an error indicator. The error indicator is 0 if the operation succeeds; otherwise, it is the value of the errno variable. Here, we used port 135; this scanner works for the Windows system. Another port which will work here is 445 (Microsoft-DS, Active Directory), which is usually open.

The above script generates the following output −

Enter the IP address: 127.0.0.1
Enter the Starting Number: 1
Enter the Last Number: 10
127.0.0.1 is live
127.0.0.2 is live
127.0.0.3 is live
127.0.0.4 is live
127.0.0.5 is live
127.0.0.6 is live
127.0.0.7 is live
127.0.0.8 is live
127.0.0.9 is live
127.0.0.10 is live
Scanning completed in: 0:00:00.230025

As we have seen in the above cases, port scanning can be very slow.
For example, the time taken to scan ports 50 to 500 using the socket port scanner above is 452.3990001678467 seconds. To improve the speed we can use threading. Following is an example of a port scanner using threading −

import socket
import time
import threading
from queue import Queue

socket.setdefaulttimeout(0.25)
print_lock = threading.Lock()

target = input('Enter the host to be scanned: ')
t_IP = socket.gethostbyname(target)
print('Starting scan on host: ', t_IP)

def portscan(port):
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   try:
      # connect() returns None, so we close the socket itself on success
      s.connect((t_IP, port))
      with print_lock:
         print(port, 'is open')
      s.close()
   except:
      pass

def threader():
   while True:
      worker = q.get()
      portscan(worker)
      q.task_done()

q = Queue()
startTime = time.time()

for x in range(100):
   t = threading.Thread(target = threader)
   t.daemon = True
   t.start()

for worker in range(1, 500):
   q.put(worker)

q.join()
print('Time taken:', time.time() - startTime)

In the above script, we import the threading module, which is part of the Python standard library. We use the thread locking concept, print_lock = threading.Lock(), to avoid several threads writing output at the same time: threading.Lock() allows only a single thread to hold the lock at a time, so no interleaved writes occur. Later, we define a threader() function, which pulls port numbers from the queue and passes them to portscan().

Now after running the above script, we can see the difference in speed for scanning ports 50 to 500. It took only 1.3589999675750732 seconds, far less than the 452.3990001678467 seconds the socket port scanner took to scan the same number of ports on localhost.

The above script generates the following output −

Enter the host to be scanned: localhost
Starting scan on host: 127.0.0.1
135 is open
445 is open
Time taken: 1.3589999675750732
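The Queue-and-threads pattern above predates concurrent.futures; the same scan can be written more compactly with a thread pool. This variant is not from the tutorial, just a sketch of an alternative:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_open(host, port, timeout=0.25):
    # connect_ex() returns 0 when the TCP handshake succeeds
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan(host, ports, workers=100):
    # the pool replaces the hand-rolled Queue/threader machinery
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = pool.map(lambda p: is_open(host, p), ports)
        return [p for p, open_ in zip(ports, flags) if open_]

if __name__ == "__main__":
    print(scan("127.0.0.1", range(1, 500)))
```

The pool also handles shutdown for you: when the with-block exits, all worker threads have finished, so no daemon-thread or q.join() bookkeeping is needed.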
https://www.tutorialspoint.com/python_penetration_testing/python_penetration_testing_network_scanner.htm
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

During discussion of the library, many questions were about ABI stability and whether the library should take care of it. It was decided that a stable ABI could be a useful feature, but it would add a lot of overhead and make the library less simple to use. For those who do not require ABI stability across compilers, such a feature would be overkill. It was decided to keep this library simple and low level, so that it can be used to build an ABI-stable plugin system by users who require one, while adding no overhead for everyone else.

There are some existing C++ plugin systems. Most of them force the user to adopt some predefined API, and the problem is that all of those APIs differ. To be more broadly usable, Boost.DLL does not force an API; it is up to the user to design a suitable one.

Some methods of the library use boost::filesystem::path or return std::vector<std::string>. This may look suboptimal at first, but there is a reason for it. boost::filesystem::path allows Unicode strings to be used transparently alongside non-Unicode ones. Using it provides a more user-friendly interface for the library, while the performance overhead is not noticeable next to the slow file system operations that the path-accepting methods perform anyway.

std::vector<std::string> variables are returned by the library_info methods. Querying a library is a slow procedure in any case: it reads parts of the file from disc at random offsets and executes algorithms that sometimes have linear complexity in the number of sections or exported symbols. Returning std::vector<std::string> simplifies the implementation and does not require the user to keep an instance of library_info alive after the query. A barely noticeable performance overhead in rarely called methods seems a reasonable trade-off. Other methods are assumed to be hot paths and are optimized as much as possible.
There is a good reason to implement self loading via shared_library(program_location()) instead of providing a shared_library::load_self() member method. That reason is the requirement to be able to call shared_library(this_line_location()) from any place, even from the main binary. We need that to link plugins into the binary and to create a transparent reference counting mechanism. Making multiple interfaces that do exactly the same thing looks unreasonable to me; that's why shared_library(program_location()) and shared_library(this_line_location()) are used, without shared_library::load_self().

Mangling depends on source code. For example, "boost::foo" could be a foo function or a foo variable, and depending on which it is, it must be mangled in different ways. More problems arise if foo is an overloaded function that accepts parameters: "boost::foo(variant<int, short>)". In that case the full name of the parameter type must be specified, which could be boost::variant<int, short> or variant<int, short, void_, void_> ...

There was an idea to allow the user to forward declare the function and generate the mangled name from it:

namespace boost { void foo(variant<int, short>); }
std::string mangled_name = boost::dll::magic_mangle(boost::foo);

But that idea failed epically because of linker problems and the lack of a reliable way to get a mangled symbol name from compiler internals at compile time. That's why aliases were considered a lesser evil:

BOOST_DLL_ALIAS(boost::foo, foo_variant) // in plugin

"foo_variant" // in plugin importer
https://www.boost.org/doc/libs/1_64_0/doc/html/boost_dll/design_rationale.html
Ticket #10860 (reopened enhancement) Full resolution not exposed to Linux guest Description When using VirtualBox on a MacBook Pro Retina, only half the resolution is exposed to, at least, Linux guests. This causes the fonts to be blurry in the guest OS. To verify this, install Linux, and go to fullscreen, type xrandr and notice the resolution is set to 1440x900. To compare with VMWare, full Retina mode is supported: This exposes the current full resolution to the guest, which in the default host retina mode would mean the guest sees 2880x1800 resolution and has to deal with it on its own accordingly. Attachments Change History comment:2 Changed 5 years ago by frank Right, and the RC page asked to report Beta / RC bugs in the forum! comment:3 Changed 5 years ago by frank - Status changed from new to closed - Resolution set to duplicate comment:4 Changed 5 years ago by dsvensson I've seen #10848 but that only describes the host UI issues which have been fixed (at least the hidpi-flag in Info.plist) in the 4.2 beta. This issue is about exposing the 1-1 resolution to the guest OS. comment:5 Changed 5 years ago by dsvensson - Status changed from closed to reopened - Resolution duplicate deleted comment:6 Changed 4 years ago by tdy Any update on this? I'm still getting 1440x900 res in Linux with the guest additions installed. comment:7 Changed 3 years ago by frank - Status changed from reopened to closed - Resolution set to fixed If this is still a problem with VBox 4.3.2, please reopen this ticket. In that case, please attach a VBox.log file of a VM session running on VBox 4.3.2 with 4.3.2 Guest Additions installed. comment:8 Changed 3 years ago by tdy - Status changed from closed to reopened - Resolution fixed deleted VBox.log is attached for VBox 4.3.2 with 4.3.2 Guest Additions installed, vboxguest/vboxsf/vboxvideo modules loaded, and VBoxClient-all executed. The highest exposed resolution from xrandr is still 1440x900. 
comment:9 Changed 3 years ago by stevenleeg Same issue for me, I'm only able to set the resolution to 1440x900 on the 15" Retina MBP. comment:10 Changed 3 years ago by demmeln It would be great to see if this is being worked on, or what exactly the issue is. comment:11 Changed 3 years ago by frank I would like to see more VBox.log files from VM sessions where this problem manifests. So far we are unable to reproduce it. comment:12 Changed 3 years ago by GabrielCox Same here VBox 4.3.4 on a 13" Macbook Pro Retina with 2560x1600 resolution. Guest (Win 7 w/128MB VRAM) shows up as 1280x800 when full screen. Info.plist has the following key set: <key>NSHighResolutionCapable</key> <true/> Additional Log Attached. Changed 3 years ago by GabrielCox - attachment VBox_log_fullscreen_on_2560x1600mbp.log added VB 4.3.4 Win XP Guest Full Screen @ 1280x800 on 2560x1600 Mac comment:13 Changed 3 years ago by GabrielCox Correction on last attachment labeled VBox_log_fullscreen_on_2560x1600mbp.log: The Guest OS is Win7-64, not WinXP Changed 3 years ago by chrisb - attachment rmbp13_vbox.log added comment:14 Changed 3 years ago by chrisb rmbp13_vbox.log shows a 13" MBPR with Virtualbox reporting a fullscreen max resolution of 1280x800 - possible guilty line: 01:05:03.646244 VMMDev::SetVideoModeHint: got a video mode hint (1280x800x0)@(0x0),(1;0) at 0 It ought to be easy to reproduce - this is a completely clean install of debian-7.4.0-amd64-xfce-CD-1.iso with guest additions installed following the instructions. comment:15 Changed 3 years ago by Armada651 @frank I think you may be misunderstanding the issue because you should have no trouble reproducing it. On Mac OS X retina displays 1 pixel is normally scaled up to 4 pixels for application which do not support high dpi. Qt has a nice article explaining the problem: VirtualBox already supports high-dpi for much of the UI, but not for the guest display. 
It seems it uses the device-independent pixels (1px = 4px) instead of the physical pixels (1px = 1px). This results in much lower image quality as everything is scaled by a factor of 4. Changed 3 years ago by pippijn - attachment VBox.2.log added VirtualBox VM 4.3.10 r93012 comment:16 Changed 3 years ago by pippijn I can confirm this happens on the latest VirtualBox 4.3.10 r93012. I attached another VBox.log. comment:17 Changed 3 years ago by pippijn Update: it also still doesn't work on 4.3.14. comment:18 Changed 3 years ago by ejan Same issue in the latest release: 4.3.18. Is this issue being worked on at all? Changed 3 years ago by user1024 - attachment VBox.3.log added VBox 4.3.18, Fedora 20 comment:19 Changed 3 years ago by user1024 Same issue. The VirtualBox guest sees the MacBook's resolution (in my case) as 1280x800 when it is actually 2500x1600. Steps to notice/reproduce on MacBook Pro Retina 13", as mentioned in the original bug: - Install any Linux version with graphics (e.g. Fedora or Ubuntu). Install Guest Additions. - Start the VM and log in. - Go to View->Switch to Fullscreen if not there already. You can also try maximizing the window and going to View -> Auto-resize Guest Display. - In a terminal, run xrandr. Output for me says the current resolution is 1178x735. (In the past I've usually gotten 1280x800.) I attached a VBox.log with Fedora 20 where I followed the above steps, except I was already in fullscreen mode from bootup. I also had my VirtualBox Display Preferences set to Hint, Width=2500, Height=1600. You can see in the log that VBox thinks it's switching to the native fullscreen resolution, which it thinks is only 1200x800 or thereabouts. comment:20 Changed 3 years ago by user1024 Has anyone tried rebuilding from source after changing the Info.plist files to enable high-dpi mode, as in the link posted above by Armada? comment:21 Changed 2 years ago by pt__ This is very easy to reproduce. You need a Retina MacBook Pro running e.g.
OS X Yosemite. Do not plug into an external monitor -- use only the built in LCD. Go to "System Preferences -> Displays" and choose "Best for display". Start a virtual machine (e.g. Ubuntu 14.10) and set it to full screen. Look carefully at some text in the guest. It is scaled at 2x (and therefore slightly blurry). The issue is explained in comment 15. The developer guidelines for fixing this are probably somewhere in here: comment:22 follow-up: ↓ 24 Changed 2 years ago by klaus - Type changed from defect to enhancement Sorry, but you're essentially complaining that VirtualBox follows Apple's guidelines (with the mentioned "Best for display" setting the logical screen resolution is half of the physical screen resolution in both directions). It assumes that the guest OS has no HiDPI support, which is true for the vast majority of the candidates. If it would do 1:1 pixel representation then everything would be displaying many VMs in an unusably tiny way. VirtualBox really listens to the OSX settings. If you want to try: pick "Scaled / More Space", then you'll get a much smaller VM window (simulating full HD resolution). For my eyes the default font size in a Windows 7 VM is already too small. 1:1 pixels are not offered by Apple, you need 3rd party tools to do that, and then everything becomes super tiny. What you're asking for is a new feature: allowing customizable, additional scaling. comment:23 Changed 2 years ago by pt__ Sure, let's say it is a feature. I don't think the current behaviour is "correct" though. I would say it is *very* bad behaviour to not have an option. Windows, Ubuntu and OS X *all* provide HiDPI support. It is not possible to make use of this support using VirtualBox (without making OS X unusable, which is obviously not a real fix). VMWare and Parallels both provide support for this (obviously). Until then, I would say that VirtualBox simply does not have support for HiDPI. 
The only real option for VirtualBox on OS X is to have blurry guests *or* 1:1 tiny pixels on OS X (i.e. make OS X unusable, despite OS X having good support for HiDPI) and normal (HiDPI) guests (as you said). Since many popular guest OS's support HiDPI, I would say that VirtualBox is *not* following the Apple guidelines -- the scaling approach is a last resort when there is no better option, but there clearly is a better option for the majority of guest OS's. comment:24 in reply to: ↑ 22 Changed 2 years ago by user1024 What you're asking for is a new feature: allowing customizable, additional scaling. Happy to call it a feature request rather than bug fix, but a very important one on this platform in my opinion. The blurry experience is not ideal. As originally stated other VM applications have this feature, so it would be really nice to add. Thanks for the attention to the issue. comment:25 Changed 2 years ago by iamthealex I will start out by saying that VirtualBox is a wonderful piece of s/w. Thank you. However, I believe it can be made even better by taking full advantage of a MacBook Pro retina display like VMWare does. I see lots of technical back and forth in this ticket, and I don't care if it's a feature request or a bug report or whether or not VirtualBox is following the spec properly or not. I just want the same resolution in my Linux guest as I see on my MacBookPro Host which has a retina display. For example, a shell on my MBP has much better resolution than a shell on my VirtualBox Linux guest. comment:26 Changed 2 years ago by SpacemanSpiff I have recently upgraded my VBox installation, but it has not fixed the low-resolution problem with my Retina MBP. Here are the details of what I've tried: Host: Early 2013 MBP Retina; OSX 10.9.5 (Mavericks); VBox 4.3.20.96996 Guest: Windows 7 Ultimate (fully updated); VBox Guest Additions 4.3.20.96996; I have played around with various installation and host configurations, with no success. 
This includes: Simple upgrade of VBox (from 4.2.x to 4.3.19) w/ Guest Additions Installation on host. Complete removal of VBox via AppCleaner, then install 4.3.20 & import Win7 guest. Install VBox Guest Additions 4.3.20 with & without 3D support. Win7 with & without Aero mode active. Guest in Windowed/Scaled/FullScreen/Seamless Mode. I should note that I am trying to avoid installing a resolution switcher, or accessing the guest via RDP. Other than what is listed here, I haven't specialized my VBox installation in any way. Nothing seems to get me past 1440x900. Is there some other procedure or configuration that I have missed? -thanks comment:27 Changed 2 years ago by MJD I have the exact same experience that SpacemanSpiff has. My system specs are exactly the same too. I am trying to avoid installing a resolution switcher as a separate application should not be required to utilize a VM in fullscreen at full resolution or with HiDPI. I also share the same opinion as iamthealex. So my question is: Has the Guest HiDPI Support "feature" been added to the road map? Will it be available in the next release? This has my vote! +1 Thanks! comment:28 Changed 2 years ago by pt__ This issue is still present. VirtualBox is blurry on retina displays. comment:29 Changed 2 years ago by pt__ Actually, it looks like the code to support this is already in trunk. Could a developer review this code (using (only) their built-in Retina display)? And perhaps step through it with a debugger. In particular, could you check whether the "dBackingScaleFactor" value is > 1.0? And whether the if statements take the expected branches? (looks a bit contradictory) // // .. 
void UIFrameBuffer::eraseImageRect(QPainter &painter, const QRect &rect,
                                   bool fUseUnscaledHiDPIOutput,
                                   HiDPIOptimizationType hiDPIOptimizationType,
                                   double dBackingScaleFactor)
{
    /* Prepare sub-pixmap: */
    QPixmap subPixmap = QPixmap(rect.width(), rect.height());
    /* If HiDPI 'backing-scale-factor' defined: */
    if (dBackingScaleFactor > 1.0)
    {
        /* Should we
         * perform logical HiDPI scaling and optimize it for performance? */
        if (!fUseUnscaledHiDPIOutput && hiDPIOptimizationType == HiDPIOptimizationType_Performance)
        {
            /* Adjust sub-pixmap: */
            subPixmap = QPixmap(rect.width() * dBackingScaleFactor,
                                rect.height() * dBackingScaleFactor);
        }
#ifdef Q_WS_MAC
# ifdef VBOX_GUI_WITH_HIDPI
        /* Should we
         * do not perform logical HiDPI scaling or
         * perform logical HiDPI scaling and optimize it for performance? */
        if (fUseUnscaledHiDPIOutput || hiDPIOptimizationType == HiDPIOptimizationType_Performance)
        {
            /* Mark sub-pixmap as HiDPI: */
            subPixmap.setDevicePixelRatio(dBackingScaleFactor);
        }
# endif /* VBOX_GUI_WITH_HIDPI */
#endif /* Q_WS_MAC */
    }
    /* Which point we should draw corresponding sub-pixmap? */
    QPointF paintPoint = rect.topLeft();
    /* Take the backing-scale-factor into account: */
    if (fUseUnscaledHiDPIOutput && dBackingScaleFactor > 1.0)
        paintPoint /= dBackingScaleFactor;
    /* Draw sub-pixmap: */
    painter.drawPixmap(paintPoint, subPixmap);
}

comment:30 Changed 2 years ago by samuelparks +1 - would love to see this supported soon

comment:31 Changed 2 years ago by df eta for this? i'm about to switch to vmware...

comment:32 Changed 2 years ago by stianstrips I would also like to see this feature implemented. Having access to the raw hardware, in this case full screen resolution, is a good thing. Need not be default setting.

comment:33 Changed 2 years ago by socalstudent +1 - This would be an excellent feature to support. Might the above trunk code make its way into the VirtualBox 5 release or future betas?
comment:34 Changed 2 years ago by greatpatton This is really needed as Virtualbox VM on a Mac are graphically awful. comment:35 Changed 2 years ago by Jzee Waiting for the same to see....:) comment:36 Changed 23 months ago by tmancill another +1, the graphics really are gross right now... comment:37 Changed 23 months ago by frank The current VirtualBox 5.0 RC1 code contains the code mentioned in comment 29. So everyone is welcome to test 5.0 RC1 and share their findings in our Beta Forum. comment:38 Changed 22 months ago by andreasfrom VirtualBox 5.0 RC2 with "Use Unscaled HiDPI Output" on, reports an incorrect resolution to the Linux guest. My MacBook has a resolution of 2560x1600, but the Linux guest sees 2880x1800. Manually setting the Linux resolution to 2560x1600, my native resolution, means the guest doesn't fill the whole screen. Edit: Realised too late, that I shouldn't have posted here (sorry). Link to the Beta Forum: The VirtualBox version is actually the 4.2 beta but it couldn't be selected.
https://www.virtualbox.org/ticket/10860?cversion=0&cnum_hist=4
Java Date/Calendar FAQ: How do I get an instance of today's date with Java? (A Java Date to represent "now".)

Although you're strongly encouraged not to get today's date using the Date class, like this:

Date today = new Date();

you can still get today's date in Java using one line of code with the Java Calendar class, like this:

// the correct way to get today's date
Date today = Calendar.getInstance().getTime();

This code can be read as, "Create an instance of a Java Calendar, which defaults to the current date and time; then use the getTime method to convert this to a java.util.Date reference." There are other ways to do this that are similar to this approach, but I believe this is the correct way to get an instance of today's date, i.e., a date to represent "now".

A complete Java "get today's date" example

For the sake of completeness — and to help you experiment — here's the source code for a complete Java class which demonstrates this technique of getting today's date (a "now" date):

import java.util.*;

/**
 * A Java Date and Calendar example that shows how to
 * get today's date ("now").
 *
 * @author alvin alexander, devdaily.com
 */
public class JavaGetTodaysDateNow
{
    public static void main(String[] args)
    {
        // create a calendar instance, and get the date from that
        // instance; it defaults to "today", or more accurately,
        // "now".
        Date today = Calendar.getInstance().getTime();

        // print out today's date
        System.out.println(today);
    }
}

As I write this article, the output from this class is:

Tue Sep 22 07:59:07 EDT 2009

I'll add some more tutorials out here about using the Java Date and Calendar classes, including how to format dates, and perform date "math," but for now, if you just need to get today's date, I hope this helps.

(Update: Here's a link to a very similar example that also demonstrates the Java SimpleDateFormat class: How to get today's date in Java.)
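Since the post mentions SimpleDateFormat for formatting, here's a small sketch of where that is headed; the class name and pattern are just illustrative choices, not from the article:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class JavaFormatTodaysDate
{
    public static void main(String[] args)
    {
        // same "now" as above
        Date today = Calendar.getInstance().getTime();

        // format it as, e.g., "2009-09-22 07:59:07"
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        System.out.println(fmt.format(today));
    }
}
```

The same pattern letters (yyyy, MM, dd, and so on) work for parsing with SimpleDateFormat.parse() as well.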
https://alvinalexander.com/java/java-today-get-todays-date-now/
At the moment my datepicker works fine, but I need to fix something: Saturdays and Sundays are disabled, so they can't be selected. As far as I know, the official documentation says nothing about this feature. Maybe with template-url, but anyway I don't know where to find it. Any idea? I think it's really easy to solve. Since it's in Spanish, I need to enable sáb. and dom. (Saturday and Sunday).

If you refer to the docs, disabling dates is achieved by:

JS:

// Disable weekend selection
$scope.disabled = function(date, mode) {
  return ( mode === 'day' && ( date.getDay() === 0 || date.getDay() === 6 ) );
};

HTML:

So, you can enable weekends by removing this chunk of code from your datepicker, i.e. removing the date-disabled attribute passed to the datepicker:

date-disabled="disabled(date, mode)"

Complete HTML:

<input type="date" class="form-control" uib-datepicker-popup
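The disabled() check itself is plain JavaScript, so you can verify the weekend logic outside Angular; the dates below are arbitrary examples:

```javascript
// Same weekend test the datepicker uses: getDay() is 0 on Sunday, 6 on Saturday
function disabled(date, mode) {
  return (mode === 'day' && (date.getDay() === 0 || date.getDay() === 6));
}

console.log(disabled(new Date(2016, 0, 2), 'day')); // 2 Jan 2016 is a Saturday -> true
console.log(disabled(new Date(2016, 0, 4), 'day')); // 4 Jan 2016 is a Monday   -> false
```

This also shows why removing the date-disabled attribute is enough: without it, the datepicker never calls the check and every day stays selectable.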
https://codedump.io/share/saG6btTaaeY9/1/ui-bootstrap-datepicker-enable-weekend-days
Idle and Physics Processing

Games run in a loop. Each frame, you need to update the state of your game world before drawing it on screen. Godot provides two virtual methods in the Node class to do so: Node._process() and Node._physics_process(). If you define either or both in a script, the engine will call them automatically.

There are two types of processing available to you:

Idle processing allows you to run code that updates a node every frame, as often as possible.

Physics processing happens at a fixed rate, 60 times per second by default. This is independent of your game's actual framerate, and keeps physics running smoothly. You should use it for anything that involves the physics engine, like moving a body that collides with the environment.

You can activate idle processing by defining the _process() method in a script. You can turn it off and back on by calling Node.set_process(). The engine calls this method every time it draws a frame:

GDScript:

func _process(delta):
    # Do something...
    pass

C#:

public override void _Process(float delta)
{
    // Do something...
}

Keep in mind that the frequency at which the engine calls _process() depends on your application's framerate, which varies over time and across devices. The function's delta parameter is the time elapsed in seconds since the previous call to _process(). Use this parameter to make calculations independent of the framerate. For example, you should always multiply a speed value by delta to animate a moving object.

Physics processing works with a similar virtual function: _physics_process(). Use it for calculations that must happen before each physics step, like moving a character that collides with the game world. As mentioned above, _physics_process() runs at fixed time intervals as much as possible to keep the physics interactions stable. You can change the interval between physics steps in the Project Settings, under Physics -> Common -> Physics Fps. By default, it's set to run 60 times per second.
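The role of delta can be seen outside the engine too. This plain-Python sketch (not Godot code, just an illustration) simulates the same movement at two framerates; because each frame advances by speed * delta, both cover the same distance:

```python
# Simulate an object moving at `speed` units per second for `seconds`
# seconds, advancing by speed * delta once per frame.
def simulate(fps, seconds, speed):
    delta = 1.0 / fps
    position = 0.0
    for _ in range(int(seconds * fps)):
        position += speed * delta
    return position

# Same distance regardless of framerate:
print(simulate(30, 2, 100))    # 30 fps
print(simulate(144, 2, 100))   # 144 fps
```

Without the delta factor, the 144 fps run would travel almost five times as far in the same wall-clock time.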
The engine calls this method before every physics step:

GDScript:

func _physics_process(delta):
    # Do something...
    pass

C#:

public override void _PhysicsProcess(float delta)
{
    // Do something...
}

The function _process() is not synchronized with physics. Its rate depends on hardware and game optimization. It also runs after the physics step in single-threaded games.

You can see the _process() function at work by creating a scene with a single Label node, with the following script attached to it:

GDScript:

extends Label

var time = 0

func _process(delta):
    time += delta
    text = str(time) # 'text' is a built-in Label property.

C#:

public class CustomLabel : Label
{
    private float _time;

    public override void _Process(float delta)
    {
        _time += delta;
        Text = _time.ToString(); // 'Text' is a built-in Label property.
    }
}

When you run the scene, you should see a counter increasing each frame.
https://docs.godotengine.org/it/latest/tutorials/scripting/idle_and_physics_processing.html
Hi there, I would like to know if there is a way to run MATLAB code from within Processing... The code I am interested in using is: Which is C++ as far as I am aware. I've heard of a thing called "javabuilder" for MATLAB, but I'm not sure how I could use it in this situation. Can it be done? Cheers

Answers

Do you have MATLAB separately installed on your system, and are you trying to invoke a MATLAB script from within your sketch, e.g. with exec()? MATLAB is proprietary, licensed software -- I don't believe that you can run it without MATLAB in any other environment, by design.

@jeremydouglass, hey thanks for the response... I ended up getting the C++ source code for this... though it still isn't going to help since there is no C++ mode in Processing. I'll ask in a different question, but here as well since it's still line segment detection code: how can I import it into Processing? Where would I start?

If you want to do Processing-like coding in C++, there is always openFrameworks:

If you are using a Java library instead, you can put it in a /code subfolder of your sketch -- or drag-and-drop onto the PDE window to import.

Cool, yeah, I've been using openFrameworks as well, still prefer Processing though. Great, I should have figured that one out on my own lol, too easy. I think it's working, though I'm not getting anything other than a grey box... is there something I'm missing here? It's supposed to use the piet image.

We can mix ".pde" & ".java" files in the same folder together w/ the PDE (Processing's IDE). There is no way for us to know what might be wrong unless you share an MCVE.

I literally just dragged and dropped all the java files from the github into a sketch in Processing: Basically what the code is supposed to do is to extract line segments from an image, so like a much simplified contour.
The results look like this: I contacted the author, and he told me that for this he was using the "Swing GUI kit" and that I would need to change it to use Processing's GUI calls. ...I'm assuming that's referring to GUI.java?

So this library isn't compatible w/ Processing, b/c direct access to Swing conflicts w/ Processing's own Swing access. The library needs to be re-written so it calls Processing's API rather than Java's Swing direct access.

@GoToLoop Aw man, I figured. Ok... so I would be changing GUI.java, right? How can I call Processing's API instead of Swing?

public class GUI extends JFrame {

Processing's JAVA2D renderer already got its own instance of JFrame. Look for past getFrame() posts: Also you need to learn Processing's API in order to replace those Swing direct calls:

Cool thanks, I'll see what I can do. I did find something you did a bit back: I'm a bit lost. Can you tell me what exactly this getJframe() does? Currently I'm trying to do PApplet...
https://forum.processing.org/two/discussion/26618/running-matlab-code-from-within-processing
The GNU MPFR library is a C library for extended precision floating point calculations. The name stands for Multiple Precision Floating-point Reliable. The library has an R wrapper, Rmpfr, that is more convenient for interactive use. There are also wrappers for other languages.

It takes a long time to install MPFR and its prerequisite GMP, and so I expected it to take a long time to install Rmpfr. But the R library installs quickly, even on a system that doesn't have MPFR or GMP installed. (I installed GMP and MPFR from source on Linux, but installed Rmpfr on Windows. Presumably the Windows R package included pre-compiled binaries.)

I'll start by describing the high-level R interface, then go into the C API.

Rmpfr

You can call the functions in Rmpfr with ordinary numbers. For example, you could calculate ζ(3), the Riemann zeta function evaluated at 3.

> zeta(3)
1 'mpfr' number of precision 128 bits
[1] 1.202056903159594285399738161511449990768

The default precision is 128 bits, and a numeric argument is interpreted as a 128-bit MPFR object. R doesn't have a built-in zeta function, so the only available zeta is the one from Rmpfr.

If you ask for the cosine of 3, you'll get ordinary precision.

> cos(3)
[1] -0.9899925

But if you explicitly pass cosine a 128-bit MPFR representation of the number 3 you will get cos(3) to 128-bit precision.

> cos(mpfr(3, 128))
1 'mpfr' number of precision 128 bits
[1] -0.9899924966004454572715727947312613023926

Of course you don't have to only use 128 bits. For example, you could find π to 100 decimal places by multiplying the arctangent of 1 by 4.

> 100*log(10)/log(2) # number of bits needed for 100 decimals
[1] 332.1928
>

MPFR C library

The following C code shows how to compute cos(3) to 128-bit precision and 4 atan(1) to 333-bit precision as above.

#include <stdio.h>
#include <gmp.h>
#include <mpfr.h>

int main (void)
{
    // All functions require a rounding mode.
    // This mode specifies round-to-nearest
    mpfr_rnd_t rnd = MPFR_RNDN;

    mpfr_t x, y;

    // allocate uninitialized memory for x and y as 128-bit numbers
    mpfr_init2(x, 128);
    mpfr_init2(y, 128);

    // Set x to the C double number 3
    mpfr_set_d(x, 3, rnd);

    // Set y to the cosine of x
    mpfr_cos(y, x, rnd);

    // Print y to standard out in base 10
    printf ("y = ");
    mpfr_out_str (stdout, 10, 0, y, rnd);
    putchar ('\n');

    // Compute pi as 4*atan(1)

    // Re-allocate x and y to 333 bits
    mpfr_init2(x, 333);
    mpfr_init2(y, 333);
    mpfr_set_d(x, 1.0, rnd);
    mpfr_atan(y, x, rnd);

    // Multiply y by 4 and store the result back in y
    mpfr_mul_d(y, y, 4, rnd);

    printf ("y = ");
    mpfr_out_str (stdout, 10, 0, y, rnd);
    putchar ('\n');

    // Release memory
    mpfr_clear(x);
    mpfr_clear(y);

    return 0;
}

If this code is saved in the file hello_mpfr.c then you can compile it with

gcc hello_mpfr.c -lmpfr -lgmp

One line above deserves a little more explanation. The second and third arguments to mpfr_out_str are the base b and the number of figures n to print. We chose b = 10, but you could specify any base value 2 ≤ b ≤ 62. If n were set to 100 then the output would contain 100 significant figures. When n = 0, MPFR will determine the number of digits to output: enough digits that the string representation could be read back in exactly. To understand how many digits that is, see Matula's theorem in the previous post.

2 thoughts on "Extended floating point precision in R and C"

There are lots of black-art tricks to getting extended float precision using an IEEE-754 FPU. Things get worse in the SIMD world, because those FPUs generally are simpler than IEEE-754. I was too lazy to figure this out from first principles. Instead, I'd do key sample calculations in a convenient BigNum (e.g., in bc), then beat on the FP libraries until they gave me what I needed.

BTW, stateful FPUs are the bane of real-time programming. We really need to move all that state into the FP values themselves.
https://www.johndcook.com/blog/2020/03/18/gnu-mpfrr-wrapper/
Fl_Widget
   |
   +----Fl_Input_
           |
           +----Fl_Input

#include <FL/Fl_Input_.H>

This is a virtual base class below Fl_Input. It has all the same interfaces, but lacks the handle() and draw() methods. You may want to subclass it if you are one of those people who likes to change how the editing keys work.

This can act like any of the subclasses of Fl_Input, by setting type() to one of the following values:

    #define FL_NORMAL_INPUT          0
    #define FL_FLOAT_INPUT           1
    #define FL_INT_INPUT             2
    #define FL_MULTILINE_INPUT       4
    #define FL_SECRET_INPUT          5
    #define FL_INPUT_TYPE            7
    #define FL_INPUT_READONLY        8
    #define FL_NORMAL_OUTPUT         (FL_NORMAL_INPUT | FL_INPUT_READONLY)
    #define FL_MULTILINE_OUTPUT      (FL_MULTILINE_INPUT | FL_INPUT_READONLY)
    #define FL_INPUT_WRAP            16
    #define FL_MULTILINE_INPUT_WRAP  (FL_MULTILINE_INPUT | FL_INPUT_WRAP)
    #define FL_MULTILINE_OUTPUT_WRAP (FL_MULTILINE_INPUT | FL_INPUT_READONLY | FL_INPUT_WRAP)

Creates a new Fl_Input_ widget using the given position, size, and label string. The default boxtype is FL_DOWN_BOX.

The destructor removes the widget and any value associated with it.

Returns true if position i is at the start or end of a word.

Returns true if position i is at the start or end of a line.

Draw the text in the passed bounding box. If damage() & FL_DAMAGE_ALL is true, this assumes the area has already been erased to color(). Otherwise it does minimal update and erases the area itself.

Default handler for all event types. Your handle() method should call this for all events that it does not handle completely. You must pass it the same bounding box as you do when calling drawtext() from your draw() method. Handles FL_PUSH, FL_DRAG, and FL_RELEASE to select text, and handles FL_FOCUS and FL_UNFOCUS to show and hide the cursor.

Do the correct thing for arrow keys. Sets the position (and mark if keepmark is zero) to somewhere in the same line as i, such that pressing the arrows repeatedly will cause the point to move up and down.
Does the callback if changed() is true or if when() & FL_WHEN_NOT_CHANGED is non-zero. You should call this at any point you think you should generate a callback.

Sets or returns the maximum length of the input field.

The input widget maintains two pointers into the string. The "position" is where the cursor is. The "mark" is the other end of the selected text. If they are equal then there is no selection. Changing this does not affect the clipboard (use copy() to do that). Changing these values causes a redraw(). The new values are bounds checked. The return value is non-zero if the new position is different than the old one. position(n) is the same as position(n,n). mark(n) is the same as position(position(),n).

Gets or sets the current selection mark. mark(n) is the same as position(position(),n).

This call does all editing of the text. It deletes the region between a and b (either one may be less than or equal to the other), and then inserts the string insert at that point and leaves the mark() and position() after the insertion. Does the callback if when() & FL_WHEN_CHANGED and there is a change.

Set a and b equal to not delete anything. Set insert to NULL to not insert anything. length must be zero or strlen(insert); this saves a tiny bit of time if you happen to already know the length of the insertion, or it can be used to insert a portion of a string or a string containing nul's. a and b are clamped to the 0..size() range, so it is safe to pass any values.

cut() and insert() are just inline functions that call replace().

Fl_Input_::cut() deletes the current selection. cut(n) deletes n characters after the position(). cut(-n) deletes n characters before the position(). cut(a,b) deletes the characters between offsets a and b. a, b, and n are all clamped to the size of the string. The mark and point are left where the deleted text was.
If you want the data to go into the clipboard, do Fl_Input_::copy() before calling Fl_Input_::cut(), or do Fl_Input_::copy_cuts() afterwards.

Insert the string t at the current position, and leave the mark and position after it. If l is not zero then it is assumed to be strlen(t).

Put the current selection between mark() and position() into the specified clipboard. Does not replace the old clipboard contents if position() and mark() are equal. Clipboard 0 maps to the current text selection and clipboard 1 maps to the cut/paste clipboard.

Does undo of several previous calls to replace(). Returns non-zero if any change was made.

Copy all the previous contiguous cuts from the undo information to the clipboard. This is used to make ^K work.

Gets or sets the input field type.

Gets or sets the read-only state of the input field.

Gets or sets the word wrapping state of the input field. Word wrap is only functional with multi-line input fields.
http://fltk.org/documentation.php/doc-1.1/Fl_Input_.html
Rhino also adds the possibility of plugins. Whereas most companies provide plugin support for 3rd-party developers, McNeel has taken a rather exotic approach with its Python plugin: it implements and extends the basic IronPython language. Scripts can use the IronPython language as well as Python at the front end, while tapping into all the core Rhino resources at the back end. Scripts thus gain access to Rhino, the core libraries and even other plugins through the RhinoScriptSyntax plugin.

Right, enough fore-play, time to get back to hard-core programming.

3.2 The bones

Once you run a script through the in-built editor (remember you can access the editor by typing "EditPythonScript" in Rhino's command line), the Python interpreter will thumb through your script and superficially parse the syntax. It will not actually execute any of the code at this point; before it starts doing that, it first wants to understand the overall structure of the script.

The Python import statement allows the user to import different modules that are either built into Python when it is downloaded, or come from external developments. Importing modules allows a user to access methods outside of the current file and reference objects, functions or other information. There are various forms of the import statement: import X; from X import *; from X import a, b, c; X = __import__('X'), each with advantages and disadvantages. For simplicity we can stick with import X for the time being. This technique imports module X and allows us to use any methods within that module.

Comments (blocks of text in the script which are ignored by the compiler and the interpreter) can be used to add explanations or information to a file, or to temporarily disable certain lines of code. It is considered good practice to always include information about the current script at the top of the file, such as author, version and date. Comment lines are indicated with a # sign.
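The import variants just listed can be tried out in plain Python. The sketch below uses the standard math module rather than a Rhino-specific one, so it runs anywhere:

```python
# Three of the import forms described above.

import math                    # import X: access members as math.sqrt
from math import sqrt, pi      # from X import a, b, c: bare names
m = __import__("math")         # X = __import__('X'): dynamic form

# A comment line starts with a # sign and is ignored by the interpreter.
print(math.sqrt(16))    # 4.0
print(sqrt(16))         # 4.0 -- the same function, imported by name
print(m.pi == math.pi)  # True -- both names refer to the same module object
```

The from X import a, b, c form saves typing but pollutes the local namespace, which is why the primer (and most Rhino scripts) prefer the import X or import X as alias forms.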
Global variables are variables that can be accessed anywhere in your code (outside of functions, within functions and within classes). Variable scope refers to the limitation or accessibility of a variable across different portions of code. Global variables obviously can be accessed globally, while other variables may be limited to certain areas of your code. For example, any variable that is created within a class or a function (we will cover classes and functions later) is limited to within that function. This means it cannot be used outside of that function or class (unless it is specifically passed as input/output). For now, we don't need to worry about different types of scope; let's assume that our variables are globally accessible unless otherwise noted.

Functions are blocks of code that compact certain functionality into a small package. Functions can have variables, take input, provide output and do a number of other important tasks. We will go into further detail about functions in the coming chapters. Classes are similar in that they provide an opportunity for creating modular code to package/compress segments of your code, while also providing other powerful tools.

Functions and classes must be created before they can be used (this is rather obvious). For that reason, the Functions & Classes section comes before the Function Calls and Class Instances section. This just means that before we can actually Call (use) a Function, we need to first create the function.

3.3 The guts

The following example shows the essential structure that was just described, including: the Import Statement (always needed!), Global Variables, a Function and a Call to the Function. The importance of syntax should also be stated - please take note of the capitalization and indentation within this example. Python is both case sensitive and indent sensitive.
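As an aside, the scope rules described above can be sketched in a few lines of plain Python (no Rhino needed; the names here are purely illustrative):

```python
strGlobal = "I am global"   # global variable: visible everywhere

def localDemo():
    strLocal = "I am local"  # local variable: exists only inside this function
    # A global variable is readable inside a function without any extra work.
    return strGlobal + " / " + strLocal

print(localDemo())  # I am global / I am local

# Outside the function, strLocal does not exist at all:
try:
    print(strLocal)
except NameError:
    print("strLocal is limited to localDemo()")
```

Trying to use strLocal outside the function raises a NameError, which is exactly the "limited to within that function" behavior described above.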
If you spell a variable name once with a capital letter and another time with a lowercase letter, it will not be recognized as the same variable! The indent is used to indicate if certain lines should be included within a Function, Class, Loop or Conditional statement. In this example, the line "print(text)" is indented to be contained within the function "simpleFunction" because it should only be executed once that function is called (don't worry yet about how and why functions work, we will explain them soon). Indentation and case sensitivity should be highly emphasized since they are a couple of the most common mistakes that you will run into!

    import rhinoscriptsyntax as rs                   # Import Statement

    #Script written by Skylar Tibbits on 03-09-2011  # Default comments

    strInfo = "This is just a test"                  # Global Variable

    def simpleFunction(text):                        # Function Declaration
        print(text)                                  # Code to Execute Within the Function
                                                     # (Note the Indentation)

    simpleFunction(strInfo)                          # Calling the Function (After it's created)

One of the key features of VBScript that made it easy to write powerful scripts was the large library of Rhino-specific functions. The Python implementation includes a set of very similar functions that can be imported and used in any Python script for Rhino. This set of functions is known as the rhinoscriptsyntax package. To import the rhinoscriptsyntax package you must include the statement import rhinoscriptsyntax as rs; the "as rs" part indicates that we will be using the name "rs" whenever we refer to this package. In the Editor, go to Help > Python Help for a list of all the rhinoscriptsyntax methods. Documentation can also be found at

Note: McNeel has made all of the classes in the .NET Framework available to Python, including the classes available in RhinoCommon. This allows you to do some pretty amazing things inside of a Python script. Many of the features that once could only be done in a .NET plug-in can now be done in a Python script!
(Don't stress about this until you become a master of the basics…for now, just know it's available!)

3.4 The skin

After a script has been written and tested, you might want to put it in a place with easy access, such as a Rhino toolbar button. If you want to run scripts from within buttons, there are two things you can do:

- Link the script
- Implement the script

If you want to implement the script, you'll have to wrap it up into a _RunPythonScript command. Imagine the script on the previous page has been saved on the hard disk as an *.py file. The following button editor screenshot shows how to use the two options:

3.5 The Debugger

The debugger is an essential tool for any programmer. Luckily, the script editor within Rhino has a built-in debugger for testing and working line-by-line through any script! It is extremely good practice to use the debugger when writing any code longer than just a few lines.

The expression "bug in your code" means that something has gone wrong in your code - i.e. your code fails, cannot continue to run or has given the wrong output. (Of interesting note: the first computer bug is said to have been found in 1947, when Harvard University's Mark II Aiken Relay Calculator machine was experiencing problems. An investigation showed that there was a moth trapped in the machine. The operators removed the moth and taped it into the log book. The entry reads: "First actual case of bug being found." And thus, the world of debugging was born!)

With any malfunctioning code, the programmer's job is to quickly and easily identify the bug. However, this can sometimes be extremely difficult, especially if the code has many loops, conditional statements, functions and classes, and spans hundreds or thousands of lines. The debugger allows the user to put a breakpoint in the code, which suspends the execution of the code and allows the user to see the status of the variables.
Without a breakpoint, the debugger would run entirely through to completion and would not allow us to see the guts! To add a breakpoint, simply click to the left of the line number and a red circle will appear (you can also add multiple breakpoints). This indicates the code will pause at this line. Press the green arrow at the top of the editor to start the debugger. Use the "Step Into", "Step Over" and "Step Out" buttons to walk line-by-line through the code. When you come to a loop or conditional statement, you can decide to enter it or step over it completely.

After each line is executed, the debugger will show the variable, object or expression's name, its value and its type. As the lines are run, the variables and values will be updated directly. This will allow you to check if your variables are taking the correct values, if your code passes the correct conditional statement or if it loops the given number of times. Many unforeseen errors can quickly be spotted and adjusted by using the debugger!

Next Steps

That was a basic overview of Python running in Rhino. Now learn to use operators and functions to get something done.
https://developer.rhino3d.com/5/guides/rhinopython/primer-101/3-script-anatomy/