text | url | dump | lang | source
---|---|---|---|---
15 May 2009 05:03 [Source: ICIS news]
By Bohan Loh and Peh Soo Hwee
SEOUL (ICIS news)--US oil and chemicals giant ExxonMobil expects olefins and polyolefins prices to be under pressure through to the second half of the year as more supply will come in due to capacity start-ups in China and the Middle East, while demand is still weak, a senior company executive said on Friday.
“The volume coming on in the next six to nine months is in the middle of a demand-side-recession. It’s hard to see how it doesn’t work out in a softening of the market,” said Bryan W. Milton, vice president of the company’s basic chemicals unit, in an interview with ICIS news on the sidelines of the 9th Asia Petrochemical Industry Conference (APIC ‘09).
Petrochemical facility start-ups in the Middle East and
Business advisory firm KPMG International had recently projected ethylene capacities in the
Prices of Asian ethylene had rallied 22% since the beginning of the year through April but values have stabilized or turned softer in the past three weeks based on ICIS pricing data as demand has yet to completely recover from the slump in the fourth quarter of last year.
While there was growing optimism that the global economy had seen its trough, some market players doubt that recovery would soon take place.
“It’s way too early to say [the] recession is over,”
“Past recessions are always a good measure, although this one is unique. When we came out of past recessions, it was a rocky road,” he said.
“I think we will continue to see that as we go through the coming year,”
ExxonMobil Chemicals had recently reported a 66% decline in first quarter earnings due to lower volumes and unfavourable currency effects.
“We’re having a challenging time just like our customers and suppliers,” he added.
Despite the downturn
“The first barrels of crude oil arrived at the [
Similarly, Exxon is holding talks with its joint venture partner Qatar Petroleum for a petrochemical complex in Ras Laffan.
“We’re talking about derivative products. Debates continue and discussions are very positive,” Milton said.
|
http://www.icis.com/Articles/2009/05/15/9216439/APIC-09-Exxon-expects-soft-chem-market-through-H2.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
vprintf, vfprintf, vsprintf, vsnprintf.
Example
Write from variables day, month, year to a null-terminated character string.
#include <stdio.h>
#include <stdarg.h>

void putdate(char *str, const char *format, ...)
{
    va_list args;
    va_start(args, format);
    vsprintf(str, format, args);
    va_end(args);
}

int main(void)
{
    int day = 20, year = 2012;
    char *month = "June";
    char str[20]; /* large enough for "20 June 2012" plus the null terminator */
    putdate(str, "%d %s %d", day, month, year);
    printf("summer solstice: %s\n", str);
    return 0;
}
Possible output:
summer solstice: 20 June 2012
|
http://en.cppreference.com/w/c/io/vfprintf
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
TODAY'S recommended economics writing:
• Call that a budget? (New Yorker)
• Current economic conditions (Econbrowser)
• Two from Paul Krugman on inflation (Paul Krugman)
• Don't freak out about the shrinking work force (Atlantic)
• Banking union or financial repression? (Bruegel)
• When safe assets return (FT Alphaville)

We should freak out about the shrinking work force. While it's true that many of those who gave up entirely are 55 or older, that doesn't mean the phenomenon is somehow unalarming. The final decade of people's working life is usually when they have the highest income. It's also the time when they manage to save the most for retirement, after their children have become independent. Another thing worth noting is that some 35% of small-business owners are 55 or older. Many of those who joined the ranks of the early retired are no doubt older Americans who saw their businesses fail during the recession.
After reading "Call that a budget?", I'm convinced that the Republican party wants to have its extremist libertarian cake and eat its extremist social conservative cake too.
Being extremist in two fields instead of just one doesn't make you more appealing to the average voter...
Paul Ryan requires his staffers to read Atlas Shrugged. No matter how he tries to soft-pedal it, he's an Ayn Rand true-believer. You'd think he's a bit old for that, but getting stuck in some way at 16 isn't all that uncommon, and it would be comical if he were not the Republican Budget Wonder Boy in Congress.
Obama blew the 2010 election by refusing to fight, to hammer in the simple truth: The Ryan budget wants to gut the Federal programs that Ordinary White People depend on. No one who was howling about the deficit at a Tea Party wants any part of that. Fortunately, I don't see Obama repeating that 2010 blunder in 2012.
|
http://www.economist.com/blogs/freeexchange/2012/04/recommended-economics-writing-1
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
05 March 2009 18:13 [Source: ICIS news]
TORONTO (ICIS news)--Canada’s Ontario province has released a final list of lawn and cosmetic pesticides that will be banned beginning on 22 April, but industry participants said on Thursday that the ban lacked a solid scientific basis and would end up harming farmers and lawn care providers.
“The government is discouraging innovation with these regulations and that jeopardises the ability of farmers to continue to produce a safe and affordable supply of healthy foods,” she said.
“Without access to the newest pest control innovations,
"The ban protects
CropLife noted that pesticides were already regulated under Canadian federal laws. The
The
Industry commentators noted that the regulations included Dow AgroSciences’ 2,4-D pesticide product.
The company last year filed a notice of action under the North American Free Trade Agreement (NAFTA) against
|
http://www.icis.com/Articles/2009/03/05/9197997/ontario-releases-list-of-pesticides-to-be-banned-from-22-april.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
17 September 2012 07:51 [Source: ICIS news]
SINGAPORE (ICIS)--
The Chinese producer shut the 80,000 tonne/year PO facility on 9 September for a turnaround, with its second 110,000 tonne/year
Befar will gradually ramp up the
The prices of PO in
Prices in
Befar is one of the largest domestic providers of
|
http://www.icis.com/Articles/2012/09/17/9595875/chinas-befar-group-restarts-shandong-po-facility-on-16.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
10 October 2012 09:41 [Source: ICIS news]
SINGAPORE (ICIS)--
The turnaround will last 10 days and the caustic soda plant was running at full capacity before it was shut, the source added.
The Chinese producer has a total caustic soda capacity of 800,000 dmt/year at the site, and it will continue operating its 200,000 dmt/year caustic soda plant there at full capacity, the company source said.
Shandong Jinling Group is offering 32% liquid membrane caustic soda at yuan (CNY) 2,500/tonne ($397/tonne), 48% liquid membrane caustic soda at CNY2,583/tonne and chlorine at CNY600/tonne as of 10 October, according to Chemease, an ICIS service in China.
Trade in the
The prices of chlorine are expected to remain firm, because of the shutdown of the 600,000dmt/year caustic soda plant, according to local producers.
Shandong Jinling Group is the largest producer of caustic soda
|
http://www.icis.com/Articles/2012/10/10/9602558/chinas-shandong-jinling-group-shuts-caustic-soda-plant-on-10.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Premkarthic-7WP
10-07-2019
Hi,
We have multiple Communities sites with multiple domains residing on the same server. Is there any way to configure a site-specific email "from" address for Communities-based email notifications? Any pointers on how to achieve this would help us a lot.
Thanks in advance !
Configuring Email
Thanks
Prem
Ravi_Pampana
MVP
Hi,
You can create a configuration with a list of different from addresses and read it in the Java class that sends the mail using HtmlEmail:
import org.apache.commons.mail.HtmlEmail;
You can see examples in the link below...
Using the above approach, based on the site you can use a different from address, to address, subject, etc.
Hope this helps !
Ravi,
Thanks for your comments. We are aware that we can use the HTML email service; the question here is specific to Communities, as it uses the "AEM Communities Email Reply Configuration" factory configuration whose name points to "subscriptions-email" for configuring the from address and other Communities-related attributes for email notifications.
The problem is that the back-end Adobe code hard-codes the value "subscriptions-email", and overriding this requires heavy customization. We are looking for any option we might be missing that achieves this functionality with minimal, configuration-level changes.
|
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/aem-communities-notification-email-address/m-p/312436
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Are there plans for theming support in Quasar? Like the IOS theme Quasar v0.x used to have.
dobbel
@dobbel
Best posts made by dobbel
- RE: Ask Razvan a Question! Q & A for Quasar.Conf
- RE: How to achieve v-ripple effect on table rows?
yes that fixes the ripple on the tr. Thanks! I updated the codepen with your css.
- RE: Vuex getter undefined
You’re mixing MapGetters with explicit store getter calls.
You either use the explicit store getter call like:
this.$store.getters.getColorScheme
or you use ( with mapGetters) :
this.myMappedGetterFunction()
BUT because your getter getColorScheme is in a Vuex module colors, you have to do it like this:
Explicit store call (without mapGetters):
let colorScheme = this.$store.getters["colors/getColorScheme"](this.dashboardColorScheme)
With mapgetters:
computed: {
  ...mapGetters({
    getColorScheme: 'colors/getColorScheme',
    anotherGetter: 'colors/anotherGetter',
  }),

  // This is the syntax used:
  ...mapGetters('some/nested/module', [
    'someGetter',       // -> this.someGetter
    'someOtherGetter',  // -> this.someOtherGetter
  ])
}

// somewhere else, for example in a method:
let colorScheme = this.getColorScheme(this.dashboardColorScheme)
found a nice vue vuex demo with namespacing(modules):
- RE: Line break in Q-tooltip?
This works with <br> in hovertext:
<q-tooltip>
  <div v-html="hovertext"></div>
</q-tooltip>
- RE: QCalendar: hide times in day view if outside work hours
You can use interval-start in combination with interval-count:
- RE: Electron Auto-reload doesn't work well in Node v15.0.1
It’s recommended to use node version 12.x for Quasar.
- RE: Quasar Admin CRM Template
very nice. Would love to see a Vue alternative for react admin.
- [V1] how to build with ios theme? -T ios
[V1] How do I build/run (cordova)apps with ios theme?
From the V1 doc:
quasar dev -m cordova -T ios
quasar build -m cordova -T ios
these commands result in an app with the Material Android look/theme instead of iOS.
This used to work in v0.17x
- RE: vuex getters actions
This is better:
In your component:
computed: {
  ...mapGetters('MODULE_NAMESPACE', ['GETTER_NAME'])
}
then you can use the store getter as a computed property in your component.
- RE: My profile template made using Quasar Framework and Vue.js
the cards with the ribbon look very pro. Why not use the gallery/lightbox component so people can see your nice drawings fullscreen?
Latest posts made by dobbel
- RE: Style selected row in table
Here’s a codepen with custom hover and selected:
- RE: Axios / quasar.config.js proxy - In Chrome api requests are working, In Firefox/Opera CORS issue
If you set a proxy for posts and api in devServer, then in your Quasar app you must use your Quasar host:port as the baseUrl for the actual axios requests.
This works, assuming you run your Quasar app on port 8080:
import axios from 'axios';

export function getDataNode() {
  return axios
    .get('') // can be replaced by `/api/resources`
    .then(yy => yy.data)
}

export function getDataJsonPlaceholder() {
  return axios
    .get('') // can be replaced by `/posts`
    .then(xx => xx.data)
}
proxy: {
  '/api': {
    target: '',
    changeOrigin: true,
  },
  '/post': {
    target: '',
    changeOrigin: true,
  },
},
In your original code the proxies were not used at all, because you used the proxy target URL for the axios requests.
btw I also had cors issues on chrome.
- RE: Datepicker Formating issue
We’ve been having a weird datepicker formatting issue where there is a white container that shows up under the datepicker on desktops
- why do you call this a formatting issue? ( With your topic title I presumed you had a time/date formatting problem)
- with desktop you mean electron? If yes does the problem also occur in spa mode? If no what do you mean with desktop?
- Did this problem always exist, or did it suddenly appear?
- RE: Question related to a document topic on usage of Axios
Why is Axios imported again in Vuex store
I think this is because Vuex actions, mutations etc.
can be defined outside of the Vuex store in separate .js files (Quasar's default). And those files don't have access to this.$axios (because of the isolated scope of an imported js module). If you define the actions and mutations directly in the Vuex store, you can use the global axios ref.
See here for an example of isolated scope when using import:
- RE: quasar-dev localhost for capacitor apps
@aitcheyejay said in quasar-dev localhost for capacitor apps:
Unfortunately, this required a hack of the quasar-dev codebase on my part.
could you share this ‘hack’?
- RE: Building a Windows build on a MacOS - Error: spawn yarn ENOENT
here are some people who have successfully built a Windows build on macOS in Docker:
- RE: Custom bottom of QTable without removing pagination data
You could use the bottom slot with your own pagination (copied from the pagination slot example), something like this:
|
https://forum.quasar-framework.org/user/dobbel
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
C++ uses logical operators to control the flow of a program. Expressions involving logical operators evaluate to a boolean value – true or false – and the program then makes decisions based on the outcome.
There are three types of logical operators.
Note that there is a difference between bitwise AND and logical AND. The same applies to bitwise OR and complement.
Logical AND
A logical AND expression evaluates to true only if both of its operands are true; otherwise it is false.
A logical expression with two or more logical expressions as operands is called a compound expression. When the operands are themselves expressions, each of them must evaluate to true for the compound logical AND expression to be true.
(expression1) && (expression2)
The operands do not have to be boolean: each operand is converted to bool, with zero treated as false and any non-zero value as true. If one of the expressions is of character type, it is automatically converted to its integer equivalent by the compiler, so with char a = 'a':
(a && 34)
becomes
(97 && 34)
The ASCII value for ‘a’ is 97.
All possible outcomes of the logical AND operation are:

expression1 | expression2 | expression1 && expression2
true | true | true
true | false | false
false | true | false
false | false | false
Logical OR
A logical expression with the logical OR operator evaluates to true if at least one operand is true; otherwise, it is false.
A compound logical expression with two or more expressions gives a boolean output – true if at least one of the expressions evaluates to true. As with logical AND, each operand is converted to bool, and character operands are first converted to their integer equivalents.
(expression1) || (expression2)
All possible outcomes of the expression are given below.

expression1 | expression2 | expression1 || expression2
true | true | true
true | false | true
false | true | true
false | false | false
Not Operator

The Not operator (!) negates the boolean value of any expression or variable. If

expression1 = true

then

!(expression1) = false

The Not operator can be used anywhere to negate the existing value of a variable or expression. The table below gives all combinations of output for the Not operator.

expression1 | !(expression1)
true | false
false | true
Example Program: Logical Operators
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    // Variable Declarations
    int a, b, c, d;

    // Variable Initialization
    a = 100;
    b = 90;
    c = 30;
    d = 20;

    // Logical AND
    if ((a > b) && (c > d))
    {
        cout << "This logical AND statement has value = True" << "\n";
    }
    else
    {
        cout << "This logical AND statement has value = False" << "\n";
    }

    // Logical OR
    if ((a < b) || (c > d))
    {
        cout << "This logical OR statement has value = True" << "\n";
    }
    else
    {
        cout << "This logical OR statement has value = False" << "\n";
    }

    // NOT operation
    if (!(a > b))
    {
        cout << "This NOT statement has value = True" << "\n";
    }
    else
    {
        cout << "This NOT statement has value = False" << "\n";
    }

    system("PAUSE");
    return EXIT_SUCCESS;
}
Output:
This logical AND statement has value = True
This logical OR statement has value = True
This NOT statement has value = False
|
https://notesformsc.org/c-plus-plus-logical-operators/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
How to detach and delete an internet gateway from a VPC using boto3?
Here is a simple implementation. You need to keep a few things in mind, and there are some prerequisites:
You need to detach the internet gateway from your VPC before you can delete it.
You need to have the igw-id and the vpc-id to continue.
import boto3

ec2 = boto3.resource('ec2')

# Handles to the VPC and the internet gateway (replace the placeholder ids)
vpc = ec2.Vpc('vpc-id')
gw = ec2.InternetGateway('igw-id')

# Detach the gateway from the VPC first, then delete it
vpc.detach_internet_gateway(InternetGatewayId='igw-id')
gw.delete()
Hope this helps.
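If you only have the vpc-id, you can also let boto3 look the attached gateway up for you. A minimal sketch (an illustration, assuming default credentials and region are configured; the VPC id below is a placeholder):

import boto3

ec2 = boto3.resource('ec2')
vpc = ec2.Vpc('vpc-id')  # placeholder id

# The Vpc resource exposes its attached internet gateways as a collection,
# so the igw-id does not need to be known up front.
for igw in vpc.internet_gateways.all():
    vpc.detach_internet_gateway(InternetGatewayId=igw.id)
    igw.delete()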
|
https://www.edureka.co/community/32156/how-detach-and-delete-internet-gateway-from-vpc-using-boto3
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Alpha Vantage DataFeed
Hello,
I'm pretty new to Python coding and Backtrader.
However, I have done some backtests using the Alpaca API and it worked without too many problems.
I would like to use the Alpha Vantage API because it provides not only stock data, but also forex and crypto.
First I tried the code I found here:
from alpha_vantage.timeseries import TimeSeries
import pandas as pd
import numpy as np
import backtrader as bt
from datetime import datetime

# IMPORTANT!
# ----------
# Register for an API at:
#
# Then insert it here.
Apikey = 'XXXXX'

def alpha_vantage_eod(symbol_list, compact=False, debug=False, *args, **kwargs):
    '''
    Helper function to download Alpha Vantage Data.

    This will return a nested list with each entry containing:
        [0] pandas dataframe
        [1] the name of the feed.
    '''
    data_list = list()

    size = 'compact' if compact else 'full'

    for symbol in symbol_list:
        if debug:
            print('Downloading: {}, Size: {}'.format(symbol, size))

        # Submit our API and create a session
        alpha_ts = TimeSeries(key=Apikey, output_format='pandas')
        data, meta_data = alpha_ts.get_daily(symbol=symbol, outputsize=size)

        # Convert the index to datetime.
        data.index = pd.to_datetime(data.index)
        data.columns = ['Open', 'High', 'Low', 'Close', 'Volume']

        if debug:
            print(data)

        data_list.append((data, symbol))

    return data_list

class TestStrategy(bt.Strategy):

    def __init__(self):
        pass

    def next(self):
        for i, d in enumerate(self.datas):
            bar = len(d)
            dt = d.datetime.datetime()
            dn = d._name
            o = d.open[0]
            h = d.high[0]
            l = d.low[0]
            c = d.close[0]
            v = d.volume[0]
            print('{} Bar: {} | {} | O: {} H: {} L: {} C: {} V:{}'.format(dt, bar, dn, o, h, l, c, v))

# Create an instance of cerebro
cerebro = bt.Cerebro()

# Add our strategy
cerebro.addstrategy(TestStrategy)

# Download our data from Alpha Vantage.
symbol_list = ['LGEN.L', 'LLOY.L']
data_list = alpha_vantage_eod(symbol_list, compact=False, debug=False)

for i in range(len(data_list)):
    data = bt.feeds.PandasData(
        dataname=data_list[i][0],  # This is the Pandas DataFrame
        name=data_list[i][1],      # This is the symbol
        timeframe=bt.TimeFrame.Days,
        compression=1,
        fromdate=datetime(2018, 1, 1),
        todate=datetime(2019, 1, 1)
    )

    # Add the data to Cerebro
    cerebro.adddata(data)

print('Starting to run')

# Run the strategy
cerebro.run()
Unfortunately it didn't work. I wasn't able to display any data with it.
And yes, I used my API key (which is working; I tested it in another script).
I thought the problem came from the pandas DataFrame returned by Alpha Vantage (maybe it is not supported anymore).
I tried different things and did quite a lot of research on the internet, but I can't find a suitable solution.
I also tried to request JSON data and convert it to pandas or CSV; unfortunately that did not work either.
But maybe someone can help with the code above.
Thanks a lot in advance
marketwizard
I've developed alphavantage data feed to
bt some time ago. Not sure if it works now, you can try -
Hello ab_trader,
thanks for the quick reply !
did you have a sample code for using this datafeed ?
- rajanprabu last edited by
this could help..
@rajanprabu I already tested this code and it's not working for me; that's why I opened this topic.
- rajanprabu last edited by
I'm sorry for the oversight.
@marketwizard there is tests folder in the repo, which has examples.
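For anyone landing here with the same "no data" symptom as the helper code earlier in this thread: one frequent culprit is the shape of the frame returned by the alpha_vantage package. A minimal normalisation sketch (an illustration, not a verified fix; it assumes the pandas output format whose columns are named '1. open' … '5. volume', and sorts the index ascending because backtrader expects the oldest bar first):

import pandas as pd
from alpha_vantage.timeseries import TimeSeries

def get_daily_df(symbol, api_key, size='full'):
    # Download one symbol and normalise it for bt.feeds.PandasData
    ts = TimeSeries(key=api_key, output_format='pandas')
    data, _ = ts.get_daily(symbol=symbol, outputsize=size)

    # Rename '1. open' ... '5. volume' to Open/High/Low/Close/Volume
    # (assumed Alpha Vantage column naming)
    data = data.rename(columns=lambda c: c.split('. ')[-1].capitalize())

    data.index = pd.to_datetime(data.index)
    return data.sort_index()  # oldest bar first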
|
https://community.backtrader.com/topic/3344/alpha-vantage-datafeed
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Vrije Universiteit Brussel – Belgium
Faculty of Sciences
In Collaboration with Ecole des Mines de Nantes – France
2005

IPSComp: Intelligent Portal for Searching Components

A Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science (Thesis research conducted in the EMOOSE exchange)

By: Javier Aguirre
Promoter: Prof. Theo D’Hondt (Vrije Universiteit Brussel)
Co-Promoter: Annya Réquilé (Ecole des Mines de Nantes)

Abstract

The software development industry is seen as an inexact science, constantly dealing with low product quality and overruns in time, cost and effort. Component Based Software Development (CBSD) has emerged as an approach aiming to improve a number of drawbacks found in the software development industry. The main idea is the reuse of well-tested software elements that are assembled together in order to develop larger systems. This approach brings as a consequence a reduction in development time, a more stable final product and a reduction in effort, cost and testing. Despite the goals CBSD is aiming at, the software industry has not been able to accomplish these objectives. The purpose of this research is to present means to provide users with the appropriate tools to describe components, in order to be able to share them among a wide community of consumers. On top of such a component description, tools to discover and search for components can be implemented. The users in this case can be identified as different actors, such as software developers, software architects, software producers and system integrators, among others. In this context, my research focuses on two main parts: the first is related to a component ontology, and the second to code and design transformation for the integration of software component repositories.

Acknowledgements

I would like to thank my advisor Professor Annya Réquilé for her advice and guidance over the past five months. I would also like to acknowledge the assistance of Professor Mourad Oussalah and Gustavo Bobeff for their valuable comments during the research. A special word of thanks is due to all the professors who were responsible for such a good academic experience. Special thanks to my EMOOSE mates Richa, Jorge, Daniel and Harmin for their friendship and collaboration. A note of thanks to Sylvie Poizac, who was always there for us. Thanks to my parents and family for their love and support.

Contents

1 Introduction
2 State of the Art
2.1 Ontology Manager Systems
2.1.1 ONTOMANAGER
2.1.2 SymOntoX
2.1.3 Building Domain Ontology Based on Web Data and Generic Ontology
2.1.4 PLIB
2.2 Component Retrieval Schemes
2.2.1 Keywords Technique
2.2.1.1 INSEAS – Keyword – Faceted – Browsing
2.2.2 Faceted Technique
2.2.2.1 InterLegis Project based on Odyssey Search Engine - Faceted
2.2.2.2 ADIPS Framework - Faceted - Browsing
2.2.3 Signature Matching Technique
2.2.3.1 AGORA - Signature Matching
2.2.3.2 COMPONENTEXCHANGE - Signature Matching
2.2.4 Behavioral Matching Technique
2.2.4.1 Behavior sampling - Behavioral Matching
2.2.5 Semantic-Based Technique
2.2.5.1 Towards a Semantic-based Approach for Software Reusable Component Classification and Retrieval - Semantic-based
2.2.5.2 A Semantic-Based Approach to Component Retrieval - Semantic-based
2.2.6 Browsing Technique
2.2.6.1 CompoNex – Browsing
2.2.7 Users Web Mining Technique
2.2.7.1 RASCAL - Users Web Mining
2.3 Model Driven Architecture - MDA
3 The eCots Association
4 Contribution
4.1 Component Ontology for IPSComp Specification
4.1.1 XCM Component Ontology
4.1.2 Component Ontology for IPSComp Specification
4.1.2.1 Domain
4.1.2.2 Price
4.1.2.3 Quality Attributes
4.1.2.4 License
4.1.2.5 Publisher Description
4.1.2.6 Specialization Scenarios
4.2 Component Ontology for IPSComp Design
4.3 Component Ontology for IPSComp Implementation
4.3.1 Java Code Implementation
4.3.2 IPSComp Java Code Generation from a XML file
4.3.3 IPSComp Ontology Implementation PLIB
4.4 Integrating Software Component Repositories
5 Conclusions
6 Future Work
Appendix A - IPSComp Ontology UML Class Diagram
Appendix B - IPSComp Ontology component Package UML Class Diagram
Appendix C - IPSComp Ontology qualityAttribute Package UML Class Diagram
Appendix D - IPSComp Ontology metric Package UML Class Diagram
Appendix E - IPSComp Ontology xmlParser Package UML Class Diagram
Appendix F - IPSComp XML Meta-Model - XSD Schema
Appendix G - IPSComp Component Description – XML Example
References

List of Figures

Figure 3-1 IPSComp System Architecture [48]
Figure 4-1 Contribution to the IPSComp Project
Figure 4-2 XCM Hierarchical Component Structure [51]
Figure 4-3 IPSComp Hierarchical Component Structure
Figure 4-4 IPSComp Component Ontology UML Class
Figure 4-5 IPSComp Metric Concept UML Class Diagram
Figure 4-6 IPSComp Quality Attribute Concept UML Class Diagram
Figure 4-7 Visitor Pattern with Reflection to load a XML file – UML Class Diagram
Figure 4-8 PLIB Editor IPSComp Ontology – Screen Shot
Figure 4-9 Elements for Software Component Repository Integration
Figure 4-10 Elements Examples for Software Component Repository Integration
Figure 4-11 Vendor Repository {2} – Vendor Component Description {4} Example
Figure 4-12 Vendor Component Description Meta-Model
Figure 4-13 Vendor Component Description Meta-Model
Figure 4-14 Vendor Component Description Meta-Model
Figure 4-15 Essence Component Description Meta-Model
Figure 4-16 Component Repositories Domain Layers
Figure 4-17 Obtaining a Component Description Image in the IPSComp Ontology
Figure 4-18 Structure of an Enterprise Bean JAR [67]
Figure 5-1 IPSComp System Architecture [48]
Figure 5-2 IPSComp System Architecture Analyzed

List of Tables

Table 2-1 Summary of Component Retrieval Schemes
Table 2-2 InterLegis Component Description [39]
Table 4-1 Quality Model for COTS components [52]
Table 4-2 Implementation of the IValue interface
Table 4-3 Abstract Class Metric
Table 4-4 Concrete Metric Class
Table 4-5 Abstract Class QualityAttribute
Table 4-6 Concrete Quality Attribute Class
Table 4-7 Collections in the Component Ontology
Table 4-8 IPSComp Java Implementation Example Item {7}
Table 4-9 IPSComp XML Meta-Model - XSD Schema Example {12}
Table 4-10 IPSComp Component Description – XML Example {13}
Table 4-11 Example Scenario Model Transformation Using the IPSComp Transformation API - Steps 1, 2 and 3
Table 4-12 Example Scenario Model Transformation Using the IPSComp Transformation API – Step 4

1 Introduction

The extensive use of the Internet has brought as a consequence an overload of web content. The Internet has become a huge storage device on which we can find any sort of information. But because of the amount of resources being published on the World Wide Web, users have been experiencing a lack of accuracy when searching for specific topics. Researchers and industry have identified this drawback and are moving forward to solve it.

Two approaches intended to help users find relevant items have emerged. The first one asks the user to explicitly register information about what he likes and what he dislikes. The recommender system then suggests items that are similar to the ones the user likes and rejects those that resemble the ones the user does not like. This is called the content-based approach, because the system must rate what a similar item is. The second approach classifies users into profiles; a profile groups users with similar characteristics (age, sex, tastes, educational background, etc.). For users within the same profile there is a high probability that they will like and dislike the same things, so recommendations are made based on such profiles; this is called collaborative filtering. Additionally, the content-based and collaborative filtering approaches have been combined to obtain better results. These approaches, and combinations of them, work relatively well to find or suggest items in simple matters such as movies, books and shopping. As a matter of fact, many algorithms have been proposed to refine page ranking and to gather user preferences.

The amount of information is not the only problem faced at the moment; there is also the wide variety and complexity of content. In other words, on the World Wide Web it is possible to find a wide variety of items, from static content up to complete applications. It is necessary to come up with a solution that allows classifying any source of information or content, in order to provide users with means to take advantage of the published resources.

The aim of this study is to offer a complete software component description, as well as the basis to implement a retrieval tool for such components. Furthermore, it is not possible to leave aside the fact that there are already developed software components, as well as repositories which classify them; it is necessary to find means to integrate the elements found in those existing repositories. Additionally, this research is a first step in the definition of the functional architecture of a larger project. The Intelligent Portal for Searching Components (IPSComp) project aims at developing an open information portal for commercial off-the-shelf (COTS) components (software and non-software components), in which we deal with information about products, and possibly with exchanges between their users, or between users and producers. The IPSComp project is in the specification phase.
The idea of building software systems from pre-fabricated software components that could be exchanged in software component markets has been around at least since 1968, when it was expressed by McIlroy in [72]. His basic proposal is to be able to combine components from different vendors to create applications. The composition is made up from plug-and-play-like reuse of black box components, which enables software component markets. It is important to remark that there are currently several definitions of what a software component is; there is still no general consensus about precisely what constitutes a software component. It is therefore worth providing a software component definition that encloses the concepts handled in the present work. As stated in [17]: "A component consists of different (software) artefacts. It is reusable, self-contained and marketable, provides services through well-defined interfaces, hides its implementation and can be deployed in configurations unknown at the time of development. A business component is a component that implements a certain set of services out of a given business domain. In order to be operable, components need a basic infrastructure, e.g. Enterprise Java Beans (EJB) or .NET".

This definition is in line with the objective of the IPSComp project, which is intended to become a components market place. The fact that a component is reusable and self-contained shows that the component is independent; it can be used without other components present. But it is also feasible to combine it with other components or into a system through its set of well-defined interfaces. This component definition supports the black box approach, which is necessary in the IPSComp project scope, where software components will be marketed and used by customers in different implementations at later times. The business component definition fits the items to be handled by the system, as well as the platforms named there.

To use components based on the given definition, it is necessary to standardize the component description: the interface and behavior of a component have to be described in a precise way. Specification becomes a key point in the composition of business components, since the specification might be the only source of information available to a composer who combines business components from different vendors into an application system. With the improvements in software component development, a set of platforms has emerged: J2EE, .NET, CORBA. All these platforms have means to connect components based on a syntactic description, which is determined by the interfaces each component provides or requires. But there is a clear lack of semantic and behavioral description, which should provide the component's characteristics and thereby encourage and facilitate its reuse.

As a consequence, one of the problems Component Based Software Development faces regarding reuse is that finding the right component is a complex task. Despite this, software component development has not decreased. In fact, this is not even a new issue: the literature contains papers dating from 1995 [3] that propose methods for COTS selection. In [3] the authors identify the following main issues in COTS selection:

• Lack of a defined, systematic, and repeatable process for COTS selection.
• A potential disregard of the application requirements.
• Misuse of data consolidation methods in decision making for the COTS selection.

The hard-to-identify mismatches are largely due to the fact that the capabilities of the components are not clearly described or understood through their interfaces. Most commercially available software components are delivered in binary form, so it is necessary to rely on the components' interface descriptions to understand their exact capabilities. Even with the components' development documentation available, people would certainly prefer, or can only afford, to explore the interface descriptions rather than digest the development details. Despite the fact that this is an old issue, it is still an open topic of research. It has not been possible to overcome it because it involves a vast number of possible candidates (a great many already developed software components) together with a set of unstructured information trying to describe them, which is in addition difficult to analyze. As such, there is no clear perception of what a software component is able to provide. It is necessary to overcome these issues in order to take advantage of the component-based software development objectives, create a market place for software components and be able to incorporate already developed components.

Chapter 2 presents the State of the Art, covering techniques used to handle component description, retrieval and model transformation. Chapter 3 gives a brief introduction to the IPSComp project initiated by the eCots association. Chapter 4 describes the contribution made to the domain under research as well as the prototype developed; the complete code is found in the annexed document "IPSComp: Intelligent Portal for Searching Components Prototype Source Code". Chapter 5 states the conclusions, and Chapter 6 closes with the future work.

2 State of the Art

To overcome the issues present in the component selection process, the authors in [3] propose the OTSO (Off-The-Shelf Option) method, which supports the search, evaluation and selection of reusable software and provides specific techniques for defining the evaluation criteria and comparing costs and benefits of the different alternatives [10]. The method states that in order to evaluate software components it is necessary to define the evaluation criteria, which can be categorized into four areas: functional requirements, product quality characteristics, strategic concerns, and domain architecture compatibility. The aspects to be evaluated in each category can be defined with the Goal-Question-Metric (GQM) model, which also provides a well-defined template for documenting the evaluation goals. The objective pursued is the decomposition of the criteria into a set of concrete, measurable, observable or testable evaluation criteria. The method then relies on the Analytic Hierarchy Process (AHP) to consolidate the evaluation data for decision-making purposes. As stated by the authors, each COTS selection process differs from the others, and they provide a method for considering a set of elements to be taken into account when selecting components. But it is up to the evaluation team to define the goals and to find the components that will fulfill the requirements pursued. As such they are not classifying components, but at least they give a group of parameters to consider.
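To illustrate the AHP consolidation step, consider a small numerical sketch (not part of the OTSO description itself; the three criteria and the pairwise judgements are invented for illustration, and numpy is used for the eigenvector computation):

import numpy as np

# Invented pairwise comparison matrix for three evaluation criteria
# (functionality, quality, cost); A[i][j] states how much more important
# criterion i is judged to be than criterion j, and the matrix is reciprocal.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP takes the principal eigenvector of A as the priority vector.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
weights = principal / principal.sum()

print(dict(zip(['functionality', 'quality', 'cost'], np.round(weights, 3))))

The resulting weights are then used to aggregate the per-criterion scores of each candidate component into a single ranking.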
By formalizing this criteria definition process, it is possible to reuse the OTS software selection experiences better, leading to a more efficient and reliable selection process. We can infer from the OTSO method that we need a complete component specification in order to be able to perform a deep analysis that will drive us to software reuse. Several researches are being performed in the field of components semantic description, and they are based on the utilization of an ontology. As stated by T. Gruber "An ontology is an explicit specification of a conceptualization" [49]. What is important is what an ontology is for. Gruber’s team has been designing ontologies for the purpose of enabling knowledge sharing and reuse. In that context, an ontology is a specification used for making ontological commitments. An ontological commitment is an agreement to use a vocabulary (i.e., ask queries and make assertions) in a way that is consistent (but not complete) with respect to the theory specified by an ontology. Gruber’s team builds agents that commit to ontologies. They design ontologies so they can share knowledge with and among these agents. A number of other knowledge representations (taxonomies, thesauri and controlled vocabulary) are often described as being ontologies. While this is strictly true according the broadest definition of an ontology, the scope and power of ontologies is fully realized when they express a richer set of relationships between concepts. Those terms will briefly describe:1 • Controlled Vocabulary. Is the list of a set of terms that in order to be included in the vocabulary must be approved by the vocabulary authority. Each term must be unambiguously defined. A controlled vocabulary follows two rules: (i) If the same term is commonly used to mean different concepts in different contexts, then its name is explicitly qualified to resolve this ambiguity. (ii) If multiple terms are used to mean the same thing, one of the terms is identified as the preferred term in the controlled vocabulary and the other terms are listed as synonyms or aliases. • Taxonomy. It is a collection of controlled vocabulary organized in a hierarchical structure. There may be different type of relationships in the taxonomy (e.g., whole-part, genus-species, type-instance, etc). Some taxonomies allow poly-hierarchy, in such a case a term can have multiple parents. This means that if a term appears in multiple places in a taxonomy, then it is the same term. • Thesaurus. It is a networked collection of controlled vocabulary terms. This means that a thesaurus uses associative relationships in addition to parent-child relationships. The expressiveness of the 1 Based on the article “What is the difference between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?” written by Johannes Ernst, NetMesh Inc CEO. (). 3 • • associative relationships in a thesaurus varies and can be as simple as “related to term” as in term A is related to term B. Ontology. It is a controlled vocabulary expressed in an ontology representation language. The language has a grammar for using vocabulary terms to express something meaningful within a specified domain of interest. The grammar contains formal constraints (e.g., specifies what it means to be a well-formed statement, assertion, query, etc.) on how a term in the ontology’s controlled vocabulary can be used together. On the other hand the ontology allows the definition of richer and more descriptive relationships between concepts (e.g. 
is-targeted-for, is-regulated-by, affects, isexpressed-in, etc). Meta-model. It is an explicit model of the constructs and rules needed to build specific models within a domain of interest. It can be seen from three perspectives: (i) as a set of building blocks and rules used to build models. (ii) As a model of a domain of interest. (iii) As an instance of another model. A valid meta-model is an ontology, but not all ontologies are modeled explicitly as meta-models. The perspective (ii) allows comparing Meta-models to ontologies. After taking a look to the previous definition, it is important to point out that it is possible to define relationships between the different concepts. As a matter of fact a Controlled vocabulary can be part of an Ontology, a Taxonomy can be part-of an Ontology, and a Thesaurus can be a part of a Ontology. Finally, if you create an ontology, which is a set of terms naming concepts (classes) and relations, and you use that vocabulary to create a set of data (instances of the classes, and assertions that the instances are related to each other according to the specific relations in the vocabulary), and you think of the set of data you create as the model of your domain, then the ontology is the meta-model and the set of data created is the model. Consequently to enclose this set of definitions, this can be classified as a taxonomy. 2.1 Ontology Manager Systems The process to create an ontology is not a simple one. It has to go throughout a set of evolution and refinement, and it has to be performed by a specialist in the target field. Even if we are able to define a quite good ontology, this approach must be able to evolve with the domain evolution, and to adapt to new requirements, specifications and context. As a matter of fact some effort has been addressed in order to develop Ontology Management Systems. Among these approaches we can find PLIB [30], ONTOMANAGER [12], ONTOSHARE [14]. Those will be briefly described to get an idea of what they provide. It is important to point out that certain systems will provide a dual function, on the one hand it will handle the ontology, and on the other it will provide means to apply such ontology on a particular set of items. 2.1.1 ONTOMANAGER The evolution of the ontology will be according to the users’ needs. In order to gather the user needs a special log is being recorded in the web servers. Each user’s activity or event is recorded, specific information such as the type of interaction (query, browse, read, etcetera), date, time, user identity. The dependency between events is represented using the “previous event” relation. This analysis is applicable in ontology-based information portals, in which the ontology supports the process of indexing content of an information resource. OntoManager consists of three components: the Data Integration Module, the Visualization Module and the Analysis Module [12]. The Data Integration Module performs three main tasks. • Collect Data from different servers if it is a distributed system and creates a central ontology log, because all distributed logs are supported by the same domain ontology there will not be 4 • • heterogeneity problem. Pre – Process Data by terms of: (i) Data Abstraction because the log will record words but not concepts, for example if any user is looking for more details regarding the “ABC” project in the log we will find “ABC”, but not the ontology concept project. So the “ABC” occurrences must be replaced with the project ontology concept. 
(ii) Extracting Links: It is important to find out the frequency of browsing relations between to concepts by analyzing successive events (previous event information recorded in the log). Because of the amount of information that can be store the log is being transferred into OLAP cubes, which enables the analysis of the information. Organize logs in a way that enables a fast and efficient access. The Visualization Module combines the integrated ontology usage data with the ontology itself. It enables presentation of the same information in different ways: Graph-based representation of the ontology where the nodes represent the concepts in the ontology and links correspond to the direct hierarchy; table-based representation; bar-based representation (Pareto diagram). For instance the graph representation allows performing queries on the OLAP cubes, to see the usage of the ontology concepts, among other things. The Analysis Module makes suggestions to the ontology manager on how to improve the ontology. This improvement should be guided by the users’ needs. The module performs two tasks: • Ontology Evolution which supports the process of modifying and updated the ontology. Besides it allows undoing changes made to the ontology. • When adding new concepts to an already existing ontology, some instances of it are not classified. OntoCrawler provides semantic capabilities for identifying and extracting new instances whereby existing knowledge about concepts, relations and instances can be used as background knowledge for the crawling process. 2.1.2 SymOntoX SymOntoX (Symbolic Ontology XML-based management system) aims to provide identification of business concepts. As a matter of fact it offers some native modeling options called meta-concepts, such as Business Process, Business Object and Business Actor, which help enterprise experts to better categorize and identify concepts. The ontology model in SymOntoX is referred as OPAL (Object, Process, and Actor modelling Language) [24]. A meta-concept has a double nature (i) it defines a template that is used to model a concept in the ontology; (ii) it partition the ontology in a natural and intuitive way. [24]. The entities of the OPAL are defined as: Actor_Kind: Models any relevant entity of the domain that is able to activate or perform a process (e.g., Tourist, Travel Agency). Process_Kind: Models the activity that is performed by an actor to achieve a given goal (e.g., making a reservation). Object_Kind: Models a passive entity, on which a process operates (e.g., hotel, flight), typically to modify its state. The system also defines a set of meta-concepts in order to facilitate the process of enriching the ontology. 
Among these meta-concepts we find: Goal_kind a desired state of the affairs that an Actor seeks to reach (e.g., Go_vacation); State_Kind a characteristic pattern of values that instance variables of an entity can assume (e.g., Flight_full); Rule_Kind an expression that is aimed at restraining the possible values of an instance of a concept (constraint rule) or that allows to derive (production rule) new information (e.g., Ticket purchase 30 days before departure); Information_Element_Kind atomic attribute (e.g., Flight_number, Nr_of_rooms); Information_Component_Kind a cluster of attributes pertaining to the information structure of a domain 5 concept (e.g., Flight_info, Hotel_address); Action_Kind a process component, i.e., an activity that is further decomposable (e.g., Room_Requesting); Elementary_Action_Kind a process component (activity) that is not further decomposable (e.g., Cancel_reservation). Then SymOntoX is able to define ontological relations in order to structure the knowledge and to achieve reasoning. These relations are: Specialization: a binary relation that denotes the IsA refinement between concepts. Generalization is the inverse relation. Decomposition: connects a composite entity with its parts. PartOf is the inverse relation. Predication: the relation that allows an Information Element or Component to be associated to primary concepts. The concept that plays the role of an attribute must be either an Information Element or an Information Component. Relatedness: this notion represents a generic domain relation between two concepts. It is generally refined into a specific domain relation, associating a specific label to it. E.g., rel(hotel, station) is refined into: near(hotel, station). Similarity: the binary relation that allows a similarity degree between two concepts to be expressed. The similarity degree must belong to the interval [0.4, 1]. On the other hand the system defines three kinds of users: User who can read the ontology; Super User with read and write rights; Ontology Master is responsible for the ontology content, it has to validate the concepts proposed by the super users. The SymOntoX system also enhances: Querying Capabilities that allows users to retrieve concepts, but it is important to point out that the query is performed by filling out guided forms. References annotation, ontology export to RDF, and there will be provided and import export utility to OWL. SymOntoX has a three-tier architecture: User Interface: It runs on a thin client, which is a web browser, and is developed using JSP. Server Side: it manages the communication with the clients through HTTP by using Java Servlet technology and the SymOntoX application logic. Storage tier: It is composed by a database containing the concepts (ontology content), a log database containing the history of the activities performed by users, a factual database containing the sample instances and a database for administrative purposes, which handles users and existing ontologies. 2.1.3 Building Domain Ontology Based on Web Data and Generic Ontology This tool does not have a specific name. It aims to the construction of Domain Ontologies by first extracting a topic three and then evolving it into a domain ontology. The authors state that web pages are semi-structured, and by applying a wrapper technique the web pages are filtered out the HTML tags and useless information such as advertisement. Then the conjunctions, adverb, exclamations are eliminated. Each word is weighted it in the page. 
The words are also filtered by weight. The data in the web pages are represented as points. Afterwards a Hierarchical Agglomerative Clustering (HAC) algorithm is applied, which produces a hierarchical grouping of the data represented as points. The algorithm starts with each point in its own cluster, and then per iteration it merges the two clusters that are most similar. In [4] the distance between two points is calculated by the Euclidean distance and the similarity between two clusters is determined by single-link. A node from the binary tree can also be characterized as a data point. Later, to identify topics in the binary tree, the variance of each node is calculated. A smaller variance value of the node indicates a higher intra-node similarity, in which the words are syntactically similar. Then the points of inflection are identified. A node is a point of inflection if the variance of this node is smaller than that of its parent node and that of all its children nodes. This means that the data in the children nodes are about the same topic, while the parent nodes cover more general issues. The topic tree is then analyzed in order to:
• Figure out the concept that the topic implies. The topic has been named after the word with the highest weight, but that word might not reflect the topic.
• Clarify the semantic meaning of the hierarchical structure.
• Extract possible concepts expressed by the words within a topic and build relations between them.
Finally they match the concepts found with a generic ontology, specifically HowNet. In HowNet, words about the same subject have similar sememe definitions (a sememe is the meaning expressed by a morpheme, the minimal meaningful language unit). Central topics of the concepts are formed by the common sememes of the words. The relations between concepts can be deduced from their sememe definitions.
2.1.4 PLIB
The result of this project is an ontology manager which provides users with the means to define and evolve an ontology. It is based on three basic concepts:
• In any domain there is a specific vocabulary that belongs to the domain and is able to express well-defined properties which allow person-to-person communication (context-explicit ontology).
• Human beings are continually evolving their concepts and practices, so it is necessary to have a tool that allows users to create their own ontology as a specialization of a shared ontology.
• It is necessary to provide human and/or computer understanding of data meaning.
As a result, they have created a new database model called Ontology-Based Database (OBDB), where each database contains an ontology. It is worth pointing out that in [30] the author distinguishes two kinds of ontologies: one is document-oriented, called linguistic ontology (LO), and the other is structured-data-oriented, called concept ontology (CO). A linguistic ontology focuses on the meaning of the words of a specific Universe of Discourse (UoD) in a particular language. On the other hand, a concept ontology aims to represent the categories of objects and of object properties that exist in some part of the world. The concept ontology needs only to describe those primitive concepts that cannot be derived from other concepts.
It can also be property oriented, in the sense that the concepts must be kept minimal and specialized by means of properties, for example instead of defining the concept "10-HP engine", "25-HP engine", "50-HP engine", "100-HP engine", you should define only two concepts that will handle all the meanings a class "engine" and an integer-valued property "power in HP". As a result only those classes which can not be represented by restricting existing classes by means of properties values need to belong to a propertyoriented concept ontology. The author defines a single PLIB ontology as a 6-tuple where: "O = <C; P; IsA; PropCont;ClassCont; ValCont>, where: (1) C is the set of classes used to describe the concepts of a given domain; (2) P is the set of properties used to describe the instances of C. P is partitioned into Pval (characteristics properties), Pfonc (context dependent properties) and Pcont (context parameters). When pЄP is a physical measure, its definition includes its measure unit; (3) IsA : C → C is a partial function, the semantic of which is subsumption; (4) PropCont : P → C associates to each property the higher class where it is meaningful (the property is said to be visible for this class); (5) ClassCont : C → 2p associates which each class all the properties that are applicable to every instances of this class (rigid properties); (6) ValCont : Pfonc → 2pcont associates to each context dependent properties the context 7 parameters of which its value depends. Axioms specify that: (1) IsA defines a single hierarchy, (2) visible and applicable properties are both inherited, and (3) only visible property may become applicable." [29] If we want to define an ontology which is mapped to another ontology, the formal definition is Om = <O, M> where O is a single PLIB ontology and M = {mi} is a mapping object with four attributes: "m =< domain; range; import; map >, where: (1) domain Є C defines the class that is mapped onto an external class by a case-of relationship; (2) range Є GUI С contents {string} is the globally unique identifier of the external class onto which the m.domain class is mapped; (3) import Є 2p is a set of properties visible or applicable in the m.range class that are imported in ClassCont(m.domain); (4) map С {(p, id) | p belongs to P ^ id belongs to GUI С {string}} defines the mapping of properties defined in the m.domain class with equivalent properties visible or applicable in the m.range class. The latter are identified by their GUIs." As a conclusion, having in mind that the ontologies should react to all changes in the modeled domain, if the underlying ontology supporting a specific domain, is not up-to-date or the annotation of knowledge resources is inconsistent, redundant or incomplete, then the reliability, accuracy and effectiveness of the system decrease significantly [12] [5]. Ontology manager systems should provide the means not only to create but also to evolve an ontology. 2.2 Component Retrieval Schemes There are several factors that impact the search and retrieval process for software components such as the scope of the repository, query representation, asset representation, storage structure, navigation scheme, size of the repository. Different techniques are used to accomplish component retrieval. In [42] the authors have identified 5 different component retrieval schemes from a repository. I have added two new categories to that classification. The new categories are called users web mining and browsing. • Keyword Search. 
It is based in an indexing technology. The keywords provided to perform the search are compared to software documentation and items descriptions. This approach is simple to implement and the indexing task can be an automatic process. But it is limited by a lack of semantic information between the query, the set of keywords describing each item and the relation among items [41]. As a consequence keyword based searching is not efficient and it will have as result either too many or too few hits. If the result set contains too many hits, the number of non-relevant hits is likely to be very high. If the result set contains too few hits, items relevant to the search can be left out of it basically by the lack of semantics. Furthermore this approach does not take into account additional information such as relationships among objects for instance synonymous names between different concepts that might be applied. • Faceted Classification. It provides a classification for the items it is willing with. The idea is that this classification must be build by domain experts. Some keywords will describe the components, and such keywords will be placed in the classification schema. The classification schema is used as a standard descriptor for the software components. In order to solve ambiguities a thesaurus is derived for each facet to make sure the keyword matched can only be within the facet context. This technique is useful for objects that can clearly fit into such categories, but it losses quality for objects whose classification is not explicit, or objects that can belong to different classifications regarding specific conditions. Besides it is a labor intensive approach to maintain the classification and description. It requires a domain analyst in order to define the facet. • Signature Matching. It is based in the type and number of arguments defined for the different methods. The idea is that the search is defined in terms of the method’s parameters and return type. It has as drawbacks that the requester must have a deep technical knowledge on the software component he is looking for, and the search can retrieve a lot of items not related with the one the user is expecting to retrieve. Two methods can have exactly the same signature and accomplish 8 completely different task. For instance, the signature for the strcpy and strcat in the C language, even though the signature is the same the result accomplished is totally different. Furthermore this technique does not take into account the domain or the search context information. • Behavioral Matching. It takes into account the functional behavior of the objects. In this technique objects are provided with input vectors and the output vectors. The input vector represents the state of the system before the execution; the output vector represents the state of the system afterwards. The object method is executed and the output vector is generated for each object. By comparing the generated vector to the expected outputs the objects that show certain behavior are retrieved. • Semantic-Based Method. This approach employs domain ontologies. The queries provided by the users are expressed in natural language. The components have a description also expressed in natural language. A semantic analysis algorithm is applied to the user’s query as well as to the component description. This semantic analysis uses the different domain ontologies. 
The query semantic analysis is matched against the component description semantic analysis to perform the retrieval process. It is important to point out that for this approach the ontology construction is time-demanding and requires a domain specialist in order to accomplish it. Furthermore, the ontology has to evolve along with the domain.
• Browsing. The items belonging to the collection to be searched must be classified or categorized. The system provides at least one interface, which allows traversing the classification. The interface might offer different visualization schemas (trees, tables, etcetera).
• Users Web Mining. Some search systems are built by continuously monitoring the users' behavior, in order to learn from them and later apply such knowledge at the moment of presenting recommendations. This technique is comparable with collaborative filtering. These systems do not use any kind of ontology to accomplish their task. The idea is that, based on user-behavior and item-monitoring tools, the system is able to create user profiles, which will be used at the moment of providing recommendations.
Table 2-1 shows a comparison between the different component retrieval schemes; it has been taken from [42], but augmented with some retrieval schemes and with a column that includes some examples.

Retrieval Scheme: Keyword Search
Underlying Approach: Search for the occurrence of string patterns specified by the user in component attributes and descriptions.
Comments:
• May result in too many or too few items retrieved because only keywords are used for searching
• May result in many unrelated items
• It is not precise; it lacks semantics
• It is simple and can be accomplished in an automatic way
Example: INSEAS [8], ONTOLOGER [5], OntoShare [14], LawBot [9]

Retrieval Scheme: Faceted Classification
Underlying Approach: Classify components based on facets (taxonomies) such as the function the software performs, medium used, type of system, functional area, etc.
Comments:
• Components must fit the classification scheme
• Some components may overlap categories
• Difficulty in managing the classification scheme when domain knowledge evolves
• Only guided search – no augmentation
Example: INSEAS [8], InterLegis [23], ADIPS [13]

Retrieval Scheme: Signature Matching
Underlying Approach: Matching of function types and argument types to the query specified by the user. Signature matching can be done at the function level or at the module level (set of functions).
Comments:
• Difficult to map user requirements to function and module signatures
• Signature match does not guarantee expected behavior of the component
• Multiple components may have similar signatures
• Limited support for query relaxation
Example: Agora [11], COMPONENTEXCHANGE [35]

Retrieval Scheme: Behavioral Matching
Underlying Approach: Execute each library component with random input vectors and generate output vectors. Compare expected output to actual output and select components.
Comments:
• May have low recall
• Difficult to use when components have complex behaviors or involve side effects
• Difficult to express required behaviors
• No support for query augmentation
Example: Hall R. J. [50], Behavior sampling [55]

Retrieval Scheme: Semantic-Based Method
Underlying Approach: User requirements expressed as simple imperative or nominal sentences. NLP is used for generating initial queries, augmented with domain information. Components are selected based on a closeness measure (query frame vis-à-vis component frame).
Comments:
• Domain model provides context information
• Ontology ensures use of appropriate terms
• Query augmentation to improve recall and precision
• Natural (flexible) way for the user to specify requirements for components
Example: Yao et al. [41], Sugumaran et al. [42]

Retrieval Scheme: Browsing
Underlying Approach: Based on a component classification, it shows different ways to navigate throughout the elements composing the repository.
Comments:
• All the items must be classified following a standard taxonomy, ontology, or classification scheme
• An item may not be well classified or not fixed in a specific category
Example: INSEAS [8], CompoNex [15], ADIPS [13]

Retrieval Scheme: Users Web Mining
Underlying Approach: The system monitors users' behaviors and traversals to learn about their preferences. It classifies users by common characteristics into different user profiles in order to make recommendations. It is based on collaborative filtering techniques.
Comments:
• User behavior provides context information
• Accuracy depends on the user profile classification
• The system continuously updates its information automatically
• Not based on an ontology
Example: LEOPARD [40], RASCAL [20], SUGGEST [2]

Table 2-1 Summary of Component Retrieval Schemes

In the following sections there is a brief description of some of the systems applying the techniques described in the previous paragraphs; this gives an idea of how they work. It is important to point out that some of them combine several techniques. That is the reason why, next to each system name, the most relevant techniques applied are specified.
2.2.1 Keywords Technique
2.2.1.1 INSEAS – Keyword – Faceted – Browsing
INSEAS stands for Intelligent Search Agent System. The system is based on XML and agent technologies. The Component Agent and the User Agent convert their inputs into XML documents, which are stored in the repository for later retrieval.
The User Interface Agent is responsible for providing a convenient and efficient search environment. It presents the interfaces for the different search methods and shows a reasonable number of results. The user can weight similar words to extend the search, perform combined searches, and give priorities to certain facets. The user can request the help of an agent, so the user interface agent shows him specific questions provided by the intelligent search helper agent.
Component providers use the Component Agent in order to store components in the INSEAS repository. The Component Agent represents the component information in an XML-based format. INSEAS supports CORBA, JavaBeans, and COM/ActiveX components [8].
The Component Search Agent executes four different search methods: keyword search, facet-based search, browsing search and interactive search with a helper agent. The Component Search Agent is composed of four independent agents:
Keyword Search Agent: Taking the words provided by the user, it retrieves similar words based on the relevance and relationships between concepts. If the result set is large, the agent provides the user with methods to reduce it. The threshold of the result set is determined by the user's behavior.
Facet-based Search Agent: The agent provides a set of facets that the user needs to fill in. The user does not have to fill out all the fields. The agent also observes the fuzzy concept relationship matrices for each tag. Weights between concepts (words) and relevance between concepts in tags and component description documents are used for the retrieval process.
Browsing Search Agent: The search is performed in the classified category tree by experts. It can search by domain, implementation language, system type, operating system, among others. It is possible to apply the other agents on the query result set if the user wants to. Intelligent Search Helper Agent: Regarding different factors such as user’s preference, user’s environment, user’s level, domain-related knowledge, search goal, and the search results at each step, the agent decides the order of the questions about the facets to conduct the search. This process is based in a rule-based reasoning technology. The User Agent: It manages and stores the user information in the repository, and collaborates in the component search using the repository. It helps users to provide information regarding currents project’s domain, experience, environment, and etcetera. The Repository stores information about software components, users, and the knowledge and rules. The users and components information is stored in XML. • • Expert Knowledge and Rules repository: o Fuzzy Concept Network Matrices, which stores the relevance and relationship matrices for word concepts and for tag concepts. o The Rules and Expert Knowledge. Uses the rules for intelligent search, results representation, and user interface representation. It takes into account the user information tags, component information tags and user behavior tags. User Information repository: stores information such as user’s id, current project domain, user’s role, development experience, domain experience, dominant language, system type, operating system, satisfaction degree of resulting components, search process, user preferences. 11 • • Component Information Repository: The information in this repository is managed by the Component Agent. The data such, as functionality, environment, interfaces, service level, component type, is stored as XML format. Case Repository: It sores information such as user query, user search steps, search time and satisfaction degree, which is used to perform system upgrade, component classification upgrade, and user preferences upgrade. It also uses an XML format. The System Management Agent: It uses the case information about user behaviors for the system upgrade. The user’s feedback changes the component information of service level, categorization, and weights of the fuzzy relation matrix in order to improve the system performance. INSEAS takes advantage of XML in order to give a semantic meaning to the actors involved. It defines three XML DTDs: Component Information DTD, User Information DTD and Search Case DTD. XML Specification for Component Information: It is composed by several concepts such as functionality, environment (operating system, language, system, etcetera), service level (performance, limitations, database usage), it includes component type, size, domain, understanding level, price, and user’s feedback. XML Specification for User Information: It stores user id, project domain, development experience, language, system, operating system, user’s role, preference, and degree of search satisfaction. XML Specification for Case Information of Search Process: It holds the information of the relationship between user inputs, results, search time, user selection, and user’s satisfaction level. 
For the searches it stores the user id, user characteristics (project domain, expertise, among others), the keywords used, the date, the number and ids of selected components, the number and ids of non selected components by the user. INSEAS uses the fuzzy similar relationship, the fuzzy generalization and specialization relationship. To model the extended fuzzy concept network it uses the relation matrices and relevance matrices. The System Management Agent makes a relation matrix and a relevance matrix for the total tag group. Furthermore many matrices exist in the repository, for the component classification, the repository has generalization relation and specialization relation matrices for word concepts. 2.2.2 Faceted Technique 2.2.2.1 InterLegis Project based on Odyssey Search Engine - Faceted In [23] the authors state that component based software reuse is affected because components are distributed and heterogeneous, and there is not a domain ontology by which the users can refer to the components they are willing to use. On the other hand, the multi-database or Heterogeneous and Distributed Database System (HDDS) are related with distribution, heterogeneity (and ontology). They propose to apply HDDS technology to achieve software component retrieval. The legacy database will be replaced for the components repository. The use of mediators will represent and integrate domain information repositories (distributed and/or heterogeneous). The metadata stored in the mediators describes the components repositories, presenting their domain, semantics and components architecture. By extending the Odyssey Search Engine it allows the publication of components in the internet using comPublish, and by associating each component to a specific domain based on ontologies and XML. The system allows publishing, describing, storing and retrieving software components. A mediator is created for each domain. The GOA server stores metadata and components locally. In the 12 mediation layer, each mediator represents an ontology domain. The ontology provides identification of components and the mediator helps in the mapping to the component repository. Each domain Ontology will define Ontology terms, for instance “a proposal” can be an Ontology term in the legislative domain, or “a regulation” could be an Ontology Term in the judiciary domain. The mediator layer will also enable the application to define relationships between ontology terms in different domains, so we could state synonymous among ontology terms, which belong to distinct domains. This relationship will help to outperform the search methods. Components are described through XML, and it will contain relevant domain information. It will also define the type of component it is. For instance in the code observed in Table 2-2, the specified component belongs to the Legislative domain, it is use in the analysis phase, it is a Use Case, and its implementation language is UML. <component> <domain> Legislative </domain> <phase> analysis </phase> <type> use case </type> <language> UML </language> <author> Robson Pinheiro </author> ... </component> Table 2-2 InterLegis Component Description [39] The overall architecture for the system described in [39] is as follows: Search Agent: The user interacts with the interface to define a query that is handled by the Search Agent. This will send the query in the Web Search Engine, which is based on Google results, but it will also send a message to the ComPublish. 
The message contains the application domain so the ComPublish will use the appropriate mediators, the user profile and the component features to retrieve a set of components. Machine Learning Module: It is responsible to gather user information to create user profiles. It will be monitoring the users, and it will update the user profile based on the web pages the user visits, the number of occurrences that the words appear in the different pages. It will have a list of user stereotypes that will help to limit the search results. Filtering Agent and Collaborative Agent: This two agents work together. Once the web search engine, and the ComPublish return the set of components found by a specific query the Filtering Agent and the Collaborative agent will organize and rate the results based on the information stored in the Hot Links and User Profile (that has been gathered by the Machine Learning Module). Once this task is done the rated query results will be presented to the user. 2.2.2.2 ADIPS Framework - Faceted - Browsing The framework has three main components: agent virtual machine, ADIPS Repository and design support environment. Software components are stored in the repository as agent-based components called repository agents. A repository agent works in the agent virtual machine. ADIPS repository designs a multi-agent system working in an agent virtual machine automatically using the repository agents according to a specification given by an interface agent in an agent virtual machine. Repository agents are created by component programmers using a design support interface. Repository agents carry out the application design, which has design knowledge concerning agent-based components. The repository agent has knowledge on design specification of the component including a 13 functional specification an interface specification, cooperation protocols [13]. On the other hand the repository agent has the following two capabilities: (i) Recognition of requirement: if the agent is reusable for the requirement specification, it replies to the message with functions and performance which can be applied. (ii) Retrieving components: carried out by the repository agent. In order to create new agent-based components the programmer analyzes component specifications designed by an application system designer. Then he creates repository agents to fulfill the specification. Agent-based components can recognize a specification and retrieve other components. The programmer can reuse existing components, modify them and store them as new ones. When a specification is sent to the repository as a message, all the agents which manage the component in the repository check each specification and requirement in automatically. Furthermore, the agent which manages other agents decomposes the specification to new specification. This will enable it to retrieve the component without exact match between the specifications described by the application designer and the sub-module specification. This is done throughout an interface, which receives the host, repository and specification. The specification is written in a text area, and in [13] they do not describe such specification. Components that perform the task of sub-module specification will reply to the message with a specification that is presented to the designer in a new window. This allow the application designer decide if there is a lack of components to fulfill its needs. All the components have an attribute for classification called category. 
The framework allows browsing components by category, and shows the components’ specification. Using this tool the user is able to modify the agent-based component specification. 2.2.3 Signature Matching Technique 2.2.3.1 AGORA - Signature Matching This search engine automatically generates and indexes a worldwide database of software products, classified by component model (JavaBean, ActiveX, and etcetera). The system implements the Internet JavaBeans agent as a meta-search engine on top of the Alta Vista Internet service. This decision was based in the fact that the search for applet:class can locate HTML pages containing applet tags where the code parameter is equal to a specified Java applet class. In order to index the components found they rely on the JavaBeans Introspector class, by which they gather information into five fields regarding associated with the document. In [11] we can find the description of the fields: • • • • • 14 Component: In this case it will be assigned the string “JavaBean”. Name: Contains the fully qualified name of the class or interface represented by the JavaBean. It will empower searches by name. Property: It is a list of properties descriptors, obtained from the JavaBean’s info. Agora processes the properties descriptor to get the property name, type and the names of the methods to read and write the property. It is possible to index the property descriptor, which describes a property that acts like an array and has an indexed read and/or indexed write method to access specific array elements. Event: An event set descriptor describes a group of events that a JavaBean fires. The system retrieves the name, the add listener method, and the remove listener method for each event set and adds them to the tokens associated with the event field. The list of target methods within the target listener interface is retrieved, and the method names are added to the event field. Method: Method descriptor describes a particular method that a JavaBean supports for external access from other components. To reduce redundant information Agora maintains exclusion tables with a list of properties, event sets, and methods common to all JavaBeans. In [11] they also explain the CORBA agent. This agent communicates with CORBA naming services, to get objects, and on the object it reads the interface. To index the CORBA interface information six field values associated with the document are stored: • • • • • • Component: It always stores the value “CORBA”. Name: It holds the interface name. Because the interface information is stored in a separately interface repository, from times to times Agora might not be able to connect to it. So the information will be gathered directly from the interface by calling the CORBA describe_interface() call which has the interface name, its operations and its attributes. Operations: It has the interface operations. Operation description records contain the name of the operation as well as parameters and exceptions. Attributes: It has the interface attributes. Parameters: It contains the operation’s parameters. Exceptions: It contains the operation’s exceptions. To perform the search AGORA offers a set of keywords, depending on the component platform the user is interested in. For instance for JavaBeans there is the word method: method-Name, it will retrieve all the methods with the specific name. 2.2.3.2 COMPONENTEXCHANGE - Signature Matching It is an E-Exchange for software components. 
The components are described using a Component Description Markup Language (CDML) based on XML. The component’s characteristics are partitioned in four categories: syntactic, behavioral, synchronization and quality. Syntactic Aspects of a component is also known as the interface signature. It shows the component functionality. A language that can be used to describe this aspect is CORBA IDL. Behavioral Specifications define the outcome operations. It can be described using non-formal languages. It comprises non-functional properties (quality attributes), which can be Quality-of-Services (QoS) properties such as performance, reliability, availability and global attributes of a component such as portability, adaptability. It is possible to use the QoS Modeling Language (QML) to represent various QoS properties. CMDL describes components in different aspects. In [35] aspects can be seen as horizontal slices of a system’s functional and non-functional properties. Different aspects can be grouped in different aspect categories: Syntactic aspects, Functional aspects, non-functional aspects, and Licensing and Commerce aspects. • Syntactic Aspects: It is similar to the one provided in the CORBA Component Model. It specifies the following: • Provided Interfaces: The services that the component exposes to the client. It has an interface name, a set of methods and a set of attributes. The methods are specified by a method name, the type of the returned value, the parameters and the exception thrown by the method. A name and its data type specify an attribute. • Required Interfaces: Are the services needed by the component in order to provide its functionality. 15 • Events: It is the set of events that the component either generates or responds to. An event is specified by name and direction (out if the event is generated, in if the component receives the event from another component). • Functional Aspects: set of properties represented by a name a value pair. • Non Functional Aspects: it uses QML to define them. Non functional aspects are specified by contracts, which are specified by constraints along multiple dimensions. A constrain consists of a name, operator and value. The name refers to the name of the dimension, or a property of dimension. Dimension properties allow for more complex characterizations of constraints. They can be used for characterizing measured values over some time period. • Licensing and Commerce Aspects: it defines the scope and use of a specific software component. The system is architecture is implemented as a Fat Butterfly Model. On each wing of the butterfly we have a module, one for Components Integrators, the other for Components Vendors. The component integrator module provides a Query Interface, meanwhile the Component Vendor Module provides a Publish Interface. The Component Description Repository, The License and the Matchmakers compose the central part of the system. For the searching process only those components that satisfy all the specified constrains in the query are retrieved. The user’s query is organized in a set of aspect categories. The matchmaking process is performed by multiple matchmaker components. Each matchmaker is specialized in a particular aspect category. The matchmaker component compares the client queries and component specifications with respect to its aspect category. There is a dispatcher component that splits the client query into multiple sub-queries, which are sent to their respective matchmakers. 
Finally the dispatcher determines the final result by computing the intersection of the results return by individual matchmakers. The query is typed using the interface provided by the system but from the example given in [35] it can be inferred that the user needs a deep knowledge of the components, because it is looking for a component with an exact method name. If the interface does not provide any help for that it will become a quite complicated task. They do not show any details on the interface so it is not possible to describe it any deeper. 2.2.4 Behavioral Matching Technique 2.2.4.1 Behavior sampling - Behavioral Matching The base of this approach is that software components have a functional behavior that can be executed on given inputs to produce certain outputs [55]. The idea behind Behavior Sampling’s is as follows: the system generates random input vectors and the user computes the desired outputs. The system then executes each of the library components on the selected inputs, comparing the computed output with the expected output. All components correct on all samples are then presented to the user. The key advantage of this approach is that the semantics of components and queries are captured precisely and canonically by extensional input/output behavior. This technique was improved in [50] Generalized-behavior based retrieval. It allows retrieving not only the complete component but also a sub-component. In order to test a decided behavior, the user must construct a model to use. It introduces a step in the process but eliminates side effects, because the system executes the behavior in the model. The authors state that this is design to provide reuse in the large, so the construction of the model is worth it and it will bring benefits. 16 2.2.5 Semantic-Based Technique 2.2.5.1 Towards a Semantic-based Approach for Software Reusable Component Classification and Retrieval - Semantic-based The paper describes an application that will be developed in order to improve searching and retrieving in large software component repositories, and also in the World Wide Web. In [41] the authors express that the system will have two main tasks: • • It will improve the search capabilities of software reuse libraries through annotating software components and packages in these libraries with a semantic description of the services provided by the software. [41]. This will be accomplished using the following techniques: natural languages processing on queries, reuse metrics to evaluate reusability, semantic service description, and domain knowledge base applied for whole process for semantic description and retrieval. It will also improve searching for software on the World Wide Web through the use of program understanding. The system will be composed by a set of subsystems which will be oriented to accomplish specific tasks, the subsystems are: • • • • • Intelligent, natural language-based user interface. The user’s queries expressed in natural language will be transformed in a conceptual graph semantic representation within a knowledge base, and also translated into semantic web based representation. DAM+OIL ontologies are employed to support domain knowledge for the semantic web based representation. Analysis and annotation tool, which by means of program understanding it identifies and describes the functionality of the software components using semantic representation. 
The semantic representation will be in a conceptual graph knowledge base, and will also be translated into a semantic web representation, supported by DAML+OIL ontologies. Semantic matchmaker that compares a user query in a conceptual graph with component service description in conceptual graphs. It is based on a domain knowledge base. Intelligent Internet search, which automatically will search and download software components from the Internet based on the user requests. They will be annotated by the analysis and annotation tool. Software components repository. It will use UDDI as its infrastructure, and WSDL and DAML-S as service description languages. Components in the repository will be annotated with WSDL/RDF service descriptions. The system will wok as follows: On the user’s queries expressed in a natural language a natural language processing technology will be applied to analyze such a query based on semantics. The query will be translated into conceptual graphs, then into WSDL/RDF. A program understanding tool will be applied to analyze downloaded software packages. The software services and features will also be translated into conceptual graphs and WSDL/RDF. Finally a semantic matchmaker will match the user query conceptual graph to the component conceptual graphs, and then WSDL/RDF representation of the user query and the component are matched. WSDL is used to describe web services in terms of interfaces information, public methods, data type, information for messages, binding information for transport protocol, and address information for locating the service. In this sense WSDL is applicable in a software component description domain. But WSDL has a lack of semantic description, which will be enhanced using Semantic Web ontologies to annotate WSDL description. 17 2.2.5.2 A Semantic-Based Approach to Component Retrieval - Semantic-based The System proposed in [42] uses a natural language interface to provide component retrieval, which utilizes the domain knowledge embedded in ontologies and the domain model. The process to retrieve a component has three main steps: • Initial Query Generation. The user specifies the requirements for the component using natural language. Using a heuristic-based approach keywords and concepts are identified. The query is specified simple imperative or nominal sentences. An imperative sentence consists of a verb phrase with an embedded noun phrase and possibly some prepositional phrases. For instance “Give me details about the biding process”. • Query Refinement. Keywords and Concepts from the user’s query are mapped to the domain ontology. Related terms based on the context are also identified for expansion. The context of the retrieval is established through the domain model. When no matching terms are found in the domain model, the system checks the ontology for synonyms and uses those synonymous to search the domain model. • Component Retrieval and Feedback. The functional requirements specified by the user are decomposed into specific processes and actions using the domain model. Those are then compared to the object’s methods. The user establishes a threshold value, and the objects which percentage of actions supported is greater than the threshold are retrieved. The reuse repository contains components with methods capable of providing some features. It is necessary to match the functionality required with the functionalities supported by the component. Components are described using simple imperative sentences. 
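A minimal sketch of the threshold-based selection described in the Component Retrieval and Feedback step is given below: the actions required by the query are compared with the actions a component supports, and a component is retrieved when the supported fraction reaches the user's threshold. The class and method names are my own illustration and are not taken from [42].

```java
import java.util.*;

/** Sketch: retrieve components whose supported actions cover enough of the required actions. */
public class ThresholdRetrieval {

    /** A repository entry: a component name plus the actions its methods support. */
    public record Component(String name, Set<String> supportedActions) {}

    /** Returns the components supporting at least `threshold` (0..1) of the required actions. */
    public static List<Component> retrieve(Set<String> requiredActions,
                                           List<Component> repository,
                                           double threshold) {
        List<Component> hits = new ArrayList<>();
        for (Component c : repository) {
            long supported = requiredActions.stream()
                                            .filter(c.supportedActions()::contains)
                                            .count();
            double coverage = requiredActions.isEmpty()
                    ? 0.0 : supported / (double) requiredActions.size();
            if (coverage >= threshold) hits.add(c);
        }
        return hits;
    }
}
```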
When parsing the user’s query a frame query is created. Component description is also parsed. For retrieving components, the query frame is matched against the component frame. The query frame contains the features that must be satisfied. The frame structure consists of terms and synonymous, which are inferred from the query and the ontology. The conceptual distance is calculated based on the number of terms in both frames that matches or is related to. The success of the system depends on how the repository is managed, contents indexed and the level of detail components is described. The system architecture consists of a web-based interface to a domain model, an ontology and a reuse repository. It is implemented with server-client architecture. The client is comprised of a web browser interface. In the server side we find the query interface module, query refinement module and repository. The query interface module has three components, which are responsible for capturing the users’ query requirements, generating the preliminary database query and displaying the results in an appropriate format. The query refinement module makes use of the domain specific information contained in the ontology and in the domain model to enhance the initial query. Simple natural language processing techniques translates the user’s query from natural language into a structured query language. The domain model is organized into objectiveness, processes, actions, actors, and components. This classification has been empirically validated in a sales domain application development. The rest of the domain-specific knowledge is found in the ontology. The ontology is composed by the set of terms, information about terms, and relations among terms. 18 2.2.6 Browsing Technique 2.2.6.1 CompoNex – Browsing This approach came out after performing a market maturity study of the software component field. The outcome of the study reflected the means to facilitate the exchange of components between sellers and buyers. Nowadays there is not precise information regarding components, and they must be treated as experimental goods, whose characteristics (usability, compatibility, performance, etc) cannot be assessed until after buying [15]. The testing process should be use to validate the component characteristics rather than to determine them. There should exist and appropriate and automatically verifiable component specifications, this will differ from a test version, because it will explicitly describe component characteristics. It proposes a component classification based in a thematic grouping into several pages [16]: • • • • • White pages: provide general and commercial information about components. It is expressed in natural language. But it proposes the use of taxonomy during specification. It will store general information such as component name, unique identifier, version, description, producer, administrative contacts, and dependencies to other components. As far as the commercial information is regarding it holds conditions of purchase, distribution channel (distribution form, price, accepted payments, scope of supply), and license agreement. Yellow pages: specify the domain that a component belongs to. It also contains information about the underlying architecture and technology of the component. The framework provides different taxonomies for the domain the component belongs to such as UNSPSC, NAICS, and Microsoft GEO. 
It also provides a taxonomy which list implementation technologies (EJB, COM, .NET, XML Web Services, etcetera). Blue pages: summarize domain-related information about the component functionality. It describes a domain lexicon. It provides three concepts: objects (entities), operations (tasks) and processes. It is possible to relate concepts by abstraction or composition. Typical abstractions are the is-synonym-to (is-identical-to), is-specialization-of, and is-generalization-of, which are use to relate concepts to each other. Compositions are used to combine concepts. Typical compositions are order relationships, the is-part-of, and consist-of. Concept definitions give an impression of what a component or an interfacemethod does. Green pages: provide the provided and required interfaces specification. It uses OMG IDL. It supports for each interface specification of invariant, pre-conditions and post-conditions by means of OCL (Object Constraint Language), which is extended with temporal logic to provide flow information, regarding the predetermined order on which methods should be invoked. Grey pages: provide components quality attributes description either to the component or to the interface methods. The idea is to describe quality components regarding the ISO 9126 quality model, which comprises usability, maintainability, functionality, reliability, and efficiency of a component implementation. It should be specified using QML, but after using the system this module it is not being validated, so it accepts any test in the description. The component specification languages and different levels to be specified are explained in detail in [36], which is a Standardized Specification of Business Components. Further references to this project are targeting this approach to describe the Web Services specification in order to provide means to describe and retrieve web services. It shows compatibility with UDDI, as stated in [33] it can be used as a wrapper to UDDI but taking advantages of the complete service description provided by the thematic grouping and component specification into pages. 19 2.2.7 Users Web Mining Technique 2.2.7.1 RASCAL - Users Web Mining It is a recommender agent system for software components. It has 2 main objectives firstly it is interested in recommending software components that the user is searching for. Secondly, it is intended to recommend components that the system believes a user actually requires but is unaware of such components existence or the need for such components [20]. In the system the user, which will mainly be a developer is considered a java class and the components employed by a class are items. Specifically the components referred to are java methods. The system runs in the background to monitor and update the user’s usage history. The rate method use for RASCAL is implicit; it means the user does not have to explicit rate the component. It is implicit because it automatically deduces the user vote for an item by monitoring how often the user has used such component and usage histories of components are automatically collected and stored in a userpreference database. Then the recommendation will be based upon such rate and a collaborative filter technique, which states that the users can be grouped together in a set of users alike. So for a specific user it is highly probable that he will use the components used for the users belonging to the same set. 
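A minimal sketch of this kind of user-based collaborative filtering is shown below: each "user" is a class, each "item" is a method, the implicit rating is the usage count, and the recommendation consists of the methods most used by the classes whose usage vectors are closest to the active class. The cosine similarity and the neighbourhood size are my own assumptions; [20] does not fix these details in the part summarized here.

```java
import java.util.*;

/** Sketch of user-based collaborative filtering over implicit usage counts. */
public class UsageRecommender {

    /** usage.get(user).get(item) = how often the class (user) called the method (item). */
    private final Map<String, Map<String, Integer>> usage;

    public UsageRecommender(Map<String, Map<String, Integer>> usage) { this.usage = usage; }

    public List<String> recommend(String activeUser, int neighbours, int topN) {
        Map<String, Integer> active = usage.getOrDefault(activeUser, Map.of());

        // Rank the other users by cosine similarity of their usage vectors with the active user.
        List<String> similar = usage.keySet().stream()
                .filter(u -> !u.equals(activeUser))
                .sorted(Comparator.comparingDouble((String u) -> -cosine(active, usage.get(u))))
                .limit(neighbours)
                .toList();

        // Aggregate items used by the neighbourhood but not yet by the active user.
        Map<String, Integer> scores = new HashMap<>();
        for (String u : similar) {
            usage.get(u).forEach((item, count) -> {
                if (!active.containsKey(item)) scores.merge(item, count, Integer::sum);
            });
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(topN)
                .map(Map.Entry::getKey)
                .toList();
    }

    private static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

In RASCAL itself the usage vectors are mined from the code repository rather than entered by the developer, which is what makes the rating implicit.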
The architecture of the system is composed by three main elements: the code repository, the usage history collector and the recommender agent. Code Repository: As new components are developed they must be stored somewhere. "The repository is effectively a user preference database, a user is a java class and the components employed by a class are items." [20]. Usage History Collector: It will automatically mine the code repository to extract usage histories for all classes. This will need to be done once initially for each class and subsequently anytime a class is added to the repository. The information is extracted using the Byte-code Engineering Library (BCEL). Component usage histories for all the users are then transformed into a user-item preference database. By the moment the paper was written the database contained a user-item preference matrix for all users. It also contained information for each individual user a list of components based on their actual usage order. Recommender Agent: The tasks performed by the agent are: monitors the current user and updates the user preference; attempts to create the set of users similar to the active one by searching the user-item matrix produced by the usage history collector; finally recommends a set of ordered components to the current user. As a conclusion on the different retrieval schemas it is possible to say that each one of them have their strengths and drawbacks. The combination of the different schemas has shown an improvement in the retrieval process. Nevertheless research in this topic is still being carried out nowadays. The retrieval process is highly related to the description of the resources being retrieved, and software components do not have a specific notation to describe them. The retrieval scheme is tailored to the specific repository where the search takes place. 2.3 Model Driven Architecture - MDA MDA stands for Model Driven Architecture. It is a trademark of the Object Management Group (OMG). As stated in the MDA specification web site [64]: “The MDA is a new way of developing applications and writing specifications, based on a platform-independent model (PIM) of the application or specification's 20.” As inferred from the MDA specification, it is based in models. Some people define a model as a visual representation of a system. But on the other hand, many people refer to a set of IDL interfaces as a “CORBA object model.” Besides, an UML can be rendered into an XML document using the OMG’s XMI DTD for UML, such representation is not a visual artifact. Thus, a more precise definition is needed. In [71] the following definition is given: “A model is a formal specification of the function, structure and/or behavior of a system”. This definition has the following underlying concepts: “A specification is said to be formal when it is based on some well defined language that has well defined meaning associated with each of its constructs” [71]. As a matter of fact if a specification is not formal in this sense, is not a model. Consequently a diagram with boxes and lines and arrows that does not have behind it a definition of the meaning of a box and the meaning of a line and of an arrow is not a model, it is just an informal diagram. Under this model definition the subsequent are models examples “Source code is a model that has the salient characteristic that it can be executed by a machine. 
A set of IDL interfaces is a model that can be used with any CORBA implementation and that specifies the signature of operations and attributes of which the interfaces are composed. A UML-based specification is a model whose properties can be expressed visually or via an XML document” [71]. A PIM is a model of a software system that does not incorporate any implementation choice. It stands for Platform Independent Model. PIMs describe the system independently of the chosen implementation technology. On the other hand a PSM is a model of a software system that incorporates choices for certain implementation technology/technologies. It stands for Platform Specific Model. PSMs describe the system taking into account the chosen implementation technology. The aim of this approach is to let software development process concentrate in the specific domain it is trying to model. At the higher design level, the design process should only care about the software functionality. There must be a clear separation between the required functionality and the middleware platform on which such functionality is going to be implemented. In the MDA, middleware-specific models and implementations are secondary artifacts. A specification's PIM is the primary artifact. It defines one or more PSMs and sets of interface definition, each specifying how the base model is implemented on a different middleware platform. It separates the fundamental logic behind a specification from the specifics of the particular middleware that implements it. MDA is on the use of models in software development [70]. In order to accomplish that goal from an abstract model of the system a more concrete model should be generated. From that model in turn an even more concrete model can be generated until finally the source code is produced. Source code is considered to be the most concrete representation/model of the software system. Key to this process is that each generation step will be automated as far as possible. The ultimate MDA goal is to generate automatically a complete software system from a model with as less human work in the process as possible. A Model transformation is the process of converting one model to another model of the same system. Transformations can use different mixtures of manual and automatic transformation. There are 4 different transformation approaches: manual transformation, transforming a PIM that is prepared using a profile, transformation using patterns and markings, and automatic transformation [70]: • Manual transformation. When the design decision to make the transformation from PIM to PSM are made during the process of developing a design that conforms to engineering requirements on the implementation. The decisions are considered in the context of a specific implementation design. 21 The MDA adds value in two ways: there is an explicit distinction between a PIM and the transformed PSM, the transformation is recorded. • Transforming a PIM that is prepared using a profile. The PIM and the PSM are expressed using UML profiles. The transformation may involve marking the PIM using marks provided with the platform specific profile. The UML 2 profile extension mechanism may include the specification of operations; then transformation rules may be specified using operations, enabling the specification of a transformation by a UML profile. • Transformation using patterns and markings. This applies a pattern to the transformation. In order to apply the pattern elements from the PIM are marked. 
Those marked elements are transformed according to the pattern to produce the PSM. For instance a class marked in the PIM with a role from the pattern, once the transformation is applied can produce in the PSM the original class with some extra attributes and operations, new classes corresponding to other roles in the pattern and associations between those classes. • Automatic transformation. In some cases it is not necessary to provide to add marks or use data from additional profiles in order to be able to generate code. The decisions are implemented in tools, development processes, templates, program libraries and code generators. The PIM contains all the information necessary to produce computer program code. As a conclusion on The Model-Driven Architecture (MDA), it basically defines an approach to modeling. In the modeling process it separates the specification of system functionalities from the specification of its implementation on a specific technology platform. The MDA promotes an approach where the same model specifying system functionality can be realized on multiple platforms through auxiliary mapping standards, or through point mappings to specific platforms. Summary and Conclusions After describing the different schemes used to retrieve components, it can be inferred that each one of them can help to resolve different type of queries. For instance, faceted and classification using natural language will provide means to retrieve components based on external information provided as natural human language. As a consequence it can be also used by non technical users. On the other hand a scheme like signature matching provides a more deeply technical description and as such it can be thought more oriented to technical users. Anyways, researches are combining different schemes in order to take advantage of their strengths and diminish their weakness. Components retrieval schemes based on classified components have been evolving in the classification techniques, from simple faceted to ontology-based classification. Ontologies appear as a helping element in the classification and modeling of complex relationships. Ontology manager systems have been developed to create and manage ontologies. For my research I propose the use of an ontology to describe software components. The use of classification schemas has brought a proliferation of models because there is not a common description model for software components. That model is still an open issue. There is not a definitive answer to questions like: what kind of information will characterize a software component? Should this information be based on properties of software components? Can software components be generalized? How quality information will be assessed? Then, what can be done to tackle down the proliferation of models? In this research I propose as an answer to this question an integration of software component repositories. 22 3 The eCots Association eCots is the name for an inter-industrial association founded in January 2004 by Thales, EDF R&D and Bull. The association has specified a project to create an Intelligent Portal for Searching Components called IPSComp. The IPSComp project is in the specification phase and it will become a proposal in a European Integrated Project. The project aims at developing an open information portal for commercial ofthe-shelf (COTS) software and non software components, in which we deal with information about products, and possible between their users, or between users and producers. 
As expected from any industrial project, the main aim is economical: the project is addressed to provide its users with a maximum of quality-controlled information at the lowest possible price. This research is included as a first step for the definition of the functional architecture of the IPSComp project. The objective of this project is to use the potential offered by Internet portals to federate the community of users of commercial off-the-shelf software components. Thus giving them means of obtaining the information they need from COTS component producers, facilitating access to such information and supplementing it by pooling – through cooperative generation of content – the information on use that it possesses, by setting up a dedicated thematic portal, freely accessible on Internet. In [48] the authors have identified a set of elements that will need further discussion in order to achieve the project’s aim. Among other questions they formulate for instance, what kind of information will characterize a COTS product? Should this information be based on properties of COTS products? Are there many COTS products sharing similar properties? Can they be generalized? How quality of information will be assessed? The paper also identifies some key elements that have been group into three categories: management, development and knowledge base. The interactions between them are depicted in Figure 3-1. The key elements are: • Management: • Procedure to specify how the portal should be used. • Legal issues on the use of the portal. • Quality assessment procedures and measurements for software deliverables, ontology specifications, third-party information, and software and ontology development procedures and methodologies. • COTS versioning management, software configuration management, procedures for ontology and portal evolution. • Standard information to be supplied by vendors and/or private organizations. • Development • Recommender system (personalization and support). • Software specifications. • Ontology specifications. • Multilingual definitions for GUI and ontology specifications. • Knowledge-based Support • Taxonomy and classification. • Global ontologies (COTS and domain-oriented). • COTS component ontologies (COTS quality attributes, COTS implementations, etc). • Domain-oriented ontologies (I&C, E-business, Health Care, etc). 23 3-1 IPSComp System Architecture [48] The end user will have access to the system through a web browser. The knowledge-Based support which is the base of the system has three layers. The Ontology Standards layer will hold the set of tools used to define the ontologies included in the system. For instance, Web Ontology Language (OWL)2 can be used to explicitly formally describe an ontology3. The other two layers correspond to ontologies. One of them is the ontology that supports the set of items include in the system in this particular case software components. The other layer also contains ontologies. Software products are in contact with a wide range of domains. This layer provides ontologies for those domains. For instance, there can be an ontology for the health care domain, for the e-business domain, for the telecommunications domain, components in other domains, etc. Taking advantage of the knowledge-based module, supported by the different ontologies, the portal will also provide a Recommender System, which will be in charge of helping users in the identification and retrieval of software components that they may be interested in. 
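To make the role of the Ontology Standards layer more concrete, the fragment below sketches how a minimal component concept could be expressed formally in OWL from Java code. The use of the Apache Jena API and the example namespace are my own assumptions for illustration; the project does not prescribe a particular OWL toolkit.

import org.apache.jena.ontology.DatatypeProperty;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.XSD;

public class ComponentOntologySketch {
    public static void main(String[] args) {
        String ns = "http://www.example.org/ipscomp#";   // hypothetical namespace for the portal ontology
        OntModel model = ModelFactory.createOntologyModel();

        // Two concepts of a component ontology, formally described in OWL
        OntClass component = model.createClass(ns + "Component");
        OntClass method = model.createClass(ns + "Method");

        // A relationship between them and a simple datatype property
        ObjectProperty provides = model.createObjectProperty(ns + "providesMethod");
        provides.addDomain(component);
        provides.addRange(method);
        DatatypeProperty name = model.createDatatypeProperty(ns + "componentName");
        name.addDomain(component);
        name.addRange(XSD.xstring);

        // Serialize the ontology so that other layers of the portal (or other tools) can consume it
        model.write(System.out, "RDF/XML");
    }
}

A description of this kind is machine interpretable, which is exactly what the knowledge-based layers of the portal require in order to relate the component ontology to the domain-oriented ontologies.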
My contribution to the IPSComp is explained in the next chapter. 2 Web Ontology Language (OWL) is a revision of the DAML+OIL web ontology language. It has more facilities for expressing meaning and semantics than XML, RDF, and RDF-S, and thus OWL goes beyond these languages in its ability to represent machine interpretable content on the Web. (). An ontology language is required for the Semantic Web vision in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the Web 3 Accordingly to W3C an ontology is the representation of the meaning of terms in vocabularies and the relationships between those terms (). 24 4 Contribution My contribution on the IPSComp project is to create the component ontology specification for IPSComp (Section 4.1. Item {2} in Figure 4-1), produce the component ontology design for IPSComp (Section 4.2. Item {2} in Figure 4-1), implement the component ontology for IPSComp (Section 4.3. Item {1} in Figure 4-1) and provide a component repository integration (Section 4.4. Item {3} in Figure 4-1). The previous points aim to provide the means to achieve a qualified Recommender System for the IPSComp project (Item {4} in Figure 4-1). My contribution in the system architecture is red highlighted in Figure 4-1. 4-1 Contribution to the IPSComp Project 4.1 Component Ontology for IPSComp Specification Ontologies are emerging as a key solution to allow different applications to exchange and to reason about information in the system. Ontologies provide a mechanism to represent and store domain specific knowledge. An ontology usually refers to a set of concepts or terms that can describe some area of knowledge or build a representation of it. Ontologies provide a set of well defined, structured and agreed terms in order to disambiguate communication exchange between applications (software agents, programs, etc). An ontology based component retrieval method should be able to exploit the additional knowledge embedded in domain ontologies to augment or revise a user’s initial query. This use of ontologies to take into account the semantics of the application domain should result in greater query flexibility, augmentation and user satisfaction. 25 4.1.1 XCM Component Ontology The aim of this component Ontology presented in [28] is to provide (i) a standard for the definition of components that unifies the differences between different models (ii) a standard interface for component searching. For each component it defines 2 dimensions: features that are composed by the set of properties, methods and events; and design which describes how a component is constructed by using existing components. As a result XCM is able to hold information for each component regarding: • • • 26 General Information: i.e. component name, version, package, language, component model, domain, operating system, and publisher. Features: The set of features describes how the component interacts with other components. It is composed by properties, methods and events. • Property: Is the named attribute of the component. It is described by: • Syntax: pType: is the domain type; access: it can be readWrite, readOnly, writeOnly; Style: it can be simple, indexed, bound, constraint. • Specification: pName: is the property name; desc: holds the property description. • Introspection: writeMethod: is the method name to set the property value; readMethod: is the method name to get the property value. 
• Method: It holds the interfaces, provided (behavior that can be triggered to other objects) required (from the other components to complete its functionality). It is described by: • Syntax: returnType: the return domain type; paraType: ordered list of the parameters domain type; status: can be provided or required. • Specification: mName: the method name; desc: a textual description; pre: the pre-condition; post: the post-condition. • Event: It is the message used by a component to communicate with other. It is classified as published (a component publishes to its recipients to notify something has happened and an action must be taken) or consumed (an event that a component subscribes to in others components). It is described by: • Syntax: eType: the event type; delivery: the event delivery (unicast or multicast); status: published or consumed. • Introspection: addListenerMethod: method name that registers one or more listener components based on the event; removeListenerMethod: method name that removes listener components from the event; listenerType: the type of the listener component, represented by the listener interface, that are allow to register for the event; listenerMethods: set of one or more listener methods that the listener components registering for the event must implement. Each listener method is specified by: mName (the method name), returnType (the type of value returned from the method) and paraType (ordered list of parameter type required for the method). Design: it describes how to construct a composite component connecting pre-existing components. • Underlying Component: It is a component use to build up components It is described by: • Syntax: comp: the component domain type. • Specification: cid: the component instance level; desc: the component instance description; role: can be master, client or support. • Connection Oriented Composition: It describes how components are connected using events or pipe and filtering mechanisms. Components can be classified either as Event Components (that fire events) or Listener Components (that listen for events and subsequently trigger specified methods in a well-defined manner). This connection is described by: • Syntax: Event: the fired event. • Specification: rid eCompInstance: the label of an event source component; rid lCompInstance: the set of labels of event listener components; eAction: the event action defined under a fired event listener interface; lcomposition: the composition of methods: inv (the composition invariant), pre (the composition pre-condition), post (the composition post-condition). • Aggregation based Composition: It describes aggregation of components into higher level components. For the aggregation components can be classified as Container Component (that provide the containment for the containee components) or Containee Component (that is aggregated into a container component in the specified position). It is described by: • Specification: container: the label of the container component; containee: the label of a containee component; location: the location where the containee component is positioned in a container component. Figure 4-2 taken from [51] represents the XCM hierarchical component structure. 
Here a component is defined via (i) general information, (ii) features that contains the component’s set of properties, methods and events; (iii) design that encapsulates how a composite component is constructed from other components either by connection-oriented and/or aggregation-based compositions. This hierarchical structure can be represented as an XML document, while the general structure of the description model the XCM concepts - can be described as an XML schema, as proposed in [28]. Component General Information Property Feature Method Underlying Component Design Event Connection-oriented Composition Aggregation-based Composition Figure 4-2 XCM Hierarchical Component Structure [51] The XCM ontology provides a component ontology which gathers the information that is relevant for the IPSComp project component ontology. As a matter of fact the XCM ontology is taken as a base to create the component ontology for IPSComp. 4.1.2 Component Ontology for IPSComp Specification In order to create the Component Ontology for the IPSComp project I took as a base the XCM ontology and modified some concepts. Moreover I added to the Component Ontology for the IPSComp project quality attributes, the license concept, the price concept and the publisher description concept. The modifications applied to the XMC ontology to create the IPSComp ontology as well as the adding to it are described in the following sections. 4.1.2.1 Domain The General Information in the XCM ontology contains the domain concept, but it is represented as a String. This concept was change to become a multi value field. The aim behind this change is to avoid the problem presented in the faceted-based search scheme with the items that can not fit into one specific category or classification. As a matter of fact a component can be related or belong to different domains. 27 4.1.2.2 Price The price concept has been added to it. It will help to develop the marketing area of the IPSComp project. This concept has been added to the General Information description. It is remarkable to point out that the same component can have different prices depending on characteristics such as the number of licenses, or even the functionality provided by the component. 4.1.2.3 Quality Attributes I added to the IPSComp ontology a set of quality attributes. This is important to extend the component description. A software component in the IPSComp project scope will most probably be a black box software artifact (as stated in the component definition given in the introduction). Besides, the software components will be marketable and used by customers in different implementations at later times. The set of quality attributes will help in the identification and retrieval of software components. In [52], the authors identify that most of the software engineering community has been mainly focused on the functional aspects of components. As a consequence the quality and extra-functional attributes have been left aside. Nevertheless, it is worth it to pay attention to these factors because they can become a key point in any commercial evaluation. There are four main issues when considering quality and extra-functional attributes of software components. • • • • There are several proposed classifications regarding component’s quality attributes. But there is not a general consensus on the quality attributes that should be considered. There is a lack of information about quality attributes among the different component’s providers and vendors. 
There is an absence of metrics that could help evaluating quality attributes objectively. Finally the international standards provide very general quality models and guidelines, which are difficult to apply to certain domains such as Component Based Software Development (CBSD) and Components Off-The-Shelf (COTS). To overcome these issues Bertoa et al [52] propose a quality model for CBSD based on ISO 9126. The international standard ISO 9126 provide definitions and classifications of the quality characteristics of software products. In ISO 9126 a quality characteristic is a set of properties of a software product by which its quality can be described and evaluated. An attribute is a quality property to which a metric can be assigned. A metric is a procedure for examining a component to produce a single data. The quality model proposed by the authors defines a set of quality attributes and their associated metrics for the effective evaluation of COTS components. This approach is tailored to software components. Three considerations have been taken into account by Bertoa et al [52] in order to produce the quality model: • The moment at which a characteristic can be observed or measured, either at runtime (e.g. performance) or during the product life cycle (e.g. maintainability). • The target users of the model are software developers and software designers. • A component is considered as a black box software artifact, so even though the targets for this classification are software developers and software designers, the idea is that the specific implementation is hidden and can not be modified by them (This is according with the component definition the IPSComp project has taken). Basically three types of transformations were applied by Bertoa et al [52] to the original ISO 9126 to be tailored to software components. First of all, the Portability characteristic and the Fault Tolerance, Stability and Analyzability sub-characteristic disappeared. Second, two new sub-characteristics appeared 28 Compatibility and Complexity. Third, The Usability characteristics and the Learnability, Understandability and Operability sub-characteristics changed their meaning. The quality model characteristics for the modified ISO 9126 are briefly explained: • • • • • • Functionality: It tries to express the components ability to provide the required services. Its definition has not been changed. On the other hand the sub-characteristic Compatibility was added to the model, to indicate if former versions of the component are compatible with its current version. Reliability: It keeps the original meaning. The maturity sub-characteristic is used to measure the number of commercial versions and the time intervals between them. Furthermore, recoverability measures if the component is able to recover from failure and how it does it. Usability: This characteristic has completely changed its definition. The reason behind it is that the component’s end users are developers and application designers rather than regular end users. This characteristic measures the component’s ability to be used by the application developer during the construction of a software product. The Complexity sub-characteristic when integrating and using the component within a software product or system has been added. Efficiency: It keeps the original definition, which distinguishes between Time Behavior and Resource Behavior. Some people call this characteristic performance. 
Maintainability: It describes the characteristic of a software product to be modified. Even though on a black box component it is not possible to make modifications, the developer must adapt it, configure it, and tested to include it in a final application. As a consequence the Changeability and testability are sub-characteristic defined. Portability: This characteristic was eliminated, because for software components the ability of a product to be transferred from one environment to another must be intrinsic. The Table 4-1 extracted from [52] shows the quality attributes defined for software components and also a complete description of them can be found there. Characteristics Functionality Sub-Characteristic - Runtime Accuracy Security Reliability Usability Recoverability Efficiency Time behavior Resource behavior Maintainability Sub-Characteristic – Life Cycle Suitability Interoperability Compliance Compatibility Maturity Learnability Understandability Operability Complexity Changeability Testability Table 4-1 Quality Model for COTS components [52] In order to measure these characteristics the authors also propose specific metrics. Each quality attribute will have a specific metric associated to it. Those metrics are: • Presence: It identifies whether an attribute is present or not in a component. It is measured by a Boolean indicating if the attribute is present and a String, which states how the characteristic is implemented. • Time: It measures time intervals. It uses an integer indicating the absolute value and a String indicating the units. • Level: It is used to indicate the intensity in which an attribute is present it is described by an integer in a scale from 0 (very low), 1 (low), 2 (medium), 3(high) and 4 (very high). • Ratio: It is used to describe percentages (0 – 100). • Indexes: Are defined as derived measures calculated from basics attributes. For instance, the Complexity Ratio compares the number of configurable of the component with the number of its 29 provided interfaces. This section describes the quality attributes and the proposed metrics for quality attributes taken from [52]. I think the quality attributes are necessary to complete the IPSComp component ontology as stated at the beginning of this section. I took this research because it is based on a standard ISO 9126, it is tailored to software components and it proposes the metrics for the quality attributes. Nevertheless, concerning to the quality attributes these are some issues I want to point out: • • The characteristic Functionality, sub-characteristic Suitability tries to measure how well the component fits the user requirements. It is obtained by dividing the number of user required interfaces by the total number of interfaces provided by the component. As this attribute is directly related with the end user needs, the component provider can not measure it. So it is up to the end user to provide this metric. But this attribute will be dependent not only on the component but also on the application on which the component is being used, and even worst, on the developer needs. There is not a standard, so an application designer might be looking for a component that performs a wide set of interfaces. It is a matter of how the application designer defines the set of services he wants to obtain from a component. 
As a consequence even though the quality attribute is clear and the operation to be performed in order to obtain the value is also clear and easy to perform, the value depends on the user requirements for the specific application. For the characteristic Usability, sub-characteristic Learnability, which tries to measure the time and effort needed to master tasks such as usage, configuration, parameterization, or administration of the component. This measures is provided by the component provider but should be validated by the end user. This is a really subjective metric, because it depends on the knowledge and skills from the user who is working with the component. As a consequence some metrics can become really subjective values, so I think it is necessary to monitor them in order to determine the accuracy of its value. As far as metrics are concern, in [52] each type of metric is defined by certain attributes, for instance Presence is measured by a Boolean indicating if the attribute is present and a String, which states how the characteristic is implemented; meanwhile Time uses an integer indicating the elapsed time and a String indicating the units. The Presence metric does not have units, the Time metric does not have the feature as the Presence metric does. In order to generalize the metric concept for the IPSComp component Ontology, I change the definition of the metric to be represented by three attributes: • • • Feature which stores the metric name or characteristic to be measured. It might store a relevant value for the metric, for instance in a presence metric the feature stores how a particular attribute is incorporated by the component. In the security case it could have a value “SSL” which means that the security for the component is implemented using SSL certificates. It can be seen as a qualitative value. Value which is the amount of the feature being measured. This can be a number, a Boolean, a Scale system, etc. It can be seen as a quantitative value. For instance if the metric is a time, it can be a number representing the amount of elapsed time. Unit which is the unit used in the metric. The idea behind these three attributes defining a metric is that new metrics can be incorporated to the IPSComp component ontology. Furthermore it provides a context to define concepts to include in the ontological analysis of the component description. For instance the feature defined by the metric can be included in as an ontological concept that can be related to other concepts. On the other hand, the unit concept will fit in the context of a taxonomy to be able to perform comparisons. This standardization facilitates the creation of a grammar for the metrics. 30 Besides I made the following changes to the Time and the Number metrics: • • Time: It will be measured by a Float instead of an integer. This value represents the elapsed time absolute value. Number: In [52] they present the integer metric for some quality attributes, which is only a number. I represent this concept with the number metric, but it is important to remark that this metric has the other 2 attributes addressed to provide context to the metric. Actually the time metric is a specialization of this metric, that has been defined independently and I kept this definition because I consider time is a specific domain, which it is worth it to be handle apart. Finally, after those remarks about quality attributes and changes to the metrics I added the quality model IPSComp component description. 
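As a small illustration of the three-attribute metric, the snippet below populates a Time metric for a hypothetical response-time measurement. It relies on the Metric and Time classes whose Java implementation is given later in section 4.3.1; the concrete feature text, value and unit are invented for the example.

public class ResponseTimeMetricExample {
    public static void main(String[] args) {
        // Time fixes feature and unit as StringValue and the value as FloatValue (see section 4.3.1)
        Metric responseTime = new Time();
        responseTime.setFeature("average response time under nominal load"); // qualitative context (example wording)
        responseTime.setValue("12.5");  // quantitative value, parsed into a float by the underlying FloatValue
        responseTime.setUnit("ms");     // the unit places the value in a taxonomy and enables later comparisons
        // The populated metric would then be attached to a quality attribute such as ResponseTime
    }
}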
In the scope of the whole project it is necessary to arrive to some standards, which will be adopted for the different actors involved in the project. This quality model has been created from the ISO 9126 standard, tailored for software components what makes it a good starting point to achieve a well accepted norm. Furthermore, the quality model does not define only quality attributes, but also the metric applied to each one of them. On the other hand, the aim behind assigning quality attributes for a software component is to provide a set of extra-functional attributes for it. These set of properties must help in the retrieval and evaluation of software components. 4.1.2.4 License Because the IPSComp project belongs to a commercial initiative, there is information that should be included in the IPSComp ontology in order to facilitate the commercialization of components, and to reduce legal issues. The concept license is added to the component ontology, it will provide a name and the description of the license. A component can have several licenses associated to it. “A software license is a type of proprietary or gratuitous license as well as a memorandum of contract between a producer and a user of computer software — sometimes called an End User License Agreement (EULA) — that specifies the perimeters of the permission granted by the owner to the user”4. 4.1.2.5 Publisher Description The information stored by this concept is addressed to provide software providers’ information. The component producer can be described, in order to gather more information that can help users in the retrieval process, but in gender in a marketable place as the IPSComp project is intended to be this information offers additional value to customers, helping identifying producers. 4.1.2.6 Specialization Scenarios Component specialization is a technique presented in [38]. The idea is to give to the component producer, who has access to the details of the implementation, the means to identify helpful specialization opportunities and to publish them as part of the component interface. These are seen as specialization scenarios. The objective of a component producer is to provide software components applicable to the widest possible range of context, having in mind that maximizing reuse, minimizes use. By analyzing the code the producer can offer to the consumer, specialization scenarios in order to provide more efficient alternatives to the generic version of the component. The scenarios are only written in terms of services specified in the port interfaces, because it is the only information available at assembly time. These specialization scenarios are defined providing an extra annotation to the method’s signature in order to indicate whether 4 This definition has been taken from 31 the return type and parameters are considered static or dynamic. A complete description of the research can be found in [38]. This notation on the interfaces has been included in the component ontology for IPSComp. The inclusion of this element in the component description pretends to join the IPSComp project to a PhD research [38]. It is addressed to a specific component repository, which differs from the commercial standard. It has been included to show that tailoring the proposed ontology will not have an impact over the model transformations that I will be explaining further down in the document. It will only add a set of methods to the proposed API. 
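To give a flavour of what such an annotated signature could look like, the sketch below marks the return type and each parameter of a method as static or dynamic. The annotation and the ImageCodec interface are purely illustrative assumptions of mine; the actual notation for specialization scenarios is the one defined in [38].

// Hypothetical annotation; [38] defines the real notation for specialization scenarios
@interface SpecializationScenario {
    Binding returnType();
    Binding[] parameters();
}
enum Binding { STATIC, DYNAMIC }

interface ImageCodec {  // hypothetical component port interface
    // Scenario published by the producer: the format parameter is expected to be fixed at assembly
    // time, so a specialized (more efficient) variant of the generic encode service can be derived.
    @SpecializationScenario(returnType = Binding.DYNAMIC, parameters = { Binding.DYNAMIC, Binding.STATIC })
    byte[] encode(byte[] rawImage, String format);
}

Again, the concrete syntax above is only one possible rendering; what the component description records is simply the static/dynamic marking attached to each element of the signature.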
It shows the flexibility to include different component types into the component description. Further studies should be performed in order to include different information until the component description standard is reached. After creating the IPSComp ontology description taking as a base the XCM ontology, modifying it and adding concepts it is worth to show the result in the hierarchical component structure to highlight the differences between the IPSComp ontology and the XCM ontology. Now a component is defined not only by General Information, Feature and Design, but it has been added the Non-Functional Characteristics as shown in Figure 4-3 with the green colored element. The other changes are perceivable at General Information level where the domain concept (Section 4.1.2.1) was modified, the domain price (Section 4.1.2.2), license (section 4.1.2.4) and the publisher description (section 4.1.2.5) were added to the IPSComp ontology. It is shown in Figure 4-3 with the green diagonal lines. Finally at the method level the Specialization Scenarios were added. It is depicted in Figure 4-3 with the green vertical lines. Component General Information Property Feature Method Underlying Component Design Non-Functional Characteristics Event Connection-oriented Composition Aggregation-based Composition Figure 4-3 IPSComp Hierarchical Component Structure The IPSComp ontology provides means to describe software component in a syntactic, semantic and behavioral way. For instance, the syntactic definition can be seen in the methods signature representation. The methods represent the set of interfaces the software component offers. The signature of a component interface is a syntactic description. It is necessary to add constraints regarding their use. It can be achieved by a semantic description. The IPSComp ontology description is able to hold the method’s precondition and post-condition, this allows defining some semantic information. As far as the behavioral description is concerned the idea is to store this information in natural language in the description fields that belongs to the concepts (Method, Property, Component) in the IPSComp ontology. Additionally some of the non-functional characteristics (quality attributes) can store behavioral information (e.g. response time can be seen as a behavioral characteristic). 32 4.2 Component Ontology for IPSComp Design The following UML class diagram5 represents the component description ontology for IPSComp detailed in the previous section. Because the complete model does not fit properly in the page I have selected some classes that will allow illustrating the main ideas, the complete UML class diagram can be found in Appendix A - IPSComp Ontology UML Class Diagram. Figure 4-4 models the IPSComp component ontology. A component has general information and a list of quality attributes, which are normal associations. Besides the Component have aggregation associations with the Method class, which represents the component’s interfaces; the Property class, which represents the component’s state; and the events, which are used to model how a component communicates with other components. On the other hand IPSComp ontology also handles the component’s design (Figure 4-3). It has 2 classes to accomplish this task. Each class represents the way a component can be composed by other components. These classes are the AggregationBased and ConnectionOriented (Figure 4-4). 5 The UML class diagrams were depicted using ‘Poseidon for UML’ (). 
[Figure 4-4 shows the UML class diagram of the IPSComp component ontology: the Component class is associated with GeneralInfo, Publisher and QualityAttribute, aggregates Property, Method (with its Pre, Post and Scenario collaborators) and Event, refers to Type for return and parameter types, and is composed through the design classes that model aggregation-based and connection-oriented composition.]
Figure 4-4 IPSComp Component Ontology UML Class Diagram
In order to model the Metric concept with the three attributes explained in section 4.1.2.3, I took the following fact into consideration: the value attribute may have different data types in different metrics; for instance, in a Presence metric it is stored as a Boolean, whereas in a Time metric it is a float. To handle this behavior, and also to prepare the ground for a tool for grammatical metric comparison, an interface IValue has been created. This interface defines two abstract methods to manage values (one to get the value, the other to set it). A concrete class dealing with a specific data type has an attribute of that data type and must implement the IValue interface. Having explained the IValue concept, let us consider the Metric concept. In order to model metrics, an abstract class Metric has been created. The three attributes defining a Metric (feature, value and unit) are of type IValue. This abstract class provides three methods, one to set each attribute from a String, taking advantage of the fact that all the class attributes implement the IValue interface. The UML class diagram supporting this model is shown in Figure 4-5. There is a metric factory class, MetricFactory, responsible for creating the concrete Metric classes.
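The factory itself is not reproduced in the text, so the following is a minimal sketch of how MetricFactory could resolve a metric name to a concrete Metric, assuming the classes of Figure 4-5 live in a metric package. Only the getMetric(String) signature comes from the design; the switch over plain strings is a simplification of the factory-plus-enumeration combination described in section 4.3.1.

package metric;  // assumed package, following Figure 4-5

public class MetricFactory {
    // Signature taken from Figure 4-5; the resolution strategy below is an assumption
    public Metric getMetric(String metric) {
        switch (metric.toLowerCase()) {
            case "presence": return new Presence();
            case "time":     return new Time();
            case "level":    return new Level();
            case "ratio":    return new Ratio();
            case "number":   return new Number();  // the package-local Number class, not java.lang.Number
            default: throw new IllegalArgumentException("Unknown metric: " + metric);
        }
    }
}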
[Figure 4-5 shows the UML class diagram of the metric concept: the abstract Metric class with its feature, value and unit attributes of type IValue; the concrete metrics Number, Presence, Ratio, Time and Level; the MetricFactory with its getMetric(String) operation; and the IValue implementations FloatValue, IntValue, StringValue and BooleanValue.]
Figure 4-5 IPSComp Metric Concept UML Class Diagram
Turning now to the quality attribute, in order to model it an abstract class QualityAttribute has been created. This class has four attributes. Three of them correspond to the ISO 9126 classification: they store the quality attribute characteristic, the sub-characteristic and the moment at which the quality attribute can be measured (runtime or life cycle), which were explained in section 4.1.2.3. The fourth attribute is of type Metric; it represents the metric that can be applied to the quality attribute.
To model a concrete quality attribute it is necessary to create a new class that extends the QualityAttribute abstract class. Figure 4-6 depicts the UML class diagram that models the quality concept for the IPSComp ontology. In the picture only a few quality attributes are shown, but all of them work in the same way. Some classes have been added to model lists of values using the enumeration pattern. These classes hold values for the quality attribute characteristics, sub-characteristics and measurement moment, among others. A factory class is also modeled in order to create concrete quality attribute classes.
[Figure 4-6 shows the UML class diagram of the quality attribute concept: the abstract QualityAttribute class with its ISO 9126 classification and Metric attributes; a sample of concrete quality attributes (Precision, Throughput, ErrorHandling, Serializable, ComputationalAccuracy, Auditability, Capacity, Persistent, ResponseTime, Controllability, DataEncription, Memory, Disk, Transactional); the QualityAttributeFactoryClass with its getQualityAttribute(String) operation; and the supporting enumerations MeasureAt, QualityAttributeISO9126 and SubCharacteristicISO9126.]
Figure 4-6 IPSComp Quality Attribute Concept UML Class Diagram
The UML class diagrams presented in this section model the IPSComp ontology. The complete UML class diagram can be found in Appendix A - IPSComp Ontology UML Class Diagram.
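The quality-attribute factory of Figure 4-6 can be sketched in a similar spirit. Resolving the concrete class by reflection, as below, is only one plausible implementation (and anticipates the reflective machinery used for XML loading in section 4.3.2); the design fixes no more than the getQualityAttribute(String) signature, and the qualityAttribute package name is an assumption.

public class QualityAttributeFactoryClass {
    public QualityAttribute getQualityAttribute(String className) {
        try {
            String fqcn = "qualityAttribute." + className;  // e.g. "qualityAttribute.Auditability"
            return (QualityAttribute) Class.forName(fqcn).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Unknown quality attribute: " + className, e);
        }
    }
}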
4.3 Component Ontology for IPSComp Implementation In this section three main subjects are explained. First of all, based on the UML model for the IPSComp ontology presented in the previous section, I implemented the java code for it (Section 4.3.1). Second I implemented a Java code generation from a XML file to load component ontology into the java implementation (Section 4.3.2). Finally, based on the component ontology for IPSComp Ontology design presented in the previous section I implemented IPSComp ontology in PLIB [30] which is an ontology manager system based on PLIB specification as explained in section 2.1.4. 4.3.1 Java Code Implementation The complete UML class diagram (Shown in Appendix A - IPSComp Ontology UML Class Diagram) that models the IPSComp ontology was implemented in Java. Based on the IPSComp ontology design presented in section 4.2 the considerations I took for the implementation are explained in this section. Regarding the metric concept and I stated that it is defined by three attributes: feature, value and unit (see section 4.1.2.3). Furthermore, I noted that each attribute might have different data types among different metrics, for instance for a Presence metric the value attribute is as a Boolean data type, meanwhile in a Time metric the value attribute is a float data type. As shown in Figure 4-5 model I implemented the IValue interface to handle different data types for the metric attributes. This interface defines two abstract methods to manage values. • • public abstract void setValue (String value). In the classes that implement the IValue interface, this method must set the String that receives as parameter as the value for a specific data type or object (float, Integer, String, etc). public abstract String getStringValue (). In the classes that implement the IValue interface, this method must return the value as a String data type. To implement a concrete class to deal with a specific data type has an attribute, which data type is equal to the specific data type the class is willing to handle and must implement the IValue interface. As shown in Table 4-2, the class FloatValue handles the float data type. It has an attribute of type float and the implementation or the methods defined in the IValue interface. The public void setValue(String value) method assigns to the value attribute the float representation of the String that receives as parameter. Meanwhile the public String getStringValue() returns the String representation of the value attribute which is a float. public class FloatValue implements IValue { private float value; public void setValue(String value) { Float aFloat; aFloat = Float.valueOf(value); this.value = aFloat.floatValue(); } public String getStringValue() { return String.valueOf(this.value); } 37 } Table 4-2 Implementation of the IValue interface Afterwards, each concrete class implementing a particular data type should also provide methods to compare values. Each class must be independent and should be responsible of knowing how to be compared. The implementation of such behavior will provide the means to create the metric grammar. After I implemented the IValue interface, I implemented the Metric concept. In order to implement metrics I created an abstract class Metric. The three attributes defining a Metric class feature, value and unit are of type IValue. This abstract class provides three methods one to handle the setting of each attribute from a String (as shown in the snipped code in Table 4-3). 
It takes advantage of the fact that all the class attributes implement the IValue interface.

public abstract class Metric {
    protected IValue value;
    protected IValue unit;
    protected IValue feature;

    public void setFeature(String feature) {
        this.feature.setValue(feature);
    }
    public void setUnit(String unit) {
        this.unit.setValue(unit);
    }
    public void setValue(String value) {
        this.value.setValue(value);
    }
}
Table 4-3 Abstract Class Metric

The class that implements a concrete metric must extend the abstract Metric class. In the constructor of the concrete class, the specific value type for each metric attribute, which is a class implementing the IValue interface, must be specified. For instance, to implement the Time metric the feature attribute must be a StringValue, the value attribute must be a FloatValue and the unit attribute must be a StringValue, as observed in Table 4-4.

public class Time extends Metric {
    public Time() {
        this.feature = new StringValue();
        this.value = new FloatValue();
        this.unit = new StringValue();
    }
}
Table 4-4 Concrete Metric Class

Additionally, I added to the metric concept implementation a Factory pattern combined with an Enumeration pattern. The enumeration holds the different metric names defined for the component domain (presence, level, time, ratio and number, which are explained in section 4.1.2.3). The MetricFactory class receives a metric name and returns an instance of the concrete class representing the desired metric.
Turning now to the quality attribute implementation, I created an abstract class QualityAttribute. This class has four attributes (Table 4-5). Three of them correspond to the ISO 9126 classification: they store the quality attribute characteristic, the sub-characteristic and the moment at which the quality attribute can be measured (runtime or life cycle). The fourth attribute is of type Metric; it represents the metric that can be applied to the quality attribute.

public abstract class QualityAttribute {
    protected QualityAttributeISO9126 qualityAttribute;
    protected MeasureAt measurableAt;
    protected SubCharacteristicISO9126 subCharacteristic;
    protected Metric metric;
    ...
}
Table 4-5 Abstract Class QualityAttribute

In order to implement a concrete quality attribute it is necessary to create a new class that extends the abstract class QualityAttribute. In the constructor of the concrete class the specific Metric has to be initialized, as shown in Table 4-6, where the Auditability quality attribute uses a Presence metric to be measured.

public class Auditability extends QualityAttribute {
    public Auditability() {
        this.qualityAttribute = QualityAttributeISO9126.FUNCTIONALITY;
        this.subCharacteristic = SubCharacteristicISO9126.SECURITY;
        this.metric = new Presence();
        this.determineMeasurableAt();
    }
}
Table 4-6 Concrete Quality Attribute Class

I added some classes to the implementation to define lists of values following the enumeration pattern. These classes hold values for the quality attribute characteristics, sub-characteristics and measurement moment, among others. A factory class pattern is implemented in order to create concrete quality attribute classes. A concrete class for each quality attribute must be implemented, but the structure is simple: it has a constructor that initializes its ISO 9126 classification as well as the concrete metric that has been defined to handle the quality attribute measure. I have provided the Java implementation for the IPSComp ontology.
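As noted in the discussion of IValue, each concrete value class should eventually know how to compare itself with another value of the same type in order to support the metric grammar. A minimal sketch of that extension, assuming FloatValue is simply made Comparable, is shown below; it is not part of the current implementation.

public class FloatValue implements IValue, Comparable<FloatValue> {
    private float value;

    public void setValue(String value) {
        this.value = Float.parseFloat(value);   // same behaviour as Table 4-2, written more compactly
    }

    public String getStringValue() {
        return String.valueOf(this.value);
    }

    public int compareTo(FloatValue other) {
        return Float.compare(this.value, other.value);  // enables ordering of time or ratio metrics
    }
}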
This implementation can be extended to include a grammar for the quality attributes, as well as to include new quality attributes when needed. It is necessary to monitor the IPSComp ontology to incorporate changes as the domain evolves.
4.3.2 IPSComp Java Code Generation from an XML file
The IPSComp ontology has an XML representation. In order to load a component, which is an instance of this ontology represented by an XML file, I am using the JDOM parser. I implemented the module responsible for loading a component description into the Java implementation of the ontology model using a visitor pattern. This pattern allows creating different visitors depending on the task that must be performed on the elements of the collection. I took into account two extra considerations:
• The visitor pattern is able to act in a specific way according to the element it is visiting. In this case, however, when the JDOM parser traverses the XML file all the nodes are of the same type, so based on the XML tag that is being visited a class is created. This class represents an explicit XML tag. To implement this class instantiation process I implemented a factory pattern. It works as follows: a Java class has been created for each XML tag that is worth processing. The XMLFactory class receives the XML tag name and, according to it, creates a class that handles the component description instantiation, retrieving the right values from the XML file.
• To allow easy modification of the XML file, and also to handle different XML files, the visitor pattern is implemented using reflection. The class MethodFinder provides the means to handle the reflection in order to bind to the correct visit method at run time.
A part of the UML representation of the model is shown in Figure 4-7. This figure only depicts one specific visitor and shows only some of the classes that implement a few XML tags, but the whole idea can be inferred from there because the mechanism behind the other tags is exactly the same.
[Figure 4-7 shows part of the xmlParser package: the IVisitor and IXMLTag interfaces, the JDomLoadParserTraversal class that processes the XML file, the abstract BaseXMLTag with concrete tag classes such as XMLGeneralInfoTag and XMLScenarioTag, the XMLTagFactory that creates them, the VisitorLoadModel visitor that populates the component model, and the MethodFinder helper that resolves the correct visit method through reflection.]
Figure 4-7 Visitor Pattern with Reflection to load an XML file – UML Class Diagram
4.3.3 IPSComp Ontology Implementation in PLIB
Once the IPSComp ontology has been defined, it has been included into PLIB. The idea behind this step was to test the inclusion of the IPSComp ontology into an ontology manager system, and to connect the ontology manager system with the Java implementation. PLIB provides a Java API to interact with the ontologies defined in it.
There was not a specific reason to use PLIB, actually it was available, but later in the project we realized that it was not completely tested so we could introduce the IPSComp ontology in PLIB but could no finalize the test. This section describes the details of the test. The data model used to describe ontologies in PLIB is an OO data model known as the PLIB data model. 40 Thus, and in order to comply with the 6-tuple ontology based definition (Section 2.1.4), the first step is to define a set of classes (C) all gathered in a classification hierarchy. For that purpose (and according to the underlying PLIB data model) PLIBEditor makes possible to describe each class on the basis of four classes’ categories: - Item: It enables the modeling of any type of entity of the application domain that corresponds to an autonomous and stand-alone abstraction as a class. It is a super type intended to be sub-typed to define the nature of the objects. Nevertheless, it is not defined as ABSTRACT to enable its instantiation to model the classes that are super-classes of two classes corresponding to two different kinds of objects (e.g., components and materials). - Component: It captures the dictionary description of a class of items that represent, at some level of abstraction, parts or components. A property of which the data type is defined by a component_class stands for the aggregation relationship. - Material: It captures the dictionary description of a class of materials. Materials are used to define properties of parts or components. Materials are associated with an idea of amount, they may not be counted. A property of which the data type is defined by a material_class captures that some (part of a) product is made of, or contains, some material. - Feature: It captures the dictionary description of items that represent one aspect of another item and that are themselves associated with properties. All these definitions come from ISO13584-42 and ISO13584-24. In the next ISO13584 release, it is expected that all this stuff will be simplified and only one category will remain items. PLIB tools will be updated consequently. In PLIB Editor, these categories appear explicitly through category containers. For the component ontology description, it was necessary only to focus on the "items" category, and then deploy a classification hierarchy under this particular category from the UML class diagram that defines the IPSComp ontology. While introducing the ontology in PLIB it was necessary to define lists. This fact will be illustrated with an example. The component IPSComp ontology has a class Type that represents the user defined data types and primitive data types (Table 4-7). On the other hand the class Method models an interface belonging to a component. This method has as internal collaborators (attributes) a return type, a method name and a list of parameters. These attributes will represent the method signature (Table 4-7). The attribute parametersType is a list of instances of the Type class. PLIB Editor handles this property data type as aggregate data types. Unfortunately the aggregate type definition has not been tested very intensively, but at the minimum, it is possible to find all the instances required for describing an aggregate structure in the physical file. 
Class Type { String dataTypeName; } Class Method{ Type returnType; String methodName; ArrayList parametersType; } Table 4-7 Collections in the Component Ontology Figure 4-8 shows the inclusion of the IPSComp ontology in PLIB, and it illustrates the example explained in the previous paragraphs. The Left panel shows the concepts included in the ontology. The specific 41 example is related with the Method concept (It is red highlighted in the picture). The list of properties for the method includes a paraType which is the list of parameters (It is green highlighted in the picture). It shows it is a list, and the element type the list contains is another concept in the ontology, the Type concept (It is yellow highlighted in the picture). Figure 4-8 PLIB Editor IPSComp Ontology – Screen Shot Once the IPSComp ontology has been included the next step is to describe instances of the given ontology. PLIB Editor is currently being improved (not sure that the aggregate data type is supported at the content level). As soon as a new stable release (maybe with some restrictions) will be available, this can be tested. Concluding, it was possible to include the IPSComp ontology into an ontology management tool in this case PLIB it indicates that the ontology can be handle by the system. Because it was not possible to instantiate an instance of the IPSComp ontology, the test regarding the connection through its Java API with the Java implementation could not be performed. Nevertheless, PLIB allows the integration of already defined ontologies; the idea is that in such system the related domain ontologies should be specified in order to accomplish the IPSComp specification goals that pursues to relate the component ontology with other domain ontologies to create a market place for software components. 4.4 Integrating Software Component Repositories It is important to highlight that nowadays there are some component repositories already developed available through the web, and in the IPSComp project scope it will be really helpful to find means to incorporate or interact with those existing repositories and not only develop and post new components in our IPSComp platform. This issue will be undertaken with a Model Driven Architecture (MDA) perspective. Before going into the detail of the software component repositories integration, it is necessary to provide 42 some terminology that is used throughout the chapter. • • • • • • • • • • • • • • • • Component Repository: It is a repository that stores software components. In such repository each software component has a description associated to it. There are Vendor Repositories (Item {2} Figure 4-9) and an IPSComp Repository (Item {1} Figure 4-9). All of them are compliant in the definition. IPSComp Component Description (Item {3} Figure 4-9): It is the textual description of a component in the IPSComp repository. The IPSComp Component Description is compliant to the IPSComp ontology described in section 4.1. Vendor Component Description (Item {4} Figure 4-9): It is the textual description that a vendor provides for any component included in the vendor repository. IPSComp Component Description Meta-Model (Item {5} Figure 4-9): It is the UML model representing the IPSComp ontology. Essence Component Description (Item {17} Figure 4-9): It is the textual component description for a component essence description. Vendor Component Description Meta-Model (Item {6} Figure 4-9): It is the UML model for the Vendor Component Description. 
• Essence Component Description Meta-Model (Item {15} Figure 4-9): It is the UML model for the Essence Component Description.
• IPSComp Java Implementation (Item {7} Figure 4-9): It is the Java program that implements the representation of an IPSComp compliant component description, based on the IPSComp Component Description Meta-Model.
• Vendor Java Implementation (Item {8} Figure 4-9): It is the Java program that implements the representation of a compliant Vendor Component Description Meta-Model.
• Essence Java Implementation (Item {16} Figure 4-9): It is the Java program that implements the representation of a compliant Essence Component Description Meta-Model.
• IPSComp Transformation API (Item {9} Figure 4-9): It is the Java implementation that provides the means (methods) to create and populate an IPSComp Component Description compliant with the IPSComp ontology. It is a jar file that a vendor has to import in the Vendor Framework in order to integrate with the IPSComp Framework.
• Vendor Framework (Item {10} Figure 4-9): It is the Java program that implements the representation of the Vendor Component Description Meta-Model, adding the IPSComp Transformation API.
• IPSComp Framework (Item {11} Figure 4-9): It is the Java program that implements the representation of the IPSComp Component Description Meta-Model, adding the IPSComp Transformation API.
• IPSComp XML Model (Item {12} Figure 4-9): It is the .xsd file which defines the XML Schema Definition for the IPSComp ontology.
• IPSComp XML Component Description (Item {13} Figure 4-9): It is an instance of the IPSComp XML Model. This means that the IPSComp XML Component Description is an XML file that conforms to the schema defined by the IPSComp XML Model. (According to the W3C, the purpose of a schema is to define a class of XML documents, and the term "instance document" is often used to describe an XML document that conforms to a particular schema [74].)
• IPSComp XML file parser (Item {14} Figure 4-9): It is the executable Java program that allows transforming an IPSComp XML Component Description into an IPSComp Java Implementation in the IPSComp Framework (Section 4.3.2).
Additionally, in Figure 4-9 there are three differently colored arrows; the colors mean:
Blue: from the element at the origin it is possible to instantiate an element at the end.
Red: from the element at the origin it is possible to implement (code generation) an element at the end.
Green: it is possible to apply a model transformation from the origin to the end.
Finally, the picture is divided into two layers. The base level holds concrete information about the components stored in the different repositories; the elements in that layer represent real component descriptions, and it holds the model. The upper level contains the Meta-Models, which are the models for the models present in the information layer.
[Figure 4-9 depicts the elements listed above arranged in these two layers (meta-information and information) and connects them with the instantiation, code generation and model transformation arrows.]
Figure 4-9 Elements for Software Component Repository Integration
The terminology defined in the previous paragraphs and depicted in Figure 4-9 is going to be used throughout the chapter. I will provide a specific example of some elements, intended to clarify the explanation. In Figure 4-10 there are text boxes containing a number and a description.
The number is the same number used in Figure 4-9 for the element; the description is a concrete example. For instance, items {5} (Appendix A - IPSComp Ontology UML Class Diagram), {6} (Figure 4-12, Figure 4-13, Figure 4-14) and {15} (Figure 4-15) are UML diagrams representing the Meta-Models. Item {2} is a web Vendor Repository and item {4} is a Vendor Component Description in a commercial web site (Figure 4-11). Item {12} is the .xsd file schema that defines the Meta-Model (Table 4-9 or Appendix F - IPSComp XML Meta-Model - XSD Schema). Item {13} is an xml file instance of the xsd schema (Table 4-10 or Appendix G - IPSComp Component Description – XML Example).

Figure 4-10 Elements Examples for Software Component Repository Integration (the two-layer diagram of Figure 4-9 annotated with the concrete artifacts listed above: UML models, the .xsd file, the XML file, Eclipse java projects and the vendor web site)

public class Component {
    private String id;
    private String name;
    private String desc;
    private GeneralInfo generalInfo;
    private Role role;
    private String comp;
    private List properties;
    private List events;
    private List methods;
    private List qualityAttributes;

    public Component(String id, String name, String desc, Role role, String comp, GeneralInfo generalInfo) {
        …………
    }
Table 4-8 IPSComp Java Implementation Example Item {7}

Figure 4-11 Vendor Repository {2} – Vendor Component Description {4} Example

...
Table 4-9 IPSComp XML Meta-Model - XSD Schema Example {12}

<componentSpecification xmlns:
  <id>111111</id>
  <name>javax.composite.SliderFieldPanel</name>
  <generalInfo>
    <version>Jaaava</version>
    <package>javax.composite</package>
    <language>Java</language>
    <model>JavaBean</model>
    <domain>Interface</domain>
    <domain>MVC</domain>
    <os>Windows</os>
    <os>Linux</os>
  ...
Table 4-10 IPSComp Component Description – XML Example {13}

As stated in the MDA specification web site [64]: “The MDA is a new way of developing applications and writing specifications, based on a platform-independent model (PIM) of the application or specification's.” The aim pursued by using the MDA view is to be able to accomplish a transformation from one or more selected existing Vendor Java Implementations into our IPSComp Java Implementation, which is IPSComp ontology compliant. The Vendor Component Description Meta-Model describes existing components from Vendor Repositories. The IPSComp Component Description Meta-Model is our component description ontology presented in section 4.2.

A Walkthrough of the Research Process: First of all, in order to apply the MDA transformation it is necessary to obtain the different meta-models. The IPSComp ontology has a UML class diagram representation (section 4.2), the IPSComp Component Description Meta-Model. Then it is necessary to find out the Vendor Component Description Meta-Models. I accomplished this task by browsing the web. A couple of web sites which sell software components (Vendor Repositories) were selected. These web sites provide neither the Vendor Component Description Meta-Model nor a UML model, so I created a Vendor Component Description Meta-Model that supports the component description in the different web sites (Vendor Repositories). This task was performed for three Vendor Repositories: Figure 4-12, Figure 4-13 and Figure 4-14.
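To make the inferred meta-models concrete, the sketch below shows how the Component class of the first Vendor Repository (Figure 4-12) could look as a Vendor Java Implementation. It is an illustration only: the vendor's real code is not available, so the class name and the accessor names are assumptions, chosen to match the calls that appear later in Table 4-11 and Table 4-12.

import java.util.List;

// Illustrative Vendor Java Implementation inferred from Figure 4-12 (assumed, not vendor code).
public class ComponentSource {
    private int id;
    private String name;
    private String componentAbstract;   // "abstract" is a reserved word in Java
    private String componentType;       // underlying architecture: EJB, CORBA, .NET
    private String architecture;        // e.g. 32 bits
    private List builtUsing;            // list of String: languages / IDEs
    private List operatingSystem;       // list of String
    private float diskSpace;
    private String diskSpaceMeasure;
    private float memory;
    private String memoryMeasure;

    public int getId() { return id; }
    public String getName() { return name; }
    public String getComponentAbstract() { return componentAbstract; }
    public String getComponentType() { return componentType; }
    public String getArchitecture() { return architecture; }
    public List getBuiltUsing() { return builtUsing; }
    public float getDiskSpace() { return diskSpace; }
    public String getDiskSpaceMeasure() { return diskSpaceMeasure; }
}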
Those figures depict the UML class diagram of the Vendor Component Description Meta-Model for each Vendor Repository. These are inferred Vendor Component Description Meta-Models, but they are sufficient to convey the idea behind the contribution.

Figure 4-12 Vendor Component Description Meta-Model (UML class diagram with the classes Component, Publisher, Price, AssetValue, License, Review, Note and Documentation, plus an enumeration package — programmingLanguage, componentType, Architecture, OperatingSystem, ProductType, Platform, Category — holding the value lists used by the model)

Figure 4-13 Vendor Component Description Meta-Model (UML class diagram with the classes Component, ScreenShot, Classification, Publisher and Comment, plus an enumeration package — Category, SubCategory, Language, Technology, Type, downloadType — holding the value lists used by the model)

Figure 4-14 Vendor Component Description Meta-Model (UML class diagram with the classes Component, Tool, Project, Customer, Organization, Producer, Documentation, License, MailingList, Forum and UserClub, plus an enumeration package — OrganizationType, SoftwareClassification, OperatingSystem, deliveryMode, LicenseType — holding the value lists used by the model)

Secondly, once the Vendor Repositories were searched and the different Vendor Component Description Meta-Models created, it was necessary to compare those meta-models among themselves and with the IPSComp Component Description Meta-Model in order to identify commonalities and differences. The aim behind this task was to come up with an intermediate model. Such a model contains those elements that are essential for a component description (the Essence Component Description Meta-Model) in the chosen domain, which is that of commercial web sites acting as component repositories. These commercial web sites selling software components produce a components market. The Vendor Repositories selected provide more or less the same sort of information for the components stored in them. On the one hand they have a set of software components and a set of producers. For the producers they display the name, contact information (web site, email) and a brief producer description. As far as the software component is concerned, the component has a name, a textual description, some technical specification regarding the architecture on which it is built, the programming language, the operating system, a classification based on a set of keywords, a producer, a price, and some of them have license information. Based on that information those web sites promote the description and marketing of the goods they offer, software components. These Vendor Repositories also provide the same means for searching.
The most common techniques these Vendor Repositories provide for searching are keyword-based search and browsing (explained in section 2.2). They might have an ontology, a taxonomy or a controlled vocabulary to classify components. Another characteristic found (and explicitly documented) is that the search is not case sensitive, although it will present different results depending on whether the words included in the search are in singular or plural form. After the comparison between the different Vendor Component Description Meta-Models and the IPSComp Component Description Meta-Model, the resulting Essence Component Description Meta-Model is also modeled as a UML class diagram, shown in Figure 4-15.

Figure 4-15 Essence Component Description Meta-Model (UML class diagram with the classes Component — id, name, description, version, programmingLanguage, language, componentModel, keywords, requirements, operatingSystems, domain, classification — Price, License and Publisher, plus an enumeration package — programmingLanguage, ComponentModel, Language, OperatingSystem — holding the value lists used by the model)

Taking into account the MDA description, it is possible to state that so far there are five models. Three of them correspond to the three Vendor Component Description Meta-Models. The fourth one is the IPSComp Component Description Meta-Model. The fifth one is the Essence Component Description Meta-Model that was created by comparing the other four models. The idea is to be able to translate a Vendor Java Implementation (Item {8} Figure 4-9) into the IPSComp Java Implementation (Item {7} Figure 4-9). The first thing that has come out of the analysis is that there is information which should be included in the IPSComp ontology, such as license, price and publisher description. Refer to section 4.1.2 to see in detail what we have added to the IPSComp ontology. On the other hand, having a look at the meta-models, most of the fields have been represented as string data types. Basically the idea is that the transformation will behave like a mapping, in which the attributes of one meta-model will be translated to the attributes of the Essence meta-model. This can be achieved by an API. The IPSComp Transformation API provides a method to create an Essence Java Implementation. Then, by means of the methods included in the IPSComp Transformation API, the user must populate the component description. Because not all the fields present in the Vendor Component Description are in the Essence Component Description, some fields will be left out. I have implemented a prototype as part of the research, the IPSComp prototype. The IPSComp prototype includes the IPSComp Framework and the Vendor Framework for the Vendor Repository.
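Before going into the prototype classes, it may help to see roughly what the Essence Java Implementation looks like in Java. The sketch below is an illustration only: it assumes the attribute names shown in Figure 4-15 and the setter names that appear later in Table 4-11; the actual GenericComponent class shipped with the IPSComp Transformation API may differ in detail.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the Essence Java Implementation (GenericComponent), based on Figure 4-15.
public class GenericComponent {
    private String id;                  // vendor component id plus the Vendor Repository name
    private String name;
    private String description;
    private String version;
    private String componentModel;      // EJB, CORBA, .NET, ...
    private List keywords = new ArrayList();          // of String
    private List operatingSystems = new ArrayList();  // of String
    private Publisher publisher;

    public void setId(String id) { this.id = id; }
    public void setName(String name) { this.name = name; }
    public void setDescription(String description) { this.description = description; }
    public void setComponentModel(String componentModel) { this.componentModel = componentModel; }
    public void addKeyword(String keyword) { keywords.add(keyword); }
    public void setPublisher(Publisher publisher) { this.publisher = publisher; }

    // Minimal Publisher holder (Figure 4-15): id, name, email, webSite, phone, description.
    public static class Publisher {
        public String id, name, email, webSite, phone, description;
    }
}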
In the IPSComp prototype the IPSComp Framework has a class GenericRepository that represents a Component Repository. In the real project implementation this would most probably be implemented on top of a database. The GenericRepository class implements the singleton pattern to simulate the back-end storage device. In the IPSComp prototype this class provides the IPSComp Transformation API methods to create or retrieve an Essence Java Implementation. It also manages a component id, which is composed of the id the component has in the Vendor Repository plus the Vendor Repository name. The Essence Java Implementation is implemented in the IPSComp Transformation API by the GenericComponent class.

Scenario: Model Transformation using the IPSComp Transformation API.
Actor: Software component repositories integrator.
This scenario allows the actor to perform a transformation from a Vendor Java Implementation to an IPSComp Java Implementation. To accomplish a complete transformation the actor must execute five steps, explained in this section. A software component repositories integrator can perform a transformation from a Vendor Java Implementation to an IPSComp Java Implementation using the IPSComp Transformation API. Table 4-11 shows an example of how the actor accomplishes steps 1, 2 and 3. First the software component repositories integrator implements a method, in the snippet the method transformToComponentEssence. The method receives a Vendor Java Implementation (ComponentSource) as parameter and returns an Essence Java Implementation (GenericComponent), which is the result of the transformation from a Vendor Java Implementation to an Essence Java Implementation. To generate the transformation to the Essence Java Implementation, in lines 1 through 5 the repository integrator defines some variables. The software component repositories integrator has to retrieve the repository (line 5). Then in line 6 it creates an Essence Java Implementation (Step 1). In order to perform the transformation, depending on the model representation, some values might require some processing. In lines 7 to 21 the actor performs Step 2. For instance, in lines 7 to 9 the id in the Vendor Repository is stored as int, so it has to be converted to String by the software component repositories integrator. Lines 14 to 19 show how to handle collections; in this specific case the software component repositories integrator traverses the list by using the Iterator class and retrieving each value, and the IPSComp Transformation API provides methods to include values in the different lists. Lines 20 and 21 show how to handle a Publisher class. This class supports the publisher IPSComp ontology concept in the IPSComp Transformation API. The IPSComp Transformation API provides setter methods to fill out the Essence Java Implementation. Then line 22 returns the created Essence Java Implementation. As can be inferred from the example, each repository integrator must know how to handle its own repository, and the IPSComp Transformation API provides methods to perform the transformation. Then, to accomplish Step 3, the software component repositories integrator can call the static method transformModelToComponentOntology implemented in the class ModelTransformationGeneric included in the IPSComp Transformation API, which receives an Essence Java Implementation (GenericComponent) and returns an instance of the IPSComp Java Implementation, as shown in lines 23 to 25.
public static GenericComponent transformToComponentEssence(ComponentSource sourceComponent){
1.  GenericRepository genericRepository;
2.  GenericComponent genericComponent;
3.  componentSource.Publisher originalPublisher;
4.  genericComponent.Publisher publisher;
    . . .
5.  genericRepository = GenericRepository.getGenericRepository("");
6.  genericComponent = genericRepository.createGenericComponent();
7.  integerValue = new Integer(sourceComponent.getId());
8.  value = integerValue.toString();
9.  genericComponent.setId(value);
10. genericComponent.setComponentModel(sourceComponent.getComponentType());
11. genericComponent.setName(sourceComponent.getName());
12. genericComponent.setDescription(sourceComponent.getComponentAbstract());
13. genericComponent.addKeyword(sourceComponent.getArchitecture());
14. aList = sourceComponent.getBuiltUsing();
15. iterator = aList.iterator();
16. while (iterator.hasNext()){
17.     value = (String) iterator.next();
18.     genericComponent.addKeyword(value);
19. }
20. publisher = genericRepository.createPublisher(value, originalPublisher.getName(), originalPublisher.getEmail(), originalPublisher.getWebSite(), originalPublisher.getPhone(), originalPublisher.getDescription());
21. genericComponent.setPublisher(publisher);
    . . .
22. return genericComponent;
}

public static void main(String[] args){
23. GenericComponent genericComponent = transformToComponentEssence(aExternalComponent);
24. Component component = ModelTransformationGeneric.transformModelToComponentOntology(genericComponent);
25. }
Table 4-11 Example Scenario Model Transformation Using the IPSComp Transformation API - Steps 1, 2 and 3.

As can be inferred from the previous paragraphs, in the transformation from the different Vendor Java Implementations to the IPSComp Java Implementation there is a first transformation to the Essence Java Implementation and then from the Essence Java Implementation to the IPSComp Java Implementation (IPSComp ontology compliant). From Step 1 to Step 3 the transformation can be seen as one dimension of the component description domain. The fact that supports such a statement is that with the Vendor Component Description Meta-Model only a superficial component specification can be achieved. It is superficial in the sense that there is no complete technical description; the description can be seen as belonging to a marketing-level dimension. Those meta-models present concepts such as price, licenses, producers’ information, component name, component description (in natural language), component architecture, etc. As a consequence, up to here, the transformation is made in the “marketing” dimension. It provides a first image of a Vendor Java Implementation as an IPSComp Java Implementation by passing through the Essence Java Implementation. Once the IPSComp Java Implementation has been obtained by performing Steps 1 through 3, there could be fields that have not been translated from the Vendor Component Description. Adding those fields could lead towards a more complete and accurate component image in the IPSComp Repository. The IPSComp Transformation API provides methods to populate the IPSComp Component Description. To clarify the point exposed in the previous paragraph, it is easier to take a specific example developed in the IPSComp prototype. The Vendor Component Description for the Vendor Repository has fields which describe two component prerequisites: disk space required and memory required. These characteristics have been included in the IPSComp ontology as quality attributes.
So once the transformation has been applied (see Table 4-11 Example Scenario Model Transformation Using the IPSComp Transformation API - Steps 1, 2 and 3.), it will be useful to add the prerequisite information. The software component repositories integrator accomplishes this using the IPSComp Transformation API as shown in Table 4-12 (Step 4). /** After analyzing the information the user can use the component qualityAttribute package included in the IPSComp Transformation API to introduce quality attributes in the IPSComp Java Implementation.**/ 1. QualityAttribute qualityAttribute; 2. String value; 3. QualityAttribute = QualityAttributeFactoryClass.getQualityAttribute (ComponentQualityAttribute.DISKUTILIZATION.toString()); 4. value = String.valueOf(componentSource.getDiskSpace()); 5. qualityAttribute.getMetric().setValue(value); 6. qualityAttribute.getMetric().setUnit(componentSource.getDiskSpaceMeasure()); 7. qualityAttribute.getMetric().setFeature("Disk Space"); 8. component.addQualityAttribute(qualityAttribute); Table 4-12 Example Scenario Model Transformation Using the IPSComp Transformation API – Step 4. In the snipped code shown in Table 4-12 line 1 defines a QualityAttribute class instance which will store quality attribute description to be added to the IPSComp Java Implementation. In line 3 a Factory pattern is used to instantiate the object for a specific Quality Attribute, in this case a DISKUTILIZATION. 54 In order to ensure that only defined quality attributes are assigned, an enumeration pattern is used in the IPSComp prototype. Lines 4 through 7 fill out the metric values. Only a person that knows the Vendor Component Description Meta-Model will know how to read values, but it will also need to become familiar with the IPSComp ontology. Finally in line 8 the new quality attribute is added to the IPSCopm Java Implementation. It can be inferred from the previous example (Table 4-12) that the software component repositories integrator will be able to complete the IPSComp Java Implementation after the model transformation has taken place. This will allow accomplishing a more precise and complete “image” of the component description in the IPSComp Repository. Before explaining step 5, it is necessary to look into more detail to the Component Repositories that I am modeling (Figure 4-16). Those repositories (Components Repositories layer) contain components implemented in a specific platform, which can be for instance EJBs, .NET, CORBA, etc. Each component has a description associated to it (A component in a Repository layer). There is a model that represents each one of the components present in the repository (Component Description Model layer). It is possible to create a meta-model for that model. This meta-model is expressed in UML and it corresponds to the UML class diagrams for the different Vendor Component Description Meta-Model (Component Description Meta-Model layer). As it has been stated after analyzing some of the meta-models, they provide a description in the marketing dimension. Component Description Meta-Model Component Description Model Components Repositories A Component in a repository C C Component EJB, .NET, CORBA, etc C C C Component description C A Component in a repository consists of a software components in a specific architecture (EJB, .NET, CORBA, etc) and the description associated to the component (This relation is represented by the arrow). 
Figure 4-16 Component Repositories Domain Layers (three layers — Component Description Meta-Model, Component Description Model, Components Repositories; a component in a repository consists of a software component in a specific architecture such as EJB, .NET or CORBA, together with the description associated to it)

The next step is to go further down in the domain and start considering specific component architectures. Based on the architecture, a more complete image of the component description is generated in the IPSComp Repository. Once the marketing dimension transformation has been applied (up to Step 4), the IPSComp Transformation API provides means to accomplish the transformation in the second identified dimension, the technical description. The idea is to take any specific component architecture (J2EE, .NET, CORBA, etc.) and continue with the transformation (Step 5). As shown in Figure 4-17, the outer rounded rectangle represents the set of Vendor Repositories. Then, after applying the transformation up to Step 2, a representation in the Essence Component Description is obtained. It is important to point out that for each Vendor Repository the transformation is unique, which is why the IPSComp Transformation API is provided; in this way different Vendor Repositories may use the API to accomplish the transformation. The external repositories have a set of components developed in different specific architectures. Applying Step 5 then produces a more complete image of the Vendor Java Implementation in the IPSComp ontology. The outcome of this process is represented in the inner rectangle.

Figure 4-17 Obtaining a Component Description Image in the IPSComp Ontology (Vendor Repositories A, B, ..., n with EJB, .NET and CORBA components are mapped through transformations A, B, ..., n and the Component Essence into the IPSComp ontology)

In the IPSComp prototype a specific component architecture transformation has been implemented for EJBs. The Enterprise JavaBeans specification is one of the several Java APIs in the Java 2 Platform, Enterprise Edition. The specification details how an application server provides server-side objects known as Enterprise JavaBeans, or EJBs. Enterprise JavaBeans (EJB) technology is the server-side component architecture for the Java 2 Platform. Components (JavaBeans) are reusable software programs that you can develop and assemble easily to create sophisticated applications [67]. J2EE components are packaged separately. Each component, its related files, and a deployment descriptor are assembled into a module. A deployment descriptor is an XML document that describes a component's deployment settings. For instance, an enterprise bean module deployment descriptor declares transaction attributes and security authorizations for an enterprise bean. Each EJB JAR file contains a deployment descriptor, the enterprise bean files, and related files (Figure 4-18). To sum up, in order to develop an enterprise bean, it is necessary to provide the following files (a minimal sketch of how the descriptor can be located inside the JAR is given after this list):
• Deployment descriptor: An XML file that specifies information about the bean such as its persistence type and transaction attributes. The descriptor is packed in the JAR file under the META-INF/ folder and it is called ejb-jar.xml.
• Interfaces: The remote and home interfaces are required for remote access. For local access, the local and local home interfaces are required. Message-driven beans do not use these interfaces. The remote interfaces expose the services provided by the EJB component. The home interfaces handle the EJB component life cycle.
• Enterprise bean class: Implements the methods defined in the interfaces.
• Helper classes: Other classes needed by the enterprise bean class, such as exception and utility classes.
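As a minimal illustration of the extraction step described in the following paragraphs, the deployment descriptor can be located inside the EJB JAR with the standard java.util.jar API and handed to JDOM, which the prototype already uses for XML processing. This is a hypothetical helper written for this explanation, not the code shipped with the IPSComp Transformation API.

import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import org.jdom.Document;
import org.jdom.input.SAXBuilder;

// Hypothetical helper: pulls META-INF/ejb-jar.xml out of an EJB JAR and parses it with JDOM.
public class DescriptorReader {
    public static Document readDeploymentDescriptor(String jarPath) throws Exception {
        JarFile jar = new JarFile(jarPath);
        try {
            JarEntry entry = jar.getJarEntry("META-INF/ejb-jar.xml");
            if (entry == null) {
                throw new IllegalArgumentException("No ejb-jar.xml found in " + jarPath);
            }
            InputStream in = jar.getInputStream(entry);
            // The resulting Document exposes the tags (<ejb-name>, <remote>, <home>, ...)
            // that are mapped to the IPSComp ontology below.
            return new SAXBuilder().build(in);
        } finally {
            jar.close();
        }
    }
}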
Figure 4-18 Structure of an Enterprise Bean JAR [67] (the assembly root contains all .class files for the EJB module and a META-INF folder holding ejb-jar.xml, MANIFEST.MF and sun-cmp-mappings.xml)

In order to apply the transformation for the J2EE architecture it is necessary to perform tasks on the JAR file that contains the EJB component. As defined in the J2EE specification, a single JAR file might contain more than one EJB component. There is no rule of thumb to define which component or set of components should be included in a single JAR file; as a matter of fact, it is up to the component provider and application assembler to take such a decision. There exist some reasons to include several EJB components in one JAR file; for instance, if a set of EJB components share the same security parameters it is easier to handle just one descriptor to accomplish such behavior. Because of the fact stated in the previous paragraph, it is necessary to take the JAR file and extract the ejb-jar.xml file, which is the component descriptor. Then this file has to be processed to find the tags that are necessary to load the EJB description into the IPSComp ontology. To implement the processing of the ejb-jar.xml file, the same approach used to load the IPSComp ontology from the xml file has been taken. It uses a visitor pattern combined with Java reflection and the Factory pattern, as explained in section 4.3.2. The ejb-jar.xml file stores information that can be mapped to the IPSComp ontology. Even though there is a standard, this descriptor file might have differences between different application servers. The standard deployment descriptor should include the following structural information for each Enterprise Bean:
• The Enterprise Bean's name
• The Enterprise Bean's class
• The Enterprise Bean's home interface
• The Enterprise Bean's remote interface
• The Enterprise Bean's type
• A re-entrancy indication for the Entity Bean
• The Session Bean's state management type
• The Session Bean's transaction demarcation type
• The Entity Bean's persistence management
• The Entity Bean's primary key class
• Container-managed fields
• Environment entries
• The bean's EJB references
• Resource manager connection factory references
• Transaction attributes

This information is used in the J2EE architecture in order to be able to deploy the EJBs and put them to work within the platform. On the other hand, there are different J2EE specification versions, for instance EJB applications that conform to the 2.0 specification, to the 3.0 specification, etc. The tags that provide useful information for the IPSComp ontology are:
• <description> It is used to provide text describing the parent element. The description element should include any information that the enterprise bean ejb-jar file producer wants to provide to the consumer of the enterprise bean ejb-jar file (i.e., to the Deployer). It is used in: cmp-field, cmr-field, container-transaction, ejb-jar, ejb-local-ref, ejb-ref, ejb-relation, ejb-relationship-role, entity, env-entry, exclude-list, message-driven, method, method-permission, query, relationship-role-source, relationships, resource-env-ref, resource-ref, run-as, security-identity, security-role, security-role-ref, session. In the IPSComp ontology, the description tag defined in the parents <ejb-jar>, <session>, <entity> and <message-driven> is important. The <description> within the <ejb-jar> tag holds the description for the whole descriptor. If this descriptor defines only one EJB, this might contain the EJB component description.
The <description> within the <session>, <entity> and <message-driven> tags has the explanation for a particular EJB component. This will be added to the component description in the IPSComp ontology, helping to increase the behavioral component description.
• <ejb-name> It specifies an enterprise bean's name. This name is assigned by the ejb-jar file producer to name the enterprise bean in the ejb-jar file's deployment descriptor. The name must be unique among the names of the enterprise beans in the same ejb-jar file. It is used in: entity, session, message-driven, method, relationship-role-source. For the IPSComp ontology this field will be compared with the name attribute of each component described.
• <remote> It contains the fully-qualified name of the enterprise bean's remote interface. Used in: ejb-ref, entity, session. The interface referenced in this tag contains all the methods that form the component's provided services, so the file referenced in this tag must be processed to extract the methods and map them to the methods in the IPSComp ontology.
• <local> It contains the fully-qualified name of the enterprise bean's local interface. Used in: ejb-local-ref, entity, session. The interface referenced in this tag contains all the methods that form the component's provided services, but this tag is used when the EJB is local, which means it will run in the same Java Virtual Machine (JVM). The file referenced in this tag must also be processed to extract the methods and map them to the methods in the IPSComp ontology.
• <home> It contains the fully-qualified name of the enterprise bean's home interface. It is used in: ejb-ref, entity, session.
• <local-home> It contains the fully-qualified name of the enterprise bean's local home interface. It is used in: ejb-local-ref, entity, session. The interface defined in this tag is analogous to the interface defined in the <home> tag but it is used when the EJB is local.
• <ejb-class> It contains the fully-qualified name of the enterprise bean's class. Used in: entity, message-driven, session. For the session and entity beans this class implements the component's provided services. For the IPSComp ontology, this means that the class specified in this tag contains the EJB’s properties.
• <ejb-ref> It is used for the declaration of a reference to an enterprise bean’s home. It lists all other enterprise Java beans this bean uses. Used in: entity, message-driven, and session. It represents a connection between Enterprise Java Beans: the EJB where the reference is defined calls a service from the referenced component. It will be included as an underlying component of type support in the IPSComp ontology.
• <ejb-local-ref> This element is used for the declaration of a reference to an enterprise bean’s local home. Used in: entity, session, message-driven. It is handled like the <ejb-ref> tag; it is the equivalent for local beans.
• <security-role> It contains the definition of a security role. The definition consists of an optional description of the security role, and the security role name. Used in: assembly-descriptor. In the IPSComp ontology it is mapped to the non-functional characteristic Controllability, which is a security quality attribute. This attribute indicates how the component is able to control the access to its provided services.
• <persistence-type> It specifies an entity bean’s persistence management type. It can have as possible values Bean or Container. Used in: entity.
The persistence type will be mapped to the quality attribute Persistent, which indicates whether a component can store its state in a persistent manner for later recovery. A Presence metric is used to measure this attribute. For the Session and the Message-Driven beans it has false as value.

The IPSComp Transformation API provides a static method loadEJB(Component, String componentName, File jarFile, String tmpPath) which loads information from an EJB jar file (Step 5 in the scenario Model Transformation using the IPSComp Transformation API). It takes as parameters an instance of an IPSComp Java Implementation, a String with the component name as it is referenced in the ejb-jar.xml file (if it is an empty string, the component name stored in the component is taken), a File which is the jar file containing the EJB, and a String which is a path where temporary files are created to be processed. The information loaded from the ejb-jar.xml file must be processed before being added to the IPSComp Java Implementation. As explained above, some tags hold Java file names that define either an interface or a Java class. The way to process such tags is to retrieve the specific class file from the JAR containing the EJB and to gather the precise information for each case. For instance, the <remote> tag contains the class name representing the EJB component's provided interface. It is necessary to add the .class extension to this file name, extract it from the JAR file and, from this file, which is an interface, extract all the public methods. This information is stored in the Methods feature of the IPSComp Java Implementation, which provides the method name, return type and parameter list. For the method’s return type and parameter list, the data type is the data that will be stored in the IPSComp Java Implementation. Besides, because this is the EJB's provided interface, the method has status “Provided” in the IPSComp Java Implementation. If the interface is a subclass of any other interface, it is also necessary to perform the same process for the whole hierarchical structure. The same process must be performed for the <home> tag extracted from the ejb-jar.xml file. The difference between these two interfaces is that the remote one represents distributed EJB components, while the home one represents services provided by local components, those that run in the same JVM. In order to hold this information in the IPSComp Java Implementation, the method feature has a class representing the method precondition, so in this class either the value “Remote EJB service” or “Local EJB service” will be stored. The first idea to retrieve the component properties was to take the <ejb-class> tag. From this tag the Java file name that implements the EJB is obtained, the .class file extension is added to it, and the Java class file corresponding to that file name is extracted from the JAR file and analyzed. The component’s properties correspond to the class’ attributes. However, this approach goes against the idea behind EJB components, in which the only point of contact is through their interfaces. Taking into account the EJB specification, the component’s interfaces might specify getter and setter methods. In [73] the authors define the term virtual field. They assign the name virtual fields to the bean fields. They use this term because it is not required that there actually is a field defined in the bean; the getter and setter method names just imply the name of a field, similar to JavaBean properties.
Following the virtual field definition, by convention the write method name for each property is composed of the property name, capitalizing its first letter and preceding it with the word “set”. It is a void method and receives as parameter an instance of the same data type as the property that it sets. On the other hand, the method name to read the property is composed of the property name, capitalizing its first letter and preceding it with the word “get”. The return data type is the same as that of the property being read, and it does not have parameters. If these methods are found in the interface file, the information is also stored in the property description in the IPSComp Java Implementation. Furthermore, an EJB property can be classified as Read Only, Write Only, or Read Write. This classification is based on the set of getter and setter methods defined for each property. For instance, if a property has only a set method it is defined as Write Only, if it has only a get method it is defined as Read Only, and if it has both methods it is a Read Write property. There are some other tags for which it is not necessary to extract additional files from the EJB jar file. The information is gathered directly from the ejb-jar.xml file and added to the IPSComp Java Implementation. They are described in the following paragraphs. The <ejb-ref> and the <ejb-local-ref> tags list other EJBs a bean uses. This means that the EJB where the reference is defined calls a service from the referenced component. This will be mapped to the IPSComp Java Implementation as an underlying component of type support. That tag has inner tags from which it is possible to obtain: <description>, an optional tag that allows the bean provider to supply some information about the referenced bean's use; <ejb-ref-name>, the environment name the bean should use to create the referenced bean using JNDI; and <ejb-ref-type>, which should be Session or Entity, depending on the referenced bean type. From the <security-role> tag the security role name <role-name> is going to be extracted. With that value a Controllability quality attribute will be added to the IPSComp Java Implementation. The Controllability attribute indicates how the component is able to control the access to its provided services. It has a Presence metric, so the feature will hold the keyword Security-role and the role name as defined in the ejb-jar.xml file, and the value is set to true. The <persistence-type> allows creating a Persistent quality attribute, which indicates whether a component can store its state in a persistent manner for later recovery. A Presence metric is used to measure this attribute. For the Session and the Message-Driven beans it has false as value. For the entity bean it has true as value, and the feature can be either Bean or Container depending on the entity bean’s persistence management type.

As a conclusion for section 4.4, the path followed to accomplish a transformation from a Vendor Java Implementation to an IPSComp Java Implementation has been shown. First it was necessary to come up with the models that participate in the transformation; those models were created after analyzing some component repositories on the web. Second, by comparing those models with the IPSComp Component Description Meta-Model, the Essence Component Description Meta-Model was generated.
The comparison between models drove me to the identification of two dimensions in the component description domain: the marketing dimension, mostly used in commercial web sites, and the technical dimension. Besides, the Essence Component Description Meta-Model, and the routing of the transformation through the Essence Java Implementation, are intended to allow monitoring the transformation in order to evolve the ontology, trying to arrive at a standard for software component description. To perform the transformation the IPSComp Transformation API is provided. Finally, the scenario to accomplish a Model Transformation using the IPSComp Transformation API is as follows:
Actor: Software component repositories integrator.
Purpose: Perform a transformation from a Vendor Java Implementation to an IPSComp Java Implementation.

5 Conclusions

The present investigation aims to be the starting point towards a scalable, proper functional architecture for the IPSComp project. The tasks performed during the last couple of months were oriented to tackling some issues, or coming up with ideas that can be implemented in the final project. At the end of this section there are two figures: one shows the proposed architecture, the other has some parts of it highlighted in red and numbered. Those red colored elements represent portions of the system on which some kind of work has been done. This research has presented a component ontology that is proposed to describe components in a precise manner (item {2} in Figure 5-2) as well as to facilitate the component search and retrieval process. During the definition of the IPSComp ontology, two dimensions that different users might be interested in have been identified: the marketing and the technical dimension. These dimensions are related to the type of user that interacts with the system. For instance, a component-based application developer might be interested in the technical description, while a benchmark analyst would concentrate on the subject of interest (price, size, provider in the marketing area; performance, interfaces, security in the technical area). The objective of this research was not to have the final word as far as component description is concerned; actually this topic is still an open issue. As a consequence the IPSComp ontology must be able to evolve along with the software component domain. The importance of this approach is to try to find an ontology description that can be conceived as a component essence description, which must be validated by the different actors involved in the process. Being aware of the facts that the IPSComp ontology is not a final version, and that there are several component repositories already developed, MDA provides means to handle these two factors. As a matter of fact, I find that MDA provides a common layer of concepts that can be applied to different domains, and also to different levels of abstraction. MDA transformation allows us to handle the evolution of the component description until the field under research reaches an agreement or standard that fulfills the needs of the software component description domain (items {3} {4} in Figure 5-2). The component description evolution can be handled by model transformations. For the present research, the first transformation, the one that has been addressed as the transformation at the marketing level, has been implemented in both directions.
In section 4.4 it was shown how a component integrator could create an image of its components' description in the IPSComp ontology. But this transformation has also been implemented the other way around. As a matter of fact, components described in the IPSComp ontology can be transformed to the Essence Java Implementation, and the repository integrator could populate a Vendor Java Implementation based on the IPSComp Java Implementation and the Essence Java Implementation. This provides the means to incorporate Essence Java Implementation components into the Vendor Repositories. Once this has been done, those components will benefit from the features implemented in Vendor Repositories, such as component classification and retrieval. In order to accomplish that, the IPSComp prototype provides the means to translate an IPSComp Java Implementation into the Essence Java Implementation. Once this has been done, the repository integrator can use the means in the Essence Java Implementation to read data from it; it is also an API. In the same manner, the IPSComp Java Implementation has an API to read data from it. The transformation from the IPSComp Java Implementation to Vendor Repositories must be understood as a means to migrate the IPSComp Repository if some other approach is taken as a standard. This allows sending the components described in the system to the model of the standard one. It is important to point out that this model transformation from the IPSComp Java Implementation to the Vendor Java Implementation is not included in the IPSComp project requirements. Actually, the site must provide means to keep information under certain security levels, in order to guarantee that the information will not be used in a harmful way by different competitors. Anyway, the aim of this facility is to allow migration towards another repository if another standard is adopted, and this facility should be excluded in the deployment phase. Furthermore, the model transformation facility is provided by means of an API, the IPSComp Transformation API. The reason for choosing this approach is that, as the component description is still an open issue, there is a high risk that the description changes with time, which means that either the IPSComp Component Description Meta-Model or a Vendor Component Description Meta-Model changes. The IPSComp Transformation API can be changed accordingly. But there is no control over the changes performed in a Vendor Component Description Meta-Model. With the IPSComp Transformation API, repository integrators can adapt the transformation to those changes or incorporate the changes carried out in the IPSComp Component Description Meta-Model. Even though not final, it is the aim of this research to contribute to the component description that industry and academia are trying to reach. As can be inferred from the IPSComp ontology proposed here, in order to describe a component a wide set of information must be provided. As a matter of fact, the component subscription to the system is a highly time-demanding process. The developer must provide all the required information to guarantee the correct component description. This process can become tedious and there is no guarantee that users will be willing to go through it. For instance, in order to register a component in the Componex web site (a Vendor Repository), it is necessary to fill out a 10-page form.
As a matter of fact, in [15] the author states that he does not know whether a component producer will be willing to fill out all that information. In fact, up to date only 6 components have been registered, and those are examples. That was also one of the reasons to include the repository integration analysis in this research, so that we can take advantage of the existing repositories. Furthermore, also aiming to reduce this burden, the IPSComp Transformation API provides a method to include EJBs from a jar file. Anyway, in order to accomplish a more accurate description, it is mandatory to provide that information either in the Component Description Meta-Model or, as explained in section 4.4, in the ejb-jar.xml for the specific EJB case. Because the software component is an existing artifact, it is necessary to identify which components are useful when assembling systems. In order to describe existing components a standardized specification is needed. If there is little incentive or pressure to agree on open standards, a set of proprietary descriptions will be created; that is a reality today. In this research the IPSComp ontology has been proposed, by taking an existing one called XCM [28] as a base, and adding the quality attributes to provide a non-functional description of the components (item {3} in Figure 5-2). The quality attributes have been tailored to components from the ISO 9126 norm [52]. It is a good starting point to take a standard as a base. It is important to remark that the component provider must provide most quality attribute values, but it is necessary to have feedback from the component end user in order to validate them. This feedback will allow obtaining more reliable information. The feedback is also important because, even though the quality attribute is concrete and the mechanism to calculate its value is clear, it can have a subjective point of view. For instance, measuring the level of complexity of parameterizing a component depends directly on the skills of the person using the component; accordingly, a wider sample will be helpful to calculate a more accurate value. Moving on to the ontology implementation (item {1} in Figure 5-2), once the IPSComp ontology was defined it was included in PLIB. Some issues concerning the tool did not allow creating instances of the ontology. Anyway, PLIB provides a Java API which will allow handling the ontology elements from an external application. The aim of this point was to let the system communicate with the ontology manager in order to manipulate the ontology and to perform transformations on different ontologies. Furthermore, the ontology manager system that has to be integrated into the IPSComp project must be a robust one. The IPSComp project will have several ontologies; one of them is the one describing software components, the IPSComp ontology, but the software development industry is dealing, or in contact, with a variety of domains. As a consequence, for each domain a specific ontology must be included in the system (item Domain-specific Ontologies and Taxonomies, not red highlighted in Figure 5-2). The domain ontology is a task that should be performed by a domain specialist. This ontology must evolve with the domain. As such, this will be a time-demanding task, which qualified people should perform. On the other hand, identification of software components is a complex task. It has to deal with two main issues: unstructured information to describe components and an impressive number of possible candidates.
To overcome the former, the IPSComp ontology aims to standardize the component description. The number of candidates cannot be diminished, but once the standardization has been done and the means to integrate component repositories are provided, so that they are described with the IPSComp ontology, it is possible to create an image of the component descriptions in our repository. Such descriptions are IPSComp ontology compatible, and as such they can share the set of tools that will be implemented in the project, such as the recommender system. Different retrieval schemes have been proposed throughout the years to retrieve software components (item {4} in Figure 5-2). Those different schemes for the software retrieval process are being combined and implemented in some commercial sites as well as in research projects. As a result of the combination of such techniques, researchers have shown an improvement in the retrieval process. For instance, one of the drawbacks of the Signature Matching-based technique is that the result set can contain components which do not accomplish the desired behavior even though the signature matches exactly; consider the strcpy and strcat functions in the C language: the signature is the same but the tasks they perform are completely different. Preceding this technique with a semantic-based approach makes it possible to limit the sample on which the signature matching is going to be performed [42]. Such techniques are applied on a specific model representation. If it is possible to perform a model transformation towards such a model, the source model instances will benefit from the features provided in the target repository. The IPSComp ontology holds the concepts necessary to implement search tools based on different searching techniques.

Figure 5-1 IPSComp System Architecture [48]
Figure 5-2 IPSComp System Architecture Analyzed

6 Future Work

The present research has shown some considerations that must be taken into account for the IPSComp project functional architecture. These considerations must be implemented and validated. Some fields will definitely need further research in order to accomplish a complete solution which satisfies the requirements of the different users involved in the project. It is necessary to arrive at a standardized component description. To accomplish this it is essential to start monitoring the model transformation, annotating how the ontology evolves and what concepts are being used, and adjusting the IPSComp Component Description Meta-Model as well as the IPSComp Transformation API to support this information. The IPSComp ontology has provided a non-functional description by means of quality attributes, which have been implemented with two main characteristics that call for further research. The former is the IValue interface to measure different data types. It should be extended with a grammar to compare its values; this will enable component search based on non-functional characteristics. The second characteristic is the feature concept; it provides context to the quality attribute. This must be complemented with an external ontology related to the domain to which the quality attribute belongs. As a consequence, an ontology for each quality attribute should be created by an expert in the domain. The IPSComp ontology will provide the information needed to implement a component retrieval tool. As a matter of fact, Tansalarak et al.
[51] present a description of the component retrieval tool created on top of the XCM ontology. Either by extending this search tool to include the new features the IPSComp ontology has, or by creating a new retrieval scheme combining several schemes, a search tool can be implemented. It is not just a matter of providing the search tool but of evaluating it. For instance, let us suppose that the work done in [51] is extended by including a grammar to retrieve components by quality attributes. It is not a matter of retrieving a smaller total number of hits from the search, but of being more accurate with respect to the user needs. The architecture should be complemented with user web mining techniques. This allows monitoring components, users and queries. The impact produced by the lack of standards can be diminished by the gathering of information. Besides, feedback from end users relating metric values and components in general is necessary to improve the architecture, as well as the IPSComp ontology. At the implementation level, the prototype provides means to include EJBs into the IPSComp ontology; this has been implemented for EJB version 2.1, and it is necessary to include different EJB versions as well as components developed in other architectures such as .NET, CORBA, etc. On the other hand, Web Services are intended to provide a standardized mechanism to describe, locate, and communicate with online applications. In order to offer a service description they use the Web Service Description Language (WSDL). WSDL usually describes interface information for publicly available methods, data type information for messages, binding information for transport protocols, and address information for locating services. If we look at the way WSDL describes a Web Service, it can be compared to the way IDL specifies a software component. As a matter of fact, both IDL and WSDL do not support any sort of semantic description. Taking advantage of this commonality, some research is being addressed to provide semantic meaning to components and web services at the same time [15, 42]. This will help in the description and discovery of web services as well as components, because the same approaches can be taken in both domains.

Appendix A - IPSComp Ontology UML Class Diagram

(UML class diagram of the IPSComp ontology, organized in the packages component — Component, GeneralInfo, Publisher, License, Price, Property, Method, Event, Scenario, Pre, Post, Invariant, Type, among others — enumeration — Style, Delivery, EventStatus, Access, Role, Status, QualityAttributeISO9126, SubCharacteristicISO9126, MeasureAt, OperatingSystem, with the value lists implemented as enumeration patterns — xmlParser — IXMLTag, IVisitor, XMLTagFactory, BaseXMLTag, XMLGeneralInfoTag, XMLScenarioTag, JDomLoadParserTraversal, VisitorLoadModel, MethodFinder — qualityAttribute — QualityAttribute, QualityAttributeFactoryClass, ComputationalAccuracy, DataEncription, Persistent, Capacity, Controllability, ResponseTime, Throughput, Auditability — and metric — Metric, IValue, IntValue, FloatValue, BooleanValue, StringValue, Number, Time.)
Appendix B - IPSComp Ontology component Package UML Class Diagram
Appendix C - IPSComp Ontology qualityAttribute Package UML Class Diagram
Appendix D - IPSComp Ontology metric Package UML Class Diagram
Appendix E - IPSComp Ontology xmlParser Package UML Class Diagram
Appendix F - IPSComp XML Meta-Model - XSD Schema
<xs:element <xs:element <xs:element <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:element </xs:sequence> <xs:attribute </xs:complexType> </xs:element> <xs:element <xs:simpleType> <xs:restriction <xs:enumeration <xs:enumeration <xs:enumeration </xs:restriction> </xs:simpleType> </xs:element> <xs:element <xs:element <xs:complexType> <xs:sequence> <xs:element </xs:sequence> </xs:complexType> <:element </xs:schema> 85 86 Appendix G - IPSComp Component Description – XML Example <?xml version="1.0" encoding="UTF-8"?> <!-- edited with XML Spy v4.2 U () by Administrator (Administrator) --> <componentSpecification xmlns: <id>543</id> <name>javax.composite.SliderFieldPanel</name> <generalInfo> <version>Java</version> <package>javax.composite </package> <language>Java </language> <model>JavaBean</model> <domain>Interface</domain> <domain>MVC</domain> <os>Windows</os> <os>Linux</os> <prices> <price> <amount>43.26</amount> <currency>Euros</currency> <description>This component does not have any discounts</description> </price> <price> <amount>33.99</amount> <currency>Euros</currency> <description>This price is only for partners</description> </price> </prices> <publisher> <id>SunId</id> <name>Sun Microsystems </name> <email>[email protected]</email> <webSite></webSite> <phone>(1) 737 883 2694</phone> </publisher> <license> <name>OpenSource</name> <description>This is an open source software ... </description> </license> </generalInfo> <features> <methods> <method status="Provided"> <mName>addInt</mName> <desc>This is is provided interface to add two ints</desc> <pre>true</pre> <post>true</post> <returnType>int</returnType> <paraType>int</paraType> <paraType>int</paraType> <scenarios> <scenario sName="First>Kill</scenarioParaType> </scenario> <scenario sName="Second>Kill</scenarioParaType> </scenario> </scenarios> 87 </method> <method status="Provided"> <mName>multiplyInt</mName> <desc>This is is provided interface to multiply two ints</desc> <pre>true</pre> <post>true</post> <returnType>int</returnType> <paraType>int</paraType> <paraType>int</paraType> <scenarios> <scenario sName="Third>Static</scenarioParaType> </scenario> <scenario sName="Forth>Static</scenarioParaType> </scenario> </scenarios> </method> </methods> <properties> <property access="ReadWrite" style="Simple"> <pName>minimumValue </pName> <pType>int </pType> <readMethod>getMinimum </readMethod> <writeMethod>setMinimum </writeMethod> </property> <property access="ReadWrite" style="Bound"> <pName>currentValue </pName> <pType>int </pType> <readMethod>getCurrentValue </readMethod> <writeMethod>setCurrentValue </writeMethod> </property> <property access="ReadWrite" style="Simple"> <pName>fieldWidth </pName> <pType>int </pType> <readMethod>getFieldWidth </readMethod> <writeMethod>setFieldWidth </writeMethod> </property> <property access="WriteOnly" style="Simple"> <pName>minimumSize </pName> <pType>int </pType> <readMethod>getMinimumSize</readMethod> </property> <property access="ReadOnly" style="Simple"> <pName>preferredSize </pName> <pType>int </pType> <readMethod>getPreferredSize </readMethod> </property> </properties> <events> <event delivery="MultiCast" status="publish"> <eType>java.Beans.PropertyChangeEvent </eType> </event> <event delivery="UniCast" status="publish"> <eType>event1</eType> <addListenerMethod>addListener1</addListenerMethod> <removeListenerMethod>removeListener1</removeListenerMethod> <listenerMethods> <listenerMethod> <mName>listenerMethod11</mName> <returnType>int</returnType> <paraType>int</paraType> 88 
<paraType>int</paraType> </listenerMethod> <listenerMethod> <mName>listenerMethod12</mName> <returnType>float</returnType> <paraType>float</paraType> <paraType>float</paraType> </listenerMethod> </listenerMethods> </event> <event delivery="MultiCast" status="Consumed"> <eType>event2</eType> <addListenerMethod>addListener2</addListenerMethod> <removeListenerMethod>removeListener2</removeListenerMethod> <listenerMethods> <listenerMethod> <mName>listenerMethod21</mName> <returnType>String</returnType> <paraType>double</paraType> <paraType>char</paraType> </listenerMethod> <listenerMethod> <mName>listenerMethod22</mName> <returnType>Object</returnType> <paraType>List</paraType> <paraType>List</paraType> </listenerMethod> </listenerMethods> </event> </events> </features> <design> <compInstances> <compInstance role="Master"> <comp>javax.swing.JPanel </comp> <cid>panel</cid> </compInstance> <compInstance role="Support"> <comp> java.awt.BorderLayout </comp> <cid>border</cid> </compInstance> <compInstance role="Client"> <comp> javax.swing.JSlider </comp> <cid> slider</cid> </compInstance> <compInstance role="Client"> <comp> javax.swing.JTextField </comp> <cid>field</cid> </compInstance> <compInstance role="Client"> <comp> javax.awt.BoxContainer </comp> <cid> boxContainer</cid> </compInstance> </compInstances> <compositions> <eCompositions> <eComposition> <constraint> <inv>true</inv> <pre>true</pre> <post>true</post> </constraint> <eCompInstances> <eCompInstance> <rid>slider</rid> <event>change</event> <eAction> stateChanged </eAction> </eCompInstance> </eCompInstances> <lCompositions> 89 <lCompInstance> <rid>slider</rid> <callMethod> getValue </callMethod> </lCompInstance> <op type="|"/> <lCompInstance> <rid> filed </rid> <callMethod> setText </callMethod> </lCompInstance> </lCompositions> </eComposition> <eComposition> <constraint> <inv>true</inv> <pre>true</pre> <post>true</post> </constraint> <eCompInstances> <eCompInstance> <rid>field</rid> <event>action</event> <eAction>actionPerformed</eAction> </eCompInstance> </eCompInstances> <lCompositions> <lCompInstance> <rid>field</rid> <callMethod>getText</callMethod> </lCompInstance> <op type="|"/> <lCompInstance> <rid>slider</rid> <callMethod>setValue</callMethod> </lCompInstance> </lCompositions> </eComposition> </eCompositions> <cCompositions> <cComposition> <container> <rid>panel</rid> </container> <containees> <containee> <rid>boxContainer </rid> </containee> </containees> </cComposition> <cComposition> <container> <rid>boxContainer</rid> </container> <containees> <containee> <rid>slider</rid> </containee> <containee> <rid>field </rid> </containee> </containees> </cComposition> <cComposition> <container> <rid>frame</rid> </container> <containees> <containee> <rid>sliderField </rid> <location>BorderLayout.SOUTH </location> </containee> 90 <containee> <rid>logo</rid> <location>BorderLayout.CENTER </location> </containee> </containees> </cComposition> </cCompositions> </compositions> </design> <qualityAttributes> <DataEncription metric="Presence"> <characteristic>Functionality</characteristic> <lifeCycle>Runtime</lifeCycle> <subCharacteristic>Security</subCharacteristic> <metric> <value>true</value> <feature>SSL ceritificate</feature> <unit/> </metric> </DataEncription> <DiskUtilization metric="Number"> <characteristic>Efficiency</characteristic> <lifeCycle>Runtime</lifeCycle> <subCharacteristic>Resource Behavior</subCharacteristic> <metric> <value>10</value> <feature/> <unit>MB</unit> </metric> </DiskUtilization> </qualityAttributes> 
</componentSpecification> 91 92 References [1] Rui S. Moreira, Gordon S. Blair, Eurico Carrapatoso. A Reflective Component-Based & Architecture Aware Framework to Manage Architecture Composition. Third International Symposium on Distributed Objects and Applications (DOA'01). September 17 - 20, 2001. Rome, Italy. p. 0187. [2] Ranieri Baraglia, Fabrizio Silvestri. An Online Recommender System for Large Web Sites. Web Intelligence, IEEE/WIC/ACM International Conference on (WI'04). September 20 - 24, 2004. Beijing, China. p. 199-205. [3] J. Kontio. A case study in applying a systematic method for COTS selection. 18th International Conference on Software Engineering (ICSE'96). March 25 - 29, 1996. Berlin, GERMANY. p. 201. [4] Jie Yang, Lei Wang, Song Zhang, Xin Sui, Ning Zhang, Zhuoqun Xu. Building Domain Ontology Based on Web Data and Generic Ontology. Web Intelligence, IEEE/WIC/ACM International Conference on (WI'04). September 20 - 24, 2004. Beijing, China. p. 686-689. [5] Nead Stojanovic, Jorge Gonzalez, Ljiljana Stojanovic. ONTOLOGER: a system for usage-driven management of ontology-based information portals. International Conference On Knowledge Capture archive. Proceedings of the international conference on Knowledge capture. 2003. Sanibel Island, FL, USA October 23 - 25, 2003. Pages: 172 - 179. Year of Publication: 2003. ISBN:1-58113-583-1. [6] Jean-Christophe Mielnik, Bernard Lang, Stéphane Laurier. eCots Platform: An Inter-industrial Initiative for COTS-Related Information Sharing. Proceedings of the Second International Conference on COTS-Based Software Systems. Pages: 157 - 167. Year of Publication: 2003. ISBN:3-540-00562-5. [7] Fabrizio Silvestri, Diego Puppin, Domenico Laforenza, Salvatore Orlando. A Search Architecture for Grid Software Components. Web Intelligence, IEEE/WIC/ACM International Conference on (WI'04). September 20 - 24, 2004. Beijing, China. p. 495-498. [8] Seoyoung Park, Chisu Wu. Intelligent Search Agent for Software Components. Sixth Asia Pacific Software Engineering Conference. December 07 - 10, 1999. Takamatsu, Japan. p. 154. [9] Sandip Debnath, Sandip Sen, Brent Blackstock. LawBot: A Multiagent Assistant for Legal Research. Internet Computing online IEEE. November/December 2000 (Vol. 4, No. 6). p. 32-37. [10] Jyrki Kontio, Gianluigi Caldiera and Victor R. Basili. Defining Factors, Goals and Criteria for Reusable Component Evaluation. Presented at the CASCON ’96 conference, Toronto, Canada, November 1214, 1996. [11] Robert C. Seacord, Scott A. Hissam, Kurt C. Wallnau. Agora: A Search Engine for Software Components. Internet Computing online IEEE. November/December 1998 (Vol. 2, No. 6). p. 62-70. [12] L. Stojanovic, N. Stojanovic, J. Gonzalez, R. Studer. The OntoManager - a system for the usagebased ontology management. Proceeding, ODBASE 2003, 3-7 November 2003, Catania, Sicily (Italy). [13] Hideki Hara, Shigeru Fujita, Kenji Sugawara, Chiba Institute of Technology. Reusable Software Components Based on an Agent Model. Seventh International Conference on Parallel and Distributed Systems: Workshops (ICPADS'00 Workshops). July 04 - 07, 2000. Iwate, Japan. p. 447. [14] John Davies, A. Duke, and York Sure (2003). OntoShare - A Knowledge Management Environment for Virtual Communities of Practice. Proceedings of the 2nd International Conference on Knowledge Capture (K-CAP2003), 23-26 October 2003, Florida, USA. Edited by . ACM Press. 
[15] Czarnecki, K., Dittmar, T., Franczyk, B., Hoffmann, R., Kühnhauser, W., Langhammer, F., Lenz, B., Müller-Schloer, C., Unland, R., Weber, M., Weissenbach, H., Westerhausen, J. CompoNex: A Marketplace for Trading Software Components in Immature Markets. Proceedings Net.ObjectDays 2003. Overhage, S., Thomas, P. (2003). Transit (ISBN 3-9808628-2-8), p. 145-163. [16] Sven Overhage. Towards a Standardized Specification Framework for Component Development, Discovery, and Configuration. WCOP 2003. Eighth International Workshop on Component-Oriented Programming. Monday, July 21, 2003 At ECOOP 2003, Darmstadt, Germany (July 21-25, 2003). [17] Johannes Maria ZAHA, Alexander KEIBLINGER, Klaus TUROWSKI. Component Market Specification Demand and Standardized Specification Of Business Components. 1st International workshop Component Based Business Information Systems Engineering September 2nd, 2003 - Geneva, Switzerland. [18] Vishnu Kotrajaras. Towards an Agent-Searchable Software Component Using CafeOBJ Specification and Semantic Web. Workshop on Formal Aspects of Component Software (FACS 03). Pisa, Italy, 8-9 September 2003. 93 [19] World Wide Web Consortium Issues RDF and OWL Recommendations. Semantic Web emerges as commercial-grade infrastructure for sharing data on the Web.. [20] F. McCarey, N. Kushmerick. RASCAL: A Recommender Agent for SoftwComponents in an Agile Environment. Proceedings of the 15th Artificial Intelligence and Cognitive Science Conference, Castlebar, Ireland, Se2004. [21] Juan P. Carvallo, Xavier Franch, Carme Quer, Marco Torchiano. Characterization of a Taxonomy for Business Applications and the Relatioships Among Them. 3rd International Conference on COTSBased Software Systems. ICCBSS 2004. 1-4 February 2004. [22] C. Brewster and K. O’Hara. Knowledge Representation with Ontologies: The Present and Future. IEEE Intelligent Systems, 19(2):72 - 81, may 2004. [23] R. Braga, M. Mattoso, and C. Werner. The use of mediation and ontology technologies for software component information retrieval. In Proceedings of the 2001 Symposium on Software Reusability: putting software reuse in context, pages 19-28. ACM, 2001. [24] M. Missikoff and F. Taglino. SymOntoX: A Web-Ontology Tool for eBusiness Domain. In Proceedings of the Fourth International Conference on Web Information Systems Engineering (WISE’03), pages 343-346. IEEE, 2003. [25] A. Rector. Modularisation of Domain Ontologies Implemented in Description Logics and related formalisms including OWL. In Proc. of Knowledge Capture (KCAP’03), pages 121-128. ACM, 2003. [26] C. Pahl. Ontology-based Description and Reasoning for Component-based Development on the Web. In Proceedings of SAVCBS’03-ESEC/FSE’03 Workshop. ACM, 2003. Septiembre 1-2, 2003. Helsinki, Finland. [27] M. Tallis, N. Goldman, and R. Balzer. The Briefing Associate: A Role for COTS Applications in the Semantic Web. In Proceedings of the Semantic Web Working Symposium (SWWS), 2001. [28] N. Tansalarak and K. Claypool. XCM: A Component Ontology. In OOPSLA’04 Workshop - Ontologies as Software Engineering Artifacts. 24-28 October 2004, Vancouver, British Columbia, Canada. [29] Guy Pierra. The PLIB Ontology-based approach to data integration. 18th IFIP World Computer Congress (WCC\'2004). 2004. [30] Guy Pierra. Context-explication in conceptual ontologies: PLIB ontologies and their use for industrial data. Technical report : Research Report LISI/ENSMA 04-001. 2004. [31] Guy Pierra and Hondjack Dehainsala and Yamine Ait Ameur and Ladjel Bellatreche. 
Base de données à base ontologique :principe et mise en ouvre. Journal Ingénierie des systèmes d'information. 2005. [32] Nicola Guarino, Claudio Masolo, Guido Vetere. OntoSeek: Content-Based Access to the Web. IEEE Intelligent Systems. Volume 14, Issue 3 (May 1999). Pages: 70 - 80. Year of Publication: 1999. ISSN: 1094-7167. [33] Overhage, S., Thomas, P. WS-Specification: Specifying Web Services Using UDDI Improvements. In: Chaudri, A. B., Jeckle, M., Rahm, E., Unland, R. (eds.): Web, Web Services, and Database Systems. Lecture Notes in Computer Science (LNCS 2593), Springer, Berlin (2003): 100-118. [34] Kokkinaki, A.I., N. Karakapilides, R. Dekker, and C. Pappis. A web-based recommender system for End-of-use ict products. In Proceedings of the Second IFIP Conference on E-commerce, E-business, E-government, October 2002. [35] S. Varadarajan, A. Kumar, D. Gupta, and P. Jalote. ComponentXchange: An E-Exchange for Software Components. In Poster Proceedings of the Tenth International World Wide Web Conference (WWW 10), 2001 [36] Peter FETTKE, Peter LOOS. A Proposal for Specifying Business Components. 1st International workshop "Component Based Business Information Systems Engineering". September 2nd, 2003 Geneva, Switzerland. [37] Naiyana Tansalarak and Kajal T. Claypool. CoCo: Composition Model and Composition Model Implementation. Technical Report 2004-006, Department of Computer Science, University of Massachusetts - Lowell, June 2004.. [38] Bobeff G. Noyè Jacques. Component Specialization. PEP M'04 August 24-26m 2004. Verona, Italy. [39] R. de Souza, M. Costa, R. Bragga, C. Werner, M. Mattoso. Software Components Reuse Through Web Search and Retrieval. Computer Science Department, Federal University of Rio de Janeiro, Brazil. Department of Computer Science – CTU/UFJF. 94 [40] O. Constant, A. Réquilé, B. Yap. Deriving Action-Based Semantics from Learning Repositories. Proceeding of the First International Conference on Information Technology & Applications.(ICITA 2002). November 25-28 2002. BATHURST, AUSTRALIA. IEEE, ISBN: 1-86467-114-9 - Track 2: T in Multimedioa; Computer Networking; and Database Interface. [41] Haining Yao, Letha Etzkorn. Towards A Semantic-based Approach for Software Reusable Component Classification and Retrieval. ACM Southeast Regional Conference Proceedings of the 42nd annual Southeast regional conference. Huntsville, Alabama. Session: Software engineering #1. Pages: 110 115. Year of Publication: 2004. ISBN:1-58113-870-9. Publisher ACM Press. New York, NY, USA. [42] Sugumaran, Vijayan; Storey, Veda C. A Semantic-Based Approach to Component Retrieval. The DATA BASE for Advances in Information Systems – Summer 2003, Vol. 34, No. 3, p. 8-24. [43] Massimo Paolucci, Takahiro Kawamura, Terry R. Payne, Katia P. Sycara. Importing the Semantic Web in UDDI. Lecture Notes In Computer Science; Vol. 2512 archive. Revised Papers from the International Workshop on Web Services, E-Business, and the Semantic Web. Pages: 225 - 236. Year of Publication: 2002. ISBN:3-540-00198-0. Publisher Springer-Verlag London, UK. [44] Sivashanmugam, K.; Verma, K.; Sheth, A.; Miller, J. Adding Semantics to Web Services Standards. The 2003 International Conference on Web Services (ICWS'03), June 2003. [45] Stefan Decker, Prasenjit Mitra, Sergey Melnik. Framework for the Semantic Web: An RDF Tutorial. Internet Computing online. November/December 2000 (Vol. 4, No. 6). p 68-73. [46] Michael Klein, Birgitta Konig-Ries. Combining Query and Preference An Approach to Fully Automatize Dynamic Service Binding. 
Proceedings of the IEEE International Conference on Web Services (ICWS’04). June 06 - 09, 2004. San Diego, California. Publication Date: June 2004. p. 788. [47] Gerald C. Gannod, Sushant Bhatia. Facilitating Automated Search for Web Services. IEEE International Conference on Web Services (ICWS'04). June 06 - 09, 2004. San Diego, California. Publication Date: June 2004. p. 761. [48] Annya Réquilé-Romanczuk, Alejandra Cechich, Anne Dourgnon-Hanoune. Towards a KnowledgeBased Framework for COTS Component Identification. 27th International Conference on Software Engineering ICSE’05-MPEC’05. May 21st, 2005, St. Louis, Missouri, USA. [49] T.R. Gruber. A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5:199-220, 1993. [50] Hall, R.J. Generalized Behavior-Based Retrieval. Proceedings of the Fifteenth International Conference on Software Engineering, Baltimore, MD, May 1993. p. 371 - 380. [51] Naiyana Tansalarak and Kajal Claypool. Finding a Needle in the Haystack: A Technique for Ranking Matches between Components. Eighth International SIGSOFT Symposium on Component-based Software Engineering (CBSE 2005): Software Components at Work. St. Louis, Missouri. May 14-15 2005. [52] Bertoa, M. F., Vallecillo, A. Quality Attributes for COTS Components. In: Proceedings of the 6th ECOOP Workshop on Quan-titative Approaches in Object-Oriented Software Engineering (QAOOSE 2002). June 11th, 2002. Málaga Spain. [53] Joaquina Martín-Albo, Manuel F. Bertoa, Coral Calero, Antonio Vallecillo, Alejandra Cechich, Mario Piattini. CQM: A Software Component Metric Classification Model. 7th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE'2003). Darmstadt, Germany. Tuesday, July 22nd, 2003. [54] Luis Iribarne, Carina Alves, Jaelson Castro, and Antonio Vallecillo. A non-functional approach for cotscomponents trading. In Proc. of WER 2001, Buenos Aires, Argentina, 2001. [55] A. Podgurski & L. Pierce. Behavior sampling: a technique for automated retrieval of reusable components. In Proc. 14th International Conference on Software Engineering, 349-360. New York, N.Y.: The Association for Computing Machinery, Inc. 1992. [56] Michael Soden, Hajo Eichler, Joachim Hoessler. Inside MDA: Mapping MOF2.0 Models to Components. First European Workshop on Model Driven Architecture with Emphasis on Industrial Application. March 17-18, 2004. University of Twente, Enschede, The Netherlands. [57] Daniel Exertier, Benoit Langlois, Xavier Le Roux. PIM Definition and Description. First European Workshop on Model Driven Architecture with Emphasis on Industrial Application. March 17-18, 2004. University of Twente, Enschede, The Netherlands. [58] Uche Ogbuji. XML, The Model Driven Architecture, and RDF. XML Europe 2002. Down to Business: Getting serious about XML. 23, 24 May 2002. Bacelona, Spain. 95 [59] Ivan Kurtev, Klaas van den Berg. Model driven architecture based XML processing. Proceedings of the 2003 ACM symposium on Document engineering. Grenoble, France. SESSION: Document based architecture & applications. Pages: 246 - 248. Year of Publication: 2003. ISBN:1-58113-724-9. [60] A Proposal for an MDA Foundation Model. An ORMSC White Paper. V00-02. ormsc/05-04-01. [61] Tewfik Ziadi, Bruno Traverson, Jean-Marc Jézéquel. From a UML Platform Independent Component Model to Platform Specific Component Models. Workshop in Software Model Engineering. Tuesday October 1st 2002. Dresden, Germany. [62] Jack Greenfield. UML Profile For EJB. 
Java Specification Request JSR-000026 UML/EJB(TM) Mapping Specification 1.0 Public Review Draft. Rational Software Corporation.. [63] PIM to PSM mapping techniques. First European Workshop on Model Driven Architecture with Emphasis on Industrial Application. March 17-18, 2004. University of Twente, Enschede, The Netherlands. MASTER-2003-D5.1-V1.0-PUBLIC. December 2003. [64] [65] Richard Monson-Haefel. Enterprise JavaBeans, Second Edition. March 2000. ISBN: 1-56592-869-5. [66] Chuck McManis. Take a look inside Java classes. Learn to deduce properties of a Java class from inside a Java program. Java Indepth.. [67],,. [68] Enterprise JavaBeansTM Specification, Version 2.1. Sun Microsystems. Version 2.1, Final Release. November 12, 2003. [69] Bézivin Jean. From Object Composition to Model Transformation with the MDA. Proceedings of the 39th International Conference and Exhibition on Technology of Object-Oriented Languages and Systems (TOOLS39). Page: 350. Year of Publication: 2001. ISSN:1530-2067. Publisher IEEE Computer Society Washington, DC, USA. [70] Joaquin Miller, Jishnu Mukerji. MDA Guide Version 1.0.1. Copyright © 2003 OMG. Document Number: omg/2003-06-01. 12th June 2003. [71] Architecture Board MDA Drafting Team. Model Driven Architecture A Technical Perspective. Draft 21st February 2001. Document Number ab/2001-02-04. [72] M. D. Mcllroy, Mass-produced software components. In Software Engineering Concepts and Techniques, NATO Conference on Software Engineering, 1969. [73] Weaver James, Kevin Muckar, Crume James, Phillips Ron. Beginning J2EE 1.4. Wrox Press Ltd. 2003. United States. ISBN 1-86100-833-3. [74] XML Schema Part 0: Primer Second Edition. W3C Recommendation 28 October 2004.. 96
|
https://manualzz.com/doc/48360914/vrije-universiteit-brussel-%E2%80%93-belgium-ipscomp
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Sentry catches log messages and errors from client applications.
The only way I know of creating a project in Sentry is by manually submitting the form in the Web application interface.
I'm looking for any way to create a project in Sentry from the command line (options, a config file, anything).
This would be greatly valuable for deployment scripts. Otherwise no automation is possible.
Just found this discussion while Googling around but no answer:
Any idea?
It's a Django project, so of course you can:
from sentry.models import Project

project = Project(...)
...
project.save()
Edit: You could write a custom management command to get functionality on the command line
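For instance, here is a minimal sketch of such a management command (a hypothetical file yourapp/management/commands/create_project.py; the Project fields used below are assumptions and should be checked against the Project model of your Sentry version):

    # create_project.py -- sketch of a custom Django management command.
    # The Project constructor arguments (name, slug) are assumptions; check
    # sentry.models.Project in your Sentry version for the actual fields.
    from django.core.management.base import BaseCommand, CommandError
    from sentry.models import Project

    class Command(BaseCommand):
        help = "Create a Sentry project from the command line"

        def handle(self, *args, **options):
            if not args:
                raise CommandError("Usage: create_project <name>")
            name = args[0]
            # Assumed fields; adjust to whatever Project actually requires.
            project = Project(name=name, slug=name.lower())
            project.save()
            self.stdout.write("Created project %s\n" % name)

Once the command lives in an installed app, it should be callable the same way as the management commands shown below, e.g. sentry --config=sentry.conf.py create_project myproject.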
Edit by the question's author: Yes, indeed it is a Django project, so like any Django project I automated my deployment with the following steps:
Run dumpdata like you would with any Django project (Sentry will implicitly call manage.py):
sentry --config=sentry.conf.py dumpdata --indent=2 auth > auth_data.json
sentry --config=sentry.conf.py dumpdata --indent=2 sentry > sentry_data.json
Deploy step by step:
sentry --config=sentry.conf.py syncdb --noinput
sentry --config=sentry.conf.py migrate
sentry --config=sentry.conf.py loaddata auth_data.json
sentry --config=sentry.conf.py loaddata sentry_data.json
Works pretty well. Hope this will help others.
|
https://codedump.io/share/eBdIpDZ2YzEq/1/create-a-project-from-command-line
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
*autocmd.txt* Nvim VIM REFERENCE MANUAL by Bram Moolenaar Automatic commands *autocommand* For a basic explanation, see section |40.3| in the user manual. Type <M-]> to see the table of contents. ============================================================================== 1. Introduction *autocmd-intro* You can specify commands to be executed automatically when reading or writing a file, when entering or leaving a buffer or window, and when exiting Vim. For example, you can create an autocommand to set the 'cindent' option for files matching *.c. You can also use autocommands to implement advanced features, such as editing compressed files (see |gzip-example|). The usual place to put autocommands is in your vimrc file. *E203* *E204* *E143* *E855* *E937* WARNING: Using autocommands is very powerful, and may lead to unexpected side effects. Be careful not to destroy your text. - It's a good idea to do some testing on an expendable copy of a file first. For example: If you use autocommands to decompress a file when starting to edit it, make sure that the autocommands for compressing when writing work correctly. - Be prepared for an error halfway through (e.g., disk full). Vim will mostly be able to undo the changes to the buffer, but you may have to clean up the changes to other files by hand (e.g., compress a file that has been decompressed). - If the BufRead* events allow you to edit a compressed file, the FileRead* events should do the same (this makes recovery possible in some rare cases). It's a good idea to use the same autocommands for the File* and Buf* events when possible. ==============================================================================]. Note that special characters (e.g., "%", "<cword>") in the ":autocmd" arguments are not expanded when the autocommand is defined. These will be expanded when the Event is recognized, and the {cmd} is executed. The only exception is that "<sfile>" is expanded when the autocmd is defined. Example: :au BufNewFile,BufRead *.html so <sfile>:h/html.vim Here Vim expands <sfile> to the name of the file containing this line. `:autocmd` adds to the list of autocommands regardless of whether they are already present. When your .vimrc file is sourced twice, the autocommands will appear twice. To avoid this, define your autocommands in a group, so that you can easily clear them: augroup vimrc autocmd! " Remove all vimrc autocommands au BufNewFile,BufRead *.html so <sfile>:h/html.vim augroup END If you don't want to remove all autocommands, you can instead use a variable to ensure that Vim includes the autocommands only once: :if !exists("autocommands_loaded") : let autocommands_loaded = 1 : au ... :endif When the [group] argument is not given, Vim uses the current group (as defined with ":augroup"); otherwise, Vim uses the group defined with [group]. Note that [group] must have been defined before. You cannot define a new group with ":au group ..."; use ":augroup" for that. While testing autocommands, you might find the 'verbose' option to be useful: :set verbose=9 This setting makes Vim echo the autocommands as it executes them. When defining an autocommand in a script, it will be able to call functions local to the script and use mappings local to the script. When the event is triggered and the command executed, it will run in the context of the script it was defined in. This matters if |<SID>| is used in a command. When executing the commands, the message from one command overwrites a previous message. 
This is different from when executing the commands manually. Mostly the screen will not scroll up, thus there is no hit-enter prompt. When one command outputs two messages this can happen anyway. ============================================================================== 3. Removing autocommands *autocmd-remove* :au[tocmd]! [group] {event} {pat} [nested] {cmd} Remove all autocommands associated with {event} and {pat}, and add the command {cmd}. See |autocmd-nested| for [nested]. :au[tocmd]! [group] {event} {pat} Remove all autocommands associated with {event} and {pat}. :au[tocmd]! [group] * {pat} Remove all autocommands associated with {pat} for all events. :au[tocmd]! [group] {event} Remove ALL autocommands for {event}. Warning: You should not do this without a group for |BufRead| and other common events, it can break plugins, syntax highlighting, etc. :au[tocmd]! [group] Remove ALL autocommands. Warning: You should normally not do this without a group, it breaks plugins, syntax highlighting, etc. When the [group] argument is not given, Vim uses the current group (as defined with ":augroup"); otherwise, Vim uses the group defined with [group]. ============================================================================== 4. Listing autocommands *autocmd-list* :au[tocmd] [group] {event} {pat} Show the autocommands associated with {event} and {pat}. :au[tocmd] [group] * {pat} Show the autocommands associated with {pat} for all events. :au[tocmd] [group] {event} Show all autocommands for {event}. :au[tocmd] [group] Show all autocommands. If you provide the [group] argument, Vim lists only the autocommands for [group]; otherwise, Vim lists the autocommands for ALL groups. Note that this argument behavior differs from that for defining and removing autocommands.* You can specify a comma-separated list of event names. No white space can be used in this list. The command applies to all the events in the list. For READING FILES there are four kinds of events possible: BufNewFile starting to edit a non-existent file BufReadPre BufReadPost starting to edit an existing file FilterReadPre FilterReadPost read the temp file with filter output FileReadPre FileReadPost any other file read Vim uses only one of these four kinds when reading a file. The "Pre" and "Post" events are both triggered, before and after reading the file. Note that the autocommands for the *ReadPre events and all the Filter events are not allowed to change the current buffer (you will get an error message if this happens). This is to prevent the file to be read into the wrong buffer. Note that the 'modified' flag is reset AFTER executing the BufReadPost and BufNewFile autocommands. But when the 'modified' option was set by the autocommands, this doesn't happen. You can use the 'eventignore' option to ignore a number of events or all events. 
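For example, to see which autocommands a group defines for an event and to remove only those for a specific pattern (the group name "myplugin" and the file name "bigfile.vim" below are made up for illustration):
	:autocmd myplugin BufRead
	:autocmd! myplugin BufRead *.txt
To temporarily ignore all events, for instance while sourcing a large file, reset 'eventignore' afterwards:
	:set eventignore=all
	:source bigfile.vim
	:set eventignore=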
|TermOpen| when a terminal buffer is starting |TermClose| when a terminal buffer ends Options |FileType| when the 'filetype' option has been set |Syntax| when the 'syntax' option has been set shada file |VimLeave| before exiting Vim, after writing the shada file Various |DirChanged| after the |current-directory| was changed |WinEnter| after entering another window |WinLeave| before leaving a window |TabEnter| after entering another tab page |TabLeave| before leaving a tab page |TabNew| when creating a new tab page |TabNewEntered| after entering a new tab page |TabClosed| after closingYankPost| when some text is yanked or deleted * *BufCreate* *BufAdd* BufAdd or BufCreate Just after creating a new buffer which is added to the buffer list, or adding a buffer to the buffer list. Also used just after a buffer in the buffer list has been renamed. The BufCreate event is for historic reasons. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being created "<afile>". *BufDelete* BufDelete Before deleting a buffer from the buffer list. The BufUnload may be called first (if the buffer was loaded). Also used just before a buffer in the buffer list is renamed. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being deleted "<afile>". *BufHidden* BufHidden Just after a buffer has become hidden. That is, when there are no longer windows that show the buffer, but the buffer is not unloaded or deleted. Not used for ":qa" or ":q" when exiting Vim. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>". *BufLeave* BufLeave Before leaving to another buffer. Also when leaving or closing the current window and the new current window is not for the same buffer. Not used for ":qa" or ":q" when exiting Vim. *BufNew* BufNew Just after creating a new buffer. Also used just after a buffer has been renamed. When the buffer is added to the buffer list BufAdd will be triggered too. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being created "<afile>". *BufNewFile* BufNewFile When starting to edit a file that doesn't exist. Can be used to read in a skeleton file. *BufRead* *BufReadPost* BufRead or BufReadPost When starting to edit a new buffer, after reading the file into the buffer, before executing the modelines. See |BufWinEnter| for when you need to do something after processing the modelines. This does NOT work for ":r file". Not used when the file doesn't exist. Also used after successfully recovering a. *BufUnload* BufUnload Before unloading a buffer. This is when the text in the buffer is going to be freed. This may be after a BufWritePost and before a BufDelete. Also used for all buffers that are loaded when Vim is going to exit. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>".. *BufWinLeave* BufWinLeave Before a buffer is removed from a window. Not when it's still visible in another window. Also triggered when exiting. It's triggered before BufUnload or BufHidden. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>". When exiting and v:dying is 2 or more this event is not triggered. *BufWipeout* BufWipeout Before completely deleting a buffer. The BufUnload and BufDelete events may be called first (if the buffer was loaded and was in the buffer list). 
Also used just before a buffer is renamed (also when it's not in the buffer list). NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being deleted "<afile>".|. *CmdwinEnter* CmdwinEnter After entering the command-line window. Useful for setting options specifically for this special type of window. This is triggered _instead_ of BufEnter and WinEnter. <afile> is set to a single character, indicating the type of command-line. |cmdwin-char| *CmdwinLeave* CmdwinLeave Before leaving the command-line window. Useful to clean up any global setting done with CmdwinEnter. This is triggered _instead_ of BufLeave and WinLeave. <afile> is set to a single character, indicating the type of command-line. |cmdwin the completed item. *CursorHold* CursorHold When the user doesn't press a key for the time specified with 'updatetime'. Not re-triggered until the user has pressed a key (i.e. doesn't fire every 'updatetime' ms if you leave Vim to make some coffee. :) See |CursorHold-example| for previewing tags. This event is only triggered in Normal mode.. Note: Interactive commands cannot be used for this event. There is no hit-enter prompt, the screen is updated directly (when needed). Note: In the future there will probably be another option to set the time. Hint: to force an update of the status lines use: :let &ro = &ro . *DirChanged* DirChanged After the |current-directory| was changed. Sets these |v:event| keys: cwd: current working directory scope: "global", "tab", "window" Recursion is ignored. regains input focus. This autocommand is triggered for each changed file. It is not used when 'autoread' is set and the buffer was not changed. If a FileChangedShell autocommand is present the warning message and prompt is not given.. *FocusGained* FocusGained When Vim got input focus. Only for the GUI version and a few console versions where this can be detected. *FocusLost* FocusLost When Vim lost input focus. Only for the GUI version and a few console versions where this can be detected.IEnter* GUIEnter After starting the GUI successfully, and after opening the window. It is triggered before VimEnter when using gvim. Can be used to position the window from a gvimrc file: :autocmd GUIEnter * winpos 100 50 . *TextYankPost* TextYankPost Just after a |yank| or |deleting| command, but not if the black hole register |quote_| is used nor for |setreg()|. Pattern must be *. Sets these |v:event| keys: operator regcontents regname regtype Recursion is ignored. It is not allowed to change the text |textlock|. |, |:make| and |:grep|. Can be used to check for any changed files. For non-blocking shell commands, see |job-control|. Enter* TabEnter Just after entering a tab page. |tab-page| After triggering the WinEnter and before triggering the BufEnter event. *TabLeave* TabLeave Just before leaving a tab page. |tab-page| A WinLeave event will have been triggered first. {Nvim} *TabNew* TabNew When creating a new tab page. |tab-page| After WinEnter and before TabEnter. {Nvim} *TabNewEntered* TabNewEntered After entering a new tab page. |tab-page| After BufEnter. {Nvim} *TabClosed* TabClosed After closing a tab page. <afile> can be used for the tab page number. *TermChanged* TermChanged After the value of 'term' has changed. Useful for re-loading the syntax file to update the colors, fonts and other terminal-dependent settings. Executed for all loaded buffers. {Nvim} *TermClose* TermClose When a terminal buffer ends. {Nvim} *TermOpen* TermOpen When a terminal buffer is starting. 
This can be used to configure the terminal emulator by setting buffer variables. |terminal| *TermResponse* TermResponse After the response to |t_RV| is received from the terminal. The value of |v:termresponse| can be used to do things depending on the terminal version. Note that this event may be triggered halfway through another event (especially if file I/O, a shell command, or anything else that takes time is involved). .shada file. Executed only once, like VimLeavePre. < Use |v:dying| to detect an abnormal exit. Use |v:exiting| to get the exit code. Not triggered if |v:dying| is 2 or more. *VimLeavePre* VimLeavePre Before exiting Vim, just before writing the .shada file. This is executed only once, if there is a match with the name of what happens to be the current buffer when exiting. Mostly useful with a "*" pattern. :autocmd VimLeavePre * call CleanupStuff() Use |v:dying| to detect an abnormal exit. Use |v:exiting| to get the exit code. Not triggered if |v:dying| is 2 or more. *VimResized* VimResized After the Vim window was resized, thus 'lines' and/or 'columns' changed. Not when starting up though. *WinEnter* WinEnter After entering another window. Not done for the first window, when Vim has just started. Useful for setting the window height. If the window is for another buffer, Vim executes the BufEnter autocommands after the WinEnter autocommands. Note: When using ":split fname" the WinEnter event is triggered after the split but before the file "fname" is loaded. *WinLeave* WinLeave Before leaving a window. If the window to be entered next is for a different buffer, Vim executes the BufLeave autocommands before the WinLeave autocommands (but not for ":new"). Not used for ":qa" or ":q" when exiting Vim. *WinNew* WinNew When a new window was created. Not done for the first The file pattern {pat} is tested for a match against the file name in one of two ways: 1. When there is no '/' in the pattern, Vim checks for a match against only the tail part of the file name (without its leading directory path). 2. When there is a '/' in the pattern, Vim checks for a match against. Examples: :autocmd BufRead *.txt set et Set the 'et' option for all text files. :autocmd BufRead /vim/src/*.c set cindent Set the 'cindent' option for C files in the /vim/src directory. :autocmd BufRead /tmp/*.c set ts=5 If you have a link from "/tmp/test.c" to "/home/nobody/vim/src/test.c", and you start editing "/tmp/test.c", this autocommand will match. Note: To match part of a path, but not from the root directory, use a '*' as the first character. Example: :autocmd BufRead */doc/*.txt set tw=78 This autocommand will for example be executed for "/tmp/doc/xx.txt" and "/usr/home/piet/doc/yy.txt". The number of directories does not matter here. The file name that the pattern is matched against is after expanding wildcards. Thus if you issue this command: :e $ROOTDIR/main.$EXT The argument is first expanded to: /usr/root/main.py Before it's matched with the pattern of the autocommand. Careful with this when using events like FileReadCmd, the value of <amatch> may not be what you expect. Environment variables can be used in a pattern: :autocmd BufRead $VIMRUNTIME/doc/*.txt set expandtab And ~ can be used for the home directory (if $HOME is defined): :autocmd BufWritePost ~/.config/nvim/init.vim so <afile> :autocmd BufRead ~archive/* set readonly The environment variable is expanded when the autocommand is defined, not when the autocommand is executed. This is different from the command! 
*file-pattern* The pattern is interpreted like mostly used in file names: * matches any sequence of characters; Unusual: includes path separators ? matches any single character \? matches a '?' . matches a '.' ~ matches a '~' , separates patterns \, matches a ',' { } like \( \) in a |pattern| , inside { }: like \| in a |pattern|||| \} literal } \{ literal { \\\{n,m\} like \{n,m} in a |pattern| \ special meaning like in a |pattern| [ch] matches 'c' or 'h' [^ch] match any character but 'c' and 'h' Note that for all systems the '/' character is used for path separator (even Windows). This was done because the backslash is difficult to use in a pattern and to make the autocommands portable across different systems. It is possible to use |pattern| items, but they may not work as expected, because of the translation done for the above. *autocmd-changes* Matching with the pattern is done when an event is triggered. Changing the buffer name in one of the autocommands, or even deleting the buffer, does not change which autocommands will be executed. Example: au BufEnter *.foo bdel au BufEnter *.foo set modified This will delete the current buffer and then set 'modified' in what has become the current buffer instead. Vim doesn't take into account that "*.foo" doesn't match with that buffer name. It matches "*.foo" with the name of the buffer at the moment the event was triggered.. ============================================================================== 8. Groups *autocmd-groups* Autocommands can be put together in a group. This is useful for removing or executing a group of autocommands. For example, all the autocommands for syntax highlighting are put in the "highlight" group, to be able to execute ":doautoall highlight BufRead" when the GUI starts. When no specific group is selected, Vim uses the default group. The default group does not have a name. You cannot execute the autocommands from the default group separately; you can execute them only by executing autocommands for all groups. Normally, when executing autocommands automatically, Vim uses the autocommands for all groups. The group only matters when executing autocommands with ":doautocmd" or ":doautoall", or when defining or deleting autocommands. The group name can contain any characters except white space. The group name "end" is reserved (also in uppercase). The group name is case sensitive. Note that this is different from the event name! *:aug* *:augroup* :aug[roup] {name} Define the autocmd group name for the following ":autocmd" commands. The name "end" or "END" selects the default group.. To enter autocommands for a specific group, use this method: 1. Select the group with ":augroup {name}". 2. Delete any old autocommands with ":au!". 3. Define the autocommands. 4. Go back to the default group with "augroup END". Example: :augroup uncompress : au! : au BufEnter *.gz %!gunzip :augroup END This prevents having the autocommands defined twice (e.g., after sourcing the vimrc file again). ============================================================================== 9. Executing autocommands *autocmd-execute* Vim can also execute Autocommands non-automatically. This is useful if you have changed autocommands, or when Vim has executed the wrong autocommands (e.g., the file pattern match was wrong). Note that the 'eventignore' option applies here too. Events listed in this option will not cause any commands to be executed. 
*:do* *:doau* *:doautocmd* *E217* :do[autocmd] [<nomodeline>] [group] {event} [fname] Apply the autocommands matching [fname] (default: current file name) for {event} to the current buffer. You can use this when the current file name does not match the right pattern, after changing settings, or to execute autocommands for a certain event. It's possible to use this inside an autocommand too, so you can base the autocommands for one extension on another extension. Example: :au BufEnter *.cpp so ~/.config/nvim/init_cpp.vim :au BufEnter *.cpp doau BufEnter x.c Be careful to avoid endless loops. See |autocmd-nested|. When the [group] argument is not given, Vim executes the autocommands for all groups. When the [group] argument is included, Vim executes only the matching autocommands for that group. Note: if you use an undefined group name, Vim gives you an error message. *>] [group] {event} [fname] Like ":doautocmd", but apply the autocommands to each loaded buffer. Note that [fname] is used to select the autocommands, not the buffers to which they are applied. Careful: Don't use this for autocommands that delete a buffer, change to another buffer or change the contents of a buffer; the result is unpredictable. This command is intended for autocommands that set options, change highlighting, and things like that. ============================================================================== 10. Using autocommands *autocmd-use* For WRITING FILES there are four possible sets of events. Vim uses only one of these sets for a write command: BufWriteCmd BufWritePre BufWritePost writing the whole buffer FilterWritePre FilterWritePost writing to filter temp file FileAppendCmd FileAppendPre FileAppendPost appending to a file FileWriteCmd FileWritePre FileWritePost any other file write When there is a matching "*Cmd" autocommand, it is assumed it will do the writing. No further writing is done and the other events are not triggered. |Cmd-event| Note that the *WritePost commands should undo any changes to the buffer that were caused by the *WritePre commands; otherwise, writing the file will have the side effect of changing the buffer. Before executing the autocommands, the buffer from which the lines are to be written temporarily becomes the current buffer. Unless the autocommands change the current buffer or delete the previously current buffer, the previously current buffer is made the current buffer again. The *WritePre and *AppendPre autocommands must not delete the buffer from which the lines are to be written. The '[ and '] marks have a special position: - Before the *ReadPre event the '[ mark is set to the line just above where the new lines will be inserted. - Before the *ReadPost event the '[ mark is set to the first line that was just read, the '] mark to the last line. - Before executing the *WriteCmd, *WritePre and *AppendPre autocommands the '[ mark is set to the first line that will be written, the '] mark to the last line. Careful: '[ and '] change when using commands that change the buffer. In commands which expect a file name, you can use "<afile>" for the file name that is being read |:<afile>| (you can also use "%" for the current file name). "<abuf>" can be used for the buffer number of the currently effective buffer. This also works for buffers that doesn't have a name. But it doesn't work for files without a buffer (e.g., with ":r file"). *gzip-example* Examples for reading and writing compressed files: :augroup gzip : autocmd! 
	:  autocmd BufReadPre,FileReadPre *.gz set bin
	:  autocmd BufReadPost,FileReadPost *.gz '[,']!gunzip
	:  autocmd BufReadPost,FileReadPost *.gz set nobin
	:  autocmd BufReadPost,FileReadPost *.gz execute ":doautocmd BufReadPost " . expand("%:r")
	:
	:augroup END

The "gzip" group is used to be able to delete any existing autocommands with
":autocmd!", for when the file is sourced twice.  ("<afile>:r" is the file
name without the extension, see |:_%:|)

The commands executed for the BufNewFile, BufRead/BufReadPost, BufWritePost,
FileAppendPost and VimLeave events do not set or reset the changed flag of
the buffer.  When you decompress the buffer with the BufReadPost
autocommands, you can still exit with ":q".  When you use ":undo" in
BufWritePost to undo the changes made by BufWritePre commands, you can still
do ":q" (this also makes "ZZ" work).  If you do want the buffer to be marked
as modified, set the 'modified' option.

To execute Normal mode commands from an autocommand, use the ":normal"
command.  Use with care!  If the Normal mode command is not finished, the
user needs to type characters (e.g., after ":normal m" you need to type a
mark name).

If you want the buffer to be unmodified after changing it, reset the
'modified' option.  This makes it possible to exit the buffer with ":q"
instead of ":q!".

						*autocmd-nested* *E218*
By default, autocommands do not nest.  If you use ":e" or ":w" in an
autocommand, Vim does not execute the BufRead and BufWrite autocommands for
those commands.  If you do want this, use the "nested" flag for those
commands in which you want nesting.  For example:

	:autocmd FileChangedShell *.c nested e!

The nesting is limited to 10 levels to get out of recursive loops.

It's possible to use the ":au" command in an autocommand.  This can be a
self-modifying command!  This can be useful for an autocommand that should
execute only once.

If you want to skip autocommands for one command, use the |:noautocmd|
command modifier or the 'eventignore' option.

Note: When reading a file (with ":read file" or with a filter command) and
the last line in the file does not have an <EOL>, Vim remembers this.  At the
next write (with ":write file" or with a filter command), if the same line is
written again as the last line in a file AND 'binary' is set, Vim does not
supply an <EOL>.  This makes a filter command on the just read lines write
the same file as was read, and makes a write command on just filtered lines
write the same file as was read from the filter.  For example, another way to
write a compressed file:

	:autocmd FileWritePre *.gz set bin|'[,']!gzip
	:autocmd FileWritePost *.gz undo|set nobin

							*autocommand-pattern*
You can specify multiple patterns, separated by commas.  Here are some
examples:

	:autocmd BufRead * set tw=79 nocin ic infercase fo=2croq
	:autocmd BufRead .letter set tw=72 fo=2tcrq
	:autocmd BufEnter .letter set dict=/usr/lib/dict/words
	:autocmd BufLeave .letter set dict=
	:autocmd BufRead,BufNewFile *.c,*.h set tw=0 cin noic
	:autocmd BufEnter *.c,*.h abbr FOR for (i = 0; i < 3; ++i)<CR>{<CR>}<Esc>O
	:autocmd BufLeave *.c,*.h unabbr FOR

For makefiles (makefile, Makefile, imakefile, makefile.unix, etc.):

	:autocmd BufEnter ?akefile* set include=^s\=include
	:autocmd BufLeave ?akefile* set include&

To always start editing C files at the first function:

	:autocmd BufRead *.c,*.h 1;/^{

Without the "1;" above, the search would start from wherever the file was
entered, rather than from the start of the file.
							*skeleton* *template*
To read a skeleton (template) file when opening a new file:

	:autocmd BufNewFile *.c 0r ~/vim/skeleton.c
	:autocmd BufNewFile *.h 0r ~/vim/skeleton.h
	:autocmd BufNewFile *.java 0r ~/vim/skeleton.java

To insert the current date and time in a *.html file when writing it:

	:autocmd BufWritePre,FileWritePre *.html ks|call LastMod()|'s
	:fun LastMod()
	:  if line("$") > 20
	:    let l = 20
	:  else
	:    let l = line("$")
	:  endif
	:  exe "1," . l . "g/Last modified: /s/Last modified: .*/Last modified: " .
	:  \ strftime("%Y %b %d")
	:endfun

You need to have a line "Last modified: <date time>" in the first 20 lines of
the file for this to work.  Vim replaces <date time> (and anything in the same
line after it) with the current date and time.  Explanation:

	ks		mark current position with mark 's'
	call LastMod()	call the LastMod() function to do the work
	's		return the cursor to the old position

The LastMod() function checks if the file is shorter than 20 lines, and then
uses the ":g" command to find lines that contain "Last modified: ".  For
those lines the ":s" command is executed to replace the existing date with
the current one.  The ":execute" command is used to be able to use an
expression for the ":g" and ":s" commands.  The date is obtained with the
strftime() function.  You can change its argument to get another date string.

When entering :autocmd on the command-line, completion of events and command
names may be done (with <Tab>, CTRL-D, etc.) where appropriate.

Vim executes all matching autocommands in the order that you specify them.
It is recommended that your first autocommand be used for all files by using
"*" as the file pattern.  This means that you can define defaults you like
here for any settings, and if there is another matching autocommand it will
override these.  But if there is no other matching autocommand, then at least
your default settings are recovered (if entering this file from another for
which autocommands did match).  Note that "*" will also match files starting
with ".", unlike Unix shells.

							*autocmd-searchpat*
Autocommands do not change the current search patterns.  Vim saves the
current search patterns before executing autocommands then restores them
after the autocommands finish.  This means that autocommands do not affect
the strings highlighted with the 'hlsearch' option.  Within autocommands, you
can still use search patterns normally, e.g., with the "n" command.  If you
want an autocommand to set the search pattern, such that it is used after the
autocommand finishes, use the ":let @/ =" command.  The search-highlighting
cannot be switched off with ":nohlsearch" in an autocommand.  Use the 'h'
flag in the 'shada' option to disable search-highlighting when starting Vim.

							*Cmd-event*
When using one of the "*Cmd" events, the matching autocommands are expected
to do the file reading, writing or sourcing.  This can be used when working
with a special kind of file, for example on a remote system.
CAREFUL: If you use these events in a wrong way, it may have the effect of
making it impossible to read or write the matching files!  Make sure you test
your autocommands properly.  Best is to use a pattern that will never match a
normal file name, for example "ftp://*".

When defining a BufReadCmd it will be difficult for Vim to recover a crashed
editing session.  When recovering from the original file, Vim reads only
those parts of a file that are not found in the swap file.
Since that is not possible with a BufReadCmd, use the |:preserve| command to make sure the original file isn't needed for recovery. You might want to do this only when you expect the file to be modified.
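As a rough illustration (the myproto:// scheme and the s:ReadRemote() helper below are hypothetical, not part of the help text above), such a BufReadCmd handler could be wired up like this:

	" Minimal sketch: a BufReadCmd handler for a made-up myproto:// scheme.
	" s:ReadRemote() is a hypothetical function that fills the buffer itself;
	" running :preserve afterwards makes sure a crashed session can be
	" recovered without needing the original file.
	augroup myremote
	  autocmd!
	  autocmd BufReadCmd myproto://* call s:ReadRemote(expand("<afile>")) | preserve
	augroup END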
|
https://neovim.io/doc/user/autocmd.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Java provides a rich set of built-in operators which can be categorized as follows.
- Arithmetic Operators
- Relational Operators
- Increment and Decrement Operators
- Logical Operators
- Assignment Operators
Arithmetic Operations
Arithmetic operators perform arithmetic calculations on operands, just as they are used in algebra.
In the following table, the value of 'a' is 8 whereas that of 'b' is 4.
class O1{
    public static void main(String[] args){
        int a=10, b=2;
        System.out.println("a+b = " + (a+b));
        System.out.println("a-b = " + (a-b));
        System.out.println("a*b = " + (a*b));
        System.out.println("a/b = " + (a/b));
    }
}
a+b = 12
a-b = 8
a*b = 20
a/b = 5
int a=10, b=2; - As discussed earlier, int a=10; will first allocate space for 'a' in memory and then give it the value 10. The same will be done for 'b', which will be given the value 2.
Here, "a+b = " is written inside (" "). Whatever is written inside " " is not evaluated and will be printed as it is. So, a+b = will not be evaluated and get printed as it is. Then (a+b) after + will be evaluated (i.e. 12) and printed.
So, "a+b = " will combine with the calculated value of a+b i.e. ( 10+2 i.e. 12 ) and printed as a+b = 12
We can also do:
class O2{ public static void main(String[] args){ int a=10, b=2, z; z = a+b; System.out.println("a+b = "+z); } }
Here, z will become a+b i.e. 12. And in 'println', 'a+b = ' is inside " ", so it will be printed as it is ( without evaluation ) and then the value of z i.e. 12 will be printed. So, a+b = 12 will get printed.
When both the numerator and the denominator are integers, the result of the division is also an integer - the part after the decimal point is simply dropped. For example, 3/2 returns 1 whereas 2/3 returns 0.
If at least one of numerator or denominator has a decimal, then we will get the exact decimal value of the answer.
All 3/2.0, 3.0/2 and 3.0/2.0 return 1.5.
class O3{
    public static void main(String[] args){
        System.out.println("3 / 2 = " + (3 / 2));
        System.out.println("3 / 2.0 = " + (3 / 2.0));
        System.out.println("3.0 / 2 = " + (3.0 / 2));
        System.out.println("3.0 / 2.0 = " + (3.0 / 2.0));
    }
}
3 / 2 = 1
3 / 2.0 = 1.5
3.0 / 2 = 1.5
3.0 / 2.0 = 1.5
As we have now seen, 3/2 (both int) gives 1, whereas 3.0/2, 3/2.0 and 3.0/2.0 (at least one operand is double) give 1.5.
So now think: if we are using int for some variable and at some point in our code we need a double, then we can easily achieve it with type-casting, as discussed in the previous chapter. An example of this is:
class O4{
    public static void main(String[] args){
        int x = 10, y = 3;
        double z = 4.5;
        System.out.println(x/y);
        System.out.println((int)z);
        System.out.println((double)x/y);
    }
}
3
4
3.3333333333333335
As discussed earlier, we have type-cast z to int ( (int)z ), and x to double ( (double)x ) so that the division produces a double value.
Relational Operators
Following are the relational operators in Java, which return true if the relationship is true and false if the relationship is false.
In the following table, assume the value of 'a' to be 8 and that of 'b' to be 4.
Let's look at an example to see the use of these.
class O5{
    public static void main(String[] args){
        int a=10, b=25;
        System.out.println("a == b = " + (a == b) );
        System.out.println("a != b = " + (a != b) );
        System.out.println("a > b = " + (a > b) );
        System.out.println("a < b = " + (a < b) );
        System.out.println("b >= a = " + (b >= a) );
        System.out.println("b <= a = " + (b <= a) );
    }
}
a == b = false
a != b = true
a > b = false
a < b = true
b >= a = true
b <= a = false
This example is based on the table above. Since a is not equal to b, a == b gave us false and a != b (a not equal to b) gave us true. Similarly, since a is smaller than b, a > b gave us false and a < b gave us true. The same goes for >= (greater than or equal to) and <= (less than or equal to).
Difference between = and ==
= and == perform different operations. = is the assignment operator while == is the equality operator.
= assigns values from its right side operands to its left side operands whereas == compares values.
By writing x = 5, we assign a value of 5 to x, whereas by writing x == 5, we check if the value of x is 5 or not.
So, == gives us true if both operands are equal and false if they are not.
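A small extra example (the class name is arbitrary) showing the two side by side:

class AssignVsEquals{
    public static void main(String[] args){
        int x = 5;                    // = assigns the value 5 to x
        System.out.println(x == 5);   // == compares, prints true
        System.out.println(x == 7);   // prints false
        x = 7;                        // = again: x now holds 7
        System.out.println(x == 7);   // prints true
    }
}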
Logical Operators
In Java programming, if we are writing A and B, then the expression is true only if both A and B are true. Whereas, if we are writing A or B, then the expression is true if either A or B or both are true.
A and B - Both A and B.
A or B - Either A or B or both.
Symbol for AND is && while that of OR is ||.
Let the value of 'a' be 4 and that of 'b' be 0.
In Logical AND (&&) operator, if any one of the expressions is false, the condition becomes false. Therefore, for the condition to become true, both the expressions must be true.
For example, (3>2)&&(5>4) returns true because both the expressions are true. Conditions (3>2)&&(5<4), (3<2)&&(5>4) and (3<2)&&(5<4) are false because one of the expressions is false in each case.
For the Logical OR (||) operator, the condition is false only when both the expressions are false. If any one expression is true, the condition returns true. Therefore (3<2)||(5<4) returns false, whereas (3>2)||(5<4), (3<2)||(5>4) and (3>2)||(5>4) return true.
Logical Not (!) operator converts true to false and vice versa. For example, !(4<7) is true because the expression (4<7) is false and the operator ! makes it true.
class O6{
    public static void main(String[] args){
        int a=10, b=0;
        System.out.println("!(a>b) = " + !(a>b) );
        System.out.println("(a>b) && (b==0) = " + ((a>b) && (b==0)) );
        System.out.println("(a>b) && !(a == 10) = " + ((a>b) && !(a==10)) );
    }
}
!(a>b) = false
(a>b) && (b==0) = true
(a>b) && !(a == 10) = false
In the above example, since the value of 'a>b' is true, !(a>b) makes it false.
(a>b) && (b==0) - b==0 is true and a > b is also true (since a is greater than b). Thus, (a>b) && (b==0) is true && true i.e. true, as both the operands are true.
(a>b) && !(a == 10) - a==10 is true, so !(a==10) is false; true && false gives false, as one of the operands is false.
Assignment Operators
Java provides the following assignment operators which assign values from its right side operands to its left side operands.
For example, a = 5; assigns a value of '5' to the variable 'a'.
= operator makes the left operand equal to the right one. This means that x=y will make x equal to y and not y equal to x.
Before going further, try this example:
class O7{
    public static void main(String[] args){
        int a=10;
        System.out.println(a);
        a = a+2;
        System.out.println(a);
        a = a*2;
        System.out.println(a);
        a = a-2;
        System.out.println(a);
    }
}
10
12
24
22
a = 10; - In this line, a is 10. So, 10 is printed from the first 'println'.
a = a+2 - Remember, we discussed that = makes the left operand equal to the right. In the right side, we have a+2. So, a+2 will be calculated i.e. 12 thus making the expression equivalent to a = 12;. So, now a is 12. Therefore, 12 is printed from the second 'println'. And now a is 12.
After that a = a*2; - In the right side, we have a*2. So, it will be evaluated as 12*2 i.e. 24, thus making the expression a = 24;. So, a is 24 now and 24 will be printed from the third 'println'. Finally, a = a-2; evaluates to a = 22;, so 22 is printed from the fourth 'println'.
Note that 4 = c is not allowed - it would try to make 4 equal to c, but the left side of = must be a variable, so it is not possible.
Suppose the value of an integer variable 'a' is 8. When we write a += 2, this is equivalent to a = a + 2, thus adding 2 to the value of 'a' and making the value of 'a' equal to 10.
Similarly, a -= 2 equals the expression a = a - 2, thus subtracting 2 from the value of 'a' and then assigning that value to 'a'
Similarly, we can perform other operations of multiplication and division.
class O8{
    public static void main(String[] args){
        int a=10;
        System.out.println("a = 5 " + "Value of a = " + a);
        System.out.println("a += 5 " + "Value of a = " + (a+=5) );
        System.out.println("a -= 5 " + "Value of a = " + (a-=5) );
        System.out.println("a *= 5 " + "Value of a = " + (a*=5) );
        System.out.println("a /= 5 " + "Value of a = " + (a/=5) );
        System.out.println("a %= 5 " + "Value of a = " + (a%=5) );
    }
}
a = 5 Value of a = 10
a += 5 Value of a = 15
a -= 5 Value of a = 10
a *= 5 Value of a = 50
a /= 5 Value of a = 10
a %= 5 Value of a = 0
Increment and Decrement Operators
++ increases the value of a variable by 1 and -- decreases it by 1. Suppose the value of 'a' is 5, then a++ and ++a change the value of 'a' to 6. Similarly, a-- and --a change it to 4. The difference is that a++ uses the current value of 'a' first and then increases it (post-increment), whereas ++a increases 'a' first and then uses the new value (pre-increment), as the following example shows.
class O9{
    public static void main(String[] args){
        int a=10, b=10, c=10, d=10;
        System.out.println("value of a++ = " + (a++));
        System.out.println("a = " + a);
        System.out.println("value of ++b = " + (++b));
        System.out.println("value of c-- = " + (c--));
        System.out.println("value of --d = " + (--d));
    }
}
value of a++ = 10
a = 11
value of ++b = 11
value of c-- = 10
value of --d = 9
As we just saw, in the first 'println' 10 is printed and then the value of 'a' is increased to 11. In the second 'println', 11 gets printed. But with ++b, b first gets increased to 11 and then printed on the screen.
Precedence of Operators
In Maths, you might have learned the BODMAS rule, but that rule is not applied here. If we have written more than one operation in one line, then which operation is done first is governed by the following rules: expressions inside brackets '()' are evaluated first, and after that this table is followed (the operator at the top has higher precedence and that at the bottom has the least precedence):
Consider an example.
n = 3 * 4 + 5
The priority order of the multiplication operator ( * ) is greater than that of the addition operator ( + ). So, first 3 and 4 will be multiplied and then 5 will be added to their product. Thus the value of n will be 17.
If two operators are of same priority, then evaluation starts from left or right as stated in the table.
e.g.-
k = 8/4+3-5*6+8
Solving, '*' and '/' first from left to right.
k = 2+3-30+8
Now solving, '+' and '-' from left to right
k = -17
Now solving '=' from right to left
k is made -17.
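A quick example (class name is arbitrary) that lets Java confirm the two hand-worked results above:

class PrecedenceDemo{
    public static void main(String[] args){
        int n = 3 * 4 + 5;              // * before +, so 12 + 5
        int k = 8 / 4 + 3 - 5 * 6 + 8;  // / and * first: 2 + 3 - 30 + 8
        System.out.println(n);          // prints 17
        System.out.println(k);          // prints -17
    }
}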
Let's import maths
What if you want to take out the sine, cos or log of a number ?
Java allows us to perform such mathematical operations.
In Java, we can perform such mathematical operations with the help of Math class. Java has a number of predefined classes which we can use about which we will learn later.
Predefined classes are organized in the form of packages, and the Math class comes under the java.lang package. Strictly speaking, java.lang is imported automatically, so Math can be used without any import statement, but writing the import explicitly does no harm and makes the dependency clear.
We use the import keyword to import a class or a package into our program. We can either import the java.lang.Math class or the entire java.lang package by adding one of the following lines at the beginning of our program, before the class.
import java.lang.*;
Here '.*' imports all the classes of java.lang package.
import java.lang.Math;
java.lang.Math will only import Math from lang package.
After importing the Math class, we can now enjoy the different mathematical functions in Java. These functions are called methods of Math class.
Some of those functions are listed below.
Now let's have a look at some of these functions with their examples.
Math.abs()
It returns the absolute value of the number passed to it. Absolute value of a number is the magnitude of the number with a positive sign. For example, the absolute value of 2 is 2 whereas the absolute value of -2 is also 2.
To take out the absolute value of -2.3, we have to write the following code.
import java.lang.Math; class Test { public static void main(String[] args) { System.out.println(Math.abs(-2.3)); } }
Math.ceil()
It returns the floating value equal to the smallest integer that is greater than or equal to the number passed to it, i.e. it rounds the number up. For example, if the number 3.45 is passed to the function, it will return 4.0.
import java.lang.Math; class Test { public static void main(String[] args) { System.out.println(Math.ceil(3.4)); } }
Math.pow()
It takes two numbers and returns the value of the first number raised to the power equal to the second parameter.
For example, the value of the number 3 raised to the power 2 is equal to multiplying three two times which is equal to 9 (= 3*3).
import java.lang.Math; class Test { public static void main(String[] args) { System.out.println(Math.pow(5,3)); } }
Java will be much more fun when you will learn more. Making of android apps is one of the most interesting applications of Java.
Talent is good, Practice is better, and Passion is Best
-Frank Lloyd Wright
|
https://www.codesdope.com/java-operators/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I have a prewritten java program to calculate the total of monthly bills. It should execute until the user enters "done" instead of bill type. After the user enters a bill type, he should be asked to enter the bill amount. I don't have a clue where to start. The prewritten program is as follows....
// MonthlyBills.java - This program calculates the total of your monthly bills.
// Input: Bill type and bill amount.
// Output: Prints the total of your monthly bills.
import javax.swing.JOptionPane;
public class MonthlyBills
{
public static void main(String args[])
{
String billType; // Description of bill.
String stringAmount; // String version of bill amount.
double billAmount; // Amount of the bill.
double sum = 0; // Accumulates sum of bills.
/* You should set up your loop to execute as long as the user
has not entered the word done. You can use these input and
output statements anywhere in your program. They are not in
any particular order.
*/
// This input statement asks the user to enter a bill type or the word done.
billType = JOptionPane.showInputDialog("Enter bill type or the word done to quit.");
// This input statement asks your user to enter a bill amount.
stringAmount = JOptionPane.showInputDialog("Enter amount of bill");
// This statement converts the string version of the amount to a double.
billAmount = Double.parseDouble(stringAmount);
// This statement displays the sum of monthly bills.
System.out.println("Sum of monthly bills is $: " + sum);
// This statement causes the program to exit.
System.exit(0);
} // End of main() method.
} // End of MonthlyBills class.
Can anyone help complete this??
Thanks
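One possible arrangement of the provided statements (a sketch, not an official solution - it simply reuses the variables and dialogs from the skeleton) is a sentinel-controlled loop like this:

// Sketch: read bill types until the user enters "done", accumulating the sum.
billType = JOptionPane.showInputDialog("Enter bill type or the word done to quit.");
while (!billType.equalsIgnoreCase("done"))
{
    stringAmount = JOptionPane.showInputDialog("Enter amount of bill");
    billAmount = Double.parseDouble(stringAmount);
    sum += billAmount; // accumulate the running total
    billType = JOptionPane.showInputDialog("Enter bill type or the word done to quit.");
}
System.out.println("Sum of monthly bills is $: " + sum);
System.exit(0);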
|
http://forums.devx.com/showthread.php?148769-accumulating-totals-in-a-loop&p=443101&mode=threaded
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
3-LED Backlight: Xamarin and Arduino With HC05
Introduction: 3-LED Backlight: Xamarin and Arduino With HC05
Hello dear community,
Today I'm gonna show you how to make a fully Android-app-controllable LED backlight for your TV for under $10.
In this Instructable i will show you:
- To use my Xamarin Bluetooth-APP(Open Source C#-Code)
- Programming your Arduino
- And let them communicate to each other
I am using an Arduino Uno R3 (Sainsmart UNO) for the HC-05 and Visual Studio Ultimate 2013 (Xamarin plug-in) for programming our Bluetooth software. I looked for a long time on the internet to find such a tutorial, but there were only a few good ones, and they described only the Xamarin side or only the Arduino side well. Especially the compatibility of my smartphone (Android 4.1.2) caused huge problems, but I found a solution which worked for me. And it runs more stably on Android 5.0+ :). OK, here we go:
You will need:
- Arduino(Mega, uno, etc.)
- HC-05(HC-06 is also possible)
- Xamarin Plugin or IDE. It's free for 30 days of trial ()
Actually, this Instructable is not focused on C# programming itself, but the code should be easy to understand.
If not post a comment, or write a message :)
Step 1: The Xamarin - Code
Okay first of all we have to write our Code:
Make your own Design or use mine(recommended) to have a GUI which the user can use easily.
"Disconnect" causes an event which sends the value 187 to the Arduino which restarts the HC-05(required).
The Buttons LED1, LED2, LED3 causes the event to send a 1,2 or 3 to the Arduino.
The Brightness seekBar sends a value between 10 and 168.
Download below !
_____
Create a new Xamarin-Project and call it e.g. "BluetoothApp".
When you have done this, please change the namespace (Options > Android options) to Backlight, or change the namespace in the MainActivity.cs you already integrated to match yours. After this you have to pair your device via the Bluetooth settings.
We want to create this App as easy as possible, so we only use the static name of our HC-05/-06 in my case the standard name of this module "HC-05" to connect and communicate.
The only way I found to get this to work is to create a "BluetoothDevice" by using the name "HC-05" of your Bluetooth module. When I try to create a BluetoothSocket from a BluetoothDevice which was not obtained via the name, it crashes. I don't know why, but it's an adequate solution.
In my code you can change the name to your devices name in the class BluetoothConnection. This step is required to get the App running!
this Line:
public void getDevice() { this.thisDevice = (from bd in this.thisAdapter.BondedDevices where bd.Name == "HC-05" select bd).FirstOrDefault(); }
When you have done this, try to clear away any remaining errors, and then we come to the interesting part :)
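For orientation, the connect step boils down to something like the following sketch (the UUID is the standard Serial Port Profile UUID; _socket and MyConnection mirror the names used in the app, but treat this as an illustration rather than the exact project code):

// Rough sketch of the connect step: assumes MyConnection.getDevice() has
// already found the paired "HC-05" device as described above.
BluetoothSocket _socket = MyConnection.thisDevice.CreateRfcommSocketToServiceRecord(
    UUID.FromString("00001101-0000-1000-8000-00805F9B34FB")); // standard SPP UUID
_socket.Connect();
// send a single command byte, e.g. switch LED 1:
_socket.OutputStream.Write(new byte[] { 1 }, 0, 1);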
Step 2: Arduino Code
In this .ino file you find the program I am running on my Sainsmart Uno. I used numbers to encode the information I want to transmit, e.g. '187' for killing (restarting) the module, '1' for LED one etc. There are also values between 15 and 168 for the brightness given by the app (slide bar). The standard brightness is 80.
I used following Pin-assignments:
13 - Pinout TTL 5V for HC05/06-Supplyvoltage(and also for restarting)
3,5 and 6 - PWM output for LED's
Tx - to HC05/06 Rx
Rx - to HC05/06 Tx ( a bit confusing, but very important! )
GND - TO HC05/06 GND
In some documentations I found a voltage divider between Tx and Rx(and of course GND) to set the Voltage to 3.3V, but my HC05/06 also works great with 5V potential. So just connect and go on! :)
________
I also tried this with an ATtiny45, but that is a bit more complicated, because this IC only has one PWM port and you need three for three LEDs. That's why you have to program a timer.
Another way is to use a bipolar transistor as an amplifier.
Write me for more information!
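As a rough sketch of the receiver side of the protocol described above (pins 3/5/6, the 187 restart value and the 15-168 brightness range come from the text; everything else is an assumption, not the attached .ino):

// Sketch only: one command byte per message from the app.
int leds[] = {3, 5, 6};     // PWM pins driving the three LEDs
int brightness = 80;        // default brightness mentioned in the article

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);      // pin 13 supplies the HC-05
  digitalWrite(13, HIGH);
  for (int i = 0; i < 3; i++) pinMode(leds[i], OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int cmd = Serial.read();
    if (cmd == 187) {                      // "disconnect": power-cycle the HC-05
      digitalWrite(13, LOW); delay(500); digitalWrite(13, HIGH);
    } else if (cmd >= 1 && cmd <= 3) {     // LED1..LED3 button pressed
      analogWrite(leds[cmd - 1], brightness);
    } else if (cmd >= 15 && cmd <= 168) {  // brightness from the seek bar
      brightness = cmd;
    }
  }
}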
Step 3: Relax and Be Proud of Yourself
Hopefully you got it soon. If there are some Questions, just write a comment or a private message. I will reply as soon as possible.
If you enjoyed this instructable please leave a comment and follow me :)
See you!
Why is it that everytime I try to Connect, the app closes
"The application has stopped."
Thank you so much. I send data to my special bluetooth device which is not Arduino+HC05+BTModul. After I send data, I want to read that coming data from my special bluetooth device by changing Baud Rate. How Can I do this?
Thank you SO MUCH! You helped out a lot with your code, I didn't know the UUID so just one adjustment did the job. By the way, how did you even find out the UUID? ^^
Hi,
Thanks for you great post!
I am trying to get it running but is get the error "CS0103 - The name 'Rasource' does not exist in the current context".
I do not understand why this is happeping, can you help?
Mark
Sorry for the typo "Rasource" must be "Resource". The first occurence is on line 29 see below:
SetContentView(Resource.Layout.Main);
Thanks for the tutorial, it's really simple and useful! But now I need something to receive data from arduino, can you recommend me something, please?
Yes, the HC-05 / 06 is a Transceiver. That means, that you can also transmit data. Google something like "HC-05 Arduino transmitting"
regards
I have led in 5,6,9 pins but if i write in code 5,6,9 program don't work i connect to the bluetooth modul hc05 and next i click the buttons but bluetooth module don't turn on the leds
Does your HC-05 give any Signals? Try to measure some stuff with a simple Serialmonitor-Loop.
regards
I get a NullReferenceExection while debugging the App on my Nexus5 at the line _socket = MyConnection.thisDevice.Create[...]
Any help to get this work for me would be realy great!
Thanks a Lot!
Sorry for the late reply. It seems like there is an error while creating the Device. This could happen, when your device was not found in Bluetooth Discovery. is Your Bluetooth enabled?
I also used HC-06
|
http://www.instructables.com/id/3-LED-Backlight-Xamarin-and-Arduino-With-HC05/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Hi,
I have a folder on our main network share (server 2008 OS) that permissions seem to not work. EG, let's say I'm in this folder:
\\server\share\\folder1\folder2\
There are 2 folders in here. a folder called mydata1 and mydata2.
so mydata1 inherits permissions and works fine. mydata2, however, does not inherit permissions.
Only managers are to have access to mydata2. I've added my user account to the managers group and managers have FULL CONTROL access to the folder. Both one of the managers and I are unable to access the folder, and notice it disappears intermittently. The error we get is:
"\\server\share\folder1\folder2\mydata2."
That's all fine and dandy except I'm perfectly able to browse the data on the server itself as domain admin. The managers group and domain admin have the same permissions, except domain admin is the owner. What gives?
If I add my account to the security settings with full access, I get access. But members of the managers group do not.
9 Replies
Mar 19, 2014 at 4:55 UTC
add a entry for the server into your host file. I know its old and archaic but I had this problem a few weeks ago and this solved it for me.
Mar 19, 2014 at 5:08 UTC
OK that sounds like a terrible solution, as I'd need to do that on many computers and it's a management hassle.
That said, accessing the share with an IP address works. Why? How can I overcome this problem?
Also, now that I've removed myself from the permissions to access the folder, if I access it through IP address I'm still able to get in despite security settings. I have tried issuing ipconfig /flushdns as well to no avail.
Mar 19, 2014 at 5:11 UTC
After removing myself from the group then going back to the share with the IP address, I can no longer access the folder.
Mar 19, 2014 at 5:12 UTC
Do the managers have implicit permissions on the parent folders? You said the rest was inherited except for that folder, but that doesn't necessarily mean they have at least read/traverse permissions. (if they do then I suppose you can disregard this!) Oh, and is access based enumeration on?
Mar 19, 2014 at 5:27 UTC
Everyone that is meant to access this folder definitely has access to the path all the way up, but they get roadblocked when they reach this folder. So they do have implicit permissions. I assume access-based enumeration is enabled because I can see the folder in Explorer but I get the error in my original post when I try to access it.
Mar 19, 2014 at 5:47 UTC
Windows seems to think the folder is shared. There are 2 folders where the problematic folder arrives, one has more access for non-managers. If I create a new folder in \\server\share\\folder1\folder2\, I can drag and drop the accessible folder into the new folder. I get an error when I try to drag the other, however. It claims the folder is shared with other people, but it's not.
Mar 19, 2014 at 5:51 UTC
Does it have a namespace or DFS? In computer management does anyone have an active session to the folder?
Mar 19, 2014 at 8:56 UTC.
Jun 10, 2016 at 1:36 UTC
I know it's an old thread, but THANK YOU Stephen. I truly hadn't figured this one out, and was running out of time to troubleshoot. I even wiresharked, but the traffic was all there.
My issue: one subfolder in a share was dropping about 35% of the time, but only for one client (also Server 2012 O/S). It couldn't be reproduced from anywhere else, and when the permissions of the share folder were changed to far too much privilege, the problem went away. It wasn't happening for any of the other subfolders at the same level, or any other folder that we could tell. But it wasn't a permissions problem, as it worked ~65% of the time! My percentages seem too exact? It's because I made a script to test what was going on, especially from multiple locations.
I followed Stephen's advice, although I'm using Server 2012. I unshared the folder, put the exact same share permissions right back on it, and haven't had a hiccup since.
Here is that script, not that it's anything special or even purposeful for hardly anything. It was very quick, dirty, and unpolished versus the scripts I need to schedule and use every day. Didn't even comment it..
Clear-Host
$i = 0
$f = 1
$t = 0
$tot = 0
$pPath = "\\Server\Share\Subfolder"
Function TestMyPath { Test-Path -Path $pPath }
While ($i -eq 0) {
    $pTime = Get-Date -Format T
    If (TestMyPath -eq "True") {
        #Write-Host = "True at $pTime for t = $t with total runs $tot"
        $t++
    }
    Else {
        Write-Host = "False at $pTime for f = $f with total runs $tot"
        $f++
    }
    $tot++
    Start-Sleep -Seconds 1
}
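For anyone who wants to script the unshare/reshare fix itself on Server 2012, something along these lines should work (the share name and group are placeholders, and it is worth noting the output of Get-SmbShareAccess first so the exact permissions can be put back):

# Sketch only: record the share, drop it, and recreate it with the same path.
$share = Get-SmbShare -Name "MyShare"
Get-SmbShareAccess -Name "MyShare" | Format-Table   # note these permissions down first
Remove-SmbShare -Name "MyShare" -Force
New-SmbShare -Name "MyShare" -Path $share.Path -FullAccess "DOMAIN\Managers"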
|
https://community.spiceworks.com/topic/459965-one-folder-in-windows-share-randomly-disappears-permissions-don-t-work
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
27 August 2012 06:17 [Source: ICIS news]
SINGAPORE (ICIS)--A vehicle collision between a passenger bus and a methanol tank truck killed 36 people early Sunday morning, China News Agency reported on Monday.
The accident occurred at 2:00 local time.
The truck was enroute to deliver about 35 tonnes of methanol from Yulin Energy and Chemical
|
http://www.icis.com/Articles/2012/08/27/9590097/methanol-truck-bus-collision-kills-36-in-shaanxi-china-report.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Floats are not exactly preserved
Reported by Psycopg website | January 19th, 2013 @ 08:16 PM
Submitted by: Div Shekhar
Python float roundtrip to & from double precision seems to be losing precision.
Occurring on both Mac & Linux:
- OS X 10.8.2, Postgres.app 9.2.2.0, psycopg2 2.4.6
- Ubuntu 12.04 LTS, stock postgres (9.1.7), stock psycopg2 (2.4.5)
BTW, MySQL for Python seems to have the same so I adapted the test program from their bug report.
bug
------ floatbug.py:
import math
import psycopg2
import struct
conn = psycopg2.connect(host="localhost", database="bindertestdb", user="bindertest", password="binderpassword")
before = 3.14159265358979323846264338327950288
cursor = conn.cursor()
cursor.execute('DROP TABLE IF EXISTS test')
cursor.execute('CREATE TABLE test (a DOUBLE PRECISION)')
cursor.execute('INSERT INTO test VALUES (%s)',(before,))
cursor.execute('SELECT a FROM test')
after = cursor.fetchall()[0][0]
print after == before
print "%.36g" % after # bug -> 3.14159265358979000737349451810587198
print "before: %.20f" % before
print "before [m] %.20f" % float("%.15g" % before)
print "after: %.20f" % after
print "before : ",bin(struct.unpack('Q', struct.pack('d', before))[0])
print "before [m] : ",bin(struct.unpack('Q', struct.pack('d', float("%.15g" % before)))[0])
print "after : ",bin(struct.unpack('Q', struct.pack('d', after))[0])
---- Output:
False
3.14159265358979000737349451810587198
before: 3.14159265358979311600
before [m] 3.14159265358979000737
after: 3.14159265358979000737
before : 0b100000000001001001000011111101101010100010001000010110100011000
before [m] : 0b100000000001001001000011111101101010100010001000010110100010001
after : 0b100000000001001001000011111101101010100010001000010110100010001
Daniele Varrazzo January 21st, 2013 @ 01:02 PM
- State changed from new to invalid
Python doesn't store all these digits: python's float is only 64 bits.
before = 3.14159265358979323846264338327950288
>>> print before
3.141592653589793
Your own test shows it: printing more than 15 decimal digits only shows noise.
>>> "%.36g" % 3.14159265358979323846264338327950288
'3.14159265358979311599796346854418516'  # aligned for comparison
Even if Python passed all the digits you want (which it cannot as they are stored nowhere) Postgres does its own clipping to the 64 float:
piro=> select '3.14159265358979323846264338327950288'::double precision;
      float8
------------------
 3.14159265358979
(1 row)
and that's what is returned to psycopg.
If you want larger precision you will have to use the decimal data type both in Postgres and in Python.
Div Shekhar January 21st, 2013 @ 01:42 PM
I understand the significant digits limit, but I only care that 'before == after' is False
The test shows the 64-bit float binary value has changed:
before : ...11000
after : ...10001
Python float and postgresql double precision are both 64-bit floating point so I would expect the value to roundtrip correctly. Am I missing something here?
Daniele Varrazzo January 21st, 2013 @ 02:49 PM
As shown above, Python and Postgres parse floats in different ways:
piro@risotto:~$ python
Python 2.7.2+ (default, Jul 20 2012, 22:15:08)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print repr(3.14159265358979323846264338327950288)
3.141592653589793

piro@risotto:~$ psql
psql (9.1.7)
Type "help" for help.
piro=> select '3.14159265358979323846264338327950288'::float8;
      float8
------------------
 3.14159265358979
(1 row)
so, the two models, Postgres and Python, just don't match. They may match using binary communication protocol but I'm not sure about that, and psycopg doesn't support it yet anyway.
More in general, asking for exact match between floating point numbers is asking for troubles. Any robust application using floating point number should check that two numbers are close enough (i.e. abs(B-A) < epsilon), never equal (B == A). This is basic scientific computing.
See also relevant Postgres docs at:
""" The data types real and double precision are inexact, variable-precision numeric types. [...]. [...]
Comparing two floating-point values for equality might not always work as expected. """
Python has similar notes at, and a reference to an exhaustive article.
So, just don't expect an exact roundtrip as there's not an exact representation. If you need exact precision in storage, you must use decimal in the database. This would roundtrip as expected:
In [1]: import psycopg2
In [2]: cnn = psycopg2.connect('')
In [3]: cur = cnn.cursor()
In [4]: before = 3.14159265358979323846264338327950288
In [5]: before
Out[5]: 3.141592653589793
In [11]: cur.execute("select %s::decimal", [before,])
In [12]: float(cur.fetchone()[0])
3.141592653589793
you can use this recipe from the FAQ to get Python float from Postgres decimals:.
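For reference, the FAQ recipe referred to is essentially a custom typecaster along these lines (a sketch):

# Cast PostgreSQL NUMERIC/DECIMAL values to Python float when fetching.
import psycopg2.extensions

DEC2FLOAT = psycopg2.extensions.new_type(
    psycopg2.extensions.DECIMAL.values,
    'DEC2FLOAT',
    lambda value, cur: float(value) if value is not None else None)
psycopg2.extensions.register_type(DEC2FLOAT)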
Div Shekhar January 28th, 2013 @ 12:36 PM
Agreed.
Thanks for the detailed response, and - yes - I should not be doing exact compares on float.
BTW, I added a second roundtrip to the returned value and the float does NOT change again so there's no worry that the value will keep drifting.
---- add to end of the test:
cursor.execute('UPDATE test SET a=%s',(after,))
cursor.execute('SELECT a FROM test')
after2 = cursor.fetchall()[0][0]
print after2 == after # True!.
|
https://psycopg.lighthouseapp.com/projects/62710/tickets/145
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
public class NeverSkipItemSkipPolicy extends java.lang.Object implements SkipPolicy

A SkipPolicy implementation that always returns false, indicating that an item should not be skipped.

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public NeverSkipItemSkipPolicy()

public boolean shouldSkip(java.lang.Throwable t, int skipCount)

Description copied from interface SkipPolicy: clients may use skipCount < 0 to probe for exception types that are skippable, so implementations should be able to handle gracefully the case where skipCount < 0. Implementations should avoid throwing any undeclared exceptions.

Specified by:
shouldSkip in interface SkipPolicy

Parameters:
t - exception encountered while reading
skipCount - currently running count of skips
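As a usage sketch (the builder, reader and writer names below are assumed rather than taken from this page), the policy is typically plugged into a fault-tolerant chunk step:

// Sketch: explicitly wiring NeverSkipItemSkipPolicy into a step.
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.step.skip.NeverSkipItemSkipPolicy;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;

public class NeverSkipStepConfig {
    Step importStep(StepBuilderFactory steps, ItemReader<String> reader, ItemWriter<String> writer) {
        return steps.get("importStep")
                .<String, String>chunk(100)
                .reader(reader)
                .writer(writer)
                .faultTolerant()
                .skipPolicy(new NeverSkipItemSkipPolicy())  // never skip any item
                .build();
    }
}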
|
http://docs.spring.io/spring-batch/apidocs/org/springframework/batch/core/step/skip/NeverSkipItemSkipPolicy.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
The
There are expectations that this decision by Uralkali could force Canpotex to follow a course and accept that the marketplace has dramatically changed, resulting in a lower price base and reduction in profit margins for the near term.
($1 = €0.75)
|
http://www.icis.com/resources/news/2013/07/30/9692458/potash-markets-producers-react-to-uralkali-export-withdrawal/
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
16 March 2007 06:36 [Source: ICIS news]
SINGAPORE (ICIS news)--Formosa Petrochemical Corp (FPC) will not increase its ethylene dichloride (EDC) capacity after the startup of the new upstream No 3 cracker in Mailiao and increase in available feedstock, a company source said on Friday.
Production at FPC’s 1m tonne/year EDC plant in Mailiao has not been slated for expansion even after the startup of the new No 3 cracker and the resulting increase in available feedstock.
The 1.2m tonne/year cracker project was earlier scheduled to start up at end-2006, but was delayed to the second quarter of this year due to a labour shortage and problems with securing construction materials.
If the new Formosa No 3 cracker results in an increase in EDC capacity, that would help ease the regional shortfall now being addressed by imports of deep-sea cargoes from south America, the US and the Middle East.
“As our caustic soda production requires stable output, there won’t be any EDC capacity increase,” said the company source.
Downstream projects which will come up in line with the new cracker include a 700,000 tonne/year styrene monomer (SM) unit at Formosa Chemicals and Fibre Corp (FCFC) and Nan Ya Plastics’ monoethylene glycol (MEG) and bisphenol-A (BPA)
|
http://www.icis.com/Articles/2007/03/16/9014017/formosa-keeps-edc-capacity-despite-new-cracker.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Preface
=======
[RT] are Ramdom Thoughts. This is a tradition in the Cocoon.
(preface by Sam Ruby :)
Context
=======
I am looking at the experimental merlin container
(avalon.apache.org/sandbox/merlin) in development @ avalon, in
particular comparing it to phoenix and fortress (two other containers @
avalon). I am looking at the decomposition/assembly semantics at the moment.
from the merlin docs:
Blocks enable
the separation of components as an implementation
solution from services that are established by those
components.
That means zip to me, however, I have previously understood from Steve
that the merlin notion of a block is similar to the cocoon notion of a
block. Links to talks about cocoon blocks are at. I've just read Stefano's
RT on that (on that wiki as well), and what strikes me is the *huge*
resamblance to phoenix .sar and .bar files.
This kinda provoked thoughts, since I think we really need less
semantics rather than more.
Please send replies to dev@avalon.
Random Comments on Stefano's RT
===============================
Cocoon specific item: a sitemap. I don't know exactly what the sitemap
does within cocoon, but IIUC it basically ties avalon-style components
to various parts and tasks within a web application. It is sort of an
xml description of the process-based MVC architecture cocoon implies.
The whole concept of a sitemap is pretty much specific to a certain
style of user interface application, and I believe in the current cocoon
case pretty specific to web-based user interfaces. Not going to talk
about it further.
An interesting comment from Stefano is that the servlet spec implies
monolithic applications by design, since it stimulates seperation of
wars. This is quite interesting because phoenix seperates its sar files
(Server Application aRchive) more rigorously than the average servlet
container its wars, yet the applications built on top of it (that I
know) are rarely monolithic.
Rather, they are split into logical units (components, identified by a
provided work interface and a passive nature), tied together using some
kind of mechanism which is outside of the scope of phoenix. Common
setups include making use of a JMX or JNDI registry or having the apps
talk via (Alt)RMI.
Maybe the average phoenix user understands more about smart software
architecture than the average servlet coder, or maybe it is basically an
awareness issue where users follow existing practices. I doubt it would
be very difficult to write a JNDI servlet which would serve as the
central registry for a multitude of (cocoon) servlets. It is just not
how webapps are built, and not what existing patterns and tools focus on.
Instance sharing above and beyond library sharing
-------------------------------------------------
Stefano also points out you almost always have to install multiple
copies of jars: there is no way to install a single jar and use it
multiple times (java has no functionality for .Net shared assemblies, so
to speak). This is not strictly always true (most servlet engines
provide a /common/lib), but it is the most common case for webapps to
provide everything.
What he doesn't explicitly state is that besides sharing of common jars,
you want to share common instances, or at least get your instances from
a common pool. This is basically what he dubs "component-oriented
deployment", IIUC:
---------------------------------------------------
| Inside the running application appserver |
---------------------------------------------------
| ---------------------- |
| | Processor Pool | |
| | |<-----------------\ |
| | | | |
| ---------------------- | |
| ^ get [ProcessorInstance] | |
| | get [ProcessorInstance] | |
| | --------------- |
| --------------- | Application | |
| | Application | --------------- |
| --------------- |
---------------------------------------------------
in other words, there are various resources multiple applications might
share, just like in daily life you often have multiple web applications
talking to the same instance of a (say, mysql) database. In the above
diagram (don't you love ascii ;), the Processor Pool might be replaced
by the database (like hsqldb), and the ProcessorInstance might be
replaced by a DataSource instance. Depends on the granularity.
This kind of setup is not implemented in phoenix (it is left up to the
user to setup JNDI or something like that, and perform the get() against
something fetched from JNDI).
A container which does enable this (and very elegantly, using the proven
to be very useful altrmi), is EOB. They (Paul Hammant being a prime
mover) summarise it as "A bean server that uses .Net style remoting for
Java". EOB runs on top of phoenix, btw.
IMO, this feature from EOB should be implemented using several different
mechanisms in all avalon-style containers. It rocks.
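To make the "common setup" mentioned above concrete, the sharing usually
amounts to a plain JNDI lookup along these lines (the binding name and the
DataSource type are illustrative, not taken from any particular phoenix
application):

// Sketch: two applications sharing one pool by looking it up in JNDI.
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class SharedPoolClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();
        // every application doing this lookup gets connections from the same
        // underlying pool instead of creating its own copy
        DataSource pool = (DataSource) ctx.lookup("java:comp/env/jdbc/SharedPool");
        pool.getConnection().close();
    }
}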
Seperation of interface/impl
----------------------------
Very, very important. GoF already knew that :D
The second major bullet point Stefano has is "polymorphic behavior",
which in avalon-speak we call "seperation of interface and
implementation", where you rigorously make sure you couple components
only via work interfaces, and via nothing else. A simple example is that
----
import java.util.ArrayList;
class DefaultMyService
implements Servicable
{
ArrayList m_someList;
service( ServiceManager sm )
{
m_someList = (ArrayList)sm.lookup(
"java.util.ArrayList" );
}
}
----
can be improved to
----
import java.util.List;
class DefaultMyService
implements Servicable
{
List m_someList;
service( ServiceManager sm )
{
m_someList = (List)sm.lookup(
"java.util.List" );
}
}
----
this is a contrived example (a List is not a good example of a
component), but the point is simple: by programming to the List
interface, you make it easy to plug in (for example) the FastArrayList
from commons-collections. And you make it easy to change to the use of a
LinkedList if it turns out that's better for performance. Etc etc. This
is a very general principle and it has nothing to do with a particular
COP architecture. It is why interfaces exist in java :D
Inheritance already available in java
-------------------------------------
Stefano's bullet point number three is inheritance, where a block
identified by a URL extends another block identified by some other URL.
This is both potentially complicated to implement, and already
implemented perfectly well with standard java inheritance. Inheritance
has its use in COP, and I really don't understand why a different
mechanism is neccessary. In code,
interface MyCustomService extends MyService {}
class DefaultMyCustomService implements MyCustomService {}
SARS/COBS and web locations
---------------------------
Stefano envisions specifying an URI for a cocoon block (.cob) which
identifies where it can be located on the web. This seems similar to me
to the <object/> tags in html and the references to the ActiveX controls
those contain. This can be tremendously useful. In the phoenix world,
you would add a new .sar via JMX, and then have phoenix figure out what
the .sar has in terms of external dependencies. If it can't provide
implementations for all of those, it can download the suggested
implementation from the web. Again an idea already implemented in .net
:D. Also reminds me of maven.
Taking this idea somewhat further, one could imagine some kind of
customizable policy where the appserver might ignore the suggestion for
an implementation, and instead talk with a central registry to figure
out what component to download. Your "local repository", or perhaps a
company-wide repository of COM or corba objects.
The easiest way to implement this is to make the avalon ROLE into a URN
(urn being a superset of URI, being a superset of URL), ie attaching
additional semantic meaning to what currently can be any unique string.
Versioning
----------
Stefano talks about what is basically versioning of implementation. Of
course, you also want versioning of work interface. The level at which
to implement versioning is the subject of debate. Java has the extension
mechanism for doing it at the jar level (the recent releases of the
excalibur components do this neatly for example, and merlin and phoenix
make use of that. I think netbeans uses this as well). There's also the
option to brute-force require a certain format for the name of the
provided jar (ie ${id}-${version}.jar). OSGi does bundle versioning in a
way similar to the extension mechanism IIRC.
Another option which has been tried (and discarded by some, ie Peter
Donald :D) is the association of a version with a work interface rather
than with the archive containing the work interface. .Net does
versioning of the archive (the assembly), and has a pretty extensive and
usable mechanism for it (basics at).
Most other stuff I know does this, too.
Possible Implementation (my own thoughts)
=========================================
I'll explore a possible implementation setup here (not optimal,
probably, but possible). I'm going to use the name "merlin" because
that's where we are most likely to want to experiment with this stuff.
Implementing Versioning
-----------------------
My opinion is that it normally makes sense to provide a version for a
small group of components, not individual work interfaces, and that it
makes sense to package such a component into a seperately distributed
archive (jar). I also think it makes sense to use the java extension
mechanism to do this.
My idea is that what we might want to do is provide a tool to parse all
classes for something like
package com.my;
/**
* @merlin.extension
* name="MyService"
* vendor="Apache Software Foundation"
* version="1.3.22"
* vendor-id="ASF"
* @merlin.component
* vendor="Apache Software Foundation"
* version="1.1"
* vendor-id="ASF"
*/
class DefaultMyService implements MyService {}
and transform that into
Manifest-Version: 1.0
Created-By: Avalon-Merlin metadata parsing tool v1.0
Name: com/my/
Extension-Name: MyService
Specification-Vendor: Apache Software Foundation
Specification-Version: 1.3.22
Implementation-Vendor: Apache Software Foundation
Implementation-Version: 1.1
Implementation-Vendor-Id: ASF
to be added to the MANIFEST.MF. I don't know how much of that is already
in place in the merlin meta tool, but I expect some of it at least.
Should be pretty straightforward.
Implementing dependency resolution and autodownload
---------------------------------------------------
IMNHSO, the Class-Path mechanism used in the java extension mechanism is
plain silly in its limited applicability as it works only with relative
URLs. Since we are moving towards providing @avalon.dependency anyway
with components, there should be plenty of info there which should make
it possible to combine a simple resolve.properties with available
metadata so that autodownload can be facilitated. IOW, when you already have
/**
* @avalon.dependency
* type="MyPool"
*/
public void service( ServiceManager sm );
combining that with
# ~/.merlin/resolve.properties
company.repository=
com.my.MyPool=${company.repository}/mypool/jars/my-pool-3.1.jar
is the minimum that would allow autodownload & install. Of course,
instead of resolve.properties one could use an xml file, the manifest
file, or yet more attributes parsed into one of those. Something like
/**
* @avalon.dependency
* type="MyPool"
* @merlin.dependency-info
* type="MyPool"
* version="3.1"
* default-impl-jar-location =
* ""
* optional=true
* @avalon.dependency
* type="MyCLITool"
* @merlin.dependency-info
* type="MyCLITool"
* version="1.0"
* default-jar-location =
* ""
*/
public void service( ServiceManager sm ); /* ... */
might be parsed into
Merlin-Dependency-Name: com/my/MyPool
Merlin-Dependency-Version: 3.1
Merlin-Dependency-Implementation-Location:
Merlin-Dependency-Optional: true
Merlin-Dependency-Name: com/my/cli/MyCLITool
Merlin-Dependency-Version: 1.0
Merlin-Dependency-Implementation-Location:
Merlin-Dependency-Optional: false
which could be parsed by the container at runtime. On drag-and-drop of
my-service-1.0.sar into the merlin apps/ dir, the assembly package might
scan the manifest file for Merlin-Dependency-Name, and try and find an
implementation package for com/my/MyPool. If not found, it could
autodownload the specified jar, verify the dependencies of that package
are satisfied, until all deps are satisfied.
When all jars are available and on the classpath, the
avalon-framework-specific part of merlin could kick in and determine
what services to instantiate (ie do all the stuff it already does).
Now your comment is that the versioning metadata is applied to a single
component and not a set of components, where I said versioning at the
jar level is needed. Which is true. However, doing things this way I
think will actually reduce duplication (we already need the dependency
and service declaration on a per-component basis!). It also reduces the
number of files which need to be maintained. For multiple components in
the same jar, one could simply do an
package com.my;
/**
* @merlin.dependency-reference type="DefaultMyService"
*/
class SomeOtherDefaultServiceinSameJarAsDefaultMyService
implements ObjectPool {}
or even just
/**
* You should take a look at {@link DefaultMyService#service} for
* the merlin- and avalon-related dependency metadata which should
* be applied to this service as well when doing auto-resolution of
* {@link jar-level
* dependencies}.
*/
class SomeOtherDefaultServiceinSameJarAsDefaultMyService
implements ObjectPool {}
which would allow moving to the right piece of info on required
extensions for this class by a single click in IntelliJ IDEA.
Nevertheless, the metadata parser could implement a best-guess but fail
early mechanism where components in a single jar pointing to other
implementation jars containing conflicting versions of the same
component results in an error. As a first step that is; doing component
isolation and fancy classloading like available in .Net is of course
possible.
I don't see technical impossibilities at all.
More on per-component vs per-jar dependency resolution
------------------------------------------------------
Once again, my current thinking is that you specify which work interface
implementations a component requires (to be provided in a
ServiceManager) using the AMTAGS @avalon.dependency tag, then add
information on versioning and an associated java extension mechanism jar
using @merlin.dependency-info to enable autodownload. This provides a
coupling between component dependencies and jar dependencies. The idea
is that the jar dependencies of all components that go into a single jar
are merged together during build time, and that conflicts during the
merge result in an error.
This gets you all the benefits of using meta tags (easy enough to
understand, works well with existing editors, tools available for
parsing them, reduce the number of source files that need to be
maintained, etc), while not dragging us down into needing to do real
complex (potentially impossible) classloading or into the (very
unpractical) need to provide a single jar per single component
implementation. It also means no dependency info duplication.
Compatible with cocoon blocks needs?
------------------------------------
Instead of opting for a central repository, I'm opting for the easier to
implement darwinism where the option is left open but the first version
of an implementation doesn't need to figure out which files it needs to
download, because the developer specifies @merlin.dependency-info
default-jar-location = ""
Other than that, I think the usecase is addressed. Additional comments
on what stuff doesn't address a use case below ;)
Instance sharing
----------------
AltRMI rocks for that. I need to think more on generalizing the EOB
semantics for implementation inside merlin. This is a nearly totally
seperate concern, to be implemented (in a container) after dependencies
have already been downloaded, verified, and classloaded.
Low on semantics, low on design! XP!
------------------------------------
The attribute-driven tag-based approach is tried-and-tested and many
developers know how to work with it. Some tools (xdoclet, qdox,
MetaGenerateTask) are already available for generating various kinds of
files from those tags. By reusing the jar MANIFEST.MF file and the
extension mechanism (extending on it a little to specify dependencies at
runtime), there is no need for a custom archive format like .sar or
.cob. In fact, the entire concept of a "block" as distinct from "some
kind of aggregation of some components in a jar with a manifest file" is
simply not (formally or strictly) needed.
I also removed the concept of a "behaviour URI" from cocoon blocks, as
behaviour is already specified by a work interface, and a work interface
is already identified by a role.
Finally, I removed the concept of "block inheritance". It might make
sense in the cocoon context, but in general I can't see what it does
that java inheritance doesn't.
This thing still does support "optional COP", and is fully
backwards-compatible with any and all software I can think of. A
container which doesn't support auto-assembly simply ignores the extra
entries in the manifest, a metadata parser tool simply ignores the
@merlin.<blah> stuff. The idea is also to reuse all existing
infrastructure for this stuff.
Also note the attribute setup is completely optional. What it all boils
down to is that having a few extra lines like
Manifest-Version: 1.0
Created-By: Avalon-Merlin metadata parsing tool v1.0
Name: com/my/service
Extension-Name: MyService
Specification-Vendor: Apache Software Foundation
Specification-Version: 1.3.22
Implementation-Vendor: Apache Software Foundation
Implementation-Version: 1.1
Implementation-Vendor-Id: ASF
Merlin-Dependency-Name: com/my/pool/MyPool
Merlin-Dependency-Version: 3.1
Merlin-Dependency-Implementation-Location:
Merlin-Dependency-Optional: true
Merlin-Dependency-Name: com/my/cli/MyCLITool
Merlin-Dependency-Version: 1.0
Merlin-Dependency-Implementation-Location:
Merlin-Dependency-Optional: false
in my-service-1.3.22.jar!/META-INF/MANIFEST.MF (something which is
doable by hand, or probably relatively easily generated from a slightly
modified maven POM using a few lines of jelly) allows automatic
resolution of dependencies, and addresses half of the cocoon blocks
requirements without needing additional semantics. The other half can be
addressed using the EOB approach.
Low on ideas from the rest of the world
---------------------------------------
Haven't taken a detailed look at JBoss, OSGi, EJBs, netbeans, eclipse,
any of them. For the most part constrained my view to the existing
avalon world. And I haven't read up or followed all of the prior cocoon
blocks discussions at all. I think most of the problem has already been
solved in various places. Ignorance is fun; one has the illusion of
having an original random thought ;)
g'night,
- LSD
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200304.mbox/%[email protected]%3E
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Umbraco/Create xslt exstension like umbraco.Library in C
Create a new xslt extension like umbraco.library in C#.
Sometimes you need more functionality in your xslt, and most of the time umbraco.Library is enough. But what do you do if that isn’t enough?
There are 2 ways to create your own functions
1. Inline code. 2. xslt extension
My opinions.
Inline code.
Inline code get my xslt look messy, and are difficult to reuse, but works fine if you only need a single function. More info about inline code can be found here
xslt extension.
xslt extension on the other hand looks much cleaner, and is easy to re/use again.
Step by step: I'll go through it.
1. Create a class library, in my case “ronnie.library”.
2. Create your class/es that you need. In my case “dateMethods.cs”
3. Create the methods you need (remember the methods have to be public and static) e.g.
public static int ugeNummer(DateTime dato)
{
    string res;
    res = Microsoft.VisualBasic.DateAndTime.DatePart(
        Microsoft.VisualBasic.DateInterval.WeekOfYear,
        dato,
        Microsoft.VisualBasic.FirstDayOfWeek.Monday,
        Microsoft.VisualBasic.FirstWeekOfYear.System).ToString();
    return Int32.Parse(res);
}
4. When you are done creating the methods, build and copy the dll in my case “ronnie.library.dll” into the bin folder.
5. Now you just have to register your xslt extension, and this is done in xsltExtensions.xml (placed in the config folder).
6. Open the file and add the following line. Note that starting with Umbraco 4.5, /bin/ is not needed anymore!
<ext assembly="/bin/ronnie.library" type="ronnie.dateMethods" alias="CoolDateMethods" />
- Assembly
- here you type where you have placed your dll file, (without the .dll extension)
- type
- ".NET_namespace.ClassName” Here you have to write the namespace followed by a dot and the class you want to use.
- alias
- This is your xmlns like umbraco.Library, call it what you like. In my case CoolDateMethods
7. Last but not least you have to remember to add the xmlns in your xslt document; this is done like this:
xmlns:CoolDateMethods="urn:CoolDateMethods" exclude-result-prefixes="msxml umbraco.library CoolDateMethods"
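To call it from a template, use the alias as the namespace prefix. For example (a sketch assuming the standard $currentPage parameter; how the argument gets marshalled into the DateTime parameter depends on the XSLT processor, so if it complains it can be easier to declare the parameter as a string and parse it inside the method):
<xsl:value-of select="CoolDateMethods:ugeNummer($currentPage/@createDate)" />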
Now you should be ready to use your new xslt extension. Hope this quick’n dirty article was informative for you.
//Ronnie
|
http://en.wikibooks.org/wiki/Umbraco/Create_xslt_exstension_like_umbraco.Library_in_C
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
This is the mail archive of the [email protected] mailing list for the GCC project.
"make bootstrap" for cross builds
"personal" branch?
'make bootstrap' oprofile (13% on bash?)
-finstrument-functions and C++ exceptions
2 suggestions
3.4.3 on Solaris9, boehm-gc probs.
4.0 regression: g++ class layout on PPC32 has changed
4.0-20050319 / 4-020050402 build error due to --enable-mapped location
Re: 4.0-20050319 / 4-020050402 build error due to --enable-mappedlocation
Re: <A> at web: /install/specific.html
Fwd: Call into a function?
Re: Fwd: Call into a function?
Inline round for IA64
Re: Inline round for IA64
Re: SMS in gcc4.0
Re: [rtl-optimization] Improve Data Prefetch for IA-64
Re: [Ada] PR18847 Ada.Numerics.xx_Random.Value does not handle junk strings
[Ada] PR18847 Ada.Numerics.xx_Random.Value does not handle junkstrings
Re: [Ada] PR18847 Ada.Numerics.xx_Random.Value does not handlejunk strings
[BENCHMARK] comparing GCC 3.4 and 4.0 on an AMD Athlon-XP 2500+
[BUG mm] "fixed" i386 memcpy inlining buggy
Re: [bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compiler error in libm-test.c:ctanh_test()
[bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compiler errorin libm-test.c:ctanh_test()
Re: [bug] gcc-3.4-20050422 compiling glibc-2.3.5 internal compilererror in libm-test.c:ctanh_test()
[gnu.org #222786] GCC Testsuite Tests Exclude List Contribution to FSF
Re: [gnu.org #222786] GCC Testsuite Tests Exclude List Contributionto FSF
FW: [gnu.org #232014] GNU Mailing Lists Question #1
Re: [gnu.org #232052] FW: GNU Mailing Lists Question #1
Re: [gnu.org #232057] FW: GNU Mailing Lists Question #2
[m68k]: More trouble with byte moves into Address registers
Re: [PATCH] Cleanup fold_rtx, 1/n
Re: [PATCH] Debugging Vector Types
[PATCH] RE: gcc for syntax check only (C): need to read source from stdin
[PATCH] VAX: cleanup; move macros from config/vax/vax.h to normalin config/vax/vax.c
[RFA] Invalid mmap(2) assumption in pch (ggc-common.c)
Re: [RFA] Which is better? More and simplier patterns? Fewer patterns with more embedded code?
[RFA] Which is better? More and simplier patterns? Fewer patternswith more embedded code?
Re: [RFA] Which is better? More and simplier patterns? Fewerpatterns with more embedded code?
[RFC] warning: initialization discards qualifiers from pointer target type
[RFC] warning: initialization discards qualifiers from pointertarget type
[RFC][PATCH] C frontend: Emit &a as &a[0].
Re: [rtl-optimization] Improve Data Prefetch for IA-64
[wwwdocs] PATCH for GCC 4.0 branch open for regression fixes
about "Alias Analysis for Intermediate Code"
about alias analysis
about how to write makefile.in config-lang.in for a frontend
about madd instruction in mips instruction sets
Re: about new_regalloc
About the number of DOM's iterations.
about the parse tree
Ada and bad configury architecture.
ada build failure?
Ada test suite
Another ms-bitfield question...
apply_result_size vs FUNCTION_VALUE_REGNO_P
ARM EABI Exception Handling
Backporting to 4_0 the latest friend bits
Basic block reordering algorithm
benchmark call malloc a lot?
benchmarks
Biagio Lucini [[email protected]] bouncing
Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?
Bootstrap fails on HEAD 4.1 for AVR
Bootstrap failure on i686-pc-linux-gnu since 2005-04-09 20:41UTC
A bug in the current released GCC 4.0.0
Re: Bug#300945: romeo: FTBFS (amd64/gcc-4.0): invalid lvalue in assignment
Build and test results for GCC 4.0.0
Build gcc-4.0.0
Build of GCC 4.0.0 successful
Build report for AIX 5.1
Re: building GCC 4.0 for arm-elf target on mingw host
building gcc 4.0.0 on Solaris
Built gcc 4.0.0, without C++ support
C++ ABI mismatch crashes
c54x port
C54x port: some general technical questions
call for testers!
Call into a function?
Can I comment out a GTY variable?
Can't build gcc cvs trunk 20050409 gnat tools on sparc-linux: tree check: accessed operand 2 of view_convert_expr with 1 operands in visit_assignment, at tree-ssa-ccp.c:1074
Re: Can't build gcc cvs trunk 20050409 gnat tools on sparc-linux:tree check: accessed operand 2 of view_convert_expr with 1 operands invisit_assignment, at tree-ssa-ccp.c:1074
Canonical form of the RTL CFG for an IF-THEN-ELSE block?
CC_REG: "Ian's cc0 replacement machinery", request for stage 2 conceptual approval
Re: CC_REG: "Ian's cc0 replacement machinery", request for stage2 conceptual approval
Comparing free'd labels
compile error for gcc-4.0.0-20050410
CPP inconsistency
Cross Compile PowerPC for ReactOS
Cross-compiling for PPC405 core...
Re: different address spaces
Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362)
Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362)
different address spaces (was Re: internal compiler error at dwarf2out.c:8362)
Re: different address spaces (was Re: internal compiler error atdwarf2out.c:8362)
Re: different address spaces (was Re: internal compiler erroratdwarf2out.c:8362)
Dirac, GCC-4.0.0 and SIMD optimisations on x86 architecture
Does anyone use -fprofile-use with C++?
Doubt : Help
EABI stack alignment for ppc
emit_no_conflict_block breaks some conditional moves
empty switch substituion doesn't erase matching switch?
ERROR : pls help
exceptions with longjmp (perhaps i am too stupid)
Re: ext/stdio_sync_filebuf/wchar_t/12077.cc
FAIL: ext/stdio_sync_filebuf/wchar_t/12077.cc
Fixing of bug 18877
fold_indirect_ref bogous
folding after TER notes
Free-Standing and Non-OS Dependent
Free-Standing Implementation
front-end tools for preprocessor / macro expansion
function name lookup within templates in gcc 4.1
GCC 3.3 status
GCC 3.4.3
GCC 3.4.4 Status (2005-04-29)
GCC 4.0 Ada Status Report (2005-04-09)
GCC 4.0 branch open for regression fixes
GCC 4.0 build fails on Mac OS X 10.3.9/Darwin kernel 7.9
gcc 4.0 build status
GCC 4.0 Freeze
gcc 4.0 miscompilation on sparc(32) with ultrasparc optmization
GCC 4.0 RC1 Available
GCC 4.0 RC2
GCC 4.0 RC2 Available
GCC 4.0 RC2 Status
GCC 4.0 Status Report (2005-04-05)
GCC 4.0, Fast Math, and Acovea
GCC 4.0.0 bootstrap success
GCC 4.0.0 build report on Fedora Core 3
gcc 4.0.0 build status on AIX 5.2
GCC 4.0.0 fsincos?
GCC 4.0.0 has been released
gcc 4.0.0 optimization vs. id strings (RCS, SCCS, etc.)
gcc 4.0.0 successful build
gcc 4.0.0 test status on AIX 5.2
GCC 4.0.0: (mostly) successful build and installation on GNU/LinuxPowerPC
Re: GCC 4.1 bootstrap failed at ia64-*-linux
GCC 4.1: Buildable on GHz machines only?
Re: gcc and vfp instructions
Re: gcc cache misses [was: Re: OT: How is memory latency important on AMD64 box while compiling large C/C++ sources]
gcc cache misses [was: Re: OT: How is memory latency important onAMD64 box while compiling large C/C++ sources]
Re: gcc cache misses [was: Re: OT: How is memory latency importanton AMD64 box while compiling large C/C++ sources]
GCC Cross Compilation
FW: GCC Cross Compiler for cygwin
GCC errors
gcc for syntax check only (C): need to read source from stdin
GCC superblock and region formation support
gcc-3.3-20050406 is now available
gcc-3.3-20050413 is now available
gcc-3.3-20050420 is now available
gcc-3.3-20050427 is now available
GCC-3.3.6 prerelease for testing
GCC-3.3.6 release status
gcc-3.4-20050401 BUG? generates illegal instruction in X11R6.4.2/mkfontscale/freetypemacro
Re: gcc-3.4-20050401 BUG? generates illegal instruction in X11R6.4.2/mkfontscale/freetypemacro(worksforme)
Re: gcc-3.4-20050401 BUG? generates illegal instruction inX11R6.4.2/mkfontscale/freetypemacro (worksforme)
gcc-3.4-20050401 is now available
gcc-3.4-20050408 is now available
gcc-3.4-20050415 is now available
gcc-3.4-20050422 is now available
gcc-3.4-20050429 is now available
gcc-4.0 non-local variable uses anonymous type warning
gcc-4.0-20050402 is now available
gcc-4.0-20050409 is now available
gcc-4.0-20050416 is now available
gcc-4.0-20050423 is now available
gcc-4.0-20050430 is now available
gcc-4.0.0 build failed
gcc-4.0.0 build problem on solaris
gcc-4.1-20050403 is now available
gcc-4.1-20050410 is now available
gcc-4.1-20050417 is now available
gcc-4.1-20050424 is now available
gcc4, namespace and template specialization problem
gcc4, static array, SSE & alignement
Re: Getting rid of -fno-unit-at-a-time
Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving order of functions and top-level asms via cgraph]
Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving order offunctions and top-level asms via cgraph]
Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preserving orderof functions and top-level asms via cgraph]
Re: Getting rid of -fno-unit-at-a-time [Was Re: RFC: Preservingorder of functions and top-level asms via cgraph]
Global Objects initialization Problem.......
FW: GNU Mailing Lists Question #1
FW: GNU Mailing Lists Question #2
Re: GNU toolchain for blackfin processor
gpg signatures on tar/diff
Haifa scheduler question: the purpose of move_insn??
HEAD regression: All java tests are failing with an ICE when optimized
Re: HEAD regression: All java tests are failing with an ICE whenoptimized
Heads-up: volatile and C++
Help installing & using GCC
Help me about C language Specification
Help Required on HP-UX 11.0 & 11.11
Hey? Where did the intrinsics go?
hot/cold vs glibc
Re: How is lang.opt processed?
how small can gcc get?
How to "disable" register allocation?
How to -Werror in a fortran testcase?
How to specify customized base addr?
HPUX/HPPA build broken (was Re: call for testers!)
RE:
Re:*-*-solaris2*
i want to connect gcc's front-end to my'back-end
i want to join
i386 stack slot optimisation
i?86-*-sco3.2v5* / i?86-*-solaris2.10 / x86_64-*-*, amd64-*-*
Re: ia64 bootstrap failure with the reload-branch
IA64 Pointer conversion question / convert code already wrong?
Illegal promotion of bool to int....
implicit type cast problem of reference of ponter to const type
Re: Inline round for IA64
inline-unit-growth trouble
Input and print statements for Front End?
install
internal compiler error at dwarf2out.c:8362
Interprocedural Dataflow Analysis - Scalability issues
Is there a way to specify profile data file directory?
Re: ISO C prototype style for libiberty?
Java failures [Re: 75 GCC HEAD regressions, 0 new, with your patch on 2005-04-20T14:39:10Z.]
Re: Java failures [Re: 75 GCC HEAD regressions, 0 new, with yourpatch on 2005-04-20T14:39:10Z.]
Re: Java field offsets
Java field offsets [was; GCC 4.0 RC2 Available]
Joseph appointed i18n maintainer
ld segfaults on ia64 trying to create libgcj.so
libgcc_s.so 3.4 vs 3.0 compatibility
libiberty configure mysteries
Re: libjava/3.4.4 problem
libjava/3.4.4 problem (was Re: GCC 3.4.4 Status (2005-04-29))
libraries - double set
libstdc++ link failures on ppc64
libstdc++ problem after compiling gcc-4.0 with the -fvisibity-inlines
Re: libstdc++ problem after compiling gcc-4.0 with the-fvisibity-inlines
line-map question
The Linux binutils 2.16.90.0.1 is released
The Linux binutils 2.16.90.0.2 is released
Mainline bootstrap failure in tree-ssa-pre.c:create_value_expr_from
Mainline Bootstrap failure on x86-64-linux-gnu
Mainline build failure on i686-pc-linux-gnu
Mainline has been broken for more than 3 days now
Major bootstrap time regression on March 30
makeinfo 4.8 generates non-standard HTML for @emph{..@samp{..}..}
Re: memcpy(a,b,CONST) is not inlined by gcc 3.4.1 in Linux kernel
Merging stmt_ann_d into tree_statement_list_node
Mike Stump added as Darwin maintainer
Mike Stump named as Objective-C/Objective-C++ maintainer
MIPS, libsupc++ and -G 0
missed mail
My opinions on tree-level and RTL-level optimization
New gcc 4.0.0 warnings seem spurious
New optimisation idea ?
Novell thinks you are spam
object code execution statistics
Objective-C++ Status
Obsoleting c4x last minute for 4.0
An old timer returns to the fold
One fully and one partially successful build
Re: OT: How is memory latency important on AMD64 box while compiling large C/C++ sources
OT: How is memory latency important on AMD64 box while compilinglarge C/C++ sources
Packaging error in 4.0RC1 docs? [was RE: Problem compiling GCC 4.0 RC1 on powerpc-ibm-aix5.2.0.0 ]
Re: Packaging error in 4.0RC1 docs? [was RE: Problem compiling GCC4.0 RC1 on powerpc-ibm-aix5.2.0.0 ]
PATCH: Speed up AR for ELF
PATCH: Speed up ELF section merge
Patches for coldfire v4e
Pinapa: A SystemC front-end based on GCC
Re: A plan for eliminating cc0
PowerPC sections ?
PPC 64bit library status?
ppc32/e500/no float - undefined references in libstdc++ _Unwind_*
Re: PR 20505
Problem compiling GCC 4.0 RC1 on powerpc-ibm-aix5.2.0.0
Problem with weak_alias and strong_alias in gcc-4.1.0 with MIPS...
Problems using cfg_layout_finalize()
Problems with MIPS cross compiling for GCC-4.1.0...
Processor-specific code
Propagating attributes for to structure elements (needed for different address spaces)
Re: Propagating attributes for to structure elements (needed fordifferent address spaces)
Propagating loop carried memory dependancies to SMS
proposal: explicit context pointers in addition to trampolines in C frontend
Proposal: GCC core changes for different address spaces
Protoize does not build with gcc 4.x
Q: C++ FE emitting assignments to global read-only symbols?
Question about "#pragma pack(n)"
Re: Question regarding MIPS_GPREL_16 relocation
Questions on CC
Register allocation in GCC 4
register name for DW_AT_frame_base value
Re: Regression involving COMMON(?)
Regression on mainline in tree-vrp.c
Reload Issue -- I can't believe we haven't hit this before
Re: reload-branch created
Re: RFA: .opt files for x86, darwin and lynxos
RFC: #pragma optimization_level
Re: RFC: #pragma optimization level
Re: RFC: #pragma optimization_level
RFC: ms bitfields of aligned basetypes
Re: RFC: Plan for cleaning up the "Addressing Modes" macros
RFC:Updated VEC API
RTL code
rtx/tree calling function syntax
Semi-Latent Bug in tree vectorizer
Should there be a GCC 4.0.1 release quickly?
Side-effect latency in DFA scheduler
sjlj exceptions?
Slow _bfd_strip_section_from_output
Re: SMS in gcc4.0
some problem about cross-compile the gcc-2.95.3
Some small optimization issues with gcc 4.0 20050418
Sorry for the noise: Bootstrap fails on HEAD 4.1 for AVR
sparc.c:509:1: error: "TARGET_ASM_FILE_END" redefined...
specification for gcc compilers on sparc and powerpc
specs file
spill_failure
Stack and Function parameters alignment
Stack frame question on x86 code generation
Re: static inline functions disappear - incorrect static initialiser analysis?
static inline functions disappear - incorrect static initialiseranalysis?
Status of conversions to predicates.md
std::string support UTF8?
Store scheduling with DFA scheduler
struct __attribute((packed));
Re: Struggle with FOR_EACH_EDGE
Submission Status: CRX port ?
The subreg question
Re: SUBTARGET_OPTIONS / SUBTARGET_SWITCHES with .opt
Successful bootstrap of GCC 3.4.3 on i586-pc-interix3 (with one little problem)
successful bootstrap/install of GCC 4.0 RC1 on OpenDarwin 7.2.1/x86
successful build of GCC 4.0.0 on Mac OS 10.3.9 (bootstrap, Fortran95)
Successful Build Report for GCC 4.0.0 C and C++
Successful gcc4.0.0 build (MinGW i386 on WinXP)
Successful gcc4.0.0 build (Redhat 9. Kernel 2.4.25)
Re: symbol_ref constants
sync operations: where's the barrier?
target_shift_truncation_mask for all shifts?!
tcc_statement vs. tcc_expression in the C++ frontend
Re: Template and dynamic dispatching
Templates and C++ embedded subsets
Testcase for loop in try_move_mult_to_index?
tips on debugging a GCC 3.4.3 MIPS RTL optim problem?
tree-cleanup-branch is now closed
Tree-ssa dead store elimination
Tru64 5.1B gcc 4.0.0 build
Re: Trying to build crosscompiler for Sparc Solaris 8 -> SparcSolaris 10 (& others)...
RE: Trying to build crosscompiler for Sparc Solaris 8 ->SparcSolaris 10 (& others)...
Typo in online GCJ docs.
Unnecessary sign- and zero-extensions in GCC?
Unnesting of nested subreg expressions
unreducable cp_tree_equal ICE in gcc-4.0.0-20050410
Use Bohem's GC for compiler proper in 4.1?
Use normal section names for comdat group?
use of extended asm on ppc for long long data types
Using inline assembly with specific register indices
Vectorizing my loops. Some problems.
What's the fate of VARRAY_*?
Where did the include files go?
Re: Whirlpool oopses in 2.6.11 and 2.6.12-rc2
wiki changed to require fake logins
writeable-strings (gcc 4 and lower versions) clarification
|
http://gcc.gnu.org/ml/gcc/2005-04/subjects.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
sd_listen_fds, SD_LISTEN_FDS_START — Check for file descriptors passed by the system manager
#include <systemd/sd-daemon.h>
#define SD_LISTEN_FDS_START 3
sd_listen_fds() shall be called by a
daemon to check for file descriptors passed by the init system as
part of the socket-based activation logic.
If the
unset_environment parameter is
non-zero,
sd_listen_fds() will unset the
$LISTEN_FDS and
$LISTEN_PID
environment variables before returning (regardless of whether the
function call itself succeeded or not). Further calls to
sd_listen_fds() will then fail, but the
variables are no longer inherited by child processes.
On failure, this call returns a negative errno-style error code. If no file descriptors were passed by the init system, this function returns 0. Otherwise, it returns the number of file descriptors passed; the application will find them starting at file descriptor SD_LISTEN_FDS_START (that is, file descriptor 3).
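For orientation, a minimal usage sketch in C, assuming a daemon that expects exactly one listening socket from the init system:
#include <stdio.h>
#include <systemd/sd-daemon.h>

int main(void)
{
        /* Check for descriptors passed by the init system and unset
         * $LISTEN_FDS/$LISTEN_PID so children do not inherit them. */
        int n = sd_listen_fds(1);

        if (n < 0) {
                fprintf(stderr, "sd_listen_fds() failed: %d\n", n);
                return 1;
        }
        if (n != 1) {
                fprintf(stderr, "expected exactly one socket, got %d\n", n);
                return 1;
        }

        /* The first (and only) passed descriptor. */
        int fd = SD_LISTEN_FDS_START;

        /* ... accept() connections on fd, or read from it directly ... */
        (void) fd;
        return 0;
}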
|
http://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
I'm trying to get my program to count the occurrences of "vector" in a file in the same directory as the program. Basic stuff, I just don't know how to get the for(int i = 0; word == "vector"; i++) right, I think. Anyone got advice?
Code:
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
using namespace std;

int main()
{
    vector<string> lookfor
    vector<string> words;
    ifstream in("occurence.txt");
    string word;
    int number = 0;
    while(in >> word)
        lookfor.push_back(word);
    for(int i = 0;word == "vector"; i++)
        number = number + 1;
    cout << endl << "The document has " << number << " occurences of the word 'vector'." << endl;
}
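For comparison, here is one way the counting part could look once the words are read in (same headers, file name and variable names as above; you could also skip the vector entirely and test each word as it is read):
Code:
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
using namespace std;

int main()
{
    vector<string> lookfor;              // every word in the file
    ifstream in("occurence.txt");
    string word;
    int number = 0;

    while (in >> word)
        lookfor.push_back(word);

    // walk the vector and count the matches
    for (size_t i = 0; i < lookfor.size(); i++)
        if (lookfor[i] == "vector")
            number = number + 1;

    cout << endl << "The document has " << number
         << " occurences of the word 'vector'." << endl;
    return 0;
}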
|
http://cboard.cprogramming.com/cplusplus-programming/143368-soooooo-my-word-recognition-isn%27t-right-i-think.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Source on "the source"
Download the J2SE 5.0 Source using SCSL or JRL
You can now download the Source code for J2SE 5.0 using either the Sun Community Source License (SCSL) or the Java Research License (JRL). You can find out more about the JRL here on java.net and follow the link to the J2SE source download from the JRL homepage. On the J2SE Source Code download page you are advised that "if you decide to use your project internally for productive use or distribute your product to others, you must sign a commercial agreement and meet the Java compatibility requirements."
Calvin Austin points to the availability of the J2SE source in his entry in today's Weblogs. In addition to the general announcement, he adds "If you are just getting started with the Sun source and are interested in the linux port then I can recommend getting involved with the blackdown.org porting project."
Billy Newport reports that WebSphere 5.1 XD goes GA. One of the features he highlights are the OnDemand Router which is a "is a Java proxy server that sits in front of a set of HTTP servers. These HTTP servers can be WAS servers or servers from other vendors such as BEA as well as servers on the LAMP stack (PHP etc)." He also calls out the WebSphere Partitioning Facility which ."
John Reynolds reacts to Brian Marrick's blog and responds that Adolescence isn't all that it's cracked up to be. He is happy to put the adolescent years of the Java community behind us and move on to maturity. He suggests we consider ."
In Also in Java Today, Adam Bosworth writes that services are an example of Evolution in Action because . He acknowledges that services are not the answer to everything, for example "If you need offline access, if you're manipulating rich media (photoshop), if you need to search those files customers choose to keep privately on their PC's then client side code is required."
Nuno Santos was one of the many who greeted the arrival of high-performance IO in the form of Java 1.4's NIO package, only to ask "where's the SSL support" and find that the two couldn't be used together. Fortunately, J2SE 5.0 sets that right. In Using SSL with Non-Blocking IO, he investigates SSL support provided by J2SE 5.0, saying it "solves the problem once and for all, both for existing and future IO and threading models, by providing a transport-independent approach for protecting the communication between two peers. Unfortunately, this is a complex API with a long and steep learning curve." His article shows the details of setting up and using an SSL session to exchange secure data.
In Projects and Communities, tomorrow on Java Live find out What's New With Swing, including features such as a skinnable look and feel (Synth) and printing support for JTable components.
The JavaPedia page on Radio Frequency Identification ( RFID ) is currently a collection of links that would welcome your contributions and comments to flesh out the page.
Should you put Multiple public classes/interfaces in a single file? In today's Forums, Zander writes "I'm basically unconvinced why its a bad thing to have 3 files for 3 classes. If you want better understanding by having them all 3 on your screen at ones; use a better text-editor that can open 3 files in a split screen. I'm inclined to conclude you are trying to solve the wrong problem here."
M R Atkinson writes about Application Customization saying "Individual users may have the ability to alter UI layouts, change settings and properties. Many applications allow different settings for different projects (single user, across a workgroup or company). As there is no standard way of performing these customizations each application has its own implementation, with varying standards of usability, completeness and reusability. "
Mark Swanson takes a moment to say "Generics have helped me tremendously. I'd like to send a huge Thank you to Sun for doing this."
In today's java.net News Headlines:
- J2SE 5.0 Source Code Released
- Eclipse IDE 3.1 M3
- PicoContainer 1.1
- AspectJ 1.2.1
- Java Plugin Framework (JPF) 0.3
- Joda-Time 0.98
- xlSQL Y7.
|
https://weblogs.java.net/blog/editors/archives/2004/11/source_on_the_s.html
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
Here's an inconvenient truth: just about every "public surface area" change you make to your code is a potential breaking change.
First off, I should clarify what I mean by a "breaking change" for the purposes of this article. If you provide a component to a third party, then a "breaking change" is a change such that the third party's code compiled correctly with the previous version, but the change causes a recompilation to fail. (A more strict definition would be that a breaking change is one where the code recompiles successfully but has a different meaning; for today let's just consider actual "build breaks".) A "potential" breaking change is a change which might cause a break, if the third party happens to have consumed your component in a particular way. By a "public surface area" change, I mean a change to the "public metadata" surface of a component, like adding a new method, rather than changing the behaviour of an existing method by editing its body. (Such a change would typically cause a difference in runtime behaviour, rather than a build break.)
Some public surface area breaking changes are obvious: making a public method into a private method, sealing an unsealed class, and so on. Third-party code that called the method, or extended the class, will break. But a lot of changes seem a lot safer; adding a new public method, for example, or making a read-only property into a read-write property. As it turns out, almost any change you make to the public surface area of a component is a potential breaking change. Let's look at some examples. Suppose you add a new overload:
// old component code:
public interface IFoo {...}
public interface IBar { ... }
public class Component
{
  public void M(IFoo x) {...}
}
Suppose you then later add
public void M(IBar x) {...}
to Component. Suppose the consumer code is:
// consumer code:
class Consumer : IFoo, IBar
{
  ...
  component.M(this);
  ...
}
The consumer code compiles successfully against the original component, but recompiling it with the new component suddenly the build breaks with an overload resolution ambiguity error. Oops.
What about adding an entirely new method?
// old component code:
...
public class Component
{
  public void MFoo(IFoo x) {...}
}
and now you add
public void MBar(IBar x) {...}
No problem now, right? The consumer could not possibly have been consuming MBar. Surely adding it could not be a build break on the consumer, right?
class Consumer
{
  class Blah
  {
    public void MBar(IBar x) {}
  }
  static void N(Action<Blah> a) {}
  static void N(Action<Component> a) {}
  static void D(IBar bar)
  {
    N(x=>{ x.MBar(bar); });
  }
}
Oh, the pain. In the original version, overload resolution has two overloads of N to choose from. The lambda is not convertible to Action<Component> because typing formal parameter x as Component causes the body of the lambda to have an error. That overload is therefore discarded. The remaining overload is the sole applicable candidate; its body binds without error with x typed as Blah.
In the new version of Component the body of the lambda does not have an error; therefore overload resolution has two candidates to choose from and neither is better than the other; this produces an ambiguity error.
This particular "flavour" of breaking change is an odd one in that it makes almost every possible change to the surface area of a type into a potential breaking change, while at the same time being such an obviously contrived and unlikely scenario that no "real world" developers are likely to run into it. When we are evaluating the impact of potential breaking changes on our customers, we now explicitly discount this flavour of breaking change as so unlikely as to be unimportant. Still, I think its important to make that decision with eyes open, rather than being unaware of the problem.
You pretty much have to ignore these possible problems, the same way you can't guess what extension methods we've made that may be silently overridden by new methods in a class.
Thanks for the great analysis. I love the suspense building until you finally admit at the end that the examples are "such an obviously contrived and unlikely scenario that no "real world" developers are likely to run into it". I had been wondering if that would be mentioned, and the article does not disappoint! :)
As usual, more interesting insight into the world of language design.
(by the way, I find that if I have waited too long after navigating to the blog article page, then when I attempt to post a comment, the submit button results in the page reloading and anything I've typed being discarded. while bugs like these have led to me to have the habit of making sure I've copied typed text before clicking anything, it's still pretty annoying).
Great article, as usual!
Could you give an example where making a read-only property into a read-write property would result in a breaking change? I can't think of any...
Same thing.
class C { public int P { get; set; } }
class D { public int P { get; private set; } }
class E
{
  static void M(Action<C> ac){}
  static void M(Action<D> ad) {}
  static void X()
  {
    M(q=>{q.P = 123; });
  }
}
The body of X binds without error as long as D.P's setter is private. If it becomes public then the call to M is ambiguous. -- Eric
These problems probably aren't very common.
They haven't happened with me even a single time.
It is good to know, and the post itself is great, 5 stars.
Thomas: See stackoverflow.com/.../8581029 for an example by Eric Lippert. I was expecting this article to be about that question, actually.
I've been meaning to write this post for years; we identified this as a problem when we added lambdas to C# 3.0. The SO question in December motivated me to actually do it. -- Eric
Hmm, as long as the consuming code actually fails to compile I would agree this is not a big problem. But what if client code suddenly binds to a different member but still compiles? This is actually not a breaking change by your definition, but I would think this is a bigger problem as there is no warning whatsoever.
I can only think of extension methods that could causes this kind of situation, or are there also more subtle situations?
I'm curious to see how unsealing a class can cause code to stop compiling.
Same trick. This trick is surprisingly versatile.
class B {}
sealed class C {}
class D : B, I {}
interface I {}
class P
{
  static void M(Func<C, I> fci){}
  static void M(Func<B, I> fbi){} // special agent!
  static void Main()
  {
    M(q=>q as I);
  }
}
That compiles successfully because "q as I" is illegal if q is of type C. The compiler knows that C does not implement I, and because it is sealed, it knows that no possible type that is compatible with C could implement I. Therefore overload resolution chooses the func from B to I, because a type derived from B, say D, could implement I. When C is unsealed then overload resolution has no basis to choose one over the other, so it becomes an ambiguity error. -- Eric
Seems to me you would be much better off in this regard avoiding the new functional features and sticking to the C# 2.0 specifications.
That way you're pretty much protected against breakages like these right?
Sure, but at what cost? Is it worthwhile to eschew the benefits of LINQ in order to avoid the drawbacks of some obscure, unlikely and contrived breaking changes? The benefits of LINQ outweigh the costs by a huge margin, in my opinion. -- Eric
You know, all these examples make me think overload resolution is just too clever for its own good -- these kinds of algorithms are more the kind you'd expect in the optimizing part, where cleverness abounds, not in the basic semantic part.
Clearly, the solution is to abolish overload resolution, assign every method an unambiguous canonical name (that's stable in the face of changes) and force the developer to specify this name on every call. Alternatively, make overload resolution unnecessary by forbidding overloads. Not sure how to handle generics in such a world -- abolishing them seems too harsh.
(Mandatory Internet disclaimer: the above is facetious, but slightly ha-ha-only-serious.)
There are languages that do not have overload resolution. A .NET language could, for instance, specify the unique metadata token associated with the method definition. But in a world without overload resolution, how would you do query comprehensions? -- Eric
I believe I made a comment before but it seemed to disappear.
However, it seems to me that the breaking changes tradeoff always comes up. In an agile world this just doesn't seem to fit my view of how things should be done. We promote constant refactoring and agile methods everywhere else. Isn't it time for us developers to change perspective? Instead of trying to provide non-breaking changes - shouldn't we deliver a tool that lets the consumer convert his code to the new specifications? Obviously this tool could be pedagogic, combine warnings presentations with errors etc. etc. We only need a good abstraction over C# code now! Oh wait...yeah Roslyn is on its way.
However, this will naturally require time and effort. You can't help but thinking that a programming language that includes a "transaction log" of the program would be beneficial, i.e. information on how methods have come and gone and changed name or so. But I know of no such language.
Opinions, Eric?
I agree that this probably isn't something most developers need to worry about, but it is absolutely something most architects should think about. In a perfect world, The unfortunate fact of the world is that people who consume library code don't want to have to update their code just because you changed something. Library architects have to go out of their way to prevent breaking changes.
Obviously the example here is a little contrived, but there are applications. For example, these problems are a lot more likely with fluent-style APIs due to the common method names. Architects should balance that risk with the potential improvement in usability before picking a pattern. Extension methods are another example - core, critical functionality should be placed directly on the class whenever possible to prevent accidental overrides by consumers. I'm sure there are other examples where these sorts of issues can drive important decisions in the real world. They shouldn't just be ignored.
The ultimate breaking change would be to flip a meaningless bit and break client code that checks the assembly's exact hash. I am sure this has happened.
I would like to add that I've had code break simply from the *creation* of a public class!
@Staffan:
1. I really appreciate that when I upgraded my server from .Net 2.0 to .Net 4.0, I only had to fix a few things (aka, breaking changes) before my site would work. I would have been hesitant to update if I believed this would not be the case.
2. Breaking changes that are easy to fix automatically can usually be made in a way that prevents them from being breaking changes (ignoring edge cases like this one), and such automation will usually fail in the same places as the change is breaking.
3. If you're using a library you don't own, having it break is horrible.
The problem I see here is Overloading Abuse!
Too much Overloading is bad for your App
|
http://blogs.msdn.com/b/ericlippert/archive/2012/01/09/10250547.aspx
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
Advanced Filters
This sample demonstrates a Windows Communication Foundation (WCF) routing service. The routing service is a WCF component that makes it easy to include a content-based router in your application. This sample adapts the standard WCF Calculator sample to communicate using the routing service. This sample shows how to define content-based routing logic through the use of message filters and message filter tables.
The message filters that are added to the message filter table of the routing service are described below.
The message filters and message filter tables can be created and configured either through code or in the application configuration file. For this sample, you can find the message filters and message filter tables defined through code in the RoutingService\routing.cs file, or defined in the application configuration file in the RoutingService\App.config file. The following paragraphs describe how the message filters and message filter tables are created for this sample through code.
First, an XPathMessageFilter looks for the custom header. Note that WSHttpBinding results in an envelope version using SOAP 1.2, so the XPath statement is defined to use the SOAP 1.2 namespace. The default namespace manager for XPathMessageFilters already defines a prefix for the SOAP 1.2 namespace, /s12, which can be used. However, the default namespace manager does not have the custom namespace that the client uses to define the actual header value, so that prefix must be defined. Any message that shows up with this header matches this filter.
The second filter is an EndpointNameMessageFilter, which matches any message that was received on the calculatorEndpoint. The endpoint name is defined when a service endpoint object is created.
The third filter is a PrefixEndpointAddressMessageFilter. This matches any message that showed up on an endpoint with an address that matches the address prefix (or the front portion) provided. In this example the address prefix is defined as "". This means that any incoming messages that are addressed to "*" are matched by this filter. In this case, it is messages that show up on the Rounding Calculator endpoint, which has the address of "".
The last two message filters are custom MessageFilters. In this example, a "RoundRobin" message filter is used. This message filter is created in the provided RoutingService\RoundRobinMessageFilter.cs file. These filters, when set to the same group, alternate between reporting that they match the message and that they do not, such that only one of them responds true at a time.
Next, all of those messages are added to a MessageFilterTable<TFilterData>. In doing so, priorities are specified to influence the order in which the message filter table executes the filters. The higher the priority, the sooner the filter is executed; the lower the priority, the later a filter is executed. Thus a filter at priority 2 runs before a filter at priority 1. The default priority level if none is specified is 0. A message filter table executes all of the filters at a given priority level before moving to the next lowest priority level. If a match is found at a particular priority, then the message filter table does not continue trying to find matches at the next lower priority.
The first filter to be added is the XPath filter and its priority is set to 2. This is the first message filter that executes. If it finds the custom header, regardless of what the results of the other filters would be, the message is routed to the Rounding Calculator endpoint.
At priority 1, two filters are added. Again, these only run if the XPath filter at priority 2 does not match the message. These two filters show two different ways to determine where the message was addressed when it showed up. Because the two filters effectively check to see whether the message arrived at one of the two endpoints, they can be run at the same priority level because they do not both return true at the same time.
Finally, at Priority 0 (the lowest priority) run the RoundRobin message filters. Because the filters are configured with the same group name, only one of them matches at a time. Because all messages with the custom header have been routed and all those addressed to the specific virtualized endpoints, messages handled by the RoundRobin message filters are only messages that were addressed to the default router endpoint without the custom header. Because these messages switch on a message for each call, half of the operations go to the Regular Calculator endpoint and the other half go to the Rounding Calculator endpoint.
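For illustration, building such a table in code follows this general shape. This is a simplified sketch rather than the sample's actual routing.cs: the endpoint lists, the custom header name and namespace, and the address prefix below are placeholders, and RoundRobinMessageFilter stands for the custom filter class defined in RoutingService\RoundRobinMessageFilter.cs (its constructor signature here is assumed).
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Routing;

public static class FilterTableSetup
{
    public static MessageFilterTable<IEnumerable<ServiceEndpoint>> Build(
        IEnumerable<ServiceEndpoint> regularEndpoints,   // Regular Calculator endpoints
        IEnumerable<ServiceEndpoint> roundingEndpoints)  // Rounding Calculator endpoints
    {
        var filterTable = new MessageFilterTable<IEnumerable<ServiceEndpoint>>();

        // Priority 2: messages carrying the custom SOAP header go to the rounding service.
        var ns = new XPathMessageContext();                        // already defines the s12 prefix
        ns.AddNamespace("custom", "http://my.custom.namespace/");  // placeholder namespace
        var headerFilter = new XPathMessageFilter(
            "boolean(/s12:Envelope/s12:Header/custom:RoundingCalculator)", ns); // placeholder header
        filterTable.Add(headerFilter, roundingEndpoints, 2);

        // Priority 1: route by which router endpoint received the message.
        filterTable.Add(new EndpointNameMessageFilter("calculatorEndpoint"), regularEndpoints, 1);
        filterTable.Add(
            new PrefixEndpointAddressMessageFilter(
                new EndpointAddress("http://example.com/router/rounding")),    // placeholder prefix
            roundingEndpoints, 1);

        // Priority 0: remaining traffic alternates between the two services.
        filterTable.Add(new RoundRobinMessageFilter("group1"), regularEndpoints, 0);
        filterTable.Add(new RoundRobinMessageFilter("group1"), roundingEndpoints, 0);

        return filterTable;
    }
}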
To use this sample
Using Visual Studio 2012, open AdvancedFilters.sln.
To open Solution Explorer, select Solution Explorer from the View menu.
Press F5 or CTRL+SHIFT+B in Visual Studio.
If you would like to auto-launch the necessary projects when you press F5, right-click the solution and select Properties. Select the Startup Project node under Common Properties in the left pane. Select the Multiple Startup Projects radio button and set all of the projects to have the Start action.
If you build the project with CTRL+SHIFT+B, you must start the following applications:
Calculator Client (./CalculatorClient/bin/client.exe)
Calculator Service (./CalculatorService/bin/service.exe)
Routing Calculator Service (./RoundingCalcService/bin/service.exe)
RoutingService (./RoutingService/bin/RoutingService.exe)
In the console window of the Calculator client, press ENTER to start the client. The client returns a list of destination endpoints to choose from.
Choose a destination endpoint by typing its corresponding letter and press ENTER.
Next, the client asks you if you want to add a custom header. Press Y for Yes or N for No, then press ENTER.
Depending on the selections you made, you should see different outputs.
The following is the output returned if you added a custom header to the messages.
Add(100,15.99) = 116
Subtract(145,76.54) = 68.5
Multiply(9,81.25) = 731.3
Divide(22,7) = 3.1
The following is the output returned if you chose the Rounding Calculator endpoint without a custom header.
Add(100,15.99) = 116
Subtract(145,76.54) = 68.5
Multiply(9,81.25) = 731.3
Divide(22,7) = 3.1
The following is the output returned if you chose the Regular Calculator endpoint without a custom header.
Add(100,15.99) = 115.99
Subtract(145,76.54) = 68.46
Multiply(9,81.25) = 731.25
Divide(22,7) = 3.14285714285714
The following is the output returned if you chose the Default Router endpoint without a custom header.
Add(100,15.99) = 116
Subtract(145,76.54) = 68.46
Multiply(9,81.25) = 731.3
Divide(22,7) = 3.14285714285714
The Calculator Service and the Rounding Calculator Service also print out a log of the operations invoked to their respective console windows.
In the client console window, type quit and press ENTER to exit.
Press ENTER in the services console windows to terminate the services.
The sample ships configured to use an App.config file to define the router’s behavior. You can also change the name of the RoutingService\App.config file to something else so that it is not recognized and uncomment the method call to ConfigureRouterViaCode() in RoutingService\routing.cs. Either method results in the same behavior from the router.
Scenario
This sample demonstrates the router acting as a content-based router allowing multiple types or implementation of services to be exposed through one endpoint.
Real World Scenario
Contoso wants to virtualize all of their services to expose only one endpoint publicly through which they offer access to multiple different types of services. In this case they utilize the routing service’s content-based routing capabilities to determine where the incoming requests should be sent.
|
https://msdn.microsoft.com/en-us/library/ee667249.aspx
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
The MainWindow Class
The MainWindow class is responsible for displaying the main window and also functions as the controller. In a larger system I would probably split it into multiple files and maybe use some application framework, but in such a small utility I felt okay with just managing everything in a single class. The UI is defined declaratively using XAML in MainWindow.xaml and the controller code is in the code-behind class MainWindow.xaml.cs.
Let's start with the UI. The root element is naturally the <Window>. It contains the usual XML namespaces for XAML and the local namespace for the MP3DurationCalculator. It also contains the title of the window and its initial dimensions. There is also a <Window.Resources> sub-element that contains resources that can be shared by all elements in the window. In this case it's the DurationConverter class. I'll talk more about it later. Here is the XAML for <Window> element:
<Window x:Class="..."
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:..."
        Title="..." ...>
  <Window.Resources>
    <local:DurationConverter x:</local:DurationConverter>
  </Window.Resources>
  ...
</Window>
The Window element corresponds to the WPF Window class, which is a content container. That means it can hold just one item. That sounds pretty limited, but in practice this single item can be a layout container that itself can hold many items. This is how WPF layout works. You nest containers within containers to get the exact desired layout. The most common and flexible layout container is the grid. WPF layout is a big topic that warrants its own article (or more). In this article, I try to get away with the minimal amount of explanation necessary to understand the layout of the MP3 Duration Calculator. The grid element contains two elements: a media element and a dock panel. The media element is not actually displayed because we use it only to play audio. The dock panel contains the rest of the UI. Here is a collapsed view of the grid:
<Grid>
  <MediaElement Height="0" Width="0" Name="mediaElement" LoadedBehavior="Manual" />
  <DockPanel LastChildFill="True">
    ...
  </DockPanel>
</Grid>
The dock panel is another layout container, which contains three stripes arranged vertically. The top stripe contains a browse button for selecting a folder and a text box to display the selected folder name (see Figure 3).
The middle stripe contains a list view that displays the MP3 files in the selected folder (see Figure 4).
The bottom stripe contains a couple of buttons for selecting all or none of the files and a text box to display the total duration of the selected songs (see Figure 5).
The top stripe is a dock panel that has several attributes. The DockPanel.Dock attribute is actually an attached attribute and it applies to the outer dock panel (the one that contains all three stripes). It has the value Top, which means that the top stripe will be docked to the top part of the outer dock panel. The height is defined to be 30 and the vertical alignment is set to Stretch, which means that the top stripe will stretch to fit the entire width of the outer dock panel. The LastChildFill attribute is set to True, which means that the last child laid out will fill the remaining area. The top stripe contains two controls: a button and a text box. Both have attached DockPanel.Dock properties that apply to the top stripe itself. The button is docked to the left and the text box is docked to the right. In addition, since the top stripe has LastChildFill="True", the text box will stretch to fill the remaining area of the top stripe after the button is laid out. The margin attribute of the button and text box ensures that there will be some spacing between them, so they are not squished together. The button has an event handler called Click that calls the btnSelectDir_Click method when the button is clicked. This method is defined in the code-behind class. Here is the XAML for the top stripe:
<DockPanel DockPanel.Dock="Top" Height="30" VerticalAlignment="Stretch" LastChildFill="True">
  <Button DockPanel.Dock="Left" Name="btnSelectDir" Click="btnSelectDir_Click" ...>Browse...</Button>
  <TextBox DockPanel.Dock="Right" Height="22" Name="tbTargetFolder" MinWidth="258" Margin="3"
           TextChanged="tbTargetFolder_TextChanged"></TextBox>
</DockPanel>
The second stripe is actually the bottom stripe and not the middle stripe. The reason for this supposed inconsistency is that I want the middle stripe to fill the remaining area of the outer dock panel after the top and bottom have been laid out (according to the LastChildFill attribute), so it must be laid out last. I actually find it a little disorienting. I would prefer it if you could simply set the DockPanel.Dock of one of the children to Fill instead of making sure it appears last in the XAML file. Anyway, that's the choice the WPF designers made, so the second stripe is the bottom stripe. It is very similar to the top stripe. It contains two buttons for selecting all files or unselecting all files and two text boxes. The total text box contains the total duration of all the selected files and the status text box shows various status messages. Named elements like total and status can be accessed using their name in the code-behind class. The text of the "total" text box is data bound to the total.Text property of the MainWindow object. More on that later. Here is the XAML for the bottom stripe:
<DockPanel DockPanel.Dock="Bottom" ... LastChildFill="True">
  <Button DockPanel.Dock="Left" ...>Select All</Button>
  <Button DockPanel.Dock="Left" ...>Unselect All</Button>
  <TextBox DockPanel.Dock="Left" Name="total" ...></TextBox>
  <TextBox Height="22" Name="status" MinWidth="100" Margin="3" />
</DockPanel>
The middle stripe is a list view. It has no DockPanel.Dock attached property because it is the last child and thus just fills the area left between the top stripe and the bottom stripe. When the window is resized, the top and bottom stripes keep their fixed height and the list view is resized to accommodate the new height of the window. The list view has a name ("files") because it is accessed programmatically from the MainWindow class. It also has an event handler for the SelectionChanged event. The list view also has a view property, which is a grid view in this case (the list view supports several view types including custom views). The grid view has two columns, which are bound to the Name and Duration properties of its item object. The items that populate the list must be objects that have Name and Duration properties. The Duration property is converted using the DurationConverter to a more display-friendly format. Here is the XAML for the list view:
<ListView MinHeight="223" Name="files" MinWidth="355" Background="White"
          SelectionChanged="files_SelectionChanged">
  <ListView.View>
    <GridView>
      <GridView.Columns>
        <GridViewColumn Header="Filename"
                        DisplayMemberBinding="{Binding Path=Name}"
                        Width="Auto" />
        <GridViewColumn Header="Duration"
                        DisplayMemberBinding="{Binding Path=Duration, Converter={StaticResource DurationConverter}}"
                        Width="Auto" />
      </GridView.Columns>
    </GridView>
  </ListView.View>
</ListView>
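The DurationConverter referenced by the StaticResource above is covered later; for orientation, a converter of this kind is just an IValueConverter. The following is a minimal sketch under the assumption that TrackInfo.Duration is a TimeSpan; the actual class may format the value differently.
using System;
using System.Globalization;
using System.Windows.Data;

// Assumed shape of the converter; the real DurationConverter may differ.
public class DurationConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Format a TimeSpan as minutes:seconds, e.g. 4:07
        var d = (TimeSpan)value;
        return string.Format("{0}:{1:00}", (int)d.TotalMinutes, d.Seconds);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // One-way binding only; converting back is not needed here.
        throw new NotSupportedException();
    }
}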
Let's move on to the MainWindow code-behind class. This class oversees the following actions: browse for a new folder that contains MP3 files, collect track info about every MP3 file, select files from the current folder, calculate and display the total duration of all the selected files.
The class's state consists of the following member variables:
_folderBrowserDialog . This is a System.Windows.Forms component used to display a dialog for browsing the file system and selecting a folder. WPF doesn't have its own component, but using the Windows.Forms component is just as easy.
_trackDurations. This is an instance of our very own TrackDurations class described earlier
_results. This is an observable collection of TrackInfo objects received from the TrackDurations class whenever the selection of MP3 files is changed.
_fileCount. Just an integer that says how many files are in the current selected directory.
_maxLength. A number that tracks the rendered width of the longest track name. It is used to properly resize the list view columns to make sure track names are not truncated.
FolderBrowserDialog _folderBrowserDialog;
TrackDurations _trackDurations;
ObservableCollection<TrackInfo> _results;
int _fileCount = 0;
double _maxLength = 0;
In addition to these member variables defined in code, the MainWindow class also has additional member variables, which are all the named elements in the XAML file, like the files list view and the total and status text boxes.
The constructor calls the mandatory InitializeComponent() method that reads the XAML and builds all the UI, then instantiates the folder browser dialog and initializes its properties so it starts browsing from the Downloads directory (folder) under the current user's desktop directory. It also initializes _results to an empty collection of TrackInfo objects. The most important action is binding the 'files' list view to the _results object by assigning _results to files.ItemsSource. This ensures that the files list view will always display the contents of the _results object.
public MainWindow()
{
    InitializeComponent();
    _folderBrowserDialog = new FolderBrowserDialog();
    _folderBrowserDialog.Description = "Select the directory that containd the MP3 files.";
    // Do not allow the user to create new files via the FolderBrowserDialog.
    _folderBrowserDialog.ShowNewFolderButton = false;
    _folderBrowserDialog.RootFolder = Environment.SpecialFolder.DesktopDirectory;
    var dt = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);
    var start = System.IO.Path.Combine(dt, "Downloads");
    _folderBrowserDialog.SelectedPath = start;
    _results = new ObservableCollection<TrackInfo>();
    files.ItemsSource = _results;
}
The action is usually triggered by selecting a folder to browse. When you click the 'Browse...' button the FolderBrowserDialog pops up and lets you select a target directory. If the user selected a folder and clicked 'OK', the text of the tbTargetFolder text box is set to the selected path. If the user clicked 'Cancel', nothing happens.
void btnSelectDir_Click(object sender, RoutedEventArgs e)
{
    DialogResult r = _folderBrowserDialog.ShowDialog();
    if (r == System.Windows.Forms.DialogResult.OK)
    {
        tbTargetFolder.Text = this._folderBrowserDialog.SelectedPath;
    }
}
Another way to initiate the action is to directly type a folder path into the tbTargetFolder text box. In both cases the TextChanged event of the text box will fire and corresponding event handler will be called. It will check if the text constitutes a valid folder path and if so calls the collectTrackInfo() method.
private void tbTargetFolder_TextChanged(object sender, TextChangedEventArgs e) { if (Directory.Exists(tbTargetFolder.Text)) collectTrackInfo(tbTargetFolder.Text); }
The collectTrackInfo() method is pretty central so I'll explain it in detail. First of all, it disables the browse button and the target folder text box to ensure that the user doesn't try to go to a different folder while the collection is in progress. This prevents a whole class of race conditions and synchronization issues.
void collectTrackInfo(string targetFolder) { btnSelectDir.IsEnabled = false; tbTargetFolder.IsEnabled = false;
The next part is getting all the MP3 files in the target folder. I used a LINQ expression that reads almost like English: "From the files in the target folder select all the files whose extension is ".mp3":
var mp3_files = from f in Directory.GetFiles(targetFolder) where System.IO.Path.GetExtension(f) == ".mp3" select f;
The collection of mp3 files is returned in the reverse order for some reason, so I reverse them back.
mp3_files = mp3_files.Reverse();
Now, the _fileCount member variable is updated and the _results collection is cleared:
_fileCount = mp3_files.Count(); _results.Clear();
If _fileCount is 0 it means no mp3 files were found and there is no need to collect any track information. The status text box is updated and the browse button and the target folder text box are enabled.
if (_fileCount == 0) { status.Text = "No MP3 files in this folder."; btnSelectDir.IsEnabled = true; tbTargetFolder.IsEnabled = true; }
If _fileCount is greater than 0, then a new instance of _trackDurations is created and receives the media element, the collection of mp3 files in the target folder and the onTrackInfo() callback.
else _trackDurations = new TrackDurations(mediaElement, mp3_files, onTrackInfo);
The onTrackInfo() callback is called by TrackDurations every time the information about one of the tracks is collected and once more in the end (with a null TrackInfo). If ti (the TrackInfo object) is null it means we are done with the current directory. The _maxLength variable is reset to 0, the _trackDurations object is disposed of, the status text box displays "Ready." and the selection controls are enabled again.
void onTrackInfo(TrackInfo ti) { if (ti == null) { _maxLength = 0; _trackDurations.Dispose(); status.Text = "Ready."; btnSelectDir.IsEnabled = true; tbTargetFolder.IsEnabled = true; }
If ti is not null it means a new TrackInfo object was received asynchronously. First of all the TrackInfo object is added to the _results collection. As you recall (or not), the _results collection is data-bound to the list view, so just adding it to _results makes the new track info show up in the list view.
else { _results.Add(ti);
The next step is to make sure the new filename fits in the first list view column. This is a little cumbersome: it involves first creating a FormattedText object with the list view's font and then checking whether the width of this formatted text object is greater than the current _maxLength. If it is, it becomes the new _maxLength and the width of the first column of the list view is set to the new _maxLength.
// Make sure the new filename fits in the column var ft = new FormattedText( ti.Name, CultureInfo.GetCultureInfo("en-us"), System.Windows.FlowDirection.LeftToRight, new Typeface(files.FontFamily, files.FontStyle, files.FontWeight, files.FontStretch), files.FontSize, Brushes.Black); if (ft.Width > _maxLength) { _maxLength = ft.Width; var gv = (GridView)files.View; var gvc = gv.Columns[0]; var curWidth = gvc.Width; // Reset to a specific width before auto-sizing gvc.Width = _maxLength; // This causes auto-sizing gvc.Width = Double.NaN; }
The last part of the onTrackInfo() method is updating the status line with current count of track info object out of the total number of MP3 files.
// Update the status line var st = String.Format("Collecting track info {0}/{1} ...", _results.Count, _fileCount); status.Text = st; } }
After all the information has been collected the user may select files in the list view using the standard Windows selection conventions (click/space to select, ctrl+click/space to toggle selection and shift+click/space to extend selection). Whenever the selected files change, the SelectionChanged event is fired and the event handler calculates the total duration of all the currently selected files and updates the 'total' text box.
private void files_SelectionChanged(object sender, SelectionChangedEventArgs e) { var tp = new TimeSpan(); foreach (var f in files.SelectedItems) { tp += ((TrackInfo)f).Duration; } var d = new DateTime(tp.Ticks); string format = "mm:ss"; if (tp.Hours > 0) format = "hh:mm:ss"; total.Text = d.ToString(format); }
There are two buttons called 'Select All' and 'Unselect All' that are hooked to corresponding event handlers and simply select or unselect all the files in the list view when clicked. This of course results in a SelectionChanged event handled by the files_SelectionChanged event handler described above.
private void SelectAll_Click(object sender, RoutedEventArgs e) { files.SelectAll(); } private void UnselectAll_Click(object sender, RoutedEventArgs e) { files.UnselectAll(); }
|
http://www.drdobbs.com/architecture-and-design/your-own-mp3-duration-calculator/222500141?pgno=4
|
CC-MAIN-2015-14
|
en
|
refinedweb
|
This guide explains how to integrate the Appodeal SDK into your iOS project with your desired networks via CocoaPods and manual setup, and configure all your ad formats.
Minimum requirements:
- iOS 10.0 or higher. You can still integrate the Appodeal SDK into a project with a lower minimum iOS version; however, on devices that don't support iOS 10.0+ the SDK will simply be disabled.
- Appodeal SDK is compatible with both ARC and non-ARC projects.
- Use Xcode 13 or higher.
You can use our demo app as a reference project.
Step 1. Import SDK
Fat and CocoaPods 3.0.0 versions work in pure Obj-C and pure Swift projects, as well as mixed Obj-C/Swift projects.
If your project is a pure Objective-C project, you should add an empty Swift file, for example Dummy.swift.
1. Podfile configuration
CocoaPods 1.10.0 or higher is required. To get information about CocoaPods update, please see this documentation.
Please select the ad types that are used in your application and the ad networks that you want to include. Then press the Generate Podfile button and copy the resulting configuration into your project's Podfile.
Step 2. Prepare your application.
2.1 Add the required SKAdNetwork IDs to your app's Info.plist.
2.2 Configure App Transport Security settings
In order to serve ads, the SDK requires you to allow arbitrary loads. Set up the following keys in info.plist of your app:
- Go to your info.plist file, then press Add+ anywhere in the first column of the key list.
- Add App Transport Security Settings key and set its type to Dictionary in the second column.
- Press Add+ at the end of the App Transport Security Settings key and choose Allow Arbitrary Loads. Set its type to Boolean and its value to Yes.
You can also add the key to your info.plist directly, using this code:
<key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key> <true/> </dict>
2.3 Other feature usage descriptions
To improve ad performance the following entries should be added:
- GADApplicationIdentifier - When including AdMob in your project, you must also add your AdMob app ID to your info.plist. Use the key GADApplicationIdentifier with the value being your AdMob app ID.
For more information about Admob sync check out our FAQ.
- NSUserTrackingUsageDescription - Starting with iOS 14.5, this usage description is required to request the user's permission to track them via the AppTrackingTransparency framework and access the IDFA.
<key>GADApplicationIdentifier</key> <string>YOUR_ADMOB_APP_ID</string> <key>NSUserTrackingUsageDescription</key> <string><App Name> needs your advertising identifier to provide personalised advertising experience tailored to you</string> <key>NSLocationWhenInUseUsageDescription</key> <string><App Name> needs your location for analytics and advertising purposes</string> <key>NSCalendarsUsageDescription</key> <string><App Name> needs your calendar to provide personalised advertising experience tailored to you</string>
Step 3. Initialize SDK
Before initialisation, we highly recommend receiving all required permissions from the user. Please follow this Data Protection guide.
Import Appodeal into your AppDelegate class and initialise the SDK:
import Appodeal
Call the initialisation inside the didFinishLaunchingWithOptions function:
adTypes is a parameter responsible for ad formats (e.g. AppodealAdTypeRewardedVideo, AppodealAdTypeInterstitial).
consent is an object responsible for the user's agreement to having their personal data collected according to GDPR and CCPA laws. You can find more information here.
Replace YOUR_APP_KEY with the actual app key. You can find it in the list of applications in your personal account.
Publish your application
Update App Store IDFA usage. When you submit your application to the App Store you need to update its "Advertising Identifier (IDFA)" settings in order to comply with Apple advertising policy.
- Go to the Advertising Identifier section.
- Set Yes on the right panel.
- Tick Serve advertisements within the app .
- Tick confirmation box under Limit Ad tracking setting in iOS .
Configure App Privacy according to this doc:
Step 6.:
Known issues
1. AdColony presentation issue
AdColony always checks whether the key window's rootViewController matches the rootViewController passed to the SDK; otherwise, AdColony fails to show.
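A possible workaround (an illustrative Swift sketch — it assumes ads are shown through the showAd(_:rootViewController:) API) is to resolve the key window's root view controller yourself and pass it to the SDK:

import UIKit
import Appodeal

// Sketch: use the key window's rootViewController when presenting,
// so it matches the controller AdColony checks internally.
func showInterstitialFromKeyWindow() {
    let keyRoot = UIApplication.shared.windows
        .first(where: { $0.isKeyWindow })?
        .rootViewController

    if let rootViewController = keyRoot {
        Appodeal.showAd(.interstitial, rootViewController: rootViewController)
    }
}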
You can find more information here.
3. MyTarget auto-initialisation issue
MyTarget supports auto-initialisation that may cause a crash at application startup. To disable this behaviour add the following key value pair into info.plist:
<key>MyTargetSDKAutoInitMode</key>
<false/>
|
https://wiki.appodeal.com/en/ios-beta-3-0-0/get-started
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Spark Message and Meet Widget (widget-message-meet)
- THIS WIDGET CONTAINS EXPERIMENTAL CODE *
The Spark Message Meet widget allows developers to easily incorporate Cisco Spark 1 on 1 messaging into an application.
Table of Contents
Background
This widget handles the heavy lifting of coordinating between your application and the Spark APIs, and provides components of the Spark messaging experience without having to build all front end UI yourself.
Our widget is built using React, Redux, and the Spark Javascript SDK.
This widget supports:
- 1 on 1 messaging
- Inline Markdown
- Sharing of files and documents
- Preview and download of files and documents
- Flagging messages for follow up
Install
Depending on how comfortable you are with these frameworks, there are a number of ways you can "install" our code.
Spark for Developers
If you haven't already, go to the Spark for Developers Portal () and sign up for an account. Once you've created an account you can get your developer access token by clicking on your avatar at the top right of the screen.
When you want to eventually create an integration and have your own users take advantage of the widget, you'll need to create an integration with the following scopes:
spark:kms spark:rooms_read spark:rooms_write spark:memberships_read spark:memberships_write spark:messages_read spark:messages_write
Head over to the Spark for Developers Documentation for more information about how to setup OAuth for your app:
CDN
Using our CDN requires the least amount of work to get started. Add the following into your HTML file:
<!-- Latest compiled and minified CSS --> <link rel="stylesheet" href=""> <!-- Latest compiled and minified JavaScript --> <script src=""></script>
Build from Source
- Follow these instructions to checkout and build the SDK
- Be sure to run npm install and npm run bootstrap from the root of the sdk repo. You will want to run this every time you pull down any new updates for the sdk.
- From the root of the sdk repo, run the following to build the widget:
PACKAGE=widget-message-meet npm run grunt:package -- build
- The built bundles are located at package/widget-message-meet/dist.
Usage
Quick Start
If you would just like to get running immediately follow these instructions to get a webpack-dev-server running with the widget.
Create a .env file in the root of the SDK project with the following lines, replacing the Xs with the appropriate value:
CISCOSPARK_ACCESS_TOKEN=YOUR_ACCESS_TOKEN
TO_PERSON_EMAIL=XXXXX@XXXXXXXXX
From the root directory run:
PACKAGE=widget-message-meet npm run grunt:package serve
HTML
The easiest way to get the Spark Message and Meet Widget into your web site is to add the built resources and attach data attributes to a container element.
- If you're using our CDN, skip to step 2.
- Copy the resources in the dist directory to your own project.
- Add a <script /> tag to your page to include the bundle.js
- Add a <link /> tag to include main.css
- Create a container where you would like to embed the widget and add the following attributes to configure the widget:
data-toggle="spark-message-meet": (required)
data-access-token: (required) Access token for the user account initiating the messaging session. For testing purposes you can use a developer access token from the Spark for Developers portal.
Include one of the following attributes:
data-to-person-email: Email of the message recipient.
data-to-person-id: User Id of the message recipient.
<div data-toggle="spark-message-meet" data-access-token="YOUR_ACCESS_TOKEN" data-to-person-email="TARGET_USER_EMAIL" />
JSX
Because our widgets are built using React, you'll be able to directly import the modules and components into your React app.
Replace YOUR_ACCESS_TOKEN with the access token of the user who is going to be sending the messages (for development purposes this can just be your Developer Access Token), TARGET_USER_EMAIL with the email of the user who is receiving the messages, and ELEMENT with the ID of the element you want to inject into.
If you have the User ID of the recipient, you may provide that in the property toPersonId of MessageMeetWidget instead of using toPersonEmail.
import MessageMeetWidget from '@ciscospark/widget-message-meet'; ReactDOM.render( <MessageMeetWidget accessToken="YOUR_ACCESS_TOKEN" toPersonEmail="TARGET_USER_EMAIL" />, document.getElementById('ELEMENT') );
Browser Support
This widget supports the follow browsers:
- Current release of Chrome
- Current release of Firefox
- Internet Explorer 11 or later
Contribute
Please see CONTRIBUTING.md for more details.
License
|
https://www.npmtrends.com/@ciscospark/widget-message-meet
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
How to extract licenses from headers¶
Sometimes there is no
license file, and you will need to extract the license from a header file, as in the following example:
def package(self):
    # Extract the license from the header to a file
    tmp = tools.load("header.h")
    # The license begins with a C comment /* and ends with */
    license_contents = tmp[2:tmp.find("*/", 1)]
    tools.save("LICENSE", license_contents)
    # Package it
    self.copy("license*", dst="licenses", ignore_case=True, keep_path=False)
|
https://docs.conan.io/en/1.31/howtos/extract_licenses_from_headers.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Available with Spatial Analyst license.
Available with Image Analyst license.
Summary
Returns 1 for cells where the first raster is less than the second raster and 0 if it is not.
Illustration
Discussion
The relational less-than operation evaluates the first input value in relation to the second input value on a cell-by-cell basis within the Analysis window. In the relational evaluation, if the condition is true (the first input value is less than the second input value), the output is 1; if it is false, the output is 0.
Input1 < Input2, Output = 1
Input1 = Input2, Output = 0
Input1 > Input2, Output = 0
When one or both input values are NoData, the output is NoData.
Two inputs are necessary for the evaluation to take place.
The order of the inputs is relevant for this tool.
This sample performs a Less Than operation on two input rasters.
import arcpy
from arcpy import env
from arcpy.sa import *

env.workspace = "C:/sapyexamples/data"
outLessThan = Raster("degs") < Raster("negs")
outLessThan.save("C:/sapyexamples/output/outlt.tif")
This sample performs a Less Than operation on two input rasters.
# Name: Op_LessThan_Ex_02.py
# Description: Performs a relational less-than operation on two inputs
# Import system modules (assumed, matching the first sample)
import arcpy
from arcpy import env
from arcpy.sa import *
# Set environment settings
env.workspace = "C:/sapyexamples/data"
# Set local variables (raster names assumed for illustration)
inRaster1 = Raster("degs")
inRaster2 = Raster("negs")
# Execute LessThan
outLessThan = inRaster1 < inRaster2
# Save the output
outLessThan.save("C:/sapyexamples/output/outlt")
|
https://pro.arcgis.com/en/pro-app/latest/arcpy/spatial-analyst/relational-less-than-operator.htm
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Sample Code
/*
 * Copyright 2019
 */

package com.esri.samples.add_a_point_scene_layer;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

import com.esri.arcgisruntime.ArcGISRuntimeEnvironment;
import com.esri.arcgisruntime.layers.ArcGISSceneLayer;
import com.esri.arcgisruntime.mapping.ArcGISScene;
import com.esri.arcgisruntime.mapping.ArcGISTiledElevationSource;
import com.esri.arcgisruntime.mapping.BasemapStyle;
import com.esri.arcgisruntime.mapping.Surface;
import com.esri.arcgisruntime.mapping.view.SceneView;

public class AddAPointSceneLayerSample extends Application {

  private SceneView sceneView;

  @Override
  public void start(Stage stage) {

    try {
      // set the title and size of the stage and show it
      StackPane stackPane = new StackPane();
      Scene fxScene = new Scene(stackPane);
      stage.setTitle("Add a Point Scene Layer Sample");
      stage.setWidth(800);
      stage.setHeight(700);

      // create a JavaFX scene with a stackpane and set it to the stage
      stage.setScene(fxScene);
      stage.show();

      // authentication with an API key or named user is required to access basemaps and other location services
      String yourAPIKey = System.getProperty("apiKey");
      ArcGISRuntimeEnvironment.setApiKey(yourAPIKey);

      // create a scene view and add it to the stack pane
      sceneView = new SceneView();
      stackPane.getChildren().add(sceneView);

      // create a scene with a basemap and add it to the scene view
      ArcGISScene scene = new ArcGISScene(BasemapStyle.ARCGIS_IMAGERY);
      sceneView.setArcGISScene(scene);

      // set the base surface with world elevation
      Surface surface = new Surface();
      surface.getElevationSources().add(new ArcGISTiledElevationSource(""));
      scene.setBaseSurface(surface);

      // add a point scene layer with points at world airport locations
      ArcGISSceneLayer pointSceneLayer = new ArcGISSceneLayer("");
      scene.getOperationalLayers().add(pointSceneLayer);

    } catch (Exception e) {
      // on any error, display the stack trace.
      e.printStackTrace();
    }
  }

  /**
   * Stops and releases all resources used in application.
   */
  @Override
  public void stop() {
    if (sceneView != null) {
      sceneView.dispose();
    }
  }

  /**
   * Opens and runs application.
   *
   * @param args arguments passed to this application
   */
  public static void main(String[] args) {
    Application.launch(args);
  }
}
|
https://developers.arcgis.com/java/sample-code/add-point-scene-layer/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Modified from the Medium blog post by Noel Gorelick
Histogram matching is a quick and easy way to "calibrate" one image to match another. In mathematical terms, it's the process of transforming one image so that the cumulative distribution function (CDF) of values in each band matches the CDF of bands in another image.
To illustrate what this looks like and how it works, I'm going to histogram-match a high-resolution (0.8m/pixel) SkySat image to the Landsat 8 calibrated surface reflectance images taken around the same time. Below is what it looks like with the SkySat image overlaid on top of the Landsat data, before the matching. Each image's histogram is shown as well:
SkySat image swath overlaid on Landsat 8 image
Cumulative histogram for SkySat (left) and Landsat 8 surface reflectance (right).
To make the histograms match, we can interpolate the values from the source image (SkySat) into the range of the target image (Landsat), using a piecewise-linear function that puts the correct ratio of pixels into each bin.
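As a rough illustration of the idea, here is a small self-contained NumPy sketch (independent of Earth Engine, with made-up sample data) of matching one distribution's CDF to another with piecewise-linear interpolation:

import numpy as np

def match_cdf(source, target):
    """Map source values so their empirical CDF matches the target's."""
    # Empirical CDFs evaluated at each unique value.
    s_values, s_counts = np.unique(source, return_counts=True)
    t_values, t_counts = np.unique(target, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size
    # For each source quantile, find the target value at the same quantile.
    mapped = np.interp(s_cdf, t_cdf, t_values)
    # Piecewise-linear mapping applied to every source pixel.
    return np.interp(source, s_values, mapped)

rng = np.random.default_rng(0)
src = rng.normal(60, 10, 10_000)   # a "dark" image
ref = rng.normal(120, 25, 10_000)  # a "brighter" reference
matched = match_cdf(src, ref)
print(round(matched.mean()), round(ref.mean()))  # the means now roughly agree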
Let's get started: setup the Earth Engine API.
Setup Earth Engine
import ee
ee.Authenticate()
ee.Initialize()
Functions
The following is code to generate the piecewise-linear function using the cumulative count histograms of each image.
def lookup(source_hist, target_hist):
  """Creates a lookup table to make a source histogram match a target histogram.

  Args:
    source_hist: The histogram to modify. Expects the Nx2 array format produced by ee.Reducer.autoHistogram.
    target_hist: The histogram to match to. Expects the Nx2 array format produced by ee.Reducer.autoHistogram.

  Returns:
    A dictionary with 'x' and 'y' properties that respectively represent the x and y array inputs to the ee.Image.interpolate function.
  """
  # Split the histograms by column and normalize the counts.
  source_values = source_hist.slice(1, 0, 1).project([0])
  source_counts = source_hist.slice(1, 1, 2).project([0])
  source_counts = source_counts.divide(source_counts.get([-1]))

  target_values = target_hist.slice(1, 0, 1).project([0])
  target_counts = target_hist.slice(1, 1, 2).project([0])
  target_counts = target_counts.divide(target_counts.get([-1]))

  # Find first position in target where targetCount >= srcCount[i], for each i.
  def make_lookup(n):
    return target_values.get(target_counts.gte(n).argmax())

  lookup = source_counts.toList().map(make_lookup)

  return {'x': source_values.toList(), 'y': lookup}
This code starts by splitting the (2D Array) histograms into the pixel values (column 0) and pixel counts (column 1), and normalizes the counts by dividing by the total count (the last value).
Next, for each source bin, it finds the index of the first bin in the target histogram where target_count ≥ src_count[i]. To determine that, we compare each value from source_count to the entire array of target_counts. This comparison generates an array of 0s where the comparison is false and 1s where the comparison is true. The index of the first non-zero value can be found using the Array.argmax() function, and using that index, we can determine the target_value that each src_value should be adjusted to. The output of this function is formatted as a dictionary that's suitable for passing directly into the Image.interpolate() function.
Next, here's the code for generating the histograms and adjusting the images.
def histogram_match(source_img, target_img, geometry):
  """Performs histogram matching for 3-band RGB images by forcing the histogram CDF of source_img to match target_img.

  Args:
    source_img: A 3-band ee.Image to be color matched. Must have bands named 'R', 'G', and 'B'.
    target_img: A 3-band ee.Image for color reference. Must have bands named 'R', 'G', and 'B'.
    geometry: An ee.Geometry that defines the region to generate RGB histograms for. It should intersect both source_img and target_img inputs.

  Returns:
    A copy of src_img color-matched to target_img.
  """
  args = {
      'reducer': ee.Reducer.autoHistogram(**{'maxBuckets': 256, 'cumulative': True}),
      'geometry': geometry,
      'scale': 1,  # Need to specify a scale, but it doesn't matter what it is because bestEffort is true.
      'maxPixels': 65536 * 4 - 1,
      'bestEffort': True
  }

  # Only use pixels in target that have a value in source (inside the footprint and unmasked).
  source = source_img.reduceRegion(**args)
  target = target_img.updateMask(source_img.mask()).reduceRegion(**args)

  return ee.Image.cat(
      source_img.select(['R']).interpolate(**lookup(source.getArray('R'), target.getArray('R'))),
      source_img.select(['G']).interpolate(**lookup(source.getArray('G'), target.getArray('G'))),
      source_img.select(['B']).interpolate(**lookup(source.getArray('B'), target.getArray('B')))
  ).copyProperties(source_img, ['system:time_start'])
This code runs a reduceRegion() on each image to generate a cumulative histogram, making sure that only pixels that are in both images are included when computing the histograms (just in case there might be a cloud or something else just outside of the high-res image, that might distort the results). It's not important to generate that histogram with a really high fidelity, so the maxPixels argument is set to use less than "4 tiles" of data (256 * 256 * 4) and bestEffort is turned on, to make the computation run fast. When these arguments are set this way, the reduceRegion() function will try to figure out how many pixels it would need to process at the given scale, and if that's greater than the maxPixels value, it computes a lower scale to keep the total number of pixels below maxPixels. That all means you need to specify a scale, but it doesn't matter what it is as it'll be mostly ignored.
This code then generates the lookup tables for each band in the input image, calls the interpolate() function for each, and combines the results into a single image.
Before the histogram_match function can be used, we need to identify the source and target images. The following function is for finding a target RGB-reference image within a given image collection that is nearest in observation date to the image we want color matched.
def find_closest(target_image, image_col, days):
  """Filter images in a collection by date proximity and spatial intersection to a target image.

  Args:
    target_image: An ee.Image whose observation date is used to find near-date images in the provided image_col image collection. It must have a 'system:time_start' property.
    image_col: An ee.ImageCollection to filter by date proximity and spatial intersection to the target_image. Each image in the collection must have a 'system:time_start' property.
    days: A number that defines the maximum number of days difference allowed between the target_image and images in the image_col.

  Returns:
    An ee.ImageCollection that has been filtered to include those images that are within the given date proximity to target_image and intersect it spatially.
  """
  # Compute the timespan for N days (in milliseconds).
  range = ee.Number(days).multiply(1000 * 60 * 60 * 24)

  filter = ee.Filter.And(
      ee.Filter.maxDifference(range, 'system:time_start', None, 'system:time_start'),
      ee.Filter.intersects('.geo', None, '.geo'))

  closest = (ee.Join.saveAll('matches', 'measure')
             .apply(ee.ImageCollection([target_image]), image_col, filter))

  return ee.ImageCollection(ee.List(closest.first().get('matches')))
The previous functions are generically useful for performing image histogram matching; they are not specific to any particular image or image collection. They are the building blocks for the procedure.
Application
The following steps are specific to the SkySat-to-Landsat scenario introduced earlier.
First define a region of interest and the source SkySat image to histogram-match to Landsat 8 images; we'll also clip the image by the region of interest.
geometry = ee.Geometry.Polygon(
    [[[-155.97117211519446, 20.09006980142336],
      [-155.97117211519446, 19.7821681268256],
      [-155.73256280122962, 19.7821681268256],
      [-155.73256280122962, 20.09006980142336]]], None, False)

skysat = (ee.Image('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB/s01_20161020T214047Z')
          .clip(geometry))
Next prepare a Landsat 8 collection by applying a cloud/shadow mask, scaling, and selecting/renaming RGB bands.
def prep_landsat(image):
  """Apply cloud/shadow mask and select/rename Landsat 8 bands."""
  qa = image.select('pixel_qa')
  return (image.updateMask(
      qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 5).eq(0)))
          .divide(10000)
          .select(['B4', 'B3', 'B2'], ['R', 'G', 'B'])
          .copyProperties(image, ['system:time_start']))

# Get the landsat collection, cloud masked and scaled to surface reflectance.
landsat_col = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
               .filterBounds(geometry)
               .map(prep_landsat))
Now find the Landsat images within 32 days of the SkySat image, sort the images by cloud cover and then mosaic them. Use the result as the reference image to histogram-match the SkySat image to.
reference = find_closest(skysat, landsat_col, 32).sort('CLOUD_COVER').mosaic()
result = histogram_match(skysat, reference, geometry)
Results
Setup folium for interactive map viewing.
import folium

# Define a method for displaying Earth Engine image tiles on a folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
  map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
  folium.raster_layers.TileLayer(
      tiles=map_id_dict['tile_fetcher'].url_format,
      attr='Map Data © Google Earth Engine',
      name=name,
      overlay=True,
      control=True).add_to(self)

# Add the method to the folium Map class.
folium.Map.add_ee_layer = add_ee_layer
Define a folium map object, add layers, and display it. Until you zoom in really far, it's nearly impossible to tell where the Landsat image ends and the SkySat image begins.
lon, lat, zoom = -155.79584, 19.99866, 13
map_matched = folium.Map(location=[lat, lon], zoom_start=zoom)

vis_params_refl = {'min': 0, 'max': 0.25}
vis_params_dn = {'min': 0, 'max': 255}

map_matched.add_ee_layer(reference, vis_params_refl, 'Landsat-8 reference')
map_matched.add_ee_layer(skysat, vis_params_dn, 'SkySat source')
map_matched.add_ee_layer(result, vis_params_refl, 'SkySat matched')

display(map_matched.add_child(folium.LayerControl()))
Caveats
If there's anything anomalous in your image that's not in the reference image (or vice versa), like clouds, the CDF can end up skewed, and the histogram matching results might not look that good. Additionally, a little mis-registration between the source and target images is usually ok, since it is using the statistics of the whole region and doesn't really rely on a pixel-to-pixel correspondence.
|
https://developers.google.cn/earth-engine/tutorials/community/histogram-matching
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Windows Subsystems¶
On Windows, you can run different subsystems that enhance the operating system with UNIX capabilities.
Conan supports MSYS2, CYGWIN, WSL and, in general, any subsystem that is able to run a bash shell.
Many libraries use these subsystems in order to use the Unix tools like the Autoconf suite that generates Makefiles.
The difference between MSYS2 and CYGWIN is that MSYS2 is oriented to the development of native Windows packages, while CYGWIN tries to provide a complete POSIX-like system to run any Unix application on it.
For that reason, we recommend the use of MSYS2 as a subsystem to be used with Conan.
Operation Modes¶
The MSYS2 and CYGWIN can be used with different operation modes:
- You can use them together with MinGW to build Windows-native software.
- You can use them together with any other compiler to build Windows-native software, even with Visual Studio.
- You can use them with MinGW to build specific software for the subsystem, with a dependency on a runtime DLL (msys-2.0.dll and cygwin1.dll).
If you are building specific software for the subsystem, you have to specify a value for the setting os.subsystem. If you are only using the subsystem to take advantage of the UNIX tools while generating native Windows software, you shouldn't specify it.
Running commands inside the subsystem¶
self.run()¶
In a Conan recipe, you can use the self.run method with the parameter win_bash=True, which will automatically call the tool tools.run_in_windows_bash.
It will use the bash in the path or the bash specified for the environment variable CONAN_BASH_PATH to run the specified command.
Conan will automatically escape the command to match the detected subsystem.
If you also set the msys_mingw parameter to False and the subsystem is MSYS2, it will run in Windows-native mode and the compiler won't link against msys-2.0.dll.
Controlling the build environment¶
Building software in a Windows subsystem for a different compiler than MinGW can sometimes be painful. The reason is how the subsystem finds your compiler/tools in your system.
For example, the icu library requires Visual Studio to be built in Windows, but also a subsystem able to build the Makefile. A very common problem and example of the pain is the link.exe program.
In the Visual Studio suite, link.exe is the linker, but in the MSYS2 environment the link.exe is a tool to manage symbolic links.
Conan is able to prioritize the tools when you use build_requires, and put the tools in the PATH in the right order.
There are some packages you can use as build_requires:
From Conan-center:
- mingw_installer/1.0@conan/stable: MinGW compiler installer as a Conan package.
- msys2/20190524@: MSYS2 subsystem as a Conan package (Conan Center Index).
- cygwin_installer/2.9.0@bincrafters/stable: Cygwin subsystem as a Conan package.
For example, create a profile and name it msys2_mingw with the following contents:
[build_requires] mingw_installer/1.0@conan/stable msys2/20190524 [settings] os_build=Windows os=Windows arch=x86_64 arch_build=x86_64 compiler=gcc compiler.version=4.9 compiler.exception=seh compiler.libcxx=libstdc++11 compiler.threads=posix build_type=Release
Then you can have a conanfile.py that uses self.run() with win_bash=True to run any command in a bash terminal, or uses the AutoToolsBuildEnvironment to invoke configure/make in the subsystem:
from conans import ConanFile, AutoToolsBuildEnvironment
import os

class MyToolchainXXXConan(ConanFile):
    name = "mylib"
    version = "0.1"
    ...

    def build(self):
        self.run("some_command", win_bash=True)
        env_build = AutoToolsBuildEnvironment(self, win_bash=True)
        env_build.configure()
        env_build.make()
        ...
Apply the profile in your recipe to create a package using the MSYS2 and MINGW:
$ conan create . user/testing --profile msys2_mingw
As we included in the profile the MinGW and then the MSYS2 build requires, when we run a command the PATH will contain first the MinGW tools and finally the MSYS2 ones.
What could we do about the Visual Studio issue with link.exe? You can pass an additional parameter to run_in_windows_bash with a dictionary of environment variables that take priority over the others:
def build(self):
    # ...
    vs_path = tools.vcvars_dict(self)["PATH"]  # Extract the path from the vcvars_dict tool
    tools.run_in_windows_bash(self, command, env={"PATH": vs_path})
So you will get the link.exe from Visual Studio first.
Also, Conan has a tool tools.remove_from_path that you can use in a recipe to temporarily remove a tool from the path if you know that it can interfere with your build script:
class MyToolchainXXXConan(ConanFile):
    name = "mylib"
    version = "0.1"
    ...

    def build(self):
        with tools.remove_from_path("link"):
            # Call something
            self.run("some_command", win_bash=True)
        ...
|
https://docs.conan.io/en/1.32/systems_cross_building/windows_subsystems.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Clipping images with patches#
Demo of image that's been clipped by a circular patch.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.cbook as cbook

with cbook.get_sample_data('grace_hopper.jpg') as image_file:
    image = plt.imread(image_file)

fig, ax = plt.subplots()
im = ax.imshow(image)
patch = patches.Circle((260, 200), radius=200, transform=ax.transData)
im.set_clip_path(patch)

ax.axis('off')
plt.show()
|
https://matplotlib.org/stable/gallery/images_contours_and_fields/image_clip_path.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Translates predefined Japanese character classes.
Standard C Library (libc.a)
#include <ctype.h>

int atojis (Character)
int Character;

int jistoa (Character)
int Character;

int _atojis (Character)
int Character;

int _jistoa (Character)
int Character;

int tojupper (Character)
int Character;

int tojlower (Character)
int Character;

int _tojupper (Character)
int Character;

int _tojlower (Character)
int Character;

int toujis (Character)
int Character;

int kutentojis (Character)
int Character;

int tojhira (Character)
int Character;

int tojkata (Character)
int Character;
When running AIX with Japanese Language Support on your system, the legal value of the Character parameter is in the range from 0 to NLCOLMAX.
The jistoa subroutine converts an SJIS ASCII equivalent to the corresponding ASCII equivalent. The atojis subroutine converts an ASCII character to the corresponding SJIS equivalent. Other values are returned unchanged.
The _jistoa and _atojis routines are macros that function like the jistoa and atojis subroutines, but are faster and have no error checking function.
The tojlower subroutine converts a SJIS uppercase letter to the corresponding SJIS lowercase letter. The tojupper subroutine converts an SJIS lowercase letter to the corresponding SJIS uppercase letter. All other values are returned unchanged.
The _tojlower and _tojupper routines are macros that function like the tojlower and tojupper subroutines, but are faster and have no error-checking function.
The toujis subroutine sets all parameter bits that are not 16-bit SJIS code to 0.
The kutentojis subroutine converts a kuten code to the corresponding SJIS code. The kutentojis routine returns 0 if the given kuten code is invalid.
The tojhira subroutine converts an SJIS katakana character to its SJIS hiragana equivalent. Any value that is not an SJIS katakana character is returned unchanged.
The tojkata subroutine converts an SJIS hiragana character to its SJIS katakana equivalent. Any value that is not an SJIS hiragana character is returned unchanged.
The _tojhira and _tojkata subroutines attempt the same conversions without checking for valid input.
For all functions except the toujis subroutine, the out-of-range parameter values are returned without conversion.
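A small illustrative C program (not taken from this reference page; the chosen input character is just an example) that round-trips a character through atojis/jistoa and applies tojupper:

#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int narrow = 'a';
    int wide = atojis(narrow);      /* SJIS (full-width) equivalent of 'a' */
    int back = jistoa(wide);        /* back to the ASCII 'a' */
    int upper = tojupper(wide);     /* SJIS uppercase equivalent */

    printf("narrow=%#x wide=%#x back=%#x upper=%#x\n",
           narrow, wide, back, upper);
    return 0;
}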
These subroutines are part of Base Operating System (BOS) Runtime.
The ctype subroutine, conv subroutine, getc, getchar, fgetc, or getw subroutine, getwc, fgetwc, or getwchar subroutine, setlocale subroutine.
List of Character Manipulation Services, National Language Support Overview for Programming, Subroutines Overview in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
|
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/basetrf1/jconv.htm
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Typed, extensible, dependency free configuration reader for Python projects for multiple config sources and working well in IDEs for great autocomplete performance.
Project description
typed-config
Typed, extensible, dependency free configuration reader for Python projects for multiple config sources and working well in IDEs for great autocomplete performance.
pip install typed-config
Requires python 3.6 or above.
Basic usage
# my_app/config.py
from typedconfig import Config, key, section
from typedconfig.source import EnvironmentConfigSource

@section('database')
class AppConfig(Config):
    host = key(cast=str)
    port = key(cast=int)
    timeout = key(cast=float)

config = AppConfig()
config.add_source(EnvironmentConfigSource())
config.read()
# my_app/main.py
from my_app.config import config

print(config.host)
In PyCharm, and hopefully other IDEs, it will recognise the datatypes of your configuration and allow you to autocomplete. No more remembering strings to get the right thing out!
Upgrading from 0.x.x
There is one breaking change when moving from 0.x.x to 1.x.x. The key function now expects all arguments to be keyword arguments. So simply replace any calls like so:
key('section', 'key', True, str, 'default')  # 0.x.x
key(section_name='section', key_name='key', required=True, cast=str, default='default')  # 1.x.x
The reason for this change is to tie down the return type of key properly. Previously, when required=False or when cast=None the return type would not include the possibility of None or string. The type checker should now be able to infer the return type based on the values of required, cast and default.
How it works
Configuration is always supplied in a two level structure, so your source configuration can have multiple sections, and each section contains multiple key/value configuration pairs. For example:
[database] host = 127.0.0.1 port = 2000 [algorithm] max_value = 10 min_value = 20
You then create your configuration hierarchy in code (this can be flat or many levels deep) and supply the matching between strings in your config sources and properties of your configuration classes.
You provide one or more ConfigSources, from which the config for your application can be read. For example, you might supply an EnvironmentConfigSource and two IniFileConfigSources. This would make your application first look for a configuration value in environment variables; if not found there it would then look at the first INI file (perhaps a user-specific file), before falling back to the second INI file (perhaps a default configuration shared between all users). If a parameter is still not found and is a required parameter, an error would be thrown.
There is emphasis on type information being available for everything so that an IDE will autocomplete when trying to use your config across your application.
Multiple data sources
from typedconfig import Config, key, section, group_key from typedconfig.source import EnvironmentConfigSource, IniFileConfigSource @section('database') class DatabaseConfig(Config): host = key(cast=str) port = key(cast=int) username = key(cast=str) password = key(cast=str) config = DatabaseConfig() config.add_source(EnvironmentConfigSource(prefix="EXAMPLE")) config.add_source(IniFileConfigSource("config.cfg")) # OR provide sources directly to the constructor config = DatabaseConfig(sources=[ EnvironmentConfigSource(prefix="EXAMPLE"), IniFileConfigSource("config.cfg") ])
Since you don't want to hard code your secret credentials, you might supply them through the environment. So for the above configuration, the environment might look like this:
export EXAMPLE_DATABASE_USERNAME=my_username export EXAMPLE_DATABASE_PASSWORD=my_very_secret_password export EXAMPLE_DATABASE_PORT=2001
Those values which couldn't be found in the environment would then be read from the INI file, which might look like this:
[database] HOST = db1.mydomain.com PORT = 2000
Note that after this, config.port will be equal to 2001, as the value in the environment took priority over the value in the INI file.
Caching
When config values are first used, they are read. This is lazy evaluation by default so that not everything is read if not necessary.
After first use, they are cached in memory so that there should be no further I/O if the config value is used again.
For fail fast behaviour, and also to stop unexpected latency when a config value is read partway through your application (e.g. your config could be coming across a network), the option is available to read all config values at the start. Just call
config.read()
This will throw an exception if any required config value cannot be found, and will also keep all read config values in memory for the next time they are used. If you do not use read, you will only get the exception when you first try to use the offending config key.
Hierarchical configuration
Use group_key to represent a "sub-config" of a configuration. Set up "sub-configs" exactly as demonstrated above, and then create a parent config to compose them in one place.
from typedconfig import Config, key, section, group_key from typedconfig.source import EnvironmentConfigSource, IniFileConfigSource @section('database') class DatabaseConfig(Config): host = key(cast=str) port = key(cast=int) @section('algorithm') class AlgorithmConfig(Config): max_value = key(cast=float) min_value = key(cast=float) class ParentConfig(Config): database = group_key(DatabaseConfig) algorithm = group_key(AlgorithmConfig) description = key(cast=str, section_name="general") config = ParentConfig() config.add_source(EnvironmentConfigSource(prefix="EXAMPLE")) config.add_source(IniFileConfigSource("config.cfg")) config.read()
The first time config.database or config.algorithm is accessed (which in the case above is when read() is called), an instance will be instantiated. Notice that it is the class definition, not an instance of the class, which is passed to the group_key function.
Custom section/key names, optional parameters, default values
Let's take a look at this:
from typedconfig import Config, key, section @section('database') class AppConfig(Config): host1 = key() host2 = key(section_name='database', key_name='HOST2', required=True, cast=str, default=None)
Both host1 and host2 are legitimate configuration key definitions.
- section_name - the name of the section in the configuration source from which this parameter should be read. This can be provided on a key-by-key basis, but if it is left out then the section name supplied by the @section decorator is used. If all keys supply a section_name, the class decorator is not needed. If both section_name and a decorator are provided, the section_name argument takes priority.
- key_name - the name of this key in the configuration source from which this parameter is read. If not supplied, some magic uses the object property name as the key name.
- required - default True. If False, and the configuration value can't be found, no error will be thrown and the default value will be used, if provided. If a default is not provided, None will be used.
- cast - probably the most important option for typing. If you want autocomplete typing support you must specify this. It's just a function which takes a string as an input and returns a parsed value. See the casting section for more. If not supplied, the value remains as a string.
- default - only applicable if required is false. When required is false this value is used if a value cannot be found.
Types
from typedconfig import Config, key, section
from typing import List

def split_str(s: str) -> List[str]:
    return [x.strip() for x in s.split(",")]

@section('database')
class AppConfig(Config):
    host = key()
    port = key(cast=int)
    users = key(cast=split_str)
    zero_based_index = key(cast=lambda x: int(x)-1)

config = AppConfig(sources=[...])
In this example we have three ways of casting:
- Not casting at all. This defaults to returning a str, but your IDE won't know that, so if you want type hints use cast=str.
- Casting to a built-in type which can take a string input and parse it, for example int.
- Defining a custom function. Your function should take one string input and return one output of any type. To get type hints, just make sure your function has type annotations.
- Using a lambda expression. The type inference may or may not work depending on your expression, so if it doesn't, just write it as a function with type annotations.
Validation
You can validate what has been supplied by providing a custom cast function to a key, which validates the configuration value in addition to parsing it.
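For example, a cast function that parses and validates a port number (an illustrative sketch):

from typedconfig import Config, key, section

def port_number(value: str) -> int:
    """Parse a TCP port and reject values outside the valid range."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"Invalid port number: {port}")
    return port

@section('database')
class DatabaseConfig(Config):
    port = key(cast=port_number)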
Extending configuration using shared ConfigProvider
Multiple application modules may use different configuration schemes while sharing the same configuration source. Analogously, various Config classes may provide different views of the same configuration data, sharing the same ConfigProvider.
# app/config.py from typedconfig.provider import ConfigProvider from typedconfig.source import EnvironmentConfigSource, IniFileConfigSource provider = ConfigProvider() provider.add_source(EnvironmentConfigSource(prefix="EXAMPLE")) provider.add_source(IniFileConfigSource("config.cfg")) __all__ = ["provider"]
# app/database/config.py from typedconfig import Config, key, section from app.config import provider @section('database') class DatabaseConfig(Config): host = key(cast=str) port = key(cast=int) database_config = DatabaseConfig(provider=provider)
# app/algorithm/config.py from typedconfig import Config, key, section from app.config import provider @section('algorithm') class AlgorithmConfig(Config): max_value = key(cast=float) min_value = key(cast=float) algorithm_config = AlgorithmConfig(provider=provider)
A shared configuration provider can be used by plugins, which may need to declare additional configuration sections within the same configuration files as the main application. Let's assume we have a [database] section used by the main application and an [app_extension] section that provides 3rd-party plugin configuration:
[database] host = 127.0.0.1 port = 2000 [app_extension] api_key = secret
e.g. app/config.py may look like this:
from typedconfig import Config, key, section, group_key from typedconfig.source import EnvironmentConfigSource, IniFileConfigSource @section('database') class DatabaseConfig(Config): """Database configuration""" host = key(cast=str) port = key(cast=int) class ApplicationConfig(Config): """Main configuration object""" database = group_key(DatabaseConfig) app_config = ApplicationConfig(sources=[ EnvironmentConfigSource(), IniFileConfigSource("config.cfg") ])
and the plugin can read additional sections by using the same configuration provider as the main application config, e.g. plugin/config.py:
from typedconfig import Config, key, section, group_key from app.config import ApplicationConfig, app_config @section('app_extension') class ExtensionConfig(Config): """Extension configuration""" api_key = key(cast=str) # ExtendedAppConfig extends ApplicationConfig # so original sections are also included class ExtendedAppConfig(ApplicationConfig): """Extended main configuration object""" app_extension = group_key(ExtensionConfig) # ExtendedAppConfig uses the same provider as the main app_config extended_config = ExtendedAppConfig(provider=app_config.provider)
from plugin.config import extended_config # Plugin can access both main and extra sections print(extended_config.app_extension.api_key) print(extended_config.database.host)
Configuration variables which depend on other configuration variables
Sometimes you may wish to set the value of some configuration variables based on others. You may also wish to validate some variables, for example allowed values may be different depending on the value of another config variable. For this you can add a post_read_hook.
The default implementation of post_read_hook returns an empty dict. You can override this by implementing your own post_read_hook method. It should receive only self as an input, and return a dict. This dict should be a simple mapping from config keys to values. For hierarchical configurations, you can nest the dictionaries. If you provide a post_read_hook in both a parent and a child class which both make changes to the same keys (don't do this) then the values returned by the child method will overwrite those by the parent.
This hook is called whenever you call the read method. If you use lazy loading and skip calling the read method, you cannot use this hook.
# my_app/config.py from typedconfig import Config, key, group_key, section from typedconfig.source import EnvironmentConfigSource @section('child') class ChildConfig(Config): http_port_plus_one = key(cast=int, required=False) @section('app') class AppConfig(Config): use_https = key(cast=bool) http_port = key(key_name='port', cast=int, required=False) child = group_key(ChildConfig) def post_read_hook(self) -> dict: config_updates = dict() # If the port has not been provided, set it based on the value of use_https if self.http_port is None: config_updates.update(http_port=443 if self.use_https else 80) else: # Modify child config config_updates.update(child=dict(http_port_plus_one=self.http_port + 1)) # Validate that the port number has a sensible value # It is recommended to do validation inside the cast method for individual keys, however for dependent keys it can be useful here if self.http_port is not None: if self.use_https: assert self.http_port in [443, 444, 445] else: assert self.http_port in [80, 81, 82] return config_updates config = AppConfig() config.add_source(EnvironmentConfigSource()) config.read()
Configuration Sources
Configuration sources are how your main Config class knows where to get its data from. These are totally extensible so that you can read in your configuration from wherever you like - from a database, from S3, anywhere that you can write code for.
You supply your configuration source to your config after you've instantiated it, but before you try to read any data from it:
config = AppConfig() config.add_source(my_first_source) config.add_source(my_second_source) config.read()
Or you can supply the sources directly in the constructor like this:
config = AppConfig(sources=[my_first_source, my_second_source]) config.read()
Modifying or refreshing configuration after it has been loaded
In general it is bad practice to modify configuration at runtime because the configuration for your program should be fixed for the duration of it. However, there are cases where it may be necessary.
To completely replace the set of config sources, you can use
config = AppConfig(sources=[my_first_source, my_second_source]) config.set_sources([my_first_new_source, my_second_new_source])
To replace a specific config source, for example because a config file has changed and you need to re-read it from disk, you can use replace_source:
from typedconfig.source import IniFileConfigSource original_source = IniFileConfigSource("config.cfg") config = AppConfig(sources=[source]) # Now say you change the contents to config.cfg and need to read it again new_source = IniFileConfigSource("config.cfg") # re-reads file during construction config.replace_source(original_source, new_source)
Important: if you add or modify the config sources after the config has been read, or need to refresh the config for some reason, you'll need to clear any cached values in order to force the config to be fetched from the ConfigSources again. You can do this by:
config.clear_cache()
config.read()  # Read all configuration values again
Supplied Config Sources
EnvironmentConfigSource
This just reads configuration from environment variables.
from typedconfig.source import EnvironmentConfigSource source = EnvironmentConfigSource(prefix="XYZ") # OR just source = EnvironmentConfigSource()
It just takes one optional input argument, a prefix. This can be useful to avoid name clashes in environment variables.
- If prefix is provided, environment variables are expected to look like {PREFIX}_{SECTION}_{KEY}, for example export XYZ_DATABASE_PORT=2000.
- If no prefix is provided, environment variables should look like {SECTION}_{KEY}, for example export DATABASE_PORT=2000.
IniFileConfigSource
This reads from an INI file using Python's built-in configparser. Read the docs for configparser for more about the structure of the file.
from typedconfig.source import IniFileConfigSource source = IniFileConfigSource("config.cfg", encoding='utf-8', must_exist=True)
- The first argument is the filename (absolute or relative to the current working directory).
- encoding is the text encoding of the file. configparser's default is used if not supplied.
- must_exist - default True. If the file can't be found, an error will be thrown by default. Setting must_exist to False allows the file not to be present, in which case this source will just report that it can't find any configuration values and your Config class will move on to looking in the next ConfigSource.
IniStringConfigSource
This reads from a string instead of a file
from typedconfig.source import IniStringConfigSource source = IniStringConfigSource(""" [section_name] key_name=key_value """)
DictConfigSource
The most basic source, entirely in memory, and also useful when writing tests. It is case insensitive.
from typedconfig.source import DictConfigSource source = DictConfigSource({ 'database': dict(HOST='db1', PORT='2000'), 'algorithm': dict(MAX_VALUE='20', MIN_VALUE='10') })
It expects data of type Dict[str, Dict[str, str]], i.e. such that string_value = d['section_name']['key_name']. Everything should be provided as string data so that it can be parsed in the same way as if the data was coming from a file or elsewhere.
This is an alternative way of supplying default values instead of using the default option when defining your keys. Just provide a DictConfigSource as the lowest priority source, containing your defaults.
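A sketch of that pattern, reusing the AppConfig class from the basic usage example above (the default values shown are placeholders):

from typedconfig.source import DictConfigSource, EnvironmentConfigSource

config = AppConfig()
# Highest priority first: environment variables override the defaults below.
config.add_source(EnvironmentConfigSource())
# Lowest priority: defaults, provided as strings just like any other source.
config.add_source(DictConfigSource({
    'database': dict(HOST='localhost', PORT='5432', TIMEOUT='10.0'),
}))
config.read()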
Writing your own ConfigSources
An abstract base class ConfigSource is supplied. You should extend it and implement the method get_config_value as demonstrated below, which takes a section name and key name, and returns either a str config value, or None if the value could not be found. It should not error if the value cannot be found; Config will throw an error later if it still can't find the value in any of its other available sources. To make it easier for the user, try to make your source case insensitive.
Here's an outline of how you might implement a source to read your config from a JSON file, for example. Use the __init__ method to provide any information your source needs to fetch the data, such as filename, API details, etc. You can do sanity checks in the __init__ method and throw an error if something is wrong.
import json from typing import Optional from typedconfig.source import ConfigSource class JsonConfigSource(ConfigSource): def __init__(self, filename: str): # Read data - will raise an exception if problem with file with open(filename, 'r') as f: self.data = json.load(f) # Quick checks on data format assert type(self.data) is dict for k, v in self.data.items(): assert type(k) is str assert type(v) is dict for v_k, v_v in v.items(): assert type(v_k) is str assert type(v_v) is str # Convert all keys to lowercase self.data = { k.lower(): { v_k.lower(): v_v for v_k, v_v in v.items() } for k, v in self.data.items() } def get_config_value(self, section_name: str, key_name: str) -> Optional[str]: # Extract info from data which we read in during __init__ section = self.data.get(section_name.lower(), None) if section is None: return None return section.get(key_name.lower(), None)
Additional config sources
In order to keep `typed-config` dependency free, `ConfigSource`s requiring additional dependencies are in separate packages, which also have `typed-config` as a dependency.
These are listed here:
Contributing
Ideas for new features and pull requests are welcome. PRs must come with tests included. This was developed using Python 3.7 but Travis tests run with all versions 3.6-3.9 too.
Development setup
- Clone the git repository
- Create a virtual environment: `virtualenv venv`
- Activate the environment: `venv/scripts/activate`
- Install development dependencies: `pip install -r requirements.txt`
Running tests
pytest
To run with coverage:
pytest --cov
Making a release
- Bump version number in `typedconfig/__version__.py`
- Add changes to CHANGELOG.md
- Commit your changes and tag with `git tag -a v0.1.0 -m "Summary of changes"`
- Travis will deploy the release to PyPi for you.
Staging release
If you want to check how a release will look on PyPI before tagging and making it live, you can do the following:
- `pip install twine` if you don't already have it
- Bump version number in `typedconfig/__version__.py`
- Clear the dist directory: `rm -r dist`
- `python setup.py sdist bdist_wheel`
- `twine check dist/*`
- Upload to the test PyPI: `twine upload --repository-url dist/*`
- Check all looks ok on the test PyPI
- If all looks good you can git tag and push for deploy to live PyPI
Here is a good tutorial on publishing packages to PyPI.
Get the maximum priority for the scheduling policy
```c
#include <sched.h>

int sched_get_priority_max( int policy );
```
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The sched_get_priority_max() function returns the maximum value for the scheduling policy specified by policy.
See sched_getparam().
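A minimal usage sketch (assuming a POSIX target where SCHED_FIFO is available; the function follows the usual -1/errno error convention):

```c
#include <sched.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(void)
{
    /* Query the highest priority allowed for the FIFO scheduling policy */
    int max = sched_get_priority_max(SCHED_FIFO);
    if (max == -1) {
        fprintf(stderr, "sched_get_priority_max: %s\n", strerror(errno));
        return 1;
    }
    printf("Maximum SCHED_FIFO priority: %d\n", max);
    return 0;
}
```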
Kubernetes - vi-credo/knowledge_corner Wiki
Contents
- Architecture
- Installation and configuration
- Maintenance
- API
- Manifest
- Pod
- Controllers
- Scheduling
- Autoscaling
- Resource management
- Observability
- Monitoring
- Network
- Storage
- Security
- kubectl
- References
Registration (new nodes can easily register with a master node and accept a workload) and service discovery (automatic detection of new services via DNS or environment variables) enable easy scalability and availability.
Containers are isolated user spaces per running application code. The user space is all the code that resides above the kernel, and includes the applications and their dependencies. Abstraction is at the level of the application and its dependencies.
Containerization helps with dependency isolation and integration problem troubleshooting.
Core technologies that enhanced containerization:
- process - each process has its own virtual memory address space separate from others
- Linux namespaces - used to control what application can see (process ID numbers, directory trees, IP addresses, etc)
- cgroups - controls what resources an application can use (CPU time, memory, IO bandwidth, etc)
- union file system - encapsulating application with its dependencies
Everything in Kubernetes is represented by an object with state and attributes that user can change. Each object has two elements: object spec (desired state) and object state (current state). All Kubernetes objects are identified by a unique name(set by user) and a unique identifier(set by Kubernetes).
Sample image for testing
gcr.io/google-samples/hello-app:1.0.
Architecture
Cluster add-on Pods provide special services to the cluster, f.e. DNS, Ingress (HTTP load balancer), dashboard. Popular options are Fluentd for logging and Prometheus for metrics.
Control plane (master) components:
Worker components (also present on control plane node):
API server
Exposes RESTFUL operations and accepts commands to view or change the state of
a cluster (user interacts with it via
kubectl). Handles all calls, both
internal and external. All actions are validated and authenticated. Manages
cluster state stored in etcd database being the only component that has
connection to it.
Scheduler
Watches API server for unscheduled Pods and schedules them on nodes. Uses an algorithm to determine where a Pod can be scheduled: first current quota restrictions are checked, then taints, tolerations, labels, etc. Scheduling is done by simply adding the node in Pod's data.
Controller manager
Continuously monitors cluster's state through API server. If state does not match, contacts necessary controller to match the desired state. Multiple roles are included in a single binary:
- node controller - worker state
- replication controller - maintaining correct number of Pods
- endpoint controller - joins services and pods together
- service account and token controller - access management
pod-eviction-timeout (default 5m) specifies a timeout after which Kubernetes
should give up on a node and reschedule Pod(s) to a different node.
ETCD
Cluster's database (distributed b+tree key-value store) for storing cluster, network states and other persistent info. Instead of updating existing data, new data is always appended to the end; previous copies are marked for removal.
The
etcdctl command allows for snapshot save and snapshot restore.
Cloud manager
Manages controllers that interact with external cloud providers.
Container runtime
Container runtime, which handles container's lifecycle. Kubernetes supports the following and can use any other that is CRI (Container Runtime Interface) compliant:
- docker
- containerd, includes `ctr` for managing images and `crictl` for managing containers
- CRI-O
- frakti
Each node can run different container runtime.
Kubelet
Kubernetes agent on each node that interacts with API server. Responsible for Pod's lifecycle.
- receives `PodSpec` (Pod specification)
- passes requests to local container runtime
- mounts volumes to Pods
- ensures access to storage, Secrets and ConfigMaps
- executes health checks for Pod/node
kubelet process is managed by systemd when building cluster with kubeadm. Once
running it will start every Pod found in
/etc/kubernetes/manifests.
Proxy
Provides network connectivity to Pods and maintains all networking rules using
iptables entries. Works as a local load-balancer, forwards TCP and UDP
traffic, implements services.
kube-proxy has 3 modes:
- userspace mode
- `iptables` mode
- `ipvs` mode (alpha)
High availability
HA cluster (stacked etcd topology) has at least 3 control plane nodes, since
etcd requires at least 2 nodes online to reach a quorum. API server and etcd
Pods run on all control plane nodes, while `scheduler` and `controller-manager`
run in an active/standby arrangement: a lease mechanism ensures only one replica
is active at any given time. A load balancer in front of the API server
evenly distributes traffic from nodes and from outside the cluster. Since API server
and etcd are linked together on a given control plane node, there is a linked
redundancy between components.
In external etcd topology etcd cluster (at least 3 nodes) is set up separately from the control plane. API server references this etcd cluster, while the rest is the same as in previous topology.
Installation and configuration
Recommended minimum hardware requirements:
Cluster network ports:
Self-install options:
- kubeadm
- kubespray - advance Ansible playbook for setting up cluster on various OSs and using different network providers
- kops (Kubernetes operations) - CLI tool for creating a cluster in Cloud (AWS officially supported, GKE, Azure, etc on the way); also provisions necessary cloud infrastructure; how to
- kind - running Kubernetes locally on Docker containers
- Kubernetes in Docker Desktop for Mac
Bootstrapping with kubeadm
kubeadm init performs the following actions in the order by default (highly
customizable):
- Pre-flight checks (permissions on the system, hardware requirements, etc)
- Create certificate authority
- Generate kubeconfig files
- Generate static Pod manifests (for Control Plane components)
- Wait for Control Plane Pods to start
- Taint Control Plane node
- Generate bootstrap token
```bash
# List join token
$ kubeadm token list
# Regenerate join token
$ kubeadm token create
```
- Start add-on Pods (DNS, kube-proxy, etc)
Pod networking
Container-to-container networking is implemented by Pod concept, External-to-Pod is implemented by services, while Pod-to-Pod is expected to be implemented outside Kubernetes by networking configuration.
Overlay networking (also software defined networking) provides layer 3 single network that all Pods can use for intercommunication. Popular network add-ons:
- Flannel - L3 virtual network between nodes of a cluster
- Calico - flat L3 network without IP encapsulation; policy based traffic management;
calicoctl; Felix (interface monitoring and management, route programming, ACL configuration and state reporting) and BIRD (dynamic IP routing) daemons - routing state is read by Felix and distributed to all nodes allowing a client to connect to any node and get connected to a workload even if it is on a different node
- Weave Net multi-host network typically used as an add-on in CNI-enabled cluster.
- Kube-Router
Maintenance
Node
Node is an API object representing a virtual/physical instance outside the cluster. Node leases reside in the `kube-node-lease` namespace.
Toggle scheduling of Pods on the node with `kubectl cordon`/`uncordon`.
```bash
# Remove node from cluster
# 1. remove object from API server
$ kubectl delete node <node_name>
# 2. remove cluster specific info
$ kubeadm reset
# 3. may also need to remove iptables entries

# View CPU, memory and other resource usage, limits, etc
$ kubectl describe node <node_name>
```
If a node is rebooted, Pods running on that node stay scheduled on it until kubelet's eviction timeout parameter (default 5m) is exceeded, after which they are evicted and rescheduled onto other nodes.
Upgrading cluster
kubeadm upgrade
- plan - check installed version against newest in the repository, and verify that upgrade is possible
- apply - upgrade first cp node to the specified version
- diff - (similar to apply --dry-run) show difference applied during an upgrade
- node - allows updating `kubelet` of a worker node or the control plane of other cp nodes; provides access to the phase command to step through the process
Control plane node(s) should be upgraded first. Steps are similar for control plane and worker nodes.
kubeadm-based cluster can only be upgraded by one minor version (e.g 1.16 -> 1.17).
Control plane upgrade
Check available and current versions. Then upgrade `kubeadm` and verify. Drain the
Pods (ignoring DaemonSets). Verify the upgrade plan and apply it. `kubectl get nodes`
would still show the old version at this point. Upgrade `kubelet` and `kubectl` and restart
the daemon. Now `kubectl get nodes` should be updated. Allow Pods to be scheduled on the node.
Worker upgrade
Same process as on the control plane, except that the `kubeadm upgrade` command differs and the `kubectl` commands are executed on the control plane node.
Upgrade CLI
- view available versions
```bash
$ sudo apt update
$ sudo apt-cache madison kubeadm
# view current version
$ sudo apt list --installed | grep -i kube
```
- upgrade `kubeadm` on the given node

```bash
$ sudo apt-mark unhold kubeadm
$ sudo apt-get install kubeadm=<version>
$ sudo apt-mark hold kubeadm
$ sudo kubeadm version
```
- drain pods (from control plane for both)
$ kubectl drain <node_name> --ignore-daemonsets
- view and apply node update

```bash
# control plane
$ sudo kubeadm upgrade plan
$ sudo kubeadm upgrade apply <version>
# worker (on the node)
$ sudo kubeadm upgrade node
```
- upgrade `kubelet` and `kubectl`

```bash
$ sudo apt-mark unhold kubelet kubectl
$ sudo apt-get install kubelet=<version> kubectl=<version>
$ sudo apt-mark hold kubelet kubectl
# restart daemon
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
```
- allow pods to be deployed on the node
$ kubectl uncordon <node_name>
etcd backup
Etcd backup file contains the entire state of the cluster. Secrets are not encrypted (only hashed), therefore, backup file should be encrypted and securely stored.
Usually backup script is performed by Linux or Kubernetes cron jobs.
By default, there is a single etcd Pod running on the control plane node. All data is
stored at `/var/lib/etcd`, which is backed by a `hostPath` volume on the node.
The `etcdctl` version should match the etcd running in the Pod; use the `etcd --version` command to
find out.
```bash
$ ETCDCTL_API=3 etcdctl --endpoints=<host>:<port> <command> <args>

# Running etcdctl on master node
$ ETCDCTL_API=3 etcdctl --endpoints= \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save /var/lib/dat-backup.db

# Check the status of backup
$ etcdctl --write-out=table snapshot status <backup_file>
```
Restoring the backup to the default location:
```bash
$ export ETCDCTL_API=3
# Restore backup to another location
# By default restores in the current directory at subdir ./default.etcd
$ etcdctl snapshot restore <backup_file>
# Move the original data directory elsewhere
$ mv /var/lib/etcd /var/lib/etcd.OLD
# Stop etcd container at container runtime level, since it is a static container
# Move restored backup to default location, /var/lib/etcd
$ mv ./default.etcd /var/lib/etcd
# Restarted etcd will find new data
```
Restoring the backup to the custom location:
```bash
$ etcdctl snapshot restore <backup_file> --data-dir=/var/lib/etcd.custom
# Update static Pod manifest:
# 1. --data-dir=/var/lib/etcd.custom
# 2. mountPath: /var/lib/etcd.custom (volumeMounts)
# 3. path: /var/lib/etcd.custom (volumes, hostPath)
# Updating manifest triggers Pod restart (also kube-controller-manager
# and kube-scheduler)
```
API
Object organization:
- Kind - Pod, Service, Deployment, etc (available object types)
- Group - core, apps, storage, etc (grouped by similar functionality)
- Version - v1, beta, alpha
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
```
Core (Legacy) group includes fundamental objects such as Pod, Node, Namespace, PersistentVolume, etc. Other objects are grouped under named API groups such as apps (Deployment), storage.k8s.io (StorageClass), rbac.authorization.k8s.io (Role) and so on
Versioning follows
Alpha->Beta->Stable lifecycle:
- Alpha (`v1alpha1`) - disabled by default
- Beta (`v1beta1`) - enabled by default, more stable, considered safe and tested, forward changes are backward compatible
- Stable (`v1`) - backwards compatible, production ready
Request
API requests are RESTful (GET, POST, PUT, DELETE, PATCH)
Special API requests:
- LOG - retrieve container logs
- EXEC - exec command in a container
- WATCH - get change notifications on a resource
API resource location:
- Core API: (in namespace)
- API groups
Response codes:
- `2xx` (success) - f.e. 201 (created), 202 (request accepted and performed async)
- `4xx` (client side errors) - f.e. 401 (unauthorized, not authenticated), 403 (access denied), 404 (not found)
- `5xx` (server side errors) - 500 (internal error)
Curl
Get certificates for easy request writing:
```bash
$ export client=$(grep client-cert $HOME/.kube/config | cut -d" " -f6)
$ export key=$(grep client-key-data $HOME/.kube/config | cut -d" " -f6)
$ export auth=$(grep certificate-authority-data $HOME/.kube/config | cut -d" " -f6)
$ echo $client | base64 -d - > ./client.pem
$ echo $key | base64 -d - > ./client-key.pem
$ echo $auth | base64 -d - > ./ca.pem
```
Make requests using keys and certificates from previous step:
$ curl --cert client.pem --key client-key.pem \ --cacert ca.pem \
Another way to make authenticated request is to start a proxy session in the background:
```bash
# once done run fg and ctrl+C
$ kubectl proxy &
# port 8001, or whatever is output by the proxy command
$ curl localhost:8001/<request>
```
Manifest
Minimal Deployment manifest explained:
```yaml
# API version
# if API changes, API objects follow and may introduce breaking changes
apiVersion: apps/v1
# object type
kind: Deployment
metadata:
  # required, must be unique to the namespace
  name: foo-deployment
# specification details of an object
spec:
  # number of pods
  replicas: 1
  # a way for a deployment to identify which pods are members of it
  selector:
    matchLabels:
      app: foo
  # pod specifications
  template:
    metadata:
      # assigned to each pod
      # must match the selector
      labels:
        app: foo
    # container specifications
    spec:
      containers:
      - image: nginx
        name: foo
```
Generate with the `--dry-run` parameter:

```bash
$ kubectl create deployment hello-world \
    --image=nginx \
    --dry-run=client \
    -o yaml > deployment.yaml
```
- Root `metadata` should have at least a name.
- `generation` represents the number of changes made to the object.
- `resourceVersion` value is tied to etcd to help with concurrency of objects. Any change in the database will change this number.
- `uid` - unique id of the object throughout its lifetime.
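These fields can be inspected on any live object; a quick sketch (reusing the example deployment name from above):

```bash
# Show system-managed metadata for the example deployment
$ kubectl get deployment foo-deployment \
    -o jsonpath='{.metadata.uid}{"\n"}{.metadata.generation}{"\n"}{.metadata.resourceVersion}{"\n"}'
```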
Pod
Pod is the smallest deployable object (not container). Pod embodies the environment where container lives, which can hold one or more containers. If there are several containers in a Pod, they share all resources like networking (unique IP is assigned to a Pod), access to storage and namespace (Linux). Containers in a Pod start in parallel (no way to determine which container becomes available first, but InitContainers can set the start up order). Loopback interface, writing to files in a common filesystem or inter-process communication (IPC) can be used by containers within a Pod for communication.
Secondary container may be used for logging, responding to requests, etc. Popular terms are sidecar, adapter, ambassador.
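A minimal sketch of a two-container Pod in the sidecar style (the image, file path and volume names are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  # sidecar tails the shared log volume
  - name: log-tailer
    image: busybox
    command: ['sh', '-c', 'tail -F /logs/access.log']
    volumeMounts:
    - name: logs
      mountPath: /logs
```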
Pod states:
- `Pending` - image is retrieved, but container hasn't started yet
- `Running` - pod is scheduled on a node, all containers are created, at least one is running
- `Succeeded` - containers terminated successfully and won't be restarting
- `Failed` - all containers have terminated, with at least one in failed status
- `Unknown` - most likely a communication error between master and kubelet
- `CrashLoopBackOff` - one of the containers unexpectedly exited after it was restarted at least once (most likely the pod isn't configured correctly); k8s repeatedly makes new attempts
Environment variables
User-defined environment variables are defined in the Pod spec as key/value pairs or via the `valueFrom` parameter referencing some location or other Kubernetes resource.
System-defined environment variables include Service names available at the time of Pod creation.
Neither type can be updated once the Pod is created.
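A short sketch of both styles (the variable names are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx
    env:
    # plain key/value pair
    - name: APP_MODE
      value: "production"
    # value taken from another Kubernetes resource (here, the Pod's own field)
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
```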
Pause container
The pause container is used to get an IP address, prior to other containers,
which is then used in a shared network namespace. This container is not seen
within Kubernetes, but can be discovered by container engine tools.
The other container(s) in the Pod then share this network namespace for the life of the Pod.
Init container
InitContainer runs (must successfully complete) before main application container. Multiple init containers can be specified, in which case they run sequentially (in Pod spec order). Primary reasons are setting up environment, separating duties (different storage and security settings) and environment verification (block main application start up if environment is not properly set up).
```yaml
spec:
  containers:
  - name: main-app
    image: databaseD
  initContainers:
  - name: wait-database
    image: busybox
    command: ['sh', '-c', 'until ls /db/dir ; do sleep 5; done; ']
```
Static Pod
Static Pod is managed by kubelet (not API server) on Nodes. Pod's
manifest is placed on a specific location on a node,
staticPodPath, that
kubelet is continuously watching. Default is
/etc/kubernetes/manifests.
Kubelet's configuration -
/var/lib/kubelet/config.yaml.
Kubelet creates a mirror Pod, so that it is visible to API server commands. However, deleting those Pods through API server will not affect those, and mirror Pod will be shown again.
Control Plane core component manifests - etcd, API server, controller manager, scheduler are static Pods.
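A quick way to check the configured path and drop in a static Pod manifest (paths are the kubeadm defaults mentioned above; the manifest filename is illustrative):

```bash
# Confirm where kubelet looks for static Pod manifests
$ grep staticPodPath /var/lib/kubelet/config.yaml
# Any manifest placed here is picked up automatically by kubelet
$ sudo cp my-static-pod.yaml /etc/kubernetes/manifests/
```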
Resources
Kubernetes does not have a direct manipulation on a container level, but resources can be managed through resources section in PodSpec, where CPU and memory resources that a container can consume can be specified. ResourceQuota object can set hard and soft limits (and of more types of resources) in a namespace, thus, on multiple objects.
Example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: hog
  name: hog
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hog
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hog
    spec:
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        resources:
          limits:
            cpu: "1"
            memory: "2Gi"
          requests:
            cpu: "0.5"
            memory: "500Mi"
        args:
        - -cpus
        - "2"
        - -mem-total
        - "950Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
Probes
Probes let you run custom health checks on container(s) in a Pod.
- `livenessProbe` is a continuous check to see if a container is running. The restart policy is applied on a failure event.
- `readinessProbe` is a diagnostic check to see if a container is ready to receive requests. On a failure event the Pod's IP address is removed from all service endpoints by the endpoint controller (restart policy is not applied). Usually used to protect applications that temporarily can't serve requests.
- `startupProbe` - one-time check during the startup process, ensuring containers are in a `Ready` state. All other probes are disabled until `startupProbe` succeeds. On a failure event the restart policy is applied. Usually used for applications requiring long startup times.
Probes can be defined using 3 types of handlers: command, HTTP, and TCP.
- command's exit code of zero is considered healthy:

```yaml
exec:
  command:
  - cat
  - /tmp/ready
```

- HTTP GET request return code >= 200 and < 400:

```yaml
httpGet:
  path: /healthz
  port: 8080
```

- successful attempt establishing a TCP connection:

```yaml
tcpSocket:
  port: 8080
```
Settings:
Lifecycle
Stopping/terminating Pod: When stop command is sent to a Pod, SIGTERM is sent to containers and Pod's status is set to Terminating. If container is not terminated by the end of grace period timer (default 30s) SIGKILL is sent, API server and etcd are updated.
```bash
# if termination is stuck
# to immediately delete records from API and etcd
# still have to clean up resources manually
$ kubectl delete pod <name> --grace-period=0 --force
```
Container(s) in a Pod can restart independent of the Pod. Restart process is protected by exponential backoff - 10s, 20s, 40s and up to 5m. Resets to 0s after 10m of continuous successful run.
Restart policy:
- Always (default) - restarts all containers in a Pod, if one stops running
- OnFailure - restarts only on non-graceful termination (non-zero exit codes)
- Never
Controllers
Controllers or operators are series of watch-loops that request API server for a particular object state and modify the object until desired state is achieved. Kubernetes comes with a set of default controllers, while others can be added using custom resource definitions.
ReplicaSet
Deploy and maintain defined number of Pods. Usually, not used by itself, but through Deployment controller. Consists of a selector, number of replicas and Pod spec.
ReplicaSet selector can be of
matchLabels or
matchExpression type. The latter
allows the use of operators
In,
NotIn,
Exists and
DoesNotExist.
```yaml
matchExpressions:
- key: foo
  operator: In
  values:
  - bar
```
Deployment
Manages the state of ReplicaSet and the Pods within, thus, providing flexibility with upgrades and administration. Rolling upgrade is performed by creating a second ReplicaSet and increasing/descreasing Pods in two sets. It is also possible to roll back to a previous version, pause the deployment and make changes.
Designed for stateless applications, like web front end that doesn't store data or application state to a persistent storage.
Changes in configuration file automatically trigger rolling updates. You can
pause, resume and check status of this behaviour. Exit code of 0 for
status
command indicates success, while non-zero - failure.
```bash
$ kubectl rollout [pause|resume|status] deployment <name>
$ kubectl rollout history deployment <name>
# get detailed info
$ kubectl rollout history deployment <name> --revision=<number>
# roll back to a previous version
$ kubectl rollout undo deployment <name>
# roll back to a specific version
$ kubectl rollout undo deployment <name> --to-revision=<number>
```
Restart all Pods. New ReplicaSet is created with the same Pod spec. Specified update strategy is applied.
$ kubectl rollout restart deployment <name>
Pod names are constructed as follows - `<deployment_name>-<pod_template_hash>-<pod_id>`. `pod_template_hash` is the unique ReplicaSet hash within the Deployment. `pod_id` is the unique Pod identifier within the ReplicaSet.
Create Deployment:
- declaratively via YAML file:
$ kubectl apply -f [DEPLOYMENT_FILE]
- imperatively using the `kubectl create deployment` command:

```bash
# --generator: api version to be used
# --save-config: saves the yaml config for future use
$ kubectl create deployment <name> \
    --image [IMAGE]:[TAG] \
    --replicas 3 \
    --labels [KEY]:[VALUE] \
    --port 8080 \
    --generator deployment/apps.v1 \
    --save-config
```
Scaling
```bash
# manual scaling
$ kubectl scale deployment [DEPLOYMENT_NAME] --replicas=5
# autoscale based on cpu threshold
$ kubectl autoscale deployment [DEPLOYMENT_NAME] \
    --min=5 \
    --max=15 \
    --cpu-percent=75
```
Autoscaling creates
HorizontalPodAutoscaler object. Autoscaling has a
thrashing problem, that is when the target metric changes frequently, which
results in frequent up/down scaling. Use
--horizontal-pod-autoscaler-downscale-delay option to control this behavior (by
specifying a wait period before next down scale; default is 5 minute delay).
Update strategy
RollingUpdate (default) strategy - new ReplicaSet starts scaling up, while
old one starts scaling down.
maxUnavailable specifies number of Pods from
the total number in a ReplicaSet that can be unavailable,
maxSurge
specifies number of Pods allowed to run concurrently on top of total number
of replicas in a ReplicaSet. Both can be specified as a number of Pods or
percentage.
Recreate strategy - all old Pods are terminated before new ones are
created. Used when two versions can't run concurrently.
Other strategies that can be implemented:
- blue/green deployment - create completely new Deployment of an application and change app's version. Traffic can be redirected using Services. Good for testing, disadvantage in doubled resources
- canary deployment - based on blue/green, but traffic is shifted gradually to a new version. This is achieved by avoiding specifying app's version and just by creating pods of a new version.
Related settings:
progressDeadlineSeconds- time in seconds until a progress error is reported (image issues, quotas, limit ranges)
revisionHistoryLimit(default 10) - how many old ReplicaSet specs to keep for rollback
StatefulSet
Managing stateful application with a controller. Provides network names, persistent storage and ordered operations for scaling and rolling updates.
Each pod maintains a persistent identity and has an ordinal index with a relevant
pod name, stable hostname, and stable identified storage. Ordinal index is just
a unique sequential number given for each pod representing the order in
sequence of pods. Deployment, scaling and updates are performed based on this
index. For example, second pod waits until first one is ready and running before
it is deployed. Scaling and updates happen in reverse order. This can be changed in the
pod management policy, where
OrderedReady is default and can be switched to
Parallel. Each pod has it's own unique PVC, which uses
ReadWriteOnce access
mode.
StatefulSets require a Service to control their networking. example.yml
specifies a headless Service with no load balancing via the
`clusterIP: None` option.
Examples are database workloads, caching servers, application state for web farms.
Naming must be persistent and consistent, as stateful application often needs to know exactly where data resides. Persistent storage ensures data is stored and can be retrieved later on. Headless service (without load balancer or cluster IP) allows applications to use cluster DNS to locate replicas by name.
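A minimal sketch of the pair (names, image and storage size are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None   # headless service
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```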
DaemonSet
Ensures that a specific single Pod is always running on all or some subset of the
nodes. If new nodes are added, DaemonSet will automatically set up Pods on
those nodes with the required specification. The word
daemon is a computer
science term meaning a non-interactive process that provides useful services to
other processes.
Examples include logging (
fluentd), monitoring, metric and storage daemons.
RollingUpdate (default) update strategy terminates old Pods and creates new
in their place.
maxUnavailable can be set to integer or percentage value,
default is 1. In
OnDelete strategy old Pods are not removed automatically.
Only if administrator removes them manually, new Pods are created.
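A minimal sketch of a node-level log agent (the image tag and toleration are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      # allow scheduling on control plane nodes too
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```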
Job
Define, run and ensure that specified number of Pods successfully terminate.
restartPolicy must be set to either
OnFailure or
Never, since default
policy is
Always. In case of restart failed Pods are recreated with an
exponentially increasing delay: 10, 20, 40... seconds, to a maximum of 6
minutes.
No matter how Job completes (success or failure) Pods are not deleted (for logs and inspection). Administrator can delete Job manually, which will also delete Pods.
- `activeDeadlineSeconds` - max duration time, has precedence over `backoffLimit`
- `backoffLimit` - number of retries before being marked as `Failed`, defaults to 6
- `completions` - number of Pods that need to finish successfully
- `parallelism` - max number of Pods running in a Job simultaneously
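A small sketch combining these settings (image and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-calc
spec:
  completions: 5        # five successful Pod runs required
  parallelism: 2        # at most two Pods at a time
  backoffLimit: 4
  activeDeadlineSeconds: 600
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox
        command: ['sh', '-c', 'echo processing item && sleep 5']
```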
Non-parallel
Creates only one pod at a time; job is completed when pod terminates successfully or, if completion counter is defined, when the required number of completions is performed.
Execute from cli:
$ kubectl run pi --image perl --restart Never -- perl -Mbignum -wle 'print bpi(2000)'
Parallel
Parallel job can launch multiple pods to run the same task. There are 2 types of parallel jobs - fixed task completion count and the other which processes a work queue.
Work queue is created by leaving
completions field empty. Job controller
launches specified number of Pods simultaneously and waits until one of them
signals successful completion. Then it stops and removes all pods.
In a situation of a job with both completion and parallelism options set, the controller won't start new containers if the remaining number of completions is less that parallelism value.
CronJob
Create and manage Jobs on a defined schedule. CronJob is created at the time of submission to API server, but Job is created on schedule.
- `suspend` - set to `true` to not run Jobs anymore
- `concurrencyPolicy` - `Allow` (default), `Forbid`, or `Replace`. Depending on how frequently jobs are scheduled and how long it takes to finish a Job, CronJob might end up executing more than one job concurrently.
In some cases may not run during a time period or run twice, thus, requested Pod should be idempotent.
Kubernetes retains number of successful and failed jobs in history, which is by
default 3 and 1 respectively. Options
successfulJobsHistoryLimit and
failedJobsHistoryLimit may be used to control this behavior. Deleting
CronJob also deletes all Pods.
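A minimal sketch (the schedule and image are illustrative; `apiVersion: batch/v1` assumes a recent cluster, older ones use `batch/v1beta1`):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ['sh', '-c', 'echo generating report']
```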
Scheduling
The job of a scheduler is to assign new Pods to nodes. Default is
kube-scheduler, but a custom one can be written and set.
Node selection goes through 3 stages:
- Filtering - remove nodes that can not run the Pod (apply hard constraints, such as available resources,
nodeSelectors, etc)
- Scoring - gather list of nodes that can run the Pod (apply scoring functions to prioritize node list for the most appropriate node to run workload); ensure Pods of the same service are spread evenly across nodes, node affinity and taints are also applied
- Binding - updating node name in Pod's object
PriorityClass and
PriorityClassName Pod's settings can be used to evict
lower priority Pods to allow higher priority ones to be scheduled (scheduler
determines a node where a pending Pod could run if one or more lower priority
ones were to be evicted). Pod Disruption Budget can limit number of Pods to be
evicted and ensure enough Pods are running at all times, but it could still
be violated by scheduler, if no other option is available.
End result of a scheduling process is assigning a Binding (Kubernetes API
object in
api/v1 group) to a Pod that specifies where it should run. Can also
be assigned manually without any scheduler.
To manually schedule a Pod to a node (bypass scheduling process) specify
nodeName (node must already exist); resource constraints still apply. This
way a Pod can still run on a cordoned node, since scheduling is basically
disabled and node is assigned directly.
Custom scheduler can be implemented; also multiple schedulers can run concurrently. Custom scheduler is packed and deployed as a system Pod. Default scheduler code. Define which scheduler to use in Pod's spec, if none specified, default is used. If specified one isn't running, the Pod remains in Pending state.
# View scheduler and other info $ kubectl get events
Scheduling policy
Priorities are functions used to weight resources. By default, node with
the least number of Pods will be ranked the highest (unless
SelectorSpreadPriority is set).
ImageLocalityPriorityMap favors nodes that
already have the container image.
cp/pkg/scheduler/algorithm/priorities
contains the list of priorities.
Example file for a scheduler policy:
```yaml
kind: Policy
apiVersion: v1
predicates:
- name: MatchNodeSelector
  order: 6
- name: PodFitsHostPorts
  order: 2
- name: PodFitsResources
  order: 3
- name: NoDiskConflict
  order: 4
- name: PodToleratesNodeTaints
  order: 5
- name: PodFitsHost
  order: 1
priorities:
- name: LeastRequestedPriority
  weight: 1
- name: BalancedResourceAllocation
  weight: 1
- name: ServiceSpreadingPriority
  weight: 2
- name: EqualPriority
  weight: 1
hardPodAffinitySymmetricWeight: 10
```
Typically passed as
--policy-config-file and
--scheduler-name parameters.
This would result in 2 schedulers running in a cluster. Client can then choose
one in Pods spec.
Node selector
Assign labels to nodes and use
nodeSelector on Pods to place them on certain
nodes. Simple key/value check based on matchLabels. Usually used to apply
hardware specification (hard disk, GPU) or workload isolation. All selectors
must be met, but node could have more labels.
nodeName could be used to schedule a Pod to a specific single node.
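A short sketch of the label/selector pairing (the node name and label key/value are illustrative):

```bash
# Label the node
$ kubectl label node worker-1 disktype=ssd
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx
```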
Affinity
Like node selector uses labels on nodes to make scheduling decisions, but with matchExpressions. matchLabels can still be used with affinity as well for simple matching.
nodeAffinity- use labels on nodes (should some day replace
nodeSelector)
podAffinity- try to schedule Pods together using Pod labels (same nodes, zone, etc)
podAntiAffinity- keep Pods separately (different nodes, zones, etc)
Scheduling conditions:
requiredDuringSchedulingIgnoredDuringExecution- Pod is scheduled only if all conditions are met (hard rule)
preferredDuringSchedulingIgnoredDuringExecution- Pod gets scheduled even if a node with all matching conditions is not found (soft rule, preference); weight 1 to 100 can be assigned to each rule
Affinity rules use In, NotIn, Exists, and DoesNotExist operators.
A particular label is required to be matched when the Pod starts, but the Pod is not evicted
if the label is later removed. However,
`requiredDuringSchedulingRequiredDuringExecution` is planned for the future.
Schedule caching Pod on the same node as a web server Pod.
```yaml
spec:
  containers:
  - name: cache
    # ...
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - webserver
        topologyKey: "kubernetes.io/hostname"
```
Taints and tolerations
Opposite of selectors, keeps Pods from being placed on certain nodes. Taints allow to avoid scheduling, while Tolerations allow to ignore a Taint and be scheduled as normal.
```bash
$ kubectl taint nodes <node_name> <key>=<value>:<effect>
$ kubectl taint nodes <node_name> key1=value1:NoSchedule
# Remove a taint
$ kubectl taint nodes <node_name> key:<effect>-
```
Effects:
- `NoSchedule` - do not schedule Pod on a node, unless toleration is present; existing Pods continue to run
- `PreferNoSchedule` - try to avoid particular node, running pods are unaffected
- `NoExecute` - evacuate all existing Pods, unless one has a toleration, and do not schedule new Pods; `tolerationSeconds` can specify for how long a Pod can run before being evicted, in certain cases `kubelet` could add 300 seconds to avoid unnecessary evictions
The default operator is `Equal`. A value should not be specified with the `Exists` operator; if an empty key
uses the `Exists` operator, it will tolerate every taint. If an effect is not
specified, but a key and operator are declared, all effects are matched.
All parts have to match to the taint on the node:
```yaml
spec:
  containers:
  # ...
  tolerations:
  - key: <key>
    operator: "Equal"
    value: <value>
    effect: NoSchedule
```
Node cordoning
Marks node as unschedulable, preventing new Pods from being scheduled, but does not remove already running Pods. Used as preparatory step before reboot or maintenance.
```bash
$ kubectl cordon <node>
# gracefully evict Pods
# optionally ignore daemonsets, since f.e. kube-proxy is deployed as a daemonset
$ kubectl drain <node> --ignore-daemonsets
```
Individual Pods won't be removed by draining the node, since they are not
managed by a controller. Add
--force option to remove.
Autoscaling
Horizontal Pod Autoscaler
Automatically scales Replication Controllers, ReplicaSets, or Deployments based on a target of 50% CPU utilization by default.
Cluster Autoscaler
Adds or removes a node based on inability to deploy Pods or having low utilized nodes for at least 10 minutes.
Vertical Pod Autoscaler
In development. Adjust the amount of CPU and memory requested by Pods.
Resource management
ResourceQuota
Define limits for total resource consumption in a namespace. Applying ResourceQuota with a limit less than already consumed resources doesn't affect existing resources and objects consuming them.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storagequota
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: "500Mi"
```
LimitRange
Define limits for resource consumption per objects. For example:
- min/max compute resource per Pod or container
- min/max storage request per PersistentVolumeClaim
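A minimal sketch of per-container defaults and limits (values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    default:            # applied when a container sets no limits
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:     # applied when a container sets no requests
      cpu: "250m"
      memory: "128Mi"
    max:
      cpu: "1"
      memory: "1Gi"
```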
Observability
Namespace
Namespaces can abstract single physical layer into multiple clusters. They provide scope for naming resources like pods, controllers and deployments. Primarily used for resource isolation/organization. User can create namespaces, while Kubernetes has 3 default ones:
- `default` - for objects with no namespace defined
- `kube-system` - for objects created by Kubernetes itself (ConfigMap, Secrets, Controllers, Deployments); by default these items are excluded when using the kubectl command (can be viewed explicitly)
- `kube-public` - for objects publicly readable for all users
Creating a Namespace also creates DNS subdomain
<ns_name>.svc.<cluster_domain>.
Can also be used as a security boundary for RBAC or a naming boundary (same resource name in different namespaces). A given object can exist only in one namespace. Not all objects are namespaced (generally physical objects like PersistentVolumes and Nodes).
$ kubectl api-resources --namespaced=true $ kubectl api-resources --namespaced=false
Can be applied for a resource in command line or manifest file, while first option is preferred for manifest file to be more flexible.
Deleting namespace deletes all resources inside of it as well.
Namespace is defined in metadata section on an object.
```yaml
apiVersion:
kind:
metadata:
  namespace:
```
Some related commands:
- create namespace: `kubectl create namespace [NAME]`
- deploy an object in a specific namespace: append `-n [NAMESPACE_NAME]`
Set namespace for all subsequent commands:
```bash
$ kubectl config set-context --current --namespace=<insert-namespace-name-here>
# Validate it
$ kubectl config view --minify | grep namespace:
```
Not all resources are in a namespace. Use commands below to list resources that are in a namespace and those that are not.
```bash
# In a namespace
$ kubectl api-resources --namespaced=true
# Not in a namespace
$ kubectl api-resources --namespaced=false
```
Four default namespaces:
- default - assumed namespace
- kube-node-lease - worker node lease info
- kube-public - readable by all, even not authenticated, general info
- kube-system - infrastructure pods
Specify in metadata section of an object.
Label
Labels enable managing objects or collection of objects by organizing them into groups, including objects of different types. Label selectors allow querying/selecting multiple objects. Kubernetes also leverages labels for internal operations.
Non-hierarchical key/value pair (up to 63/253 characters long). Can be assigned at creation time or be added/edited later. Immutable.
```bash
$ kubectl label <object> <name> <key1>=<value1> <key2>=<value2>
$ kubectl label <object> <name> <key1>=<value1> <key2>=<value2> --overwrite
$ kubectl label <object> --all <key>=<value>
# delete
$ kubectl label <object> <name> <key>-
# output additional column with all labels
$ kubectl get <object> --show-labels
# specify columns (labels) to show
$ kubectl get <object> -L <key1>,<key2>
# query
$ kubectl get <object> --selector <key>=<value>
$ kubectl get <object> -l '<key1>=<value1>,<key2>!=<value2>'
$ kubectl get <object> -l '<key1> in (<value1>,<value2>)'
$ kubectl get <object> -l '<key1> notin (<value1>,<value2>)'
```
Controllers and Services match Pods using labels. Pod scheduling (f.e. based on hardware specification, SSD, GPU, etc) uses labels as well.
Deployment and Service example, all labels must match:
```yaml
kind: Deployment
# ...
spec:
  selector:
    matchLabels:
      <key>: <value>
  # ...
  template:
    metadata:
      labels:
        <key>: <value>
    spec:
      containers:
      # ...
---
kind: Service
# ...
spec:
  selector:
    <key>: <value>
```
Labels are also used to schedule Pods on a specific Node(s):
```yaml
kind: Pod
# ...
spec:
  nodeSelector:
    <key>: <value>
```
Annotation
Annotations include object's metadata that can be useful outside cluster's object interaction. For example, timestamp, pointer to related objects from other ecosystems, developer's email responsible for the object.
Used to add additional info about objects in the cluster. Mostly used by people or third-party applications. Non-hierarchical key/value pairs (up to 63 characters, 256KB). Can't be used for querying/selecting.
Operations:
- manifest file
```yaml
kind: Pod
# ...
metadata:
  annotations:
    owner: Max
```
- cli
```bash
$ kubectl annotate <object_type> <name> key=<value>
$ kubectl annotate <object_type> --all key=<value> --namespace <name>
$ kubectl annotate <object_type> <name> key=<new_value> --overwrite
# delete
$ kubectl annotate <object_type> <name> <key>-
```
Network
All Pods can communicate with each other on all nodes. Software (agents) on a given node can communicate with all Pods on that node.
Network types:
- node (real infrastructure)
- pod - implemented by the network plugin, IPs are assigned from `PodCidrRange`, but could also be assigned from the node network
- cluster - used by Services of `ClusterIP` type, assigned from the `ServiceClusterIpRange` parameter of the API server and controller manager configurations
Pod-to-Pod communication on the same node goes through bridge interface. On
different nodes could use Layer 2/Layer 3/overlay options. Services are
implemented by
kube-proxy able to expose Pods both internally and
externally.
Pause/Infrastructure container starts first and sets up the namespace and network stack inside a Pod, which is then used by the application container(s). This let's container(s) restart without interrupting network namespace. Pause container has a lifecycle of the Pod (created and deleted along with the Pod).
Container Network Interface (CNI) is abstraction for implementing container and pod networking (setting namespaces, interfaces, bridge configurations, IP addressing). CNI sits between k8s and container runtime. CNI plugins are usually deployed as Pods controlled by DaemonSets running on each node.
Expose container directly to the client:
$ kubectl port-forward <pod_name> <localhost_port>:<pod_port>
DNS
DNS is available as a Service in a cluster, and Pods by default are configured
to use it. Provided by CoreDNS (since v1.13). Configuration is stored as ConfigMap
coredns
in
kube-system namespace, which is mounted to
coredns Pods as
/etc/coredns/Corefile. Updates to ConfigMap get propagated to CoreDNS Pods in
about 1-2 minutes - check logs for reload message. More
plugins can be enabled for additional
functionality.
dnsPolicy settings in Pod spec can be set to the following:
- `ClusterFirst` (default) - send DNS queries with cluster prefix to the `coredns` service
- `Default` - inherit node's DNS
- `None` - specify DNS settings via another parameter, `dnsConfig`
```yaml
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 9.9.9.9
```
A records:
- for Pods - `<ip_in_dash_form>.<namespace>.pod.cluster.local`
- for Services - `<service_name>.<namespace>.svc.cluster.local`
Troubleshooting DNS can be done by creating a Pod with network tools, creating a Service and running a DNS lookup (other tools dig, nc, wireshark):
$ nslookup <service_name> <kube-dns_ip>
Traffic can access a Service using a name, even in a different namespace just by adding a namespace's name:
```bash
# will fail if service is in a different namespace
$ curl <service_name>
# works across namespaces
$ curl <service_name>.<namespace>
```
Service
Provides persistent endpoint for clients (virtual IP and DNS). Load balances
traffic to Pods and automatically updates during Pod controller operations.
Labels and selectors are used to determine which Pods are part of a
Service. Default and popular implementation is
kube-proxy on the node's
iptables.
Acts as a network abstraction for Pod access. Allows communication between sets of deployments. A unique ID is assigned at creation time, which can be used by other Pods to talk to each other.
Service is an operator inside `kube-controller-manager`, which sends requests
via `kube-apiserver` to the network plugin (f.e. Calico) and `kube-proxy` on nodes.
It also creates an Endpoint object, which queries Pods with a specific label
for their ephemeral IPs.
Imperatively create a new Service (`NodePort` type):

```bash
# create a service
$ kubectl expose deployment <name> \
    --type NodePort \
    --port 80 \
    --target-port 8080
```
Service lists endpoints which are individual IP:PORT pairs of underlying pods. Can access directly for troubleshooting.
`service/kubernetes` is the API server's own Service.
A Service is a controller which listens to the endpoint controller to provide
a persistent IP for Pods. It sends messages (settings) via the API server to `kube-proxy`
on every node, and to the network plugin. It also handles access policies for inbound
requests.
Creates an Endpoint object. See the routing IPs:
$ kubectl describe endpoints <service_name>
Each Service gets a DNS A/AAAA record in cluster DNS in the form
<svc_name>.<namespace>.svc.<cluster_domain>.
The `kubectl proxy` command creates a local proxy allowing requests to the Kubernetes API:
$ kubectl proxy & # access foo service $ # if service has a port_name configured $:<port_name>
ClusterIP
Default Service type. Exposes the Service on a cluster-internal IP (exists
in iptables on the nodes). IP is chosen from a range specified as a
ServiceClusterIPRange parameter both on API server and controller manager
configurations. If service is created before corresponding Pods, they get
hostname and IP address as environment variables.
NodePort
Exposes the Service on the IP address of each node in the cluster at a
specific port number, making it available outside the cluster. Built on top of
ClusterIP Service - creates
ClusterIP Service and allocates port on all
nodes with a firewall rule to direct traffic on that node to
ClusterIP persistent
IP.
NodePort option is set automatically from the range 30000 to 32767 or can be
specified by user, if it falls within the range.
Regardless of which node is requested traffic is routed to
ClusterIP
Service and then to Pod(s) (all implemented by
kube-proxy on the node).
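A minimal sketch (port numbers are illustrative; `nodePort` must fall within the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # ClusterIP port
    targetPort: 8080  # container port
    nodePort: 30080   # port opened on every node
```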
LoadBalancer
Exposes the service externally, using a load balancing service provided by a cloud provider or add-on.
Creates a `NodePort` Service and makes an async request to use a load
balancer. If no listener answers (no load balancer is created), it stays in
`Pending` state.
With GKE it is implemented using GCP's network Load Balancer. GCP will assign
a static IP address to the load balancer, which directs traffic to
nodes (at random). `kube-proxy` will choose a random Pod, which may reside on a
different node to ensure even balance (this is the default). The response will take the same
route back. Use the `externalTrafficPolicy: Local` option to disable this behaviour
and force `kube-proxy` to direct traffic to local Pods.
ExternalName
Provides service discovery for external services. Kubernetes creates a
CNAME
record for external DNS record, allowing Pods to access external services
(does not have selectors, defined Endpoints or ports).
Headless services
Allows interfacing with other service discovery mechanisms (not tied to
Kubernetes). Define by explicitly specifying
None in
spec.clusterIP field.
ClusterIP is not allocated and
kube-proxy does not handle this Service
(no load balancing nor proxying).
Service with selectors - Endpoint controller creates endpoint records and modifies DNS config to return A records (IP addresses) pointing directly to Pods. Client decides which one to use. Often used with stateful applications.
Service without selectors - no Endpoints are created. DNS config may look
for CNAME record for
ExternalName type or any Endpoint records that share
a name with a Service (Endpoint object(s) needs to be created manually, and
can also include external IP).
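A small sketch of both variants (names and the external IP are illustrative):

```yaml
# Headless Service with selectors - DNS returns Pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
  - port: 5432
---
# Service without selectors plus a manually created Endpoints object
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must share the Service name
subsets:
- addresses:
  - ip: 10.0.0.50     # external IP, illustrative
  ports:
  - port: 5432
```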
Endpoint
Usually not managed directly, represents IPs for Pods that match particular service. If Endpoint is empty, meaning no matching Pods, service definition might be wrong.
Ingress
Consists of an Ingress object describing various rules on how HTTP traffic gets
routed to Services (and ultimately to Pods) and an Ingress controller
(daemon in a Pod) watching for new rules (the `/ingresses` endpoint in the API
server). A cluster may have multiple Ingress controllers. An Ingress class or annotation can be used to
associate an object with a particular controller (can also create a default
class). Absence of an Ingress class or annotation will cause every controller
to try to satisfy the traffic.
Both L4 and L7 can be configured.
Ingress also provides load balancing directly to Endpoint bypassing
ClusterIP. Name-based virtual hosting is available via host header in HTTP
request. Also provides path-based routing and TLS termination.
Ingress controller can be implemented in various ways: nginx Pods, external hardware (f.e. Citrix), cloud-ingress provider (f.e. AppGW, AWS ALB).
Main difference with a
LoadBalancer Service is that this resource operates
on level 7, which allows it to provide name-based virtual hosting, path-based
routing, tls termination and other capabilities.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  # non-matching traffic or when no rules are defined
  defaultBackend:
    service:
      name: example-service
      port:
        number: 80
```
Currently 3 Ingress Controllers are supported: AWS, GCE, nginx. Nginx Ingress setup.
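For name- and path-based routing, rules can be added to the same object; a sketch (the host and service names are illustrative):

```yaml
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
```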
Storage
Rook is a storage orchestration solution.
Kubernetes provides storage abstraction as volumes and persistent volumes. Volumes is a method by which storage is attached to Pods (not containers).
Volume is a persistent storage deployed as part of of Pod spec, including
implementation details of particular Volume type (
nfs,
ebs). Has a same
lifecycle as Pod.
Volumes are declared with
spec.volumes and mount points are declared with
spec.containers.volumeMounts parameters.
Access modes:
- `ReadWriteOnce` - read/write to a single node
- `ReadOnlyMany` - read-only by multiple nodes
- `ReadWriteMany` - read/write by multiple nodes
Kubernetes groups volumes with the same access mode together and sorts them by size from smallest to largest. Claim is checked against each volume in the access mode group until matching size is found.
PersistentVolume
Storage abstraction with a separate from Pod lifecycle. Managed by
kubelet
- maps storage on the node and exposes it as a mount.
Persistent volume abstraction has 2 components: PersistentVolume and
PersistentVolumeClaim. PersistentVolume is a durable and persistent storage
resource managed at the cluster level. PersistentVolumeClaim are request and
claim made by Pods to use
PersistentVolumes. User specifies volume size,
access mode, and other storage characteristics. If a claim matches a volume,
then claim is bound to that volume and Pod can consume that resource.
If no match can be found, Kubernetes will try to allocate one dynamically.
PersistentVolume is not a namespaced object, while PersistentVolumeClaim is.
Static provisioning workflow includes manually creating PersistentVolume, PersistentVolumeClaim, and specifying volume in Pod spec.
PersistentVolume:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-store
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # ...
```
PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-store
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
Reclaim policy
When PersistentVolumeClaim object is deleted, PersistentVolume may be deleted depending on reclaim policy.
With
Retain reclaim policy PersistentVolume is not reclaimed after
PersistentVolumeClaim is deleted. PersistentVolume changes to
Released status. Creating new PersistentVolumeClaim doesn't provide access
to that storage, and if no other volume is available, claim stays in
Pending
state.
Dynamic provisioning
StorageClass resource allows admin to create a persistent volume provisioner (with type specific configurations). User requests a claim, and API server auto-provisions a PersistentVolume. The resource is reclaimed according to reclaim policy stated in StorageClass.
Dynamic provisioning workflow includes creating a StorageClass object and
PersistentVolumeClaim pointing to this class. When a Pod is created,
PersistentVolume is dynamically created.
Delete reclaim policy in
StorageClass will delete a PersistentVolume, if PersistentVolumeClaim
is deleted.
StorageClass
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: main
provisioner: kubernetes.io/azure-disk
parameters:
  # ...
```
GKE has a default
standard storage class that uses Compute Engine standard
Persistent Disk. In GKE PVC with no defined storage class will use the
standard one.
emptyDir
Simply an empty directory that can be mounted to a container in a Pod. When a Pod
is destroyed, the directory is deleted. Kubernetes creates the `emptyDir` volume
from a node's local disk or using a memory-backed file system.
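A minimal sketch of declaring and mounting one (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  volumes:
  - name: scratch
    emptyDir: {}        # use `medium: Memory` for a RAM-backed volume
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: scratch
      mountPath: /scratch
```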
ConfigMap
Can ingest data from a literal value, from a file or from a directory of files.
Provides a way to inject application configuration data into pods. Can be referenced in a volume.
Used to store config files, command line arguments, environment variables, port number, etc.
kubelet periodically syncs with
ConfigMaps to keep
ConfigMap volume up to
date. Data is updated even if it is already connected to a pod (matter of
seconds-minutes).
Volume ConfigMap can be updated. Can be set as immutable, meaning can't be changed once created. Namespaced.
Creating individual key value pairs, f.e. via
--from-literal, results in a
map, while
--from-file results in a single key with file's contents as a
value.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app1config
data:
  key: value
ConfigMap from command line:
# create
$ kubectl create configmap [NAME] [DATA]
$ kubectl create configmap [NAME] --from-file=[KEY_NAME]=[FILE_PATH]

# examples
$ kubectl create configmap demo --from-literal=lab.difficulty=easy
$ kubectl create configmap demo --from-file=color.properties

$ cat color.properties
color.good=green
color.bad=red
Pods can refer to ConfigMap in 3 ways:
- environment variables -
valueFromspecifies each key individually,
envFromcreates a variable for each key/value pair in ConfigMap
spec:
  containers:
    - name: app1
      env:
        - name: username
          valueFrom:
            configMapKeyRef:
              name: app1config
              key: username
      envFrom:
        - configMapRef:
            name: app1env
- volume - depending on how ConfigMap is created could result in one file with many values or many files with value in each one
spec:
  volumes:
    - name: app1config
      configMap:
        name: app1config
  containers:
    - name: app1
      volumeMounts:
        - name: app1config
          mountPath: /etc/config
- Pods commands
System components and controllers can also use ConfigMaps.
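For the Pod-command option, the container command can expand ConfigMap-backed environment variables; a minimal sketch reusing the demo ConfigMap from above (image and variable names are illustrative):

spec:
  containers:
    - name: app1
      image: busybox
      command: ["sh", "-c", "echo $(APP_COLOR) && sleep 3600"]
      env:
        - name: APP_COLOR
          valueFrom:
            configMapKeyRef:
              name: demo
              key: color.good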
Secret
Similar to
ConfigMap, but used to store sensitive data.
In case of passing values from files, they should have single entries. The file's name serves as the key, while the value is its content.
Kubelet syncs
Secret volumes just as
ConfigMap volumes.
By default values are only base64 encoded, but encryption can also be set up. Secret resource is namespaced, and only Pods in the same namespace can reference given Secret.
Values passed will be base64 encoded strings (check result with commands below):
$ echo -n "admin" | base64
$ echo -n "password" | base64
Secret types:
generic- creating secrets from files, directories or literal values
- TLS - private-public encryption key pair; pass Kubernetes public key certificate encoded in PEM format, and also supply the private key of that certificate
docker-registry- credentials for a private docker registry (Docker Hub, cloud based container registries)
Can be exposed to a Pod as an environment variable or as a volume/file, the latter being able to be updated and reflected in the Pod. A Secret can be marked as immutable - it cannot be changed after creation. A Pod using such a Secret must also be deleted to be able to read a new Secret with the same name and updated value.
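A sketch of an immutable Secret manifest (the name is illustrative and the values are the base64 strings produced by the commands above):

apiVersion: v1
kind: Secret
metadata:
  name: app1
type: Opaque
immutable: true
data:
  USERNAME: YWRtaW4=
  PASSWORD: cGFzc3dvcmQ=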
Secrets can be specified individually or all from a Secret object, in which case keys will be used as environment names:
spec:
  containers:
    - name: one
      env:
        - name: APP_USERNAME
          valueFrom:
            secretKeyRef:
              name: app1
              key: USERNAME
    - name: two
      envFrom:
        - secretRef:
            name: app2
Exposing as a file creates a file for each key and puts its value inside the file:
spec:
  volumes:
    - name: appconfig
      secret:
        secretName: app
  containers:
    - name: app
      volumeMounts:
        - name: appconfig
          mountPath: /etc/appconfig
Security
Certificate authority
By default self signed CA is created. However, an external PKI (Public Key
Infrastructure) can also be joined. CA is used for secure cluster
communications (API server) and for authentication of users and cluster
components. Files are located at
/etc/kubernetes/pki and distributed to each
node upon joining the cluster.
Authentication and authorization
Kubernetes provides two types of users: normal users and service accounts. Normal users are managed outside Kubernetes, while service accounts are created by Kubernetes itself to provide an identity for processes in Pods to interact with the Kubernetes cluster. Each namespace has a default service account.
After successful authentication, there are two main ways to authorize what an account can do: Cloud IAM and RBAC (Kubernetes role-based access control). Cloud IAM is the access control system for using cloud resources and performing operations at the project and cluster levels (outside the cluster - view and change the configuration of the Kubernetes cluster). RBAC provides permissions inside the cluster at the cluster and namespace levels (view and change Kubernetes objects).
API server listens for remote requests on HTTPS port and all requests must be authenticated before it's acted upon. API server provides 3 methods for authentication: OpenID connect tokens, x509 client certs and basic authentication using static passwords. While OpenID is preferred method, last two are disabled by default in GKE.
# check allowed action as current or any given user
$ kubectl auth can-i create deployments
$ kubectl auth can-i create deployments --as bob
Service account
Provides an identity for processes in a Pod to access the API server and perform actions. Certificates for it are made available to a Pod at /var/run/secrets/kubernetes.io/serviceaccount/.
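A Pod runs under a given service account by setting spec.serviceAccountName; a minimal sketch (account, pod and image names are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-sa
  containers:
    - name: app
      image: nginx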
RBAC
Built on 3 base elements: subject (who - users or processes that can make requests to Kubernetes API), which (resources - API objects such as pods, deployments..), what (verbs, operations such as get, watch, create).
Elements are connected together using 2 RBAC API objects: Roles (connect API resources and verbs) and RoleBindings (connect Roles to subjects). Both can be applied on a cluster or namespace level.
RBAC has Roles (defined at the namespace level) and ClusterRoles (defined at the cluster level).
get,
list and
watch are often used together to provide read-only access.
patch and
update are also usually used together as a unit.
Only
get,
update,
delete and
patch can be used on named resources.
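A sketch of a namespaced Role granting the read-only verb set above, plus a RoleBinding attaching it to a user (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: bob
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io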
Monitoring and logging
Install Kubernetes dashboard (runs Pods in
kubernetes-dashboard namespace):
deploy
$ kubectl apply -f
start proxy
$ kubectl proxy
access the following page (port may vary)
choose token option and supply the output of the following command:
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
- Logging with Kibana and Elasticsearch
- Logging with fluentd, Kibana and Elasticsearch
- Comparing monitoring options
kubectl
kubectl [command] [type] [Name] [flag]
- apply/create - create resources
- run - start a pod from an image
- explain - documentation of object or resource
- delete - delete resources
- get - list resources
- describe - detailed resource information
- exec - execute command on a container (in multi container scenario, specify container with
-cor
--container; defaults to the first declared container)
- logs - view logs on a container
output formatting (-o <format>) - wide, yaml, json, dry-run (print object without sending to the API server)
-v adds verbosity to the output; a different level can be set, e.g. 7. Can be any number, starting from 0, but there is no implementation greater than 10.
# basic server info
$ kubectl config view

# list all contexts
$ kubectl config get-contexts
$ kubectl config use-context <context_name>

# context, cluster info
# useful to verify the context
$ kubectl cluster-info

# configure user credentials
# token and username/password are mutually exclusive
$ kubectl config set-credentials
kubectl interacts with kube-apiserver and uses the configuration file ~/.kube/config as a source of server information and authentication.
A context is a combination of cluster and user credentials. It can be passed as CLI parameters, or you can switch the shell context:

$ kubectl config use-context <context>
list known api resources (also short info):
$ kubectl api-resources
$ kubectl api-resources --api-group=apps

# list api versions and groups
$ kubectl api-versions | sort
all commands can be supplied with object type and a name separately or in the
form
object_type/object_name.
--watch gives output over time (updates when status changes)
kubectl:
explain [object]- show built-in documentation, can also pass object's properties via dot notation, fe
pod.spec.containers
--recursive- show all inner fields
get [object] [name]- show info on object group (
pods,
deployments,
all)
--show-labels - adds an extra labels column to the output
-l,
--selector [label=value]- search using labels; add more labels after comma; can also be supplied to other commands, such as
delete
label in (value1,value2)- outputs objects that satisfy the supplied list of possible values;
notinoperator can be used to show inverse
-o yaml- show full info in yaml format
delete [object] [name]- delete k8s object
describe [object] [name]- get detailed info on specific object
label [object] [name] [new_label=value]- add a label to a resource, supply
--overwriteflag if changing the existing label
label [object] [name] [label]-- delete label
logs [pod_name]- view logs of a pod; use
-cflag to specify container; contains both stdout and stderr
--tail=[number] - limit output to the last [number] lines, e.g. the last 20
--since=3h- limit output based on time limit
exec [pod_name] -- [command]- execute commands and application on a pod; use
-cflag to specify container
cp path/on/host [pod_name]:path/in/container - copy files
top nodes- info on nodes status
imperative
$ kubectl create deployment nginx --image=nginx

# single pod without a controller
$ kubectl run nginx --image=nginx
use crictl to view containers running on a node (for containerd):

$ sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
get runtime info:
$ kubectl get deployment <deployment_name> -o yaml | less
$ kubectl get deployment <deployment_name> -o json | less
dry-run
--dry-run takes either server or client.
Server-side requests are processed like typical requests but aren't persisted in storage; they can fail if a syntax error is present or if the object already exists. On the client side the request is printed to stdout and is useful for validating the syntax. It can also be used to generate syntactically correct manifests:
$ kubectl create deployment nginx --image nginx --dry-run=client -o yaml
when applying change check the difference with
diff command. outputs
differences between objects in the cluster and the ones defined in the manifest
to stdout.
kubectl diff -f manifest.yaml.
running imperative commands, adhocs, like
set,
create, etc does not leave
any change information.
--record option can be used to write the command to
kubernetes.io/change-cause annotation to be inspected later on, for example,
by
kubectl rollout history.
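For example (the deployment name and image tag are illustrative):

$ kubectl set image deployment/nginx nginx=nginx:1.19 --record
$ kubectl rollout history deployment/nginx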
--cascade=orphan deletes the controller, but not the objects it has created.
Change manifest and object from cli (in JSON parameter specify field to change from root):
$ kubectl patch <object_type> <name> -p <json>
Get privileged access to a node through interactive debugging container (will
run in host namespace, node's filesystem will be mounted at
/host)
$ kubectl debug node/<name> -ti --image=<image_name>
Events
Record events
$ kubectl get events --watch &
useful notes
- get container ids in a pod:
$ kubectl get pod <pod_name> -o=jsonpath='{range .status.containerStatuses[*]}{@.name}{" - "}{@.containerID}{"\n"}{end}'
- fire up ubuntu pod:
$ kubectl run <name> --rm -i --tty --image ubuntu -- bash
kubeconfig
kubeconfig files define connection settings to a cluster: client certificates and the cluster API server's network location. Often the CA certificate that was used to sign the API server's certificate is also included, so the client can trust the certificate presented by the API server upon connection.
Various files are created for different components at /etc/kubernetes. admin.conf is the cluster admin account. kubelet.conf, controller-manager.conf and scheduler.conf include the location of the API server and the client certificate to use.
|
https://github-wiki-see.page/m/vi-credo/knowledge_corner/wiki/Kubernetes
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
How Amazon Keyspaces works with IAM
Before you use IAM to manage access to Amazon Keyspaces, you should understand what IAM features are available to use with Amazon Keyspaces. To get a high-level view of how Amazon Keyspaces and other AWS services work with IAM, see AWS services that work with IAM in the IAM User Guide.
Topics
Amazon Keyspaces identity-based policies
With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. Amazon Keyspaces supports specific actions and resources, and condition keys. To learn about all of the elements that you use in a JSON policy, see IAM JSON policy elements reference in the IAM User Guide.
To see the Amazon Keyspaces service-specific resources and actions, and condition context keys that can be used for IAM permissions policies, see the Actions, Resources, and Condition Keys for Amazon Keyspaces (for Apache Cassandra) in the Service Authorization Reference. Actions in Amazon Keyspaces use the following prefix:
cassandra:. For example, to grant someone permission to
create an Amazon Keyspaces keyspace with the Amazon Keyspaces
CREATE CQL statement, you
include the
cassandra:Create action in their policy. Policy statements
must include either an
Action or
NotAction element.
Amazon Keyspaces defines its own set of actions that describe tasks that you can
perform with this service.
To specify multiple actions in a single statement, separate them with commas as follows:
"Action": [ "cassandra:CREATE", "cassandra:MODIFY" ]
To see a list of Amazon Keyspaces actions, see Actions Defined by Amazon Keyspaces (for Apache Cassandra) in the Service Authorization Reference.
In Amazon Keyspaces, keyspaces and tables can be used in the Resource element of IAM permissions.
The Amazon Keyspaces keyspace resource has the following ARN:
arn:${Partition}:cassandra:${Region}:${Account}:/keyspace/${KeyspaceName}
The Amazon Keyspaces table resource has the following ARN:
arn:${Partition}:cassandra:${Region}:${Account}:/keyspace/${KeyspaceName}/table/${tableName}
For more information about the format of ARNs, see Amazon Resource Names (ARNs) and AWS service namespaces.
For example, to specify the
mykeyspace keyspace in your
statement, use the following ARN:
"Resource": "arn:aws:cassandra:us-east-1:123456789012:/keyspace/mykeyspace"
To specify all keyspaces that belong to a specific account, use the wildcard (*):
"Resource": "arn:aws:cassandra:us-east-1:123456789012:/keyspace/*"
Some Amazon Keyspaces actions, such as those for creating resources, cannot be performed on a specific resource. In those cases, you must use the wildcard (*).
"Resource": "*"
To connect to Amazon Keyspaces programmatically with a standard driver, a user must have SELECT access to the system tables,
because most drivers read the system keyspaces/tables on connection.
For example, to grant
SELECT permissions to a user for
mytable in
mykeyspace, the IAM user must have permissions to read both mytable and the system keyspace. To specify multiple resources in a single statement, separate the ARNs with commas.

"Resource": [
  "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable",
  "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system*"
]
To see a list of Amazon Keyspaces resource types and their ARNs, see Resources Defined by Amazon Keyspaces (for Apache Cassandra) in the Service Authorization Reference. To learn with which actions you can specify the ARN of each resource, see Actions Defined by Amazon Keyspaces (for Apache Cassandra).

Amazon Keyspaces defines its own set of condition keys and also supports using some global condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM User Guide.
All Amazon Keyspaces actions support the
aws:RequestTag/${TagKey}, the
aws:ResourceTag/${TagKey}, and the
aws:TagKeys condition
keys. For more information, see
Amazon Keyspaces resource access based on tags.
To see a list of Amazon Keyspaces condition keys, see Condition Keys for Amazon Keyspaces (for Apache Cassandra) in the Service Authorization Reference. To learn with which actions and resources you can use a condition key, see Actions Defined by Amazon Keyspaces (for Apache Cassandra).
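As a hedged sketch (the account ID, keyspace and tag values are placeholders), a statement that allows reads only on tables carrying a particular owner tag could look like this:

{
  "Effect": "Allow",
  "Action": "cassandra:Select",
  "Resource": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/owner": "analytics-team"
    }
  }
}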
Examples
To view examples of Amazon Keyspaces identity-based policies, see Amazon Keyspaces identity-based policy examples.
Amazon Keyspaces resource-based policies
Amazon Keyspaces does not support resource-based policies. To view an example of a detailed resource-based policy page, see.
Authorization based on Amazon Keyspaces tags
You can manage access to your Amazon Keyspaces resources by using tags. To manage resource access based on tags, you provide tag information in the condition element of a policy using the cassandra:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. For more information about tagging Amazon Keyspaces resources, see Adding tags and labels to Amazon Keyspaces resources.
To view example identity-based policies for limiting access to a resource based on the tags on that resource, see Amazon Keyspaces resource access based on tags.
Amazon Keyspaces IAM roles
An IAM role is an entity within your AWS account that has specific permissions.
Using temporary credentials with Amazon Keyspaces
You can use temporary credentials to sign in with federation, assume an IAM role, or to assume a cross-account role. You obtain temporary security credentials by calling AWS STS API operations such as AssumeRole or GetFederationToken.
Amazon Keyspaces supports using temporary credentials with the Amazon Keyspaces authentication plugin. To view an example of how to use the authentication plugin to access a table programmatically, see Accessing Amazon Keyspaces using the authentication plugin..
For details about creating or managing Amazon Keyspaces service-linked roles, see Using service-linked roles for Amazon Keyspaces.
Service roles
Amazon Keyspaces does not support service roles.
|
https://docs.aws.amazon.com/keyspaces/latest/devguide/security_iam_service-with-iam.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Implement a user profile service
Use a User Profile Service to persist information about your users and ensure variation assignments are sticky. The User Profile Service implementation you provide will override Optimizely's default bucketing behavior in cases when an experiment assignment has been saved.
When implementing in a multi-server or stateless environment, we suggest using this interface with a backend like Cassandra or Redis. You can decide how long you want to keep your sticky bucketing around by configuring these services.
Implementing a User Profile Service is optional and is only necessary if you want to keep variation assignments sticky even when experiment conditions are changed while it is running (for example, audiences, attributes, variation pausing, and traffic distribution). Otherwise, the Go SDK is stateless and relies on deterministic bucketing to return consistent assignments.
If the User Profile Service doesn't bucket a user as you expect, then check whether other features are overriding the bucketing. For more information, see How bucketing works.
Implement a user profile service
Refer to the code samples below to provide your own User Profile Service. It should expose two functions with the following signatures:
- lookup: Takes a user ID string and returns a user profile matching the schema below.
- save: Takes a user profile and persists it.
import ( "github.com/optimizely/go-sdk/pkg/decision" ) // CustomUserProfileService is custom implementation of the UserProfileService interface type CustomUserProfileService struct { } // Lookup is used to retrieve past bucketing decisions for users func (s *CustomUserProfileService) Lookup(userID string) decision.UserProfile { return decision.UserProfile{} } // Save is used to save bucketing decisions for users func (s *CustomUserProfileService) Save(userProfile decision.UserProfile) { }
The UserProfile struct
type UserProfile struct { ID string ExperimentBucketMap map[UserDecisionKey]string } // UserDecisionKey is used to access the saved decisions in a user profile type UserDecisionKey struct { ExperimentID string Field string } // Sample user profile with a saved variation userProfile := decision.UserProfile{ ID: "optly_user_1", ExperimentBucketMap: map[decision.UserDecisionKey]string{ decision.UserDecisionKey{ExperimentID: "experiment_1", Field: "variation_id" }: "variation_1234", }, }
Use
experiment_bucket_map from the
UserProfile struct.
The Go SDK uses the field
variation_id by default to create a decision key. Create a decision key manually with the method
decision.NewUserDecisionKey:
decisionKey := decision.NewUserDecisionKey("experiment_id")
Passing a User Profile Service Implementation to the OptimizelyClient:
userProfileService := new(CustomUserProfileService) optimizelyClient, err := factory.Client( client.WithUserProfileService(userProfileService), )
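As an illustration only (this sketch is not part of the SDK documentation), an in-memory implementation of the two methods could look like the following; the type name and mutex-based locking are assumptions, and a production service would back the map with Redis or Cassandra as suggested above.

import (
    "sync"

    "github.com/optimizely/go-sdk/pkg/decision"
)

// InMemoryUserProfileService keeps profiles in a process-local map guarded by a mutex.
type InMemoryUserProfileService struct {
    mu       sync.RWMutex
    profiles map[string]decision.UserProfile
}

func NewInMemoryUserProfileService() *InMemoryUserProfileService {
    return &InMemoryUserProfileService{profiles: map[string]decision.UserProfile{}}
}

// Lookup returns the previously saved profile, or an empty one if none exists.
func (s *InMemoryUserProfileService) Lookup(userID string) decision.UserProfile {
    s.mu.RLock()
    defer s.mu.RUnlock()
    if profile, ok := s.profiles[userID]; ok {
        return profile
    }
    return decision.UserProfile{
        ID:                  userID,
        ExperimentBucketMap: map[decision.UserDecisionKey]string{},
    }
}

// Save overwrites the stored profile for the user.
func (s *InMemoryUserProfileService) Save(userProfile decision.UserProfile) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.profiles[userProfile.ID] = userProfile
}

It is then passed to the client exactly as shown above with client.WithUserProfileService.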
|
https://docs.developers.optimizely.com/full-stack/docs/implement-a-user-profile-service-go
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Documentation
¶
Overview ¶
Package buildmerge implements the build.proto tracking and merging logic for luciexe host applications.
You probably want to use `go.chromium.org/luci/luciexe/host` instead.
This package is separate from luciexe/host to avoid unnecessary entanglement with butler/logdog; all the logic here is implemented to avoid:

* interacting with the environment
* interacting with butler/logdog (except by implementing callbacks for those, but only acting on simple data structures/proto messages)
* handling errors in any 'brutal' ways (all errors in this package are handled by reporting them directly in the data structures that this package manipulates)
This is done to simplify testing (as much as it can be) by concentrating all the environment stuff into luciexe/host, and all the 'pure' functional stuff here (search "imperative shell, functional core").
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Agent ¶
type Agent struct {
    // MergedBuildC is the channel of all the merged builds generated by this
    // Agent.
    //
    // The rate at which Agent merges Builds is governed by the consumption of
    // this channel; Consuming it slowly will have Agent merge less frequently,
    // and consuming it rapidly will have Agent merge more frequently.
    //
    // The last build before the channel closes will always be the final state of
    // all builds at the time this Agent was Close()'d.
    MergedBuildC <-chan *bbpb.Build

    // Wait on this channel for the Agent to drain. Will only drain after calling
    // Close() at least once.
    DrainC <-chan struct{}
    // contains filtered or unexported fields
}
Agent holds all the logic around merging build.proto streams.
func New ¶
func New(ctx context.Context, userNamespace types.StreamName, base *bbpb.Build, calculateURLs CalcURLFn) *Agent
New returns a new Agent.
Args:
* ctx - used for logging, clock and cancelation. When canceled, the Agent will cease sending updates on MergedBuildC, but you must still invoke Agent.Close() in order to clean up all resources associated with the Agent.
* userNamespace - The logdog namespace (with a trailing slash) under which we should monitor streams.
* base - The "model" Build message that all generated builds should start with. All build proto streams will be merged onto a copy of this message. Any Output.Log's which have non-absolute URLs will have their Url and ViewUrl absolutized relative to userNamespace using calculateURLs.
* calculateURLs - A function to calculate Log.Url and Log.ViewUrl values. Should be a pure function.
The following fields will be merged into `base` from the user controlled build.proto stream(s):
Steps
SummaryMarkdown
Status
StatusDetails
UpdateTime
Tags
EndTime
Output
The frequency of updates from this Agent is governed by how quickly the caller consumes from Agent.MergedBuildC.
func (*Agent) Attach ¶
Attach should be called once to attach this to a Butler.
This must be done before the butler receives any build.proto streams.
type CalcURLFn ¶
type CalcURLFn func(namespaceSlash, streamName types.StreamName) (url, viewUrl string)
CalcURLFn is a stateless function which can calculate the absolute url and viewUrl from a given logdog namespace (with trailing slash) and streamName.
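As a purely illustrative sketch (not part of this package), a CalcURLFn might join the namespace and stream name into raw and viewer URLs; the host, project and URL layout below are made up, and the import path assumes the fork's logdog/common/types package:

import "github.com/tetrafolium/luci-go/logdog/common/types"

// exampleCalcURLs builds raw and viewer URLs for a stream. It is a pure
// function, as required by CalcURLFn; everything about the URL shape here
// is an assumption for illustration only.
func exampleCalcURLs(namespaceSlash, streamName types.StreamName) (url, viewUrl string) {
    full := string(namespaceSlash) + string(streamName)
    url = "logdog://logs.example.com/myproject/" + full
    viewUrl = "https://logs.example.com/v/?s=" + full
    return url, viewUrl
}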
|
https://pkg.go.dev/github.com/tetrafolium/luci-go/luciexe/host/buildmerge
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Data common code for database interactions by Equinox
Project description
Equinox Common Code Utility for Python 3 for DB interactions! There are currently interaction classes for the following DBs and Apps:
- MSSQL
- MySQL
- SQLite
- Postgres
- Redshift
Quick Start
Sample Usage
from datacoco_db import MSSQLInteraction

mssql = MSSQLInteraction(dbname="db_name", host="server", user="user", password="password", port=1433)

mssql.conn()  # open a connection
mssql.batch_open()  # cursor

results = mssql.fetch_sql_one("SELECT * FROM MyTable")  # fetch one
print(results)

mssql.batch_close()  # close cursor
The example above makes use of mssql_tools. All tools follow the same pattern in terms of usage.
Installation
datacoco-db requires Python 3.6+
python3 -m venv <virtual env name>
source <virtual env name>/bin/activate
pip install datacoco-db
Development
Getting Started
It is recommended to use the steps below to set up a virtual environment for development:
python3 -m venv <virtual env name>
source <virtual env name>/bin/activate
pip install -r requirements.txt
Pyodbc Dependency Installation
Installing the Microsoft ODBC Driver for SQL Server on Linux and macOS
Testing
pip install -r requirements-dev.txt
Modify the connection configuration for integration testing.
To run the testing suite, simply run the command: python -m unittest discover tests
For a coverage report, run tox. View the results in .tox/coverage/index.html.
|
https://pypi.org/project/datacoco-db/0.1.10/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
#include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h>
s = socket(AF_INET, SOCK_STREAM, 0);
#include <paths.h> #include <fcntl.h> #include <netinet/ip_var.h>
#include <netinet/tcp.h> #include <netinet/in_pcb.h> #include <netinet/tcp_timer.h>
#include <netinet/tcp_var.h>
fd = open(_PATH_TCP, flags);
Sockets using the TCP protocol are either ``active'' or ``passive''. Active sockets initiate connections to passive sockets. By default TCP sockets are created active; to create a passive socket the listen(SSC) system call must be used after binding the socket with the bind(SSC) system call. Only passive sockets may use the accept(SSC) call to accept incoming connections. Only active sockets may use the connect(SSC) call to initiate connections.
Passive sockets may ``underspecify'' their location to match incoming connection requests from multiple networks. This technique, called ``wildcard addressing'', allows a single server to provide service to clients on multiple networks. TCP supports several socket options which are set with setsockopt and tested with getsockopt (see getsockopt(SSC)).
Under most circumstances, TCP sends data when it is presented; when outstanding data has not yet been acknowledged, it gathers small amounts of output to be sent in a single packet once an acknowledgment is received. For a small number of clients, such as window systems that send a stream of mouse events which receive no replies, this packetization may cause significant delays. Therefore, TCP provides a boolean option, TCP_NODELAY (from <netinet/tcp.h>), to defeat this algorithm. The option level for the setsockopt call is the protocol number for TCP, available from getprotobyname (see getprotoent(SLIB)).
It is possible to retrieve or change the value being used for the maximum segment size of an active connection with the TCP_MAXSEG option. TCP_MAXSEG cannot be set to a value larger than TCP has already determined; it can only be made smaller.
The TCP_KEEPIDLE option can be used to control the start interval for TCP keep-alive messages. Normally, when enabled via SO_KEEPALIVE, keep-alives do not start until the connection has been idle for 2 hours. This option can be used to alter this interval. The option value should be specified in seconds. The minimum is restricted to 10 seconds. Setting TCP_KEEPIDLE to 0 restores the keep-alive start interval to the default value.
Normally TCP will send a keep-alive every 75 seconds once the connection has been idle for the KEEPIDLE period. Keep-alives may be sent more frequently by using the TCP_KEEPINTVL option to specify the interval in seconds. The minimum is restricted to 1 second. Setting TCP_KEEPINTVL to 0 restores the keep-alive interval to the default value.
Normally TCP will send 8 keep-alives prior to giving up. This number may be altered by using the TCP_NKEEP option to specify the desired number of keep-alives. The minimum value is constrained to be 1. Setting TCP_NKEEP to 0 restores the keep-alive interval to the default value.
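As an illustration (this fragment is not part of the manual page), the options above are applied with setsockopt at the IPPROTO_TCP level, equivalent to the protocol number returned by getprotobyname; the timing values chosen here are arbitrary:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int
set_tcp_options(int s)
{
    int nodelay = 1;      /* defeat the small-packet coalescing algorithm */
    int keepidle = 600;   /* start keep-alives after 10 minutes idle */
    int keepintvl = 30;   /* then probe every 30 seconds */

    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, (char *)&nodelay, sizeof(nodelay)) < 0)
        return -1;
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, (char *)&keepidle, sizeof(keepidle)) < 0)
        return -1;
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, (char *)&keepintvl, sizeof(keepintvl)) < 0)
        return -1;
    return 0;
}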
Normally TCP will try to retransmit for 511 seconds before dropping a connection. This value can be changed using the TCP_MAXRXT option. Setting TCP_MAXRXT to a value between 180 and 2^32-2 causes TCP to wait that number of seconds before giving up. Setting TCP_MAXRXT to 0 restores the retransmission interval to the default value. Setting the retransmission interval to 2^32-1 (0xffffffff) causes TCP to retransmit forever. The retransmission period cannot be set to less than three minutes (180 seconds).
Note that many of the default values may be changed by the system administrator using inconfig(ADMN). getsockopt(SSC) or t_optmgmt(NET) may be used to determine the current system default values.
Options at the IP network level may be used with TCP; see ip(ADMP). Incoming connection requests that are source-routed are noted, and the reverse source route is used in responding.
TCP is also available as a TLI connection-oriented protocol via the special file /dev/inet/tcp. Interpreted TCP options are supported via the TLI options mechanism (see t_optmgmt(NET)).
TCP provides a facility, one-packet mode, that attempts to improve performance over Ethernet interfaces that cannot handle back-to-back packets. One-packet mode may be set by ifconfig(ADMN) for such an interface. On a connection that uses an interface for which one-packet mode has been set, TCP attempts to prevent the remote machine from sending back-to-back packets by setting the window size for the connection to the maximum segment size for the interface.
Certain TCP implementations have an internal limit on packet size that is less than or equal to half the advertised maximum segment size. When connected to such a machine, setting the window size to the maximum segment size would still allow the sender to send two packets at a time. To prevent this, a ``small packet size'' and a ``small packet threshold'' may be specified when setting one-packet mode. If, on a connection over an interface with one-packet mode enabled, TCP receives a number of consecutive packets of the small packet size equal to the small packet threshold, the window size is set to the small packet size.
A TCP endpoint can also be obtained by opening the TCP driver directly. Networking statistics can be gathered by issuing ioctl directives to the driver. The following ioctl commands, defined in <netinet/ip_var.h> and <netinet/tcp_var.h>, are supported by the TCP driver:
struct tcp_stuff {
    struct tcpstat tcp_stat;
    int tcp_rto_algorithm;
    int tcp_max_rto;
    int tcp_min_rto;
    int tcp_max_conn;
    int tcp_urgbehavior;
};

Note that the member int tcp_urgbehavior is not used.

First, issue the ioctl with gi_size set to 0. This returns the size of the table. Second, allocate sufficient memory (found in step 1) and issue the ioctl again with gi_size set to the value returned in the step above (or any value greater than 0). The TCP driver will copy the TCP connection table to the user allocated area. tcb_entry is the format of the entries extracted by the ioctl. Structures gi_arg and tcb_entry are as defined below:
struct gi_arg {
    caddr_t  gi_where;   /* user addr. */
    unsigned gi_size;    /* table size */
};

struct tcb_entry {
    struct inpcb in_pcb;
    struct tcpcb tcp_pcb;
};

typedef struct _tcpconn {
    struct in_addr local_ip_addr;
    u_short        local_port;
    struct in_addr rmt_ip_addr;
    u_short        rmt_port;
} TcpConn_t;

The local and remote addresses and ports serve to uniquely identify an active TCP connection. Only root may use this ioctl.
struct tcp_dbg_hdr {
    caddr_t tcp_where;   /* where to copy-out result */
    u_int   tcp_size;    /* size of buffer */
    u_int   tcp_debx;    /* current slot in debug ring */
    u_int   tcp_ndebug;  /* number of debug records */
};

First issue the ioctl with tcp_where pointing to the address of the structure and tcp_size set to the size of the structure. TCP will fill in the number of debugging entries in the tcp_ndebug field. This information can be used to allocate a buffer large enough to hold the debugging information. The buffer size is calculated as:

sizeof(struct tcp_dbg_hdr) + sizeof(struct tcp_debug) * tcp_ndebug

Issuing the ioctl again with tcp_where set to the start of the buffer and tcp_size set to the size as computed above will return a buffer consisting of the tcp_dbg_hdr structure followed by tcp_ndebug tcp_debug structures. The tcp_debx field will be set to the current offset into the buffer.
ioc_count is not TRANSPARENT for a transparent ioctl
RFC 1337
RFC 793 (STD 7), RFC 1122 (STD 5)
|
http://osr507doc.xinuos.com/en/man/html.ADMP/tcp.ADMP.html
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Chapter 31. Customizing the Web Console
31.1. Overview
Administrators can customize the web console using extensions, which let you run scripts and load custom stylesheets when the web console loads. Extension scripts allow you to override the default behavior of the web console and customize it for your needs.
For example, extension scripts can be used to add your own company’s branding or to add company-specific capabilities. A common use case for this is rebranding or white-labelling for different environments. You can use the same extension code, but provide settings that change the web console. You can change the look and feel of nearly any aspect of the user interface in this way.
31.2. Loading Extension Scripts and Stylesheets
Wrap extension scripts in an Immediately Invoked Function Expression (IIFE). This ensures that you do not create global variables that conflict with the names used by the web console or by other extensions. For example:
(function() {
  // Put your extension code here...
}());

The examples in the following sections show common ways you can customize the web console.
Additional extension examples are available in the OpenShift Origin repository on GitHub.
31.2.1. Setting Extension Properties
If you have a specific extension, but want to use different text in it for each of the environments, you can define the environment in the master-config.yaml file, and use the same extension script across environments. Pass settings from the master-config.yaml file to be used by the extension using the
extensionProperties mechanism:
assetConfig:
  extensionDevelopment: true
  extensionProperties:
    doc_url:
    key1: value1
    key2: value2
  extensionScripts:
This results in a global variable that can be accessed by the extension, as if the following code was executed:
window.OPENSHIFT_EXTENSION_PROPERTIES = {
  doc_url: "",
  key1: "value1",
  key2: "value2",
}
31.2.2. Customizing the Logo
The following style changes the logo in the web console header:
#header-logo {
  background-image: url("");
  width: 190px;
  height: 20px;
}
Replace the example.com URL with a URL to an actual image, and adjust the width and height. The ideal height is 20px.
Save the style to a file (for example, logo.css) and add it to the master configuration file:
assetConfig:
  ...
  extensionStylesheets:
    - /path/to/logo.css
31.2.3. Changing Links to Documentation
Links to external documentation are shown in various sections of the web console. The following example changes the URL for two given links to the documentation:
window.OPENSHIFT_CONSTANTS.HELP['get_started_cli'] = "";
window.OPENSHIFT_CONSTANTS.HELP['basic_cli_operations'] = "";
Alternatively, you can change the base URL for all documentation links.
This example would result in the default help URL:
window.OPENSHIFT_CONSTANTS.HELP_BASE_URL = "";
Save this script to a file (for example, help-links.js) and add it to the master configuration file:
assetConfig:
  ...
  extensionScripts:
    - /path/to/help-links.js
31.2.4. Adding or Changing Links to Download the CLI
The About page in the web console provides download links for the command line interface (CLI) tools. These links can be configured by providing both the link text and URL, so that you can choose to point them directly to file packages, or to an external page that points to the actual packages.
For example, to point directly to packages that can be downloaded, where the link text is the package platform:
window.OPENSHIFT_CONSTANTS.CLI = {
  "Linux (32 bits)": "https://<cdn>/openshift-client-tools-linux-32bit.tar.gz",
  "Linux (64 bits)": "https://<cdn>/openshift-client-tools-linux-64bit.tar.gz",
  "Windows": "https://<cdn>/openshift-client-tools-windows.zip",
  "Mac OS X": "https://<cdn>/openshift-client-tools-mac.zip"
};
Alternatively, to point to a page that links the actual download packages, with the Latest Release link text:
window.OPENSHIFT_CONSTANTS.CLI = {
  "Latest Release": "https://<cdn>/openshift-client-tools/latest.html"
};
Save this script to a file (for example, cli-links.js) and add it to the master configuration file:
assetConfig:
  ...
  extensionScripts:
    - /path/to/cli-links.js
31.2.5. Customizing the About Page
To provide a custom About page for the web console:
Write an extension that looks like:
angular
  .module('aboutPageExtension', ['openshiftConsole'])
  .config(function($routeProvider) {
    $routeProvider
      .when('/about', {
        templateUrl: 'extensions/about/about.html',
        controller: 'AboutController'
      });
  }
);

hawtioPluginLoader.addModule('aboutPageExtension');
- Save the script to a file (for example, about/about.js).
Write a customized template.
- Start from the version of about.html from the OpenShift Container Platform release you are using. Within the template, there are two angular scope variables available:
version.master.openshiftand
version.master.kubernetes.
- Save the custom template to a file (for example, about/about.html).
Modify the master configuration file:
assetConfig:
  ...
  extensionScripts:
    - about/about.js
  ...
  extensions:
    - name: about
      sourceDirectory: /path/to/about
31.2.7. Configuring Catalog Categories
Catalog categories organize the display of builder images and templates on the Add to Project page on the OpenShift Container Platform web console. A builder image or template is grouped in a category if it includes a tag with the same name of the category or category alias. Categories only display if one or more builder images or templates with matching tags are present in the catalog.
Significant customizations to the catalog categories may affect the user experience and should be done with careful consideration. You may need to update this customization in future upgrades if you modify existing category items.
Create the following configuration scripts within a file (for example, catalog-categories.js):
// Add Go to the Languages category
var category = _.find(window.OPENSHIFT_CONSTANTS.CATALOG_CATEGORIES, { id: 'languages' });
category.items.splice(2,0,{ // Insert at the third spot
  // Required. Must be unique
  id: "go",
  // Required
  label: "Go",
  // Optional. If specified, defines a unique icon for this item
  iconClass: "font-icon icon-go-gopher",
  // Optional. If specified, enables matching other tag values to this category
  // item
  categoryAliases: [
    "golang"
  ]
});

// Add a Featured category section at the top of the catalog
window.OPENSHIFT_CONSTANTS.CATALOG_CATEGORIES.unshift({
  // Required. Must be unique
  id: "featured",
  // Required
  label: "Featured",
  // Optional. If specified, each item in the category will utilize this icon
  // as a default
  iconClassDefault: "fa fa-code",
  items: [
    {
      // Required. Must be unique
      id: "go",
      // Required
      label: "Go",
      // Optional. If specified, defines a unique icon for this item
      iconClass: "font-icon icon-go-gopher",
      // Optional. If specified, enables matching other tag values to this
      // category item
      categoryAliases: [
        "golang"
      ],
      // Optional. If specified, will display below the item label
      description: "An open source programming language developed at Google in " +
        "2007 by Robert Griesemer, Rob Pike, and Ken Thompson."
    },
    {
      // Required. Must be unique
      id: "jenkins",
      // Required
      label: "Jenkins",
      // Optional. If specified, defines a unique icon for this item
      iconClass: "font-icon icon-jenkins",
      // Optional. If specified, will display below the item label
      description: "An open source continuous integration tool written in Java."
    }
  ]
});
Save the file and add it to the master configuration at /etc/origin/master/master-config.yml:
assetConfig:
  ...
  extensionScripts:
    - /path/to/catalog-categories.js
Create the following configuration scripts within a file (for example, create-from-url-whitelist.js):
// Add a namespace containing the image streams and/or templates
window.OPENSHIFT_CONSTANTS.CREATE_FROM_URL_WHITELIST.push(
  'shared-stuff'
);
Save the file and add it to the master configuration file at /etc/origin/master/master-config.yml:
assetConfig:
  ...
  extensionScripts:
    - /path/to/create-from-url-whitelist.js
Restart the master host:
# systemctl restart atomic-openshift-master
31.3. Enabling Wildcard Routes
If you enabled wildcard routes for a router, you can also enable wildcard routes in the web console. This lets users enter hostnames starting with an asterisk like
*.example.com when creating a route. To enable wildcard routes:
Save this script to a file (for example, enable-wildcard-routes.js):
window.OPENSHIFT_CONSTANTS.DISABLE_WILDCARD_ROUTES = false;
Add it to the master configuration file:
assetConfig:
  ...
  extensionScripts:
    - /path/to/enable-wildcard-routes.js
Learn how to configure HAProxy routers to allow wildcard routes.
31.3.1. Enabling Features in Technology Preview
Sometimes features are available in Technology Preview. By default, these features are disabled and hidden in the web console.
Currently, there are no web console features in Technology Preview.
To enable a Technology Preview feature:
Save this script to a file (for example, tech-preview.js):
window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.<feature_name> = true;
Add it to the master configuration file:
assetConfig:
  ...
  extensionScripts:
    - /path/to/tech-preview.js
31"); }
31.
31.5. Customizing the Login Page
You can also change the login page, and the login provider selection page for the web console. Run the following commands to create templates you can modify:
$ oadm create-login-template > login-template.html
$ oadm create-provider-selection-template > provider-selection-template.html
Edit the file to change the styles or add content, but be careful not to remove any required parameters inside the curly brackets.
To use your custom login page or provider selection page, set the following options in the master configuration file:
oauthConfig:
  ...
  templates:
    login: /path/to/login-template.html
    providerSelection: /path/to/provider-selection-template.html

When a login to OpenShift Container Platform expires, the user is presented with this custom page before they can proceed with other tasks.
31.8. Configuring Web Console Customizations with Ansible
During advanced installations, many modifications to the web console can be configured using the following parameters, which are configurable in the inventory file:
Example Web Console Customization with Ansible
# Configure logoutURL in the master config for console customization
# See:
#openshift_master_logout_url=

# Configure extensionScripts in the master config for console customization
# See:
#openshift_master_extension_scripts=['/path/on/host/to/script1.js','/path/on/host/to/script2.js']

# Configure extensionStylesheets in the master config for console customization
# See:
#openshift_master_extension_stylesheets=['/path/on/host/to/stylesheet1.css','/path/on/host/to/stylesheet2.css']

# Configure extensions in the master config for console customization
# See:
#openshift_master_extensions=[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]

# Configure extensions in the master config for console customization
# See:
#openshift_master_oauth_template=/path/on/host/to/login-template.html

# Configure metricsPublicURL in the master config for cluster metrics. Ansible is also able to configure metrics for you.
# See:
#openshift_master_metrics_public_url=

# Configure loggingPublicURL in the master config for aggregate logging. Ansible is also able to install logging for you.
# See:
#openshift_master_logging_public
|
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.4/html/installation_and_configuration/install-config-web-console-customization
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
import "k8s.io/client-go/kubernetes/typed/coordination/v1beta1/fake"
Package fake has the automatically generated clients.
doc.go fake_coordination_client.go fake_lease.go
func (c *FakeCoordinationV1beta1) Leases(namespace string) v1beta1.LeaseInterface
func (c *FakeCoordinationV1beta1) RESTClient() rest.Interface
RESTClient returns a RESTClient that is used to communicate with API server by this client implementation.
type FakeLeases struct {
    Fake *FakeCoordinationV1beta1
    // contains filtered or unexported fields
}
FakeLeases implements LeaseInterface
Create takes the representation of a lease and creates it. Returns the server's representation of the lease, and an error, if there is any.
func (c *FakeLeases) Delete(name string, options *v1.DeleteOptions) error
Delete takes name of the lease and deletes it. Returns an error if one occurs.
func (c *FakeLeases) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error
DeleteCollection deletes a collection of objects.
func (c *FakeLeases) Get(name string, options v1.GetOptions) (result *v1beta1.Lease, err error)
Get takes name of the lease, and returns the corresponding lease object, and an error if there is any.
func (c *FakeLeases) List(opts v1.ListOptions) (result *v1beta1.LeaseList, err error)
List takes label and field selectors, and returns the list of Leases that match those selectors.
func (c *FakeLeases) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1beta1.Lease, err error)
Patch applies the patch and returns the patched lease.
Update takes the representation of a lease and updates it. Returns the server's representation of the lease, and an error, if there is any.
func (c *FakeLeases) Watch(opts v1.ListOptions) (watch.Interface, error)
Watch returns a watch.Interface that watches the requested leases.
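As a hedged illustration (not part of the generated documentation), FakeLeases is normally reached through the fake clientset in unit tests, roughly as sketched below; the namespace and lease name are made up:

import (
    coordinationv1beta1 "k8s.io/api/coordination/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes/fake"
)

func exampleFakeLeases() error {
    // NewSimpleClientset returns a clientset backed by an in-memory object tracker.
    clientset := fake.NewSimpleClientset()

    lease := &coordinationv1beta1.Lease{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-lease", Namespace: "default"},
    }

    // Create goes through the FakeLeases implementation documented above.
    if _, err := clientset.CoordinationV1beta1().Leases("default").Create(lease); err != nil {
        return err
    }

    // Get reads the object back from the in-memory tracker.
    _, err := clientset.CoordinationV1beta1().Leases("default").Get("demo-lease", metav1.GetOptions{})
    return err
}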
Package fake imports 9 packages and is imported by 4 packages.
|
https://godoc.org/k8s.io/client-go/kubernetes/typed/coordination/v1beta1/fake
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Paul Morrow wrote:
> Jeff Epler wrote:
>> In your proposed model, what does the following code do?
>>
>> def f():
>>     __var__ = 3
>>     return __var__
>> f.__var__ += 1
>> print f()
>
> Only assignments to __xxx__ variables at the very start of a function
> def would have this special semantics. So your return statement would
> be referencing a name (__var__) that doesn't exist in the function's
> local variable namespace.

No, that would silently return a surprising result if __var__ existed in
the global namespace. Better make it a syntax error or something.

--
Hallvard
|
https://mail.python.org/pipermail/python-list/2004-September/278936.html
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
No more null checks with a Dart equivalent of Maybe (Haskell, Elm) / Option (F#).
The key is that you need to call the
some or
when to access your potential value so you are forced to check its status before using it.
Maybe<T>.nothing: creating an optional item that is empty
final maybe = Maybe<String>.nothing();
Maybe.some: creating an optional item with a value
final maybe = Maybe.some("hello world");
final isNothing = Maybe<String>.some(null); // By default `some` with a null value is converted to `nothing`
final isNotNothing = Maybe<String>.some(null, nullable: true);
some: extracting some value
final maybe = Maybe.some("hello world");
final value = some(maybe, "default"); // == "hello world"

final maybe = Maybe<String>.nothing();
final value = some(maybe, "default"); // == "default"
isNothing: testing if some value
final maybe = Maybe.some("hello world");
final value = isNothing(maybe); // false

final maybe = Maybe<String>.nothing();
final value = isNothing(maybe); // true
when: triggering an action
var maybe = Maybe.some("hello world");
when(maybe, some: (v) {
  print(v); // "hello world"
});

// Defining nothing
maybe = Maybe.nothing();
when(maybe, some: (v) {
  print(v); // not called!
});

// You can add a default value when nothing
maybe = Maybe<String>.some(null);
when(maybe, some: (v) {
  print(v); // "hello world"
}, defaultValue: () => "hello world");
mapSome: converts a value type to another
var maybe = Maybe.some("hello world");
var converted = mapSome<String,int>(maybe, (v) => v.length);
var value = some(converted, 0); // == 11

var maybe = Maybe<String>.nothing();
var converted = mapSome<String,int>(maybe, (v) => v.length);
var value = some(converted, 0); // == 0
MaybeMap<K,V>: a map with optional values (aka Map<K, Maybe)
var map = MaybeMap<String,String>();
map["test"] = Maybe.nothing(); // doesn't add value
map["test"] = Maybe.some("value"); // adds value
when(map["test"], some: (v) => print(v));
map["test"] = Maybe.nothing(); // deletes key
when(map["test"], isNothing: (v) => print("deleted :" + map.containsKey("test").toString()));
Map<String,String> initialMap = {
  "test": "value",
};
var map = MaybeMap<String,String>.fromMap(initialMap);
when(map["test"], some: (v) => print(v));
Optional?
The Optional type has several similarities with
Maybe, but there are several subtle differences.
Optional can be null
Let's take a quick example:
class Update {
  final Optional<String> title;
  final Optional<String> description;
  Update({Optional<String> title, Optional<String> description})
      : this.title = title ?? Optional<String>.absent(),
        this.description = description ?? Optional<String>.absent();
}

final update = Update(title: Optional.of('sample'));
update.title.ifPresent((v) {
  print('title: $v');
});
update.description.ifPresent((v) {
  print('description: $v');
});
Thanks to static functions, all can be replaced by :
class Update {
  final Maybe<String> title;
  final Maybe<String> description;
  Update({this.title, this.description});
}

final update = Update(title: Maybe.some('sample'));
when(update.title, some: (v) {
  print('title: $v');
});
when(update.description, some: (v) {
  print('description: $v');
});
So, the critical part is that you can forget that
Optional can be
null itself and produce exceptions (
update.title.ifPresent in our example). You are then forced to test its nullity and you come back to the initial problematic. This is where
Maybe feels more robust to me.
absent is similar to null
With
Maybe, values can be nullable.
In the following example, we explicitly say that the title should have a new
null value.
class Update {
  final Maybe<String> title;
  final Maybe<String> description;
  Update({this.title, this.description});
}

final update = Update(title: Maybe.some(null, nullable: true));
This is really different from having a nothing title, which signifies that the title shouldn't be modified.
example/main.dart
import 'package:maybe/maybe.dart';

class Update {
  final Maybe<String> title;
  final Maybe<String> description;
  Update({this.title, this.description});
}

void main(args) {
  // Update for the title, none for description
  var update = Update(title: Maybe.some("new title"), description: Maybe.nothing());

  // No update for title
  when(update.title, some: (v) {
    print("Updating title $v");
  });

  // if is also possible
  if (isNothing(update.title)) {
    print("No description");
  }

  // Fallback value
  print(some(update.description, "default description"));
}
Add this to your package's pubspec.yaml file:

dependencies:
  maybe: ^0.3.8

Then import it in your Dart code:

import 'package:maybe/maybe.dart';
|
https://pub.dev/packages/maybe
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Package org.eclipse.core.runtime
Class InvalidRegistryObjectException
- java.lang.Object
- java.lang.Throwable
- java.lang.Exception
- java.lang.RuntimeException
- org.eclipse.core.runtime.InvalidRegistryObjectException
- All Implemented Interfaces:
Serializable
public class InvalidRegistryObjectException extends RuntimeException

An unchecked exception indicating that an attempt was made to access an extension registry object that is no longer valid.
This exception is thrown by methods on extension registry objects. It is not intended to be instantiated or subclassed by clients.
This class can be used without OSGi running.
- See Also:
- Serialized Form
- Restriction:
- This class is not intended to be subclassed by clients.
- Restriction:
- This class is not intended to be instantiated by clients.
Method Summary
Methods inherited from class java.lang.Throwable
addSuppressed, fillInStackTrace, getCause, getLocalizedMessage, getMessage, getStackTrace, getSuppressed, initCause, printStackTrace, printStackTrace, printStackTrace, setStackTrace, toString
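As an illustrative sketch (not part of this Javadoc), client code typically guards against stale registry handles as below; the helper class and method names are made up:

import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.InvalidRegistryObjectException;

class RegistryAccess {
    // Returns the element name, or null if the contributing bundle was
    // uninstalled and the handle is no longer valid.
    static String safeName(IConfigurationElement element) {
        try {
            return element.getName();
        } catch (InvalidRegistryObjectException e) {
            return null;
        }
    }
}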
|
http://help.eclipse.org/2019-03/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/core/runtime/InvalidRegistryObjectException.html
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
A collection of utility JavaScript functions copy/pasted and slightly modified from StackOverflow answers 😀 (Not intended to be used in actual programs)
This repo is used as the basis for an Egghead.io series entitled: How to Contribute to an Open Source Project on GitHub
This repository exists as a resource for people to learn how to contribute to open source in a safe and friendly environment. Feel free to watch the video series and then contribute to this project. See the contributing guidelines.
import {flatten, snakeToCamel, clone} from 'stack-overflow-copy-paste'

flatten([[1, 2,], 3]) // [1, 2, 3]
snakeToCamel('snake-case-string') // 'snakeCaseString'

const testObj = {a: 1, b: 2}
const copyObj = clone(testObj)
MIT
|
https://npm.runkit.com/stack-overflow-copy-paste
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Question: where to find detailed description and working examples of Python regular expressions?
natasha.sernova • 3.5k wrote (2.6 years ago):
Dear all!
I know Python regexps were taken from Perl, but in my opinion they lost a lot in this transition.
So far I've found several "theoretical" examples of Python regular expression application.
There are a few ways to search for them, like 'match' or 'findall' depending on the final goal.
This is not enough for me, I need your advice where to find more information with short script examples?
Many-many thanks!
Natasha
Example of my unsuccessful trials with a SwissProt file:
import re
import random
import math
import sys

print "This is the name of the script: ", sys.argv[0]
print "Number of arguments: ", len(sys.argv)
print "The arguments are: ", str(sys.argv)

#if m:
#    print 'Match found: ', m.group()
#else:
#    print 'No match'

fin = open(sys.argv[1], 'r')
for line in fin:
    a = re.match(r'^AC', line)
    if a is not None:
        a.group
    a = re.match(r'^DE', line)
    if a is not None:
        a.group
    a = re.match(r'^OC', line)
    if a is not None:
        a.group
    a = re.match(r'^KW', line)
    if a is not None:
        a.group
    a = re.match(r'(^SQ).+(\d+\s+\w{2})', line)
    if a is not None:
        a.group
        # print "\1"+"\s"+"\2"
    re.sub(r'[\s,\n]', '', line)
    print line
    a = re.match(r'^//', line)
    if a is not None:
        break
fin.close()
I've got the whole initial SwissProt-file,
ADD COMMENT • link •modified 2.6 years ago • written 2.6 years ago by natasha.sernova • 3.5k
Besides the documentation, of course, that could help you.
and for example you can test by yourself here
That's perfect! Thousand thanks! I've never seen anything similar - usually a few lines and that's it.
Note: It is better performance to use
Thank you very much again! I've realized that I write regexps from Perl in Python...
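For reference, a minimal sketch of the idiomatic match/findall pattern for this kind of line-prefixed file; the AC/KW prefixes mirror the SwissProt lines above, and the fields extracted are purely illustrative:
import re
import sys

# Compile the patterns once and reuse them for every line.
ac_line = re.compile(r'^AC\s+(.*)')
kw_line = re.compile(r'^KW\s+(.*)')

with open(sys.argv[1]) as handle:
    for line in handle:
        m = ac_line.match(line)              # match() anchors at the start of the line
        if m:
            accessions = re.findall(r'\w+', m.group(1))   # findall() returns every token
            print "AC:", accessions
        m = kw_line.match(line)
        if m:
            print "KW:", m.group(1).rstrip('.\n')
        if line.startswith('//'):            # end of the first SwissProt record
            break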
|
https://www.biostars.org/p/223755/
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
AWSSDK AppleOS
Version of the AWS SDK for the Swift programming language that supports Apple platforms (including iOS) as first class citizens. This repository is based off the aws-sdk-swift repository. As this version is reliant on
Network.framework it only works for Apple platforms, but it does support iOS unlike
aws-sdk-swift.
Supported Platforms and Swift Versions
| Platform | Version |
|---|:---:|
| iOS | v12.2 |
| macOS | v10.14 |
| tvOS | v12.2 |
Documentation
Visit the
aws-sdk-swift documentation for instructions and browsing api references.
Swift NIO
This client utilizes Swift NIO to power its interactions with AWS. It returns an
EventLoopFuture in order to allow non-blocking frameworks to use this code. This version of aws-sdk-swift is based on the NIO 2 branch and uses NIOTransportServices under the hood.
Installation
Swift Package Manager
AWS SDK Apple OS uses the Swift Package manager. It is recommended you create a swift package that only includes the AWS services you are interested in. The easiest method to do this is create an empty folder, enter this folder and type
swift package init. This will create a Package.swift file. Add the aws-sdk-appleos package as a dependency, and add the services you are using to the dependency list of your target. See below for an example. Make sure you add the line indicating the platforms you are targeting. In future versions of Xcode it will be possible to include Swift Package files directly into your project. In the meantime, the method to add aws-sdk-appleos into your project is as follows.
- Create an xcodeproj file. Run
swift package generate-xcodeproj in the same folder as your Package.swift file.
- Open your project and add the generated xcodeproj in to your project.
- Include the framework for the services you require in the Embedded Binaries for the project.
Example Package.swift
import PackageDescription

let package = Package(
    name: "MyAWSLib",
    platforms: [.iOS("12.2"), .macOS(.v10_14)],
    dependencies: [
        .package(url: "", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "MyAWSLib",
            dependencies: ["S3", "SES", "CloudFront", "ELBV2", "IAM", "Kinesis"]),
        .testTarget(
            name: "MyAWSToolTests",
            dependencies: ["MyAWSLib"]),
    ]
)
Carthage
Not supported yet
Cocoapods
Not supported yet
Contributing
All developers should feel welcome and encouraged to contribute to
aws-sdk-swift.
As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.
To contribute a feature or idea to
aws-sdk-swift, submit an issue and fill in the template. If the request is approved, you or one of the members of the community can start working on it. It is preferable that pull requests are made in the original swift repositories and not the appleos versions of the code. If you have a change that is specific to apple OS then make a pull request to the appleos branch in repositories
aws-sdk-swift or
aws-sdk-swift-core.
If you find a bug, please submit an issue with a failing test case displaying the bug or create an issue.
If you find a security vulnerability, please contact [email protected] and reach out on the #aws channel on the Vapor Discord as soon as possible. We take these matters seriously.
Configuring Credentials
Before using the SDK, ensure that you've configured credentials.
Pass the Credentials to the AWS Service struct directly
All of the AWS services' initializers accept
accessKeyId and
secretAccessKey
let ec2 = EC2( accessKeyId: "YOUR_AWS_ACCESS_KEY_ID", secretAccessKey: "YOUR_AWS_SECRET_ACCESS_KEY" )
Using the
aws-sdk-swift
AWS Swift Modules can be imported into any swift project. Each module provides a struct that can be initialized, with instance methods to call aws services. See documentation for details on specific services.
The underlying aws-sdk-swift httpclient returns a swift-nio EventLoopFuture object. An EventLoopFuture is not the response, but rather a container object that will be populated with the response sometime later. In this manner calls to aws do not block the main thread.
However, operations that require inspection or use of the response require code to be written in a slightly different manner than equivalent synchronous logic. There are numerous references available online to aid in understanding this concept.
The recommended manner to interact with futures is chaining.
import S3 //ensure this module is specified as a dependency in your package.swift
import NIO

do {
    let bucket = "my-bucket"
    let s3 = S3(
        accessKeyId: "Your-Access-Key",
        secretAccessKey: "Your-Secret-Key",
        region: .uswest2
    )

    // Create Bucket, Put an Object, Get the Object
    let createBucketRequest = S3.CreateBucketRequest(bucket: bucket)

    try s3.createBucket(createBucketRequest).thenThrowing { response -> Future<S3.PutObjectOutput> in
        // Upload text file to the s3
        let bodyData = "hello world".data(using: .utf8)!
        let putObjectRequest = S3.PutObjectRequest(acl: .publicRead, bucket: bucket, contentLength: Int64(bodyData.count), body: bodyData, key: "hello.txt")
        return try s3.putObject(putObjectRequest)
    }.thenThrowing { response -> Future<S3.GetObjectOutput> in
        let getObjectRequest = S3.GetObjectRequest(bucket: bucket, key: "hello.txt")
        return try s3.getObject(getObjectRequest)
    }.whenSuccess { futureResponse in
        futureResponse.whenSuccess { response in
            if let body = response.body {
                print(String(data: body, encoding: .utf8))
            }
        }
    }
} catch {
    print(error)
}
Or you can use the nested method
import S3 //ensure this module is specified as a dependency in your package.swift

do {
    let bucket = "my-bucket"
    let s3 = S3(
        accessKeyId: "Your-Access-Key",
        secretAccessKey: "Your-Secret-Key",
        region: .uswest1
    )

    // Create Bucket, Put an Object, Get the Object
    let createBucketRequest = S3.CreateBucketRequest(bucket: bucket)

    try s3.createBucket(createBucketRequest).whenSuccess { response in
        do {
            let bodyData = "hello world".data(using: .utf8)!
            let putObjectRequest = S3.PutObjectRequest(acl: .publicRead, key: "hello.txt", body: bodyData, contentLength: Int64(bodyData.count), bucket: bucket)
            try s3.putObject(putObjectRequest).whenSuccess { response in
                do {
                    let getObjectRequest = S3.GetObjectRequest(bucket: bucket, key: "hello.txt")
                    try s3.getObject(getObjectRequest).whenSuccess { response in
                        if let body = response.body {
                            print(String(data: body, encoding: .utf8))
                        }
                    }
                } catch {
                    print(error)
                }
            }
        } catch {
            print(error)
        }
    }
} catch {
    print(error)
}
License
aws-sdk-swift is released under the Apache License, Version 2.0. See LICENSE for details.
Github
Help us keep the lights on
Dependencies
Used By
Total:
Releases
0.3.1 - Jul 8, 2019
0.3.0 - Jul 1, 2019
- Using release v0.2.1 of aws-sdk-appleos-core
- Sync service models files to v1.20.11. See aws-sdk-go versions 1.19.44 to 1.20.11 for details of changes.
- Added new services ApiGatewayManagementApi, ApiGatewayV2 ApplicationInsights DocDB EC2InstanceConnect GroundStation IoTEvents, IoTEventsData, IoTThingsGraph ManagedBlockchain MediaPackageVod Personalize, PersonalizeEvents, PersonalizeRuntime ServiceQuotas Textract WorkLink
0.2.0 - Jun 26, 2019
- Output an empty body when the payload body is nil
- Added collection encoding to shape member. Use this to make decisions on encoding/decoding of arrays and dictionaries in XML and queries (fixes multiple issues across XML and query based services)
- XML can decode dictionaries with enum keys (fixes SQS.GetQueueAttributesRequest)
- Percent encode more characters placed in query body (fixes SQS.DeleteMessage, SNS.CreatePlatformApplication or any query based request that requires the '+' sign.)
- Add ability to flatten all arrays in query encoder (fixes multiple issues with EC2)
0.1.0 - Jun 18, 2019
Initial version of aws-sdk-appleos
- Based on NIO2.0 version of aws-sdk-swift
- Compiles for macOS, iOS, tvOS
- Added custom HttpClient that is setup to use NIOTransportServices
- Added custom version of XMLNode library for iOS, tvOS as it is not available for them
|
https://swiftpack.co/package/swift-aws/aws-sdk-appleos
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Timothy M. Rodriguez commented on LUCENE-8000:
----------------------------------------------
Makes sense, agreed on both points.
> Document Length Normalization in BM25Similarity correct?
> --------------------------------------------------------
>
> Key: LUCENE-8000
> URL:
> Project: Lucene - Core
> Issue Type: Bug
> Reporter: Christoph Goller
> Priority: Minor
>
> Length of individual documents only counts the number of positions of a document since
discountOverlaps defaults to true.
> {quote} @Override
> public final long computeNorm(FieldInvertState state) {
> final int numTerms = discountOverlaps ? state.getLength() - state.getNumOverlap()
: state.getLength();
> int indexCreatedVersionMajor = state.getIndexCreatedVersionMajor();
> if (indexCreatedVersionMajor >= 7) {
> return SmallFloat.intToByte4(numTerms);
> } else {
> return SmallFloat.floatToByte315((float) (1 / Math.sqrt(numTerms)));
> }
> }{quote}
> Measuring document length this way seems perfectly ok for me. What bothers me is that
> average document length is based on sumTotalTermFreq for a field. As far as I understand
that sums up totalTermFreqs for all terms of a field, therefore counting positions of terms
including those that overlap.
> {quote} protected float avgFieldLength(CollectionStatistics collectionStats) {
> final long sumTotalTermFreq = collectionStats.sumTotalTermFreq();
> if (sumTotalTermFreq <= 0) {
> return 1f; // field does not exist, or stat is unsupported
> } else {
> final long docCount = collectionStats.docCount() == -1 ? collectionStats.maxDoc()
: collectionStats.docCount();
> return (float) (sumTotalTermFreq / (double) docCount);
> }
> }{quote}
> Are we comparing apples and oranges in the final scoring?
> I haven't run any benchmarks and I am not sure whether this has a serious effect. It
just means that documents that have synonyms or in our case different normal forms of tokens
on the same position are shorter and therefore get higher scores than they should and that
we do not use the whole spectrum of relative document length of BM25.
> I think for BM25 discountOverlaps should default to false.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
|
http://mail-archives.us.apache.org/mod_mbox/lucene-dev/201710.mbox/%[email protected]%3E
|
CC-MAIN-2019-30
|
en
|
refinedweb
|
Dave Yeo <daveryeo at telus.net> writes:

> libavformat now has a dependency on libavutil causing this error on a

libavformat has depended on libavutil for as long as libavutil has
existed.  Nothing has changed there.

> static build (similar error building a shared libavformat)
> [...]
> R:\tmp\ldconv_libavformat_s_a_74454c088fcb1ebd70.lib(utils.obj) :
> error LNK2029: "_ff_toupper4" : unresolved external
> R:\tmp\ldconv_libavformat_s_a_74454c088fcb1ebd70.lib(utils.obj) :
> error LNK2029: "_av_get_codec_tag_string" : unresolved external
>
> There were 2 errors detected
> make: *** [ffmpeg_g.exe] Error 1

What is the exact command that fails?  Run "make V=1" and paste the last
command along with the full output from it.

> Fix is to rearrange the build order and linking as in the attached
> patch or any other order where libavutil comes before libavformat. For
> shared builds we'd still need to pass something like LDFLAGS=-Lavutil
> -lavutil.
> Dave
>
> Index: common.mak
> ===================================================================
> --- common.mak (revision 23463)
> +++ common.mak (working copy)
> @@ -31,7 +31,7 @@
> $(eval INSTALL = @$(call ECHO,INSTALL,$$(^:$(SRC_DIR)/%=%)); $(INSTALL))
> endif
>
> -ALLFFLIBS = avcodec avdevice avfilter avformat avutil postproc swscale
> +ALLFFLIBS = avutil avcodec avdevice avfilter avformat postproc swscale
>
> CPPFLAGS := -I$(BUILD_ROOT_REL) -I$(SRC_PATH) $(CPPFLAGS)
> CFLAGS += $(ECFLAGS)

That patch has no effect at all.  You must be confused.

--
Måns Rullgård
mans at mansr.com
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-June/090962.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
A BGP update packet can have many NLRIs. More...
#include <trie_payload.hh>
A BGP update packet can have many NLRIs.
Each NLRI is stored in a trie node. Rather than keep multiple copies of a BGP update packet, a single reference-counted copy is kept in TrieData. A TriePayload is stored in the trie and holds a pointer to the TrieData.
|
http://xorp.org/releases/current/docs/kdoc/html/classTrieData.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
> lib
>
>
This generally means that 'using namespace std;' needs to be added to the
top of the file.  But why wouldn't this have been fixed for hppa or ia64?

=====
James Morrison
University of Waterloo
Computer Science - Digital Hardware 2A co-op

Anyone refering this as 'Open Source' shall be eaten by a GNU

__________________________________________________
Do You Yahoo!?
Yahoo! Health - your guide to health and wellness

--
To UNSUBSCRIBE, email to [email protected]
with a subject of "unsubscribe". Trouble? Contact [email protected]
|
https://lists.debian.org/debian-hurd/2002/05/msg00095.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
hi,
Through getter and setter methods we can indirectly access the private data members of a class,
as shown in the code below. So what is the significance of encapsulation in OOP? I am not getting an appropriate answer for that. Can anyone help me?
Thanks in advance
#include <iostream>
using namespace std;

class Test
{
private:
    int a;
    int b;
public:
    void setValue(int val)
    {
        a = val;
    }
    int getValue()
    {
        return a;
    }
};

int main()
{
    Test objTest;
    objTest.setValue(1);
    int result = objTest.getValue();
    cout << result << endl;
    return 1;
}
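A common way to see why encapsulation still matters, even with getters and setters, is that the setter is a controlled gate: the class can validate input and protect its invariants, which direct access to the data member could not. A minimal sketch (the BankAccount class here is hypothetical, not from the code above):
#include <iostream>
#include <stdexcept>

class BankAccount
{
private:
    double balance;   // internal state stays consistent
public:
    BankAccount() : balance(0.0) {}

    // The "setter" enforces an invariant: the balance never goes negative.
    void withdraw(double amount)
    {
        if (amount < 0 || amount > balance)
            throw std::invalid_argument("invalid withdrawal");
        balance -= amount;
    }

    void deposit(double amount)
    {
        if (amount < 0)
            throw std::invalid_argument("invalid deposit");
        balance += amount;
    }

    double getBalance() const { return balance; }
};

int main()
{
    BankAccount acct;
    acct.deposit(100.0);
    acct.withdraw(30.0);
    std::cout << acct.getBalance() << std::endl;   // prints 70
    return 0;
}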
|
https://www.daniweb.com/programming/software-development/threads/289946/encapsulation
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
- Backed by a java.util.PriorityQueue wrapped in an akka.util.PriorityQueueStabilizer
BoundedControlAwareMailbox
- Delivers messages that extend the akka.dispatch.ControlMessage trait with higher priority
- Backed by two
java.util.concurrent.ConcurrentLinkedQueue and blocking on enqueue if capacity has been reached
- Blocking: Yes if used with non-zero
mailbox-push-timeout-time, otherwise No
- Bounded: Yes
- Configuration name: "akka.dispatch.BoundedControlAwareMailbox"
Mailbox configuration examples
PriorityMailbox
How to create a PriorityMailbox:
import akka.dispatch.PriorityGenerator
import akka.dispatch.UnboundedStablePriorityMailbox
import com.typesafe.config.Config

// We inherit, in this case, from UnboundedStablePriorityMailbox
// and seed it with the priority generator
class MyPrioMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedStablePriorityMailbox(
    // Create a new PriorityGenerator, lower prio means more important
    PriorityGenerator {
      // 'highpriority messages should be treated first if possible
      case 'highpriority => 0
      // 'lowpriority messages should be treated last if possible
      case 'lowpriority  => 2
      // PoisonPill when no other left
      case PoisonPill    => 3
      // We default to 1, which is in between high and low
      case otherwise     => 1
    })
import akka.actor.Props val myActor = context.actorOf(Props[MyActor], "priomailboxactor")
Or code like this:
import akka.actor.Props val myActor = context.actorOf(Props[MyActor]:
import akka.dispatch.ControlMessage case object MyControlMessage extends ControlMessage
And then an example on how you would use it:
// We create a new Actor that just prints out what it processes
class Logger extends Actor {
  val log: LoggingAdapter = Logging(context.system, this)

  self ! 'foo
  self ! 'bar
  self ! MyControlMessage
  self ! PoisonPill

  def receive = {
    case x => log.info(x.toString)
  }
}
val a = system.actorOf(Props(classOf[Logger]))

akka.actor.mailbox.requirements {
  "jdocs.dispatcher.MyUnboundedMessageQueueSemantics" = custom-dispatcher-mailbox
}
custom-dispatcher-mailbox {
  mailbox-type = "jdocs.dispatcher.MyUnboundedMailbox"
}
Or by defining the requirement on your actor class like this:
class MySpecialActor extends Actor with RequiresMessageQueue[MyUnboundedMessageQueueSemantics]
|
http://doc.akka.io/docs/akka/current/scala/mailboxes.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
mmap, mmap64
mmap(), mmap64()
Map a memory region into a process's address space
Synopsis:
#include <sys/mman.h>

void * mmap( void * addr,
             size_t len,
             int prot,
             int flags,
             int fildes,
             off_t off );

void * mmap64( void * addr,
               size_t len,
               int prot,
               int flags,
               int fildes,
               off64_t off );
Arguments:
- addr
- NULL, or a pointer to where you want the object to be mapped in the calling process's address space.
- len
- The number of bytes to map into the caller's address space. It can't be 0.
- prot
- The access capabilities that you want to use for the memory region being mapped. You can combine any of the PROT_* protection flags (for example, PROT_READ | PROT_WRITE).
- flags
- Flags that specify further information about handling the mapped region. POSIX defines the following:
- MAP_PRIVATE
- MAP_SHARED
- MAP_FIXED
Additional flags are Unix or QNX Neutrino extensions.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The mmap() function maps a region within the object specified by fildes, beginning at off and continuing for len, into the caller's address space and returns the location. The object that you map from can be one of the following:
- a file, opened with open()
- a shared memory object, opened with shm_open()
- a typed memory object, opened with posix_typed_mem_open()
- physical memory — specify NOFD for fildes
If you want to map a device's physical memory, use mmap_device_memory() instead of mmap(); if you want to map a device's registers, use mmap_device_io().
If fildes isn't NOFD, you must have opened the file descriptor for reading, no matter what value you specify for prot; write access is also required for PROT_WRITE if you haven't specified MAP_PRIVATE.
The mapping is as shown below.
Mapping memory with mmap().
Typically, you don't need to use addr; you can just pass NULL instead. If you set addr to a non-NULL value, whether the object is mapped depends on whether or not you set MAP_FIXED in flags:
- MAP_FIXED is set
- The object is mapped to the address in addr, or the function fails.
- MAP_FIXED isn't set
- The value of addr is taken as a hint as to where to map the object in the calling process's address space. The mapped area won't overlay any current mapped areas.
There are two parts to the flags parameter. The first part is a type (masked by the MAP_TYPE bits), which you must specify; the remaining bits are modifier flags, such as the following:
- MAP_FIXED
- Map the object to the address specified by addr. If this area is already mapped, the call changes the existing mapping of the area.
A memory area being mapped with MAP_FIXED is first unmapped by the system using the same memory area. See munmap() for details.
- MAP_LAZY
- Delay acquiring system memory, and copying or zero-filling the MAP_PRIVATE or MAP_ANON pages, until an access to the area has occurred. If you set this flag, and there's no system memory at the time of the access, the thread gets a SIGBUS with a code of BUS_ADRERR. This flag is a hint to the memory manager.
For anonymous shared memory objects (those created via mmap() with MAP_ANON | MAP_SHARED and a file descriptor of -1), a MAP_LAZY flag implicitly sets the SHMCTL_LAZY flag on the object (see shm_ctl()).
- MAP_NOINIT
- When specified, the POSIX requirement that the memory be zeroed is relaxed (see the QNX Neutrino Programmer's Guide for details).
- The address from off for len bytes is invalid for the requested object.
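For illustration, a minimal POSIX-style sketch of the shared-memory case described above: the object is created with shm_open(), sized with ftruncate(), and then mapped with mmap(). The object name and size are arbitrary.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;

    /* Create (or open) a shared memory object and give it a size. */
    int fd = shm_open("/demo_shm", O_RDWR | O_CREAT, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, len) == -1) { perror("ftruncate"); return 1; }

    /* Map it into our address space, readable and writable, shared. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy((char *) p, "hello");   /* other processes mapping /demo_shm see this */

    munmap(p, len);
    close(fd);
    shm_unlink("/demo_shm");       /* remove the object when done */
    return 0;
}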
Classification:
mmap() is POSIX 1003.1 MF|SHM|TYM; mmap64() is Large-file support
See also:
mmap_device_io(), mmap_device_memory(), munmap(), msync(), posix_typed_mem_open(), setrlimit(), shm_open()
“Shared memory” and “Typed memory” in the Interprocess Communication (IPC) chapter of the System Architecture guide
|
http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/m/mmap.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Opened 8 years ago
Closed 7 years ago
Last modified 17 months ago
#8486 closed enhancement (fixed)
make trac.web.auth.LoginModule cookie path configurable
Description (last modified by )
Currently, the
LoginModule sets (and deletes) a cookie on
req.base_path:
req.outcookie['trac_auth'] = cookie
req.outcookie['trac_auth']['path'] = req.base_path or '/'
def _expire_cookie(self, req):
    """Instruct the user agent to drop the auth cookie by setting the
    "expires" property to a date in the past.
    """
    req.outcookie['trac_auth'] = ''
    req.outcookie['trac_auth']['path'] = req.base_path or '/'
    req.outcookie['trac_auth']['expires'] = -10000
    if self.env.secure_cookies:
        req.outcookie['trac_auth']['secure'] = True
It would be nice for this to be configurable, so that, optionally the cookie path could be specified as an
Option with the current behavior being the fallback if the
Option is not set.
The reason I would like this is so that cookies can be parseable via multiple Trac environments on the same server, optionally. Currently, (in the soon to be released SharedCookieAuthPlugin) I work around this with a monkey-patch:
class GenericObject(object):
    def __init__(self, **kw):
        for key, item in kw.items():
            setattr(self, key, item)

def _do_login(self, req):
    kw = [ 'incookie', 'remote_user', 'authname', 'remote_addr', 'outcookie' ]
    kw = dict([ (i, getattr(req, i)) for i in kw ])
    kw['base_path'] = '/'
    fake_req = GenericObject(**kw)
    auth_login_module_do_login(self, fake_req)
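As a rough sketch of the kind of change being requested (this is not the attached patch; the option name and default are illustrative), the cookie path would become a trac.config Option with the current behaviour as the fallback:
from trac.config import Option

# Illustrative option declaration; an empty value keeps today's behaviour.
auth_cookie_path = Option('trac', 'auth_cookie_path', '',
    """Path to set on the trac_auth cookie (empty = use req.base_path).""")

def cookie_path(configured_path, base_path):
    """Return the path the auth cookie should be set on."""
    return configured_path or base_path or '/'

print cookie_path('', '/projects/foo')   # current behaviour: /projects/foo
print cookie_path('/', '/projects/foo')  # shared across environments: /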
Attachments (2)
Change History (16)
comment:1 follow-up: 2 Changed 8 years ago by
comment:2 Changed 8 years ago by
I'm not sure I understand the use case. Isn't the cookie a randomly-generated identifier that only makes sense in the context of a specific Trac instance?
see SharedCookieAuthPlugin.
There are two basic cases of interest (probably more, but let's start with two):
- cookies are associated with a single environment; that is, you want different auth for different Trac instances
- cookies are readable across environments on the same server; that is, if you log into trac.example.com/foo you are also logged into trac.example.com/bar (single-sign on)
SharedCookieAuthPlugin is a very hacky way of doing this. On the other hand, it works, consumes existing code, and should be secure. I welcome better SSO solutions that are easily deployable, pluggable, and exist within the Trac framework and don't rely on other (usually less deployable) software.
comment:3 Changed 8 years ago by
+1 to this. We use a single-sign on auth system for Trac, but still have to modify
trac.web.auth.LoginModule to set the cookie path to
/ so that it can work without fuss with other plugins that use it, such as AccountManagerPlugin.
comment:4 follow-up: 5 Changed 7 years ago by
Patch welcomed, then.
Changed 7 years ago by
patch against 0.11-stable
comment:5 Changed 7 years ago by
Patch welcomed, then.
attached, login_cookie_path.r8732.diff
comment:6 Changed 7 years ago by
Thanks!
Two minor nits:
- instead of adding:
from trac.config import Option, modify the previous import, add
Optionafter
BoolOption
- more important, the docstring is a bit scarce; without being too long, it should nevertheless give a hint about why someone should ever want to change the default.
comment:7 follow-up: 8 Changed 7 years ago by
Well, feel free to update the patch as requested.
Changed 7 years ago by
revised patch vs 0.11-stable
comment:8 Changed 7 years ago by
Well, feel free to update the patch as requested.
Done, login_cookie_path.r8885.diff.
If the docstring isn't adequate, suggestions welcome
comment:9 Changed 7 years ago by
comment:10 Changed 7 years ago by
comment:11 Changed 7 years ago by
comment:12 Changed 5 years ago by
I've been unsuccessful in seeking to resolve an issue rendering th:SharedCookieAuthPlugin useless in recent Trac applications (Trac 0.13dev-r10883 tested, but should apply at least down to Trac 0.12 as well).
After many hours of code studies and testing I've dropped that approach and created an alternative solution instead, on-top of th:AccountManagerPlugin. This effort is tracked in th:#9676 now. Any feedback welcome.
|
https://trac.edgewall.org/ticket/8486
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
hello people,
its the very first time im doing programming stuff.And i have done nothing so far.I found a booklet in my bookcase,it's about C++.,ts just the first booklet of a serie.
i use visual c++ 2008 express,i did new>file>C++
i used the same codes as in the booklet,here it is
//File Name: Source1.cpp
//This is my first program.
/*I will just make it write something*/
#include <iostream>

int main()
{
    cout << "TEST\n";
    return 0;
}
i added the cpp file into a project because it wasnt compiling if i didnt add it.But when i click the compile it gives an error.it says:
'cout' : undeclared identifier
then i added ":" next to the "cout"
i clicked compile it said:
error C2143: syntax error : missing ';' before '<<'
and i added ";" before "<<" but it keeps telling me that.and there were no ":" next to "cout" on the booklet. what am i gonna do?by the way i dont know nothing about programming,i just follow the instructions.and its a very old booklet,cant reach the author.
|
https://www.daniweb.com/programming/software-development/threads/169032/i-cant-compile-help
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Tk_GetScrollInfoObj man page
Tk_GetScrollInfoObj, Tk_GetScrollInfo — parse arguments for scrolling commands
Synopsis
#include <tk.h>

int Tk_GetScrollInfoObj(interp, objc, objv, dblPtr, intPtr)

int Tk_GetScrollInfo(interp, argc, argv, dblPtr, intPtr)
Arguments
- Tcl_Interp *interp (in)
Interpreter to use for error reporting.
- int objc (in)
Number of Tcl_Obj's in objv array.
- Tcl_Obj *const objv[] (in)
Argument objects. These represent the entire widget command, of which the first word is typically the widget name and the second word is typically xview or yview.
- int argc (in)
Number of strings in argv array.
- const char *argv[] (in)
Argument strings. These represent the entire widget command, of which the first word is typically the widget name and the second word is typically xview or yview.
- double *fractionPtr (out)
Filled in with fraction from moveto option, if any.
- int *stepsPtr (out)
Filled in with line or page count from scroll option, if any. The value may be negative.
Description
Tk_GetScrollInfoObj parses the arguments expected by widget scrolling commands such as xview and yview. It receives the entire list of words that make up a widget command and parses the words starting with objv[2]. The words starting with objv[2] must have one of the following forms:
moveto fraction scroll number units scroll number pages
Any of the moveto, scroll, units, and pages keywords may be abbreviated. If the arguments have one of the above forms, TK_SCROLL_MOVETO, TK_SCROLL_UNITS, or TK_SCROLL_PAGES is returned and fractionPtr or stepsPtr is filled in; otherwise TK_SCROLL_ERROR is returned and an error message is left in the interpreter's result.
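For illustration, a sketch of how a widget command might dispatch on the result; the helper function and its unit/page/total parameters are hypothetical:
#include <tk.h>

/* Hypothetical helper: apply an xview-style scrolling request to an offset
 * measured in pixels.  unitSize, pageSize and totalSize come from the widget. */
static int
ApplyScroll(Tcl_Interp *interp, int objc, Tcl_Obj *const objv[],
            int *offsetPtr, int unitSize, int pageSize, int totalSize)
{
    double fraction;
    int count;

    switch (Tk_GetScrollInfoObj(interp, objc, objv, &fraction, &count)) {
    case TK_SCROLL_MOVETO:
        *offsetPtr = (int) (fraction * totalSize);
        break;
    case TK_SCROLL_UNITS:
        *offsetPtr += count * unitSize;      /* count may be negative */
        break;
    case TK_SCROLL_PAGES:
        *offsetPtr += count * pageSize;
        break;
    case TK_SCROLL_ERROR:
        return TCL_ERROR;                    /* message already in interp result */
    }
    return TCL_OK;
}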
Keywords
parse, scrollbar, scrolling command, xview, yview
Referenced By
Tk_GetScrollInfo(3) is an alias of Tk_GetScrollInfoObj(3).
|
https://www.mankier.com/3/Tk_GetScrollInfoObj
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
A class that encapsulates a newick string description of a tree and metadata about the tree. More...
#include <nxstreesblock.h>
A class that encapsulates a newick string description of a tree and metadata about the tree.
The NxsTreesBlock stores the trees as NxsFullTreeDescription objects that it creates during its parse and validation of a tree string. By default, NCL will "process" each tree -- converting the taxon labels to numbers for the taxa (the number will be 1 + the taxon index). During this processing, the trees block detects things about the tree such as whether there are branch lengths on the tree, whether there are polytomies...
This data about the tree is then stored in a NxsFullTreeDescription so that the client code can access some information about a tree before it parses the newick string.
If you do not want to parse the newick string yourself, you can construct a NxsSimpleTree object from a NxsFullTreeDescription object if the NxsFullTreeDescription is "processed"
If the NxsTreesBlock is configured NOT to process trees (see NxsTreesBlock::SetProcessAllTreesDuringParse()), the description is stored unprocessed and this extra information will not be available.
Definition at line 383 of file nxstreesblock.h.
|
http://phylo.bio.ku.edu/ncldocs/v2.1/funcdocs/classNxsFullTreeDescription.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
To turn C++ code into a runnable program, you need to compile and
link the code. Compiling the code puts it together into things
called libraries (lib files). Linking the code uses the information
in these libraries to build a program.
The program is the ultimate goal of the compile-and-link stage.
The programs are usually called executables, binaries,
or applications. BaBar application names usually end in "App".
For example, when you compile and link the code in the BetaMiniUser
package, it produces an application called BetaMiniApp.
In the last section, you made some changes to C++ code in the
BetaMiniUser package. But in order for these changes to be implemented,
you have to recompile and relink your code to make a new BetaMiniApp
that includes your changes.
This section begins with an introduction to gmake, BaBar's
compile-and-link utility. Then you will compile and link your
code from Example 2.
The section also includes suggestions for how to deal with
compile-and-link problems, and an optional, more detailed discussion
of gmake's GNUmakefiles.
BaBar's compile-and-link utility is called gmake.
The basic gmake command is:
> gmake <target>
This tells gmake to "make" or "build" the target.
Different targets correspond to different gmake actions.
Some important targets include lib to
compile code, and bin to link code.
The available targets and the instructions for building them
are defined in a file called GNUmakefile.
Whenever you issue a gmake command, gmake looks in the
current directory for a GNUmakefile. Then it follows the
instructions for that target.
There is a GNUmakefile in every release, including the test release you
created. GNUmakefiles are also included in any package that you check
out. The GNUmakefile of the release is the master GNUmakefile.
This is
why you usually issue gmake commands from the test release
directory. Commands issued from a package in a release will only
have access to targets defined in the package's GNUmakefile.
For general use of BaBar software, you will use gmake commands
often, but you will probably not need to write or modify a GNUmakefile.
Even users writing new analysis modules rarely need to make significant
modifications to a GNUmakefile.
As it builds, gmake also generates dependency files.
These files record the dependencies among your source
code files. Their names have the same root as the source
file and a '.d' suffix. In BaBar software these files
are stored in the tmp directory of your release.
Like the lib and bin directories, the tmp directory has a subdirectory
for each architecture that you compile code on.
The command to compile your code is gmake lib.
Compiling a C++ file converts it into an object module.
Once object modules have been created they need to be linked together
to build the executable. The command to link your code is gmake bin.
The new executables are placed in your bin/$BFARCH directory.
If you list this directory, you will see a '*' after the names of most of
the files, indicating that they are executables.
A related useful target is cleanarch. The command gmake cleanarch
is like gmake clean, but it removes only library, binary and temporary files saved under
the current architecture (i.e. the architecture set when you most
recently issued an srtpath command in the current session).
It leaves the libraries and/or binaries built with other architectures
intact.
gmake lib
gmake BetaMiniUser.bin
You will notice that the link command, gmake BetaMiniUser.bin,
is a bit different from the gmake bin command that you used before.
In general, commands of the form "gmake package.target" perform
the task "gmake target" on the files in "package" only.
The command "gmake BetaMiniUser.bin" links only the code in
BetaMiniUser. This is useful if you have a lot of packages
checked out, because instead of making a bunch of executables
that you don't need, you make only the BetaMiniUser executables.
Now it is time to compile and link your new code from
Example 2
of the Editing Code section.
First, make sure you have done:
ana42> srtpath <enter> <enter>
ana42> cond22boot
If you look in your lib and bin directories, you will
see your old lib and bin files from the
Quicktour:
ana42> ls -l lib/$BFARCH
total 2520
-rw-r--r-- 1 penguin br 2577960 Apr 20 21:17 libBetaMiniUser.a
drwxr-xr-x 5 penguin br 2048 Apr 20 21:05 templates
ana42> ls -l bin/$BFARCH/
total 69009
-rwxr-xr-x 1 penguin br 70663891 Apr 20 21:21 BetaMiniApp
-rw-r--r-- 1 penguin br 72 Apr 20 21:23 Index
Let's clean out these old files before we begin, just to be on the safe side:
ana42> gmake clean
GNU Make version 3.79.1,
Build OPTIONS = Linux24SL3_i386_gcc323-Debug-native-Objy-Optimize-Fastbuild-Ldlink2-SkipSlaclog-Static-Lstatic
Linux yakut03 2.4.21-47.ELsmp #1 SMP Thu Jul 20 10:30:12 CDT 2006 i686 athlon i386 GNU/Linux [uname -a]
-> clean:
-> cleanarch: Linux24SL3_i386_gcc323
'tmp/Linux24SL3_i386_gcc323' is soft link to /afs/slac.stanford.edu/g/babar/build/p/penguin/ana42/tmp/Linux24SL3_i386_gcc323
'shtmp/Linux24SL3_i386_gcc323' is soft link to /afs/slac.stanford.edu/g/babar/build/p/penguin/ana42/shtmp/Linux24SL3_i386_gcc323
rm -f -r
mkdir
-> installdirs:
The gmake messages are actually quite useful - gmake tells you what it is doing.
You can see from the above message that gmake clean removes your old xxx/$BFARCH
directories, and then does a "gmake installdirs" to put back new, empty ones.
If you check your lib and bin directories again, you will
find that they are now (almost) empty:
ana42> ls -l lib/$BFARCH
total 2
drwxr-xr-x 5 penguin br 2048 Apr 21 20:44 templates
ana42> ls -l bin/$BFARCH
total 0
ana42>
Now you can compile and link your code. Compile and/or link jobs
should always be sent to the bldrecoq
queue. From your test release issue
ana42> rm all.log
ana42> bsub -q bldrecoq -o all.log gmake all
A feature of the batch system (bsub part of the command)
is that log files are NOT overwritten. Instead, if you
do not remove your old all.log file, then the output for the current
job will be appended to the bottom of the old log file.
To avoid that, you either have to delete the old log file, or choose
a new name for your new log file. That is why you removed the
old log file above.
The gmake all command compiles and links the code.
The first thing to do after compiling and linking is to
check your bin directory to make sure that your binary
was produced and that it is brand new:
ls -l bin/$BFARCH
total 69014
-rwxr-xr-x 1 penguin br 70669051 Apr 21 20:56 BetaMiniApp
-rw-r--r-- 1 penguin br 72 Apr 21 20:57 Index
If you make a mistake when you edit BaBar code, then you
will get error messages on your terminal or in your log file.
There are two types of problems that you could encounter: compile-and-link errors and run-time errors.
This section will focus on the first type of error: compile
and link errors. Later, the
debugging section
will teach you how to deal with run-time errors.
You may not have a successful compilation the first time.
The more code you edit, the more likely you are to make a
small mistake that could cause the compile or link process
to fail.
The first thing you should do after every compile or
link is check your bin directory to make sure your binary was
produced and that it is brand new. If it was
not produced, then you need to look in your log
file to find out why gmake failed.
As an example, suppose you accidentally removed the line
#include "BetaMiniUser/QExample.hh"
from QExample.cc and then submitted your compile-and-link job again. When this is done, you check the bin directory:
ana42> ls -l bin/$BFARCH
total 1
-rw-r--r-- 1 penguin br 72 May 15 04:40 Index
You investigate the log file for further clues.
Here is the log file: all.log with errors.
Note that the target that gmake is working on is indicated by the little
arrows in the log file, "->BetaMiniUser.lib" and "->BetaMiniUser.bin".)
Scrolling down the log file, you find several gmake error messages.
There's one in the BetaMiniUser.lib stage:
gmake[3]: *** [/afs/slac.stanford.edu/u/br/penguin/ana42/lib/Linux24SL3_i386_gcc323/l
ibBetaMiniUser.a(QExample.o)] Error 1
gmake[2]: *** [BetaMiniUser.lib] Error 2
gmake[4]: *** [/afs/slac.stanford.edu/u/br/penguin/ana42/lib/Linux24SL3_i386_gcc323/l
ibBetaMiniUser.a(QExample.o)] Error 1
Whenever you see those stars (***) and the word Error,
that means you are in trouble.
Now look at the lines just above the ***/Error line to find out what was the last
thing gmake did before it failed. Right before the lib-stage error, the message is:
Compiling QExample.cc [libBetaMiniUser.a] [cc-1]
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:25: syntax
error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:37: syntax
error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:51: syntax
error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:57: warning: ISO
C++ forbids declaration of `_numTrkHisto' with no type
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:57: `manager
' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:58: warning: ISO
C++ forbids declaration of `_pHisto' with no type
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:58: `manager
' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:60: syntax
error before `return'
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:66: syntax
error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:75: syntax
error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:83: syntax
error before `->' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:86: `trkList
' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:88: syntax
error before `while'
(In this case, the error messages for the lib
and bin stages are identical. This does not always happen.
In any case you should look at the errors from the earliest target
first, because later targets depend on earlier targets. Fixing
the lib-stage error might fix the bin-stage error automatically.)
One message that is repeated over and over again is "syntax error before '::' token."
What does that mean? You're not sure. So you decide to look at the first error message
that occurs, since that is the first place that gmake ran into trouble:
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:25: syntax
error before `::' token
Line 25 of QExample.cc turns out to be the first line of the QExample constructor:
QExample::QExample( const char* const theName,
The job fails at the beginning of the QExample constructor. Not only that,
but according to the error message, it fails before the '::' token.
Now, there is only one thing in front of the '::' token: the word
QExample. But QExample is a perfectly valid class - why should gmake think
that it is a syntax error? It seems that for some reason, gmake does not
recognize the QExample class. And if gmake does not recognize a class,
then that is probably because it has not read the header file for that class.
Sure enough, when you check QExample.cc, you find that you have forgotten
to include QExample.hh. This would explain the other error messages as well:
every time gmake sees "QExample", it is confused because it does not know what
QExample is. Furthermore, it also does not recognize _numTrkHisto, because
this object was defined in QExample.hh as a private member object.
You put back the #include statement, and send your "gmake all" job again,
ana42> rm all.log
ana42> bsub -q bldrecoq -o all.log gmake all
This time, everything worked fine:
ana42> ls -l lib/$BFARCH
total 2408
-rw-r--r-- 1 penguin br 2462826 May 15 04:43 libBetaMiniUser.a
drwxr-xr-x 5 penguin br 2048 May 15 04:36 templates
ana42> ls -l bin/$BFARCH
total 48073
-rwxr-xr-x 1 penguin br 49225543 May 15 04:44 BetaMiniApp
-rw-r--r-- 1 penguin br 72 May 15 04:44 Index
There is a brand new BetaMiniApp! And you can tell from the
time that BetaMiniUser's lib file has also been recompiled.
Normally, gmake is pretty good at figuring out what needs to
be recompiled and what doesn't. But to be safe, you can issue
a "gmake clean" to clean out all the old lib and bin files
before you recompile.
The example above of course shows you only one example of the
many, many possible error messages that can appear in your log file.
Other typical compile errors include a missing semicolon or header file, or forgetting to set up your environment with srtpath before building.
Compile/link error messages can be confusing. It is not always obvious
what the problem is. In the end, all you can do is try your best to decipher
the log file. If you can't figure it out, then your best bet is to ask
someone who has had more practice with gmake errors, or, if you are all
alone, submit your problem to the prelimbugs Hypernews forum. When you submit your
question, be sure to indicate exactly what you were doing and what error messages you got.
For example, if you could not figure out what was wrong in the
above example, you would first explain your problem, and then
provide the relevant details: the release you used, the command you submitted, and the error messages from the log file.
You will get an answer much faster if you provide complete information.
If your compile-and-link problem occurs in one of the Workbook examples,
then you can email me, the
Workbook editor, and I'll see what I can do. Perhaps you have
discovered a bug in the Workbook! (But please try to solve it yourself,
first.)
Now you have learned most of what you need to know to
use gmake to compile and link. But you may find it helpful
to learn a bit more about how GNUmakefiles work.
One of the virtues of GNUmakefiles is their ability to
handle the many dependencies among the many, many
lines of C++ code that must be put together to make an
executable. This (optional) section revisits the GNUmakefile,
with a focus on how these dependencies are managed.
An executable is built from the code defined in multiple files. When
changes are made in one or more of these files the code in the
modified files and all dependent files will need to be
re-compiled and re-linked. Compile and link commands often involve
many options, such as directory search paths for included files, names
for compiled files, warning messages, debugging information and so
forth. Compile and link commands quickly become quite lengthy, and in
large software projects the dependencies amongst files is usually
rather involved. gmake's job is to manage this complicated process.
GNUmakefiles define a set of instructions for how
to compile and assemble code segments into the executable. The
file also defines many variables such as search paths, which compiler
to use, etc. The instructions specify the dependencies of the code
segments so that the gmake utility can recognize only
those components that need to be reprocessed. A well-written makefile
can reduce unnecessary compilation and save much time.
The gmake facility looks in the current directory for a file named
GNUmakefile, makefile, or Makefile (in that order). The first one of
these files found is used to define the dependency/build rules for the given
target.
The general structure of an instruction in the GNUmakefile consists of
a target, a dependency, and a rule.
Associated with each target is a set of dependent files. When any of the
dependent files have been modified or if the target file
does not exist, the target will be rebuilt (or built)
in accordance with the rule. For example, a line in a
GNUmakefile might look like the sketch below.
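The following is a generic illustration of the target/dependency/rule structure, not a rule copied from an actual BaBar GNUmakefile; the file names reuse the QExample example from earlier in this section. (The command lines under each target must be indented with a tab character.)
BetaMiniApp : QExample.o AppUserBuildBase.o
	$(CXX) -o BetaMiniApp QExample.o AppUserBuildBase.o $(LDFLAGS)

QExample.o : QExample.cc QExample.hh
	$(CXX) $(CXXFLAGS) -c QExample.cc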
Some targets are "phony": they do not correspond to a file that gets built. An example of a phony target is workdir's setup target.
The command gmake workdir.setup does not compile or link code.
Instead, it creates some links in workdir: PARENT, RELEASE, bin, and shlib.
gmake decides whether a target is out of date by comparing file modification times, so to force a file to be recompiled, you may need to
manually "touch" one of the relevant files. An example of
such a command is:
touch BetaMiniUser/AppUserBuildBase.cc
Last modified: January 2008
|
http://www.slac.stanford.edu/BFROOT/www/doc/workbook/compile/compile.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Processing Atom 1.0
September 14, 2005
In the fast-moving world of weblogs and Web-based marketing, the approval of the Atom Format 1.0 by the Internet Engineering Task Force (IETF) as a Proposed Standard is a significant and lasting development. Atom is a very carefully designed format for syndicating the contents of weblogs as they are updated, the usual territory of RSS, but its possible uses are far more general, as illustrated in the description on the home page.
Atom is a very important development in the XML and Web world. Atom technology is already deployed in many areas (though not all up-to-date with Atom 1.0), and parsing and processing Atom is quickly becoming an important task for web developers. In this article, I will show several approaches to reading Atom 1.0 in Python. All the code is designed to work with Python 2.3, or more recent, and is tested with Python 2.4.1.
The example I'll be using of an Atom document is a modified version of the introduction to Atom on the home page, reproduced here in listing 1.
Listing 1 (atomexample.xml). Atom Format 1.0 Example
<?xml version="1.0" encoding="utf-8"?>
<feed xml:
  <id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>
  <title>Example Feed</title>
  <updated>2005-09-02T18:30:02Z</updated>
  <link href=""/>
  <author>
    <name>John Doe</name>
  </author>
  <entry>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <title>Atom-Powered Robots Run Amok</title>
    <link href=""/>
    <updated>2005-09-02T18:30:02Z</updated>
    <summary>Some text.</summary>
  </entry>
  <entry>
    <id>urn:uuid:8eb00d01-d632-40d4-8861-f2ed613f2c30</id>
    <title type="xhtml">
      <xh:div>
        The quick <xh:del>black</xh:del><xh:ins>brown</xh:ins> fox...
      </xh:div>
    </title>
    <link href=""/>
    <updated>2005-09-01T12:15:00Z</updated>
    <summary>jumps over the lazy dog</summary>
  </entry>
</feed>
Using MiniDOM
If you want to process Atom with no additional dependencies besides Python, you can do so using MiniDOM. MiniDOM isn't the most efficient way to parse XML, but Atom files tend to be small, and rarely get to the megabyte range that bogs down MiniDOM. If by some chance you are dealing with very large Atom files, you can use PullDOM, which works well with Atom because of the way the format can be processed in bite-sized chunks. MiniDOM isn't the most convenient API available, either, but it is the most convenient approach in the Python standard library. Listing 2 is MiniDOM code to produce an outline of an atom feed, containing much of the information you would use if you were syndicating the feed.
Listing 2. MiniDOM Code to Print a Text Outline of an Atom Feed
from xml.dom import minidom
from xml.dom import EMPTY_NAMESPACE

ATOM_NS = ''

doc = minidom.parse('atomexample.xml')
#Ensure that all text nodes can be simply retrieved
doc.normalize()

def get_text_from_construct(element):
    '''
    Return the content of an Atom element declared with the
    atomTextConstruct pattern.  Handle both plain text and XHTML
    forms.  Return a UTF-8 encoded string.
    '''
    if element.getAttributeNS(EMPTY_NAMESPACE, u'type') == u'xhtml':
        #Grab the XML serialization of each child
        childtext = [ c.toxml('utf-8') for c in element.childNodes ]
        #And stitch it together
        content = ''.join(childtext).strip()
        return content
    else:
        return element.firstChild.data.encode('utf-8')

#process overall feed:
#First title element in doc order is the feed title
feedtitle = doc.getElementsByTagNameNS(ATOM_NS, u'title')[0]
#Field titles are atom text constructs: no markup
#So just print the text node content
print 'Feed title:', get_text_from_construct(feedtitle)
feedlink = doc.getElementsByTagNameNS(ATOM_NS, u'link')[0]
print 'Feed link:', feedlink.getAttributeNS(EMPTY_NAMESPACE, u'href')
print
print 'Entries:'
for entry in doc.getElementsByTagNameNS(ATOM_NS, u'entry'):
    #First title element in doc order within the entry is the title
    entrytitle = entry.getElementsByTagNameNS(ATOM_NS, u'title')[0]
    entrylink = entry.getElementsByTagNameNS(ATOM_NS, u'link')[0]
    etitletext = get_text_from_construct(entrytitle)
    elinktext = entrylink.getAttributeNS(EMPTY_NAMESPACE, u'href')
    print etitletext, '(', elinktext, ')'
The code to access XML is typical of DOM and, as such, it's rather clumsy when compared
to
much Python code. The normalization step near the beginning of the listing helps eliminate
even more complexity when dealing with text content. Many Atom elements are defined
using
the
atomTextConstruct pattern, which can be plain text, with no embedded
markup. (HTML is allowed, if escaped, and if you flag this case in the
type
attribute.) Such elements can also contain well-formed
XHTML fragments wrapped
in a
div. The
get_text_from_construct function handles both cases
transparently, and so it is generally a utility routine for extracting content from
compliant Atom elements. In this listing, I use it to access the contents of the
title element, which is in XHTML form in one of the entries in listing 1. Try
running listing 2 and you should get the following output.
$ python listing2.py
Feed title: Example Feed
Feed link:
Entries:
Atom-Powered Robots Run Amok ( )
<xh:div>
 The quick <xh:del>black</xh:del><xh:ins>brown</xh:ins> fox...
</xh:div> ( )
Handling Dates
Handling Atom dates in Python is a topic that deserves closer attention. Atom dates
are
specified in the
atomDateConstruct pattern, which requires RFC 3339 (a profile of ISO 8601) date-time values.
The examples given are:
2003-12-13T18:30:02Z
2003-12-13T18:30:02.25Z
2003-12-13T18:30:02+01:00
2003-12-13T18:30:02.25+01:00
You may be surprised to find that Python is rather limited in the built-in means it
provides for parsing such dates. There are good reasons for this: many aspects of
date
parsing are very hard and can depend a lot on application-specific needs. Python 2.3
introduced the handy
datetime data type, which is the recommended way to store
and exchange dates, but you have to do the parsing into date-time yourself, and handle
the
complex task of time-zone processing, as well. Or you have to use a third-party routine
that
does this for you. I recommend that you complement Python's built-in facilities with
Gustavo
Niemeyer's DateUtil. (Unfortunately
that link uses HTTPS with an expired certificate, so you may have to click through
a bunch
of warnings, but it's worth it.) In my case I downloaded the 1.0 tar.bz2 and installed
using
python setup.py install.
Using DateUtil, the following snippet reads the date from an Atom element and parses it into a datetime object:
from dateutil.parser import parse

feedupdated = doc.getElementsByTagNameNS(ATOM_NS, u'updated')[0]
dt = parse(feedupdated.firstChild.data)
And as an example of how you can work with this date-time object, you can use the following code to report how long ago an Atom feed was updated:
from datetime import datetime
from dateutil.tz import tzlocal

#howlongago is a timedelta object from present time to target time
howlongago = dt - datetime.now(tzlocal())
print "Time since feed was updated:", abs(howlongago)
Using Amara Bindery
Because the DOM code above is so clumsy, I shall present similar code using a friendlier Python library, Amara Bindery, which I covered in an earlier article, Introducing the Amara XML Toolkit. Listing 3 does the same thing as listing 2.
Listing 3. Amara Bindery Code to Print a Text Outline of an Atom Feed
from amara import binderytools

doc = binderytools.bind_file('atomexample.xml')

def get_text_from_construct(element):
    '''
    Return the content of an Atom element declared with the
    atomTextConstruct pattern.  Handle both plain text and XHTML
    forms.  Return a UTF-8 encoded string.
    '''
    if hasattr(element, 'type') and element.type == u'xhtml':
        #Grab the XML serialization of each child
        childtext = [ (not isinstance(c, unicode) and c.xml(encoding=u'utf-8') or c)
                      for c in element.xml_children ]
        #And stitch it together
        content = u''.join(childtext).strip().encode('utf-8')
        return content
    else:
        return unicode(element).encode('utf-8')

print 'Feed title:', get_text_from_construct(doc.feed.title)
print 'Feed link:', doc.feed.link
print
print 'Entries:'
for entry in doc.feed.entry:
    etitletext = get_text_from_construct(entry.title)
    print etitletext, '(', entry.link.href, ')'
Using Feedparser (Atom Processing for the Desperate Hacker)
A third approach to reading Atom is to let someone else handle the parsing and just
deal
with the resulting data structure. This might be especially convenient if you have
to deal
with broken feeds (and fixing the broken feeds is not an option). It does usually
rob you of
some flexibility of interpretation of the data, although a really good library would
be
flexible enough for most users. Probably the best option is Mark Pilgrim's Universal Feed Parser, which parses almost every
flavor of RSS and Atom. In my case, I downloaded the 3.3 zip package and installed
using
python setup.py install. Listing 4 is code similar in function to that of
listings 2 and 3.
Listing 4. Universal Feed Parser Code to Print a Text Outline of an Atom Feed
import feedparser

#A hack until Feed parser supports Atom 1.0 out of the box
#(Feedparser 3.3 does not)
from feedparser import _FeedParserMixin
_FeedParserMixin.namespaces[""] = ""

feed_data = feedparser.parse('atomexample.xml')
channel, entries = feed_data.feed, feed_data.entries

print 'Feed title:', channel['title']
print 'Feed link:', channel['link']
print
print 'Entries:'
for entry in entries:
    print entry['title'], '(', entry['link'], ')'
Overall the code is shorter because we no longer have to worry about the different forms of Atom text construct. The library takes care of that for us. Of course I'm pretty leery of how it does so, especially the fact that it strips Namespaces in XHTML content. This is an example of the flexibility you lose when using a generic parser, especially one designed to be as liberal as Universal Feed Parser. That's a trade-off from the obvious gain in simplicity. Notice the hack near the top of listing 4. These two lines should be temporary, and no longer needed, once Mark Pilgrim updates his package to support Atom 1.0.
Wrapping up, on a Grand Scale
Atom 1.0 is pretty easy to parse and process. I may have serious trouble with some of the design decisions for the format, but I do applaud its overall cleanliness. I've presented several approaches to processing Atom in this article. If I needed to reliably process feeds retrieved from arbitrary locations on the Web, I would definitely go for Universal Feed Parser. Mark Pilgrim has dunked himself into the rancid mess of broken Web feeds so you don't have to. In a project where I controlled the environment, and I could fix broken feeds, I would parse them myself, for the greater flexibility. One trick I've used in the past is to use Universal Feed Parser as a proxy tool to convert arbitrary feeds to a single, valid format (RSS 1.0 in my past experience), so that I could use XML (or in that case RDF) tools to parse the feeds directly.
And with this month's exploration, the Python-XML column has come to an end. After discussions with my editor, I'll replace this column with one with a broader focus. It will cover the intersection of Agile Languages and Web 2.0 technologies. The primary language focus will still be Python, but there will sometimes be coverage of other languages such as Ruby and ECMAScript. I think many of the topics will continue to be of interest to readers of the present column. I look forward to continuing my relationship with the XML.com audience.
This brings me to the last hurrah of the monthly round up of Python-XML community news. Firstly, given the topic of this article, I wanted to mention Sylvain Hellegouarch's atomixlib, a module providing a simple API for generation of Atom 1.0, based on Amara Bindery. See his announcement. And relevant to recent articles in this column, Andrew Kuchling wrote up a Python Unicode HOWTO.
Julien Anguenot writes in XML Schema Support on Zope3:
I added a demo package to illustrate the zope3/xml schema integration. [Download the code here]
The goal of the demo is to get a new content object registered within Zope3, with an "add "and "edit" form driven by an XML Schema definition.
The article goes on to show a bunch of Python and XML code to work a sample W3C XML schema file into a Zope component.
Mark Nottingham announced sparta.py 0.8, a simple API for RDF.
Sparta is databinding from RDF to Python objects.
See the announcement.
Guido Wesdorp announced Templess 0.1.
Templess is an XML templating library for Python, which is very compact and simple, fast, and has a strict separation.
http://www.xml.com/pub/a/2005/09/14/processing-atom-in-python.html
Difference between revisions of "Chatlog 2013-02-25"
See original RRSAgent log or preview nicely formatted version.
14:58:37 <RRSAgent> RRSAgent has joined #ldp 14:58:37 <RRSAgent> logging to 14:58:39 <trackbot> RRSAgent, make logs public 14:58:39 <Zakim> Zakim has joined #ldp 14:58:41 <trackbot> Zakim, this will be LDP 14:58:41 <Zakim> ok, trackbot, I see SW_LDP()10:00AM already started 14:58:42 <trackbot> Meeting: Linked Data Platform (LDP) Working Group Teleconference 14:58:42 <trackbot> Date: 25 February 2013 14:59:03 <Zakim> +JohnArwe 14:59:26 <Zakim> +cygri 14:59:27 <Zakim> + +1.214.537.aaaa 14:59:33 <Zakim> +SteveBattle 15:00:00 <Zakim> +Arnaud 15:00:21 <Arnaud> zakim, who's here? 15:00:22 <Zakim> On the phone I see [IPcaller], JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud 15:00:22 :00:23 <Zakim> +??P24 15:00:29 <dret> zakim, IPcaller is me 15:00:30 <Zakim> +dret; got it 15:00:31 <Zakim> +[OpenLink] 15:00:35 <cody> (1 214 537.aaaa is Cody, who hasn't learned to change Zakim's prompt from phone # to name) 15:00:38 <svillata> Zakim, ??P24 is me 15:00:38 <Zakim> +svillata; got it 15:00:43 <TallTed> Zakim, [OpenLink] is OpenLink_Software 15:00:43 <Zakim> +OpenLink_Software; got it 15:00:47 <TallTed> Zakim, OpenLink_Software is temporarily me 15:00:47 <Zakim> +TallTed; got it 15:00:49 <TallTed> Zakim, mute me 15:00:49 <Zakim> TallTed should now be muted 15:01:21 <Zakim> +bblfish 15:01:35 <Kalpa> Kalpa has joined #ldp 15:01:43 <bblfish> hi, in train from Paris to Amsterdam 15:02:23 <Kalpa> Kalpa has left #ldp 15:02:23 <Arnaud> zakim, who's here? 15:02:24 <Zakim> On the phone I see dret, JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud, svillata, TallTed (muted), bblfish 15:02:25 :02:30 <JohnArwe> zakim, aaaa is cody 15:02:30 <Zakim> +cody; got it 15:02:35 <bblfish> afternoon! 15:02:59 <Arnaud> chair: Arnaud 15:03:07 <Arnaud> scribe: svillata 15:03:08 <svillata> scribe: svillata 15:03:15 <bblfish> svillata: you can use this: 15:03:16 <Kalpa> Kalpa has joined #ldp 15:03:28 <svillata> thanks bblfish 15:03:34 <nmihindu> nmihindu has joined #ldp 15:03:37 <dret> +1 15:03:46 <svillata> Topic: Approving minutes Feb 18 15:03:54 <svillata> Resolved: Minutes of Feb 18 approved 15:04:05 <SteveS> SteveS has joined #ldp 15:04:11 <Zakim> +[IBM] 15:04:12 <Kalpa> zakim, who is on the phone 15:04:12 <Zakim> I don't understand 'who is on the phone', Kalpa 15:04:27 <SteveS> zakim, [IBM] is me 15:04:27 <Zakim> +SteveS; got it 15:04:28 <JohnArwe> zakim, who is on the phone? 15:04:28 <Zakim> On the phone I see dret, JohnArwe, cygri, cody, SteveBattle, Arnaud, svillata, TallTed (muted), bblfish, SteveS 15:04:32 <svillata> Arnaud: F2F is coming up 15:04:37 <stevebattle> I'll be travelling 15:05:08 <stevebattle> ..on the monday before the F2F 15:05:09 <svillata> Arnaud: indicate your participation to F2F meeting <svillata> Topic: Tracking of issues and actions 15:05:53 <svillata> subtopic: Pending review ISSUE-47 15:05:56 <bblfish> Issue-47? 15:05:56 <trackbot> ISSUE-47 -- publish ontology -- pending review 15:05:56 <trackbot> 15:06:07 <Zakim> +??P31 15:06:33 <krp> krp has joined #ldp 15:06:44 <nmihindu> Zakim, ??P31 is me 15:06:44 <Zakim> +nmihindu; got it 15:06:48 <svillata> Arnaud: do we want to close ISSUE-47? 15:06:53 <stevebattle> q+ 15:06:55 <Zakim> -bblfish 15:07:00 <svillata> q? 15:07:13 <bblfish> makes sense to close it if the actions are taken. ( I can't hear much breaks up a lot in the train ) 15:07:15 <Zakim> +roger 15:07:31 <Zakim> +Sandro 15:07:59 <Arnaud> ack stevebattle 15:08:11 <cody> Should it not have a date pattern in the URL like most W3C published schemas? How to handle new versions? 
<svillata> stevebattle: afraid publishing the ontology as linked data with hyperlinked classnames etc is overkilling 15:08:23 <JohnArwe> arnaud: we now have a turtle document in the cvs ... that seems like linked data "enough" 15:08:53 <JohnArwe> ...expect editors to update ontology based on future resolutions of issues 15:09:07 <TallTed> cody - those date patterns are associated with the start of the WGs, not the schemas 15:08:21 <svillata> Resolved: Close ISSUE-47 15:08:21 <trackbot> Closed ISSUE-47 publish ontology. 15:09:13 <svillata> Topic: LDP specification and publishing a second draft 15:09:39 <cody> thx 15:09:43 <roger> roger has joined #ldp 15:10:07 <svillata> Arnaud: we have to discuss what we think we need to do for publishing the second draft 15:10:21 <TallTed> TallTed has changed the topic to: Linked Data Platform WG -- -- current agenda: 15:10:30 <JohnArwe> q+ 15:10:31 <svillata> ... what do the editors need to publish a second draft? 15:10:41 <svillata> q? 15:11:07 <Kalpa> Kalpa has joined #ldp 15:11:24 <svillata> SteveS: pretty good shape wrt the resolved issues 15:11:39 <Zakim> -nmihindu 15:11:41 <Zakim> +??P29 15:11:56 <krp> zakim, ??P29 is me 15:11:56 <Zakim> +krp; got it 15:12:07 <Zakim> +??P31 15:12:17 <svillata> Arnaud: how are we doing with regard to linking all the issues from the spec? <svillata> steves: as of last week the spec was up to date so that shouldn't be a problem 15:13:19 <dret> dret has joined #LDP 15:13:46 <Zakim> -??P31 15:13:50 <bblfish> concerning draft is the relative urls resolved? 15:14:09 <svillata> Arnaud: would be good to have a week to review the spec? 15:14:13 <SteveS> bblfish: it is an open action, minor update we can do 15:14:16 <Zakim> +??P31 15:14:38 <stevebattle> I'm happy to be transparent and publish internally and externally simultaneously. 15:14:49 <svillata> ... start review, and for March 11 decide whether to publish it 15:15:05 <Arnaud> q? 15:15:12 <Arnaud> ack john 15:15:13 <SteveS> q+ 15:15:17 <nmihindu> Zakim, ??P31 is me 15:15:17 <Zakim> +nmihindu; got it 15:15:20 <Arnaud> ack steve 15:17:17 <svillata> Arnaud: maybe next week spec will be in a good shape, and we can decide then whether to publish it #15:18:55 <svillata> which issue are we discussing? #15:19:34 <svillata> ok, thanks 15:19:44 <stevebattle> q+ <svillata> Topic: Open Issues 15:20:29 <svillata> subtopic: Composition vs Aggregation ontology (related to ISSUE-34) #15:20:59 <Zakim> +Sandro.a 15:21:03 <svillata> JohnArwe: the ontology itself is subject to change 15:21:06 <Zakim> -Sandro 15:21:11 <SteveS> Think this is more narrowly issue-32 and somewhat a part of it 15:21:18 <Arnaud> ack stevebattle 15:21:59 <svillata> stevebattle: issue-34 brings to an ontology about aggregation and composition 15:22:30 <Zakim> -nmihindu 15:23:00 <Zakim> +??P28 15:23:21 <JohnArwe> ashok's email: item 2 15:23:34 <nmihindu> Zakim, ??P28 is me 15:23:34 <Zakim> +nmihindu; got it 15:23:49 <svillata> Arnaud: proposal now is to have two subclasses for composition and aggregation 15:24:46 <svillata> ... container is a useful notion independently from aggregation/composition 15:25:03 <SteveS> q+ 15:25:28 <svillata> ... we are discussing how many classes to define, which properties 15:25:29 <Arnaud> ack steves 15:26:19 <roger> q+ 15:26:21 <stevebattle> q+ 15:26:27 <Arnaud> ack roger 15:26:49 <svillata> ISSUE-34? 
15:26:49 <trackbot> ISSUE-34 -- Adding and removing arcs in weak aggregation -- closed 15:26:49 <trackbot> 15:27:07 <Arnaud> ack stevebattle 15:27:28 <svillata> stevebattle: important to make a distinction in the ontology 15:28:33 <cygri> cygri has joined #ldp 15:28:50 <Arnaud> 15:29:09 <roger> It would be good to get feedback from Richard about issue 34 (because he originally raised the issue). 15:29:16 <svillata> Arnaud: email JohnArwe sent out on Friday with a proposal 15:29:50 <JohnArwe> SteveB: as long as real behavioral difference, happy to have different classes in ontology 15:29:52 <SteveS> roger: I believe cygri opened on behalf of us at F2F1…but would be good to get feedback, not arguing that 15:30:56 <svillata> Proposed: adopting ontology proposed by JohnArwe () 15:30:57 <Zakim> +bblfish 15:31:06 <stevebattle> +1 15:31:15 <bblfish> bblfish has joined #ldp 15:31:24 <SteveS> +1 15:32:06 <stevebattle> No - they have different deletion behaviour. 15:32:21 <svillata> cygri: reading the ontology I have no idea of what the difference is 15:32:52 <JohnArwe> @cygri: the example in the email ontology is (as resolved in 34) currently the only difference between them. 15:32:52 <TallTed> I'd suggest changing :Aggregation to :aggregateContainer and :Composition to :compositeContainer 15:33:17 <stevebattle> That sounds a bit verbose to me. 15:33:21 <svillata> Arnaud: when you delete the container, different behaviors about the deletion of the resources it contains 15:33:27 <stevebattle> It's going to be used a lot 15:33:41 <TallTed> but otherwise I'm OK with the suggested change *as a start* ... I agree with cygri that the specific differences in behavior must be explicitly noted. 15:34:27 <Zakim> +??P33 15:34:52 <bblfish> back in new train 15:34:56 <svillata> cygri: having two subclasses which differ only for a sentence does not make sense, my feeling is that just using the super-class would be sufficient 15:35:19 <nmihindu> Zakim, ??P33 is me 15:35:19 <Zakim> +nmihindu; got it 15:35:33 <svillata> Arnaud: think richard is suggesting parent is aggregation and the subclass is the composition 15:35:54 <bblfish> the question I would have is what happens when something is changed from an Aggregation to a Container, especially concerning the members. 15:35:59 <svillata> cygri: members may continue to exist is not a constraint 15:36:15 <svillata> ... it doen't commit the server 15:36:20 <Arnaud> q? 15:36:22 <svillata> q? 15:36:25 <bblfish> q+ 15:36:25 <TallTed> q+ 15:36:31 <svillata> q? 15:36:34 <bblfish> please see my question above: 15:36:35 <TallTed> Zakim, unmute me 15:36:35 <Zakim> TallTed should no longer be muted 15:37:06 <svillata> Arnaud: how do we insert this aggregation concept? 15:37:17 <Arnaud> q? 15:37:19 <bblfish> please see above 15:37:23 <bblfish> the question I would have is what happens when something is changed from an Aggregation to a Container, especially concerning the members. 15:37:45 <stevebattle> q+ 15:38:08 <bblfish> ack me 15:38:20 <JohnArwe> I don't know if we'd allow a change in container behavior dynamically... new conversation? 15:38:24 <Arnaud> ack TallTed 15:38:26 <roger> that (in my opinion) is a very dodgy thing 15:38:40 <svillata> SteveS: we can open an issue and address the question of bblfish 15:39:20 <svillata> q? 
15:39:29 <bblfish> my guess is that this will only work if you add a :contains relation 15:39:48 <svillata> Arnaud: we have to make concrete proposals 15:39:50 <Arnaud> ack stevebattle 15:39:57 <JohnArwe> Ted: if (in the end) there is no behavioral difference between Container and AggregateContainer, would you like cygri want to collapse them? 15:40:12 <svillata> stevebattle: cygri's proposal appealing 15:40:23 <JohnArwe> s/Ted:/Question for Ted:/ 15:40:54 <svillata> Arnaud: changing container to something else change the spec quite a lot, John's proposal is trying to minimize the change 15:41:05 <stevebattle> In OOD, composition is not (typically) a subclass of aggregation. They're commonly subclasses of association. 15:41:16 <Arnaud> q? 15:41:20 <svillata> s/change /changes 15:42:00 <svillata> q? 15:42:18 <stevebattle> Isn't Container an abstract superclass that is useful for property definitions? #15:42:28 <Zakim> +Sandro.aa #15:42:32 <Zakim> -Sandro.a 15:42:46 <svillata> TallTed: propose to use aggregate containers and composite containers 15:43:07 <svillata> ... superclass Container 15:43:17 <sandro> q+ to ask a naive question (can't we just use URLs?) 15:43:18 <SteveS> stevebattle: agree, we can multi-type if we even wanted to say it is a ldp:Container and a ldp:Aggregation 15:43:29 <SteveS> q+ 15:43:33 <stevebattle> Yes - agreed that Aggregation and Composition are mutually exclusive classes. 15:43:35 <Arnaud> q? 15:43:44 <svillata> TallTed: proposal to change aggregation VS composition into aggregate containers/composite containers 15:43:44 <Arnaud> ack sandro 15:43:44 <Zakim> sandro, you wanted to ask a naive question (can't we just use URLs?) <svillata> sandro: after weeks of discussion we still don't seem to have a resolution, so why not instead rely on the structure of the URLs to determine whether member resources should be deleted or not? 15:44:25 <stevebattle> I proposed that at the last F2F and got voted down :) 15:44:28 <bblfish> I think it is an interesting idea 15:44:31 <Arnaud> ack steves <svillata> steves: this would go against the opacity principle 15:44:51 <bblfish> I was going to propose that urls ending in / are LDPCs 15:45:09 <Ruben> mmm, I don't like "urls ending in" 15:45:16 <Ruben> should be opaque 15:45:18 <bblfish> we spoke about this at the last F2F, but since then I have changed my mind. 15:46:02 <bblfish> Ruben, URLs are opaque as far as emantics goes, but in fact the URI spec does give / a special significance 15:46:09 <cygri> q+ 15:46:14 <bblfish> s/emantics/semantics/ 15:46:22 <Arnaud> ack cygri 15:46:44 <svillata> cygri: think one issue that was discussed at F2F1 and that led us to where we are was the idea of using the url structure to indicate composition 15:47:22 <svillata> ... can't give any special semantics to the relations to keep the implementation really simple 15:47:56 <stevebattle> q+ 15:47:59 <sandro> I see that, but I don't find that compelling, giving the simplicity provided. 15:48:09 <svillata> q? 15:48:41 <Arnaud> ack stevebattle 15:49:26 <sandro> I probably voted against stevebattle at the F2F, but now that I see how long we've spent trying to figure this out, I lean more toward simplicity. 15:49:43 <bblfish> I can make a proposal 15:49:44 <svillata> stevebattle: is it possible to re-open the issue? 
15:49:52 <sandro> q+ 15:50:02 <SteveS> q+ 15:50:14 <svillata> Arnaud: possible but better to re-open issues when new information comes 15:50:14 <bblfish> stevebattle: I have an idea on how to do this in a way that is uncontroversial 15:50:19 <Arnaud> ack sandro 15:50:19 <sandro> q- 15:50:26 <Arnaud> ack steves 15:50:28 <bblfish> ro was that Sandro 15:50:58 <stevebattle> An aggregate could generate URIs at the same level at the aggregation. 15:51:15 <sandro> sandro: I think it might be new information that this is so hard to us to figure out. 15:51:21 <stevebattle> They wouldn't be nested below the Aggregation 15:51:42 <stevebattle> ..In the URI structure 15:51:53 <JohnArwe> I think Sandro was proposing that "if the URL is structured ..., then the client Knows the behavior is delete (or not) members." 15:52:07 <SteveS> I think we are arguing over minor details of class hierarchy and not fundamental behavioral difference 15:52:09 <bblfish> sandro, we should get together on this. 15:52:19 <sandro> yes, JohnArwe 15:52:23 <SteveS> s/difference/differences/ 15:52:39 <Arnaud> proposed: use John's proposed ontology with Aggregation renamed as AggregateContainer, Composition as CompositeContainer, and better documentation 15:52:45 <svillata> Arnaud: TallTed proposal from JohnArwe proposal 15:52:50 <sandro> in fact -- I probably shouldn't be in the lead or critical path for this 15:53:07 <stevebattle> +0 (not convinced about the long names) 15:53:18 <svillata> Arnaud: how do we feel with TallTed's proposal? 15:53:20 <TallTed> +1 15:53:21 <JohnArwe> When we talk about URL structures yielding client assumptions, we'd be making it harder for any existing implementations to comply. 15:53:30 <SteveS> +0 (I go back to my +1 for JohnArwe's proposal) 15:53:40 <roger> +0 15:53:48 <sandro> +0 15:53:51 <JohnArwe> +1 (rename things at will - I hate arguing over them, you'll win all the time ) 15:53:59 <cody> +0 15:54:01 <svillata> +1 15:54:08 <cygri> -0 not convinced that aggregate is needed. ted's names are an improvement 15:54:23 <nmihindu> +0 15:54:36 <stevebattle> vote on the original proposal? 15:54:39 <svillata> Arnaud: we don't seem to have consensus 15:54:56 <dret> +/-0 <svillata> TallTed: I think we do, nobody has voted against it 15:54:59 <Zakim> -bblfish 15:55:17 <svillata> Arnaud: JohnArwe proposal? 15:55:49 <stevebattle> +1 (use namespaces for disambiguation) 15:56:52 <stevebattle> I prefer the shorter local names - we don't need to append 'Container' 15:56:56 <svillata> TallTed: what do you mean stevebattle as using namespaces for disambiguation? 15:57:24 <stevebattle> yez 15:57:52 <stevebattle> s/z/s/ 15:58:56 <Arnaud> resolved: Go with John's proposal amended by Ted 15:58:21 <svillata> subTopic: LDP model section 16:00:59 <svillata> Arnaud: maybe we should leave to the editors to choose among the two proposals 16:01:23 <Zakim> -cygri 16:01:23 <Kalpa> Kalpa has left #ldp 16:01:30 <stevebattle> q+ 16:01:39 <Arnaud> ack stevebattle 16:02:03 <svillata> stevebattle: the two proposals are materially the same, but I prefer Henry's proposal 16:02:22 <dret> yeah, that was just a proposal. 16:02:36 <svillata> Arnaud: do we have any text to put in the second draft of the spec? 16:02:38 <dret> no complete text yet, but i can take an action for that. 16:03:48 <SteveS> agree that editors can take the pen, using the feedback that is there now 16:03:55 <svillata> dret: we can write a complete section 16:04:12 <dret> in that case, can i have an action? 
16:04:41 <Zakim> -SteveS 16:04:41 <svillata> ACTION: dret to create complete section 16:04:41 <trackbot> Created ACTION-38 - Create complete section [on Erik Wilde - due 2013-03-04]. <svillata> Arnaud: Meeting adjourned 16:04:43 <Zakim> -roger 16:04:45 <stevebattle> Thanks, bye. 16:04:49 <dret> thanks everybody! 16:04:52 <Zakim> -cody 16:04:53 <Zakim> -TallTed 16:04:53 <Zakim> -SteveBattle 16:04:54 <Zakim> -Arnaud #16:04:54 <Zakim> -Sandro.aa 16:04:56 <Zakim> -svillata 16:04:56 <Zakim> -dret 16:04:56 <cody> One question 16:04:57 <Zakim> -krp 16:04:57 <Zakim> -JohnArwe 16:05:03 <cody> regarding the face to face coming up 16:05:17 <Ruben> Ruben has left #ldp 16:05:20 <JohnArwe> what's your q cody? #16:05:31 <Zakim> -nmihindu.a 16:05:34 <cody> The line opens at 2:00 AM - 12:00 PM Boston time. 16:05:44 <cody> Is this because of overseas participation? 16:05:55 <cody> And is that the actual meeting start/end time? 16:06:02 <JohnArwe> probably - and probably copied from F2F1 16:06:34 <JohnArwe> ...when it was in France. Usually they run 8 (or later) to 5 (or later) local time. 16:07:19 <cody> Just seems like a face to face hosted in the U.S. would require the overseas participants to join at the odd times. 16:07:26 <JohnArwe> Eric P one of the staff contacts made the arrangements - suggest email the list so he'll see your q and respond. 16:07:59 <cody> OK. Thx. 16:08:19 <JohnArwe> the assumption is most participants will be local, so local time is "it". I can attest to the effect you describe (I was in NY during the Lyon F2F) 16:09:32 <JohnArwe> ...local time also tends to dictate when rooms can be booked, when meals are available (espec in a case like F2F2 when it appears there will be no sponsors so lunch is a "go out and get it" thing) 16:10:20 <cody> I still think I am confused. 2:00 AM to start a meeting in the U.S.? 16:10:31 <Zakim> disconnecting the lone participant, nmihindu, in SW_LDP()10:00AM 16:10:32 <Zakim> SW_LDP()10:00AM has ended 16:10:32 <Zakim> Attendees were JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud, dret, svillata, TallTed, bblfish, cody, SteveS, nmihindu, roger, Sandro, krp 16:10:43 <TallTed> TallTed has joined #ldp 16:11:07 <Arnaud> hmm, I wish I knew who was 1.214.537.aaaa 16:11:17 <cody> That is Cody 16:11:23 <Arnaud> ah, thanks 16:11:32 <cody> I do not know yet how to tell Zakim to use my name 16:11:39 <Arnaud> zakim is supposed to learn over time 16:12:00 <Arnaud> zakim, aaaa is cody 16:12:00 <Zakim> sorry, Arnaud, I do not recognize a party named 'aaaa' 16:12:15 <sandro> 214 537 is appears to be Richardson, TX 16:12:17 <sandro> dunno if that helps. 16:12:41 <cody> Someone already said "zakim aaaa is cody", so maybe that is why the statement no longer works 16:12:43 <Arnaud> cody is saying it's him 16:13:02 <sandro> ah. i'm slow. 16:13:39 <Arnaud> I think it's because the call is over 16:13:51 <Arnaud> zakim, +aaaa is cody 16:13:51 <Zakim> sorry, Arnaud, I do not recognize a party named '+aaaa' 16:13:55 <Arnaud> right 16:14:21 <Arnaud> it's ok I can fix the minutes to reflect it anyway 16:14:39 <cody> Thx. 16:15:29 <JohnArwe> arnaud your transcript should show that we attributed aaaa to cody in zakim Very Shortly after he joined. he said he did not know how to do so, so I did it. 16:15:50 <Arnaud> ok 16:16:10 <JohnArwe> remember that zakim for attendees unions them all together. I forget if the minuting script collapsing resolved aliases or not. 
16:18:06 <JohnArwe> cody, wrt to the 0200 start that is Very Likely wrong, copied from Lyon (where 0800 CET would be 0200 ET) 16:19:01 <JohnArwe> ...hence: email to list on it. EricP presumably will then check whatever he booked at MIT and make Zakim's times align, then reflect that on the page (correctly) 16:19:12 <cody> OK 16:19:28 <cody> Is there a private list email? I seem to only have the public-ldp@ 16:20:38 <sandro> The charter says the group will work in public, so that's the main list. There is also member-ldp-wg for confidentail stuff like phone numbers, but that's rarely used. 16:20:39 <Arnaud> there are two lists: public-ldp and public-ldp-wg 16:20:50 <JohnArwe> all our emails are public. there is another list (public) for non-members to append to if needed. 16:20:56 <sandro> (and you are on member-ldp.wg too.) 16:21:00 <cody> Ok- got it. Thanks! 16:21:59 <Arnaud> as a member you can post to either list 16:22:03 <JohnArwe> cody: you in vegas next week? 16:22:14 <Arnaud> non members can subscribe to both but only post to public-ldp 16:24:44 <cody> No. I'm in Dallas/Fort Worth next week. Was unaware of Vegas. (Sorry, I am just really, really green at this). 16:25:25 <cody> What is going on in Las Vegas? IBM conf? 16:25:29 <JohnArwe> cody: (2) I also see you posed a question in IRC that may have been missed. Short answer on dates is that the month/year gets added very close to the end, because they are taken from the date it hits Rec. Until then all ns values we own should be thought of as provisional. 16:26:02 <JohnArwe> cody: (1) yeah Pulse Conf. if you were going to be there would be an opp for F2F meeting was the thought. NP. 16:27:13 <cody> Got it on the URL. Thanks. And enjoy the conference! 16:27:36 <JohnArwe> cody: (2) ...also the email contents were an excerpt; in the ttl file in mercurial the ns we're using for now is <>. 16:27:48 <gavinc> gavinc has joined #ldp 16:52:06 <jmv> jmv has joined #ldp 17:37:33 <bblfish> bblfish has joined #ldp 17:52:41 <cygri> cygri has joined #ldp # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000369
http://www.w3.org/2012/ldp/wiki/index.php?title=Chatlog_2013-02-25&curid=108&diff=2248&oldid=2247
The base Teuchos class.
#include <Teuchos_Object.hpp>
The Object class provides capabilities common to all Teuchos objects, such as a label that identifies an object instance, constant definitions, enum types.
Definition at line 56 of file Teuchos_Object.hpp.
Default Constructor.
Object is the primary base class in Teuchos. All Teuchos classes are derived from it, directly or indirectly. This class is seldom used explicitly.
Definition at line 37 of file Teuchos_Object.cpp.
Labeling Constructor.
Creates an Object with the given label.
Definition at line 43 of file Teuchos_Object.cpp.
Copy Constructor.
Makes an exact copy of an existing Object instance.
Definition at line 49 of file Teuchos_Object.cpp.
Destructor.
Completely deletes an Object object.
Definition at line 76 of file Teuchos_Object.cpp.
Define object label using a character std::string.
Defines the label used to describe this object.
Definition at line 89 of file Teuchos_Object.cpp.
Set the value of the Object error traceback report mode.
Sets the integer error traceback behavior. TracebackMode controls whether or not traceback information is printed when run time integer errors are detected:
<= 0 - No information report
= 1 - Fatal (negative) values are reported
>= 2 - All values (except zero) reported.
Definition at line 56 of file Teuchos_Object.cpp.
Access the object label.
Returns the std::string used to define this object.
Definition at line 84 of file Teuchos_Object.cpp.
Get the value of the Object error traceback report mode.
Definition at line 63 of file Teuchos_Object.cpp.
Print method for placing the object in an output stream.
Reimplemented in Teuchos::SerialDenseMatrix< OrdinalType, ScalarType >, Teuchos::SerialDenseVector< OrdinalType, ScalarType >, and Teuchos::SerialSymDenseMatrix< OrdinalType, ScalarType >.
Definition at line 71 of file Teuchos_Object.cpp.
Method for reporting errors with Teuchos objects.
Definition at line 134 of file Teuchos_Object.hpp.
Output stream operator for handling the printing of Object.
Definition at line 169 of file Teuchos_Object.hpp.
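For orientation, a small usage sketch follows. It is only an assumption of how these members fit together, with method names inferred from the descriptions above (label, setLabel, setTracebackMode); consult Teuchos_Object.hpp for the exact signatures.

#include "Teuchos_Object.hpp"
#include <iostream>

int main()
{
  Teuchos::Object obj("MyObject");        // labeling constructor
  std::cout << obj.label() << std::endl;  // access the object label
  obj.setLabel("RenamedObject");          // redefine the label
  Teuchos::Object::setTracebackMode(1);   // report only fatal (negative) values
  std::cout << obj << std::endl;          // operator<< delegates to print()
  return 0;
}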
http://trilinos.sandia.gov/packages/docs/r10.4/packages/teuchos/doc/html/classTeuchos_1_1Object.html
13 June 2012 05:42 [Source: ICIS news]
SINGAPORE (ICIS)--China Resources will start up its new 300,000 tonne/year polyethylene terephthalate (PET) bottle chip line at Zhuhai on 15 June, a company source said on Friday.
The company is running its two newer PET bottle chip lines at
The company has another two older lines at
These two older lines will be shut for maintenance when the two newer lines are operating at full rates,
http://www.icis.com/Articles/2012/06/13/9568812/china-resources-to-start-up-pet-bottle-chip-line-at.html
On Wed, Mar 04, 2009 at 09:31:44PM +0100, Fons Adriaensen wrote:
> On Wed, Mar 04, 2009 at 05:06:08PM +0100, Michael Niedermayer.

What i meant was that following the first time things behave somehow
"asymetrically" giving the first time a much higher effective weight ...

> > > and then "add" future times slowly into this to
> > compensate. That is it weights the first sample
> > very differently than the following ones, this is
> > clearly not optimal.
> > What makes you think that ?

common sense

> > >.

heres a simple example with the recently posted timefilter patch
code below will simulate random uncorrelated jitter and a samplerate error,
it will find the best values for both parameters using a really lame search.
if you enable the code under ALTERNATIVE it will adapt the first factor in a
very primitive way that likely isnt optimal either but beats the simple
constant case

main(){
    double n0,n1;
#define SAMPLES 1000
    double ideal[SAMPLES];
    double samples[SAMPLES];
    for(n0= 0; n0<40; n0=2*n0+1){
        for(n1= 0; n1<10; n1=2*n1+1){
            double best_error= 1000000000;
            double bestpar0=1;
            double bestpar1=0.00001;
            int better, i;
            srandom(123);
            for(i=0; i<SAMPLES; i++){
                ideal[i]  = 10 + i + n1*i/(10*10);
                samples[i]= ideal[i] + n0*(rand()-RAND_MAX/2)/(RAND_MAX*10LL);
            }
            do{
                double par0, par1;
                better=0;
                for(par0= bestpar0*0.8; par0<=bestpar0*1.21; par0+=bestpar0*0.05){
                    for(par1= bestpar1*0.8; par1<=bestpar1*1.21; par1+=bestpar1*0.05){
                        double error=0;
                        TimeFilter *tf= ff_timefilter_new(1, par0, par1);
                        for(i=0; i<SAMPLES; i++){
                            double filtered;
#if ALTERNATIVE
                            tf->feedback2_factor= FFMAX(par0, 1.0/(i+1));
#endif
                            ff_timefilter_update(tf, samples[i]);
                            filtered= ff_timefilter_read(tf);
                            error += (filtered - ideal[i]) * (filtered - ideal[i]);
                        }
                        ff_timefilter_destroy(tf);
                        if(error < best_error){
                            best_error= error;
                            bestpar0= par0;
                            bestpar1= par1;
                            better=1;
                        }
                    }
                }
            }while(better);
            printf(" [%f %f %f]", bestpar0, bestpar1, best_error);
        }
        printf("\n");
    }
}

results of this are
 [0.800000 0.000008 0.000000] [2.021560 1.006953 0.000100] [2.021560 1.006953 0.000901] [2.021560 1.006953 0.004903]
 [0.052227 0.000000 0.040452] [0.149737 0.009816 0.143180] [0.275251 0.027830 0.269643] [0.409600 0.061186 0.444376]
 [0.052227 0.000000 0.364068] [0.085899 0.003979 0.719558] [0.149737 0.009816 1.288622] [0.233964 0.021402 2.096296]
 [0.052227 0.000000 1.982147] [0.061332 0.002239 2.795419] [0.094704 0.004949 4.427345] [0.149737 0.009816 7.015830]
 [0.052227 0.000000 9.101697] [0.051952 0.001481 10.538358] [0.068719 0.002750 14.464223] [0.101469 0.005209 21.232117]
 [0.052227 0.000000 38.874358] [0.048488 0.001091 40.524498] [0.054976 0.001810 48.958747] [0.072155 0.003031 64.794577]

ALTERNATIVE:
 [0.800000 0.000008 0.000000] [2.021560 1.006953 0.000100] [2.021560 1.006953 0.000901] [2.021560 1.006953 0.004903]
 [0.001144 0.000001 0.010675] [0.134218 0.009644 0.131673] [0.261489 0.026504 0.261801] [0.389120 0.061033 0.440048]
 [0.001144 0.000001 0.096077] [0.057724 0.003997 0.572458] [0.134218 0.009644 1.185061] [0.220201 0.021150 2.017409]
 [0.001144 0.000001 0.523088] [0.035184 0.002340 1.930126] [0.068719 0.004759 3.683940] [0.134218 0.009644 6.451997]
 [0.001144 0.000001 2.401933] [0.029555 0.001397 6.646778] [0.041562 0.002893 10.444674] [0.075763 0.005290 17.908798]
 [0.001144 0.000001 10.258925] [0.024826 0.000836 22.467767] [0.033249 0.001882 32.411319] [0.041781 0.003046 47.7710
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-March/072359.html
Previous article

I'm using the same example as in my previous article, so the following are all the terms and code from the previous example:

//Simple delegate declaration
public delegate int BinaryOp(int x, int y);

//An Add() method that does some simple arithmetic operation
public int Add(int a, int b)
{
    Console.WriteLine("Add() running on thread {0}", Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep(500);
    return (a + b);
}

BinaryOp bp = new BinaryOp(Add);
IAsyncResult iftAr = bp.BeginInvoke(5, 5, null, null);
Just as a reminder, bp is an instance of the BinaryOp delegate that points to a simple Add() method performing some basic arithmetic. We passed null values for the last two arguments of BeginInvoke(). Now, instead of null, you'll pass real objects for those parameters, which are part of the signature of the BeginInvoke() method generated for the BinaryOp delegate:

BinaryOp.BeginInvoke(int x, int y, AsyncCallback cb, Object state);

Let's discuss what the last two parameters are for.
Note: In the previous example we used a Boolean variable from two different threads, which is not thread safe. If you do use such a shared variable, a very good rule of thumb is to ensure that any data shared among multiple threads is locked down.

Now, when the secondary thread completes the Add() operation, the question is: where am I going to call EndInvoke()? Well, you can call EndInvoke() in two places.
Inside Main(), just after the condition where you're waiting for the secondary thread to complete. But this does not satisfy the purpose of passing the AsyncCallback.

public delegate int BinaryOp(int x, int y);

static void Main(string[] args)
{
    Console.WriteLine("Main() running on thread {0}", Thread.CurrentThread.ManagedThreadId);
    Program p = new Program();
    BinaryOp bp = new BinaryOp(p.Add);
    IAsyncResult iftAr = bp.BeginInvoke(5, 5, new AsyncCallback(p.AddComplete), null);
    while (!iftAr.AsyncWaitHandle.WaitOne(100, true))
    {
        Console.WriteLine("Doing some work in Main()!");
    }
    int result = bp.EndInvoke(iftAr);
    Console.WriteLine("5 + 5 ={0}", result);
    Console.Read();
}

//An Add() method that does some simple arithmetic operation
public int Add(int a, int b)
{
    Console.WriteLine("Add() running on thread {0}", Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep(500);
    return (a + b);
}

//Target of the AsyncCallback delegate should match the following pattern
public void AddComplete(IAsyncResult iftAr)
{
    Console.WriteLine("AddComplete() running on thread {0}", Thread.CurrentThread.ManagedThreadId);
    Console.WriteLine("Operation completed.");
}

Output: Here you can see that AddComplete() is called after Main() completed its execution. If iftAr.AsyncWaitHandle.WaitOne(100, true) looks like an alien statement to you, have a look at Delegate and Async Programming Part I - C#.
The second and more interesting approach is to show the result by calling EndInvoke() inside AddComplete(). Now you should have a question here: the instance of the BinaryOp delegate lives on the primary thread in Main(), so how can we reach it from the secondary thread to call EndInvoke()? The answer is the System.Runtime.Remoting.Messaging.AsyncResult class.
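A sketch of how that can look follows. This is my own hedged illustration built from the example above (the original article's exact code may differ); it uses AsyncResult.AsyncDelegate to recover the delegate inside the callback:

using System.Runtime.Remoting.Messaging;

//Calling EndInvoke() from inside the callback itself
public void AddComplete(IAsyncResult iftAr)
{
    //Cast the IAsyncResult to AsyncResult to reach the original delegate instance
    AsyncResult ar = (AsyncResult)iftAr;
    BinaryOp bp = (BinaryOp)ar.AsyncDelegate;

    //Now EndInvoke() can be called on the secondary thread
    int result = bp.EndInvoke(iftAr);
    Console.WriteLine("5 + 5 ={0}", result);
}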
http://www.c-sharpcorner.com/UploadFile/vendettamit/delegate-and-async-programming-C-Sharp-asynccallback-and-object-state/
29 July 2011 13:01 [Source: ICIS news]
SHANGHAI (ICIS)--Offers for 57% diammonium phosphate (DAP) on the Chinese domestic market have reached yuan (CNY) 3,050/tonne EXW (ex-works), up by CNY50–100/tonne in the past month on strong demand, industry sources said on Friday.
Currently, Chinese producers are providing a limited supply of 64% DAP to the domestic market because they have been focusing on exports since 1 June. One producer says most of its products are supplying the export market.
There are two factors stimulating the 57% DAP market. Firstly, domestic traders are choosing to purchase 57% DAP because some small plants that have not signed any export contracts are able to supply the domestic market to some extent.
Secondly, the wholesale price of 57% DAP is lower than that of 64% DAP, which makes domestic buyers more likely to accept 57% DAP prices.
Wholesale prices of 64% DAP are mostly in the range of CNY3,450-3,650/tonne in some districts in northern, eastern and northwestern
Many Chinese users feel that the DAP price is too high for them following its considerable increase
http://www.icis.com/Articles/2011/07/29/9481164/chinas-domestic-57-dap-prices-continue-to-rise-on-strong.html
#!/usr/bin/env python
# authored by shane lindberg
# This script makes configuration files for mplayer. In particular it makes a configuration that crops widescreen
# avi files so they will better fit your 4:3 aspect tv or computer monitor
# to run this program you need to be in the directory that contains your avi files. Then just simply run the command
# it will check for the dimensions of the avi file using avitype, I think this is a part of transcode. If avitype is not
# installed the script will not work properly. This does not affect your media it only makes a config file that mplayer
# will use. At any time you can simply do 'rm *conf' to remove all of the config files this program created
# then you will be back to your old widescreen self
import os
import sys
current_dir = os.getcwd()
# this python function gets the dimensions of a video file and returns them as a tuple (width,height)
# it uses the linux video program avitype (I think this is part of the transcode package)
# getdimensions.py
def getdimensions(video_file):
    import commands
    avitype_command = '/usr/bin/avitype "%s" | grep WxH' % video_file
    dimensions = commands.getoutput(avitype_command)
    width = int(dimensions[-7:-4])
    height = int(dimensions[-3:])
    WxH = (width, height)
    return WxH

# this function finds all media in a given directory by file extension. It then places this media in a list
def movie_find(directory):
    ls_dir = os.listdir(directory)
    dir_list = []
    for i in ls_dir:
        if i.endswith('.avi'):
            dir_list.append(i)
    return dir_list

# this part checks to make sure the user has root privileges, if not it exits the script
current_user = os.geteuid()
# you may want to remove this if statement. It is needed for me because my movie files are in a write protected space
if current_user != 0:
    print "you need to be root to run this script"
    sys.exit()

# this part checks to make sure you are in the directory of the files you want to make .conf files for
print "is this the directory which contains the files you want to make .confs for"
print current_dir
answer = raw_input("enter 1 to continue")
if answer != '1':
    print "change to the correct directory then restart the script"
    sys.exit()

movie_list = movie_find(current_dir)
for i in movie_list:
    conf_name = "%s.conf" % i
    wxh = getdimensions(i)
    width = wxh[0]
    # you can change the amount of crop by adjusting the number multiplied by width. The lower the number
    # the more of a crop you will get. If the number is at the max 1, it will not be cropped at all
    cropped_width = int(.80 * width)
    print_tuple = (cropped_width, wxh[1])
    conf_file = open(conf_name, "w")
    conf_file.write("vf=crop=%s:%s\n" % print_tuple)
    conf_file.close()
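As a concrete illustration (my own example, assuming a 720x480 widescreen file called movie.avi in the current directory), the script would write a movie.avi.conf next to the movie containing:

vf=crop=576:480

since int(.80 * 720) is 576. MPlayer then applies that crop filter whenever it plays movie.avi, provided it is allowed to read per-file config files; for configs stored next to the movie this generally means starting it with the -use-filedir-conf option.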
http://www.linuxquestions.org/questions/linux-software-2/watch-your-widescreen-movies-more-full-screen-349052/
10 July 2008 18:56 [Source: ICIS news]
LONDON (ICIS news)--
With Chinese exports all but excluded, granular urea supply for the
As a result, the forward urea market is strong.
Following this, CF Industries increased its list prices, quoting $800/short ton FOB for August, $810/short ton for September, $820/short ton for Q4 and $840/short ton for Q1 2009.
By Thursday, August barges had traded at $750-755/short ton FOB Nola (
(
http://www.icis.com/Articles/2008/07/10/9139405/us-urea-hits-record-high-800-for-q1-2009.html
Supplies NOX with the set nonlinear equations.
#include <NOX_Epetra_Interface_Required.H>
Supplies NOX with the set nonlinear equations.
This is the minimum required information to solve a nonlinear problem using the NOX::Epetra objects for the linear algebra implementation. Used by NOX::Epetra::Group to provide a link to the external code for residual fills.
Type of fill that a computeF() method is used for.
computeF() can be called for a variety of reasons:
This flag tells computeF() what the evaluation is used for. This allows the user to change the fill process to eliminate costly terms. For example, sometimes, terms in the function are very expensive and can be ignored in a Jacobian calculation. The user can query this flag and determine not to recompute such terms if the computeF() is used in a Jacobian calculation.
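To make the role of this interface concrete, here is a rough sketch of a user class implementing it. The computeF signature and the FillType constant used below are assumptions based on the description above; check NOX_Epetra_Interface_Required.H for the exact declarations.

#include "NOX_Epetra_Interface_Required.H"
#include "Epetra_Vector.h"

class MyProblem : public NOX::Epetra::Interface::Required {
public:
  // Fill F(x); fillFlag tells us why the fill was requested.
  bool computeF(const Epetra_Vector& x, Epetra_Vector& F,
                const FillType fillFlag)
  {
    // Optionally skip expensive terms when the fill is only for a Jacobian estimate.
    bool skipExpensiveTerms = (fillFlag == NOX::Epetra::Interface::Required::Jac);

    for (int i = 0; i < x.MyLength(); ++i)
      F[i] = x[i] * x[i] - 2.0;   // example set of nonlinear equations

    return true;   // return false to signal a failed fill
  }
};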
http://trilinos.sandia.gov/packages/docs/r10.12/packages/nox/doc/html/classNOX_1_1Epetra_1_1Interface_1_1Required.html
COSMOSDataCollectionMeeting20071113
Agenda
- Status updates and agenda changes
- Review action items
- F2F wrapup and followup items:
- Code Refactoring (Hubert)
- Annotations (Joel)
- Error Logging (Jack/ Martin)
Attendees
- Don Ebright
- Joel Hawkins
- Hubert Leung
- Jack Devine
- Martin Simmonds
- Paul Stratton
- John Todd
- Maosheng Liang
- Bill Muldoon
- Mark Weitzel
Discussion
- Hubert has checked in the code from the sandbox. He will build an i7 driver with this code and schedule a call for Thursday 9AM-11AM EST.
- Joel described the state of his annotations work - link. We had some confusion regarding wiring terminology and namespace for bug 209244. We agreed to call the automated process autowire for now. Mark wanted to change the namespace from wsdm to a package-oriented name. We need an alternative to WS-Addressing to provide support for CMDBf. Mark updated the CMDBf addressing bug 209246 to sev 1 blocker and will discuss with Balan.
- Martin discussed some questions regarding the implementation of the error logging enhancement. He suggested creating a web service appender for log4j. He also expressed a requirement for the ability to remotely set logging levels.
Action Items
- Hubert to build a driver and schedule a call for Thursday 9AM-11AM EST.
- Mark to reopen the enhancement request to add a requirement for Joel to provide documentation of the new annotations support.
http://wiki.eclipse.org/COSMOSDataCollectionMeeting20071113
Difference between revisions of "Hammer LCD 8bit Color STN"
This is a "HowTo" for the Panasonic EDMGRB8KJF (Datasheet) available from EarthLCD to be used with the Hammer development module from TinCanTools
NOTE: This screen is an 8-bit STN screen, however in order to use it on the S3C2410A Hammer 12-bit STN must be used. The last 4 bits are simply masked off. If you have intentions of using this screen for frameworks such as Qt4 or others please first verify that they have 12-bit color support.
Hardware
NOTE: VCON should be a variable control to adjust contrast, and VDD should have separate power control, i.e. via a GPIO.

NOTE: 3.3V from the Hammer dev kit is not enough power and a separate regulator will be required to power the device. Please make sure you also tie together the ground pins of the external regulator and the dev board.
NOTE: This display also requires a backlight inverter (see datasheet for requirements)
We have a login prompt!
Picture of LCD Display running a mixer console Ncurses GUI
Adding LCD to Machine File
Open up your mach-tct_hammer.c file as mentioned above and verify the following include headers exist
#include <asm/mach-types.h> #include <mach/regs-lcd.h> #include <mach/fb.h> #include <mach/regs-gpio.h>
If they do not exist please add them.
Next add this in the upper section of the file
/* LCD/VGA controller */ #ifdef CONFIG_FB_S3C2410 static struct s3c2410fb_display __initdata tct_hammer_lcd_info = { .width = 640, .height = 480, .type = S3C2410_LCDCON1_STN8, .pixclock = 120000, .xres = 640, .yres = 480, .bpp = 12, .hsync_len = 48, .left_margin = 4 << (4 + 3), .right_margin = 8 << 3, .upper_margin = 10, .lower_margin = 10, .lcdcon5 = 2 }; /* LCD/VGA controller */ static struct s3c2410fb_mach_info __initdata tct_hammer_fb_info = { .displays = &tct_hammer_lcd_info, .num_displays = 1, .default_display = 0, .gpccon = 0xaaaaa9aa, .gpccon_mask = 0xffffffff, .gpcup = 0x0000ffff, .gpcup_mask = 0xffffffff, .gpdcon = 0x00000000, .gpdcon_mask = 0xffffffff, .gpdup = 0x00000000, .gpdup_mask = 0xffffffff, .lpcsel = ((0xCE6) & ~7) | 1<<4, }; #endif
These definitions are what notify the s3c2410fb driver how to configure itself in order to talk to the LCD display properly. We still have to tell the kernel to initialize these as well as configure the GPIO pins to run as the VD pins.
In the *tct_hammer_devices[] definition add
#ifdef CONFIG_FB_S3C2410 &s3c_device_lcd, #endif
Finally in the tct_hammer_init function add the following lines to the top of the function:
#ifdef CONFIG_FB_S3C2410 // disable LCD_DISPON. s3c2410_gpio_setpin(S3C2410_GPC4, 0); s3c2410_gpio_cfgpin(S3C2410_GPC4, S3C2410_GPIO_OUTPUT); // connect any frame buffer. s3c24xx_fb_set_platdata(&tct_hammer_fb_info); #endif
NOTE: TODO - Add information for modifying the s3c2410fb.c file, as a setting needs to be changed to swap colors
Now build your kernel and install it on the device. The last thing you may have to do is configure your getty in your rootfs to put a console on tty0 (this may include adding tty0 to your securetty list).
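As a rough illustration (assuming a BusyBox-style init; the getty path, baud rate and file locations on your rootfs may differ):

# /etc/inittab - spawn a login console on the LCD
tty0::respawn:/sbin/getty 38400 tty0

# /etc/securetty - allow root logins on that console
tty0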
http://elinux.org/index.php?title=Hammer_LCD_8bit_Color_STN&diff=47581&oldid=15609
I write perl for fun and profit.
My namespace.
I've had the chance to meet some of thee. Please let me know if I've forgotten anyone:
Tron
Wargames
Hackers (boo!)
The Net
Antitrust (gahhh!)
Electric dreams (yikes!)
Office Space
Jurassic Park
2001: A Space Odyssey
None of the above, please specify
Results (109 votes),
past polls
http://www.perlmonks.org/?node_id=140506
I actually just started trying the C++ tutorials on the main site last night, and I'm now completely and utterly engrossed. I don't know why I suddenly decided I wanted to learn a programming language, but anyway, that's not important.
I've hit a snag somewhere, well, it's probably something really simple that I'm too inexperienced to notice. I'm only on lesson 3, loops. But when I try the codes in this lesson, they don't work. I type them, letter for letter, compile and then run, and don't get the expected results, but just 0s trailing down the screen. I mess around a bit, in case I made a mistake, and that doesn't help. I don't know anyone else who knows much about C++, the only person I do know is about 6 miles away and doesn't have a house phone.
I'm actually in the dark with the compiler, too. Nowhere can I find any information on how to use it, so I could be doing something wrong there, too.
Anyway, although you can find the code in the lesson 3 tutorial, I'll just add what I've got down. Maybe someone could spot where I've gone wrong.
According to the tutorial the text should be printed more than once, but as I said, all I get is lots of 0s. If anyone can help, I'd be very grateful.

Code:
#include <iostream>

using namespace std;

int main()
{
    int x;
    x = 0;
    do
    {
        cout<<"Good morning maddam!\n";
    }
    while ( x != 0 );
    cin.get();
}
http://cboard.cprogramming.com/cplusplus-programming/86988-never-programmed-before-got-problem.html
#include <mpi.h>
MPI_Put(const void *origin_addr, int origin_count, MPI_Datatype
origin_datatype, int target_rank, MPI_Aint target_disp,
int target_count, MPI_Datatype target_datatype, MPI_Win win)
INCLUDE ’mpif.h’
MPI_PUT(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER(KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT,
TARGET_DATATYPE, WIN, IERROR
#include <mpi.h>
void MPI::Win::Put(const void* origin_addr, int origin_count, const
MPI::Datatype& origin_datatype, int target_rank, MPI::Aint
target_disp, int target_count, const MPI::Datatype&
target_datatype) const
The target buffer is specified
by the arguments target_count and target_datatype.
The data transfer is
the same as that which would occur if the origin process executed a send
operation with arguments origin_addr, origin_count, origin_datatype, target_rank,
tag, comm, and the target process executed a receive operation with arguments
target_addr, target_count, target_datatype, source, tag, comm, where target_addr
is the target buffer address computed as explained above, and comm is a
communicator for the group of win.
The communication must satisfy the same
constraints as for a similar message-passing communication. The target_datatype
may not specify overlapping entries in the target buffer. The message sent
must fit, without truncation, in the target buffer. Furthermore, the target
buffer must fit in the target window. In addition, only processes within
the same buffer can access the target window.
The target_datatype argument
is a handle to a datatype object defined at the origin process. However,
this object is interpreted at the target process: The outcome is as if
the target datatype object were defined at the target process, by the same
sequence of calls used to define it at the origin process. The target data
type must contain only relative displacements, not absolute addresses. The
same holds for get and accumulate.
The performance of a put transfer can
be significantly affected, on some systems, from the choice of window location
and the shape and location of the origin and target buffer: Transfers to
a target window in memory allocated by MPI_Alloc_mem may be much faster
on shared memory systems; transfers from contiguous buffers will be faster
on most, if not all, systems; the alignment of the communication buffers
may also impact performance.
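For orientation, a minimal sketch of an active-target put using fence synchronization follows (error handling omitted; the buffer size, ranks, and datatype are arbitrary choices for the example):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {0, 1, 2, 3};
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process exposes its buffer as a window. */
    MPI_Win_create(buf, 4 * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)                 /* origin writes into rank 1's window */
        MPI_Put(buf, 4, MPI_INT, 1 /* target_rank */, 0 /* target_disp */,
                4, MPI_INT, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}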
INTEGER*MPI_ADDRESS_KIND TARGET_DISP
MPI_Get
MPI_Accumulate
http://icl.cs.utk.edu/open-mpi/doc/v1.7/man3/MPI_Put.3.php
package org.apache.shiro.subject.support;

import org.apache.shiro.subject.Subject;
import org.apache.shiro.util.ThreadState;

/**
 * A {@code SubjectRunnable} ensures that a target/delegate {@link Runnable Runnable} will execute such that any
 * call to {@code SecurityUtils.}{@link org.apache.shiro.SecurityUtils#getSubject() getSubject()} during the
 * {@code Runnable}'s execution will return the associated {@code Subject} instance. The {@code SubjectRunnable}
 * instance can be run on any thread (the current thread or asynchronously on another thread) and the
 * {@code SecurityUtils.getSubject()} call will still work properly. This implementation also guarantees that Shiro's
 * thread state will be identical before and after execution to ensure threads remain clean in any thread-pooled
 * environment.
 * <p/>
 * When instances of this class {@link Runnable#run() run()}, the following occurs:
 * <ol>
 * <li>The Subject and any of its associated thread state is first bound to the thread that executes the
 * {@code Runnable}.</li>
 * <li>The delegate/target {@code Runnable} is {@link #doRun(Runnable) run}</li>
 * <li>Any previous thread state that might have existed before the {@code Subject} was bound is fully restored</li>
 * </ol>
 * <p/>
 *
 * <h3>Usage</h3>
 *
 * This is typically considered a support class and is not often directly referenced. Most people prefer to use
 * the {@code Subject.}{@link Subject#execute(Runnable) execute} or
 * {@code Subject.}{@link Subject#associateWith(Runnable) associateWith} methods, which transparently perform the
 * necessary association logic.
 * <p/>
 * An even more convenient alternative is to use a
 * {@link org.apache.shiro.concurrent.SubjectAwareExecutor SubjectAwareExecutor}, which transparently uses
 * instances of this class but does not require referencing Shiro's API at all.
 *
 * @see Subject#associateWith(Runnable)
 * @see org.apache.shiro.concurrent.SubjectAwareExecutor SubjectAwareExecutor
 * @since 1.0
 */
public class SubjectRunnable implements Runnable {

    protected final ThreadState threadState;
    private final Runnable runnable;

    /**
     * Creates a new {@code SubjectRunnable} that, when executed, will execute the target {@code delegate}, but
     * guarantees that it will run associated with the specified {@code Subject}.
     *
     * @param subject  the Subject to associate with the delegate's execution.
     * @param delegate the runnable to run.
     */
    public SubjectRunnable(Subject subject, Runnable delegate) {
        this(new SubjectThreadState(subject), delegate);
    }

    /**
     * Creates a new {@code SubjectRunnable} that, when executed, will perform thread state
     * {@link ThreadState#bind binding} and guaranteed {@link ThreadState#restore restoration} before and after the
     * {@link Runnable Runnable}'s execution, respectively.
     *
     * @param threadState the thread state to bind and unbind before and after the runnable's execution.
     * @param delegate    the delegate {@code Runnable} to execute when this instance is {@link #run() run()}.
     * @throws IllegalArgumentException if either the {@code ThreadState} or {@link Runnable} arguments are {@code null}.
     */
    protected SubjectRunnable(ThreadState threadState, Runnable delegate) throws IllegalArgumentException {
        if (threadState == null) {
            throw new IllegalArgumentException("ThreadState argument cannot be null.");
        }
        this.threadState = threadState;
        if (delegate == null) {
            throw new IllegalArgumentException("Runnable argument cannot be null.");
        }
        this.runnable = delegate;
    }

    /**
     * {@link ThreadState#bind Bind}s the Subject thread state, executes the target {@code Runnable} and then guarantees
     * the previous thread state's {@link ThreadState#restore restoration}:
     * <pre>
     * try {
     *     threadState.{@link ThreadState#bind bind()};
     *     { }
http://shiro.apache.org/static/1.2.2/xref/org/apache/shiro/subject/support/SubjectRunnable.html
If you have not already read my first article (from several years ago), please do so now.
Now that you are done, welcome back! I've received a lot of positive feedback regarding the first article, and had always intended on writing another one on the topic, but years passed. Well, here it finally is, long overdue!
The purpose of this particular article is to take some of the functionality we discussed in the last article and encapsulate it into a simple, easy to use library that anybody can include in their own projects. We'll take it one step further by making use of Generics. As a bonus, we'll also build in the ability for it to load source files as plug-ins (they will be compiled at runtime using CodeDom). In this light, an appropriate name for the library would be ExtensionManager, which is what I'm calling it, since it will load compiled assemblies and un-compiled source files in the same manner.
First off, let's define what an Extension really is, in this case. We have two levels of extensions in this library, if you want to think of them that way. First, we have an Extension<ClientInterface> object which isn't an extension, but a wrapper to store information for an actual extension. This wrapper's job is just to store information that we need for each extension, in addition to the methods and properties of the actual extensions.
Extension<ClientInterface>
Our extension wrapper will include several bits of data. It will accept a single generic parameter which will be the plug-in interface (IPlugin in our last article), or as we now call it, ClientInterface, which will specify the interface that all acceptable extensions will need to inherit.
We also want to store the filename of the extension in our wrapper and what kind of extension it is (an assembly or source file). Even though it won't matter for assembly extensions, if the extension is a source file, we need to keep track of the Language of that source file for compilation later.
Finally, the extension wrapper needs to store an instance of the actual extension once it is loaded. In this project, we also expose the Assembly object of that instance, in case it is needed later. (Similarly, there is also a GetType(string name) method that looks for a type in the extension's assembly object. This is needed since the Type.GetType() method will not look in our extension assemblies for types.)
public class Extension<ClientInterface>
{
    public Extension()
    {
    }

    public Extension(string filename, ExtensionType extensionType, ClientInterface instance)
    {
        this.extensionType = extensionType;
        this.instance = instance;
        this.filename = filename;
    }

    private ExtensionType extensionType = ExtensionType.Unknown;
    private string filename = "";
    private SourceFileLanguage language = SourceFileLanguage.Unknown;
    private ClientInterface instance = default(ClientInterface);
    private Assembly instanceAssembly = default(Assembly);

    public ExtensionType ExtensionType
    {
        get { return extensionType; }
        set { extensionType = value; }
    }

    public string Filename
    {
        get { return filename; }
        set { filename = value; }
    }

    public SourceFileLanguage Language
    {
        get { return language; }
        set { language = value; }
    }

    public ClientInterface Instance
    {
        get { return instance; }
        set { instance = value; }
    }

    public Assembly InstanceAssembly
    {
        get { return instanceAssembly; }
        set { instanceAssembly = value; }
    }

    public Type GetType(string name)
    {
        return instanceAssembly.GetType(name, false, true);
    }
}
Now, let's focus on the ExtensionManager object. This is the core of the library, and as its name implies, it is responsible for finding, loading, and managing all of our extensions.
I'm going to go through the ExtensionManager in logical order of its usage. First of all, you need to tell the manager what file extensions it should be on the lookout for, and how they should be mapped. For this purpose, ExtensionManager has two properties: SourceFileExtensionMappings and CompiledFileExtensions.
SourceFileExtensionMappings is a dictionary that will be responsible for mapping certain file extensions to certain languages. For example, if we wanted to map a custom file extension of ".customcsharp" to C#, we would call:
SourceFileExtensionMappings.Add(".customcsharp", SourceFileLanguage.CSharp);
In the above example, all *.customcsharp files that the ExtensionManager finds will be compiled as C# files.
CompiledFileExtensions is a simple string list containing extensions that should be loaded as assemblies that are compiled already. Typically, you would do:
CompiledFileExtensions.Add(".dll");
This would treat any .dll extension file as a compiled assembly.
You can optionally call the method ExtensionManager.LoadDefaultFileExtensions() which will load .cs, .vb, and .js as SourceFileExtensionMappings, as well as .dll as a CompiledExtension.
The next step is to tell the ExtensionManager where to look for extensions to load. You can load a single file, or look through a directory of files with the LoadExtension() and LoadExtensions() methods, respectively. It's pretty common to set something up like this, where your extensions are stored in the "Extensions" directory in your application's directory:
myExtensionManager.LoadExtensions(Application.StartupPath + "\\Extensions");
It is in these methods that the extension manager will decide if the file is a compiled assembly or source code file, and take the appropriate action to load it into memory. It will call the private methods loadSourceFile or loadCompiledFile, as appropriate.
Since my first plug-in article already covers the concept of loading an assembly based on it containing a certain interface, I'm not going to cover that again here. I will, however, go over the process of taking a source code file and compiling it, and loading it all in memory.
The real magic here is in System.CodeDom.Compiler. This lets us take the source code and compile it into an assembly with relative ease! Our loadSourceFile private method calls upon another private method compileScript which handles the process of taking the source file and getting it into an Assembly. From there, the rest of the process is the same as loading a compiled assembly. The only other thing to note is that loadSourceFile will raise an AssemblyFailedLoading event if there are compilation errors. The AssemblyFailedLoadingEventArgs for this will provide information about the compilation errors that could be used in your application for debugging.
In compileScript, all we're doing is creating a CodeDomProvider based on the given language. Now, by default, CodeDom supports C#, VB, and JavaScript. Other languages may be supported, but you would have to download the appropriate assemblies and reference them in this project to include them. IronPython might be a nice CodeDom provider to include in this for your own use!
Other than that, we set some parameters to tell CodeDom not to produce an executable, and to compile to memory. We also specify to leave out debugging symbols. You could expose this option as a property of ExtensionManager, if you desire.
The other very important step here is telling the CodeDom what references to use. With this, you can reference third party assemblies for the ExtensionManager to use. These can be setup in the ExtensionManager's ReferencedAssemblies property.
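For example, assuming ReferencedAssemblies is a plain string list like CompiledFileExtensions (the assembly names below are only placeholders):

myExtensionManager.ReferencedAssemblies.Add("System.Windows.Forms.dll");
myExtensionManager.ReferencedAssemblies.Add("MyCompany.SharedTypes.dll"); // hypothetical third-party assembly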
Finally, we invoke the compiler and return its results. CodeDom, as you can see, is very simple to work with!
private CompilerResults compileScript(string filename,
    List<string> references, string language)
{
    System.CodeDom.Compiler.CodeDomProvider cdp =
        System.CodeDom.Compiler.CodeDomProvider.CreateProvider(language);

    // Configure parameters
    CompilerParameters parms = new CompilerParameters();
    parms.GenerateExecutable = false;       // Don't make an exe file
    parms.GenerateInMemory = true;          // Don't make ANY file, do it in memory
    parms.IncludeDebugInformation = false;  // Don't include debug symbols

    // Add references passed in
    if (references != null)
        parms.ReferencedAssemblies.AddRange(references.ToArray());

    // Compile
    CompilerResults results = cdp.CompileAssemblyFromFile(parms, filename);
    return results;
}
As with my previous article, I do have a method to 'Unload' an extension, but this doesn't really unload the extension. The problem is, we are loading all of our extensions into the same AppDomain as the host. You can only truly unload assemblies from an AppDomain by unloading the AppDomain itself. As a future consideration, I may make ExtensionManager truly load extensions into their own AppDomain. For now, this is not an option.
There are a couple other events that ExtensionManager exposes. AssemblyLoading and AssemblyLoaded provide notification of what they sound like they would.
I've included an example solution to help you understand how to implement this in your own projects. There are seven projects in this solution (four of them are extensions):
It is worth noting, each extension project has a post build event to copy the extension (whether it be a .cs or a .dll file) to the HostApplication's Extensions folder. You should just be able to compile everything and run the HostApplication for a demo.
This ExtensionManager is fairly simplistic, not full of features, but it does what it is intended to do. I've used it countless times in my own projects where I've needed to provide a quick and simple way of extending my application.
Maybe, it can serve as a base from which you build upon, or perhaps, it does just what you need. Either way, I hope you have found this article useful. Enjoy!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.AssemblyResolve += new ResolveEventHandler(currentDomain_AssemblyResolve);

private Assembly currentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
    string assemblyPath = Path.Combine(PluginFolderPath, args.Name.Substring(0, args.Name.IndexOf(",")) + ".dll");
    return Assembly.LoadFrom(assemblyPath);
}
|
http://www.codeproject.com/Articles/21118/Plug-ins-in-C-Generics-Enabled-Extension-Libra?msg=2319280
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
#include "stdafx.h"
#include <iostream>
#include <string>
using namespace std;
int main()
{
string name;
double Total_amount_of_money, Remainder;
double const One_Dollar = 100, Dollarv = 1, HDollarv = .50, Quarterv = .25, Dimev = .10, Nickelv = .05, Pennyv = .01;
int Total_amount_of_cents, Dollars, Half_Dollars, Quarters, Dimes, Nickels, Pennies;
cout << "This is a program to determine the fewest of each money denomination required\nto make change.\n";
cout << endl;
cout << "Please enter your name: ";
cin >> name;
cout << endl;
cout << "Hello," << name << ". Please enter your total amount of cents: ";
cin >> Total_amount_of_cents;
cout << endl;
cout << Total_amount_of_cents << " cents is equal to: ";
cout << endl;
Dollars = Total_amount_of_cents / One_Dollar;
cout << "Dollars: " <<Dollars;
cout << endl;
Remainder = ((Total_amount_of_cents / One_Dollar) - Dollars);
Half_Dollars = (Remainder / HDollarv);
cout << "Half Dollars: " <<Half_Dollars;
cout << endl;
Quarters = ((Remainder - HDollarv) / Quarterv);
cout << "Quarters: " <<Quarters;
cout << endl;
Dimes = (((Remainder - HDollarv) - Quarterv) / Dimev);
cout << "Dimes: " <<Dimes;
cout << endl;
Nickels = ((((Remainder - HDollarv) - Quarterv) - Dimev) / Nickelv);
cout << "Nickels: " <<Nickels;
cout << endl;
Pennies = ((Remainder - HDollarv - Quarterv - Dimev - Nickelv) / Pennyv);
cout << "Pennies: " <<Pennies;
}
Dollars = Total_amount_of_cents / One_Dollar;
....
....
Remainder = ((Total_amount_of_cents / One_Dollar) - Dollars);
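For comparison, this step is usually done entirely in whole cents with integer division and remainder, which avoids the floating-point subtraction chain above. A rough sketch (not from the thread; variable names chosen for clarity):

int remaining    = Total_amount_of_cents;
int dollars      = remaining / 100; remaining %= 100;  // peel off each denomination
int half_dollars = remaining / 50;  remaining %= 50;
int quarters     = remaining / 25;  remaining %= 25;
int dimes        = remaining / 10;  remaining %= 10;
int nickels      = remaining / 5;   remaining %= 5;
int pennies      = remaining;                          // whatever is left is pennies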
|
http://www.cplusplus.com/forum/beginner/1685/
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
perlmeditation by agianni

[node://622705|<#1 - Extract Method] | Series Index | [node://627111|#3 - Inline Temp>]

"A method's body is just as clear as its name. Put the method's body into the body of its callers and remove the method." (Fowler, p. 117)

Fowler's second refactoring pattern is remarkably simple, but it is a classic example of a refactoring that breaks tests for me. It's not generally a big problem, but sometimes it makes me wonder if I'm doing something wrong.

Fowler's example in Perl looks like this:

sub get_rating{
    my $self = shift;
    return $self->more_than_five_late_deliveries() ? 2 : 1;
}

sub more_than_five_late_deliveries{
    my $self = shift;
    return $self->{_number_of_late_deliveries} > 5;
}

becomes:

sub get_rating{
    my $self = shift;
    return ( $self->{_number_of_late_deliveries} > 5 ) ? 2 : 1;
}

I might actually consider refactoring this differently given the right context. From a business logic perspective, it appears that $self->{_number_of_late_deliveries} has some business significance, so I might simply change the name of the original more_than_five_late_deliveries method to too_many_late_deliveries. Alternately, I might refactor it as Fowler does, but extract the value of the number 5 to a config-ish method called something like late_delivery_threshold.
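A quick sketch of that last alternative (an illustration, not code from the original post):

sub late_delivery_threshold { return 5; }

sub get_rating{
    my $self = shift;
    return ( $self->{_number_of_late_deliveries} > $self->late_delivery_threshold() ) ? 2 : 1;
}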
|
http://www.perlmonks.org/?displaytype=xml;node_id=623470
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
If.
Introduction to Clojure v2
A fun and gentle introduction to the Clojure language, functional programming, and data-drive programming, through the eyes of baking robot X5.
3 Functional Tools
Learn these three tools -- map, filter, and reduce-- and you'll be well on your way to developing a functional mindset.
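As a tiny taste (an illustration, not taken from the course):

(->> (range 10)
     (map inc)       ; transform every element
     (filter even?)  ; keep only the even ones
     (reduce +))     ; fold the rest into a single sum => 30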
If you come from a non-JVM language, you should brush up on the Java Virtual Machine.
JVM Fundamentals for Clojure
This course teaches you the hands-on, nitty gritty details of the JVM and Clojure/Java interop that you would pick up after many years of programming Java. It combines videos with reference sheets to give you easy access to the accumulated skills of years of experience. With this course, you won't have to worry about stack traces, JVM options, or the standard library again.
In Clojure, collections are used extensively. This will go over some of the patterns Clojure programmers use with collections.
Clojure Collections
Clojure is based on collections, but how are they used? What are some patterns for making the most of them? This course introduces you to the workhorses of the Clojure programming language, the immutable collections.
Scope refers to the rules which tell you which parts of your code can access which variables.
.
Clojure is designed to be developed using Repl-Driven Development. This is a workflow that interacts and modifies a live, running system.
Repl-Driven Development in Clojure
This course teaches the mindset, practices, and tools of Repl-Driven Development as practiced in Clojure.
Recursion is the main way we build data structures iteratively.
Recursion 101
What is recursion? How do you write recursive functions? Does it work with laziness? How is it different from a for loop? All of these questions are answered, and more, using simple, clear examples. You'll never have fear of recursion again.
Clojure organizes code into namespaces, which are simply files with certain names. This course goes over how to declare a namespace and how to navigate namespaces at the REPL.
Namespaces
Namespace declarations can be complicated. They manage all of the dependencies of one namespace on another. There are a lot of options. In this course, we go over how to make best use of them.
Leiningen is the powerful project tool that is used by most Clojure programs. It keeps track of dependencies, has a powerful plugin system, and will run your projects.
Leiningen
Leiningen is the de facto standard project tool in Clojure. This course gives an overview of
lein commands, projects, templates, and dependencies.
Web development in Clojure is based around the Ring system. This course explains all of the concepts you’ll need.
Web Development in Clojure
Web development in Clojure is just like everything else: you build up complex behavior from simple parts. This course builds a TODO list app from scratch, explaining the important concepts, and deploying it to a cloud service.
Learn the built-in testing library called clojure.test so you can do unit testing in Clojure.
Intro to clojure.test.
Clojure is known for its powerful concurrency primitives. This course is a compendium of them. Dip in and out as you need something.
.
Learn some of the basic syntax of Clojure. This course includes function syntax and for-comprehension syntax.
Clojure Syntax
Learn the tricky corners of Clojure syntax, like for comprehensions and function definitions.
Clojure’s sequences are lazy, meaning they don’t execute the elements until they are needed. This has important consequences, and you should learn those in this course.
.
A few projects done in an hour or less.
Clojure in One Hour.
Learn to read and write different data formats.
Data Formats
With so many data formats out there, it's good to see some example code for reading and writing different formats. I'm talking JSON, CSV, EDN, and more. This course explores how to read and write data formats using Clojure.
This course shows how we can use Data to model interesting properties of our systems.
.
|
https://purelyfunctional.tv/learning-paths/object-oriented-programmer/
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
1 year, 11 months ago.
Playing with sleep modes.
Why does this code perform as it does? It starts out in a sleep mode pulling around 600uA as measured on the CPU on a NUCLEO-L152RE. It seems to take close to 30 seconds for the thread waiting on an event to fire, turning the LED on. Once on, the MCU is pulling around 3.3 mA. However, it never goes back to sleep and stays on forever. It seems to me that it should loop back around and wait for another event to fire and go back to sleep mode until this happens.
#include "mbed.h" DigitalOut pin(LED1); LowPowerTicker ticker; EventFlags event; Thread t1; void setLED() { while (true) { event.wait_any(0x01); pin = !pin; } } void blink() { event.set(0x01); } int main() { t1.start(setLED); ticker.attach(blink,5.0); while(1) { sleep(); } }
Replacing the call to sleep() in the main thread with wait(osWaitForever), it performs as expected; however, it also never goes to sleep since sleep() is not called. Is this a bug in mbed?
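For reference, the variant described above would look roughly like this (only main() changes):

int main() {
    t1.start(setLED);
    ticker.attach(blink, 5.0);
    wait(osWaitForever);  // block the main thread instead of looping on sleep()
}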
1 Answer
1 year, 11 months ago.
Hi Kevin,
The reason why there is a difference when using sleep() vs wait(osWaitForever) is that sleep will just do a normal sleep, while wait(osWaitForever) will try to do a deep sleep. Deep Sleep will shut down all the hardware except for the Low Power Tickers, while the normal sleep will use the microsecond tickers. The microsecond tickers will use a lot more power than the low power tickers, but they will be more accurate. However, since you trigger an interrupt every 5s, the low power ticker would do just fine.
Please let me know if you have any questions!
- Peter Nguyen, team Mbed
If this solved your question, please make sure to click the "Thanks" link below!
I'm still confused. Whether I do sleep or wait(osWaitForever), the timer should trigger blink, which then sends an event to a waiting thread to toggle the pin. When using sleep(), nothing happens. According to what you have described, I should see the same behavior either way, just differences in power usage. (posted 21 Aug 2018)
|
https://os.mbed.com/questions/82109/Playing-with-sleep-modes/
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Managing the Compute Grid¶
Overview¶
Users in Domino assign their Runs to Domino Hardware Tiers. A hardware tier defines the type of machine a job will run on, and the resource requests and limits for the pod that the Run will execute in. When configuring a hardware tier, you will specify the machine type by providing a Kubernetes node label.
You should create a Kubernetes node label for each type of node you want available for compute workloads in Domino, and apply it consistently to compute nodes that meet that specification. Nodes with the same label become a node pool, and they will be used as available for Runs assigned to a Hardware Tier that points to their label.
Which pool a Hardware Tier is configured to use is determined by the value in the Node Pool field of the Hardware
Tier editor. In the screenshot below, the
large-k8s Hardware Tier is configured to use the
default node pool.
The diagram below shows a cluster configured with two node pools for Domino, one named
default and one named
default-gpu. You can make additional node pools available to Domino by labeling them with the same scheme:
dominodatalab.com/node-pool=<node-pool-name>. The arrows in this diagram represent Domino requesting that a node
with a given label be assigned to a Run. Kubernetes will then assign the Run to a node in the specified pool that has
sufficient resources.
By default, Domino creates a node pool with the label
dominodatalab.com/node-pool=default and all compute nodes
Domino creates in cloud environments are assumed to be in this pool. Note that in cloud environments with automatic node
scaling, you will configure scaling components like AWS Auto Scaling Groups or Azure Scale Sets with these labels to
create elastic node pools.
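For a statically provisioned node, applying the label could look like the following (illustrative; the node name is a placeholder):

kubectl label nodes <node-name> dominodatalab.com/node-pool=default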
Kubernetes pods¶
Every Run in Domino is hosted in a Kubernetes pod on a type of node specified by the selected Hardware Tier.
The pod hosting a Domino Run contains three containers:
- The main Run container where user code is executed
- An NGINX container for handling web UI requests
- An executor support container which manages various aspects of the lifecycle of a Domino execution, like transferring files or syncing changes back to the Domino file system
Resourcing requests¶
The amount of compute power required for your Domino cluster will fluctuate over time as users start and stop Runs. Domino relies on Kubernetes to find space for each execution on existing compute resources. In cloud autoscaling environments, if there’s not enough CPU or memory to satisfy a given execution request, the Kubernetes cluster autoscaler will start new compute nodes to fulfill that increased demand. In environments with static nodes, or in cloud environments where you have reached the autoscaling limit, the execution request will be queued until resources are available.
Autoscaling Kubernetes clusters will shut nodes down when they are idle for more than a configurable duration. This reduces your costs by ensuring that nodes are used efficiently, and terminated when not needed.
Cloud autoscaling resources have properties like the minimum and maximum number of nodes they can create. You should set the node maximum to whatever you are comfortable with given the size of your team and expected volume of workloads. All else equal, it is better to have a higher limit than a lower one, as nodes are cheap to start up and shut down, while your data scientists’ time is very valuable. If the cluster cannot scale up any further, your users’ executions will wait in a queue until the cluster can service their request.
The amount of resources Domino will request for a Run is determined by the selected Hardware Tier for the Run. Each Hardware Tier has five configurable properties that configure the resource requests and limits for Run pods.
Cores: The number of requested CPUs.
Cores limit: The maximum number of CPUs. Recommended to be the same as the request.
Memory: The amount of requested memory.
Memory limit: The maximum amount of memory. Recommended to be the same as the request.
Number of GPUs: The number of GPU cards available.
The request values, Cores and Memory, as well as Number of GPUs, are thresholds used to determine whether a node has capacity to host the pod. These requested resources are effectively reserved for the pod. The limit values control the amount of resources a pod can use above and beyond the amount requested. If there’s additional headroom on the node, the pod can use resources up to this limit.
However, if resources are in contention, and a pod is using resources beyond those it requested, and thereby causing excess demand on a node, the offending pod may be evicted from the node by Kubernetes and the associated Domino Run is terminated. For this reason, Domino strongly recommends setting the requests and limits to the same values.
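As a rough illustration (the values are hypothetical), a Hardware Tier configured as recommended translates into a pod resource spec along these lines:

resources:
  requests:
    cpu: "4"
    memory: 16Gi
  limits:
    cpu: "4"      # limits match requests, as recommended above
    memory: 16Gi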
User Executions Quota¶
To prevent a single user from monopolizing a Domino deployment, an administrator can set a limit on the number of executions that a user can have running concurrently. Once that limit is reached for a given user, any additional executions will be queued. This includes executions for Domino workspaces, jobs, and web applications, as well as any executions that make up an on-demand distributed compute cluster. For example, in the case of an on-demand Spark cluster, an execution slot will be consumed for each Spark executor and for the master.
See Important settings for details.
Common questions¶
How do I view the current nodes in my compute grid?¶
From the top menu bar in the admin UI, click Infrastructure. You will see both Platform and Compute nodes in this
interface. Click the name of a node to get a complete description, including all applied labels, available resources,
and currently hosted pods. This is the full
kubectl describe for the node. Non-Platform nodes in this interface with a value in the Node Pool column are compute nodes that can
be used for Domino Runs by configuring a Hardware Tier to use the pool.
How do I view details on currently active executions?¶
From the top menu of the admin UI, click Executions. This interface lists active Domino execution pods and shows the
type of workload, the Hardware Tier used, the originating user and project, and the status for each pod. There are also
links to view a full
kubectl describe output for the pod and the node, and an option to download
the deployment lifecycle log for the pod generated by Kubernetes and the Domino application.
How do I create or edit a Hardware Tier?¶
From the top menu of the admin UI, click Advanced > Hardware Tiers, then on the Hardware Tiers page click New to create a new Hardware Tier or Edit to modify an existing Hardware Tier.
Keep in mind that your a Hardware Tier’s CPU, memory, and GPU requests should not exceed the available resources of the machines in the target node pool after accounting for overhead. If you need more resources than are available on existing nodes, you may need to add a new node pool with different specifications. This may mean adding individual nodes to a static cluster, or configuring new auto-scaling components that provision new nodes with the required specifications and labels.
Important settings¶
The following settings in the
common namespace of the Domino central configuration affect compute grid behavior.
Deploying state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.deployingStateTimeoutSeconds
- Value: Number of seconds an execution pod in a deploying state will wait before timing out. Default is 60 * 60 (1 hour).
Preparing state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.preparingStateTimeoutSeconds
- Value: Number of seconds an execution pod in a preparing state will wait before timing out. Default is 60 * 60 (1 hour).
Maximum executions per user¶
- Key:
com.cerebro.domino.computegrid.userExecutionsQuota.maximumExecutionsPerUser
- Value: Maximum number of executions each user may have running concurrently. If a user tries to run more than this, the excess executions will queue until existing executions finish. Default is 25.
Quota state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.userExecutionsOverQuotaStateTimeoutSeconds
- Value: Number of seconds an execution pod that cannot be assigned due to user quota limitations will wait for resources to become available before timing out. Default is 24 * 60 * 60 (24 hours).
|
https://admin.dominodatalab.com/en/4.1/compute/compute-grid.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
import "github.com/hyperledger/fabric/core/ledger/ledgermgmt"
ErrLedgerAlreadyOpened is thrown by a CreateLedger call if a ledger with the given id is already opened
ErrLedgerMgmtNotInitialized is thrown when ledger mgmt is used before initializing this
type Initializer struct {
    CustomTxProcessors              map[common.HeaderType]ledger.CustomTxProcessor
    StateListeners                  []ledger.StateListener
    DeployedChaincodeInfoProvider   ledger.DeployedChaincodeInfoProvider
    MembershipInfoProvider          ledger.MembershipInfoProvider
    ChaincodeLifecycleEventProvider ledger.ChaincodeLifecycleEventProvider
    MetricsProvider                 metrics.Provider
    HealthCheckRegistry             ledger.HealthCheckRegistry
    Config                          *ledger.Config
    HashProvider                    ledger.HashProvider
    EbMetadataProvider              MetadataProvider
}
Initializer encapsulates all the external dependencies for the ledger module
LedgerMgr manages ledgers for all channels
func NewLedgerMgr(initializer *Initializer) *LedgerMgr
NewLedgerMgr creates a new LedgerMgr
Close closes all the opened ledgers and any resources held for ledger management
CreateLedger creates a new ledger with the given genesis block. This function guarantees that the creation of the ledger and the committing of the genesis block are an atomic action. The chain id retrieved from the genesis block is treated as a ledger id.
GetLedgerIDs returns the ids of the ledgers created
OpenLedger returns a ledger for the given id
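A rough usage sketch based only on the functions listed above (the Initializer dependencies required in practice are elided, and the error-returning signatures are assumed):

mgr := ledgermgmt.NewLedgerMgr(&ledgermgmt.Initializer{ /* dependencies elided */ })
defer mgr.Close()

// list the ledgers created so far
ids, err := mgr.GetLedgerIDs()
if err != nil {
    // handle error
}
_ = ids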
Package ledgermgmt imports 11 packages and is imported by 85 packages. Updated 2020-07-30.
|
https://godoc.org/github.com/hyperledger/fabric/core/ledger/ledgermgmt
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
SD_BUS_IS_OPEN(3) sd_bus_is_open SD_BUS_IS_OPEN(3)
sd_bus_is_open, sd_bus_is_ready - Check whether the bus connection is open or ready
#include <systemd/sd-bus.h>

int sd_bus_is_open(sd_bus *bus);
int sd_bus_is_ready(sd_bus *bus);

SEE ALSO: sd_bus_start(3), sd_bus_close(3)
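A minimal, hedged usage sketch (not part of the man page text):

#include <systemd/sd-bus.h>
#include <stdio.h>

int main(void) {
        sd_bus *bus = NULL;

        /* Connect to the user bus; a negative return value indicates an error. */
        if (sd_bus_default_user(&bus) < 0)
                return 1;

        printf("open: %d ready: %d\n", sd_bus_is_open(bus), sd_bus_is_ready(bus));

        sd_bus_unref(bus);
        return 0;
}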
Pages that refer to this page: sd_bus_get_connected_signal(3), sd_bus_get_watch_bind(3), sd_bus_set_connected_signal(3), sd_bus_set_watch_bind(3), 30-systemd-environment-d-generator(7), systemd.index(7)
|
https://www.man7.org/linux/man-pages/man3/sd_bus_is_ready.3.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
#include <qgsattributeform.h>
Definition at line 29 of file qgsattributeform.h.
Definition at line 41 of file qgsattributeform.cpp.
Definition at line 67 of file qgsattributeform.cpp.
Definition at line 153 of file qgsattributeform.h.
Takes ownership.
Definition at line 95 of file qgsattributeform.cpp.
Notifies about changes of attributes. Definition at line 112 of file qgsattributeform.cpp.
Disconnects the button box (Ok/Cancel) from the accept/resetValues slots. If this method is called, you have to create these connections from outside.
Definition at line 89 of file qgsattributeform.cpp.
Returns if the form is currently in editable mode.
Definition at line 100 of file qgsattributeform.cpp.
Intercepts keypress on custom form (escape should not close it)
Reimplemented from QObject.
Definition at line 834 of file qgsattributeform.cpp.
Definition at line 37 of file qgsattributeform.h.
Is emitted, when a feature is changed or added.
Hides the button box (Ok/Cancel) and enables auto-commit.
Definition at line 73 of file qgsattributeform.cpp.
Returns the layer for which this form is shown.
Definition at line 66 of file qgsattributeform.h.
reload current feature
Definition at line 346 of file qgsattributeform.cpp.
Alias for resetValues()
Definition at line 160 of file qgsattributeform.h.
Sets all values to the values of the current feature.
Definition at line 267 of file qgsattributeform.cpp.
Save all the values from the editors to the layer.
Definition at line 138 of file qgsattributeform.cpp.
Sets the edit command message (Undo) that will be used when the dialog is accepted.
Definition at line 89 of file qgsattributeform.h.
Update all editors to correspond to a different feature.
Definition at line 124 of file qgsattributeform.cpp.
Shows the button box (Ok/Cancel) and disables auto-commit.
Definition at line 82 of file qgsattributeform.cpp.
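A hedged usage sketch (the constructor arguments and return types are assumed from the member list above, not verified against the full API; layer and feature are assumed to exist):

QgsAttributeForm *form = new QgsAttributeForm( layer, feature );
form->hideButtonBox();        // auto-commit, no Ok/Cancel buttons
form->setFeature( feature );  // point the editors at a feature
if ( !form->save() )
{
  // handle a failed save
}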
|
https://api.qgis.org/2.12/classQgsAttributeForm.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Alias analysis pass. More...
#include <alias_analysis.h>
Alias analysis pass.
This pass produces an AliasDb that contains aliasing and mutation information about the graph. Users can use this information to determine whether mutations to the graph are safe, i.e. they don't reorder/change nodes in a way that affects output.
Every value with a mutable type (Tensors, Lists, Tuples, etc.) will be associated with one or more "alias sets". If two values share an alias set, that means they may alias, implying that a mutation to one value cannot be reordered past a use of the other. Only reordering two reads of an alias set is considered safe.
There is a special alias set called the "wildcard set", which indicates that we're not sure what this value may alias. To be conservative, we consider the wildcard alias set as potentially aliasing any value.
Definition at line 28 of file alias_analysis.h.
|
https://caffe2.ai/doxygen-c/html/classtorch_1_1jit_1_1_alias_db.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
I have added ApplicationInsights to my application and can host it locally and see the telemetry reaching my Application Insight Resource. However, when I deploy my application to the Azure App Service Application Insights stops working.
I have managed to trace this to the ApplicationInsights.config file not being read/used. I serialized the `TelemetryConfiguration.Active` object and wrote it to a log file locally and on Azure. The local file corresponds to my settings in ApplicationInsights.config,
the one on Azure does not.
In fact I can reproduce what is created on Azure locally if I delete the ApplicationInsights.config. So it's kind of a fall back configuration I guess?
The file looks to be in the correct place on Azure App Service (in the root folder with the web.config).
I'll be happy to provide more details if someone is able to help.
Thanks!
TelemetryConfiguration.Active is indeed a default configuration for App Insights that first looks for the default ApplicationInsights.config and sets everything to default if it can't find one.
Could you share the code you are using to configure Application Insights? When setting up Application Insights
in Visual Studio the initialization is handled in the web.config not Global.asax or a Startup.cs.
You can also try setting the path to ApplicationInsights.config manually to see if that will clear up the issue:
using System.IO;
TelemetryConfiguration configuration = TelemetryConfiguration.CreateFromConfiguration(File.ReadAllText(@".\ApplicationInsights.config"));
var telemetryClient = new TelemetryClient(configuration);
|
https://social.msdn.microsoft.com/Forums/azure/en-US/af020fe2-8418-4057-ab74-4f1f83839d7e/applicationinsightsconfig-is-not-being-used-in-azure-app-service?forum=ApplicationInsights
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
When a Yii application starts processing a requested URL, the first step it takes is to parse the URL into a route. The route is then used to instantiate the corresponding controller action to handle the request. This whole process is called routing.
The reverse process of routing is called URL creation, which creates a URL from a given route and the associated query parameters. When the created URL is later requested, the routing process can resolve it back into the original route and query parameters.
The central piece responsible for routing and URL creation is the URL manager,
which is registered as the
urlManager application component. The URL manager
provides the parseRequest() method to parse an incoming request into
a route and the associated query parameters and the createUrl() method to
create a URL from a given route and its associated query parameters.
By configuring the
urlManager component in the application configuration, you can let your application
recognize arbitrary URL formats without modifying your existing application code. For example, you can
use the following code to create a URL for the
post/view action:
use yii\helpers\Url;

// Url::to() calls UrlManager::createUrl() to create a URL
$url = Url::to(['post/view', 'id' => 100]);
Depending on the
urlManager configuration, the created URL may look like one of the following (or other format).
/index.php?r=post%2Fview&id=100
/index.php/post/100
/posts/100

And if the created URL is requested later, it will still be parsed back into the original route and query parameter value.
The URL manager supports two URL formats:
The default URL format uses a query parameter named
r to represent the route and normal query parameters
to represent the query parameters associated with the route. For example, the URL
/index.php?r=post/view&id=100 represents
the route
post/view and the
id query parameter
100. The default URL format does not require any configuration of
the URL manager and works in any Web server setup.
The pretty URL format uses the extra path following the entry script name to represent the route and the associated
query parameters. For example, the extra path in the URL
/index.php/post/100 is
/post/100 which may represent
the route
post/view and the
id query parameter
100 with a proper URL rule. To use
the pretty URL format, you will need to design a set of URL rules according to the actual
requirement about how the URLs should look like.
You may switch between the two URL formats by toggling the enablePrettyUrl property of the URL manager without changing any other application code.
Routing involves two steps:
When using the default URL format, parsing a request into a route is as simple as getting the value of a
GET
query parameter named
r.
When using the pretty URL format, the URL manager will examine the registered URL rules to find matching one that can resolve the request into a route. If such a rule cannot be found, a yii\web\NotFoundHttpException exception will be thrown.
Once the request is parsed into a route, it is time to create the controller action identified by the route.
The route is broken down into multiple parts by the slashes in it. For example,
site/index will be
broken into
site and
index. Each part is an ID which may refer to a module, a controller or an action.
Starting from the first part in the route, the application takes the following steps to create modules (if any),
controller and action:
Among the above steps, if any error occurs, a yii\web\NotFoundHttpException will be thrown, indicating the failure of the routing process.
When a request is parsed into an empty route, the so-called default route will be used, instead. By default,
the default route is
site/index, which refers to the
index action of the
site controller. You may
customize it by configuring the defaultRoute property of the application
in the application configuration like the following:
[
    // ...
    'defaultRoute' => 'main/index',
];
Similar to the default route of the application, there is also a default route for modules, so for example if there
is a
user module and the request is parsed into the route
user the module's defaultRoute
is used to determine the controller. By default the controller name is
default. If no action is specified in defaultRoute,
the defaultAction property of the controller is used to determine the action.
In this example, the full route would be
user/default/index.
catchAllRoute ¶
Sometimes, you may want to put your Web application in maintenance mode temporarily and display the same informational page for all requests. There are many ways to accomplish this goal. But one of the simplest ways is to configure the yii\web\Application::$catchAll property like the following in the application configuration:
[
    // ...
    'catchAll' => ['site/offline'],
];
With the above configuration, the
site/offline action will be used to handle all incoming requests.
The
catchAll property should take an array whose first element specifies a route, and
the rest of the elements (name-value pairs) specify the parameters to be bound to the action.
Info: The debug toolbar in development environment will not work when this property is enabled.
Yii provides a helper method yii\helpers\Url::to() to create various kinds of URLs from given routes and their associated query parameters. For example:
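A representative sketch of such calls (illustrative routes and parameters):

use yii\helpers\Url;

// creates a URL to a route: /index.php?r=post%2Findex
echo Url::to(['post/index']);

// creates a URL to a route with parameters: /index.php?r=post%2Fview&id=100&source=ad
echo Url::to(['post/view', 'id' => 100, 'source' => 'ad']);

// creates an anchored URL: /index.php?r=post%2Fview&id=100#content
echo Url::to(['post/view', 'id' => 100, '#' => 'content']);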
Note that in the above example, we assume the default URL format is being used. If the pretty URL format is enabled, the created URLs will be different, according to the URL rules in use.
The route passed to the yii\helpers\Url::to() method is context sensitive. It can be either a relative route or an absolute route which will be normalized according to the following rules:
Starting from version 2.0.2, you may specify a route in terms of an alias. If this is the case, the alias will first be converted into the actual route which will then be turned into an absolute route according to the above rules.
For example, assume the current module is admin and the current controller is post.
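A hedged sketch of how context-sensitive routes resolve in that situation (behavior summarized from Yii's routing conventions; verify against your Yii version):

use yii\helpers\Url;

// currently requested route: admin/post/index

// no slashes at all: treated as an action ID of the current controller
echo Url::to(['index']);        // resolves to the route admin/post/index

// no leading slash: treated as relative to the current module
echo Url::to(['post/index']);   // resolves to the route admin/post/index

// leading slash: an absolute route, used as-is
echo Url::to(['/post/index']);  // resolves to the route post/index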
The yii\helpers\Url::to() method is implemented by calling the createUrl() and createAbsoluteUrl() methods of the URL manager. In the next few subsections, we will explain how to configure the URL manager to customize the format of the created URLs.
The yii\helpers\Url::to() method also supports creating URLs that are not related with particular routes. Instead of passing an array as its first parameter, you should pass a string in this case. For example,
use yii\helpers\Url;

// currently requested URL: /index.php?r=admin%2Fpost%2Findex
echo Url::to();

// an aliased URL (the alias value shown here is illustrative):
Yii::setAlias('@example', 'http://example.com/');
echo Url::to('@example');

// an absolute URL:
echo Url::to('/images/logo.gif', true);
Besides the
to() method, the yii\helpers\Url helper class also provides several other convenient URL creation
methods. For example,
use yii\helpers\Url;

// home page URL: /index.php?r=site%2Findex
echo Url::home();

// the base URL, useful if the application is deployed in a sub-folder of the Web root
echo Url::base();

// the canonical URL of the currently requested URL
echo Url::canonical();

// remember the currently requested URL and retrieve it back in later requests
Url::remember();
echo Url::previous();
To use pretty URLs, configure the
urlManager component in the application configuration like the following:
[
    'components' => [
        'urlManager' => [
            'enablePrettyUrl' => true,
            'showScriptName' => false,
            'enableStrictParsing' => false,
            'rules' => [
                // ...
            ],
        ],
    ],
]
The enablePrettyUrl property is mandatory as it toggles the pretty URL format. The rest of the properties are optional. However, their configuration shown above is most commonly used.
The showScriptName property determines whether the entry script name should be included in the created URLs: for example, instead of creating a URL /index.php/post/100, by setting this property to be false, a URL /post/100 will be generated.
Note: In order to hide the entry script name in the created URLs, besides setting showScriptName to be
false, you may also need to configure your Web server so that it can correctly identify which PHP script should be executed when a requested URL does not explicitly specify one. If you are using Apache or nginx Web server, you may refer to the recommended configuration as described in the Installation section.
A URL rule is a class implementing the yii\web\UrlRuleInterface, usually yii\web\UrlRule. Each URL rule consists of a pattern used for matching the path info part of URLs, a route, and a few query parameters. A URL rule can be used to parse a request if its pattern matches the requested URL. A URL rule can be used to create a URL if its route and query parameter names match those that are given.
When the pretty URL format is enabled, the URL manager uses the URL rules declared in its rules property to parse incoming requests and create URLs. In particular, to parse an incoming request, the URL manager examines the rules in the order they are declared and looks for the first rule that matches the requested URL. The matching rule is then used to parse the URL into a route and its associated parameters. Similarly, to create a URL, the URL manager looks for the first rule that matches the given route and parameters and uses that to create a URL.
You can configure yii\web\UrlManager::$rules as an array with keys being the patterns and values the corresponding
routes. Each pattern-route pair constructs a URL rule. For example, the following rules
configuration declares two URL rules. The first rule matches a URL
posts and maps it into the route
post/index.
The second rule matches a URL matching the regular expression
post/(\d+) and maps it into the route
post/view and
defines a query parameter named
id.
'rules' => [ 'posts' => 'post/index', 'post/<id:\d+>' => 'post/view', ]
Info: The pattern in a rule is used to match the path info part of a URL. For example, the path info of /index.php/post/100?source=ad is post/100 (the leading and ending slashes are ignored), which matches the pattern post/(\d+).
Besides declaring URL rules as pattern-route pairs, you may also declare them as configuration arrays. Each configuration array is used to configure a single URL rule object. This is often needed when you want to configure other properties of a URL rule. For example,
'rules' => [
    // ...other url rules...
    [
        'pattern' => 'posts',
        'route' => 'post/index',
        'suffix' => '.json',
    ],
]
By default if you do not specify the
class option for a rule configuration, it will take the default
class yii\web\UrlRule, which is the default value defined in
yii\web\UrlManager::$ruleConfig.
A URL rule can be associated with named query parameters which are specified in the pattern in the format
of
<ParamName:RegExp>, where
ParamName specifies the parameter name and
RegExp is an optional regular
expression used to match parameter values. If
RegExp is not specified, it means the parameter value should be
a string without any slash.
Note: You can only use regular expressions inside of parameters. The rest of a pattern is considered plain text.
When a rule is used to parse a URL, it will fill the associated parameters with values matching the corresponding
parts of the URL, and these parameters will be made available in
$_GET later by the
request application component.
When the rule is used to create a URL, it will take the values of the provided parameters and insert them at the
places where the parameters are declared.
Let's use some examples to illustrate how named parameters work. Assume we have declared the following three URL rules:
'rules' => [ 'posts/<year:\d{4}>/<category>' => 'post/index', 'posts' => 'post/index', 'post/<id:\d+>' => 'post/view', ]
When the rules are used to parse URLs:
- /index.php/posts is parsed into the route post/index using the second rule;
- /index.php/posts/2014/php is parsed into the route post/index, the year parameter whose value is 2014 and the category parameter whose value is php using the first rule;
- /index.php/post/100 is parsed into the route post/view and the id parameter whose value is 100 using the third rule;
- /index.php/posts/php will cause a yii\web\NotFoundHttpException when yii\web\UrlManager::$enableStrictParsing is true, because it matches none of the patterns. If yii\web\UrlManager::$enableStrictParsing is false (the default value), the path info part posts/php will be returned as the route. This will either execute the corresponding action if it exists or throw a yii\web\NotFoundHttpException otherwise.
And when the rules are used to create URLs:
- Url::to(['post/index']) creates /index.php/posts using the second rule;
- Url::to(['post/index', 'year' => 2014, 'category' => 'php']) creates /index.php/posts/2014/php using the first rule;
- Url::to(['post/view', 'id' => 100]) creates /index.php/post/100 using the third rule;
- Url::to(['post/view', 'id' => 100, 'source' => 'ad']) creates /index.php/post/100?source=ad using the third rule. Because the source parameter is not specified in the rule, it is appended as a query parameter in the created URL.
- Url::to(['post/index', 'category' => 'php']) creates /index.php/post/index?category=php using none of the rules. Note that since none of the rules applies, the URL is created by simply appending the route as the path info and all parameters as the query string part.
You can embed parameter names in the route of a URL rule. This allows a URL rule to be used for matching multiple
routes. For example, the following rules embed
controller and
action parameters in the routes.
'rules' => [ '<controller:(post|comment)>/create' => '<controller>/create', '<controller:(post|comment)>/<id:\d+>/<action:(update|delete)>' => '<controller>/<action>', '<controller:(post|comment)>/<id:\d+>' => '<controller>/view', '<controller:(post|comment)>s' => '<controller>/index', ]
To parse a URL /index.php/comment/100/update, the second rule will apply, which sets the controller parameter to be comment and the action parameter to be update. The route <controller>/<action> is thus resolved as comment/update. Similarly, to create a URL for the route comment/index, the last rule will apply, which creates a URL /index.php/comments.
Info: By parameterizing routes, it is possible to greatly reduce the number of URL rules, which can significantly improve the performance of URL manager.
By default, all parameters declared in a rule are required. If a requested URL does not contain a particular parameter, or if a URL is being created without a particular parameter, the rule will not apply. To make some of the parameters optional, you can configure the defaults property of a rule. Parameters listed in this property are optional and will take the specified values when they are not provided.
In the following rule declaration, the
page and
tag parameters are both optional and will take the value of 1 and
empty string, respectively, when they are not provided.
'rules' => [
    // ...other rules...
    [
        'pattern' => 'posts/<page:\d+>/<tag>',
        'route' => 'post/index',
        'defaults' => ['page' => 1, 'tag' => ''],
    ],
]
The above rule can be used to parse or create any of the following URLs:
- /index.php/posts: page is 1, tag is ''.
- /index.php/posts/2: page is 2, tag is ''.
- /index.php/posts/2/news: page is 2, tag is 'news'.
- /index.php/posts/news: page is 1, tag is 'news'.
Without using optional parameters, you would have to create 4 rules to achieve the same result.
Note: If a pattern contains only optional parameters and slashes, the first parameter can be omitted only if all other parameters are omitted.
It is possible to include Web server names in the patterns of URL rules. This is mainly useful when your application
should behave differently for different Web server names. For example, the following rules will parse the URL into the route
admin/user/login and into
site/login.
'rules' => [ '' => 'admin/user/login', '' => 'site/login', ]
You can also embed parameters in the server names to extract dynamic information from them. For example, the following rule will parse the URL http://en.example.com/posts into the route post/index and the parameter language=en.
'rules' => [ 'http://<language:\w+>.example.com/posts' => 'post/index', ]
Since version 2.0.11, you may also use protocol relative patterns that work for both,
http and
https.
The syntax is the same as above but skipping the
http: part, e.g.:
'//' => 'site/login'.
Note: Rules with server names should not include the subfolder of the entry script in their patterns. For example, if the application's entry script is located in a subfolder of the Web root, the patterns should reference only the host name and the path without that subfolder. This will allow your application to be deployed under any directory without the need to change your URL rules. Yii will automatically detect the base URL of the application.
You may want to add suffixes to the URLs for various purposes. For example, you may add
.html to the URLs so that they
look like URLs for static HTML pages; you may also add
.json to the URLs to indicate the expected content type
of the response. You can achieve this goal by configuring the yii\web\UrlManager::$suffix property like
the following in the application configuration:
[
    // ...
    'components' => [
        'urlManager' => [
            'enablePrettyUrl' => true,
            // ...
            'suffix' => '.html',
            'rules' => [
                // ...
            ],
        ],
    ],
]
The above configuration will allow the URL manager to recognize requested URLs and also create
URLs with
.html as their suffix.
Tip: You may set / as the URL suffix so that the URLs all end with a slash.
Note: When you configure a URL suffix, if a requested URL does not have the suffix, it will be considered as an unrecognized URL. This is a recommended practice for SEO (search engine optimization) to avoid duplicate content on different URLs.
Sometimes you may want to use different suffixes for different URLs. This can be achieved by configuring the
suffix property of individual URL rules. When a URL rule has this property set, it will
override the suffix setting at the URL manager level. For example, the following configuration
contains a customized URL rule which uses
.json as its suffix instead of the global
.html suffix.
[
    'components' => [
        'urlManager' => [
            'enablePrettyUrl' => true,
            // ...
            'suffix' => '.html',
            'rules' => [
                // ...
                [
                    'pattern' => 'posts',
                    'route' => 'post/index',
                    'suffix' => '.json',
                ],
            ],
        ],
    ],
]
When implementing RESTful APIs, it is commonly needed that the same URL be parsed into different routes according to
the HTTP methods being used. This can be easily achieved by prefixing the supported HTTP methods to the patterns of
the rules. If a rule supports multiple HTTP methods, separate the method names with commas. For example, the following
rules have the same pattern
post/<id:\d+> with different HTTP method support. A request for
PUT post/100 will
be parsed into
post/update, while a request for
GET post/100 will be parsed into
post/view.
'rules' => [ 'PUT,POST post/<id:\d+>' => 'post/update', 'DELETE post/<id:\d+>' => 'post/delete', 'post/<id:\d+>' => 'post/view', ]
Note: If a URL rule contains HTTP method(s) in its pattern, the rule will only be used for parsing purpose unless
GET is among the specified verbs. It will be skipped when the URL manager is called to create URLs.
Tip: To simplify the routing of RESTful APIs, Yii provides a special URL rule class yii\rest\UrlRule which is very efficient and supports some fancy features such as automatic pluralization of controller IDs. For more details, please refer to the Routing section in the RESTful APIs chapter.
URL rules can be dynamically added to the URL manager. This is often needed by redistributable modules which want to manage their own URL rules. In order for the dynamically added rules to take effect during the routing process, you should add them during the bootstrapping stage of the application. For modules, this means they should implement yii\base\BootstrapInterface and add the rules in the bootstrap() method like the following:
public function bootstrap($app)
{
    $app->getUrlManager()->addRules([
        // rule declarations here
    ], false);
}
Note that you should also list these modules in yii\web\Application::bootstrap() so that they can participate the bootstrapping process.
Despite the fact that the default yii\web\UrlRule class is flexible enough for the majority of projects, there
are situations when you have to create your own rule classes. For example, in a car dealer Web site, you may want
to support the URL format like
/Manufacturer/Model, where both
Manufacturer and
Model must match some data
stored in a database table. The default rule class will not work here because it relies on statically declared patterns.
We can create the following URL rule class to solve this problem.
namespace app\components;

use yii\web\UrlRuleInterface;
use yii\base\BaseObject;

class CarUrlRule extends BaseObject implements UrlRuleInterface
{
    public function createUrl($manager, $route, $params)
    {
        if ($route === 'car/index') {
            if (isset($params['manufacturer'], $params['model'])) {
                return $params['manufacturer'] . '/' . $params['model'];
            } elseif (isset($params['manufacturer'])) {
                return $params['manufacturer'];
            }
        }
        return false; // this rule does not apply
    }

    public function parseRequest($manager, $request)
    {
        $pathInfo = $request->getPathInfo();
        if (preg_match('%^(\w+)(/(\w+))?$%', $pathInfo, $matches)) {
            // check $matches[1] and $matches[3] to see
            // if they match a manufacturer and a model in the database.
            // If so, set $params['manufacturer'] and/or $params['model']
            // and return ['car/index', $params]
        }
        return false; // this rule does not apply
    }
}
And use the new rule class in the yii\web\UrlManager::$rules configuration:
'rules' => [
    // ...other rules...
    [
        'class' => 'app\components\CarUrlRule',
        // ...configure other properties...
    ],
]
Since version 2.0.10 UrlManager can be configured to use UrlNormalizer for dealing
with variations of the same URL, for example with and without a trailing slash. Because technically a URL with a trailing slash and the same URL without one are different URLs, serving the same content for both of them can degrade SEO ranking.
By default normalizer collapses consecutive slashes, adds or removes trailing slashes depending on whether the
suffix has a trailing slash or not, and redirects to the normalized version of the URL using permanent redirection.
The normalizer can be configured globally for the URL manager or individually for each rule; by default each rule will use the normalizer from the URL manager. You can set UrlRule::$normalizer to false to disable normalization for a particular URL rule.
The following shows an example configuration for the UrlNormalizer:
'urlManager' => [
    'enablePrettyUrl' => true,
    'showScriptName' => false,
    'enableStrictParsing' => true,
    'suffix' => '.html',
    'normalizer' => [
        'class' => 'yii\web\UrlNormalizer',
        // use temporary redirection instead of permanent for debugging
        'action' => UrlNormalizer::ACTION_REDIRECT_TEMPORARY,
    ],
    'rules' => [
        // ...other rules...
        [
            'pattern' => 'posts',
            'route' => 'post/index',
            'suffix' => '/',
            'normalizer' => false, // disable normalizer for this rule
        ],
        [
            'pattern' => 'tags',
            'route' => 'tag/index',
            'normalizer' => [
                // do not collapse consecutive slashes for this rule
                'collapseSlashes' => false,
            ],
        ],
    ],
]
Note: by default UrlManager::$normalizer is disabled. You need to explicitly configure it in order to enable URL normalization.
When developing a complex Web application, it is important to optimize URL rules so that it takes less time to parse requests and create URLs.
By using parameterized routes, you may reduce the number of URL rules, which can significantly improve performance.
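For example, instead of declaring one rule per controller, a single parameterized rule can cover several of them at once (the controller names below are placeholders):

'rules' => [
    '<controller:(post|comment)>/<id:\d+>' => '<controller>/view',
    '<controller:(post|comment)>s' => '<controller>/index',
],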
When parsing or creating URLs, URL manager examines URL rules in the order they are declared. Therefore, you may consider adjusting the order of the URL rules so that more specific and/or more commonly used rules are placed before less used ones.
If some URL rules share the same prefix in their patterns or routes, you may consider using yii\web\GroupUrlRule so that they can be more efficiently examined by the URL manager as a group. This is often the case when your application is composed of modules, each having its own set of URL rules with the module ID as their common prefix.
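For example, the URL rules of a hypothetical admin module (the prefix and the routes below are placeholders) could be grouped like this:

'rules' => [
    [
        'class' => 'yii\web\GroupUrlRule',
        'prefix' => 'admin',
        'rules' => [
            'login' => 'user/login',
            'logout' => 'user/logout',
            'dashboard' => 'default/index',
        ],
    ],
    // ...other rules...
],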
|
https://www.yiiframework.com/doc/guide/2.0/pl/runtime-routing
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Hi all,
I just started using the Hadoop DFS last night, and it has already
solved a big performance problem we were having with throughput from
our shared NFS storage. Thanks to everyone who has contributed to
that project.
I wrote my own MapReduce implementation, because I needed two
features that Hadoop didn't have: Grid Engine integration and easy
record I/O (described below). I'm writing this message to see if
you're interested in these ideas for Hadoop, and to see what ideas I
might learn from you.
Grid Engine: All the machines available to me run Sun's Grid Engine
for job submission. Grid Engine is important for us, because it
makes sure that all of the users of a cluster get their fair share of
resources--as far as I can tell, the JobTracker assumes that one user
owns the machines. Is this a shared scenario you're interested in
supporting? Would you consider supporting job submission systems
like Grid Engine or Condor?
Record I/O: My implementation is something like the
org.apache.hadoop.record implementation, but with a couple of
twists. In my implementation, you give the system a simple Java
class, like this:
public class WordCount {
public String word;
public long count;
}
and my TypeBuilder class generates code for all possible orderings of
this class (order by word, order by count, order by word then count,
order by count then word). Each ordering has its own hash function
and comparator.
In addition, each ordering has its own serialization/deserialization
code. For example, if we order by count, the serialization code
stores only differences between adjacent counts to help with
compression.
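For illustration, the delta idea for the count ordering amounts to something like this (a sketch only, with made-up method and variable names, not the generated code itself):

// Counts are assumed to be sorted ascending, so adjacent differences
// stay small and compress well.
static void writeCounts(java.io.DataOutput out, java.util.List<WordCount> sortedByCount)
        throws java.io.IOException {
    long previous = 0;
    for (WordCount wc : sortedByCount) {
        out.writeLong(wc.count - previous);
        previous = wc.count;
    }
}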
All this code is grouped into an Order object, which is accessed like
this:
String[] fields = { "word" };
Order<WordCount> order = (new WordCountType()).getOrder( fields );
This order object contains a hash function, a comparator, and
serialization logic for ordering WordCount objects by word.
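Roughly speaking, the Order object exposes an interface along these lines (a hypothetical sketch; the real generated code is not shown here):

// Hypothetical shape of an Order<T>; names are illustrative only.
public interface Order<T> extends java.util.Comparator<T> {
    int hash(T record);                 // ordering-specific hash function
    void serialize(T record, java.io.DataOutput out) throws java.io.IOException;
    T deserialize(java.io.DataInput in) throws java.io.IOException;
}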
Is this code you'd be interested in?
Thanks,
Trevor
(by the way, Doug, you may remember me from a panel at the OSIR
workshop this year on open source search)
|
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200610.mbox/%[email protected]%3E
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
A custom service for token management (previously, one extending
InstanceIDListenerService) is no longer required and should be removed. If you
want to access the FCM token, override the onNewToken method on FirebaseMessagingService instead.
This is needed if you want to
Manage device tokens to send messages to a single device directly, or
Send messages to device groups, or
Subscribe devices to topics with the server subscription management API.
If you don't use these features, you can completely remove your client code to initiate the generation of a registration token.
Registration token best practices for FCM
Requests sent to GCM tokens will continue to work until the same instance
registers using a new version of the FCM SDK. At that time, a new token may be
returned that replaces all previous tokens, and previous tokens for that
instance may be invalidated. In some situations a registration request will
return the same token as an older registration, but you should not rely
on that behavior; instead, implement
onNewToken() to store only the latest
valid token. The last token generated is the only "safe" token
that is guaranteed to work, so you should replace any other tokens
associated with the same instance.
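Inside your FirebaseMessagingService subclass (shown in the migration section below), keeping only the latest token can be as simple as the following sketch (the preferences name, key, and sendRegistrationToServer helper are assumptions):

@Override
public void onNewToken(String token) {
    // Overwrite any previously stored token; only the newest one is kept.
    getSharedPreferences("fcm", MODE_PRIVATE)
            .edit()
            .putString("registration_token", token)
            .apply();
    sendRegistrationToServer(token); // assumed helper that uploads the token
}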
Update the Android Manifest
Before
<service android: <intent-filter> <action android: </intent-filter> </service>
After
<service android: <intent-filter> <action android: </intent-filter> </service>
Migrate your Instance ID listener service
Change
MyInstanceIDListenerService to extend
FirebaseMessagingService,
and update code to listen for token updates and get the token whenever a new
token is generated. This is the same class used for receiving messages. If you
are also migrating a class extending
GcmListenerService, these will need to be
combined into a single class.
MyInstanceIDListenerService.java Before
public class MyInstanceIDListenerService extends InstanceIDListenerService {
    ...
    @Override
    public void onTokenRefresh() {
        // Fetch updated Instance ID token and notify our app's server of any changes (if applicable).
        Intent intent = new Intent(this, RegistrationIntentService.class);
        startService(intent);
    }
}
MyFcmListenerService.java After
public class MyFcmListenerService extends FirebaseMessagingService {
    ...
    /**
     * Called if the Instance ID token is updated. This may occur if the security of
     * the previous token had been compromised. Note that this is also called
     * when the Instance ID token is initially generated, so this is where
     * you retrieve the token.
     */
    // [START refresh_token]
    @Override
    public void onNewToken(String token) {
        Log.d(TAG, "New token: " + token);
        // TODO: Implement this method to send any registration to your app's servers.
        sendRegistrationToServer(token);
    }
}
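If you are also folding in an old GcmListenerService, the same class simply gains an onMessageReceived override; a rough sketch (TAG and sendRegistrationToServer are assumed helpers defined elsewhere in your app):

public class MyFcmListenerService extends FirebaseMessagingService {

    @Override
    public void onNewToken(String token) {
        // Token handling previously in the InstanceIDListenerService.
        sendRegistrationToServer(token);
    }

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        // Message handling previously in the GcmListenerService.
        Log.d(TAG, "Message from: " + remoteMessage.getFrom());
    }
}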
Remove registration
You no longer need to explicitly initiate the generation of a registration token—the library does this automatically. Therefore, you can remove code like the following:
InstanceID instanceID = InstanceID.getInstance(this);
String token = instanceID.getToken(getString(R.string.gcm_defaultSenderId),
        GoogleCloudMessaging.INSTANCE_ID_SCOPE, null);
// [END get_token]
Log.i(TAG, "GCM Registration Token: " + token);
Note that you can still retrieve the current token in FCM by using
FirebaseInstanceId.getInstance().getInstanceId();.
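Because getInstanceId() returns a Task, the token is delivered asynchronously; a typical read looks roughly like this (TAG is assumed to be defined in your class):

FirebaseInstanceId.getInstance().getInstanceId()
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                String token = task.getResult().getToken();
                Log.d(TAG, "Current token: " + token);
            }
        });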
|
https://developers.google.com/cloud-messaging/android/android-migrate-iid-service?hl=ja
|
CC-MAIN-2020-34
|
en
|
refinedweb
|