text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Hide Forgot The program segfaults if you link the pthread library as the first library, because on intel it works in this order I think it a bug in the egcs alpha. It needs KDE but this is not a bug in KDE, because it works on my Intel. /* This program runs in both versions on my Intel RedHat 5.2 but on the alpha it gives a segfault if the pthread library is linked at first. I think this is a bug. Intel works: RH5.2 Alpha Debian segfault. * egcs-2.90.29 980515 (egcs-1.0.3 release) * RedHat 52 segfaults */ // KDE include files #include <kapp.h> #include <pthread.h> /* Success : g++ -I/opt/kde/include -I/opt/qt/include -I/usr/X11R6/include -L/opt/kde/lib -L/opt/qt/lib -L/usr/X11R6/lib test.cpp -lkfile -lkfm -lkdeui -lkdecore -lqt -lXext -lX11 -lpthread SEGFAULT : g++ -I/opt/kde/include -I/opt/qt/include -I/usr/X11R6/include -L/opt/kde/lib -L/opt/qt/lib -L/usr/X11R6/lib test.cpp -lpthread -lkfile -lkfm -lkdeui -lkdecore -lqt -lXext -lX11 The only difference is the linking order of the pthread library. */ static void *writerThread(void *arg){ printf("a\n"); return NULL; } int main(int nargs,char** args) { printf("hi\n"); KApplication a( nargs, args, "kmpg" ); pthread_t tr; printf("hello\n"); pthread_create(&tr,NULL,writerThread,NULL); printf("b\n"); a.exec(); return 0; } Are the other kde libraries linked in the example thread safe? Is this still valid on the RH 6.0? assigned to pbrown for follow ups. This still happens under RHL 6.0 on alpha. Cristian, even if the kde libraries aren't thread safe, all the KDE/X stuff is being done in one thread, so it shouldn't segfault, right? I am no linking expert, but this behaviour does seem strange to me too. Jim, can you please take a look into this? What libraries do I need to use to reproduce this bug? I tried it on a few systems here and I just got things like: /usr/include/kconfigbase.h:80: qcolor.h: No such file or directory It looks like it may be a real bug (link order does matter, but getting it wrong should not - usually - cause core dumps). Preston, is there a machine at redhat (say a chroot jails on porky/jetson?) which can reproduce this?
https://bugzilla.redhat.com/show_bug.cgi?id=816
CC-MAIN-2020-24
refinedweb
399
76.22
- Trouble making a chevron - Games in C/C++ help needed - know of any GUI programming books? - C++ Pointers and Strings - where can i found the ansy c functions documents - need help on c++ - What's wrong with this simple IF statement? - Format of OBJ file - memory leak check problem - How do I specify the font for a combobox in win32 - CFile vs. CStdioFile - Adding events to an object model exposed to Explorer - Help with first MFC program - get login name under windows xp - Copy a file? - Enabling backspace key in win32 comboBox window - Help with first MFC program - i have a question about one of the exercises in the k & r book.... - C/C++ Socket Server API: Where to start? - empty file string manipulation - sending an array too and from the main function... - all C programmers should read this book - implementing high-level checksum - bits, bytes, etc... - Dword? - A good C++ user group - Manually repainting owner draw buttons? - Opening a File - Emulating Double Click - Registry - Problem adding time by breaking hours to seconds... - Calculate Average Prob - i need help - Fractions (int/int) question... - equivalent to PHP explode or PERL split - the k&r book has hard....... - old code troubles - LZW compression - Pointers - Two Recursively Searching Algorithms - and one really annoying bug! - Mac programming with codewarrior - What is this? - unresolved external errors ??? - Implemenation of <vector> class function, PushBack() - handling arbitrary number of pipes - Splitting a Variable - Compile .cpp in vs.net without solution file? - Modifying script command functionality - need help with output redirection - Passing Pointers - alarm clock idea - win32 color constant for winxp? - Where can I learn about programing C with OO principles? - using std namespace; what does it mean? - getline (from k&r) question... - Erase junk prog - commas in MFC apps - using exponents - changing text color in win32 - GDI Question - Ripping source code from .exe - read from a formated data file - fast way to save data in C - Flickering w/ Win32 API and Painting - removing trailing newline in C - Control Panel in c/c++, through web-based interface - how to use Dos commands in Borlad C - need help with manipulating user input - Searching for MS Visual C++ book! Recommend! - Help with my bit extraction code - Passing fake keystrokes to a third part application
http://forums.devshed.com/sitemap/f-42-p-4/
crawl-001
refinedweb
373
65.22
MonadFix From HaskellWiki The MonadFix typeclass provides the 3 MonadFix laws Here are the laws of MonadFix and some implications. - purity:mfix (return . h) = return (fix h) - over pure things is the same as pure recursion.mfixdoes not add any monadic action of its own.mfix - left shrinking:mfix (\x -> a >>= \y -> f x y) = a >>= \y -> mfix (\x -> f x y) - A monadic action on the left (at the beginning) that does not involve the recursed value (here) can be factored out ofx. Somfixdoes not change the number of times the action is performed, since putting it inside or outside makes no difference.mfix - sliding: ifis strict,hmfix (liftM h . f) = liftM h (mfix (f . h)) - nesting:mfix (\x -> mfix (\y -> f x y)) = mfix (\x -> f x x) - these two laws are analogous to those of pure recursion, i.e., laws of.fix
https://wiki.haskell.org/index.php?title=MonadFix&direction=prev&oldid=41637
CC-MAIN-2015-40
refinedweb
145
71.14
Environment variables can be your friend when you don't want things to happen in productions. You can put WING_DEBUG into your environment where you develop. When you run the process under mod_wsgi, or on a machine you don't develop on, this won't be in the environment, and you won't import it. I use this one in my Django settings for every project DEBUG = bool(int(os.environ.get('DJANGO_DEBUG', 0))) and all my working environment define DJANGO_DEBUG=1. This means things like runserver/shell (any management command) runs in debug mode, but production runs automatically default to DEBUG=False For wingdbstub, you can something like this: if os.environ["WING_DEBUG"] import wingdbstub endif However I find that the debugger in wing makes my django app so much slower that I'm not willing to run with it unless I really need it. I don't find the toggling back and forth between debugging/not debugging. -------------- next part -------------- An HTML attachment was scrubbed... URL: </pipermail/wingide-users/attachments/20130124/d6cb922c/attachment.html>
http://wingware.com/pipermail/wingide-users/2013-January/010079.html
CC-MAIN-2014-52
refinedweb
175
54.42
hello everyone,i'm new to classes but i have adapted well with it. i've been given this task and i'm left with three days to submit.i tried it myself but i just can't get it please help. this is the problem below. proram 2 MULTICHOICE VIDEO RENTAL SHOP needs an Object Oriented program that process the following information about ten (10) videos in their stock • The title, the director, the year produced and a list of main actors for each video. (If there are more than five main actors, include only the five most famous actors) • The function setVideo that input information into each video object • The function getVideo that displays information of a particular video on the screen • A customer (identified by ID) plus his full names and address can borrow at most 2 videos @ R12.50 per day. A penalty of 10% per day is charged if returned late. The borrow period is at most 7 days • Video titles should not exceed 25 characters in length. If that happens, truncate the length to a most 25 characters • A video should be checked if is not rented out before borrow transaction is processed. If borrowed out the customer should either be requested for the second choice or be advised when the video is expected back in the shop • Failure for the shop to receive the video back within 7 days, is considered permanent loss and the customer is liable to a R400.00 compensation fee • Information about the number of films in stock at any point in time should be readily available • At the end of business, a report should be generated showing o Which films are rented out and when are they due o To which customers they are borrowed to o Which videos are left in stock o How much money was collected for the day Generate your own test data that will test all cases to illustrate the accuracy and consistency of your program so firstly my problem is the user has to enter 10 movies in the system,rent them to a customers,each customer has to rent maximum of two movies. the price of renting is 12.50 at the end,i have to print how many movies were rented,to who and the money i have plus how many movies are left.please help. this is what i did below. #include<iostream> #include<iomanip> #include<string> using namespace std; class Video { private: string Title; string Director; int Year; string Actors; public: Video(string title,string director,int year,string actors) { SetVideo(title,director,year,actors); } void SetVideo(string title,string director,int year,string actors) { Title=title; Director=director; Year=year; Year=(year<=0 && year<2007) ? year:1900; Actors=actors; } void GetVideo() { cout<<Title<<"\n" <<Director<<"\n" <<Year<<"\n" <<Actors<<"\n\n"; } void display() { string title,director,actors,name; int year,loop,option; cout<<"\t\t"<<"VODASHOP MOVIE RENTAL"<<"\n\n"; cout<<"How many movies to store in the system?:"; cin>>loop; cout<<endl; cin.get(); for(int x=0;x<loop;x++) { cout<<"movie information :\n\n"; cout<<"Title:"; getline(cin,title); cin.get(); cout<<"Director:"; cin.get(); getline(cin,director); cout<<"Year produced:"; cin>>year; cout<<endl; cin.get(); cout<<"HOW many actors in this movie?:"; cin>>option; cin.get(); cout<<endl; for(int y=0;y<option;y++) { cout<<"Actor:"; getline(cin,actors); cout<<"\n"; SetVideo(title,director,year,actors); } cout<<endl; } int total=loop; cout<<"There is :"<<loop<<"\n"<<"Movies in the system\n"; actors=actors; } }; class Customer { private: string Id; string Name; string Address; public: Customer(string id,string name,string address) { SetInfo(id,name,address); } void SetInfo(string id,string name,string address) { Id=id; Id=13; Name=name; Address=address; } void GetInfo() { cout<<Id<<"\n" <<Name<<"\n" <<Address<<"\n\n"; } void display() { string id,name,address; int number; cout<<"PLEASE ENTER THE CLIENT'S DETAILS:\n\n"; cout<<"ID:"; getline(cin,id); cout<<"Name:"; cin.get(); getline(cin,name); cout<<"Address:"; cin.get(); getline(cin,address); cout<<"how many movies to rent to this customers?The maximum is two:"; cin>>number; while(number<0 && number>2) { cout<<"enter number of movies:"; cin>>number; } cout<<endl; cout<<"The movies you entered is/are:\n\n"; } }; void MenuAndGet(int); int main() { Video myvideo("0","0",0,"0"); Customer myclient("0","0","0"); return 0; } void MenuAngGet() { cout<<"Please choose the correct option below\n"; }
https://www.daniweb.com/programming/software-development/threads/94347/movie-rental-customer-s-program
CC-MAIN-2017-26
refinedweb
741
57.2
We are a Scala/Java shop and we use Gradle for our build and Hudson for CI. We recently wrote some node.js code with tests in mocha. Is there anyway to get that included in our gradle workflow and setup in Hudson? I looked at the gradle-javascript-plugin but I could not figure out how to run npm test or npm install through it and not sure how to make it run through gradle-build or gradle-test commands and also let Hudson pick it up. I can get you part of the way there, I am mid-stream on this task as well. Make sure you have at least Gradle 1.2. import org.gradle.plugins.javascript.coffeescript.CoffeeScriptCompile apply plugin: 'coffeescript-base' repositories { mavenCentral() maven { url '' } } task compileCoffee(type: CoffeeScriptCompile){ source fileTree('src') destinationDir file('lib') } Reference: Provided with a way to compile my coffeescript I can now add the npm install cmd into a groovy exec request and barf depending on the exec cmd result providing stdout/stderr npm install echo $? 0 npm install npm ERR! install Couldn't read dependencies npm ERR! Failed to parse json npm ERR! Unexpected token } npm ERR! File: /<>/package.json npm ERR! Failed to parse package.json data. npm ERR! package.json must be actual JSON, not just JavaScript. npm ERR! npm ERR! This is not a bug in npm. npm ERR! Tell the package author to fix their package.json file. JSON.parse npm ERR! System Darwin 11.4.2 npm ERR! command "/usr/local/bin/node" "/usr/local/bin/npm" "install" npm ERR! cwd /<>/ npm ERR! node -v v0.8.14 npm ERR! npm -v 1.1.65 npm ERR! file /<>/package.json npm ERR! code EJSONPARSE npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /<>/npm-debug.log npm ERR! not ok code 0 echo $? 1 Results in: task npmDependencies { def proc = 'npm install'.execute() proc.in.eachLine { line -> println line} proc.err.eachLine { line -> println 'ERROR: '+line } proc.waitFor() if (proc.exitValue()!=0){ throw new RuntimeException('NPM dependency installation failed!') } } As far as the mocha tests, I don't have first-hand knowledge of this, however I suspect you can handle similarly.
https://codedump.io/share/d2OR9TRHwLIB/1/how-to-include-nodejs-tests-into-gradle-and-hudson
CC-MAIN-2017-09
refinedweb
371
71.1
Hi I have search the net to an extent that I can not more and I still have not found the answer, so I am someone here can tell me, I want to be able to list all local drives in a system, but excluding USB. Network, DVD, etc This is what I have but I don't know how to pickup only the Local Drive public class ListAllDrives { public static void main(String[] args) { List <File>files = Arrays.asList(File.listRoots()); for (File drv : files) { String s1 = FileSystemView.getFileSystemView().getSystemDisplayName (drv); String s2 = FileSystemView.getFileSystemView().getSystemTypeDescription(drv); System.out.println(s2 + " : " + s1); } } } Any suggestion would be greatly appreciated.
https://www.daniweb.com/programming/software-development/threads/357979/java-list-only-local-drives-only
CC-MAIN-2017-51
refinedweb
109
60.65
Dynamic Event Handlers Dynamic Event Handlers Hi Why don't you do exactly what you suggested? Create a new listener... say ReloadCustomersListener. Then make the class having your grid be a listener (or create a listener inside it). When receiving the event, then simply reload the grid.... I don't see the point here... No, the listeners too has to be dynamic .. so what I meant is .. the pseudo code will be something like EventBus ev = EventBus(); then the grid will add itself to listed for reloadCustomers event GridPanel gp = new GridPanel(); ev.addEventHandler("reloadCustomers", gp); so the form after submitting the data will say ev.fireEvent("reloadCustomers"); So basically the listeners, handlers & eventbus all has to be dynamic You should not use string as type for events, since it's not typesafe. Instead create a new event type like this. Code: public class MyEvents extends Events { public static final EventType ReloadCustomers = new EventType(); } fireEvent(MyEvents.ReloadCustomers,event); Similar Threads Event handlersBy eoriodan in forum Sencha Touch 1.x: DiscussionReplies: 6Last Post: 9 Sep 2010, 4:02 PM Help with event handlers in GridBy Zoze in forum Ext 2.x: Help & DiscussionReplies: 1Last Post: 11 Feb 2009, 4:19 AM Field event handlers (need help)By end-user in forum Ext 1.x: Help & DiscussionReplies: 6Last Post: 24 Aug 2007, 7:14 AM Removing event handlersBy incunix in forum Ext 1.x: Help & DiscussionReplies: 1Last Post: 4 Jul 2007, 2:43 AM
http://www.sencha.com/forum/showthread.php?123330-Dynamic-Event-Handlers
CC-MAIN-2014-42
refinedweb
242
66.64
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Each entry in the following list describes a particular issue, complete with sample source code to demonstrate the effect. Most sample code herein has been verified to compile with gcc 2.95.2 and Comeau C++ 4.2.44. _MSC_VERis defined for all Microsoft C++ compilers. Its value is the internal version number of the compiler interpreted as a decimal number. Since a few other compilers also define this symbol, boost provides the symbol BOOST_MSVC, which is defined in boost/config.hpp to the value of _MSC_VER if and only if the compiler is really Microsoft Visual C++. The following table lists some known values. using-declarations using-declarations does not work. void f(); namespace N { using ::f; } void g() { using N::f; // C2873: 'f': the symbol cannot be used in a using-declaration } #include <stdio.h> template<class T> void f() { printf("%d\n", sizeof(T)); } int main() { f<double>(); // output: "1" f<char>(); // output: "1" return 0; } forloops should be local to the loop's body, but it is instead local to the enclosing block. int main() { for(int i = 0; i < 5; ++i) ; for(int i = 0; i < 5; ++i) // C2374: 'i': Redefinition; multiple initialization ; return 0; }Workaround: Enclose the offending forloops in another pair of curly braces. Another possible workaround (brought to my attention by Vesa Karvonen) is this: #ifndef for #define for if (0) {} else for #endifNote that platform-specific inline functions in included headers might depend on the old-style forscoping. std::numeric_limitstemplate, does not work. struct A { static const int i = 5; // "invalid syntax for pure virtual method" };Workaround: Either use an enum (which has incorrect type, but can be used in compile-time constant expressions), or define the value out-of-line (which allows for the correct type, but prohibits using the constant in compile-time constant expressions). See Coding Guidelines for Integral Constant Expressions for guidelines how to define member constants portably in boost libraries. namespace N { struct A {}; void f(A); } void g() { N::A a; f(a); // 'f': undeclared identifier } template<class T> struct A {}; struct B { template<class T> friend struct A; // "syntax error" }; template<class T> struct A { template<class U> void f(); }; template<class T> template<class U> // "syntax error" void A<T>::f() // "T: undeclared identifier" { }Workaround: Define member templates in-line within their enclosing class. template<class T> struct A {}; template<class T> struct B {}; template<class T> struct A<B<T> > {}; // template class was already defined as a non-templateWorkaround: In some situations where interface does not matter, class member templates can simulate partial specialization. template<class T, typename T::result_type> // C1001: INTERNAL COMPILER ERROR: msc1.cpp, line 1794 struct B {}; // (omit "typename" and it compiles)Workaround: Leave off the "typename" keyword. That makes the program non-conforming, though. wchar_tis not built-in wchar_tis not a built-in type. wchar_t x; // "missing storage class or type identifier"Workaround: When using Microsoft Visual C++, the header boost/config.hpp includes <cstddef>, which defines wchar_tas a typedef for unsigned short. Note that this means that the compiler does not regard wchar_tand unsigned shortas distinct types, as is required by the standard, and so ambiguities may emanate when overloading on wchar_t. The macro BOOST_NO_INTRINSIC_WCHAR_Tis defined in this situation. const X *does not work void f() { const int *p = new int(5); delete p; // C2664: cannot convert from "const int *" to "void *" }Workaround: Define the function inline void operator delete(const void *p) throw() { operator delete(const_cast<void*>(p)); }and similar functions for the other cv-qualifier combinations, for operator delete[], and for the std::nothrowvariants. Library names from the <c...> headers are in the global namespace instead of namespace std. Workaround: The header boost/config.hpp will define BOOST_NO_STDC_NAMESPACE. It can be used as follows: # ifdef BOOST_NO_STDC_NAMESPACE namespace std { using ::abs; using ::fabs; } # endif Because std::size_t and std::ptrdiff_t are so commonly used, the workaround for these is already provided in boost/config.hpp.
http://www.boost.org/doc/libs/1_31_0/more/microsoft_vcpp.html
CC-MAIN-2014-42
refinedweb
682
52.6
- NAME - SYNOPSIS - our $Encoding - Argument based singleton - Lexicon hashes that are "read only" - Aliasing - METHODS - Deprecated for clarity: - $lh->get_base_class() - $lh->get_language_class() - $lh->get_language_tag() - $lh->langtag_is_loadable($lang_tag) - $lh->lang_names_hashref() - $lh->loadable_lang_names_hashref() - $lh->append_to_lexicons( $lexicons_hashref ); - $lh->remove_key_from_lexicons($key) - $lh->get_locales_obj() - $lh->get_language_tag_name(); - $lh->get_html_dir_attr() - $lh->get_locale_display_pattern() - $lh->get_language_tag_character_orientation() - $lh->lextext('Your key here.') - $lh->makethis() - $lh->makethis_base() - $lh->makevar() - $lh->get_asset() - $lh->get_asset_file() - $lh->get_asset_dir() - $lh->delete_cache() - Automatically _AUTO'd Failure Handling with hooks - Improved Bracket Notation - Additional bracket notation methods - join() - list_and() - list_or() - list_and_quoted() - list_or_quoted() - datetime() - current_year() - format_bytes() - convert() - boolean() - is_defined() - is_future() - comment() - asis() - output() - Arbitrary name/value attribute list - output() context - Project example - Phrase Utils - ENVIRONMENT - SEE ALSO - TODO - SUGGESTIONS - AUTHOR NAME Locale::Maketext::Utils - Adds some utility functionality and failure handling to Local::Maketext handles SYNOPSIS In MyApp/Localize.pm: package MyApp::Localize; use Locale::Maketext::Utils; use base 'Locale::Maketext::Utils'; our $Encoding = 'utf-8'; # see below # no _AUTO our %Lexicon = (... Make all the language Lexicons you want. (no _AUTO) Then in your script: my $lang = MyApp::Localize->get_handle('fr'); Now $lang behaves like a normal Locale::Maketext handle object but there are some new features, methods, and failure handling which are described below. our $Encoding If you set your class's $Encoding variable the object's encoding will be set to that. my $enc = $lh->encoding(); $enc is $MyApp::Localize::fr::Encoding || $MyApp::Localize::Encoding || encoding()'s default Argument based singleton The get_handle() method returns an argument based singleton. That means the overhead of initializing an object and compiling parts of the lexicon being used only happen once even if get_handle() is called several times with the same arguments. Lexicon hashes that are "read only" Sometimes you want your lexicon to be a tied hash that is read only which would be fatal when storing a compiled key's value (e.g. GDBM_File). To faciltate that (without needing other special tie()s as per "Tie::Hash::ReadonlyStack compat Lexicon") you can simply init() your handle with: $lh->{'use_external_lex_cache'} = 1; That will cause all compiled strings to be stored in the object instead of back in the lexicon. Aliasing In your package you can create an alias with this: __PACKAGE__->make_alias($langs, 1); or MyApp::Localize->make_alias([qw(en en_us i_default)], 1); __PACKAGE__->make_alias($langs); or MyApp::Localize::fr->make_alias('fr_ca'); Where $langs is a string or a reference to an array of strings that are the aliased language tags. You must set the second argument to true if __PACKAGE__ is the base class. The reason is there is no way to tell if the pakage name is the base class or not. This needs done before you call get_handle() or it will have no effect on your object really. Ideally you'd put all calls to this in the main lexicon to ensure it will apply to any get_handle() calls. Alternatively, and at times more ideally, you can keep each module's aliases in them and then when setting your obj require the module first. METHODS Deprecated for clarity: These are deprecated because they are ambiguous names (i.e. used in other places by other things) and thus problematic when harvesting phrases. $lh->print($key, @args); Shortcut for print $lh->maketext($key, @args); $lh->fetch($key, @args); Alias for $lh->maketext($key, @args); $lh->say($key, @args); Like $lh->print($key, @args); except appends $/ || \n $lh->get($key, @args); Like $lh->fetch($key, @args); except appends $/ || \n $lh->get_base_class() Returns the base class of the object. So if $lh is a MyApp::Localize::fr object then it returns MyApp::Localize $lh->get_language_class() Returns the language class of the object. So if $lh is a MyApp::Localize::fr object then it returns MyApp::Localize::fr $lh->get_language_tag() Returns the real language name space being used, not language_tag()'s "cleaned up" one $lh->langtag_is_loadable($lang_tag) Returns 0 if the argument is not a language that can be used to get a handle. Returns the language handle if it is a language that can be used to get a handle. $lh->lang_names_hashref() This returns a hashref whose keys are the language tags and the values are the name of language tag in $lh's native langauge. It can be called several ways: Give it a list of tags to lookup $lh->lang_names_hashref(@lang_tags) Have it search @INC for Base/Class/*.pm's $lh->lang_names_hashref() # IE no args Have it search specific places for Base/Class/*.pm's local $lh->{'_lang_pm_search_paths'} = \@lang_paths; # array ref of directories $lh->lang_names_hashref() # IE no args The module it uses for lookup (Locales::Language) is only required when this method is called. Make sure you have the latest verison of Locales as 0.04 (i.e. Locales::Base 0.03) is buggy! The module it uses for lookup (Locales::Language) is currently limited to two character codes but we try to handle it gracefully here. In array context it will build and return an additional hashref with the same keys whose values are the language name in the langueage itself. Does not ensure that the tags are loadable, to do that see below. $lh->loadable_lang_names_hashref() Exactly the same as $lh->lang_names_hashref() (because it calls that method...) except it only contains tags that are loadable. Has additional overhead of calling $lh->langtag_is_loadable() on each key. So most likely you'd use this on a single specific place (a page to choose their language setting for instance) instead of calling it on every instance your script is run. $lh->append_to_lexicons( $lexicons_hashref ); This method allows modules or script to append to the object's Lexicons. Consider using "Tie::Hash::ReadonlyStack compat Lexicon" instead. Each key is the language tag whose Lexicon you will prepend its value, a hashref, to. So assuming the key is 'fr', then this is the lexicon that gets appended to: __PACKAGE__::fr::Lexicon The only exception is if the key is '_'. In that case the main package's Lexicon is appended to: __PACKAGE__::Lexicon $lh->append_to_lexicons({ '_' => { 'Hello World' => 'Hello World', }, 'fr' => { 'Hello World' => 'Bonjour Monde', }, }); $lh->remove_key_from_lexicons($key) Removes $key from every lexicon. Consider using "Tie::Hash::ReadonlyStack compat Lexicon" instead. What is removed is stored in $lh->{'_removed_from_lexicons'} If defined, $lh->{'_removed_from_lexicons'} is a hashref whose keys are the index number of the $lh->_lex_refs() arrayref. The value is the key and the value that that lexicon had. This is used internally to remove _AUTO keys so that the failure handler below will get used $lh->get_locales_obj() Return a Locales object suitable for the object (or the optional locale tag argument). If the locale tag can not be loaded it tries the super (if any) then $lh->{'fallback_locale} (if set) and its super (if any) before defaulting to 'en'. Locales is where all the CLDR data and logic comes from. $lh->get_language_tag_name(); Takes 2 optional arguments: - 1. A locale tag whose name you want (defaults to the object's locale). - - 2. A locale tag whose language you want the name to be in (defaults to the object's locale). - These names are defined by the CLDR. $lh->get_html_dir_attr() With no argument, returns the object locale’s character orientation as a string suitable for HTML’s dir attribute. Given a CLDR character orientation string it will return a string suitable for HTML’s dir attribute. Given a locale tag and second argument of true (specifying that the first argument is a tag not a CLDR character orientation string ) it returns that locale’s character orientation as a string suitable for HTML’s dir attribute. The character orientation comes from CLDR. $lh->get_locale_display_pattern() Returns the locale display pattern for the object or the tag given as the ofrst optional argument. The pattern comes from the CLDR. $lh->get_language_tag_character_orientation() Returns the character orientation string for the object or the tag given as the first optional argument. The string comes from the CLDR. $lh->lextext('Your key here.') Get the lexicon’s text of 'Your key here.' without compiling it (in other words all bracket notation is still in tact). The results are suitable for the first arg to makethis(). $lh->text('Your key here.')–deprecated for clarity Deprecated name of $lh->lextext(). It is deprecated because it is an ambiguous name (i.e. used in other places by other things) and thus problematic when harvesting phrases. $lh->makethis() Like maketext() but does not lookup the phrase in the lexicon and compiles the phrase exactly as given. $lh->makethis_base() Like makethis() but uses the base class as the phrase compiler instead of the object. This is usful in testing when you want consistent semantics on arbitrary objects. $lh->makevar() This is an alias to maketext(). Its sole purpose is to be used to semantically indicate that the source code does not contain a translatable string in the call to maketext(). For example: $lh->maketext('Hello World') It is easy to tell that we need to provide translations for 'Hello World'. But given: $lh->maketext($api_rv->{'string'}) It is not easy to determine what $api_rv->{'string'} is in order to pass it on to the translation team. However if we do that like this: my $string = translatable('Hello World'); # See Locale::Maketext::Utils::MarkPhrase … $lh->makevar($api_rv->{'string'}) Then the parser can simply ignore the call to makevar() and find the value it is interested in via translatable() Additionally, since makevar() is meant to work with variables it also has the distinction of taking an array ref, as its only arg, that contains the arguments you’d normally pass to make*(). This makes it easier/possible to do something like this: $locale->makevar(@mystuff) in, say, Template Toolkit syntax. For example, if 'api_mt_result' is [ 'You have [quant,_1,user,users].', 42 ] you could do: [% locale.makevar(api_mt_result) %] instead of something hacky and convoluted like: [%- SET item_list = [] -%] [%- FOREACH i IN api_mt_result; item_list.push(i.json); END -%] [% '\[% locale.makevar(' _ item_list.join(", ") _ ') %\]' | evaltt %] $lh->get_asset() Helps you find an asset for a locale based on the locale's fallback. Takes a code ref and an optional locale tag (default to the object's locale). The code ref is passed a locale tag (from the list of the locale's fallbacks). The first tag that returns a defined value halts the loop and that value is returned. my $foo = $lh->get_asset(sub { my ($tag) = @_; return "foo+$tag" if Foo->has_locale($tag); return; }); $lh->get_asset_file() Takes a path to look for with %s where the locale tag should be. The first locale (in the object's locale's fallback list) whose path passes -f gets the path returned. Does a return; if none are found. my $template = $lh->get_asset_file('…/.locale/%s.tt'); # …/.locale/fr.tt The optional second argument is a string to return when the path passes -f. my $js = $lh->get_asset_file('…/.locale/%s.css',''); # $lh->get_asset_dir() Same as get_asset_file() but the path must pass -d. $lh->delete_cache() Delete the internal cache. Returns the hashref that was removed. You can pass it a key and only that is removed (i.e. instead of the entire cache). my $get_asset_file_cache = $lh->delete_cache('get_asset_file'); # only delete the cached data for get_asset_file() my $entire_cache = $lh->delete_cache();# delete the entire cache Currently this applies to 'get_asset_file', 'get_asset_dir', 'makethis', and 'makethis_base'. Automatically _AUTO'd Failure Handling with hooks This module sets fail_with() so that failure is handled for every Lexicon you define as if _AUTO was set and in addition you can use the hooks below. This functionality is turned off if: _AUTO is set on the Lexicon (and it was not removed internally for some strange reason) you've changed the failure function with $lh->fail_with() (If you do change it be sure to restore your _AUTO's inside $lh->{'_removed_from_lexicons'}) The result is that a key is looked for in the handle's Lexicon, then the default Lexicon, then the handlers below, and finally the key itself (Again, as if _AUTO had been set on the Lexicon). I find this extremely useful and hope you do as well :) $lh->{'_get_key_from_lookup'} If lookup fails this code reference will be called with the arguments ($lh, $key, @args) It can do whatever you want to try and find the $key and return the desired string. return $string_from_db; If it fails it should simply: return; That way it will continue on to the part below: $lh->{'_log_phantom_key'} If $lh->{'_get_key_from_lookup'} is not a code ref, or $lh->{'_get_key_from_lookup'} returned undef then this method is called with the arguments ($lh, $key, @args) right before the failure handler does its _AUTO wonderfulness. Improved Bracket Notation numf() This uses the decimal format defined in CLDR to format the number. That means there is no need to subclass or define special data per class. It takes an additional argument to specify the maximum number of decimal places you want. numerate() CLDR plural rule aware version of the Locale::Maketext numerate(). That means there is no need to subclass. See Locales::DB::Docs::PluralForms for locale specific arguments. quant() CLDR plural rule aware version of the Locale::Maketext quant(). That means there is no need to subclass, you need to specify all arguments (except the always optional “Special Zero” argument that some locales have) See Locales::DB::Docs::PluralForms for locale specific arguments. e.g. Key is 'en', value is 'ru': '… [quant,_1,one category text,other category text,special_zero text] …' => '… [quant,_1,one category text,few category text,many category text,other category text] …' You can use '%s' to specify the position of the number. Other wise it is prepended with a space (except “Special Zero” which won't contain the number.) The number is formatted via numf() with a max decimal place of 3. You can pass in an array ref instead of the number in order to pass in a max decimal places value. The first item is the number, the second item is the max decimal places. maketext('The average monthly rainfall is [quant,_1,inch,inches].', $n); # … is 42.869 inches. maketext('The average monthly rainfall is [quant,_1,inch,inches].', [$n,2]); # … is 42.87 inches. Additional bracket notation methods join() Joins the given arguments with the first argument: [join,-,_*], @numbers becomes 1-2-3-4-5 [join,,_*], @numbers becomes 12345 [join,~,,_*], @numbers becomes 1,2,3,4,5 [join,~, ,_*], @numbers becomes 1, 2, 3, 4, 5 Array ref arguments are expanded: $lh->maketext('… [join,-,_1,2] …', [1,2,3],4); # … 1-2-3-4 … list_and() Take a list of arguments (like join–array ref arguments are expanded) and format it per the locale's CLDR list pattern for and. You chose [list_and,_1]., \@pals You chose Rhiannon. You chose Rhiannon and Parker. You chose Rhiannon, Parker, and Char. You chose Rhiannon, Parker, Char, and Bean. See "get_list_and()" in Locales for more information. list_or() Same as list_and but with or-lists. See "get_list_or()" in Locales for more information and an important caveat. list_and_quoted() Like list_and() but all values are quoted via the CLDR to disambiguate them. list_or_quoted() Like list_or() but all values are quoted via the CLDR to disambiguate them. list()–deprecated Creates a phrased list "and/or" style: You chose [list,and,_*]., @pals You chose Rhiannon. You chose Rhiannon and Parker. You chose Rhiannon, Parker, and Char. You chose Rhiannon, Parker, Char, and Bean. The 'and' above is by default an '&': You chose [list,,_*] You chose Rhiannon, Parker, & Char A locale can set that but I recommend being explicit in your lexicons so the translators will know what you're trying to say: [list,and,_*] [list,or,_*] A locale can also control the seperator and "oxford" comma character (IE empty string for no oxford comma) The locale can do this by setting some variables in the same manner you'd set 'numf_comma' to change how numf() behaves for a class without having to write an almost identical method. The variables are (w/ defaults shown): $lh->{'list_seperator'} = ', '; $lh->{'oxford_seperator'} = ','; $lh->{'list_default_and'} = '&'; Array ref arguments are expanded. datetime() Allows you to get datetime output formatted for the current locale. 'Right now it is [datetime]' It can take 2 arguments which default to DateTime->now and 'date_format_long' respectively. The first argument tells the function what point in time you want. The values can be: - A hashref of arguments suitable for DateTime->new() An epoch suitable for DateTime->from_epoch()'s 'epoch' field. Uses UTC as the time zone A time zone suitable for DateTime constructors' 'time_zone' field The current time is used. Passing it an empty string will result in UTC being used. An epoch and time zone as above joined together by a colon A colon followed by nothing will result in UTC The second tells it what format you'd like that point in time stringified. The values can be: A coderef that returns a string suitable for DateTime->format_cldr() A string that is the name of a DateTime::Locale *_format_* method A string suitable for DateTime->format_cldr() current_year() CLDR version of current year. i.e. Shortcut to [datetime,,YYYY]. format_bytes() Convert byte count to human readable format. Does not require external modules. 'You have used [format_bytes,_1] of your alloted space.', $bytes Accepts an optional argument for max number of decimal places, default is 2. convert() Shortcut to Math::Units convert() 'The fish was [convert,_1,_2,_3]" long', $feet,'ft','in' boolean() This method allows you to choose a word or phrase to use based on a boolean. The first argument is the boolean value which should be true, false, or undefined. The next arguments are the string to use for a true value, the string to use for a false value and an optional value for an undefined value (if none is given undefined uses the false value). 'You [boolean,_1,have won,didn't win] a new car.' 'You [boolean,_1,have won,didn't win,have not entered our contest to win] a new car.' $lh->maketext(q{Congratulations! It's a [boolean,_1,girl,boy]!}, $is_a_girl); It can have “embedded args”: 'You must specify a value[boolean,_1, for the “_1” field]. is_defined() This method allows you to choose a word or phrase to use based on definedness. The first argument is the value which should be defined or undefined. The next arguments are: the string to use for a defined value, the string to use for a undefined value and an optional string for a defined value that is false (if none is given undefined uses the undefined value). It can have “embedded args”. 'Sorry, [is_defined,_2,“_2” is an invalid,you must specify a valid] value for “[_1]”.' 'domain', $domain # Sorry, “localhost” is an invalid value for “domain”. # Sorry, you must specify a valid value for “domain”. is_future() The first argument is the same as the first argument to datetime(). Then comes the string for future and the string for false. Your session [is_future,_1,will expire,expired] on [datetime,_1,date_format_medium]. comment() Embed comments in your phrases. 'The transmogrifier has been constipulated to level “[_1]”[comment,The argument is the variable name containing the superposition of the golden ratio’s decimal place in relation to π as mildegredaded by the authoritative falloosifier.].' # The transmogrifier has been constipulated to level “☺”. asis() Include non-translatable text in your phrase, e.g. a proper name. 'Thanks you for contacting [asis,Feel Good Inc.].' This is a short-name alias to 'output,asis' so it can have embedded methods like any output() method. 'Thanks you for contacting [asis,Foo chr(38) Barsup(®)].' Does not support embedded args. output() When you output a phrase you might mark it up by wrapping the string in, say, <p> tags. You wouldn't inlcude HTML *in* the key itself for a number of obvious reasons (HTML is not human, HTML is not the only possible output you may ever want, etc): print $lh->maketext('<p class="ok">Good news everyone!</p>'); # WRONG DO NOT DO THIS !! print q{<p class="ok">} . $lh->maketext('Good news everyone!') . "</p>"; # good What about when you want to format something inside the string? For example, you want to be sure certain words stand out. Or the argument is a URL that you want to be a link? Again, you don't want to add formatting inside the string so what do you do? You use the output() method. This method allows you to specify various output types. Those types allows a key to specify how a word or phrase should be output without having to understand or anticipate every possible context it might be used in. 'What ever you do, do [output,strong,NOT] cut the blue wire!' 'Your addon domain [output,underline,_1] has been setup.' Default output methods. Each output method name is the second argument to output. e.g. if the output method is 'xyz' you'd use it like this [output,xyz,…] and define a new one like this 'sub output_xyz { … }'. All output() methods support embedded methods: sub(), sup(), chr(), amp(), and numf(). Note: sub(), sup(), and numf() are simplified in that they only work with one argument. These default bare bones methods support 3 contexts: HTML, ANSI, and plain text. See "output() context" below. Feel free to over ride them if they do not suit your needs. The terminal control codes were ripped from Term::ANSIColor but the module itself is not used. underline() Underline the string: 'You [output,underline,must] be on time from now on.' For HTML it uses a span tag w/ CSS, for text it uses the standard terminal control code 4. Allows embedded arguments in the string. Supports "Arbitrary name/value attribute list". strong() Make the string strong: 'You [output,strong,do not] want to feed the velociraptors.' For HTML it uses a <strong>, for text it uses the standard terminal control code 1. Allows embedded arguments in the string. Supports "Arbitrary name/value attribute list". em() Add emphasis to the string: 'We [output,em,want] you to succeed.' For HTML it uses a <em>, for text it uses the standard terminal control code 3. (This may change in the future. See the blurb about "not all displays are ISO 6429-compliant" at "NOTES" in Term::ANSIColor.) Allows embedded arguments in the string. Supports "Arbitrary name/value attribute list". url() Handle URLs appropriately: In its simplest form you pass it a URL and the text: 'Visit [output,url,_1,CPAN] today.', '' in HTML context you get: Visit <a href="">CPAN</a> today. in non-HTML context you get: Visit CPAN () today. It is more flexible by using a special hash. 'You must [output,url,_1,html,click here,plain,go to] to complete your registration.' The arguments after the method name ('output') and the output type ('url') are: the URL, a hash of values to use in determining the string that the URL is turned into. The main keys are 'html' and 'plain' (the latter is used for both 'plain' and 'ansi' contexts). Their values are the string to use in conjuction with the context's rendering of the value. Embedded arguments are supported in those values: 'You must [output,url,_1,html,click on the _2 icon,plain,go to] to complete your registration.', $URL, '<img …/>' For HTML it uses a plain anchor tag. You can specify _type => 'offsite' to the arguments and it will have 'target="_blank" class="offsite"' as attributes. Again, feel free to create your own if this does not suit your needs. [output,url,_1,html,click here,_type,offsite,…] For text the URL is appended unless it had embedded args and the string contains the URL after those arguments are applied. 'You should [output,url,plain:visit _1 soon,…].' becomes 'You should visit soon.' and 'You should [output,url,_1,plain,visit,…].' becomes 'You should visit.' Both 'html' and 'plain' fallback to the URL itself if no value is given: My favorite site is [output,url,_1,_type,offsite]. text: My favorite site is. html: My favorite site is <a target="_blank" class="offsite" href=""></a>. This method can be used also when the context has different types of values. For example, a web based UI might have a URL but via command line there is an equivalent command to run. 'To unlock this account [output,url,_1,plain,execute `%s` via SSH,html,click here].' Tips: Pass the URL in as an argument so that if the URL changes your phrase won't. That also lends itself to reusability. Try to use context agnostic verbiage. e.g. Click [output,url,_1,here] for the documentation. It won't look right in a terminal (e.g. Click here (http://….) for the documentation.) thus it takes away reusability. Supports "Arbitrary name/value attribute list". The display text (whether from arg (i.e. simple form) or from “html” or “plain” keys) can have embedded methods. chr() Output the character represented by the given number. It is a wrapper around perl's built-in chr function that also encodes the value into the handle's encoding if it's over 127, and outputs as appropriate for the UI. $lh->maketext('I [output,chr,60]3 ascii art!'); For text you get 'I <3 ascii art!' For HTML you get 'I <3 ascii art!' class() Output the given string as a certain class of text. Since terminals have no concept of a styling classes we currently just make it bold. You could create your own 'sub output_class' that has a map of your project's standard visual CSS classes to ANSIColor escape sequences to use. $lh->maketext('The user [output,class,_1,highlight,contrast] was updated.', $user); For text you get 'The user bob was updated.' with 'bob' wrapped in the standard terminal control code 1. For HTML you get 'The user <span class="highlight contrast">bob</span> was updated.' encode_puny() $lh->maketext('The ascii safe version of your domain is “[output,encode_puny,_1]”.', $domain); If the string is already punycode it will return the string as-is. If there are any problems encoding the string it will return 'Error: invalid string for punycode'. decode_puny() $lh->maketext('The unicode version of your domain is “[output,decode_puny,_1]” is really cool.', $domain); If the string is not punycode it will return the string as-is. If there are any problems decoding the string it will return 'Error: invalid punycode'. asis_for_tests() Returns the given string as-is. Named so as to explicitly indicate a testing state. Allows embedded arguments in the string. attr() Alias for inline() inline() Allows assigning attributes to part of a string. The first argument is the string. The rest are outlined in "Arbitrary name/value attribute list". Allows embedded arguments in the string. In HTML context it is a span tag. block() Same as inline() except, in HTML context, it uses a div instead of span. The div should conceptually be an inline-div for positioning part of a string and not for document stucture. (Bracket notation is not a template system!) When we get real world examples of this I'll update the POD. For now you probably really want output,inline or output,sub or output,sup. img() Output an image. In non-HTML context the ALT text is what is output. The arguments are the images src and alt (alt default to src but don't do that). '[output,img,big_brother.png,Big Brother] is watching you!' Allows embedded arguments in the alt string. Supports "Arbitrary name/value attribute list" except 'src' and 'alt' which will be ignored if given. abbr() Takes 2 arguments: the abbreviated form and the non-abbreviated form. [output,abbr,Abbr.,Abbreviation] Supports "Arbitrary name/value attribute list" except 'title' which will be ignored if given. Best for truncation type abbreviations. (Mnemonic: abbr is a truncated word itself) If you want to further pin down the type of abbreviation is is you can specify a more specific class (e.g. end-clip, blend, numeronym, begin-clip, phonogram, contraction, portmanteau, apheresis, aphesis, etc). acronym() Takes 2 arguments: the acronym and what it stands for. [output,acronym,SCUBA,Self Contained Underwater Breathing Apparatus] Supports "Arbitrary name/value attribute list" except 'title' which will be ignored if given. Best for initial type abbreviations. (Typically all caps) To be HTML5 compat it outputs <abbr> with a class of “initialism” (like bootstrap). If you want to further pin down the type of abbreviation is is you can specify a more specific class (e.g. acronym, hybrid, acrostic, alphabetism, backronym, macronym, recursive, context, composite, etc). If you do pass in a class value “initialism” is still retained. sup() Super script the argument. [output,sup,X] Allows embedded arguments in the string. Supports "Arbitrary name/value attribute list". sub() Sub script the argument. [output,sub,X] Allows embedded arguments in the string. Supports "Arbitrary name/value attribute list". nbsp() Convenience method to get a non breaking space character (not the HTML entity, the character–works the same as the entity ina browser). Helps to visually indicate you intend a non breaking space when it is required. 'foo[output,nbsp]bar' vs 'foo bar' amp() [output,amp] is a shortcut to [output,chr,38] lt() [output,lt] is a shortcut to [output,chr,60] gt() [output,gt] is a shortcut to [output,chr,62] apos() [output,apos] is a shortcut to [output,chr,39] quot() [output,quot] is a shortcut to [output,chr,34] shy() [output,shy] is a shortcut to [output,chr,173] asis() [output,asis,…] Adding your own output methods Output methods can be created (and overridden) simply by defining a method prefixed by output_ followed by the output type. For example in your lexicon class you would: sub output_de_profanitize { my ($lh, $word_or_phrase, $level, $substitute) = @_; return get_clean_text({ 'lang' => $lh->get_language_tag(), 'text' => $word_or_phrase, 'level' => $level, 'character' => $substitute, }); } Then you can use this in your lexicon key: 'Quote of the day "[output,de_profanitize,_1,9,*]"' Your class can do whatever you like to determine the context and is by no means limited to 'plain' and 'html' types. Keys that are not context names (i.e. _type) should be preceded by an underscore. Arbitrary name/value attribute list Methods that support this feature are able to accept additional arguments treated as name/value pair attributes. The idea is to embed ones that will likely not change and hopefully add to the meaning of the string. Your hair [output,inline,is on fire,class,urgent]! After that list (or instead of it) a single hashref can be passed in. The intent here is to be able to do any arbitrary name/value that the caller might want to use but is likely to change and/or adds little meaning if any to the string. output() context Context as used here means what type of output we want based on where it will be happening at. 'html' will do output suitable for use in HTML code. 'ansi' will do output suitable for a terminal. 'plain' will do output without any sort of formatting. set_context() Set the context. If no arguments are given it will set it to 'html' or 'ansi' based on IO::Interactive::Tiny. This happens automatically if needed so you shouldn't have to call it unless you want to change it. Otherwise it accepts 'html', 'ansi', or 'plain'. Returns the context that it sets it to (or an empty string if you pass in a second true argument). get_context() Takes no arguments. Returns 'html', 'ansi', or 'plain'. Calls $lh->set_context() if it has not been set yet. set_context_html() Takes no arguments. Sets the contect to 'html'. Returns the context it was set to previously (or an empty string if you pass in a second true argument) on success, false otherwise. set_context_ansi() Takes no arguments. Sets the contect to 'ansi'. Returns the context it was set to previously (or an empty string if you pass in a second true argument) on success, false otherwise. set_context_plain() Takes no arguments. Sets the contect to 'plain'. Returns the context it was set to previously (or an empty string if you pass in a second true argument) on success, false otherwise. context_is() Takes one argument and returns true if that is what the context currently is. context_is_html() Takes no arguments. Returns true if that is what the context is currently 'html'. context_is_ansi() Takes no arguments. Returns true if that is what the context is currently 'ansi'. context_is_plain() Takes no arguments. Returns true if that is what the context is currently 'plain'. maketext_html_context() Does maketext() under the 'html' context regardless of what the current context is. maketext_ansi_context() Does maketext() under the 'ansi' context regardless of what the current context is. maketext_plain_context() Does maketext() under the 'plain' context regardless of what the current context is. Project example Main Class: package MyApp::Localize; use Locale::Maketext::Utils; use base 'Locale::Maketext::Utils'; our $Encoding = 'utf-8'; __PACKAGE__->make_alias([qw(en en_us i_default)], 1); our %Lexicon = ( 'Hello World' => 'Hello World', # $Onesided used to allow for 'Hello World' => '', ); 1; French class: package MyApp::Localize::fr; use base 'MyApp::Localize'; our %Lexicon = ( 'Hello World' => 'Bonjour Monde', ); # not only is this too late to be of any use # but it's pointless as it already in essence happens since a failed NS # lookup tries the superordinate (in this case 'fr') before moving on # __PACKAGE__->make_alias('fr_ca'); sub init { my ($lh) = @_; $lh->SUPER::init(); $lh->{'numf_comma'} = 1; # Locale::Maketext numf() return $lh; } 1; Standard" .pm layout In the name of consistency I recommend the following "Standard" namespace/file layout. You put all of your locales in MainNS::language_code You put any utility functions/methods in MainNS::Utils and/or MainNS::Utils::* So assuming a main class of MyApp::Localize the files && name spaces would be: MyApp/Localize.pm MyApp::Localize MyApp/Localize/Utils.pm MyApp::Localize::Utils MyApp/Localize/Utils/Etc.pm MyApp::Localize::Utils::Etc MyApp/Localize/Utils/AndSoOn.pm MyApp::Localize::Utils::AndSoOn MyApp/Localize/fr.pm MyApp::Localize::fr MyApp/Localize/it.pm MyApp::Localize::it MyApp/Localize/es.pm MyApp::Localize::es ... If you choose to use this paradigm you'll have two additional methods available: $lh->get_base_class_dir() Returns the directory that correspnds to the base class's name space. Again, assuming a main class of MyApp::Localize it'd be '/usr/lib/whatever/MyApp/Localize' $lh->list_available_locales() Returns a list of locales available. These are based on the .pm files in $lh->get_base_class_dir() that are not 'Utils.pm'. They are returned in the order glob() returns them. (i.e. no particular order) Assuming the file layout above you'd get something like (fr, it, es, ...) This would be useful for creating a menu of available languages to choose from: my ($current_lookup, $native_lookup) = $lh->lang_names_hashref('en', $lh->list_available_locales()); # since our main lexicon only has aliases (i.e. no .pm file): # we want the main language on top and we only want one of the aliases: the superordinate for my $langtag ('en', sort $lh->list_available_locales()) { if ($current_lookup->{$langtag} eq $native_lookup->{$langtag}) { # do menu entry like "Current $current_lookup->{$langtag} ($langtag)" # Currently English (en) } else { # do menu entry like "$current_lookup->{$langtag} :: $native_lookup->{$langtag} :: ($langtag)" # Italian :: Italiano (it) } } Tie::Hash::ReadonlyStack compat Lexicon Often you'll want to add things to the lexicon. Perhaps a server's local version of a few strings or a context specific lexicon and using append_to_lexicons() and remove_key_from_lexicons() is too cumbersome. Buy making your lexicon a Tie::Hash::ReadonlyStack hash we can do just that. First we make our main lexicon: use Tie::Hash::ReadonlyStack; tie %MyApp::Localize::Lexicon, 'Tie::Hash::ReadonlyStack', \%actual_lexicon; '%actual_lexicon' can be a normal hash or a specially tied hash (e.g. a GDBM_READER GDBM_File hash) Next we add the server admin's overrides: $lh->add_lexicon_override_hash($tag, 'server', \%server); When we init a user we add their override: $lh->add_lexicon_override_hash($tag, 'user', \%user); Then we start a request and add request specific keys (perhaps a small lexicon package included with the module that implements the functionality for the current request) to fallback on if they do not exist: $lh->add_lexicon_fallback_hash($tag, 'request', \%request); After the request we don't need that last one any more so we remove it: $lh->del_lexicon_hash($tag, 'request'); When the user context goes out of scope out we clean up theirs as well: $lh->del_lexicon_hash($tag, 'user'); If you choose to use this paradigm (via Tie::Hash::ReadonlyStack or a class implementing the methods in use below) you'll have three additional methods availble: These methods all returns false if the lexicon is not tied to an object that implements the method necessary to do this. Otherwise they return whatever the tied class's method returns add_lexicon_override_hash() This adds a hash to be checked before any others currently in the stack. Takes 2 or 3 arguments. The language tag whose lexicon we are adding to, a short identifier string, and a reference to a hash. If the language tag is not specified or not in use in the current object the main lexicon is the one it gets asssigned to. # $lh is 'fr' and the main language is english, both are tied to Tie::Hash::ReadonlyStack $lh->add_lexicon_override_hash('fr', 'user', \%user_fr); # updated the 'fr' lexicon $lh->add_lexicon_override_hash('user', \%user_en); # updates main lexicon since no language was specified $lh->add_lexicon_override_hash('it', 'user', \%user_it); # updates main lexicon since 'it' is not in use in the handle Uses "add_lookup_override_hash()" in Tie::Hash::ReadonlyStack under the hood. add_lexcion_fallback_hash() Like "add_lexicon_override_hash()" except that it adds the hash after any others currently in the stack. Uses "add_lookup_fallback_hash()" in Tie::Hash::ReadonlyStack under the hood. $lh->{'add_lex_hash_silent_if_already_added'} That attribute when true (e.g. set in init()) will cause the add_lexicon* methods to return true if the given name has been added before it tries to add them (which will return false since they exist already) Care must be taken that you're not using the same identifier with different hashes or you some lexicons will simply not be added. A better approach is to design your stack modifying logic so it doesn't try to add uplicate entries. This option is really only inteneded for debugging and testing. del_lexicon_hash() This deletes a hash added via add_lexicon_override_hash() or add_lexicon_fallback_hash() from the stack. Its arguments are the langtag and the short identifier string. If langtag is not specified or is an '*' then it is removed from all lexicons in use. If the specified langtag is not in use in the current object it gets removed from the main lexicon. $lh->del_lexicon_hash('fr', 'user'); # remove 'user' from the 'fr' lexicon $lh->del_lexicon_hash('*', 'user'); # remove 'user' from all the handle's lexicons $lh->del_lexicon_hash('user'); # remove 'user' from all the handle's lexicons $lh->del_lexicon_hash('it', 'user'); # remove 'user' from the main lexicon since 'it' is not in use Uses "del_lookup_hash()" in Tie::Hash::ReadonlyStack under the hood. Phrase Utils See Locale::Maketext::Utils::Phrase::Norm for pragmatic examination and normalization of maketext phrase. See Locale::Maketext::Utils::Phrase::cPanel for the same but via a cPanel recipe. See Locale::Maketext::Utils::Mock for a mock object you can use for testing phrases. See Locale::Maketext::Utils::MarkPhrase for a lightweight way to mark phrases in source code as translatable. ENVIRONMENT $ENV{'maketext_obj'} gets set to the language object on initialization ( for functions to use, see "FUNCTIONS" below ) unless $ENV{'maketext_obj_skip_env'} is true FUNCTIONS Locale::Maketext::Pseudo has some exportable functions that make use of $ENV{'maketext_obj'} to do things like: use Locale::Maketext::Pseudo qw(env_maketext); ... env_maketext("Hello, my name is [_1]", $name); # use real object if we have one otherwise use pseudo object SEE ALSO Locale::Maketext, Locales::Language, Locale::Maketext::Pseudo, Text::Extract::MaketextCallPhrases If you use "$lh-"lang_names_hashref()> or "$lh-"loadable_lang_names_hashref()> make sure you have the latest verison of Locales as 0.04 (i.e. Locales::Base 0.03) is buggy! TODO Audit that “Arbitrary name/value attribute list” is being used everywhere it makes sense to and that each use of it is documented. Add in currently beta datetime_duration() ("LOCALIZATION of DateTime::Format modules" in DateTime::Format::Span and company) Add in currently beta currency(), currency_convert() Add more tests for v0.20: The changes have been in production outside of CPAN for a while, this was just a release to bring the CPAN verison up to date. SUGGESTIONS If you have an idea for a method that would fit into this module just let me know and we'll see what can be done AUTHOR Daniel Muey, This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.6 or, at your option, any later version of Perl 5 you may have available.
https://metacpan.org/pod/Locale::Maketext::Utils
CC-MAIN-2016-26
refinedweb
6,777
55.74
Mitaka Series Release Notes¶ 8.4.0¶ Known Issues¶ - In kernels < 3.19 net.ipv4.ip_nonlocal_bind was not a per-namespace kernel option. L3 HA sets this option to zero to avoid sending gratuitous ARPs for IP addresses that were removed while processing. If this happens then gratuitous ARPs are going to be sent which might populate ARP caches of peer machines with the wrong MAC address. Upgrade Notes¶ - Server notifies L3 HA agents when HA router interface port status becomes active. Then L3 HA agents spawn keepalived process. So, server has to be restarted before the L3 agents during upgrade. Bug Fixes¶ - Versions of keepalived < 1.2.20 don’t send gratuitous ARPs when keepalived process receives SIGHUP signal. These versions are not packaged in some Linux distributions like RHEL, CentOS or Ubuntu Xenial. Not sending gratuitous ARPs may lead to peer ARP caches containing wrong information about floating IP addresses until the entry is invalidated. Neutron now sends gratuitous ARPs for all new IP addresses that appear on non-HA interfaces in router namespace which simulates behavior of new versions of keepalived. 8.3.0¶ DHCP and L3 Agent scheduling is availability zone aware. The Neutron server no longer needs to be configured with a firewall driver and it can support mixed environments of hybrid iptables firewalls and the pure OVS firewall. By default, the QoS driver for the Open vSwitch and Linuxbridge agents calculates the burst value as 80% of the available bandwidth. New Features¶ - A DHCP agent is assigned to an availability zone; the network will be hosted by the DHCP agent with availability zone specified by the user. - An L3 agent is assigned to an availability zone; the router will be hosted by the L3 agent with availability zone specified by the user. This supports the use of availability zones with HA routers. DVR isn’t supported now because L3HA and DVR integration isn’t finished. - The Neutron server now learns the appropriate firewall wiring behavior from each OVS agent so it no longer needs to be configured with the firewall_driver. This means it also supports multiple agents with different types of firewalls. Upgrade Notes¶ - A new option ha_keepalived_state_change_server_threadshas been added to configure the number of concurrent threads spawned for keepalived server connection requests. Higher values increase the CPU load on the agent nodes. The default value is half of the number of CPUs present on the node. This allows operators to tune the number of threads to suit their environment. With more threads, simultaneous requests for multiple HA routers state change can be handled faster. Other Notes¶ - Please read the OpenStack Networking Guide. 8.2.0¶ Add options to designate external dns driver of neutron for SSL based connections. This makes it possible to use neutron with designate in scenario where endpoints are SSL based. Users can specify to skip cert validation or specify path to a valid cert in [designate] section of neutron.conf file. Support for IPv6 addresses as tunnel endpoints in OVS. New Features¶ - Two new options are added to [designate] section to support SSL. - First option insecure allows to skip SSL validation when creating a keystone session to initate a designate client. Default value is False, which means to always verify connection. - Second option ca_cert allows setting path to a valid cert file. Default is None. - The local_ip value in ml2_conf.ini can now be set to an IPv6 address configured on the system. Other Notes¶ - Requires OVS 2.5+ version or higher with linux kernel 4.3 or higher. More info at OVS github page. 8.1.0¶ Support configuration of greenthreads pool for WSGI. Several NICs per physical network can be used with SR-IOV. Bug Fixes¶ - The ‘physical_device_mappings’ of sriov_nic configuration now can accept more than one NIC per physical network. For example, if ‘physnet2’ is connected to enp1s0f0 and enp1s0f1, ‘physnet2:enp1s0f0,physnet2:enp1s0f1’ will be a valid option. Other Notes¶ - Operators may want to tune the max_overflowand wsgi_default_pool_sizeconfiguration options according to the investigations outlined in this mailing list post. The default value of wsgi_default_pool_sizeinherits from that of oslo.config, which is currently 100. This is a change in default from the previous Neutron-specific value of 1000.
https://docs.openstack.org/releasenotes/neutron/mitaka.html
CC-MAIN-2017-09
refinedweb
702
58.08
FOSSASIA is maintaining a superior blog and it contains blog posts about projects and programs run by FOSSASIA. While we were implementing SUSI Web Chat Application we got a requirement to implement a blog page. Speciality of this blog page is it is not a separate blog page, it fetches blog posts and other related data by filtering the FOSSASIA’s main blog. In this blog post I’ll discuss how we fetched and managed those data on front-end and how we made the appearance same as the FOSSASIA main blog. First we get blog posts as a JSON. For that we used rss2json API. we can get the RSS feed as a JSON by sending our RSS feed URL to the rss2json API. Check the rss2json API documentation here. It produces all posts as items array. Next we store this array of responses in our application as a state. This response contains blog post titles featured images’ details and post content and other metadata such as tags, author name and published date. We had few requirements to fulfill. First one is to show full content of the blogpost in a new blog page. We can take the full content from response like this, this.state.posts.slice(this.state.startPage, this.state.startPage + 10).map((posts, i) => { let content = posts.content; }) We can use “cintent” variable to show content but it contains the featured image. We have to skip that image. For that, let htmlContent = content.replace(/<img.*?>/, ''); Now we have to render this string value as HTML. For that we have to install “test-to-html” package using below command. npm install html-to-text --save Now we can convert text into html like this htmlContent = renderHTML(htmlContent); We used this HTML content inside the “CardText” tag. <CardText> {htmlContent} </CardText> At the bottom of the post we needed to show author name, tags list and categories list. Since tags and categories come in one array, we have to separate them. First we defined an array which contains all the categories in Fossasia blog. Then we compared that array with the categories we got like this. const allCategories = ['FOSSASIA','GSoC','SUSI.AI'] Compare two arrays, posts.categories.map((cat) => { let k = 0; for (k = 0; k < allCategories.length; k++) { if (cat === allCategories[k]) { category.push(cat); } } }); we defined this “arrDiff” simple function to get the difference of two arrays. var tags=arrDiff(category,posts.categories) Make the list of categories let fCategory=category.map((cat) => <span key={cat} ><a className="tagname" href={'' + cat.replace(/\s+/g, '-').toLowerCase()}{cat}</a></span> ); We can use above step to make tags list. Then after used it in the “CardActions” <span className='categoryContainer'> <i className="fa fa-folder-open-o tagIcon"></i> {fCategory} </span> According to the final requirement we needed to add social media share buttons for Facebook and Twitter. If we need to make a twitter share button we have to follow up this method. But we can use “react-share” npm package to make these kind of share buttons. This is how we made Facebook and Twitter share buttons. First of all we have to install “react-share” package using below command. npm install react-share --save Then we have to import the installed package. import { ShareButtons, generateShareIcon } from 'react-share'; Then after we defined Button and Icon like this. const {FacebookShareButton,TwitterShareButton} = ShareButtons; const FacebookIcon = generateShareIcon('facebook'); const TwitterIcon = generateShareIcon('twitter'); Now we can use these components. <TwitterShareButton url={posts.guid} title={posts.title} via='asksusi' hashtags={posts.categories.slice(0, 4)} > <TwitterIcon size={32} round={true} /> </TwitterShareButton> <FacebookShareButton url={posts.link}> <FacebookIcon size={32} round={true} /> </FacebookShareButton> We have to send URL and title of the post with the tweet and tags as hashtags. So we have to pass them into the component as above. Above code produces this model of tweets. That’s how “text-to-htm”l and “react-share” works on react. If you would like to contribute to SUSI project or any other FOSSASIA project please fork our repositories on github. Resources: - Html-to-text package: - Social Media Buttons Package: - RSS2JSON API Documentation:
https://blog.fossasia.org/displaying-blog-posts-on-susi-ai-web-chats-blog-page-and-share-posts/
CC-MAIN-2021-43
refinedweb
690
58.89
Change data capture (CDC) is a process to capture changes made to data in the database and stream those changes to external processes, applications, or other databases. Prerequisites - The database and its tables must be created using YugabyteDB version 2.13 or later. - CDC supports YSQL tables only. (See Limitations.) Be aware of the following: - You can't stream data out of system tables. - You can't create a stream on a table created after you created the stream. For example, if you create a DB stream on the database and then create a new table in that database, you won't be able to stream data out of the new table. You need to create a new DB Stream ID and use it to stream data. NoteThe current YugabyteDB CDC implementation supports only Debezium and Kafka. WarningYugabyteDB doesn't yet support the DROP TABLE and TRUNCATE TABLE commands. The behavior of these commands while streaming data from CDC is undefined. If you need to drop or truncate a table, delete the stream ID using yb-admin. See also Limitations. Process architecture The core primitive of CDC is the stream. Streams can be enabled and disabled on databases. Every change to a watched database table is emitted as a record in a configurable format to a configurable sink. Streams scale to any YugabyteDB cluster independent of its size and are designed to impact production traffic as little as possible. CDC streams Streams are the YugabyteDB endpoints for fetching database changes by applications, processes, and systems. Streams can be enabled or disabled (on a per namespace basis). Every change to a database table (for which the data is being streamed) is emitted as a record to the stream, which is then propagated further for consumption by applications, in this case to Debezium, and then ultimately to Kafka. DB stream To facilitate the streaming of data, you have to create a DB Stream. This stream is created at the database level, and can access the data in all of that database's tables. Debezium connector for YugabyteDB The Debezium connector for YugabyteDB pulls data from YugabyteDB and publishes it to Kafka. The following illustration explains the pipeline: See Debezium connector for YugabyteDB to learn more, and Running Debezium with YugabyteDB to get started with the Debezium connector for YugabyteDB. Java CDC console client The Java console client for CDC is strictly for testing purposes only. It can help in building an understanding what change records are emitted by YugabyteDB. TServer configuration There are several GFlags you can use to fine-tune YugabyteDB's CDC behavior. These flags are documented in the Change data capture flags section of the yb-tserver reference page. Consistency semantics Per-tablet ordered delivery guarantee All changes for a row (or rows in the same tablet) are received in the order in which they happened. However, due to the distributed nature of the problem, there is no guarantee of the order across tablets. At least once delivery Updates for rows are streamed at least once. This can happen in the case of Kafka Connect Node failure. If the Kafka Connect Node pushes the records to Kafka and crashes before committing the offset, on restart, it will again get the same set of records. No gaps in change stream Note that after you have received a change for a row for some timestamp t, you won't receive a previously unseen change for that row at a lower timestamp. Receiving any change implies that you have received all older changes for that row. Performance impact The change records for CDC are read from the WAL. The CDC module maintains checkpoints internally for each of the stream-ids and garbage collects the WAL entries if those have been streamed to CDC clients. In case CDC is lagging or away for some time, the disk usage may grow and may cause YugabyteDB cluster instability. To avoid this scenario, if a stream is inactive for a configured amount of time, the WAL is garbage-collected. This is configurable using a Gflag. yb-admin commands The commands used to manipulate CDC DB streams are described in the yb-admin reference documentation. Snapshot support Initially, if you create a stream for a particular table that already contains some records, the stream takes a snapshot of the table, and streams all the data that resides in the table. After the snapshot of the whole table is completed, YugabyteDB starts streaming the changes that happen in the table. The snapshot feature uses the cdc_snapshot_batch_size GFlag. This flag's default value is 250 records included per batch in response to an internal call to get the snapshot. If the table contains a very large amount of data, you may need to increase this value to reduce the amount of time it takes to stream the complete snapshot. You can also choose not to take a snapshot by modifying the Debezium configuration. Limitations - YCQL tables aren't currently supported. Issue 11320. - User Defined Types (UDT) are not supported. Issue 12744. NoteIn the current implementation, information related to the columns for the UDTs will not be there in the messages published on Kafka topic. - Enabling CDC on tables created using previous versions of YugabyteDB is not supported, even after YugabyteDB is upgraded to version 2.13 or higher. - DROP and TRUNCATE commands aren't supported. If a user tries to issue these commands on a table while a stream ID is there for the table, the server might crash, the behaviour is unstable. Issues for TRUNCATE 10010 and DROP 10069. - If a stream ID is created, and after that a new table is created, the existing stream ID is not able to stream data from the newly created table. The user needs to create a new stream ID. Issue 10921. - CDC is not supported on a target table for xCluster replication 11829. - Support for DDL commands is incomplete. - A single stream can only be used to stream data from one namespace only. - There should be a primary key on the table you want to stream the changes from. In addition, CDC support for the following features will be added in upcoming releases: - Support for tablet splitting is tracked in issue 10935. - Support for point-in-time recovery (PITR) is tracked in issue 10938. - Support for transaction savepoints is tracked in issue 10936. - Support for enabling CDC on Read Replicas is tracked in issue 11116. - Support for enabling CDC on Colocated Tables is tracked in issue 11830.
https://docs.yugabyte.com/preview/explore/change-data-capture/
CC-MAIN-2022-40
refinedweb
1,087
63.59
Microbenchmarking with JMH: Measure, Don't Guess! Microbenchmarking with JMH: Measure, Don't Guess!’m sure you’ve all heard that assigning a variable to null helps the Garbage Collector, or not declaring a method final improves in lining…. But what you also know is that JVMs have evolved drastically and what was true yesterday may not be true today. So, how do we know that our code performs? Well, we don’t, because we are not supposed to guess what the JVM does… we just measure! Measure, don’t guess! As my friend Kirk Pepperdine once said, “Measure, don’t guess“. We’ve all faced performance problems in our projects and were asked to tune random bits in our source code… hoping that performance will get improved. Instead, we should setup a stable performance environment (operating system, JVM, application server, database…), measure continuously, set some performance goals… then, take action when our goals are not achieved. Continuous delivery, continuous testing… is one thing, but continuous measuring is another step. Anyway, performance is a dark art and it’s not the goal of this blog post. No, I just want to focus on microbenchmarking and show you how to use JMH on a real use case: logging. Microbenchmarking Logging I’m sure that, like me, you’ve spent the last decades going from one logging framework to another one and you’ve seen different ways to write debug logs: logger.debug("Concatenating strings " + x + y + z); logger.debug("Using variable arguments {} {} {}", x, y, z); if (logger.isDebugEnabled()) logger.debug("Using the if debug enabled {} {} {}", x, y, z); These are all debug messages and we usually don’t care because in production we run with an INFO or WARNING level. But debug logs can have an impact on our performances… even if we are in WARNING level. To prove it, we can use Java Microbenchmarking Harness (JMH) to make a quick microbenchmark and measure the performance of the three logging mechanism: Concatenating strings, Using variable arguments and Using the if debug enabled. Setting up JMH JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM. It’s really easy to setup and, thanks to the Maven archetype, we can quickly get a JMH project skeleton and get going. For that, execute the following Maven command: $ mvn archetype:generate -DinteractiveMode=false -DarchetypeGroupId=org.openjdk.jmh \ -DarchetypeArtifactId=jmh-java-benchmark-archetype -DarchetypeVersion=1.4.1 \ -DgroupId=org.agoncal.sample.jmh -DartifactId=logging -Dversion=1.0 This Maven archetype creates the following project structure: - A pom.xml file with the JMH dependencies and a customized maven-shade-plugin to get a uber-jar - An empty MyBenchmark class with a @Benchmark annotation At this point we haven’t done anything yet, but the microbenchmark project is already up and running. Packaging the code with Maven will create a uber-jar called benchmarks.jar. $ mvn clean install $ java -jar target/benchmarks.jar When we execute the uber-jar, we see a funny output in the console: JMH goes into a loop, warms up the JVM, executes the code inside the method annotated @Benhmark (empty method for now) and gives us the number of operations per seconds # Run progress: 30,00% complete, ETA 00:04:41 # Fork: 4 of 10 # Warmup Iteration 1: 2207650172,188 ops/s # Warmup Iteration 2: 2171077515,143 ops/s # Warmup Iteration 3: 2147266359,269 ops/s # Warmup Iteration 4: 2193541731,837 ops/s # Warmup Iteration 5: 2195724915,070 ops/s # Warmup Iteration 6: 2191867717,675 ops/s # Warmup Iteration 7: 2143952349,129 ops/s # Warmup Iteration 8: 2187759638,895 ops/s # Warmup Iteration 9: 2171283214,772 ops/s # Warmup Iteration 10: 2194607294,634 ops/s # Warmup Iteration 11: 2195047447,488 ops/s # Warmup Iteration 12: 2191714465,557 ops/s # Warmup Iteration 13: 2229074852,390 ops/s # Warmup Iteration 14: 2221881356,361 ops/s # Warmup Iteration 15: 2240789717,480 ops/s # Warmup Iteration 16: 2236822727,970 ops/s # Warmup Iteration 17: 2228958137,977 ops/s # Warmup Iteration 18: 2242267603,165 ops/s # Warmup Iteration 19: 2216594798,060 ops/s # Warmup Iteration 20: 2243117972,224 ops/s Iteration 1: 2201097704,736 ops/s Iteration 2: 2224068972,437 ops/s Iteration 3: 2243832903,895 ops/s Iteration 4: 2246595941,792 ops/s Iteration 5: 2241703372,299 ops/s Iteration 6: 2243852186,017 ops/s Iteration 7: 2221541382,551 ops/s Iteration 8: 2196835756,509 ops/s Iteration 9: 2205740069,844 ops/s Iteration 10: 2207837588,402 ops/s Iteration 11: 2192906907,559 ops/s Iteration 12: 2239234959,368 ops/s Iteration 13: 2198998566,646 ops/s Iteration 14: 2201966804,597 ops/s Iteration 15: 2215531292,317 ops/s Iteration 16: 2155095714,297 ops/s Iteration 17: 2146037784,423 ops/s Iteration 18: 2139622262,798 ops/s Iteration 19: 2213499245,208 ops/s Iteration 20: 2191108429,343 ops/s Adding SLF4j to the benchmark Remember that the use case is to microbench logging. In the created project I use SFL4J with Logback. So I need to add those dependencies to the pom.xml : <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.7</version> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>1.0.11</version> </dependency> Then I add a logback.xml file which outputs only INFO logs (so I’m sure that the debug level traces are not logged) : <configuration> <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%highlight(%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n)</pattern> </encoder> </appender> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder><pattern>%msg%n</pattern></encoder> </appender> <root level="INFO"> <appender-ref </root> </configuration> The good thing with the maven-shade-plugin is that when I package the application, all the dependencies, configuration files and so on, will all get flatten into the uber-jar target/benchmarks.jar. Using String Concatenation in the Logs Let’s do the first micro benchmark: using logs with string concatenation. The idea here is to take the MyBenchmark class and add the needed code into the method annotated with @Benchmark, and let JMH do the rest. So, we add a Logger, create a few string (x, y, z), do a loop, and log a debug message with string concatenation. This will look like this: import org.openjdk.jmh.annotations.Benchmark; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class MyBenchmark { private static final Logger logger = LoggerFactory.getLogger(MyBenchmark.class); @Benchmark public void testConcatenatingStrings() { String x = "", y = "", z = ""; for (int i = 0; i < 100; i++) { x += i; y += i; z += i; logger.debug("Concatenating strings " + x + y + z); } } } To execute this micro benchmark, we do as usual, and we will see the iteration outputs : $ mvn clean install $ java -jar target/benchmarks.jar Using Variable Arguments in the Logs The second micro-benchmark is to use variable arguments in the logs instead of string concatenation. Just change the code, repackage, and execute it. @Benchmark public void testVariableArguments() { String x = "", y = "", z = ""; for (int i = 0; i < 100; i++) { x += i; y += i; z += i; logger.debug("Variable arguments {} {} {}", x, y, z); } } Using a If Statement in the Logs Last but not least, the good old isDebugEnabled() in the logs that is “supposed to optimize things”. @Benchmark public void testIfDebugEnabled() { String x = "", y = "", z = ""; for (int i = 0; i < 100; i++) { x += i; y += i; z += i; if (logger.isDebugEnabled()) logger.debug("If debug enabled {} {} {}", x, y, z); } } Result of the Microbenchmarks After running the three micro-benhmarks we get what we had expected (remember, don’t guess, measure). The more operation per second, the better. So if we look at the last line of the following table, we notice that the best performance is with the isDebugEnabled and the worse is String concatenation. Then, as we can see, variable argument without isDebugEnabled is not bad either… plus we gain in visibility (less boiler plate code). So I’ll go with variable arguments ! Conclusion In the last decades JVMs have evolved drastically. Design pattern that would optimize our code ten years ago are not accurate anymore. The only way to be sure that one piece of code is better that another piece of code, is to measure it. JMH is the perfect tool to easily and quickly micro benchmark pieces of code, or like in this post, an external framework (logging, utility classes, date manipulation, apache commons….). Of course, reasoning about a small section of code is only one step because we usually need to analyze the overall application performance. Thanks to JMH this first step is easy to make. And remember to check the JMH examples, it’s full of interesting ideas. References Discovering, responding to, and resolving incidents is a complex endeavor. Read this narrative to learn how you can do it quickly and effectively by connecting AppDynamics, Moogsoft and xMatters to create a monitoring toolchain. }}
https://dzone.com/articles/microbenchmarking-jmh-measure
CC-MAIN-2018-34
refinedweb
1,493
52.39
NAME ng_tty - netgraph node type that is also a line discipline SYNOPSIS #include <sys/types.h> #include <sys/ttycom performed in either direction. While the line discipline is installed on a tty, the normal read and write operations are unavailable, returning EIO. The node supports an optional “hot character”. If set to non-zero,... SHUTDOWN This node shuts down when the corresponding device is closed (or the line discipline is uninstalled on the device). The NGM_SHUTDOWN control message is not valid, and always returns the error EOPNOTSUPP. SEE ALSO ioctl(2), netgraph(4), ng_async(4), tty(4), ngctl(8) HISTORY The ng_tty node type was implemented in FreeBSD 4.0. AUTHORS Archie Cobbs 〈archie.
http://manpages.ubuntu.com/manpages/intrepid/man4/ng_tty.4freebsd.html
CC-MAIN-2015-48
refinedweb
114
59.7
Lesson 8 - Arena with a mage in Python (inheritance and polymorphism) In the previous lesson, Inheritance and polymorphism in Python, we went over inheritance and polymorphism in Python. In today's tutorial, we're going to make a sample program using these concepts, as promised. We'll get back to our arena and inherit a mage from the Warrior class. These next couple of lessons will be among the most challenging. Which is why you should be working on OOP on your own, solving our exercises and coming up with your own applications. Try putting all that you've learned to practice. Make programs that you would find useful, so it will be more engaging and fun Before we even get to coding, we'll think about what 10 every round and the mage would only be able to perform regular attacks. Once the mana is full, he'll be able to use his magic attack again. The mana will be displayed using a graphical indicator just like the health bar. Let's create a Mage class and inherit it from the Warrior class. It'll look something like this (don't forget to add class Mage(Warrior): We don't have access to all the warrior variables in the mage class because we still have the attributes in the Warrior class set as private. We'll have to change the private attributes to protected in the Warrior class. We'll have to update this in all methods using these attributes. All we really need now is the die and the name attributes. Either way, we'll set all of the warrior's attributes as protected because they might come in handy for future descendants. On second thought, it wouldn't be wise to set the message attribute as protected since it is not related to the warrior directly. With all of that in mind, the class would look something like this: class Mage(Warrior): def __init__(self, name, health, damage, defense, die): self._name = name self._health = health self._max_health = health self._damage = damage self._defense = defense self._die = die self.__message = "" Moving on to the constructor. Descendant constructor We can't use the Warrior's constructor since the Mage has 2 extra parameters (mana and the magic damage). We'll define the constructor in the descendant, which will take both parameters needed to create a warrior and the extra ones needed for the mage. The descendant constructor must always call the parent constructor. If you forget to do so, the instance may not be properly initialized. The only time we don't call the ancestor constructor is when there isn't one. Our constructor must have all needed parameters for an ancestor and the new ones that the descendant needs. The ancestor's constructor will be executed before the descendant's. In Python, there is a method known as super() which allows us to call a method on the parent. Meaning, that we can call the ancestor constructor with the given parameters and initialize the mage as well. In Python, calling the ancestor constructor has to be included in the method body. Mage's constructor should look something like this: def __init__(self, name, health, damage, defense, die, mana, magic_damage): super().__init__(name, health, damage, defense, die) self.__mana = mana self.__max_mana = mana self.__magic_damage = magic_damage Now, let's go at the end of the program and change the second warrior (Shadow) to a mage. Like this: gandalf = Mage("Gandalf", 60, 15, 12, die, 30, 45) We will also have to change the line where we put the warrior in the arena. Polymorphism and overriding methods It would be nice if the Arena could work with the mage in the same way as it does with the warrior. We already know that in order to do so we must apply the concept of polymorphism. The arena will call the attack() method, passing an enemy as a parameter. It won't care whether the attack will be performed by a warrior or by a mage, the arena will work with them in the same way. We'll have to override the ancestor's attack() method in the mage class. We'll rewrite its inherited method so the attack will use mana, but the header of the method will remain the same. In a descendant, we can simply override any method. In Python, all methods are "virtual" as of the C++/C# language terms, meaning they can be overrided. Talking about methods, we'll certainly need a set_message() method which is private now. We have to make it protected: def _set_message(self, message): ... When you create a class you should always consider whether it would have descendants and therefore make appropriate protected attributes and methods. I didn't mean to overwhelm you with all of this information when we first made the Warrior class but now that we understand these modifiers, we should use them Now let's go back to the attack() method and override its behavior. It won't be complicated. Depending on the mana value, we'll either perform a normal attack or a magic attack. The mana value will be either increased by 10 each round or in the case where the mage uses a magic attack, it will be reduced to 0: def attack(self, enemy): # Mana isn't full if self.__mana < self.__max_mana: self.__mana = self.__mana + 10 if self.__mana > self.__max_mana: self.__mana = self.__max_mana hit = self._damage + self._die.roll() message = "{0} attacks with a hit worth {1} hp".format(self._name, hit) self._set_message(message) # Magic damage else: hit = self.__magic_damage + self._die.roll() message = "{0} used magic for {1} hp.".format(self._name, hit) self._set_message(message) self.__mana = 0 enemy.defend(hit) The code is clear. Notice how we limit the mana to maxMana since it could be that it exceeds the maxim value when increasing it by 10 each round. When you think about it, the normal attack is already implemented in the ancestor's attack() method. Certainly, it'd be better to just call the ancestor's attack() instead of copying its behavior. We'll use super() to do just that: def attack(self, enemy): # Mana isn't full if self.__mana < self.__max_mana: self.__mana = self.__mana + 10 if self.__mana > self.__max_mana: self.__mana = self.__max_mana super().attack(enemy) # Magic damage else: hit = self.__magic_damage + self._die.roll() message = "{0} used magic for {1} hp.".format(self._name, hit) self._set_message(message) self.__mana = 0 enemy.defend(hit) There are lots of time-saving techniques we can set up using inheritance. In this case, all it did is save us a few lines, but in a larger project, it would make a huge difference. Other non-private methods will be inherited automatically. The application now works as expected. Console application -------------- Arena -------------- Warriors health: Zalgoren [########### ] Gandalf [############## ] Gandalf attacks with a hit worth 23 hp Zalgoren defended against the attack but still lost 9 hp For completeness' sake, let's make the arena show us the mage's current mana state using a mana bar. We'll add a public method and name it mana_bar(). It will return a string with a graphical mana indicator. We'll modify the health_bar() method in the Warrior class to avoid writing the same graphical bar logic twice. Let me remind us how the original method looks: def health_bar(self): total = 20 count = int(self._health / self._max_health * total) if (count == 0 and self.is_alive()): count = 1 return "[{0}{1}]".format("#"*count, " "*(total-count)) The health_bar() method doesn't really depend on a character's health. All it needs is a current value and a maximum value. Let's rename the method to graphical_bar() and give it two parameters: current value and maximum value. We'll rename the health and max_health variables to current and maximum. def graphical_bar(self, current, maximum): total = 20 count = int(current / maximum * total) if (count == 0 and self.is_alive()): count = 1 return "[{0}{1}]".format("#"*count, " "*(total - count)) Let's implement the health_bar() method in the Warrior class again. It'll be a one-liner now. All we have to do now is call the graphical_bar() method and fill the parameters accordingly: def health_bar(self) return self.graphical_bar(self._health, self._max_health); Of course, I could add the graphical_bar() method in the Warrior before, but I wanted to show you how to deal with cases where we would need to accomplish similar functionality multiple times. You'll need to put this kind of parametrization in practice since you never know exactly what you'll need from your program at any given moment during the design stage. .[hint Now, we can easily draw graphical bars as needed. Let's move to the Mage class and implement the mana_bar() method: def mana_bar(self): return self.graphical_bar(self.__mana, self.__max_mana) Simple, isn't it? Our mage is done now, all that's left to do is tell the arena to show mana in case the warrior is a mage. Let's move to the Arena class. Recognizing the object type We'll add a separate printing method for warriors, print_warrior(), to keep things nice and neat. Its parameter will be a Warrior instance: def __print_warrior(self, warrior): print(warrior) print("Health: {0}".format(warrior.health_bar())) Now, let's tell it to show the mana bar if the warrior is a mage. We'll use the isinstance operator to do just that: def __print_warrior(self, warrior): print(warrior) print("Health: {0}".format(warrior.health_bar())) if isinstance(warrior, Mage): print("Mana: {0}".format(warrior.mana_bar())) This is it, print_warrior() will be called in the render() method, which now looks like this: def __render(self): self.__clear_screen() print("-------------- Arena -------------- \n") print("Warriors: \n") self.__print_warrior(self.__warrior_1) self.__print_warrior(self.__warrior_2) print("") Done Console application -------------- Arena -------------- Warriors: Gandalf Health: [####### ] Mana: [# ] Zalgoren Health: [#### ] Gandalf used magic and took 48 hp off Zalgoren defended against the attack but still lost 33 hp You can download the code below. If there is something you don't quite understand, try reading the lesson several times, this content is extremely important for you to know. In the next lesson, Static class members in Python, we'll explain the concept of static class members. Did you have a problem with anything? Download the sample application below and compare it with your project, you will find the error easily. DownloadBy downloading the following file, you agree to the license terms Downloaded 6x (7.84 kB) Application includes source codes in language Python No one has commented yet - be the first!
https://www.ictdemy.com/python/oop/arena-with-a-mage-in-python-inheritance-and-polymorphism
CC-MAIN-2021-25
refinedweb
1,774
65.83
Getting Started with REST API Development A web service is a communication mechanism defined between various computer systems. Without web services, custom peer-to-peer communication becomes cumbersome and platform-specific. The web needs to understand and interpret a hundred different things in the form of protocols. If computer systems can align with the protocols that the web can understand easily, it is a great help. A web service is a software system designed to support interoperable machine-to-machine interaction over a network, as defined by the World Wide Web Consortium (W3C) at. Now, in simple words, a web service is a road between two endpoints where messages are transferred smoothly. The message transfer is usually one way. Two individual programmable entities can also communicate with each other through their own APIs. Two people communicate through language, two applications communicate through an Application Programming Interface (API). The reader might be wondering; what is the importance of the API in the current digital world? The rise of the Internet of Things (IoT) made API usage heavier than before. Awareness of APIs is growing day by day, and there are hundreds of APIs that are being developed and documented all over the world every day. Notable major businesses are seeing the future in the API as a Service (AaS). A bright example in recent times is Amazon Web Services (AWS). AWS is a huge success in the cloud world. Developers write their own applications using the Representational State Transfer (REST) API provided by AWS and access it via Command-Line Interface (CLI). A few more hidden use cases are from travel sites such as and, which fetch real-time prices by calling the APIs of third-party gateways and data vendors. Web service usage is often charged these days by the amount of data requests. In this chapter, we will focus on the following topics: - The different web services available - REST architecture in detail - The rise of Single-page applications (SPAs) with REST - Setting up a Go project and running a development server - Building our first service for finding the fastest mirror site from a list of Debian servers hosted worldwide - The Open API specification and Swagger documentation Technical requirements The following are the pieces of software that should be pre-installed for running the code samples in this chapter: - OS: Linux (Ubuntu 18.04)/ Windows 10/Mac OS X >=10.13 - Software: Docker >= 18 (Docker Desktop for Windows and Mac OS X) - The latest version of the Go compiler == 1.13.5 We use Docker in this book to run a few sandbox environments. Docker is a virtualization platform that imitates an OS in a sandbox. Using it, we can cleanly run an application or service without affecting the host system. You can find the code used in this chapter on the book's GitHub repository at. Types of web services There are many types of web services that have evolved over time. Some of the more prominent ones are as follows: - Simple Object Access Protocol (SOAP) - Universal Description, Discovery, and Integration (UDDI) - Web Services Description Language (WSDL) - Representational State Transfer (REST) Out of these, SOAP became popular in the early 2000s, when XML riding on a high wave. The XML data format is used by various distributed systems to communicate with each other. A SOAP request usually consists of these three basic components: - The envelope - The header - The body Just to perform an HTTP request and response cycle, we have to attach a lot of additional data in SOAP. A sample SOAP request to a fictional book server,, looks like this: POST /Books HTTP/1.1 Host: Content-Type: application/soap+xml; charset=utf-8 Content-Length: 299 SOAPAction: "" <?xml version="1.0"?> <soap:Envelope xmlns: <soap:Header> </soap:Header> <soap:Body> <m:GetBook> <m:BookName>Alice in the wonderland</m:BookName> </m:GetBook> </soap:Body> </soap:Envelope> This is a standard example of a SOAP request for getting book data. If we observe carefully, it is in XML format, with special tags specifying the envelope and body. Since XML works by defining a lot of namespaces, the response gets bulky. The main drawback of SOAP is that it is too complex for implementing web services and is a heavyweight framework. A SOAP HTTP request can get very bulky and can cause bandwidth wastage. The experts looked for a simple alternative, and in came REST. In the next section, we will briefly discuss REST. The REST API The name Representational state transfer (REST) was coined by Roy Fielding from the University of California. It is a very simplified and lightweight web service compared to SOAP. Performance, scalability, simplicity, portability, and flexibility architecture. When you are using a mobile app on your phone, your phone might be talking to many cloud services to retrieve, update, or delete your data. REST services have a huge impact on our daily lives. REST is a stateless, cacheable, and simple architecture that is not a protocol, but a pattern. This pattern allows different endpoints to communicate with each other over HTTP. Characteristics of REST services return the response. Once a request is served, the server doesn't remember whether the request arrived after a while. So, the operation will be a stateless one. - Cacheable: In order to scale an application well, we need to cache certain responses. REST services can be cached for better throughput. - Representation of resources: The REST API provides the uniform interface to talk to. It uses a Uniform Resource Identifier (URI) to map the resources (data). It also has the advantage of requesting a specific data - Implementation freedom: REST is just a mechanism to define your web services. It is an architectural style that can be implemented in multiple ways. Because of this flexibility, you can create REST services in the way you wish to. As long as it follows the principles of REST, you have the freedom to choose the platform or technology for your server. We have seen the types of web services and understood what is REST API. We also looked at the characteristics that make REST services unique. In the next section, we will take a look at REST verbs and status code and cover a few examples of path parameters. REST verbs and status codes REST verbs specify an action to be performed on a specific resource or a collection of resources. When a request is made by the client, it should send the following information in the HTTP request: - The REST verb - Header information - The body (optional) As we mentioned previously, REST uses the URI to decode the resource to be handled. There are quite a few REST verbs available, but six of them are used particularly frequently. They are presented, along with their expected actions, in the following table: The status of these operations can be known from HTTP status codes. Whenever a client initiates a REST operation, since REST is stateless, the client should know a way to find out whether the operation was successful or not. For that reason, HTTP responses have a status code. REST defines a few standard status code types for a given operation. This means a REST API should strictly follow the following rules to achieve stable results in client-server communication. There are three important ranges available based on the types of error. See the following table for error ranges: The detail of what each status code does is very precisely defined, and the overall count of codes increases every year. We mention the important ones in the upcoming section. All requests to REST services have the following format. It consists of the host and the API endpoint. The API endpoint is the URL path that is predefined by the server. It can also include optional query parameters. Let's. GET A GET method fetches the given resource from the server. To specify a resource, GET uses a few types of URI queries: - Query parameters - Path-based parameters In case you didn't know, most back a copy of the details in response (200 OK), or else it sends a response saying, "resource not found" (404). Using GET, you can also query a list of resources, instead of a single one as in the preceding example. PayPal's API for getting billing transactions related to an agreement can be fetched with /v1/payments/billing-agreements/transactions. This line fetches all transactions that occurred on that billing agreement. In both instances,, imagine a sample fictitious API. Let's assume this API is created for fetching, creating, and updating the details of the book. A query parameter based GET request will be in this format: /v1/books/?category=fiction&publish_date=2017 - The preceding URI has a couple of query parameters. The URI is requesting a book from the books. POST, PUT, and PATCH The POST method is used to create a resource on the server. In the previous books API, this operation creates a new book with the given details. A successful POST operation returns a 2xx status code. The POST request can update multiple resources: /v1/books. The POST request can have a JSON body like the following: {'s use the PUT request. The PUT method is similar to POST. It is used to replace the resource that already exists. The main difference is that PUT is an idempotent operation. A POST call creates two instances with the same data. But PUT updates a single resource that already exists: /v1/books/1256 PUT does this using a body containing JSON syntax, as follows: {"name" : "Lord of the rings", "year": 1955, "author" : "J. R. R. Tolkien"} 1256 is the ID of the book. It updates the preceding book's update the book 1256 with a new column called ISBN: /v1/books/1256 Let's use put the following JSON in the body: { a body. It just needs an ID of the resource to be deleted. Once a resource gets deleted, subsequent GET requests return a 404 not found status. The OPTIONS API method is the most underrated in API development. Given the resource, this method tries to find all possible methods (GET, POST, and so on) defined on the server. It is like looking at the menu card at a restaurant and then ordering an item that following is the diagram depicting the CORS process: Let's examine the steps followed in the preceding CORS diagram: - the access control to *. In the next section, we see why the REST API plays such a major role in the next generation of web services. SPAs made it possible to leverage APIs for all purposes, including the UI, clients, and so on. The rise of the REST API with SPAs Let's try to understand why SPAs are already standards of today's web. Instead of building a UI in the traditional way (that is, requesting rendered web pages), SPA designs allow developers to write code in a totally different way. There are many Model-View-Controller (MVC) frameworks, including Angular, React, Vue.js, and so on, for developing web UIs rapidly, but the essence of each of them is pretty simple. All MVC frameworks help us to implement one design pattern. That design pattern is no requesting of web pages, only REST API usage. Modern frontend web development has advanced a lot in the last decade (2010-2020). In order to exploit the features of the MVC architecture, we have to consider the frontend as a separate entity that talks to the backend only using the REST API (preferably using JSON data). Old and new methods of data flow in SPA In the traditional flow of serving requests, the order looks like this: - The client requests a web page from the server - The server authenticates and returns a rendered response - Every rendered response is in HTML with embedded data With SPAs, however, the flow is quite different: - Request the HTML templates with the browser in one single go - Then, query the JSON REST API to fill a model (the data object) - Adjust the UI according to the data in the model (in JSON) - From the browser, push back the changes to the server via an API call In this way, communication happens only in the form of the REST API. The client takes care of logically representing the data. This causes systems to move from Response-Oriented Architecture (ROA) to Service-Oriented Architecture (SOA). Take a look at the following diagram: SPAs reduce bandwidth usage and improve site performance. SPAs are a major boost for API-centric server development because now a server can satisfy requirements for both browser and API clients. Why use Go for REST API development? over ten years since its first appearance. It matured along the way with the developer community jumping in and creating huge-scale systems in it. We could choose Python or JavaScript (Node.js) for our REST API development, but the main advantage of Go lies in its speed and compile-time error detection. Go has been proven to be faster than dynamic programming languages in terms of computational performance according to various benchmarks. These are the three reasons why a company should write their next API in Go: - To scale your API for a wider audience - To enable your developers to build robust systems - To start simple and go big As we progress through this book, we learn how to build efficient REST services in Go. Setting up the project and running the development server This is a building series book. It assumes you already know the basics of Go. If not, no worries. You can get a jump-start and learn the basics quickly from Go's official site at. Writing a simple standalone program with Go is straightforward. But for big projects, we have to set up a clean project layout. For that reason, as a Go developer, you should know how Go projects are laid out and the best practices to keep your code clean. Make sure you have done the following things before proceeding: - Install the Go compiler on your machine - Set the GOROOT and GOPATH environment variables There are many online references from which you can get to know the preceding details. Depending on your machine type (Windows, Linux, or Mac OS X ), set up a working Go compiler. We will see more details about GOPATH in the following section. Demystifying GOPATH GOPATH is nothing but the current appointed workspace on your machine. It is an environment variable that tells the Go compiler where your source code, binaries, and packages are placed. The programmers coming from a Python background may be familiar with the Virtualenv tool for creating multiple projects (with different Python interpreter versions) at the same time. But at a given time, you can activate the environment for the project that you wish to work on and develop your as follows: mkdir /home/user/workspace export GOPATH=/home/user/workspace Now, we install external packages like this: go get -u -v github.com/gorilla/mux Go copies a project called mux from GitHub into the currently activated project workspace. A typical Go project, hello, should reside in the src directory in GOPATH, as mentioned on the official Go website: Let's understand this structure before digging further: - bin: Stores the binary of our project; a shippable binary that can be run directly - pkg: Contains the package objects; a compiled program that supplies package methods - src: The place for your project source code, tests, and user packages In Go, all the packages imported into the main program have an identical structure, github.com/user/project. But who creates all these directories? Should the developer do that? Yes. It is the developer's responsibility to create directories for their project. It means they only create the src/github.com/user/hello directory. When a developer runs the install command, the bin and package directories are created if they did not exist before. .bin consists of the binary of our project source code and .pkg consists of all internal and external packages we use in our Go programs: go install github.com/user/project Let's build a small service to brush up on our Go language skills. Operating systems such as Debian and Ubuntu host their release images on multiple FTP servers. These are called mirrors. Mirrors are helpful in serving an OS image from the closest point to a client. Let's build a service that finds the fastest mirror from a list of mirrors. Building our first service – finding the fastest mirror site from a list With the concepts we have built up to now, let's write our first REST service. Many mirror sites exist for hosting operating system images including Ubuntu and Debian. The mirror sites here are nothing but websites on which OS images are hosted to be geographically close to the downloading machines. Let's look at how we can create our first service: Problem: Build a REST service that returns the information of the fastest mirror to download a given OS from a huge list of mirrors. Let's take the Debian OS mirror list for this service. You can find the list at. We use that list as input when implementing our service. Design: Our REST API should return the URL of the fastest mirror. The block of the API design document may look like this: Implementation: Now we are going to implement the preceding API step by step: - As we previously discussed, you should set the GOPATH variable first. Let's assume the GOPATH variable is /home/user/workspace. Create a directory called mirrorFinder in the following path. git-user should be replaced with your GitHub username under which this project resides: mkdir -p $GOPATH/src/github.com/git-user/chapter1/mirrorFinder - Our project is ready. We don't have any data store configured yet. Create an empty file called main.go: touch $GOPATH/src/github.com/git-user/chapter1/mirrorFinder/main.go - Our main logic for the API server goes into this file. For now, we can create a data file that works as a data service for our main program. Create one more directory for packaging the mirror list data: mkdir $GOPATH/src/github.com/git-user/chapter1/mirrors - Now, create an empty file called data.go in the mirrors directory. The src directory structure so far looks like this: github.com \ -- git-user \ -- chapter1 -- mirrorFinder \ -- main.go -- mirrors \ -- data.go - Let's start adding code to the files. Create an input data file called data.go for our API to use: package mirrors // MirrorList is list of Debian mirror sites var MirrorList = [...]string{ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", } We create a map of strings called MirrorList. This map holds information on the URL to reach the mirror site. We are going to import this information into our main program to serve the request from the client. - Open main.go and add the following code: import ( "encoding/json" "fmt" "log" "net/http" "time" "github.com/git-user/chapter1/mirrors" ) type response struct { FastestURL string `json:"fastest_url"` Latency time.Duration `json:"latency"` } func main() { http.HandleFunc("/fastest-mirror", func(w http.ResponseWriter, r *http.Request) { response := findFastest(mirrors.MirrorList) respJSON, _ := json.Marshal(response) w.Header().Set("Content-Type", "application/json") w.Write(respJSON) }) port := ":8000" server := &http.Server{ Addr: port, ReadTimeout: 10 * time.Second, WriteTimeout: 10 * time.Second, MaxHeaderBytes: 1 << 20, } fmt.Printf("Starting server on port %sn", port) log.Fatal(server.ListenAndServe()) } We created the main function that runs an HTTP server. Go provides the net/http package for that purpose. The response of our API is a struct with two fields: - fastest_url: The fastest mirror site - latency: The time it takes to download the README from the Debian OS repository - We will code a function called findFastest to make requests to all the mirrors and calculate the fastest of all. To do this, instead of making sequential API calls to each and every URL one after the other, we use Go routines to parallelly request the URLs and once a goroutine returns, we stop there and return that data back.: func findFastest(urls []string) response { urlChan := make(chan string) latencyChan := make(chan time.Duration) for _, url := range urls { mirrorURL := url go func() { start := time.Now() _, err := http.Get(mirrorURL + "/README") latency := time.Now().Sub(start) / time.Millisecond if err == nil { urlChan <- mirrorURL latencyChan <- latency } }() } return response{<-urlChan, <-latencyChan} } The findFastest function is taking a list of URLs and returning the response struct. The function creates a goroutine per mirror site URL. It also creates two channels, urlChan and latencyChan, which are passed to the goroutines. In the goroutines, we calculate the latency (time taken for the request). The smart logic here is, whenever a goroutine receives a response, it writes data into two channels with the URL and latency information respectively. Upon receiving data, the two channels make the response struct and return from the findFastest function. When that function is returned, all goroutines spawned from that are stopped from whatever they are doing. So, we will have the shortest URL in urlChan and the smallest latency in latencyChan. - Now if you add this function to the main file (main.go), our code is complete for the task: - Now, install this project with the Go command, install: go install github.com/git-user/chapter1/mirrorFinder This step does two things: - Compiles the package mirrors and places a copy in the $GOPATH/pkg directory - Places a binary in the $GOPATH/bin - We can run the preceding API server like this: $GOPATH/bin/mirrorFinder The server is up and running on. Now we can make a GET request to the API using a client such as a browser or a curl command. Let's fire a curl command with a proper API GET request. Request one is as follows: curl -i -X GET "" # Valid request The response is as follows: HTTP/1.1 200 OK Content-Type: application/json Date: Wed, 27 Mar 2019 23:13:42 GMT Content-Length: 64 {"fastest_url":"","latency":230} Our fastest-mirror-finding API is working great. The right status code is being returned. The output may change with each API call, but it fetches the lowest-latency link at any given moment. This example also shows where goroutines and channels shine. In the next section, we'll look at an API specification called Open API. An API specification is for documenting the REST API. To visualize the specification, we will use the Swagger UI tool. Open API and Swagger Because APIs are very common, the Open API Specification is a community-driven open specification within the OpenAPI Initiative, a Linux Foundation Collaborative Project. The OpenAPI Specification (OAS), formerly called the Swagger Specification, is an API description format for REST APIs. An Open API file allows you to describe your entire API, including the following: - Available endpoints - Endpoint operations (GET, PUT, DELETE, and so on) - Parameter input and output for each operation - Authentication methods - Contact information, license, terms of use, and other information. Open API has many versions and is rapidly developing. The current stable version is 3.0. There are two formats, JSON and YAML, that are supported by OAS. Swagger and Open API both are different. Swagger has many products, including the following: - Swagger UI (for validating Open API files and interactive docs) - Swagger Codegen (for generating server stubs) Whenever we develop a REST API, it is a better practice to create an Open API/Swagger file that captures all the necessary details and descriptions of the API. The file can then be used in Swagger UI to create interactive documentation. Installing Swagger UI Swagger UI can be installed/downloaded on various operating systems, but the best way could be using Docker. A Swagger UI Docker image is available on the Docker Hub. Then we can pass our Open API/Swagger file to the Docker container we run out of the image. Before that, we need to create a JSON file. The Swagger JSON file has few sections: - info - servers - paths Let's create a Swagger file with the preceding sections for the first service we built. Let's name it openapi.json: { "openapi": "3.0.0", "info": { "title": "Mirror Finder Service", "description": "API service for finding the fastest mirror from the list of given mirror sites", "version": "0.1.1" }, "servers": [ { "url": "", "description": "Development server[Staging/Production are different from this]" } ], "paths": { "/fastest-mirror": { "get": { "summary": "Returns a fastest mirror site.", "description": "This call returns data of fastest reachable mirror site", "responses": { "200": { "description": "A JSON object of details", "content": { "application/json": { "schema": { "type": "object", "properties": { "fastest_mirror": { "type": "string" }, "latency": { "type": "integer" } } } } } } } } } } } Please notice how we defined the info, servers, and paths sections. The openapi tag specifies the version of the API document we are using. The info section has a service-related description. The servers section has the URL of the server where the application/server is running. We used localhost:8000 as we are running it locally. The paths section has information about all the API endpoints a service provides. It also has information about the request body, response type, and body structure. Even the possible error codes can be encapsulated into paths. Now let's install Swagger UI and make use of our Swagger file: - To install Swagger UI via Docker, run this command from your shell: docker pull swaggerapi/swagger-ui If you are on Windows 10/Mac OS X , make sure Docker Desktop is running. On Linux, Docker is available all the time once installed. - This pulls the image from the Docker Hub to your local machine. Now we can run a container that can take an openapi.json file and launch Swagger UI. Assuming that you have this file in the chapter1 directory, let's use the following command: docker run --rm -p 80:8080 -e SWAGGER_JSON=/app/openapi.json -v $GOPATH/github.com/git-user/chapter1:/app swaggerapi/swagger-ui This command tells Docker to do the following things: - Run a container using the swaggerapi/swagger-ui image - Mount chapter1 (where openapi.json resides) to the /app directory in the container - Expose host port 80 to container port 8080 - Set the SWAGGER_JSON environment variable to /app/openapi.json When the container starts, launch in the browser. You will see nice documentation for your API: In this way, without any cost, we can create instant documentation of our REST API using Swagger UI and Open API 3.0. From now on, in all chapters, we will try to create Swagger files to document our API design. It is a wise decision to start API development by creating API specifications first and then jumping into implementation. I hope this chapter helped you to brush up on the basics of REST API fundamentals. In the following chapters, we will go deeply into many diverse topics. Summary had something called SOAP, which used XML as its data format. REST operates on JSON as its primary format. REST has verbs and status codes. We saw what these status codes refer to. We designed and implemented a simple service that finds the fastest mirror site to download OS images from all Debian mirror sites hosted worldwide. In this process, we also saw how to package a Go project into a binary. We understood the GOPATH environment variable, which is a workspace definition in Go. We now know that all packages and projects reside on that path. Next, we jumped into the world of OpenAPI specification by introducing Swagger UI and Swagger files. The structure of these files and how to run Swagger UI with the help of Docker were discussed briefly. We also saw why a developer should start API development by writing down the specifications in the form of a Swagger file. In the next chapter, we will dig deeper into URL routing. Starting from the built-in router, we will explore Gorilla Mux, a powerful URL routing library.
https://www.packtpub.com/product/hands-on-restful-web-services-with-go-second-edition/9781838643577
CC-MAIN-2020-40
refinedweb
4,671
63.8
1584949749 HTML forms can send an HTTP request declaratively while submitting forms and awaiting response. However, you have to wait for a full page reload before getting your results, which most times is not the best user experience. Forms can also prepare an HTTP request to send via JavaScript, which makes for a better user experience. This article explores ways to do that using three different frameworks: Vue, React, and Hyperapp. Vue is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. To learn more about Vue, you can visit the official homepage here. First, let’s define our HTML structure. Create a file named vue.html <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css"> <script src=""></script> <script src=""></script> <div class="container" id="app"> <div class="row"> <div class="col-md-4"> <div class="panel"> <h4 class="heading"><strong>Quick </strong> Contact <span></span></h4> <div class="form"> <input type="text" required="" placeholder="Please input your Name" value="" v- <input type="text" required="" placeholder="Please input your mobile No" value="" v- <input type="text" required="" placeholder="Please input your Email" value="" v- <textarea placeholder="Your Message" v-</textarea> <input type="submit" value="submit" name="submit" class="btn btn-primary" @ </div> </div> </div> </div> </div> The code snippet above is a basic HTML declaration in which we: POSTrequests. You would notice that in each of the 5 elements, the first 4 declares a v-model attribute to some certain properties of form. V-model is a way of binding inputs to Vue, such that Vue has the values of these input as they change. Form does not refer to the HTML form, but refers to an object which we have used for the binding in our Vue component. Last, if you look at the button element, you would notice a little directive called @click. This directive binds the click event of the button to Vue, instructing Vue on what to do when the button is clicked. Implementing Vue into the form In the previous section, we have explained the reason you have seen attributes like v-model in your HTML structure and the @click directive. Here, we show what the Vue part that handles the rest looks like. Open a script file in your HTML document and paste in: <script> var app = new Vue({ el: '#app', data: { form: { name: '', mob: '', email: '', mess: '' } }, methods: { submitForm: function(){ axios.post('', this.form) .then(function (response) { console.log(response.data); }) .catch(function (error) { console.log(error); }); } } }) </script> In the code block above, we defined an Object called form, which comprises our data. Next, we defined a method called submitForm which does an Ajax request to [](). We use httpbin because their service allows us to perform free HTTP methods. The /anything route would return the exact data which we had sent to it. See how easy it is to submit a form using JavaScript? all you need do is change the URL to that of your server. Why is my Form is not submitting? Often we note that after writing what looks like the right piece of code, the form does not submit. How do we troubleshoot this? Let me highlight common reasons your Vue form might not submit. apppassed into the Vue object with the elkey does not exist, and the app is not bound to Vue React is a JavaScript library for building user interfaces developed and maintained by Facebook. React makes it painless to create interactive UIs. Design simple views for each state in your application and React will efficiently update and render just the right components when your data changes. First, let’s define our HTML structure. Create a file named react.html and add: <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css"> <script crossorigin</script> <script crossorigin</script> <script src=""></script> <div class="container" id="app"> </div> The code snippet above is a basic HTML declaration in which we: POSTrequests app, which would be our root component Implementing React into the mix We have a basic setup with the required libraries available and a root element which react would be attached to. Let’s go ahead with the react implementation. Open a script tag and input: class Root extends React.Component { constructor(props) { super(props); this.state = { form: { name: "", mob: "", email: "", mess: "" } }; this._onInputChange = this._onInputChange.bind(this); this._onSubmit = this._onSubmit.bind(this); } _onInputChange(name, e) { var form = this.state.form; form[name] = e.target.value; this.setState(form); } _onSubmit() { axios .post("", this.state.form) .then(function(response) { console.log(response.data); }) .catch(function(error) { console.log(error); }); } render() { return ( <div className="row"> <div className="col-md-4"> <div className="panel"> <h4 className="heading"> <strong>Quick </strong> Contact <span /> </h4> <div className="form"> <input type="text" required="" placeholder="Please input your Name" className="form-control" onChange={e => this._onInputChange("name", e)} /> <input type="text" required="" placeholder="Please input your mobile No" className="form-control" onChange={e => this._onInputChange("mob", e)} /> <input type="text" required="" placeholder="Please input your Email" onChange={e => this._onInputChange("email", e)} <textarea placeholder="Your Message" className="form-control" onChange={e => this._onInputChange("mess", e)} /> <input type="submit" value="submit" name="submit" className="btn btn-primary" onClick={this._onSubmit} /> </div> </div> </div> </div> ); } } ReactDOM.render(<Root />, document.getElementById("app")); Let’s take a review of what we have above. Here, in our constructor, we declared an initial state that comprises our form object, we then moved ahead to bind two functions which we will set the state as the input changes and submit the form. In the _onInputChange function, we receive two arguments, which are: We use this two parameters to set the state of the exact input that was changed. In the _onSubmit function, we fire a post request to the []() endpoint, which returns the exact parameters sent. Here, which is what we use as our server. Let us take a critical look at the render function, where the elements are being rendered. Here, we defined 5 elements, which comprise 3 inputs, a text area whose change events are bound to the _onInputChange function, and a button element, whose click event is bound to the _onSubmit method. Finally, we attached the app to an element on our HTML markup. Why is my Form not displaying? I bet you have been getting a blank screen and cannot understand where the error is coming from. Taking a quick look at the render function, you would notice we have jsx syntax in there. Now, here is the catch. Unless you are using babel to compile your app, jsx would most likely fail. This is because jsx isn’t regular javascript syntax, and here, we are using the browser build of React. So how do we solve this? It’s a simple fix. Any JSX block can be converted into a call to React.createElement with three arguments: div, span, ul, e.t.c. class, style, required, e.t.c. React.createElementto get more elements. Replace the render function with this: render() { return ( React.createElement("div", { className: 'row' }, [ React.createElement("div", { className: 'col-md-4' }, [ React.createElement("div", { className: 'panel' }, [ React.createElement("h4", {}, 'Quick Contact'), React.createElement("div", { className: 'form' }, [ React.createElement("input", { type: 'text', placeholder: "Please input your Name", className: "form-control", name: 'name', onChange: (e) => this._onInputChange('name', e) }), React.createElement("input", { type: 'text', placeholder: "Please input your Mobile number", className: "form-control", name: 'mob', onChange: (e) => this._onInputChange('mob', e) }), React.createElement("input", { type: 'text', placeholder: "Please input your Email", className: "form-control", name: 'email', onChange: (e) => this._onInputChange('email', e) }), React.createElement("textarea", { placeholder: "Please your message", className: "form-control", name: 'mess', onChange: (e) => this._onInputChange('mess', e) }), React.createElement("button", { type: 'button', className: "btn btn-primary", onClick: () => this._onSubmit() }, "submit"), ]) ]) ]), ]) ); } Also, update the ReactDom.render call to this: ReactDOM.render( React.createElement(Root, null), document.getElementById('app') ); Why is my form not submitting? Even after performing each step we think is necessary and cross-checking our code, it is possible your form does not still submit, how do we trouble-shoot this? axioslibrary or the library you use for post requests is referenced HyperApp is a JavaScript micro-framework for building web applications. This framework has aggressively minimized the concepts you need to understand to be productive while remaining on par with what other frameworks can do. HyperApp holds firm on the functional programming front when managing your state, but takes a pragmatic approach to allowing for side effects, asynchronous actions, and DOM manipulations. First, let’s define our HTML structure. Create a file named hyper.html and add: <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css"> <script src=""></script> <script src=""></script> <div class="container" id="app"> </div> The code snippet above is a basic HTML declaration in which we: POSTrequests app, which would be our root component Introducing Hyperapp to the app We have a basic setup with the required libraries available and a root element which HyperApp would be attached to. Let’s go ahead with the react implementation. Open a script tag and input: const h = hyperapp.h; const app = hyperapp.app; const state = { form: { name: '', mob: '', email: '', mess: '', } } const actions = { onInputChange: (event) => state => { state.form[event.target.name] = event.target.value; return state; }, submitForm: () => { console.log(state.form) axios.post('', state.form) .then(function (response) { console.log(response.data); }) .catch(function (error) { console.log(error); }); } } const view = (state, actions) => ( h("div", {class: 'row'}, [ h("div", {class: 'col-md-4'}, [ h("div", {class: 'panel'}, [ h("h4", {}, 'Quick Contact'), h("div", {class: 'form'}, [ h("input", {type: 'text', placeholder: "Please input your Name", class:"form-control", name: 'name', oninput: (e)=>actions.onInputChange(e)}), h("input", {type: 'text', placeholder: "Please input your Mobile number", class:"form-control", name: 'mob', oninput: (e)=>actions.onInputChange(e)}), h("input", {type: 'text', placeholder: "Please input your Email", class:"form-control", name: 'email', oninput: (e)=>actions.onInputChange(e)}), h("textarea", {placeholder: "Please your message", class:"form-control", name: 'mess', oninput: (e)=>actions.onInputChange( e)}), h("button", {type: 'button', class:"btn btn-primary", onclick: ()=>actions.submitForm()}, "submit"), ]) ]) ]), ]) ) app(state, actions, view, document.getElementById('app')) Let’s take a review of what we have above. Here, we declared an initial state that comprises our form object, we then moved ahead to declare two actions which we will set the state as the input changes and submit the form. In the onInputChange function, we receive one argument, which is: We use this two parameters to set the state of the exact input that was changed. In the _onSubmit function, we fire a post request to the []() endpoint, which returns the exact parameters sent. Here, which is what we use as our server. Here, we must have seen the similarities between React and Hyperapp. For our purposes, I’ll describe Hyperapp as a lightweight alternative to React. In the render function of the code above, we would notice the exact similarities to React. In fact, the only differences you would notice is the use of class instead of React’s className and onInput in place of onChange. For the same reason we did not use _jsx_in the React form, is the same reason we have not used _jsx_here. If you use the _npm_package and prefer to use _jsx_, please feel free. In this tutorial, we have seen how easy it is to submit forms using 3 different JavaScript frameworks. We have also seen how to solve common issues when our forms are not displaying or not submitting as intended. Do you have any observations about this tutorials or views you want to share? Let us know in the comments. #html #javascript #vue #react 1594963828 List of some useful JavaScript Frameworks and libraries for website, web apps, and mobile apps development, that developers should know about to make selection easier. This article will help. #javascript frameworks for web applications #web applications development companies #progressive javascript framework #javascript frameworks #javascript #frameworks #how to build a simple calculator in javascript #how to create simple calculator using javascript #javascript calculator tutorial #javascript birthday calculator #calculator using javascript and html. #javascript 1630506330 This video is about the product landing page using HTML CSS And JavaScript, in which we created a simple product landing page using HTML CSS and in order to perform those powerful animations we use the GSAP a JavaScript animation library for work done. In this video we broadly cover the concepts of CSS Flex box and CSS Grid system and Some CSS Properties such as nth child selector, ::before & ::after much more. Don't forget to join the channel for more videos like this. Code Savvy 0:00 - Intro 0:10 - Result 0:38 - Project Setup 01:35 – Reset HTML 02:21 – Left Container HTML 03:41 – Wrapper 14:58 – Bottom Shoe Nav 26:23 – Right Container HTML 33:10 – Product Size 35:49 – Reviews 41:11 – GSAP Animations Click to Watch Full tutorial on YOUTUBE #html #css #javascript #web-development #html5 #html #css #tailwindcss #javascript
https://morioh.com/p/ec3104eb314d
CC-MAIN-2022-05
refinedweb
2,196
57.06
Hi, I am developing an android app which basically needs to pull data from REST API every one minute and run in the background even when in background, terminated and on reboot. Currently, i am using the managed workflow and expo-background-fetch and expo-task-manager. All is working fine when app is in the background, i am getting the log statements. However, when i terminate the app, i stop to get any notifications and on startup again i get all of the ones which have missed. What am i missing? I am using the Expo Client on android to test the app. I have given the Expo Client all permissions as per Xiaomi | Don’t kill my app! That is my sample code. import * as BackgroundFetch from 'expo-background-fetch'; import * as TaskManager from "expo-task-manager" import { BACKGROUND_FETCH_PRICE_TASK } from './Constants' TaskManager.defineTask(BACKGROUND_FETCH_PRICE_TASK, async () => { const now = new Date(); console.log(`Got background fetch call at: ${now.toISOString()}`) try { return BackgroundFetch.Result.NewData; } catch (error) { return BackgroundFetch.Result.Failed; } }); export const initBackgroundTasks = async () => { await BackgroundFetch.registerTaskAsync(BACKGROUND_FETCH_PRICE_TASK, { minimumInterval: 60, //1 minute stopOnTerminate: false, startOnBoot: true }); TaskManager.getRegisteredTasksAsync() .then(tasks => { console.log(`Registered tasks: ${JSON.stringify(tasks)}`) }) }
https://forums.expo.io/t/testing-background-fetch-when-app-terminates-xiaomi-miui-12/48842
CC-MAIN-2021-10
refinedweb
198
52.05
Images from used by permission. Photographs by Liam Quin. This talk is for people who have to choose whether to use XQuery, or which implementation(s) to use, and for people who have to help others make those decisions. If you are implementing or marketing an XQuery engine yourself, you might also find it helpful. Shared understanding of what is available and how it works; Shared vocabulary to describe what is available; Shared expectations about what you can achieve. There are at least fifty different XQuery implementations. How do they work? What sets one apart from another? How can you describe them quickly and succinctly? It's not enough to know them all (and not necessary either) if we divide them into categories. In this talk I will name names but you must choose the most appropriate software for your task. Terminology can confuse and obfuscate. What we need is an ontology to simplify and clarify. I do not (yet) have a formal ontology; maybe someone would like to help me make a Topic Map for this? For now, I hope you find what I have already done to be of use. Plus, using jargon can make you sound like a more expensive consultant. Ongoing work will appear on the XQuery Home Page: What will you achieve by using XML Query? What will you achieve by using a particular implementation? Who will see the benefits? And who will do the work? Our vocabulary may help people to talk about this early on in a project, and to have a clearer understanding of what they might achieve. Business Model Access Primary Purpose Storage Strategy Supported Closed Source (many products) Unsupported Closed Source (avoid these people) Open Source with commercial version e.g. with more features (e.g. Saxon, Qizx) Open source with informal or third-party support (e.g. Galax) Research or personal project Demo Versions are not the same as Free or Open Source! Moving between implementations is facilitated by having a standard language. So choice of vendor can be less critical as long as you avoid using proprietary features or extensions. Conformance is what lets you migrate. Does the implementation run the XQuery Use Cases unchanged? Has the vendor submitted test results? Do they have a formal conformance statement? Moving between programming languages and operating systems can also be easy as long as you try to avoid writing language-native functions, or, if you do, put them in a module that has a clearly documented interface and try to make it platform-neutral. Moving between vendors often involves some money and some acrimony. How do you need to access the query engine? Most implementations support multiple access methods, but many have a primary method they support, and others may not be as well supported. Command-line (Galax, Saxon, Qizx) Servlet (Qizx) embedded in another language (SQL: DB2, Oracle, MSSQL) API/Library (BSD dbxml) Web service or server (Sedna, MarkLogic, DataDirect) GUI (e.g. StylusStudio, oXygen) Software can be very general-purpose or it can be very specialised. If you have particular needs, such as a low memory footprint, use the right tool; on the other hand, an implementation intended for streaming data on a mobile phone might have compromised on optimising joins or might not support external modules. Mobile/embedded (e.g. MXquery) Streaming (e.g. BEA AquaLogic) Database query language (many products) General query language (e.g. Sherlock) Middleware (e.g. DataDirect) Web applications (many products) general purpose (many products)development and debugging (StylusStudio, oXygen) large collections (MarkLogic, Oracle, DB2) On top of SQL (e.g. XQuark) Alongside SQL (e.g. DB2, Oracle, MonetDB/XQuery) XML-native Database (e.g. eXist, MarkLogic) Other XDM (e.g. Sherlock) Files (e.g. Saxon) Many implementations can read external (unindexed) XML documents, either locally or via HTTP. Many can also use ODBC or JDBC to run SQL and present the result as if it were XML. You might need to run an external indexing program if documents change. For large documents, e.g. 100MBytes, or for large collections (e.g. terabytes) indexing is very important. Make sure performance constraints are part of any contract. In some cases, the full text facility can give fastest results. Now we have talked about context, about how you get at the XQuery engine and where it lives. Next we talk a little about what it does, about specific features. Static typing is XPath 2’s name for strong typing, that is, using restrictions on values to help detect errors even before a program is run. Full schema support in XQuery lets you define your own types using an external XSD. Implementations can have any combination of these two features! typing + schema, E.g. Galax, MarkLogic typing, no schema, E.g. MSSQL no typing + schema, E.g. SaxonSA no typing no schema, E.g. Saxon, Qizx Don't get locked into open systems [IBM] If you write native functions, or if you use implementation-specific features, hide them in a module with its own namespace so you know exactly what you might have to reimplement. for $a in jdbc:get_rows("circus.performers") where ok_together($a/sql_row[4], $candidate/sock_colour) for $a in circus:getperformers() where circus:oktogether($a, $candidate) native functions (and Web services too) XSLT natively or via Saxon official extensions e.g. full text, updates proprietary extensions When a Web application is written with XQuery, the actual results might be delivered using XSLT. This means more XML people working on the project, but fewer SQL or Java people. XQuery implementations can be categorised usefully You can often move between implementations. The standardised language helps to encourage you to experiment! Use modules and namespaces to protect yourself Consider staffing and skills Very high performance is possible, but might not be easy or cheap.
http://www.w3.org/2007/Talks/0808-quin-extreme-query/
crawl-002
refinedweb
968
58.28
Re: My Computer Icon From: Wesley Vogel (123WVogel955_at_comcast.net) Date: 03/26/05 - ] Date: Sat, 26 Mar 2005 07:14:03 -0700 Rick, I don't use it, but you might want to look at System Restore. Here are some links... Accessing System Restore System Restore overview To create a Restore Point To select a restore point Understanding System Restore Using the System Restore Wizard How to turn on and turn off System Restore in Windows XP;en-us;310405 How to start the System Restore tool at a command prompt in Windows XP;en-us;304449 Frequently Asked Questions Regarding System Restore in Windows XP -- Hope this helps. Let us know. Wes MS-MVP Windows Shell/User In news:[email protected], Rick_C <[email protected]> hunted and pecked: > Thanks again Wesley... > > Your suggestions did not work. I can see all the drives in Explorer, > I just can see anything when I double click on the My Computer icon > on the Desktop. Another odd thing is that nothing happen when I > double click on System in the Control Panel. > > I don't know what happened my to PC. It was working great a couple of > days ago and all of a sudden, I have a lot of issues. > > I have checked and double checked for viruses or spyware. There are > none. I wonder what changed all my system settings? > > Any help is appreciated. > > Thanks in advance... > > Rick > > "Wesley Vogel" wrote: > >> Rick, >> >> 207. Right hand side >> Restore Properties - My Computer & Documents >> >> >> This applies to XP also if you have TweakUI... >> Missing Drives in My Computer and Windows Explorer >>;en-us;191579 >> >> The Floppy Disk Drive Is Missing From the My Computer Folder >>;en-us;812489 >> >> Floppy Disk Drive Does Not Appear in the My Computer Folder >>;EN-US;817196 >> >> My CD drives have vanished (from Explorer, Device Manager, etc.) >> >> >> If XP Pro check these settings in Group Policy. >> >> Open Group Policy... >> Start | Run | Type: gpedit.msc | OK | >> >> Navigate to >> >> User Configuration\Administrative Templates\Windows Components\ >> Windows Explorer\ >> Hide these specified drives in My Computer >> >> [[Removes the icons representing selected drives from My Computer and >> Windows Explorer. Also, the drive letters representing the selected >> drives do not appear in the standard Open dialog box. >> >> This policy removes the drive icons. Users can still gain access to >> drive contents by using other methods, such as by typing the path to >> a directory on the drive in the Map Network Drive dialog box, in the >> Run dialog box, or in a command window.]] >> >> HKCU\Software\Microsoft\Windows\ >> CurrentVersion\Policies\Explorer >> NoDrives >> ----- >> >> User Configuration\Administrative Templates\Windows Components\ >> Windows Explorer\ >> Prevent access to drives from My Computer >> >> [. >> >> The icons representing the specified drives still appear in My >> Computer, but if users double-click the icons, a message appears >> explaining that a policy prevents the action.]] >> >> HKCU\Software\Microsoft\Windows\ >> CurrentVersion\Policies\Explorer >> NoViewOnDrive >> >> >> -- >> Hope this helps. Let us know. >> >> Wes >> MS-MVP Windows Shell/User >> >> In news:[email protected], >> Rick_C <[email protected]> hunted and pecked: >>> Thank you Wesley... Your post was helpfull and I did get "My >>> Computer" icon back. >>> >>> When I double click on "My Computer", it is blank. How can I recover >>> the drives so that they are displayed. >>> >>> Thank you in advance for any replies. >>> >>> Rick >>> >>> "Wesley Vogel" wrote: >>> >>>> >>>> >>>> Read the instructions at the top >>>> Scroll down to >>>> 276. Enable/Disable My Computer Icon >>>> --- >>>> >>>> XP Pro?? Group Policy. >>>> >>>> Start | Run | Type: gpedit.msc | OK | >>>> Navigate to >> >>>> Local Computer Policy\User Configuration\Administrative >>>> Templates\Desktop\ Double click: Remove My Computer icon on the >>>> desktop, in the right hand pane | Setting tab | Check: Â Not >>>> Configured | Apply | OK >>>> >>>> If Enabled this setting grays out (makes unavailable) Â My >>>> Computer in >> Right click Desktop | Properties | Desktop Tab | >>>> Customize Desktop button >>>> >>>> [[This setting hides My Computer from the desktop and from the new >>>> Start menu. It also hides links to My Computer in the Web view of >>>> all Explorer windows, and it hides My Computer in the Explorer >>>> folder tree pane. If the user navigates into My Computer via the >>>> "Up" button while this setting is enabled, they view an empty My >>>> Computer folder. This setting allows administrators to restrict >>>> their users from seeing My Computer in the shell namespace, >>>> allowing them to present their users with a simpler desktop >>>> environment. >>>> >>>> If you enable this setting, My Computer is hidden on the desktop, >>>> the new Start menu, the Explorer folder tree pane, and the Explorer >>>> Web views. If the user manages to navigate to My Computer, the >>>> folder will be empty. >>>> >>>> If you disable this setting, My Computer is displayed as usual, >>>> appearing as normal on the desktop, Start menu, folder tree pane, >>>> and Web views, unless restricted by another setting. >>>> >>>> If you do not configure this setting, the default is to display My >>>> Computer as usual. >>>> >>>> Note: Hiding My Computer and its contents does not hide the >>>> contents of the child folders of My Computer. For example, if the >>>> users navigate into one of their hard drives, they see all of their >>>> folders and files there, even if this setting is enabled. ]] >>>> >>>> Or you can... >>>> Start | Run | regedit | OK | >>>> >>>> Navigate to >> >>>> >> HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\NonEnum >>>> {20D04FE0-3AEA-1069-A2D8-08002B30309D} >>>> REG_DWORD >>>> 0x0000000 (0) >>>> >>>> 0 = My Computer on Desktop & Start Menu. >>>> 1 = No My Computer on Desktop & Start Menu. >>>> >>>> Set to 0. >>>> >>>> Or.......... >>>> Delete the key >> >>>> >> HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\NonEnum >>>> >>>> >>>> >>>> -- >>>> Hope this helps. Let us know. >>>> >>>> Wes >>>> MS-MVP Windows Shell/User >>>> >>>> In news:[email protected], >>>> Rick_C <[email protected]> hunted and pecked: >>>>> The "My Computer" icon on my desktop has disappeared. No big >>>>> deal... I right clicked the Desktop, then clicked on Properties, >>>>> then clicked on the Desktop tab, then clicked Customize Desktop >>>>> so that I could put a checkmark to display My Computer. >>>>> >>>>> Now it gets strange. The My Computer is greyed out so that I >>>>> cannot put a checkmark. How can I get the My Computer back on my >>>>> Desktop? >>>>> >>>>> Any help would be greatly appreciated. >>>>> >>>>> Rick - ]
http://www.derkeiler.com/Newsgroups/microsoft.public.windowsxp.security_admin/2005-03/2021.html
crawl-002
refinedweb
1,019
57.47
HttpHandlers in C# Hi, In this post i want to write about HttpHandlers in C#. An overview on how web requests are handled and executed by IIS Generally whenever we request for a resource like localhost\MyWebApp\Firstpage.aspx 1. First of all the request reaches IIS and it identifies that this particular request should be handled by the .Net framework. 2. Next the request is handled by the worker process. 3. Next the worker process identifies the appropriate class that should handle this request. This particular class is nothing but a HttpHandler. Now let us go through this. HttpHandler A HttpHandler is nothing but a class that handles incoming requests. Asp.Net framework identifies the appropriate handler based on the file extension in the request url. IHttpHandler A HttpHandler is nothing but a class that implements IHttpHandler interface. IHttpHandler has two methods. public interface IHttpHandler { // Methods void ProcessRequest(HttpContext context); // Properties bool IsReusable { get; } } So any class implementing IHttphandler interface should define these two (ProcessRequest method and IsReusable property). ProcessRequest should have the key logic to handle the http request. This is the method that is executed whenever a request comes. Creating a class that implements IHttpHandler This handler is called whenever a file ending in .pavan is requested. A file with that extension does not need to exist. using System.Web; public class MyHandler : IHttpHandler { public MyHandler() { } public void ProcessRequest(HttpContext context) { System.Diagnostics.Debugger.Break(); HttpRequest Request = context.Request; HttpResponse Response = context.Response; Response.Write("<html>"); Response.Write("<body>"); Response.Write("<h1>Hi This is my first httphandler.</h1>"); Response.Write("</body>"); Response.Write("</html>"); } public bool IsReusable { get { return false; } } } IsReusable is used to specify whether we want to enable pooling or not. If set to true the handler is placed in the memory and it is pooled. In the processRequest function i am just writing the text “Hi this is my first httphandler” as a response to the request we got. So we are done with the creation of Httphandler. Now you might be having a question. How does IIS know about this handler. How this particular request localhost\MySample\Test.pavan is going to be redirected to our handler by IIS. Now let me answer these questions. 1. We are supposed to configure the web.config file Configuring Web.config For IIS7 i. There is a tag called Add, replace that with this tag or if not there add this tag. <add verb="*" path="*.pavan" type="MyHandler" /> Here verb corresponds to the http protocol that our handler supports like ‘GET’,’POST’ etc Path—The extension this particular handler should take care Type—The class that handles the request. In our case MyHandler (This should be qualified name). Mapping File extension to handlers in IIS7.0 1. Go to Command prompt and click Inetmgr. 2. create a website and host our website that contains Httphandler. I am naming my app as Test Click on the HttpHandlerMapping in the below image Now we can list of handler mappings in this image Now let me click on .aspx and see to which handler that is actually pointing Yes it is pointing to aspnet_isapi.dll. This is not a c# class but this will inturn process it and assign the aspx request to appropriate handler Now let us click on “Add Managed handler” Fill the fields with appropriate values. Requestpath—file extension to be handled. Type—name of the class Name—Name of the handler. Now click on save. Now go back to our solution and we can find that web.config is changed and few more tags were added to it <system.webServer> <handlers accessPolicy="Read, Execute, Script"> <remove name="Test" /> <add name="MyHandler" path="*.pavan" verb="GET,HEAD,POST,DEBUG" type="MyHandler" resourceType="Unspecified" requireAccess="Script" preCondition="integratedMode" /> </handlers> </system.webServer> These tags we added automatically when we add mappings in iis. The complete web.config file <configuration> <system.web> <compilation debug="true" /> <httpHandlers> <add verb="*" path="*.pavan" type="MyHandler" /> </httpHandlers> </system.web> <system.webServer> <handlers accessPolicy="Read, Execute, Script"> <remove name="Test" /> <add name="MyHandler" path="*.pavan" verb="GET,HEAD,POST,DEBUG" type="MyHandler" resourceType="Unspecified" requireAccess="Script" preCondition="integratedMode" /> </handlers> </system.webServer> </configuration> Testing our First Httphandler Type this url in the browser “” and we can see the result on the screen. Debugging Httphandler if we want to debug the httphandler in our local machines , we can add this line System.Diagnostics.Debugger.Break(); Complete HttpHandler class code using System.Web; public class MyHandler : IHttpHandler { public MyHandler() { } public void ProcessRequest(HttpContext context) { System.Diagnostics.Debugger.Break();; } } } In my next post i want to write about “How .asmx file request were handled”. Thanks, pavan Great info, very helpful. Small correction: “i. Add a tag called , inside that place this tag” should be “i. Add a tag called , inside that place this tag:” Marty August 18, 2012 at 6:22 pm […] HttpHandlers in C# – Pavanarya […] Interesting .NET Links - August 16 , 2012 | TechBlog June 18, 2014 at 11:05 pm
https://pavanarya.wordpress.com/2012/08/13/httphandlers-in-c/
CC-MAIN-2016-18
refinedweb
833
51.85
#include <stdio.h> int main(int argc, char *argv[]) { int loop; char name_array [20][30]; char course_array [20]; for (loop = 0; loop < 4; loop++) { puts("Enter name"); gets (name_array[loop]); puts("Now enter course"); scanf("%c", &course_array[loop]); } } The above is a code fragment that doesn't work as I want. The gets function will accept a string on the first iteration but after that the program skips gets and goes directly to the puts and scanf. Why is this? IF the puts and scanf are removed the loop works ok and I can complete the array. What is going on here or am I going crazy.
http://cboard.cprogramming.com/c-programming/2806-gets-function-can%27t-figure-out.html
CC-MAIN-2015-32
refinedweb
107
76.96
It’s a pretty short program, and it doesn’t do as much as I wish it did, but it will make a tedious and error-prone job a little bit less tedious. I’ve mentioned this before, but turning a website from html into a content-manged site is a pain in the neck, and one of the most painful things is getting the images and pdf files and such into the media library, because that’s a terrible web-based program, that only lets you do one file at a time. So my program uses the python wordpresslib. It takes one argument, which is the name of the file to upload. (The url, username, and password for the wordpress blog are hard-coded for my purposes.) The program uploads the file and returns the URL for accessing the raw file in the media library. This isn’t really as good as you would hope for — what you’d want is to supply a title and caption for the item and get the link that shows the image in your post as a link to the page page in the media library. You have to do all of that through the web interface. But at least that web interface is a reasonable program. The one for uploading gives you two choices, one of which never works for me and the other one makes you pick the filename via the browser even if you know it. I’m glad I got programming energy for this even if it isn’t much of a program. I think it’s important to use skills like that pretty often. Of course I’ve been writing some PHP in the course of the website redesign, but that’s closer to writing html than it is to real programming. Here’s the program if you want to use it: #!/usr/bin/env python # 09-Aug-20 lconrad; created # usage: addmedia.py filename # adds filename as a media object to the serpentpublications.org wordpress blog # import library import wordpresslib, sys if sys.argv < 2: print "usage: addmedia.py filename" filename = sys.argv[1] # note that it's the xmlrpc.php interface you need to specify wordpress = "" user = "*redacted*" password = "*redacted*" # prepare client object wp = wordpresslib.WordPressClient(wordpress, user, password) # select blog id wp.selectBlog(0) # upload image for post imageSrc = wp.newMediaObject(filename) print "Image uploaded to: %s" % imageSrc One thought on “Wrote a program yesterday”
https://blog.laymusic.org/2009/08/21/wrote-a-program-yesterday/
CC-MAIN-2019-09
refinedweb
410
72.46
Getting Started with Next.js for Jamstack Development There's been a lot of well-deserved buzz around Next.js lately - with two major consecutive releases in 9.3 and then 9.4 bringing tons of new functionality to the framework. Jamstack developers have been excited by a ton of new features that support Next.js for Jamstack sites including new data-fetching methods and incremental static regeneration. In this post, I want to give an introduction to Next.js for Jamstack development and explore how you can leverage it with Sourcebit to build CMS-driven Jamstack apps. What is Next.js? Next.js is a very popular React-based framework for building single-page applications (SPAs). It has gained a great deal of popularity in large part because it simplifies a number of aspects of developing full-stack React applications. For example, routes are handled automatically with zero configuration. However, your Next.js also handles all of the server-side rendering of the page, meaning you can use a single framework for both the frontend and backend of your application. What Makes Next.js Different? Next.js is also a static site generator (SSG), but it differs from tools like Hugo, Jekyll or even Gatsby in that generating static assets is not its only purpose. You can build an SPA with server-side rendering (SSR) using Next.js and then decide to static export all or even just part of your app. That's a key distinction because, when choosing most other SSGs, you are committing to going fully static right out of the gate. This can also present some mental hurdles for someone (like me) coming to Next.js after working primarily with more traditional SSGs like Hugo or Jekyll. Not only did it force me to grok a lot of React (which I've admittedly had limited experience with in the past) but it also somewhat upends the way I've traditionally thought of Jamstack, by challenging the static versus dynamic assets dichotomy. The Sample App For this tutorial, I decided to take a prior example written using Hugo and instead build it using Next.js. The site is a fan page for the video game Control (great game, highly recommend it). The site itself is fairly simple, made up of posts that talk about the game and an about page. You can view the live demo at. You can also view the project repository at. Contentful and Sourcebit Behind the scenes, the content actually comes from Contentful. In order to integrate Contentful with Next.js, I chose to use Sourcebit, an open source project that can connect data sources (like Contentful, Sanity and Kentico Kontent) with static site generators (like Hugo, Jekyll and, of course, Next.js). While it is possible to use Next.js directly with the Contentful API (see the official sample Contentful app from Next.js here), Sourcebit offers functionality that makes this integration even easier. We'll cover using Sourcebit with Next.js here, but you can also view the original tutorial (with the Hugo demo) here. In addition, Sourcebit is entirely extensible, and you can learn how to build a Sourcebit plugin to support your favorite headless CMS or static site generator. Please note that the example code shown throughout the article below are simplified versions of the actual code for the purposes of making them easier to understand. Challenges Converting an existing project (in my case from Hugo to Next.js) rather than starting from scratch did present some challenges. The site was not originally built using a "React mindset". While this largely meant making small changes to convert the HTML to be React components , there were also some small visual elements that relied upon DOM manipulation that happened on window load. These required some refactoring to get them to work properly using the componentDidMount(). While I'm not sure my solution is what you might consider React best practice, it worked and this wasn't something I'd consider core to the site's functionality. Getting Started The easiest way to get started with Next.js is to run the following command in the console/terminal: npm init next-app This will only ask a couple questions such as the project name and template. I chose to use the default starter template, but there are also currently a ton of examples from the Next.js example repo that you can also choose to start with. The default starter doesn't include too much boilerplate. Let's look at what's there: - The pagesdirectory is a critical part in any Next.js application. One of the benefits of using Next.js is that it will automatically create a route for any pages (which in the Next.js are React components) contained within this directory. If, for example, I were to create a foo.jsfile in this directory, then I could navigate to it via the /fooroute (i.e. you are testing locally). Pages can also handle dynamic routes, which we'll get into later. - The publicdirectory is where you can place static assets that you want to be able to serve and access via your site. These may be things like images, PDF files, downloads, or even CSS or JavaScript files you want included. For instance, any images under /public/imageswill be accessible to your site as just /images. Now I can run either npm run dev or yarn dev to start the local development server and go to localhost:3000 where you should see the Next.js default boilerplate page. Note that from here on out I use npm instead of Yarn as the libraries related to Sourcebit are automatically installed using npm during the configuration process. It's pretty common for a Jamstack site to store some portions of site data in a JSON or YAML file. A common use of this is often the site configuration file containing things like the site title, description and so on. For example, my site has a /data/config.json file that looks like this (note that Next.js has no opinion on where you should store any data files): { "title": "Control", "tagline": "Discover an unknown world.", "logo": "/images/logo.png", "bgimage": "/images/header-bg.jpg", "footerContent": "Content courtesy of the [Control Wiki]()." } To simplify the example, I'm going to create a simple home page in /pages/index.js without any of the site's design: import Head from 'next/head'; export default function Home() { return ( <div> <Head> <title>Create Next App</title> <link rel="icon" href="/favicon.ico" /> </Head> <h1>Hello World</h1> <p>This is my home page</p> </div> ); } Since this is just a local JSON file, I'll just import it at the top of the page: import configData from '../../data/config.json'; Now we need to make that data available to the site. Recall that Next.js handles both pages loaded via SSR and pages loaded as static. Data that needs to be accessible to a statically generated page is loaded via a special method called getStaticProps(). This method was added to Next.js as of version 9.3. I'll add that method to the home page and load in the config file shown above. export async function getStaticProps() { return { props: { configData } } } } Now I need to modify the Home component to receive the data from this method and then display it. export default function Home(props) { const config = props.configData; return ( <div> <Head> <title>{config.title}</title> <link rel="icon" href="/favicon.ico" /> </Head> <h1>{config.title}</h1> <p>{config.tagline}</p> </div> ); } As you can see, I set Home() to receive the props object and then assign the configData I loaded into a value called config (this latter part isn't required but I find it a useful for convenience of referencing the value). Finally, I output the values from the config data on the page as {config.title} or {config.tagline}. It is worth noting, as we move on to talk about components, that the getStaticProps() exists only on pages and is not available on components. Building Components While you can build the site by creating pages in the /pages directory, this would become unmaintainable very quickly as many aspects of each page will be reused. For a simple site like the sample app, this may only be a few components like the layout, header and footer. Next.js has no opinion about where you put your components, but generally developers choose to use a /components directory. To demonstrate an example, let's look at creating a Layout component that will handle all the common layout elements across the site. This is going to be a very simplified version of the project's full layout component. To begin, we'll create a /components/Layout.js. This layout will take the very basic layout that already existed in index.js above but abstract it so that it is available to add to any page. import Head from 'next/head'; export default function Layout({ children }) { return ( <div> <Head> <title>Control</title> <link rel="icon" href="/favicon.ico" /> </Head> <h1>Control</h1> <h2>Discover an unknown world.</h2> {children} </div> ); } The key thing to notice in the component is the inclusion of children, which will output any child elements contained by the component. I can now modify index.js to use the component. import Layout from '../components/Layout'; import configData from '../../data/config.json'; export default function Home(props) { const config = props.configData; return ( <Layout> <p>This is my homepage</p> </Layout> ); } export async function getStaticProps() { return { props: { configData } }; } First, import the component and then replace the existing layout elements with <Layout>. The paragraph element inside <Layout> will be what is output using children. There's one problem here, however. The layout is no longer leveraging the configuration data that I set up. As mentioned, getStaticProps() cannot be called from within a component. Instead, it needs to be passed in to give the component access to it. First, in index.js it is passed down. export default function Home(props) { const config = props.configData; return ( <Layout config={config}> <p>This is my homepage</p> </Layout> ); } Then in Layout.js it is used. export default function Layout({ children, config }) { return ( <div> <Head> <title>{config.title}</title> <link rel="icon" href="/favicon.ico" /> </Head> <h1>{config.title}</h1> <h2>{config.tagline}</h2> {children} </div> ); } Connecting to a CMS The sample site content comes from a pre-existing headless CMS instance on Contentful. Of course, Next.js makes it relatively painless to connect to APIs, but I used an open source tool called Sourcebit that makes it even easier. The best part is Sourcebit already has Next.js integration. Configuring Sourcebit for Next.js The first step is to configure Sourcebit. Thankfully, the tool uses an interactive configuration process so that I don't need to manually create or edit any configuration. To begin the interactive configuration process, enter: npx create-sourcebit The following video shows the setup process for configuring Sourcebit for Contentful and Next.js using my existing Contentful space (which has a simple content model with only two content types). Note that I am also configuring Sourcebit to pull any assets associated with the content locally into the /public/images folder. Once connecting to my Contentful account, I choose the space and the content types within that space that represent pages. I have set the route for the about page to /{slug} meaning that /about will match. However, blog posts are set to match a pattern of /posts/{slug}. In both cases, the title of the post is what will generate the slug. Getting Headless CMS Data from Sourcebit for Next.js Now that Sourcebit is configured, I can use it within my Next.js project. The first step is to create a next.config.js configuration file if there isn't one already. I'll require Sourcebit and call the fetch() method in this file so that Sourcebit will pull the data and images from Contentful whenever the site loads. const sourcebit = require('sourcebit'); const sourcebitConfig = require('./sourcebit.js'); sourcebit.fetch(sourcebitConfig); To use the Sourcebit CMS content in a page, I leverage the sourcebitDataClient that it provides. For example, let's look at a simplified version of /posts/[slug].js. The [slug].js filename indicates that this is a dynamic route in Next.js. In this case, this is the page that will match the /posts/{slug} pattern we defined when configuring Sourcebit above. Note that I am leveraging react-markdown to render the Markdown body content as React components. import Layout from '../../components/Layout'; import { sourcebitDataClient } from 'sourcebit-target-next'; import ReactMarkdown from 'react-markdown'; import configData from '../../data/config.json'; export default function Post(props) { const config = props.configData; const page = props.page; return ( <Layout config={config}> <h1>{page.page.title}</h1> <ReactMarkdown source={page.page.body} /> </Layout> ); } export async function getStaticPaths() { const paths = await sourcebitDataClient.getStaticPaths(); return { paths: paths.filter((path) => path.startsWith('/posts/')), fallback: false }; } export async function getStaticProps({ params }) { const configData = await import(`../../data/config.json`); const pagePath = '/posts/' + params.slug; const page = await sourcebitDataClient.getStaticPropsForPageAtPath(pagePath); return { props: { configData, page } }; } Let's look at getStaticPaths() first. Next.js uses this method to define all of the dynamic paths that need to be exported as static pages. Sourcebit also provides a method called getStaticPaths() that returns all the paths within the site content pulled from the CMS. In this example, I am filtering the list for this specific dynamic route to provide all the paths to blog posts. In the getStaticProps() method, I am still getting the site configuration as discussed earlier in this article. I am also determining the page path by pulling the slug variable Next.js provides. Since this dynamic route is defined as /pages/posts/[slug].js the slug variable that I defined will be populated with anything after /posts/ in the URL. For example, for a URL that went to the path /posts/federal-bureau-of-control (which is one of the posts in this example), the slug would be federal-bureau-of-control. We use the full directory and path to get all the page data from Sourcebit using getStaticPropsForPageAtPath(). This data is passed to the page via props and then displayed. When starting my local server using npm run dev, I can see the posts and assets that are pulled from Contentful by Sourcebit. If I load the URL locally via, I will see the post contents including the image, which uses the locally downloaded asset. This is the finished page on the sample site with the complete design and assets being loaded. One key difference you may notice between the finished code for this page and the sample code presented here is that in the finished project I am using getData() rather than getStaticPropsForPageAtPath(). The reason for this is that populating my navigation requires information about other pages in the CMS. Rather than make two separate calls to Sourcebit, I use one call and then filter the content based upon the path. For comparison's sake, here is the getStaticProps() method from the finished site: export async function getStaticProps({ params }) { const sb = await sourcebitDataClient.getData(); const pagePath = '/posts/' + params.slug; const page = sb.pages.filter((page) => page.path == pagePath)[0]; const pages = sb.pages.filter((page) => page.path !== '/' && !page.path.startsWith('/posts/')); return { props: { configData, page, pages } }; } Deploying the Site Running npm run export on the Next.js site will export everything you need to the /out folder. However, it's also easy to deploy on Jamstack-focused, continuous deployment solutions like Netlify. Let's take a quick look at how to do that. Once connected to the GitHub repository for the site, the build command should be set to npm run export while the publish directory set to /out. Sourcebit stores environment variables in a .env file that is loaded when the site starts up. This is how it knows the Contentful access token that has been configured. This .env file should be added to your .gitignore so that it is not committed to a source repository. Thus, your Netlify build does not have access to it. Instead, set an environment variable within the Netlify dashboard for CONTENTFUL_ACCESS_TOKEN with the proper value. With that configured, Sourcebit will pull the latest content and assets from Contentful every time your site rebuilds on Netlify. You can see my finished, deployed site on Netlify at. Where to Go Next See what I did there? 😉 I've only scratched the surface of building an application with Next.js, but hopefully I have piqued your curiosity to give it a try (and try Sourcebit as well). Be sure to check out the full source code of the example site at. If you'd like to explore further, I'll leave you with a bunch of worthy resources to explore: - Official Next.js getting started guide - This is an well done, step by step tutorial that will guide you through the ins and outs of Next.js, including SSR as well as static generation. - Building a Markdown blog with Next 9.3 and Netlify - Next.js 9.3 brought some big improvements to the framework, especially around generating Jamstack sites, and this tutorial walks through building a blog using traditional file-based Markdown. - Building a portfolio site with Contentful, Next.js and Netlify - While Sourcebit makes the process easier in my opinion, this tutorial will help you if you want to directly call Contentful APIs. - Make a blog with Next.js, React and Sanity - Sourcebit also has a plugin that supports Sanity, but this tutorial uses the Sanity API and SSR deployed to Now (note that it uses an earlier version of Next.js) - Data Fetching with NextJS: What I learned - A good overview of the data fetching methods in Next.js as of 9.3. Next.js is an increasingly popular solution for creating Jamstack sites using JavaScript and React. In this post we explore how to build your first Jamstack site using Next.js that connects to a headless CMS for content using the Sourcebit open source project.
https://www.stackbit.com/blog/getting-started-nextjs-sourcebit/
CC-MAIN-2022-05
refinedweb
3,035
57.98
Hi all For the past year, I have been evaluating various approaches to threading with a view to picking one consistent approach for all of Imorphics' internal code. In particular I have been worried about the likely exponential growth in the number of cores available, and how we efficiently and scalably use them. We already have compute servers with 8 cores per machine, and we are right at the edge of efficiently splitting our embarrassingly parallel problems. IO bandwidth constraints mean that we will have to get multiple cores working on the same data fairly soon. Now for the bad news: I believe that all the threading code in Boost is at too low a level for VXL's purposes. I do not want to be using threads and locks, in particular because lock-based threading is not composable*. Using locks means that I have to take an overview of all locks when I do any lock-based programming. This is particularly hard with a library like VXL where we try to impose as few constraints on the users as possible. * Composability is the property of programs A and program B which have no explicit interactions, such that when I join them to make program C, the effect should be that of A + B. We have all been relying on the straightforward composability of our code possibly without realising it. Locking and contention for similar resources can easily but quietly introduce hidden interdependencies between A & B such that C will deadlock or suffer from race conditions even though A & B are individually correct. There are ways around this (e.g. introduce a global lock-hierarchy) but they impose significant constraints on users - constraints that may not be compatible with other libraries. Imorphics and Manchester University heavily use the Strategy pattern (the run-time polymorphism of high-level algorithms). We deliberately write code that calls functions some derivations of which might not be written for a few years, and for which any interdependence is restricted to that explicitly enforced by the signature of the base-class API. Introducing locks could blow those carefully maintained contracts. More recently I have been looking at Intel's thread building block library and I am much more impressed by that. For C++ programmers it appears more powerful than OpenMP. In particular it can handle multiply nested threading directives - so maintaining composability. There are some interesting ideas using Futures and Active Objects in one of boost vault libraries however it isn't clear to me how much traction these libraries have. A similar proposal for C++0x appears close to being cut in the face of looming deadlines. Now, threading for GUI responsiveness and threading for the efficient use of multi-core machines may require different approaches, and it might be possible to use both as appropriate. However, I think we should avoid any approach that requires VXL to handle locks, mutexes, semaphones, etc. unless it can be shown that they and the resources they protect will remain independent of any other reasonable part of the same application. Assuming we go ahead, do you have a proposal for how to integrate boost threads? Are we going to VXL-ify it? In particular are we going to remove the namespace support, etc. Alternatively we could treat is as an early version of the similar c++0x extension, so it could migrate into vcl. Ian. BTW - I haven't been following VGUI at all, so please forgive the question. It used to be that VGUI wasn't recommended for use with a "serious" GUI application. Has that changed? Amitha Perera wrote: > Folks, > > I find myself doing more threading with vxl code. I think it makes > sense to bring in pieces of boost into vxl to support this. (For > example, multi-threading in vgui to make the GUI not hang on long > computations.) > > So, I'd like to propose that pieces of boost be brought into v3p. (Not > all of boost, because it is huge, and most of it will not be useful for > vxl.) In particular, I suggest > - Boost.Thread > - Boost.SmartPtr > - Boost.Bind > - Boost.Function > > The v3p version of Boost will be modified to build via CMake, instead of > Boost's jam, so the user will not see any major impact. > > Of course, it should always be possible to use an external version of > boost instead of the v3p version. > > Thoughts? > > Amitha. View entire thread
http://sourceforge.net/p/vxl/mailman/message/8328925/
CC-MAIN-2016-07
refinedweb
738
61.77
The following form allows you to view linux man pages. #include <dlfcn.h> void *dlopen(const char *filename, int flag); char *dlerror(void); void *dlsym(void *handle, const char *symbol); int dlclose(void *handle); Link with -ldl. vari- able LD_LIBRARY_PATH was defined to contain a colon-separated list of directories, then these are searched. (As a security measure this variable is ignored for set-user-ID and set-group-ID pro- grams.)). RTLD_NOW If this value is specified, or the environment variable LD_BIND_NOW is set to a nonempty string, all undefined symbols in the library are resolved before dlopen() returns. If this cannot be done, an error is returned. Zeroitialized reopened NULL, then the returned handle is for the main program. When given to dlsym(), this handle causes a search for a symbolzero using the gcc(1) -nostartfiles command-line option.. Glibc extensions: dladdr() and dlvsym() Glibc adds two functions not described by POSIX, with prototypes #define _GNU_SOURCE /* See feature_test_macros(7) */ #include <dlfcn.h> overlaps addr */ void *dli_saddr; /* Exact address of symbol named in dli_sname */ } Dl_info; If no symbol matching addr could be found, then dli_sname and dli_saddr are set to NULL. dladdr() returns 0 on error, and nonzero on success. The function dlvsym(), provided by glibc since version 2.1, does the same as dlsym() but takes a version string as an additional argument. POSIX.1-2001 describes dlclose(), dlerror(), dlopen(), and dlsym(). com- pile time, but merely point to the plt (Procedure Linkage Table) sec- tion of the original object (which dispatches the call after asking the dynamic linker to resolve the symbol). To work around this, you can try to compile the code to be position-independent: then, the compiler cannot prepare the pointer at compile time anymore and today's gcc(1) will generate code that just loads the final symbol address from the got (Global Offset Table) at run time before passing it to dladdr(). Load the math library, and print the cosine of 2.0: #include <stdio.h> #include <stdlib.h> #include <dlfcn.h> int main(int argc, char **argv) /* ld(1), ldd(1), dl_iterate_phdr(3), rtld-audit(7), ld.so(8), ldconfig(8) ld.so info pages, gcc info pages, ld info pages [email protected]
http://www.linuxguruz.com/man-pages/dlsym/
CC-MAIN-2018-47
refinedweb
373
56.55
Can someone please tell me how I would wrap my array. Lets say that it is of size 10. What I want it to do is count the number of letters in the name then mod it into the array (This is working properly). If a name has the same number of letters then it just looks at the next spot. So I need my array to wrap around meaning that if the 10th spot is filled then it would look at the 1st spot in the array. I know that you can do this by using the mod function but am not sure how.Here is my HashTable template <class T> hashTable<T>::hashTable(int Size = 10) { value_arr = new T[Size]; key_arr = new string[Size]; status_arr = new int[Size]; HTableSize = Size; // hashIndex + 1 % Size //Wrap around for (int i = 0; i < Size; i++) status_arr[i] = 0; } Here is my Hash Function template <class T> int hashTable<T>::hashFunction(string key) { return (key.length() - 1) % HTableSize; } Here is my add function template <class T> void hashTable<T>::add(string key, const T &value) { //while (exists() != true) //int hashIndex; hashIndex = hashFunction(key); while (status_arr[hashIndex] == 1) { hashIndex ++; } if (status_arr[hashIndex] !=1) { key_arr[hashIndex] = key; value_arr[hashIndex] = value; status_arr[hashIndex] = 1; } } Thanks for the help!
https://www.daniweb.com/programming/software-development/threads/204042/how-do-i-wrap-an-array
CC-MAIN-2017-34
refinedweb
213
77.37
view raw I have am building a simple software in which there are multiple processes communicating with each other using the user defined signals SIGUSR1 and SIGUSR2. I am just curious to know what happens in a scenario like what I am describing below. "Process-1 did send sigusr1 to process-2 using kill command and the process-2 is executing the signal-handler doing some computation. Even before process-2 completed the execution of signal handler process-1 sends sigusr1 again to process-2. What would happen. I am unable to get a proper answer for this one. I think of two possible scenarios. Processing signals take some time and if you send too much at a time, some could get ignored. What you could do is send a response signal to the other process , basically saying : Hey, I got your signal, send me the next one I'm ready This way, no signal would get ignored and your program will get much faster with less to no errors. If you want a cheap solution, just use usleep() and a small value (depend on your hardware actually) Here is an example where an ascii message is sent to another process using signals : Client - Sending part void decimal_conversion(char ascii, int power, int pid) { if (power > 0) decimal_conversion(ascii / 2, power - 1, pid); if ((ascii % 2) == 1) kill(pid, SIGUSR1); else kill(pid, SIGUSR2); usleep(1); } void ascii_to_binary(char *str, int pid) { int i; i = 0; while (str[i] != '\0') { decimal_conversion(str[i], 7, pid); i += 1; } } Server void sighandler(int signum) { static int ascii = 0; static int power = 0; if (signum == SIGUSR1) ascii += 1 << (7 - power); if ((power += 1) == 8) { putchar(ascii); power = 0; ascii = 0; } } int main(void) { signal(SIGUSR1, sighandler); signal(SIGUSR2, sighandler); while (42); return (0); }
https://codedump.io/share/g3xiFH2xcmgW/1/user-defined-signals-sigusr1-and-sigusr2
CC-MAIN-2017-22
refinedweb
300
54.26
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Hey everyone, Goal : Pack 32bit values into a single pixel and send via texture over Spout. Problem : The RGB values of color() are multiplied by the alpha, so even if R, G, and B are set to 255, if the Alpha is set to 0, get() returns R,G,B values of 0. Question : Is there a way to prevent RGB from being multiplied by Alpha? I've tested this in different software and it works fine. I've made an example : Sender > The canvas is split into 2, the left side has 0 Alpha, and the right has full Alpha. Other than that they are the same. import spout.*; Spout spout; float rate = 1.0; float amplitude = 255; void setup() { size(640, 360, P3D); colorMode(RGB, 255); frameRate(30); // spout object spout = new Spout(this); // name sender spout.createSender("Spout Processing"); } // end setup void draw() { background(0,0,0,0); int m = millis(); float Hz = rate * 1000; float lfo = map(sin(radians(360) * m % Hz), -1, 1, 0, amplitude); //Sinewave LFO // left -- 0 alpha noStroke(); fill(lfo,lfo,lfo,0); rect(0,0,320,360); // right -- full alpha noStroke(); fill(lfo,lfo,lfo,255); rect(321,0,640,360); spout.sendTexture(); } // end draw Receiver > Captures the texture via spout and analyzes pixels from each half and prints out the values in the textport. import spout.*; Spout spout; void setup() { size(640, 360, P3D); colorMode(RGB, 255); frameRate(30); spout = new Spout(this); } // end setup void draw() { background(0,0,0,0); spout.receiveTexture(); // left -- 0 alpha color l = get(0,0); // right -- full alpha color r = get(321,0); print("0 ALPHA >>> "); print(red(l), blue(l), green(l), alpha(l)); print(" --- FULL ALPHA >>> "); println(red(r), blue(r), green(r), alpha(r)); } // end draw Thanks! Answers @nousbalance -- Unfortunately, I cannot test this (I'm not on Windows). However, your problem isn't spout -- it is that you are drawing to the Processing surface, then expecting to be able to recover RGB separate from A. It looks like you are using spout to draw onto the main drawing surface, then trying to recover RGB values independent of the alpha channel. The Processing surface doesn't work that way. The surface doesn't have an alpha channel (there is nothing 'behind' it) -- so anything with transparency gets mixed into whatever pixel is already there and becomes flat RGB values. Thus a perfectly transparent RGB pixel becomes a whatever-was-there-before pixel (e.g. grey). Then, when you check RGB on the pixel, you find grey at full-alpha rather than the original data. Here is a simple sketch similar to your two test sketches to let you inspect how this works. Options to solve your problem: Can you use spout to write to a PGraphics or PImage, which supports alpha, then draw that onto the sketch surface using image()? ...could this be related to the difference between using receiveTexture and receiveImage? @jeremydouglass, This is great info. PImage is working out exactly how I hoped. Thank you very much for your help @nousbalance -- great! Glad to hear that using PImage preserved your alpha channel. If you can, please share your specific solution -- it might be helpful to future Spout users.
https://forum.processing.org/two/discussion/18653/how-to-stop-rgb-from-being-multiplied-by-alpha
CC-MAIN-2020-45
refinedweb
560
62.68
Spring Boot Sidecar Carvia Tech | July 31, 2020 | 6 min read | 1,937 views In this tutorial we will integrate a non-JVM app (python Flask app in this case) into Spring Cloud seamlessly using polyglot sidecar support. We will use netflix eureka registry to manage service registrations and netflix zuul for API Gateway. Roadmap Cover scalability patterns using Sidecar approach for non JVM apps Custom health endpoint where its not possible to modify the non JVM app (elasticsearch, for example) Provide Code link for the app shared on Github Add steps to create the Spring Boot Project Add screenshots for Eureka Service Registry Any decent sizes non-trivial distributed application will have polyglot programming model to make best use of available technology stacks (libraries and frameworks). For example, if there is a IO intensive work (chat for example), then few microservices may be built using nodejs, or if there is something related to machine learning, then python may be the good choice for developing that recommendation service. Important Usecase A machine learning program written in python/flask/django needs to communicate to & forth with rest of the JVM apps in spring cloud ecosystem. Few IO intensive services are developed in nodejs, but rest of the code is using Spring Cloud, effective communication b/w JVM & nodejs is required. We want to access elasticsearch using API gateway rather then using separate host/port. Access database using sidecar pattern. The problem statement There are multiple problems related to inter service communication when polyglot comes into picture. few of them are: How does non-JVM apps discover JVM microservices in Spring Cloud environment and vice-versa. In any cloud native application, we would never want to hard code host and port of apps for discovery purpose. How does non-JVM app communicates (i.e. calls REST endpoints) with JVM apps and vice versa. How do we utilize the client side load balancing features for non-JVM apps. How do we utilize the config server for non-JVM apps? Sidecar for the Rescue Sidecar is built to solve the exact problem at hand. It allows any non-JVM apps to take advantage of Eureka, Ribbon and ConfigServer. Fig. Sidecar for non-jvm apps integration It includes an HTTP API to get all of the instances (by host and port) for a given service. This API is accessible to the non-JVM application (if the sidecar is on port 5678) at{serviceId}. For example, if we want to list the host and port details for sidecar-pdf itself, we can call the below API: [ { "host": "localhost", "port": 8058, "instanceId": "10.65.18.178:sidecar-pdf:5678", "secure": false, "uri": "", "serviceId": "SIDECAR-PDF" } ] Creating a Spring Boot Sidecar project You can create a new Spring Boot 2.x starter project from with the Spring Cloud dependencies (webflux, actuator, and eureka). Once project is created, import it into your favorite IDE (IntelliJ IDEA Ultimate in this tutorial) and manually add the sidecar dependencies. plugins { id 'org.springframework.boot' version '2.1.5.RELEASE' id 'java' } apply plugin: 'io.spring.dependency-management' dependencies { implementation 'org.springframework.boot:spring-boot-starter-webflux' implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'org.springframework.cloud:spring-cloud-netflix-sidecar' implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client' testImplementation 'org.springframework.boot:spring-boot-starter-test' } To enable the Sidecar, create a Spring Boot application with @EnableSidecar. The main entry point for our application will look like below: @EnableSidecar (1) @SpringBootApplication public class SidecarApp { public static void main(String[] args) { SpringApplication.run(SidecarApp.class, args); } } This annotation includes @EnableCircuitBreaker, @EnableDiscoveryClient, and @EnableZuulProxy. We need to configure few properties for sidecar i.e. port of non jvm app, health-uri of non jvm app and hostname. server: port: 5678 (1) spring: application: name: sidecar-pdf (2) main: web-application-type: reactive sidecar: port: 8058 (3) health-uri: (4) hostname: localhost (5) Here in above configuration, sidecar app is running at port 5678 on the same host as that of non-JVM app. non JVM app is running on port 8058. Sample Flask App We need to expose a health endpoint in non JVM app that will return health of the app. This way Eureka can check if non JVM is actually in a healthy state or not. In this sample usecase, we will develop a small python app that we will integrate into Spring Cloud using the sidecar approach. from flask import Flask, jsonify app = Flask(__name__) @app.route("/") def home(): return "Hello, World!" @app.route("/health.json") def health(): return jsonify({"status": "UP"}), 200 if __name__ == "__main__": app.run(debug=True) Run the python Flask app using inbuilt development server, using the below command: $ python app.py Test that health endpoint is working: $ curl -X GET { "status": "UP" } That’s the exact format of JSON that spring boot sidecar app will expect from non JVM app. Increasing the Hystrix timeout We may want to set a different hystrix timeout value for non-JVM app, which can be done as follow: #Hystrix timeout configuration hystrix: command.default.execution.timeout.enabled: false command.default.execution.isolation.thread.timeoutInMilliseconds: 60000 Starting the Eureka registry We need to start the service registry now so that sidecar-pdf service can register there. Fig. Sidecar service entry in eureka registry Calling non-JVM API using sidecar Now we can invoke REST APIs defined in non-JVM through sidecar app, for example we can access health endpoint defined in flask app through API gateway with the help of sidecar app: GET Where 8080 is the port of API Gateway. API Gateway converts the sidecar-pdf endpoint to actual host/port using service registry lookup and facilitates the call. Scalability Patterns for Sidecar Approach Often we need the non JVM app to scale as per the load of requests, with sidecar we have two approaches to handle the scalability. Using one sidecar per non JVM app (preferred approach) Using a load balancer (nginx) in-front of non JVM apps and then just use one Sidecar app for the entire set of non JVM apps. One sidecar per non JVM app (preferred approach) In this setup, we create and deploy one sidecar app for each instance of non-jvm app. This normally happens in the same docker container which hosts the non-jvm app. Each pair of sidecar and non-jvm app must run on the same host (different port of-course), otherwise the setup won’t work. Fig. Sidecar for non-jvm apps integration with spring cloud This is most preferable approach due to various reasons: All non-jvm apps get registered into Eureka Service Registry because each one of them is accompanied by a sidecar app. Eureka knows the health of each non-jvm app, because sidecar is monitoring each instance dedicated manner. Single Sidecar with nginx load balancer We can configure single sidecar app to talk to load balancer instead of individual non-jvm app. The architecture diagram for such kind of arrangement is shown below: Fig. Sidecar for non-jvm apps integration There are certain pros and cons of this approach: Both sidecar and the load balancer shall run on the same host. If a non jvm app goes down, sidecar app may not know about it since it is talking only to load balancer not the real app instances. Individual non-jvm app’s metrics can not be collected using this approach. Integrating non-jvm app that can not provide its own health check endpoint (elasticsearch) In many cases its not feasible to add a custom health check endpoint to non-JVM app, and we will have to provide an alternate mechanism for health check. For example if we want to use sidecar pattern for elastic search/kafka or Postgres, which likely won’t provide a health check URL as spring boot’s requirements. In this case the sidecar app itself be used to implement such requirement. public interface SidecarHealthIndicator extends HealthIndicator { } Top articles in this category: - SendGrid emails in Spring Boot - Sendgrid Dynamic Templates with Spring Boot - Basic Auth Security in Spring Boot 2 - Setting a Random Port in Spring Boot Application at startup - Feign RequestInterceptor in Spring Boot - Run method on Spring Boot startup - Redis based rate limiter in Spring Boot Find more on this topic: >>IMAGE
https://www.javacodemonk.com/spring-boot-sidecar-89fe7d32
CC-MAIN-2020-34
refinedweb
1,387
52.29
ReentrantLocks Lets me first try to explain reentrancy concept in a simplistic and generic way. We will come to Java specific details a bit later. Reentrancy is lay man terms means ability to enter again. In terms of thread it mean thread can acquired same lock again without blocking itself. Refreshing our multi threading concepts here. When you synchronize over an object the thread obtains a lock on it before entering the critical region (inside synchronized block) and till this thread releases this lock no other thread can acquire it and enter the critical region. NOTE : We do this to make compound operations atomic so that there is no race condition or invalid state. But what happens when we call an synchronized instance method from inside another synchronized instance method. Eg - public class TestClass { public synchronized void method1() { // some code method2(); } public synchronized void method2() { // some other code } } Here for a thread to enter either of the method has to obtain a lock on the instance (this) before entering the method. Now we are calling method 2 from method1 which is again synchronized with same instance (this). So thread will try to acquire lock again. If locks were not reentrant in nature we would have ended up in deadlock. Note : In Java all intrinsic locks are reentrant in nature. Note :. Explicit locks are introduced in Java 1.5 like semaphore, cyclic barrier etc. Now lets see Reentant lock in Java that was introduced in Java 1.5. NOTE : This lock supports a maximum of 2147483647 recursive locks by the same thread. Attempts to exceed this limit result in Error throws from locking methods. Example - Most generic example are Segments used in ConcurrenHashMap. Each segment is essentially a ReentrantLock that allows only single thread to access that part of the map. You can refer to the link above to see how it works. Adding relevant snippet here - NOTE : ReentrantLock was introduced since Java 5. Now lets see Reentant lock in Java that was introduced in Java 1.5. ReentrantLock in Java A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities like - - It takes a. It will succeed if the lock is available even if other threads are waiting - ReentrantLock(boolean fair) - It provides tryLock() method which acquires lock only if it is not held by other threads. We can also use timeout with this method which means thread will time out of waiting if lock is not acquired till the timeout value. This is better than intrinsic locks where you have to wait indefinitely. - It also provides facility to interrupt thread while waiting using. ReentrantLock provides a method called lockInterruptibly() [Acquires the lock unless the current thread is interrupted.], which can be used to interrupt thread when it is waiting for lock. - Lastly it also provides functionality to get list of all threads waiting for the lock - getWaitingThreads(Condition condition) (Returns a collection containing those threads that may be waiting on the given condition associated with this lock). NOTE : This lock supports a maximum of 2147483647 recursive locks by the same thread. Attempts to exceed this limit result in Error throws from locking methods. Example - class Test { ReentrantLock reLock = new ReentrantLock(); // ... public void m() { reLock.lock(); // block until condition holds try { // ... method body } finally { reLock.unlock() } } } WorkingAlso, the way reentrancy is achieved is by maintaining a counter for number of locks acquired and owner of the lock. If the count is 0 and no owner is associated to it, means lock is not held by any thread. When a thread acquires the lock, JVM records the owner and sets the counter to 0.If same thread tries to acquire the lock again the counter is incremented, and when the owning thread exist synchronized block counter is decremented. When count reaches 0 again lock is released. Most generic example are Segments used in ConcurrenHashMap. Each segment is essentially a ReentrantLock that allows only single thread to access that part of the map. You can refer to the link above to see how it works. Adding relevant snippet here - static final class Segment<K,V> extends ReentrantLock implements Serializable { //The number of elements in this segment's region. transient volatile int count; //The per-segment table. transient volatile HashEntry<K,V>[] table; } V put(K key, int hash, V value, boolean onlyIfAbsent) { lock(); try { //logic to store data in map } finally { unlock(); } } NOTE : ReentrantLock was introduced since Java 5.
http://opensourceforgeeks.blogspot.com/2016_06_07_archive.html
CC-MAIN-2018-09
refinedweb
754
64.3
* Stefano Lattarini wrote on Sun, Apr 10, 2011 at 10:26:40AM CEST: > <> > > > Related to automake bug#7648 and PR automake/491. > > > > * lib/am/yacc.am (am__yacc_c2h): New internal variable. > > (?GENERIC?%EXT%%DERIVED-EXT%, ?!GENERIC?%OBJ%): Get the name of > > the header dynamically at make runtime, so that its extension is > > modelled after the extension of the source. > > * automake.in (lang_yacc_target_hook): Adjust the calculation of > > `$header' accordingly. > > * tests/yacc-cxx.test: New test. > > * tests/yacc-d-cxx.test: Likewise. > > * tests/yacc-weirdnames.test: Likewise. > > * tests/yacc-basic.test: Update comments. > > * tests/yacc-d-basic.test: Likewise. > > * tests/yaccpp.test: Updated and extended. > > * tests/Makefile.am (TESTS): Update. > > > This patch has been applied in the meantime, but it lacked NEWS and > documentation updates. I will post them soonish in two different > patches. While at it, you might want to fix > +// Valid as C++, but deliberatly invalid as C. deliberately I think you meant to write: Valid C++, but deliberately invalid C. > +#include <cstdio> > +#include <cstdlib> > +int yylex (void) { return (getchar ()); } Extra parentheses, not needed for return. which is how far I got with reviewing before you pushed the patch. Cheers, Ralf
http://lists.gnu.org/archive/html/automake-patches/2011-04/msg00061.html
CC-MAIN-2014-35
refinedweb
190
51.65
CodePlexProject Hosting for Open Source Software Hi, I am starting out with Orchard, trying to write my first module. I would like to have a new role with a few permissions that I can access in the standard users and roles admin interface. How do I go about getting them created? I have implemented a class with IPermissionProvider interface using the Orchard.Comments permissions.cs file as a starting point. However, when I run the application the roles/permissions do not show up. I guess I am missing some sort of start up routine that creates them, but don't know what of where. Can anyone help point me in the right direction please?? Thanks, Rob. OK, after digging around some more I have found that the Permissions have been created, but the role has not. It looks like the new role should be created in the DefaultRoleUpdater.AddDefaultRolesForFeature process but I don't understand how/when this will be called. Any ideas?? thanks, Heres my code if that helps: using System; using System.Collections.Generic; using System.Linq; using Orchard.Environment.Extensions.Models; using Orchard.Security.Permissions; namespace HelloWorld { public class HelloWorldPermissions : IPermissionProvider { public static readonly Permission FunctionOne = new Permission { Name = "FunctionOne", Description = "Function One" }; public static readonly Permission FunctionTwo = new Permission { Name = "FunctionTwo", Description = "Function Two" }; public static readonly Permission FunctionThree = new Permission { Name = "FunctionThree", Description = "Function Three" }; public virtual Feature Feature { get; set; } public IEnumerable<Permission> GetPermissions() { return new[] { FunctionOne, FunctionTwo, FunctionThree }; } public IEnumerable<PermissionStereotype> GetDefaultStereotypes() { return new[] { new PermissionStereotype { Name = "FunctionController", Permissions = new[] {FunctionOne,FunctionThree} } }; } } } Why do you want to create a role? The permissions and default stereotypes should be all you need in most situations. Roles should be created by users, not modules. Hi, I am trying to integrate an existing booking system that has a few user roles. These roles are not related to publishing, like the default Orchard roles, but are more specific to the application. I would like to use Orchard to host the app as it is a great framework providing I good deal of the other requirements of the application. So, in this case I want my module to drive the role existence if possible and to setup the basic permissions against those roles by default. Then you should inject a IRolesService and call CreateRole on it. OK, I'll give it a go. Thanks for your help. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/263551
CC-MAIN-2017-22
refinedweb
434
56.96
tag:blogger.com,1999:blog-133223062009-07-16T12:34:43.059+08:00Left-Handed MouseWords are flowing out like endless rain into a paper cup...daytripper1021's KitchenIt's a quaint fine-dining restaurant located along Connecticut Street in Greenhills. The ambience is cool to the eyes, with the food great as well. The crowd seemed to be from the high-end, which should've made us prepared for what our total bill would be.I ordered the Pakbet Rice with Lechon Kawali......ANG SARAP! The missus ordered fish and Owa ordered the beef roast ---both good as well. The daytripper1021 Millionaire (2008)Aww ganda!!!daytripper1021 Finds: Bagoong Club Resto"Bagoong Club tayo."I couldn't believe my ears. It was the missus uttering those 3 beautiful words that I've longed to hear for quite some time now. We had ran out of places to eat at the Tomas Morato area on our Saturday dinner after Mass when she finally decided to give us an opportunity to try Bagoong Club Resto just off Tomas Morato."Baka naman puro bagoong dyan!" she would say whenever we daytripper1021 Kind Rewind (2008)Hmmm, I didn't really like this.daytripper1021 Review: JG Summit Employees RunThis was the very first orgranized race conducted inside Heritage Park. With runrio.com overseeing the race, the 5km route would comprise two loops while the 10km would have 4 loops. With an entrance fee of P250, it came with the standard racebib and a cool singlet with the JG summit colors.I have to admit that running 4 loops would seem monotonous. But that changed when the gun sounded. The lastdaytripper1021 MichaelI) daytripper1021 2 (2008)Funny, especially King Julian! daytripper1021 First(?) Laiya 10kIt daytripper1021 Program for 21k daytripper1021 Stories (2008)Nice popcorn movie. Classic Adam Sandler.daytripper1021 Finds: Chef's BistroThe missus and I've been wanting to try Chef's Bistro ever since we saw it a few months back just off Tomas Morato. Last Saturday evening, we were already going for Chinese but then decided to give Chef's Bistro a try since anyway, the missus hasn't failed yet in trying out the places to dine.When you enter the place, you'll find yourself looking up to appreciate the works of art hanging above daytripper1021 And DemonsGreat plot. Not bad. The missus and I almost got teary-eyed in the last scene. Watch it. daytripper1021 Review: Mizuno 10kAfter days of rain, the sun had greeted all the racers yesterday morning at the starting gate of the Mizuno 10k. The 15k racers had left a little before 5:30am since I saw the lead pack when I was heading for the BHS parking lot.I pinned my racebib and headed to the corral where I met Obet, an old football buddy in DBTI. He was running the 5k and said he usually averages below 30mins which has daytripper1021 ConversationsLast Thursday evening my college pals and I met again at V8 Baresto in Pasig, just off Shaw Blvd. This is my second time at the place since RS introduced me to below-zero SanMig Light, the only served beer that I'll probably love from now on.It was great seeing the gang again minus BN who's settled into the US now. RS will be flying to NZ by August, something that the gang was quite sad about, daytripper1021 Dresses (2008)Maganda na sana pero it failed me in the ending. But I'll forgive it since Katherine Heigl starring. Pwede na rin.daytripper1021' Pepper SteakSizzlin’ Pepper Steak opened their branch at Robinson’s Galleria a few weeks back. Since I was starting to run out of options on where to have my lunch, I decided to enter their resto and try the food.<?xml:namespace prefix = o />Well, the rice is sprinkled with black pepper with a round of sliced beef on the side on top a sizzling plate.Masarap naman. daytripper1021 Review: AutoReview 10kAll my official funruns, 3 5k and 1 10k, were staged at The Fort with Boni High Street as the Start/Finish mark. I've been accustomed to the route except for my 1st 10k which included traversing the Kalayaan flyover up to Gil Puyat Avenue and back.The AutoReview 10k race route which was to begin and end at Mckinley Hill, was my first at the said venue. It was only me and DB this time, with DB daytripper1021: Nokia Sports Tracker BetaHaving used my sister’s ipod/nike+ for 3 weeks for my practice runs, I have to say that I was quite impressed of Apple’s nifty gadget. It was able to provide distance, pace, and time data simply by pressing the center button or by looking at the screen. It even had the audio announcements that indicate progress after each km, halfway-point, last 500 meters, and upon finishing your workout (“Greatdaytripper1021 Man (2008)Great movie! daytripper1021 Wordpress.com MigrationSheeeesh!I just found out that you can't use plug-ins with Wordpress.com. Arrrgghhhh! Andami ko pa namang plug-ins dito which I'd like to use with Wordpress.com. Ahhhgghhhh!!!!Oh well, balik Blogger.com na muna. I'll see if there's a better 3-column layout out there.daytripper1021 WordpressI'm finding it difficult lately to fix this blog's layout since I found the current setup to be messy. Looking for aides from other bloggers has been tiresome so I've been thinking of migrating all my posts to Wordpress. I've read many blogs/articles stating that Wordpress is better than Blogger. Most of the Worpdress blogs I've visited seem to be better looking.I think the difficult part of thisdaytripper1021 Review: Botak 10k Paa-TibayanRACE DAY. I woke up 5am as usual. This being my fourth funrun, I've somehow gotten used to waking up this early (and excited!) whenever it's race day. This is going to be extra special since it's my first ever 10km race and what a way to start it off by entering the Botak Paa-Tibayan.<?xml:namespace prefix = o /> I arrived around 5:30am and started stretching off thanks to some warm-up exercisesdaytripper1021 #2247 when stopped by MMDAIt was a smooth drive from Makati Shangrila to the office this afternoon with MP on the driver's seat. Such is the benefit of being a Senior Manager, having one of your engineers drive for you.The end of our ride was to exit EDSA just before the Ortigas flyover so that we enter Galleria. We were about 20meters from the make-shift partition between the flyover and the underpass when we were daytripper1021 Man (2008)Funny! daytripper1021 na mag-10k!Dalawang tulog na lang, May 10 na. It’s been like forever since I registered at ROX BHS last Apr25 for the BOTAK Paa-bilisan 10k. I was able to beat the deadline since it will be an additional P50.00 if I registered beyond that date. True enough, the BOTAK booth at the Step-Up-For-A-Cause FunRun said they’re already charging P300.00/head. Buti na lang! Since then, I've been eagerly waiting for daytripper1021
http://feeds.feedburner.com/lefthandedmouse
crawl-002
refinedweb
1,201
72.36
If this happens, enter the BIOS Setup and press F9 to load factory defaults. Do not power down the system during the BIOS update process. If the screen is black except for a blinking cursor, or if you receive one of the following error codes, this status indicates that the boot. This originally was a cmos checksum error. The laptop wouldn't keep time or date correctly so I replaced the cmos battery and everything is. Cmos check error default loader - sorry, that POST codes is to redirect the output of the console to a serial port (see Redirecting Console Output). 2.0 User’s Guide (820-1188). The BIOS skips certain tests while booting, such as the extensive memory test. This decreases the time it takes for the system to boot. - Quiet Boot: This option is disabled by default.. This option is disabled by default. POST Codes TABLE 8-1 81. In the POST codes listed in TABLE 8-1, the first two digits are from port 81 and the last two digits are from port 80. You can see some of the POST codes from primary I/O the 8-2. Please note that I have not tried these myself. Camileo HD - what is the default loader? Hello, I have so many things lying around here I do not know what adapter supplied with the equipment. Can someone tell me the default adapter that comes with a HD TV? And how it looks like maybe? I'm going crazy because I'm afraid to hurt my camera / battery so if someone can give me help, I'd be a happy man! Thanks in advance guys Hello the default loader is called: adapter SYS1196-0505-W2E Best regards my computer has started and then came up with a checksum CMOS defaults loaded error message my computer started then showed the message CMOS checksum error defaults loaded What dose this mean and can it be fixed This indicates a problem of motherboard - best of cases are the battery on the motherboard has failed and must be replaced, the worst case would be a fault on the motherboard that can be corrected. Start by checking your motherboard or system reference manual or the manufacturer's web site for instructions on replacing the battery and reconfigure the BIOS/CMOS settings. "Rustywedge68" wrote in the new message: * e-mail address is removed from the privacy... * my computer started then showed the message CMOS checksum error defaults loaded What dose this mean and can it be fixed. GIF animations appear as still images. Animation load only after a right click and will image info. You have the image.animation_mode set to none as you can see in the list of system details. You need on the other (normal or once) How to make a different splash screen and loading gif using visual studio? Hello I use visual studio to develop my application webworks. In my screen config.xml, I see a spot that says loading screen. I want to do is have a splash screen at startup up and load a gif to the remote page. I can't separate the 2 here. The start screen and the gif are different images. Thanks for any help What you are looking for is not vascular supported by the features of the screen load. This is a screen of loading stated in all scenarios. change the default loading screenBy default, Flex displays the loading progress bar while it initializes an application. Does anyone know how to customize this loading screen... Examples would be useful. Found this site that has the source code By default, load after CreateInsert.ExecuteHi all! This is my first post so be polite ;) Well, I have this sensor which is binding for a DataControl I have a button that executes: my input text is empty, and all I want is 1.-l' entry should load a default value, such as INITIAL 2.-l' entry, it must be read-only or hidden. No idea? Is the easiest to do Find the view object in your business components and open go to the attribute you want and dubleclick or press the edit icon to open the properties set the initial value in the value field. HP 7510 - Always scan the default loading paper A few months ago the default parser using the paper feeder instead of scan during the scanning bed. Even if I close the tray of paper, he will roll/try feeding of the paper in the tray instead of scanning from the bed. I have reset settings one or two times, but he always return to the default values. Is there a setting somewhere that I can put it to bed scan? Hello Just test mine (also a 7510). You hear a 'ting' noise when putting paper in the ADF? I think that it has a sensor and you may need to clean the surface of the sensor. How to clean I don't know, the printer is a printer ready so I don't want to break it. Kind regards. Message + loading GIF Anyone having problems with this? It worked when I got on the phone and then stopped working some time in September'ish. I was hoping the udpate 'Nougat' would fix it, but I still get the same error Steps to follow: -Open Message + app -Choose the conversation -Press on the icon «+» -Select the 'Camera '. Enable/disable picture to GIF format * This is when the error message appears. If I select "RETRY" it fails again. I'm still able to take & send photos under the same menu, just not GIF. Halp. Thank you Erica Solved! Went to 'info App' > clicked on storage > Selected 'CLEAR DATA' > and press 'OK' to the ' this application data will be permanently deleted. " This includes all files, settings, accounts, databases, etc."command prompt. After the reset of the app and the parameters to the top of my personalized preferences, the GIF option works again Default preloader in captivate 8 outputs HTML5 will not change Hello I have a problem with the preloader in Captivate 8. I chose the preloader but when I publish the course as HTML5, there is still the default, preloader whatever one I choose. Not to mention that I can't find the default preloader among other preload - I'd just replace it. I can change it in the published result, but I don't want to do whenever I publish a course. Anyone encountered this problem? Do you know how can I fix? Thank you. The same problem occurs in HTML CP9, regardless what preloader you always choose by default, loader.gif file located here: C:\Program Files\Adobe\Adobe Captivate 9 x64\HTML\assets\htmlimages We have replaced the loader.gif in our c: drive so he always published with our required preloader. This only works if you use the preloader even on most of your projects. Otherwise, you can replace the loader.gif manually once you have published your course. See the gif loader? Hi, my application has a button to synchronize the data with the back-end system. When the user hits the button, I want an animation of charger on the screen while the action is running. For example something like this: I'm sure that there is an example of this somewhere, but I can't... Thank you! gif of loading in the problem of flash player 8!Hi all Can anyone help please? I'm having a problem of dynamically loading a gif file in my swf, running with flash player 8. I use flashMX to develop my site but I do not understand that dynamically loading GIF and png has recently introduced in the new flash 8 player so I test my swf in a browser window to see the correct result. First of all, let me explain what im trying to do. I use myMovieClip.loadMovie("http//:myURL.picture.gif"); I want to load the image into a target clip. It works well when I load jpg or swf files, but when I try to load a gif file in the same way, nothing happens? I then used loadMovieNum ("http//:myURL.picture.gif", 2); to see if I got a different result. This loads correctly the gif into a new level in my flash animation which is good in a way because I know that the flash player to view the gif. BUT I don't want the image into a new level in my flash movie, I need it in a target movieclip? Can anyone suggest why the loadMovie does not load the gif. Thanks for any help Claire Wall Thanks Kglad I thought that this might be the case, I'll have a go at loadMovieNum, but I think it will be difficult for it intergrate into the rest of my film. My employer will decide wheather than it's worth. Bravo guys back there solverd the mystery at least; _0 Claire Wall Hi all! Here's my problem: I am display a gif animated using the class presented here. It is used in a specific class loader that perform an action, while the gif is displayed (the gif is a spinner). Only once the operation is completed, the LoadingScreen came out of the screen. Here's my class loader:public class LoadingScreen extends MainScreen { /** * The action to be processed during the loading. */ Runnable action = null; private AnimatedGIFField _image; /** * Default constructor. * * @param action the action to manage. */ public LoadingScreen(Runnable action) { super(FIELD_VCENTER); this.setTitle(CommonTools.initTitleBar()); this.action = action; displayAnimation(); if(this.action != null) { UiApplication.getUiApplication().invokeLater(this.action); } } /** * Display the animation during the loading process. */ private void displayAnimation() { EncodedImage encImg = GIFEncodedImage.getEncodedImageResource("loading.gif"); GIFEncodedImage img = (GIFEncodedImage) encImg; _image = new AnimatedGIFField(img); getMainManager().setBackground(BackgroundFactory.createSolidBackground(Color.BLACK)); LabelField label = new LabelField("Connect..."){ public void paint(Graphics g) { g.setColor(Color.WHITE); super.paint(g); } }; HorizontalFieldManager containerSpin = new HorizontalFieldManager(FIELD_HCENTER); HorizontalFieldManager containerLabel = new HorizontalFieldManager(FIELD_HCENTER); containerSpin.add(_image); containerLabel.add(label); containerSpin.setPadding(Display.getHeight()>>2, 0, 0, 0); this.add(containerSpin); this.add(containerLabel); KasualApplication.isLoading = true; } } The problem is that the gif is not animated, while the action runs. But if I have a popup appears at the end of the action to let the loading screen, I see that the gif is animated. As well as the animation starts when the action is complete. I think something is wrong in the clock of my son, but I don't see that. Could you please help me? Thanks in advance. It's ok, the solution is described here.
https://sprers.eu/cmos-check-error-default-loader.php
CC-MAIN-2022-40
refinedweb
1,772
65.32
makepp -- Command line syntax for makepp ?: -?, A: -A, --args-file, --arguments-file, --assume-new, --assume-old, B: -b, --build-cache, --build-check, --build-check-method, C: -C, -c, D: --defer-include, --directory, --do-build, --dont-build, --dont-read, --do-read, --dry-run, --dump-makefile, --dump-makeppfile, E: -e, --environment-overrides, --env-overrides, F: -F, -f, --file, --final-rules-only, --force-copy-from-bc, --force-rescan, G: --gullible, H: -h, --help, --hybrid, --hybrid-recursion, --hybrid-recursive-make, I: -I, --implicit-load-makeppfile-only, --include, --include-dir, --in-sandbox, --inside-sandbox, J: -j, --jobs, --just-print, K: -k, --keep-going, L: --last-chance-rules, --load-makefile, --load-makeppfile, --log, --log-file, M: -m, --makefile, $MAKEFLAGS, $MAKEPP_CASE_SENSITIVE_FILENAMES, --makeppfile, $MAKEPPFLAGS, --md5-bc, --md5-check-bc, N: -n, --new-file, --no-builtin-rules, --no-cache-scaninfos, --no-implicit-load, --no-log, --no-path-executable-dependencies, --no-path-exe-dep, --no-populate-bc, --no-print-directory, --no-remake-makefiles, --no-warn, O: -o, --old-file, --out-of-sandbox, --override-signature, --override-signature-method, P: --populate-bc-only, --profile, Q: --quiet, R: -R, -r, --recon, --remove-stale, --remove-stale-files, --repository, --rm-stale, --root-dir, --root-directory, S: -s, --sandbox, --sandbox-warn, --sandbox-warning, --signature, --signature-method, --silent, --stop, --stop-after-loading, --stop-on-race, --stop-race, --symlink-in-rep-as-file, --symlink-in-repository-as-file, T: --traditional, --traditional-recursion, --traditional-recursive-make, V: -V, -v, --verbose, --version, --virtual-sandbox, W: -W, --what-if makepp [ option ... ] [ VAR=value ] [ target ... ] mpp [ option ... ] [ VAR=value ] [ target ... ] Makepp supports most of the command line options and syntax that other makes support. The hyphens between the words are always optional, and can also be replaced by an underscore. You specify a list of targets to build on the command line. If you do not specify any targets, the first explicit target in the makefile is built. You can assign variables on the command line which will override any assignment or environment variable in every Makefile loaded, e.g., makepp CFLAGS=-O2 Valid options are most of the standard make options, plus a few new ones: Read the file and parse it as possibly quoted whitespace- and/or newline-separated options. Specifies the path to a build cache. See makepp_build_cache for details. The build cache must already exist; see "How to manage a build cache" in makepp_build_cache for how to make it in the first place. Build caches defined on the command line may be overridden by a build_cache statement in a makefile or a :build_cache rule modifier . If you work with several different builds, it may be useful to set the environment variable MAKEPPFLAGS to contain --buil d-cache=/path/to/build/cache so that all of your builds will take advantage of the build cache by default. The name of a build check method to use to decide whether files need to be rebuilt. Possible values are target_newer, exact_match, See makepp_build_check for information on build check methods. Cd to the given directory before loading the makefile and trying to build the targets. This is similar to specifying a directory with -F, except that subsequent -C, -f, -F, -I and -R options are interpreted relative to the new directory, rather than the old one. Cd up to the directory containing a RootMakeppfile. Workaround for include statement before the rule that builds the include file. This happens by pretending the include statements come last in the makefile. That way the include statement is performable, but variable overrides or modifications may still fail, in which case you should set the problematic ones on the command line (whereas gmake ignores any variable setting from the include file that might influence how that file itself gets built). Do not build the specified file, or, if it is a directory, everything thereunder, even though makepp thinks it should -- or do build, overriding the opposite specification from a higher directory. This is useful if you built a specific file by hand using different compilation options. Without this option, if you compile a module by hand and then run makepp to compile the rest of the program, makepp will also recompile the module you compiled by hand, because makepp cannot guarantee that the build is correct if any of the files were not built under its control. With this option, you tell makepp that you really know what you are doing in the case of this particular file and you promise that it's ok not to rebuild it. For example, % cc -g -DSPECIAL_DEBUG -c x.c -o x.o # Special compilation by hand % makepp cc -g -O2 -c x.c -o x.o # Makepp just overrode your compilation here! cc x.o y.o -o my_program # Relinks. % cc -g -DSPECIAL_DEBUG -c x.c -o x.o # Do it again. % makepp --dont-build x.o # Tell makepp not to rebuild x.o even if it wants to. cc x.o y.o -o my_program # Now it relinks without recompiling. If you want special compilation options for just one module, it's often easier to edit the makefile than to compile by hand as in this example; see "Target-specific assignments" in makepp_variables for an easy way of doing this. If you put a RootMakeppfile(.mk) at the root of your build system, that directory and everything under it defaults to --do-build, while the overall root of your file system defaults to --dont-build. That way, everything inside your build system is built (if necessary) but nothing outside is attempted. If, in this scenario, you want external parts to always be built as needed, you must explicitly pick them up with load_makefile statements in one of the makefiles within your tree. You may have one RootMakeppfile(.mk) each, in separate build trees, and they will be loaded if one tree has dependencies in another one. But you are not allowed to have RootMakeppfile(.mk) in nested directories, avoiding funny effects that tend to arise when you accidentally call makepp --repository again in a subdirectory. These effects include duplicate rules through duplicate sources, or eternal build cache reimports because cached files have the right signatures but the wrong relative pathes. Override --dont-build for the specified file or directory. If you have a RootMakeppfile(.mk) at the root of your build system, but you want makepp to build something outside of your build system just this once, you must explicitly mark it as --do-build. If you specify --do-build for a file or directory under a RootMakeppfile(.mk), without --dont-build for a higher directory, then the root (and all else under it) of your build system defaults to --dont-build. To resolve conflicts between --dont-build and --do-build, the one with the most specific path takes precedence regardless of order. If the same path is specified with both --dont-build and --do-build, then the rightmost one wins. The options --dont-build and --do-build can be dangerous if you give the wrong hints to makepp, since you are asking makepp not to do checks it needs, to guarantee a correct build. But since they allow greatly reducing the number of checks, they can speed up your builds dramatically, as explained in potentially unsafe speedup methods. Do not read the specified file, or, if it is a directory, everything thereunder -- or do read, overriding the opposite specification from a higher directory. Generate an error rather than read files marked for --dont-read. See --sandbox. The filesystem root always defaults to readable. Dump the raw contents of the makefile(s) for the current directory (as determined by the position of this option relative to any -C options) to filename. Include files are interpolated, comments are stripped out and ifdef's are resolved. # line "file" markers are inserted as necessary. The final value of any non-reference scalars in the makefile's package are printed following the makefile. This is useful for debugging, but (currently) you won't necessarily be able use the dump file as an equivalent makefile, for example because it contains both the include statement and the interpolated file. Causes variables from the environment to override definitions in the makefile. By default, assignments within the makefile override variable values which are imported from the environment. Loads the specified Makefile or, if you specify a directory, the Makefile therein, instead of the one in the current directory -- any target specified to the right of this option is interpreted relative to the directory containing the Makefile. For the details of the directory case and RootMakeppfile see the explanation at the next option. This option can be useful if you execute makepp from unpredictable directories. For example, if you compile from within emacs and you have sources scattered all over your directory tree, the current working directory for the compilation command will be the directory the last source file you edited was in, which may or may not be the top level directory for your compilation. However, you can specify your compilation command as makepp -F /your/source/dir/top and this will work no matter what your current directory is. Because this option doesn't affect the directory relative to which subsequent -C, -f, -F, -I and -R options are specified, you can make targets relative to the current directory like this: makepp -F /foo/bar -C . mytarget Loads the specified Makefile or, if you specify a directory, the Makefile therein, instead of the one in the current directory. If you do not specify the -f option or the -F option, makepp looks first for a file in the current directory (or the directory specified by the rightmost -C option, if any) called, then RootMakeppfile.mk, Makeppfile, then Makeppfile.mk, then makefile, then Makefile. Multiple -F and -f options may be specified. The first two (RootMakeppfile) are special (whether given explicitly or found implicitly). There must be at most one of those two in any given build tree on which makepp is to operate. But there may be several if you build several disjoint trees in one go. Those two are looked for not only in the aforementioned directory, but also upwards from there. If one is found, it is loaded before any other. Ignore the dependencies and implicit targets of the rule unless the target is phony. When using build caches, always copy files in and out of the cache, even if the source and target are on the same filesystem. This is mainly useful for testing (emulating) the case in which they are not. Don't use cached scanner results from previous runs. Believe that the rules create what they declare, rather than checking. This is faster, but doesn't catch bugs in rules. Print out a brief summary of the options. This option is present to allow makepp to work with old makefiles that use recursive make extensively, especially multiply into the same dir. load more than one makefile from the same directory. In that case this option says to fall back to starting another independent instance of makepp. If this fails, try --traditional-recursive-make. If you do use this option, you will get log files in the each directory the fall back occurred in. To get rid of only them use makeppclean --logs --recurse or mppc -lr. Search the given directory for included makefiles. If implicit loading of makefiles is enabled, then automatically load only a file called RootMakeppfile, RootMakeppfile.mk, Makeppfile, or Makeppfile.mk, and not makefile or Makefile. This is useful if makepp has dependencies that are generated by some other flavor of make, and makepp can't read that flavor's makefiles in general. (You want to avoid this situation if possible, but it tends to arise while you're in the process of porting a legacy build system to makepp.) This has no effect if implicit loading is disabled. Interprets the argument n as the number of shell commands that can be executed in parallel. By default, makepp does not execute commands in parallel. Unlike some other versions of make, when jobs are executed in parallel, makepp directs their output to a file and only displays the output when the commands have finished. This prevents output from several different commands from being mixed together on the display, but it does mean that you might have to wait a little longer to see the output, and stderr messages will usually appear before stdout stuff, differing from terminal output. Native Windows Perls (i.e. Strawberry and ActiveState), because they do not support the Unix fork/exec paradigm, do not allow this option (Cygwin works fine!). As a partial replacement, you can use the --sandbox option there, though this is far less comfortable. Build as many files as safely possible, even if some commands have errors. By default, makepp stops when it encounters the first error, even if there are other files that need to be built that don't depend on the erroneous file. Activate limited special handling for pattern rules with '%' only on the target side. This is needed because normally, unlike traditional makes, makepp instantiates all rules with all available files from the bottom up, allowing it to find all creatable dependencies. Loads the specified makefile before any other makefiles, except for RootMakeppfile, or RootMakeppfile.mk above it, but do not consider this option for the purposes of determining the default target. If no other makefile is specified, then one is sought using the usual rules. If the specified makefile is the same makefile that is found using the usual rules, then this option has no effect. Changes the name of the log file to the indicated name. By default, the log file is called .makepp/log. This file is readable with makepplog, mppl. Specifies the default signature method to use for rules which do not have the :signature modifier in makefiles which do not have a signature statement. Does not override the choice made by command parsers, e.g. C/C++ compilers. Possible values are md5, C or c_compilation_md5, xml and xml-space. For more details, see makepp_signatures. When importing from a build cache, reject cached targets unless the MD5_SUM is present and matches the imported target. When populating a build cache, calculate and store the MD5_SUM in the build info if it isn't there already. This is slower and leads to more rebuilds, but it guarantees that imported targets and build info files correspond exactly. Print out commands without actually executing them -- unreliably where commands depend on previous results. This allows you to see what makepp will do, without actually changing any files. More precisely, makepp executes all recursive make commands as normal (but hopefully you're not using recursive make anywhere!). Other commands are simply printed without being executed. Even commands which are prefixed with @ or noecho are printed after the @ or noecho is stripped off. However commands prefixed with + should be executed, but currently are not. Warning: The commands that makepp executes with -n are not necessarily the same thing it will do without -n. File signatures do not change at all with -n, which means that makepp cannot perform exactly the same build tests that it does when the signatures are changing. This will occasionally make a difference if you are using MD5 signatures (which is the default for compilation commands) or if you have shell commands that might or might not change the date. For example, suppose that you generate a .h file via some sort of preprocessor. This can happen in a lot of different ways. For concreteness, suppose you automatically generate a list of prototypes for functions defined in each C module (see for how the cproto application works or for the similar cfunctions). prototypes.h : *.c cproto $(CPPFLAGS) $(inputs) > $(output) Then each .c file will include prototypes.h. The purpose of this is to maintain the forward declarations for all functions automatically, so if you change a function's signature or add a new function, you don't ever have to put in forward or extern declarations anywhere. You don't even have to declare the dependency of your .o files on this one -- makepp will see the include statement and automatically see if it needs to (re)run cproto. Now suppose you change just one .c file. What happens when you run makepp with -n in this case is that it realizes that prototypes.h needs to be remade. In all probability, remaking prototypes.h won't affect its signature--the file contents will probably be identical because no function arguments have been changed--so most of the time, nothing that depends on prototypes.h actually has to be recompiled. But makepp doesn't know that unless it's actually allowed to execute the commands. So it assumes that anything that depends on prototypes.h will also have to be recompiled. Thus in this example, changing one .c file will cause makepp -n to think that every single .c file needs to be recompiled, even though most likely the regular makepp command will actually not run all those commands. This situation isn't all that common, and can only occur if (a) you use a signature method that depends on file contents rather than date, as the default compilation signature method does, or (b) if you have shell commands that don't always change the date. E.g., with a traditional implementation of make that only looks at dates instead of file signatures, sometimes people will write commands like this: prototypes.h : $(wildcard *.c) # Hacked technique not necessary for makepp cproto $(CPPFLAGS) $(inputs) > junk.h if cmp -s junk.h prototypes.h; then \ rm junk.h; \ else \ mv junk.h prototypes.h; \ fi Thus if rerunning cproto on all the files produces exactly the same file contents, the file date is not updated. This will have exactly the same problem as the above example with makepp -n: it is not known whether the date on prototypes.h changes unless the command is actually run, so makepp -n cannot possibly be 100% accurate. (Note that using the traditional make -n will also have exactly the same problem on this example.) makepp -n should always print out more commands than a regular invocation of makepp, not fewer. If it prints out fewer commands, it means that makepp does not know about some dependency; some file is changing that it is not expecting to change on the basis of what it knows about what files each rule affects. This means that your makefile has a bug. Do not record the results of scanning, forcing it to be reperformed next time makepp runs. Don't automatically load makefiles from directories referenced (see "Implicit loading" in makepp_build_algorithm). By default, makepp automatically loads a makefile from any directory that contains a dependency of some target it needs to build, and from any directory that is scanned by a wildcard. Sometimes, however, this causes a problem, since makefiles need to be loaded with different command line variables or options, and if they are implicitly loaded before they are explicitly loaded by a recursive make invocation or the load_makefile statement, makepp aborts with an error. You can also turn off makefile loading on a directory-by-directory basis by using the no_implicit_load statement in one of your makefiles. Don't bother writing a detailed description of what was done to the log file. By default, makepp writes out an explanation of every file that it tried to build, and why it built it or did not build it, to a file called .makepp/log, readable with makepplog, mppl. This can be extremely valuable for debugging a makefile--makepp tells you what it thought all of the dependencies were, and which one(s) it thought changed. However, it does take some extra CPU time, and you might not want to bother. Do not add implicit dependencies on executables picked up from the command search path. If this option is specified, then makepp assumes that any executable whose behavior might change with a new version will be specified with a name containing a slash. This is useful for programs such as grep and diff, which always do basically the same thing even if their implementation changes, though you're better off using the builtin commands for grep. You may also need this for repositories on NFS clusters, where the same commands might not have the same timestamp everywhere, causing unnecessary rebuilds depending what machine somebody works on. Don't populate the build cache, but still import from it when possible. This is useful when the environment might cause targets to be generated differently, but makepp doesn't know about such dependencies. It's also useful to avoid thrashing the build cache with a huge number of concurrent writers that might interfere with one another. Turn off the entering or leaving directory messages. Ordinarily, makepp loads each makefile in, then looks to see whether there is a rule that specifies how to update the makefile. If there is, and the makefile needs to be rebuilt, the command is executed, and the makefile is reread. This often causes problems with makefiles produced for the standard Unix make utility, because (in my experience) often the make rules for updating makefiles are inaccurate--they frequently omit targets which are modified. This can cause makepp to remake a lot of files unnecessarily. You can often solve this problem by simply preventing makepp from updating the makefile automatically (but you have to remember to update it by hand). Don't print any warning messages to stderr, only to log file. Most warning messages are about constructs that you might see in legacy makefiles that makepp considers dangerous, but a few of them concern possible errors in your makefile. Pretends that the specified file has not changed, even if it has. Any targets that depend on this file will not be rebuilt because of this file, though they might be rebuilt if some other dependency has also changed. The file itself might or might not be rebuilt, depending on whether it is out of date with respect to its dependencies. (To prevent that, use --dont-build.) Same as --signature-method, but even overrides the choice made by command parsers. Generate an error rather than write files outside the "sandbox". Like --dont-build, more specific paths override less specific paths. The filesystem root defaults to out-of-sandbox if there are any --sandbox options. The purpose of the sandbox is to enable multiple concurrent makepp processes to safely operate on disjoint parts of the filesystem. In order for this to work reliably, concurrent sandboxes must not overlap, and each process must mark the sandbox of every other concurrent makepp process for --dont-read. See partitioning into sandboxes. Don't import for the build cache. This is useful when you want to donate targets to the cache, but you don't want to rely on the contents of the cache (e.g. for mission-critical builds). Output raw timestamps before and after each action. Specify the given directory as a repository (see makepp_repositories for details). Repositories are added in the order specified on the command line, so the first one you specify has precedence. All files in the directory (and all its subdirectories) are automatically linked to the current directory (and subdirectories) if they are needed. If you just specify a directory after -R, its contents are linked into the current directory. You can link its contents into any arbitrary place in the file system by specifying the location before an equals sign, e.g, -R subdir1/subdir2=/users/joe/joes_nifty_library. Don't load the default rule sets. If this option is not specified, and the variable makepp_no_builtin is not defined in the makefile, then a set of rules for compiling C, C++, and Fortran code is loaded for each directory. Ignore stale files rather then treating them as new source files, removing them if necessary in order to prevent them from being read by a build command. This is not the default because it deletes things, but it is often required in order for incremental building to work properly. For example, assume that there is an x.c file that looks like this: #include "x.h" int main() { return X; } Consider this makefile: $(phony default): x x.h: &echo "#define X 1" -o $@ At some point, you change the makefile to look like this: CFLAGS := -Idir $(phony default): x dir/x.h: &mkdir -p $(dir $@) &echo "#define X 2" -o $@ Now if you build from clean, x exits with status 2, but if you build while the old ./x.h file still exists and you don't specify --rm-stale, then x exits with status 1, because the include directive picks up the stale generated header file. If you build with --rm-stale, then ./x.h is removed, and the result is the same as that of a clean build, which is almost always a good thing. Note that if you build in a repository, you have to give this option there first, because the importing makepp doesn't know what might be stale in the repository. Don't echo commands and don't print informational messages like "Scanning" or "Loading makefile". Restrain this instance of makepp to a subtree of a normally bigger build tree. See partitioning into sandboxes. Downgrade violations of "in-sandbox" and "dont-read" to warnings instead of errors. See partitioning into sandboxes. After loading the top level Makeppfile, and any others explicitly or implicitly (through dependencies from other directories) loaded from there, makepp will stop itself (go to sleep). This happens before it analyzes anything else. It will tell you the command needed to wake it up again. If you do it in a Shell, you get the prompt and can then fore- or background it. If you do it within an IDE, it'll just sleep, and you can awaken it from another Shell. The intention is that you can start makepp this way before you're finished editing some files. Depending on your project structure and size, this can allow makepp to get a headstart of many seconds worth of work by the time you're done. If you use prebuild or $(make) it will stop when it gets to that point, so it might not be so useful. Nor will it consider regeneration of Makeppfiles, but this is not expected to happen frequently. Exit in error rather than only warning about a sandbox access collision that could be fixed. If a repository contains a symbolic link, then by default that symbolic link is imported as a link, which is to say that the target of the imported link need not be identical to the target of the symbolic link in the repository. If the --symlink-in-repository-as-file option is specified, then the symbolic link is imported as its target file, which is to say that the imported link points to the same target file as the symbolic link in the repository. This is useful if the symbolic link in the repository was intended to have the build-time semantics of a copy. This option is present to allow makepp to work with old makefiles that use recursive make extensively, especially with varying options. use different command line options on different invocations of recursive make. Before you use this, try --hybrid-recursive-make. --traditional-recursive-make option makes makepp do recursive makes the same way as the traditional make, allowing more makefiles to work, but then repositories and parallel builds do not work properly. This option is rarely needed any more, and makepp will tell you if it runs into a construct that requires it. If you do use this option, you will get log files piling up in the various directories this changes to. To get rid of only them use makeppclean --logs --recurse or mppc -lr. Verbose mode. Explains what it is trying to build, and why each file is being built. This can be useful if you think a file is being rebuilt too often. This option actually takes what would be written to the log file and displays it on the screen. It's usually easier to run makepp and then look at the output of makepplog, which allows various selections and some rewriting. Print out the version number. Don't rewrite build infos of files that were not created by this makepp process. See partitioning into sandboxes. Pretends the specified file has changed, so that any targets that depend on that file will be rebuilt. The file itself is not necessarily changed (it might or might not be rebuilt, depending on whether it is up to date with respect to its dependencies), but everything that depends on it thinks that it has changed. This can be useful for debugging a makefile. Makepp searches upwards for a file called .makepprc when starting and again after every -C or -c option. Each time it finds such a file, but only once per file, it will read the file and parse it as possibly quoted options on one or several lines. Unlike the option -A, the options will be parsed relative to the directory where the file resides. Makepp looks at the following environment variables: Any flags in this environment variable are interpreted as command line options before any explicit options. All command line arguments are put into this variable Note that the traditional make also uses this variable, so if you have to use both make and makepp, you might want to consider using MAKEPPFLAGS. Same as MAKEFLAGS as far as makepp is concerned. If this variable is not blank, then MAKEFLAGS is ignored. Sometimes this is useful instead of MAKEFLAGS if you have to use both make and makepp, and you need to keep the options separate. Makepp will attempt to determine whether its default directory is case sensitive by creating a file and then accessing it with a different case. Usually this works fine, as long as all the files you're accessing are on the same file system as your default directory, so you should rarely need to use this option. If this variable is present in the environment, its value ( 0 or empty string for false, anything else for true) will override makepp's choice. This variable is mostly useful on Windows, if you want to override makepp's default setting. If you don't treat filenames as case sensitive, then makepp converts all filenames to lowercase, which causes occasional difficulties. (E.g., emacs may will open several buffers to the same file.) Makepp does not currently support a build across several file systems well, if one is case sensitive and the other case insensitive.
https://metacpan.org/pod/makepp
CC-MAIN-2014-23
refinedweb
5,127
62.38
Links on Android Authority may earn us a commission. Learn more. Build your first basic Android game in just 7 minutes (with Unity) Making a fully working game for Android is much easier than you might think. The key to successful Android development— or any kind of development— is to know what you want to achieve and find the necessary tools and skills to do it. Take the path of least resistance and have a clear goal in. You’ll be relying on some additional libraries. The list goes on. Unity is a highly professional tool that powers the vast majority of the biggest selling titles on the Play Store. Unity on the other hand does most of the work for you. This is a game engine, meaning that all the physics and many of the other features you might want to use are already taken care of. It’s cross platform and it’s designed to be very beginner-friendly for hobbyists and indie developers. At the same time, Unity is a highly professional tool that powers the vast majority of the biggest selling titles on the Play Store. There are no limitations here and no good reason to make life harder for yourself. It’s free, too! To demonstrate just how easy game development with Unity is, I’m going to show you how to make your first Android game in just 7 minutes. No – I’m not going to explain how to do it in 7 minutes. I’m going to do it in 7 minutes. If you follow along too, you’ll be able to do the precise same thing! Disclaimer: before we get started, I just want to point out that I’m slightly cheating. While the process of making the game will take 7 minutes, that presumes you’ve already installed Unity and gotten everything set up. But I won’t leave you hanging: you can find a full tutorial on how to do that over at Android Authority. Adding sprites and physics Start by double clicking on Unity to launch it. Even the longest journey starts with a single step. Now create a new project and make sure you choose ‘2D’. Once you’re in, you’ll be greeted with a few different windows. These do stuff. We don’t have time to explain, so just follow my directions and you’ll pick it up as we go. The first thing you’ll want to do is to create a sprite to be your character. The easiest way to do that is to draw a square. We’re going to give it a couple of eyes. If you want to be even faster still, you can just grab a sprite you like from somewhere. Save this sprite and then just drag and drop it into your ‘scene’ by placing it in the biggest window. You’ll notice that it also pops up on the left in the ‘hierarchy’. Now we want to create some platforms. Again, we’re going to make do with a simple square and we’ll be able to resize this freehand to make walls, platforms and what have you. There we go, beautiful. Drop it in the same way you just did. We already have something that looks like a ‘game’. Click play and you should see a static scene for now. We can change that by clicking on our player sprite and looking over to the right to the window called the ‘inspector’. This is where we change properties for our GameObjects. Choose ‘Add Component’ and then choose ‘Physics 2D > RigidBody2D’. You’ve just added physics to your player! This would be incredibly difficult for us to do on our own and really highlights the usefulness of Unity. We also want to fix our orientation to prevent the character spinning and freewheeling around. Find ‘constraints’ in the inspector with the player selected and tick the box to freeze rotation Z. Now click play again and you should find your player now drops from the sky to his infinite doom. Take a moment to reflect on just how easy this was: simply by applying this script called ‘RigidBody2D’ we have fully functional physics. Were we to apply the same script to a round shape, it would also roll and even bounce. Imagine coding that yourself and how involved that would be! To stop our character falling through the floor, you’ll need to add a collider. This is basically the solid outline of a shape. To apply that, choose your player, click ‘Add Component’ and this time select ‘Physics 2D > BoxCollider2D’. Take a moment to reflect on just how easy this was: simply by applying this script called ‘RigidBody2D’ we have fully functional physics. Do the precise same thing with the platform, click play and then your character should drop onto the solid ground. Easy! One more thing: to make sure that the camera follows our player whether they’re falling or moving, we want to drag the camera object that’s in the scene (this was created when you started the new project) on top of the player. Now in the hierarchy (the list of GameObjects on the left) you’re going to drag the camera so that it is indented underneath the player. The camera is now a ‘child’ of the Player GameObject, meaning that when the player moves, so too will the camera. Your first script We’re going to make a basic infinite runner and that means our character should move right across the screen until they hit an obstacle. For that, we need a script. So right click in the Assets folder down the bottom and create a new folder called ‘Scripts’. Now right click again and choose ‘Create > C# Script’. Call it ‘PlayerControls’. For the most part the scripts we create will define specific behaviors for our GameObjects. Now double click on your new script and it will open up in Visual Studio if you set everything up correctly. There’s already some code here, which is ‘boiler plate code’. That means that it’s code that you will need to use in nearly every script, so its ready-populated for you to save time. Now we’ll add a new object with this line above void Start(): public Rigidbody2D rb; Then place this next line of code within the Start() method to find the rigidbody. This basically tells Unity to locate the physics attached to the GameObject that this script will be associated with (our player of course). Start() is a method that is executed as soon as a new object or script is created. Locate the physics object: rb = GetComponent<Rigidbody2D>(); Add this inside Update(): rb.velocity = new Vector2(3, rb.velocity.y); Update() refreshes repeatedly and so any code in here will run over and over again until the object is destroyed. This all says that we want our rigidbody to have a new vector with the same speed on the y axis (rb.velocity.y) but with the speed of ‘3’ on the horizontal axis. As you progress, you’ll probably use ‘FixedUpdate()’ in future. Save that and go back to Unity. Click your player character and then in the inspector select Add Component > Scripts and then your new script. Click play, and boom! Your character should now move towards the edge of the ledge like a lemming. Note: If any of this sounds confusing, just watch the video to see it all being done – it’ll help! Very basic player input If we want to add a jump feature, we can do this very simply with just one additional bit of code: if (Input.GetMouseButtonDown(0)) { rb.velocity = new Vector2(rb.velocity.x, 5); } This goes inside the Update method and it says that ‘if the player clicks’ then add velocity on the y axis (with the value 5). When we use if, anything that follows inside the brackets is used as a kind of true or false test. If the logic inside said brackets is true, then the code in the following curly brackets will run. In this case, if the player clicks the mouse, the velocity is added. Android reads the left mouse click as tapping anywhere on the screen! So now your game has basic tap controls. Finding your footing This is basically enough to make a Flappy Birds clone. Throw in some obstacles and learn how to destroy the player when it touches them. Add a score on top of that. If you get this down, no challenge will be too great in future But we have a little more time so we can get more ambitious and make an infinite runner type game instead. The only thing wrong with what we have at the moment is that tapping jump will jump even when the player isn’t touching the floor, so it can essentially fly. Remedying this gets a little more complex but this is about as hard as Unity gets. If you get this down, no challenge will be too great in future. Add the following code to your script above the Update() method: public Transform groundCheck; public Transform startPosition; public float groundCheckRadius; public LayerMask whatIsGround; private bool onGround; Add this line to the Update method above the if statement: onGround = Physics2D.OverlapCircle(groundCheck.position, groundCheckRadius, whatIsGround); Finally, change the following line so that it includes && onGround: if (Input.GetMouseButtonDown(0) && onGround) { The entire thing should look like this: public class PlayerControls : MonoBehaviour { public Rigidbody2D rb; public Transform groundCheck; public Transform startPosition; public float groundCheckRadius; public LayerMask whatIsGround; private bool onGround; void Start() { rb = GetComponent<Rigidbody2D>(); } void Update() { rb.velocity = new Vector2(3, rb.velocity.y); onGround = Physics2D.OverlapCircle(groundCheck.position, groundCheckRadius, whatIsGround); if (Input.GetMouseButtonDown(0) && onGround) { rb.velocity = new Vector2(rb.velocity.x, 5); } } } What we’re doing here is creating a new transform – a position in space – then we’re setting its radius and asking if it is overlapping a layer called ground. We’re then changing the value of the Boolean (which can be true or false) depending on whether or not that’s the case. So, onGround is true if the transform called groundCheck is overlapping the layer ground. If you click save and then head back to Unity, you should now see that you have more options available in your inspector when you select the player. These public variables can be seen from within Unity itself and that means that we can set them however we like. Right-click in the hierarchy over on the left to create a new empty object and then drag it so that it’s just underneath the player in the Scene window where you want to detect the floor. Rename the object ‘Check Ground’ and then make it a child of the player just as you did with the camera. Now it should follow the player, checking the floor underneath as it does. Select the player again and, in the inspector, drag the new Check Ground object into the space where it says ‘groundCheck’. The ‘transform’ (position) is now going to be equal to the position of the new object. While you’re here, enter 0.1 where it says radius. Finally, we need to define our ‘ground’ layer. To do this, select the terrain you created earlier, then up in the top right in the inspector, find where it says ‘Layer: Default’. Click this drop down box and choose ‘Add Layer’. Now click back and this time select ‘ground’ as the layer for your platform (repeat this for any other platforms you have floating around). Finally, where it says ‘What is Ground’ on your player, select the ground layer as well. You’re now telling your player script to check if the small point on the screen is overlapping anything matching that layer. Thanks to that line we added earlier, the character will now only jump when that is the case. And with that, if you hit play, you can enjoy a pretty basic game requiring you to click to jump at the right time. With that, if you hit play you can enjoy a pretty basic game requiring you to click to jump at the right time. If you set your Unity up properly with the Android SDK, then you should be able to build and run this and then play on your smartphone by tapping the screen to jump. The road ahead Obviously there’s a lot more to add to make this a full game. The player should to be able to die and respawn. We’d want to add extra levels and more. My aim here was to show you how quickly you can get something basic up and running. Following these instructions, you should have been able to build your infinite runner in no time simply by letting Unity handle the hard stuff, like physics. If you know what you want to build and do your research, you don’t need to be a coding wizard to create a decent game!
https://www.androidauthority.com/can-build-basic-android-game-just-7-minutes-unity-813947/
CC-MAIN-2022-05
refinedweb
2,187
72.05
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. Elgg dokuwiki integration ------------------------ Adds wikis for groups, and integrates the user and permission system of elgg. NOTES: ------------------------ - Although the goal is to be able to use vanilla dokuwiki with maybe some extensions, at the moment we provide a slightly modified version. - doku.php globals are not known, and had to be redeclared. - $_REQUEST not defined... dont know the reason but its something of elgg. -> yes elgg is mangling it... **had to comment it on elgg core** - acl is hardcoded, added a getACL method on the auth class to be able to generate the acl on the fly... also setACL and modified the admin plugin to be able to use the auth backend. - uses an 'elgg' auth module, and a special dokuwiki template. - dokuwiki defines the javascript $ symbol, which conflicts with elgg usage of jquery, so had to change all usage inside dokuwiki (just a few instances). INSTALLATION: ------------------------ - copy into the mod folder as dokuwiki - only for elgg <= 1.7.1: you need to modify .htaccess, adding &%{QUERY_STRING} to the pg/ rules. after modifying they should look like this: RewriteRule ^pg\/([A-Za-z0-9\_\-]+)\/(.*)$ engine/handlers/pagehandler.php?handler=$1&page=$2&%{QUERY_STRING} RewriteRule ^pg\/([A-Za-z0-9\_\-]+)$ engine/handlers/pagehandler.php?handler=$1&%{QUERY_STRING} this change is already in elgg trunk, but 1.7.1 didn't have it yet. LICENSE: ----------------------- - elgg plugin uses GPLv2. dokuwiki and mods (all under lib/dokuwiki) go with original license to original copyright holders. REPO: ----------------------- You can follow code changes or submit tickets at: TODO: ------------------------ - Maybe just one big wiki for the whole site using namespaces? --- [email protected]
https://bitbucket.org/rhizomatik/elgg_dokuwiki/
CC-MAIN-2017-47
refinedweb
292
67.96
the Wing IDE documentation. There is a chapter on the Wing debugger, and another chapter on advanced debugging techniques which covers debugging code launched outside of Wing and/or on another host. Thanks to Stephan Deibel of Wing for updating this information. Getting started — pdb.set_trace() To start, I’ll show you the very simplest way to use the Python debugger. 1. Let’s start with a simple program, epdb1.py. #: pdb.set_trace() So now your program looks like this. # epdb1.py -- experiment with the Python debugger, pdb import pdb a = "aaa" pdb.set_trace() b = "bbb" c = "ccc" final = a + b + c print final 4. Now run your program from the command line as you usually do, which will probably look something like this:) Execute the next statement… with “n” (next) At the (Pdb) prompt, press the lower-case letter “n” (for “next”) on your keyboard, and then press the ENTER key. This will tell pdb to execute the current statement. Keep doing this — pressing “n”, then ENTER. Eventually you will come to the end of your program, and it will terminate and return you to the normal command prompt. Congratulations! You’ve just done your first debugging run! Repeating the last debugging command… with ENTER. Quitting it all… with “q” (quit). Printing the value of variables… with “p” (print) When does pdb display a line?”. Turning off the (Pdb) prompt… with “c” (continue). Seeing where you are… with “l” (list) - The pdb.set_trace() statement is encountered, and you start tracing with the (Pdb) prompt - You press “n” and then ENTER, to start stepping through your code. - You just press ENTER to step again. - You just press ENTER to step again. - You just press ENTER to step again. etc. etc. etc. - Eventually, you realize that you are a bit lost. You’re not exactly sure where you are in your program any more. So… - You press “l” and then ENTER. This lists the area of your program that is currently being executed. - You inspect the display, get your bearings, and are ready to start again. So…. - You press “n” and then ENTER, to start stepping through your code. - You just press ENTER to step again. - You just press ENTER to step again. etc. etc. etc. Stepping into subroutines… with “s” (step into) Eventually, you will need to debug larger programs — programs that use subroutines. And sometimes, the problem that you’re trying to find will lie buried in a subroutine. Consider the following program. # the next statement that pdb would show you would be the first statement in the combine subroutine: def combine(s1,s2): and you will continue debugging from there. Continuing… but just to the end of the current subroutine… with “r” (return). You can do anything at all at the (Pdb) prompt …! … but be a little careful! [Thanks to Dick Morris for the information in this section.] ‘= “BBB” ‘.. The End Well, that’s all for now. There are a number of topics that I haven’t mentioned, such as help, aliases, and breakpoints. For information about them, try the online reference for pdb commands on the Python documentation web site. it’s so easy in perl, without entering any source code. why not in python? These days, if you are on Linux, you should definitely check out pudb. You’ll get the console-deployment advantages of pdb with a sane UI. pudb is cross platform! Also, I concur that it is fabulous, and uses pretty much the same controls as described above, so you can use it more-or-less based on the instructions from this blog post. winpdb () deserves special mention. It is graphical (& command-line) python debugger which offers a number of exceptional features. For example, it can break into threads without the need to set a breakpoint. In spite of it’s name, it is cross-platform (using wxPython for the GUI). It’s a much smaller download than the full-featured IDEs. Excellent tutorial. Thank you.It will be awesome if you could write a “mid level one” with more commands. Loved the writing, I’ll sure point people here when they ask me. One note you may want to add — the list command doesn’t reset. That is, hitting “l” will give you the 11 lines, and “l” again will give you the next 11 lines. It could throw you off if you have something like “l l” because you forgot the name of some var but you are now in the next 11 lines. Sadly there is little to be done to “fix” this anyway awesome tutorial. > If you press ENTER without entering anything, pdb will re-execute the last command that you gave it. Hey Marge! I just DOUBLED my productivity! Great tutorial, much appreciated. Nice post. I would recommend using ipdb instead of pdb. This gives you the same features as well as syntax highlightning and completion. You left out one of the best ad-hock troubleshooting ways to figure out what caused an exception post mortem. Run: After the exception you will be left at the python interpreter prompt staring at your exception. Type: And you will be put into the context of the stack of the last exception. This means you can print/examine the local variables at the point of failure after the failure has occurred without having to change a line of code or import pdb in advance. Another method that requires a little bit of preparation is to put the import pdb and pdb.set_trace() in a signal handler trap. That way you can do a kill -SIGNAL PID or if the signal you trap is INT you can just Ctrl-C to get dropped into the debugger at any point. The signal handler technique is good for testing release candidates and released code because the signal handler doesn’t create any runtime overhead. Signal handler example. Debugger starts with Ctrl-C: Put that at the top of your script and you can start debugging your script at any point by type Ctrl-C. Resume programe execution by typing exit at the Pdb prompt. “it’s so easy in perl, without entering any source code. why not in python?” Although not mentioned in this tutorial that’s certainly possible by invoking the pdb module on start-up, e.g. like this: Hi, first of all, thanks a lot for a great tutorial. 🙂 I’m debugging a program and I have a question. I’m using both the “!” and the global directive to change the value of a global variable that’s causing the code to blow-up. Important note: the code is run from a CD, so I have no permanent access to the source. localpath = u’/some//path’ # notice the double slash, that’s the issue !global localpath; localpath=u’/some/path’ pp localpath u’/some//path’ So, can someone tell me why this is happening? A lot depends on the context. For example, here is a small chunk of code: Your commands would work fine with this script. But if you uncomment the line that is commented out, then localpath is a local (not global) variable, and you can’t reset it. I’ve tried resetting the “xm” variable (which is local to the main() function) from within a pdb debugging session, and I can’t. I don’t understand why not. It looks like a bug (or a limitation) in pdb to me. I’ve posted a question on comp.lang.python Kev Dwyer had a good response to the question that I posted on comp.lang.python. You can read his reply at The heart of the matter is: I think this may be the issue raised in bug 5215 (), committed in r71006. Apparently “The fix was made for 2.7, and not backported to 2.6.” So this is a bug in versions up through 2.6. The u and d debug commands are your friends and can answer your question as a frivolity. As you move up and down the stack the local scope changes to that level in the call stack. the l (lower-case L) command returns information appropriate to the stack. I find it useful to start an interactive Python session by typing “python” alone at a command prompt and then playing with different variations of the code until I am comfortable that the code does what I think it does. I like to write dense code and often bump up against the edges of idioms and have to deal realistically with the language lawyers to write reliable code. Oustanding, thanks. Our build code is written in Python and even though I used to admin it, I had never needed the debugger. Well, in this case, the code I was adding tested fine outside our environment but was getting a wrong value from somewhere during run-time, and I needed to use the debugger on the running process in our enviornment. I had no idea how to do this, but once I found your page I had the answers I needed in less time than it took me to write this note. Thanks. very nice! let me mention a trap. if when executing the program you redirect standard output, you will never see the pdb prompt. It looks like the Python documentation needs a Debugging HOWTO. How about we write it? 🙂 If you’d like to contribute your nice tutorial, we can work on a new document on bugs.python.org. I have started to work on the missing debugging HOWTO in the official documentation: Everyone is welcome to give feedback in that bug report. I’ll try to find time in the coming months and put it in the docs of Python 2.7, 3.2 and 3.3. Thanks for a really great introductory tutorial. Very helpful, and more good stuff in the comments too! I want to also add that with good tests, you almost never need to use a debugger – maybe once every couple of years. This is because there are fewer bugs to begin with, but also because getting to the right point in your program might require setting the value of several variables or input data. If you’re only going to be doing it once, a debugger is probably easier than writing a test. But if you’re going to be doing it twice or more, writing a test is generally easier than using a debugger. And if you have a bug, you need to be writing a test anyhow. So pretty soon, you just get in the habit of writing a test from the outset. Knowing how to use pdb is still very useful though for those occasional times when you simply cannot figure out what is going on. Great tutorial; maybe needs ‘refreshing’ for current Pythons eg 2.7 that do not need the ‘p’ command to evaluate / print expressions ?? QUESTION: probably related to ‘phillip’ s comment above: – I am trying to debug an app that makes use of MLT & has a ‘GUI’ interface; where occasionally it appears to go into an infinite loop: the app GUI locks as well as the whole desktop…(only way out is Ctrl-Alt-F2 to open a root console, & kill the process… top shows Python is using about 8% of the CPU and very gradually increasing memory footprint..). The app also outputs ‘status’ messages (corresponding to key / button clicks etc) to the console … I have used python -m –trace app > python_trace.log 2>&1 to capture a log of ‘where the app is stuck’ … I can see source lines of one of the inner modules that are ‘suspicious’ (ie logical analysis of the code indicates there are fail cases present). SO then I try python -m pdb app & setting a break-point on a line I can see is executed in the loop (after deleting the corresponding .pyc file); I have to use the full path name to the inner_module.py file. The breakpoint is APPARENTLY not hit .. meaning apps seems to run fine, and can eventually hit the lock-up, but no breakpoint is triggered (only .way out is as described above) SO then I add import pdb pdb.set_trace() at the same point, and start the app normally (ie type ‘app’ at the command line). The app stops at the expected point & pdp prompt appears like: (Pdb) on_frmMain_key_press_event (Pdb) (NOTE the on_frmMain_key_press_event message is the first status message printed by the app without the pdb.set_trace() statement after it has drawn the first GUI screen..) But I cannot type more than 1 character & the pdb console (ONLY) locks … but I can switch to another terminal window (same desktop, same user) and kill the process….at which point all the chars/commands that I was trying to type into the pdb console appear along with the message ‘Killed’ (expected due to kill -9 from root). ? how to proceed ? is ‘pudb’ the right tool ? (have not checked it out yet…); or is there a way from the root console to start pdb & ‘attach’ to the app that was started at the ‘user’ console like you can with gdb ?? (so hopefully pdb IO is not blocked on the same channel as the app IO…) thanks.. Try winpdb. It allows for threaded apps and seamless remote disconnected debugging at the console and with the gui fronend. pudb is console only and is not as tightly written performance and stability wise. At least not the last time I tried to use it which was several years ago. Pingback: Kenny Chowdhary · Python for Scientific Computing! (installing numpy, scipy, and matplotlib from source) Excellent tutorial for a beginner such as myself. Thank you so much! Pingback: Python debugging | Some Things Are Obvious Having read this I believed it was very informative. I appreciate you spending some time and effort to put this information together. I once again find myself personally spending a lot of time both reading and leaving comments. But so what, it was still worthwhile! I seldom leave a response, but i did a few searching and wound up here Debugging in Python | Python Conquers The Universe. And I actually do have a couple of questions for you if it’s allright. Is it only me or does it look like a few of these comments come across as if they are coming from brain dead individuals? 😛 And, if you are posting at additional sites, I would like to follow everything fresh you have to post. Would you list of every one of all your social sites like your linkedin profile, Facebook page or twitter feed?
https://pythonconquerstheuniverse.wordpress.com/2009/09/10/debugging-in-python/
CC-MAIN-2022-05
refinedweb
2,444
73.07
The following code fails to run on Python 2.5.4: from matplotlib import pylab as pl import numpy as np data = np.random.rand(6,6) fig = pl.figure(1) fig.clf() ax = fig.add_subplot(1,1,1) ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99) pl.colorbar() pl.show() The error message is C:\temp>python z.py Traceback (most recent call last): File "z.py", line 10, in <module> pl.colorbar() File "C:\Python25\lib\site-packages\matplotlib\pyplot.py", line 1369, in colorbar ret = gcf().colorbar(mappable, cax = cax, ax=ax, **kw) File "C:\Python25\lib\site-packages\matplotlib\figure.py", line 1046, in colorbar cb = cbar.Colorbar(cax, mappable, **kw) File "C:\Python25\lib\site-packages\matplotlib\colorbar.py", line 622, in __init__ mappable.autoscale_None() # Ensure mappable.norm.vmin, vmax AttributeError: 'NoneType' object has no attribute 'autoscale_None' How can I add colorbar to this code? Following is the interpreter information: Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> Note: I am using python 2.6.2. The same error was raised with your code and the following modification solved the problem. I read the following colorbar example: from matplotlib import pylab as pl import numpy as np data = np.random.rand(6,6) fig = pl.figure(1) fig.clf() ax = fig.add_subplot(1,1,1) img = ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99) fig.colorbar(img) pl.show() Not sure why your example didn't work. I'm not that familiar with matplotlib. (This is a very old question I know) The reason you are seeing this issue is because you have mixed the use of the state machine (matplotlib.pyplot) with the OO approach of adding images to an axes. The plt.imshow function differs from the ax.imshow method in just one subtly different way. The method ax.imshow: The function plt.imshow: plt.colorbarfunction). If you want to be able to use the plt.colorbar (which in all but the most extreme cases, you do) with the ax.imshow method, you will need to pass the returned image (which is an instance of a ScalarMappable) to plt.colorbar as the first argument: plt.imshow(image_file) plt.colorbar() is equivalent (without using the state machine) to: img = ax.imshow(image_file) plt.colorbar(img, ax=ax) If ax is the current axes in pyplot, then the kwarg ax=ax is not needed.
https://pythonpedia.com/en/knowledge-base/2643953/attributeerror-while-adding-colorbar-in-matplotlib
CC-MAIN-2020-16
refinedweb
426
63.36
This is an implementation of cubic spline interpolation based on the Wikipedia articles Spline Interpolation and Tridiagonal Matrix Algorithm. My goal in creating this was to provide a simple, clear implementation that matches the formulas in the Wikipedia articles closely, rather than an optimized implementation. Spline algorithms are a way to fit data points with a set of connecting curves (each one is called a spline) such that values between data points can be computed (interpolated). There are various types/orders of equations that can be used to specify the splines including linear, quadratic, cubic, etc. A set of N points requires N-1 splines to connect them. Each spline is described by an equation (e.g., a polynomial). The coefficients in those polynomials are initially unknown, and the spline algorithm computes them. Once the coefficients are known, to compute the Y for any particular X you find the spline that is relevant to that X and use those coefficients to evaluate the polynomial at that X. To compute the splines' coefficients, the algorithm uses the standard linear algebra procedure of creating a system of equations and then solving that system. The equations embody the constraints on the splines which are that they intersect the points to be fitted, and that at each junction between splines the prior splines' first and second derivatives equal the successor splines' first and second derivatives. That doesn't provide enough constraints yet to fully determine the system so two more are added that specify that the second derivatives of the first and last points are zero. Those final constraints are called the "natural" spline style. Alternatively, user Quience (below) has provided code such that the caller can specify the slopes at the first and last points explicitly. With either set of additional constraints, the system becomes solvable. Fortunately, the system of equations is constrained to be tridiagonal, so that solving the system is O(N) using the Thomas algorithm, rather than the O(N^3) of Gaussian elimination for example. Here is the tridiagonal matrix corresponding to the above image which is computed to solve the Wikipedia article's example problem: 2.0000, 1.0000, 0.0000 1.0000, 2.6667, 0.3333 0.0000, 0.3333, 0.6667 This example is a little too small to illustrate a tridiagonal matrix well, but tridiagonal means that only the main diagonal (row == col) elements, the diagonal below the main (row == col + 1), and the diagonal above the main (row == col - 1) can have non-zero values. row == col row == col + 1 row == col - 1 The linked Wikipedia articles do a good job of explaining splines so I won't provide any more redundant explanation here. I followed the Wikipedia articles closely for the implementation, including variable names where possible. Some comments refer to equation numbers in the articles (which unfortunately will break at some point when the articles change). The primary classes are CubicSpline and TriDiagonalMatrix. The CubicSpline class uses TriDiagonalMatrix to solve the system of equations. Generally, if you are just using this code to fit curves, you would just use CubicSpline. CubicSpline TriDiagonalMatrix In the source, there is also Program.cs which has a Main() that runs several functions that exercise the classes. Main() Here is an example use of CubicSpline from the TestSplineOnWikipediaExample() method in Program.cs. This part of the code sets up the test data to be fitted: TestSplineOnWikipediaExample() // Create the test points. float[] x = new float[] { -1.0f, 0.0f, 3.0f }; float[] y = new float[] { 0.5f, 0.0f, 3.0f }; Console.WriteLine("x: {0}", ArrayUtil.ToString(x)); Console.WriteLine("y: {0}", ArrayUtil.ToString(y)); The next step is to create a float[] (here called xs) which contains the values of X for which you want to determine the Y values. In other words, these are the X values for which you want to plot points on the graph: float[] xs // Create the upsampled X values to interpolate int n = 20; float[] xs = new float[n]; float stepSize = (x[x.Length - 1] - x[0]) / (n - 1); for (int i = 0; i < n; i++) { xs[i] = x[0] + i * stepSize; } And next is to use CubicSpline to fit the data and then evaluate the fitted spline at the xs values generated above: // Solve CubicSpline spline = new CubicSpline(); float[] ys = spline.FitAndEval(x, y, xs); Console.WriteLine("xs: {0}", ArrayUtil.ToString(xs)); Console.WriteLine("ys: {0}", ArrayUtil.ToString(ys)); // Plot string path = @"..\..\spline-wikipedia.png"; PlotSplineSolution("Cubic Spline Interpolation - Wikipedia Example", x, y, xs, ys, path); This displays the following console output: x: -1, 0, 3 y: 0.5, 0, 3 xs: -1, -0.7894737, -0.5789474, -0.368421, -0.1578947, 0.05263158, ... ys: 0.5, 0.3570127, 0.2245225, 0.1130267, 0.0330223, -0.005029888, ... The PlotSplineSolution() method uses the .NET Framework's System.Windows.Forms.DataVisualization.Charting classes to plot a graph of the input points and the fitted and interpolated spline curve. The image created by this is the one included in this article (above). PlotSplineSolution() System.Windows.Forms.DataVisualization.Charting The core of the spline fitting function sets up the tridiagonal matrix and then uses it to solve the system of equations. The tridiagonal matrix is not represented as a matrix but rather three 1-d arrays, A, B, and C. Array A is the sub-diagonal, B is the diagonal, and C is the super-diagonal, to match the Wikipedia article names. Once the tridiagonal matrix is set up, the spline fitting function calls TriDiagonalMatrix.Solve(). TriDiagonalMatrix.Solve() Here is the tridiagonal matrix solver function from TriDiagonalMatrix.cs: // Solve the system of equations this*x=d given the specified d. public float[] Solve(float[] d) { int n = this.N; if (d.Length != n) { throw new ArgumentException("The input d is not the same size as this matrix."); } // cPrime float[] cPrime = new float[n]; cPrime[0] = C[0] / B[0]; for (int i = 1; i < n; i++) { cPrime[i] = C[i] / (B[i] - cPrime[i-1] * A[i]); } // dPrime float[] dPrime = new float[n]; dPrime[0] = d[0] / B[0]; for (int i = 1; i < n; i++) { dPrime[i] = (d[i] - dPrime[i-1]*A[i]) / (B[i] - cPrime[i - 1] * A[i]); } // Back substitution float[] x = new float[n]; x[n - 1] = dPrime[n - 1]; for (int i = n-2; i >= 0; i--) { x[i] = dPrime[i] - cPrime[i] * x[i + 1]; } return x; } The following is the core of the CubicSpline.Eval() method. This method takes a float[] x which contains the x values you want to compute y for using the fitted splines. The loop is stepping through each value of x computing the corresponding value of y. CubicSpline.Eval() y x for (int i = 0; i < n; i++) { // Find which spline can be used to compute this x int j = GetNextXIndex(x[i]); // Evaluate using j'th spline float t = (x[i] - xOrig[j]) / (xOrig[j + 1] - xOrig[j]); // equation 9 in the Wiki y[i] = (1 - t) * yOrig[j] + t * yOrig[j + 1] + t * (1 - t) * (a[j] * (1 - t) + b[j] * t); } Here is another chart. This one was created by the TestSpline() method in Program.cs: TestSpline() In addition to computing the Y for each X, one can also compute the slope of the spline. It is specified as q' in the Wikipedia article and the formula is equation 5. The method to evaluate slope is CubicSpline.EvalSlope() and it returns a float array (one slope value for each X you provide). You must have called either Fit() (or FitEndEval()) before calling this. Here is an example use: Y X float // Try slope (spline is already computed at this point, see above code example) float[] slope = spline.EvalSlope(xs); // Same xs as first example above string slopePath = @"..\..\spline-wikipedia-slope.png"; PlotSplineSolution("Cubic Spline Interpolation - Wikipedia Example - Slope", x, y, xs, ys, slopePath, slope); Here is the resulting chart: The normal cubic spline algorithm works on 2-d points where y is a function of x, i.e. y=f(x), and y has a single value for each x. However, user LutzL in the comments below has pointed out a clever way to use splines to fit sequences of points that do not fit this definition: y=f(x) [You] invent a third time coordinate that increases monotonically, T=0,1,2,3 or start with T=0 and increment by the distance from the current point to the next point. Use then the (T,X) and (T,Y) pairs to compute two cubic splines x(t) and y(t) and draw the curve (x(t),y(t)) as result. I implemented this as a new static method FitParametric() in the CubicSpline class. Here is an example use: static float[] x = { 0.5f, 2.0f, 3.0f, 4.5f, 3.0f, 2.0f }; float[] y = { 4.0f, 2.0f, 6.0f, 4.0f, 3.0f, 5.0f }; float[] xs, ys; CubicSpline.FitParametric(x, y, 100, out xs, out ys); Thanks to user YvesDaoust for the correct term for this method. Surprisingly, this simple implementation is more than twice as fast as a C# port of the very terse Numerical Recipes in C++ (Press et al, Cambridge University Press, 2002) version in the scenario I benchmarked. I only benchmarked one scenario and did not investigate the results much so shouldn't draw too many conclusions, but it is interesting anyway. I benchmarked the implementations using a Release build by Visual Studio 2012, running on a Win 7 Core i5 2500k. I used 10,000 random points and interpolated 100,000 points, with 100 repetitions. See the Program.cs method TestPerf(). I did not include the Numerical Recipes in C++ version in the code due to licensing and copyright issues. [Update: This performance difference is mostly likely due to the fact that the NR in C implementation does a search for the appropriate spline for each evaluation, whereas I require that the points to be evaluated are sorted and therefore I can do a simultaneous traverse.] TestPerf() The Chart class in the System.Windows.Forms.DataVisualization.Charting namespace worked well for my simple charts. The classes in there make it very easy to render a chart and save to a file. Chart It is unfortunate that C#/.NET generics (still) cannot handle creating a generic type constraint that allows you to handle both float and double with the same code. I implemented this with floats because I typically start with floats until it's proven that I need doubles. double This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) public void Fit(float[] x, float[] y, bool debug = false, float start=float.NaN, float end=float.NaN) dx1 = x[1] - x[0]; m.C[0] = 1.0f / dx1; m.B[0] = 2.0f * m.C[0]; r[0] = 3 * (y[1] - y[0]) / (dx1 * dx1); if (float.IsNaN(start)) { dx1 = x[1] - x[0]; m.C[0] = 1.0f / dx1; m.B[0] = 2.0f * m.C[0]; r[0] = 3 * (y[1] - y[0]) / (dx1 * dx1); } else { m.B[0] = 1; r[0] = start; } General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/560163/Csharp-Cubic-Spline-Interpolation?msg=4601571
CC-MAIN-2016-44
refinedweb
1,895
62.48
Custom Data Binding, Part 1 Michael Weinhardt November 9, 2004 Summary: Michael Weinhardt covers several new designer improvements and additions that allow you to take a single custom type and turn it into a fully-fledged list data source without a single line of code. We also take a look at future enhancements you'll see in Beta 2. (15 printed pages) Download the winforms11162004_sample.msi file. Introduction Recently, I was lucky enough to have the opportunity to visit a friend in Taipei, Taiwan, which was great because I love to travel. In particular, I love trying to learn a little of the local language because I think it's both respectful and adds to the whole travel experience. In Taipei, the language is the modern, standardized version of Chinese, with a Taiwanese flavor (I am no language expert, so apologies for any inaccuracies in my understanding). Numbers tend to be the first places I start, as I imagine most people do. I tried to get to the stage where I could say the numbers without having to internally translate from English to Chinese. As such, I was looking for new fun ways to practice them. One example is waiting at a pedestrian crossing. In Taipei, a vast majority of crossing signals not only have green and red men, but also have numbers counting down in seconds the time pedestrians have left to cross. It was particularly challenging trying to say the numbers in Chinese as fast as they counted down, and initially, I was phenomenally bad. Figure 1. Taiwanese pedestrian-crossing countdown Inspired by the pedestrian-crossing-signals game, I decided to write a Windows Forms application to provide a similar style of computerized game for a wide range of words, numbers, and phrases. The idea was to have the application flash a random sequence of these at you as you attempt to translate to spoken Chinese. Capturing the Data The first step was determining what pieces of information would need to be gathered for each word, number, or phrase. I decided on the English representation (what you're translating from), the Pinyin version (what you're translating too), an image of the Chinese character or characters (how it's written in Chinese), and the pronunciation (so you can hear how it sounds). Note Pinyin is a system of spelling Chinese using the Roman alphabet and, hopefully, making it easier for language-challenged people like me to learn. See for more information. With this in mind, I set about building a UI to capture the desired game data. From a portability viewpoint, I didn't want to use a database to persist this information. Instead, I chose to use a custom Word type to capture the information, which at some point would utilize the serialization support in the .NET Framework: [Serializable] public class Word { // Constructor public Word(string english, string pinyin, Image character, byte[] pronunciation); // Properties public string English { get; set; } public string Pinyin { get; set; } public Image Character { get; set; } public byte[] Pronunciation { get; set; } } Displaying the Data Since we'll need multiple words, numbers, and phrases, we'll ultimately need a list data source composed of zero or more Word data items. Subsequently, we need to determine the style of UI most suited to displaying a list of Words. There are two common UI patterns display list data sources: - Details view UI - List view UI Details view UIs classically display one list data item at a time, where each public property of the data item is associated with its own control. Because only one data item is visible at any given moment, details view UIs usually provide navigational commands to assist the user in moving back and forwards through the list. Common mechanisms for exposing these commands to the UI are through the menu bar and as a VCR-style tool bar. Figure 2 illustrates a details view UI. Figure 2. Details view UI List view UIs, as the name suggests, displays multiple data items using a list or data grid style of control, like the form shown in Figure 3. Figure 3. List view UI Details view UIs are great for ad-hoc, infrequent browsing for situations where there are simply too many data item properties to properly display in a list. They are also useful when you need greater flexibility and control over the UI than what is typically afforded you by list style controls. On the other hand, list view UIs suit data entry, particularly when sequentially entering non-trivial rows of data at once. I anticipate entering bulk data for the game and, consequently, think a list view UI is best. So, to build the UI, we need to turn a single Word type into a list data source, which should then be displayed through a suitable list or data grid style control. In this situation, I think the DataGridView control is a great candidate. Next, we need to ensure any edits are synchronized between the DataGridView and list data source. Because I'm anticipating a lot of data, a VCR-style control is needed to simplify navigation. Any such control should ensure that currency (current list position) is synchronized between it and the DataGridView as navigation occurs in either. Data Binding The underlying mechanism designed to support all of these requirements is data binding, which is the ability to hook up, or bind, a data source to a UI control. Once bound, the primary job of data binding is to automatically manage editing and navigation synchronization. In Windows Forms 1.x, list data sources need to at least implement System.ComponentModel.IList before they can be bound. For developers, this means either constructing their own strongly-typed IList implementations or, at the very least, utilizing one of the .NET Framework pre-baked IList implementations, such as simple type arrays and ArrayList. Once built, a list control's DataSource property needs to be set to the IList implementation to establish the binding. If your data source is a typed data set (.xsd), you can declaratively bind by adding a DataSet component to your form to expose your typed data set to data bound controls on a form. Actual binding is achieved by setting its DataSource and DataMember properties appropriately, as illustrated in Figure 4. Figure 4. Pre-Windows Forms 2.0 declarative data binding Alternatively, you could take the programmatic route: Unfortunately, these niceties don't work too well for list data sources based on custom types. First, such data sources are simply not visible to the Windows 1.x Forms Designer. Second, while you can bind an IList implementation to a list control using the DataSource property, you can't enjoy DataGrid's support for automatically detecting and creating columns at design time. Not only that, you'd need to potentially implement several other interfaces to ensure your list data source will support basic editing and navigation. As you can tell, once you've created your custom type in Windows Forms 1.x, you need to commit to turning it into a useful list data source. That is, unless you are using Windows Forms 2.0, in which case a few mouse clicks and perhaps a minute's worth of effort is all you'll need. From Little Things, Big Things Grow To take a custom type, turn it into a fully fledged list data source, bind it to a DataGridView, and add VCR-style navigation requires first configuring your project to recognize your custom type as a data source. Adding a Project Data Source A custom type can be registered as a data source in one of two ways. First, by selecting Visual Studio .NET 2005 | Data | Add New Data Source... and so on. Second, by opening the Data Sources tool window (Visual Studio .NET 2005 | Data | Show Data Sources) and clicking the Add New Data Source... link if no data sources exist, as shown in Figure 5. Figure 5. Empty Data Sources tool window If your project already has data sources in Visual Studio .NET 2005 Beta 1, you'll need to bring up the Data Sources tool window's context menu and select Add New Data Source... or click the left-most tool bar icon. Whichever way you elect to add a new data source, you'll initiate the Data Source Configuration Wizard and begin by specifying a data source type, shown in Figure 6. Figure 6. Selecting a Data Source Type dialog box As you can see from Figure 6, there are four data source types to choose from, including database servers (such as Microsoft SQL Server), local database files (such as Microsoft Access .mdb files), Web services with Web methods that return suitable objects (such as DataSets or custom objects like Word), and objects. And, when they say "object," they mean object (or really they mean "type"). You can choose any type from the next step in the wizard. Figure 7 illustrates selecting the custom Word type. Figure 7. Selecting the Word object Although Business Object is used here, you can actually register non-business objects such as your project's application settings, also shown in Figure 7. Of course, we'll select Word, click the Next button, and be left with the expected data source, conveniently added to the Data Sources tool window for us to view and enjoy, as illustrated in Figure 8. Figure 8. Data Sources tool window with selected object data source The DataConnector The next step is the big one —converting our data source from a single type to a proper list data source to which data bound list controls can bind. This is achieved for us by the new DataConnector component. Simply dragone from the Toolbox onto your form and set its DataSource property to the appropriate list data source, shown in Figure 9. Figure 9. Adding and configuring a DataConnector component This has the effect of causing the Windows Forms 2.0 designer to generate the following code to InitializeComponent: When this line of code is executed, the DataConnector inspects the data source to determine whether or not it implements IList. If it doesn't, DataConnector creates an internal IList implementation to house the custom type. Specifically, it creates a generic BindingList implementation that we'll discuss later in this article. Note that if the DataConnector is bound to an IList implementation, these steps are not necessary, but the results are the same. Ultimately, DataConnector consumes the custom type and, by proxy, represents it to data bound controls as a valid list data source using its own IList implementation. That's a really neat trick. Binding the DataGridView by Proxy We can now drag a DataGridView onto the form and set its own DataSource property. As you may have guessed, that property will be set to the DataConnector itself, allowing it to dutifully fulfill its role as a by proxy list data source. This is illustrated in Figure 10. Figure 10. DataGridView bound to the DataConnector Here's the generated InitializeComponent code: Because DataConnector is presenting a fully-fledged list data source to a world populated by data-bindable controls, with appropriate data item descriptor information, controls such as the DataGridView can inspect and utilize that information to automatically generate appropriate columns, another feature illustrated in Figure 10 above. The DataNavigator The final feature we wanted to implement was VCR-style navigation. Luckily for us, the usefulness of the DataConnector doesn't stop here because it is also turns out to be a currency manager. This means that DataConnector tracks the current data item and implements methods, properties, and events that you can code against to programmatically navigate the list data source. public class DataConnector : ... { ... // Properties public int Position { get; set; } public object Current { get; } ... // Methods public void MoveFirst(); public void MoveLast(); public void MoveNext(); public void MovePrevious(); ... // Events public event EventHandler CurrentItemChanged; public event EventHandler PositionChanged; public event AddingNewEventHandler AddingNew; } Basic navigation using these members would look something like: private void firstToolStripMenuItem_Click(object sender, EventArgs e) { this.wordDataConnector.MoveFirst(); } private void prevToolStripMenuItem_Click(object sender, EventArgs e) { this.wordDataConnector.MovePrevious(); } private void nextToolStripMenuItem_Click(object sender, EventArgs e) { this.wordDataConnector.MoveNext(); } private void lastToolStripMenuItem_Click(object sender, EventArgs e) { this.wordDataConnector.MoveLast(); } Calling any of these methods ensures that the DataConnector will move to the desired list data item and synchronize any such navigation with and controls bound to the DataConnector so they can update their UIs appropriately. On the other hand, you might decide that dragging a DataNavigator onto your form and binding it to the DataConnector, once again using the DataSource property, is an easier solution because it achieves the same effect. Not only that, you get Add and Delete functionality thrown in for free. The DataNavigator is a specialized tool strip control that you can extend to expose a wide variety of list data source related activities, such as save and load. With the DataNavigator added, the completed UI looks like Figure 11. Figure 11. Completed UI with no code To get here, we created a custom Word type (not a list), used one wizard, added one DataConnector, one DataGridView, and one DataNavigator, and set three properties. All in all, that's got to be considered easy. Automatic for the Developer People But, why spend your time dragging, dropping, and configuring the DataConnector and DataNavigator components? An even easier route is to simply drag the custom data source onto your form, as shown in Figure 12. Figure 12. Dragging a project data source onto a form All the work we just did to build-up a UI is performed by the Windows Forms 2.0 designer automatically when you use this technique, and you won't even have enough time to think about making coffee, let alone actually drinking it. And, the code that is generated to make it all work is basically the same code that you would need to write yourselves. Note At the time of writing, this method only resulted in generating three of the four expected columns. I would expect both approaches discussed in this article to be consistent by Windows Forms 2.0 release, if not Beta 2 The Data Sources tool window actually lets you do a little more magic. For example, you can configure the data source to generate either a list (DataGridView) or details style UI when dragged onto a form. Also, you can specify what sort of control each item property should be generated as. Figure 13 shows both of these items in action. Figure 13. Instructing UI generation Finally, if you select Customize from either of the menus shown in Figure 13, you can add to and remove from the list of default choices. Some Last Minute Adjustments Of course, the designer can only go so far in what it produces. For example, it can't guess what column order you want and, because a byte array could a wide variety of data formats, it did it's best and interpreted our Pronunciation byte array property as an image although we want to store a sound. To fix the former problem is simply a case of using the DataGridView's column editor to update column order. To fix the latter, I needed to create a custom DataGridView pronunciation column/cell implementation that allows you to single mouse-click each cell to hear the pronunciation. The implementation details are beyond the scope of this article, but check out the February 2004 installment of Wonders of Windows Forms installment for further details, noting that, while there are a few changes between the alpha version I wrote against and Beta 1, the concepts are the same. Finally, to simplify the process of adding bitmaps and wav files to the Character and Pronunciation columns, I added drag-and-drop support to both. Please explore the code sample for further insight into the implementation details. With these little fixes in place, the form finally looks like the sample we saw back in Figure 3. Windows Forms 2.0 Beta 2 Improvements The technology discussed in this article is based on Windows Forms 2.0 Beta 1. Although I'm going to say it, it goes without saying that things will change. Fortunately, Joe Stegman and Steve Lasker from Microsoft graciously spent some time enumerating the set of updates and improvements you can expect in data binding in Beta 2. First, a few names have changed, most notably DataConnector will become BindingSource and DataNavigator will become BindingNavigator. Personally, I believe the new names more accurately represent what these components do. That is, they represent and work with binding sources rather than the data sources they expose. Second, the Data Sources tool window and Data Source Configuration Wizard have been streamlined. The updated Data Sources tool window offers more guidance up front, as shown in Figure 14. Figure 14. Beta 2 Data Sources window The same theme is applied to the Data Source Configuration Wizard. The options in the Choosing a Data Source step have been simplified and each option provides descriptive information about what it can do. Figure 15 illustrates this for choosing an Object data source. Figure 15. More simplified and informative data source selection The next step in the wizard, selecting the type to bind to, has been improved threefold, as Figure 16 demonstrates. Figure 15. More functional type binding selection You can hide or show "Microsoft." and "System." assemblies, XML comments associated with your type are displayed as descriptive text, and the underscores that follow the namespaces in the list, as shown in Figure 7, are now correctly displayed as periods. Finally, dragging a data source onto a form as a DataGridView will generate well-named DataGridView and DataGridViewColumn names. Currently, the naming is simply the camel-cased DataGridView column type name and a number (dataGridViewTextBoxColumn1, for example). Expect Beta 2 names to be something like the data source field name plus the DataGridView column type name. Please note that the concepts described in this article won't change, even though the technology will. I would still encourage you to familiarize yourself with Beta 1, if possible. Where Are We? Even though I haven't yet completed the application, something must have gone right just by working on the data entry side of things because I successfully ordered ten BBQ pork buns in Chinese. Also, it provided a great way to explore two techniques for taking a single custom type and turning it into a genuine list data source, which relied heavily on the new data-binding workhorse, the DataConnector. DataConnectors ensure edits are synchronized between bound controls and list data sources and, in conjunction with the VCR-style DataNavigator, ensures navigation is also synchronized. The complete development cost was minor, to say the least, especially when compared with Windows Forms 1.x. However, if you play with the sample, you might notice that certain features are missing, including sorting, searching, and filtering. Although we could implement these on a form, it would make more sense from a reusability standpoint to implement them on the list data source itself, thereby ensuring any data bound control can utilize them. In the next installment, we'll take a look at how to add sorting and searching by implementing IBindingList, which can easily by accomplished by deriving from the new BindingList<T> generic type. We'll also look at adding advanced sorting and filtering by implementing IBindingListView. Finally, we'll add persistence with the .NET Framework's serialization support to complete the game data management part of the application. Acknowledgements I'd like to thank the people of Taipei for being very generous, warm, respectful, and considerate. I'd especially like to thank the one person who made the trip possible. Finally, special thanks to Steve Lasker and Joe Stegman of Microsoft for their time and for their input into this piece. References Object Binding by Steve Lasker, CoDe Magazine, September/October 2004.
https://msdn.microsoft.com/en-us/library/ms951295.aspx
CC-MAIN-2015-14
refinedweb
3,328
51.48
This article has been dead for over three months: Start a new discussion instead Reputation Points: 0 [?] Q&As Helped to Solve: 0 [?] Skill Endorsements: 0 [?] 0 Hi, I am running Ubuntu in my machine and I tried this code: import java.io.InputStream; import java.io.IOException; import java.applet.Applet; import java.awt.*; public class execute extends Applet{ String output=""; public void init(){ try { // Execute command String command = "ls"; Process child = Runtime.getRuntime().exec(command); // Get the input stream and read from it InputStream in = child.getInputStream(); int c= in.read(); while ((c = in.read()) != -1) { output =output+((char)c); } in.close(); } catch (IOException e) { } System.out.println(output); } public void paint(Graphics g){ g.drawString(output,60,100); } } And then wrote this html file and saved it in the same directory: <html> <head><title>Applet</title> <body> <applet code="execute.class", </body> </html> What I'm trying to do here is to run the ls shell command in an applet and display the results. The code compiles with no errors. But when I open the html file in the browser, I just get a gray square. Is this because of security issues that I don't get anything? Or is it because of an error in the code? Or is it the browser that is the problem? Please help, Thanks in advance. Reputation Points: 274 [?] Q&As Helped to Solve: 277 [?] Skill Endorsements: 14 [?] 0 check ProcessBuilder too Edit change preHistorical Applet to JApplet and check You
http://www.daniweb.com/software-development/java/threads/359948/executing-a-terminal-command-with-an-applet
CC-MAIN-2014-15
refinedweb
250
71.51
jskeet 02/02/25 09:59:09 Modified: src/main/org/apache/tools/ant TaskAdapter.java Log: JavaDoc comments. Revision Changes Path 1.14 +25 -11 jakarta-ant/src/main/org/apache/tools/ant/TaskAdapter.java Index: TaskAdapter.java =================================================================== RCS file: /home/cvs/jakarta-ant/src/main/org/apache/tools/ant/TaskAdapter.java,v retrieving revision 1.13 retrieving revision 1.14 diff -u -r1.13 -r1.14 --- TaskAdapter.java 19 Feb 2002 16:17:25 -0000 1.13 +++ TaskAdapter.java 25 Feb 2002 17:59:09 -0000 1.14 @@ -56,28 +56,35 @@ import java.lang.reflect.Method; - - /** - * Use introspection to "adapt" an arbitrary Bean ( not extending Task, but with similar - * patterns). + * Uses introspection to "adapt" an arbitrary Bean which doesn't + * itself extend Task, but still contains an execute method and optionally + * a setProject method. * * @author [email protected] */ public class TaskAdapter extends Task { + /** Object to act as a proxy for. */ Object proxy; /** - * Checks a class, whether it is suitable to be adapted by TaskAdapter. + * Checks whether or not a class is suitable to be adapted by TaskAdapter. * - * Checks conditions only, which are additionally required for a tasks - * adapted by TaskAdapter. Thus, this method should be called by - * {@link Project#checkTaskClass}. + * This only checks conditions which are additionally required for + * tasks adapted by TaskAdapter. Thus, this method should be called by + * Project.checkTaskClass. * * Throws a BuildException and logs as Project.MSG_ERR for - * conditions, that will cause the task execution to fail. + * conditions that will cause the task execution to fail. * Logs other suspicious conditions with Project.MSG_WARN. + * + * @param taskClass Class to test for suitability. + * Must not be <code>null</code>. + * @param project Project to log warnings/errors to. + * Must not be <code>null</code>. + * + * @see Project#checkTaskClass(Class) */ public static void checkTaskClass(final Class taskClass, final Project project) { // don't have to check for interface, since then @@ -100,7 +107,7 @@ } /** - * Do the execution. + * Executes the proxied task. */ public void execute() throws BuildException { Method setProjectM = null; @@ -147,12 +154,19 @@ } /** - * Set the target object class + * Sets the target object to proxy for. + * + * @param o The target object. Must not be <code>null</code>. */ public void setProxy(Object o) { this.proxy = o; } + /** + * Returns the target object being proxied. + * + * @return the target proxy object + */ public Object getProxy() { return this.proxy ; } -- To unsubscribe, e-mail: <mailto:[email protected]> For additional commands, e-mail: <mailto:[email protected]>
http://mail-archives.apache.org/mod_mbox/ant-dev/200202.mbox/%[email protected]%3E
CC-MAIN-2018-30
refinedweb
403
54.79
Building Rasp Pi into Mindstorms projects with BrickPi Programmed Stuff To get access to the robot and thus start scripts that you have programmed yourself, you should use the headless capabilities of the Rasp Pi and log onto the BrickPi via SSH or VNC. If have built your own robot, it will still have to receive instructions by means of software. The whole point of an on-board processor is to write programs that use the Rasp Pi to control the robot's motors and sensors. You will, therefore, need to set up the Raspberry Pi to support your favorite programming language. Listing 1 shows how to download and install the necessary tools for using Python. Listing 1 Setting up BrickPi for Python ¤¤nonumber $ git clone $ cd BrickPi_Python $ sudo apt-get install python-setuptools $ sudo python setup.py install If you write your own Python script for the robot, you have to import the library for controlling the BrickPi functions with the command from BrickPi import * and also initialize the serial port with the command BrickPiSetup(). Because the robot almost always uses servo motors, you will need to call the following command for each of the motors: BrickPi.MotorEnable[Port] = 1 Depending on which of the four motor ports of the BrickPi the cable of each servo motor has been plugged into, you will have to replace the Port placeholder with one of the PORT_A to PORT_D identifiers. In the last initialization step, you need to inform the BrickPi library of the types of sensors used and define any ports that are connected to sensors. For an ultrasound sensor plugged into port 1, the corresponding line will read: BrickPi.SensorType[PORT_1] = TYPE_SENSOR_ULTRASONIC_CONT All in all, five ports are reserved for sensors. In contrast to the motors, sensors do not receive letters as identifiers. Instead, they are numbered in ascending order. Each sensor type is associated with a predefined constant. For example, the constant indicating a color sensor is TYPE_SENSOR_COLOR_FULL. To finish the sensor initialization process, invoke the BrickPiSetupSensors() command. Communicating with the individual robot parts is easy while the program is running. A servo motor is controlled via BrickPi.MotorSpeed[Port] = rate_of_speed For the rate of speed, you can enter a whole number between -255 and +255, with 0 bringing the motor to a stop. Positive and negative values designate the direction for turning. To make the robot turn, you can slow down the motor of one wheel to zero while the other motor continues to turn the other wheel. For interrogating sensors, you can call BrickPiUpdateValues() to obtain the current values. By subsequently invoking BrickPi.Sensor[Port], you obtain a numeric value for a particular sensor. For example, an ultrasound sensor will return the distance to the nearest object. Using the C language, you can program the BrickPi according to the same scheme. At the beginning of the program, import the required header files (Listing 2). The C names for constants and methods in the BrickPi library correspond exactly to those for the Python version, except that you call the methods according to the rules of C syntax. Fans of Scratch will find an implementation that makes graphical building blocks available for building the control functions. Listing 2 Including Header Files in C ¤¤nonumber #include <wiringPi.h> #include "BrickPi.h" Conclusion If your Mindstorms set was purchased before September 2013, you will easily see many advantages of the BrickPi over the NXT. The BrickPi has considerably more computing power, more RAM, and more mass storage, without any dependency on an external PC. You can easily integrate every type of software and hardware available for the Rasp Pi into the robot. Moreover, the BrickPi opens up the possibility of installing not only Mindstorms sensors but also the less expensive Arduino sensors. The BrickPi also offers advantages for those who possess the new EV3 system. Granted, the EV itself already has a clock speed of over 300MHz, 64MB of RAM, a USB host mode, and an operating system based on Linux. However, the Rasp Pi offers even more options. If you still do not have a Mindstorms set, you won't have to go to the expense of buying an expensive new starter package. You can build out the BrickPi with separately purchased sensors, motors, and Lego Technic building blocks. Despite its benefits, the BrickPi is not a particularly easy system for beginners; you will fare better if you have some basic knowledge of Mindstorms and Linux. Infos - Dexter Industries: - French BrickPi Distributor: - SD card image: « Previous 1 2 Next » Buy this article as PDF Pages: 4 (incl. VAT)
http://www.raspberry-pi-geek.com/Archive/2014/03/Building-Rasp-Pi-into-Mindstorms-projects-with-BrickPi/(offset)/2
CC-MAIN-2018-34
refinedweb
768
53.71
With cool APIs like Open Notify we can programmatically access the location of the International Space Station to determine when it is flying by a specific location, and with Twilio SendGrid we can send an email notification when this occurs. Let's walk through how to do this in Python using Redis Queue to schedule the email. - Requests - for making HTTP requests Make sure you create and activate a virtual environment, and then install these with the following command: pip install sendgrid rq-scheduler==0.11.0 requests==2.26. Accessing the International Space Station's location Let’s begin by writing code that will call the Open Notify API for a given set of coordinates and print the next time that the ISS will fly by that latitude and longitude. Create a file called iss.py (an “International Space Station” module) in the directory where you want your code to live, and add the following function: from datetime import datetime import pytz import requests ISS_URL = '')) The get_next_pass function in this code will make a request to the Open Notify API with a given latitude and longitude, check to see if there is a valid response, and then convert the timestamp received from the API into a Python datetime object and print the corresponding time that the ISS will fly overhead next. To test this code, open a Python shell and run the following two lines. For this example we will use the Twilio Headquarters in San Francisco as our test location (latitude: 37.788052, longitude: -122.391472): from iss import get_next_pass get_next_pass(37.788052, -122.391472) You should see something along the lines of: Next pass for 37.788052, -122.391472 is: 2021-12-09 23:58:11+00:00 with an appropriate timestamp. Now we can move on to writing code to send an email. can update iss.py to include code to send an email: from datetime import datetime import os import pytz import requests from sendgrid import SendGridAPIClient from sendgrid.helpers.mail import Mail ISS_URL = '')) Remember to make sure the SendGrid API key environment variable is set before trying to run this code.. If you want to test this code, open up a Python shell and run the following, replacing the to_email argument with your own email address: from iss import send_email send_email('[email protected]', '[email protected]', 'Look up!') You should receive an email telling you to look up after running this code. Scheduling a task with RQ Scheduler Now that we have a function that provides us with a datetime, and a function that sends an email, we can use RQ Scheduler. Create another file called schedule_notification.py, and add the following code to it: from datetime import datetime from redis import Redis from rq_scheduler import Scheduler import iss scheduler = Scheduler(connection=Redis()) # Get a scheduler for the "default" queue # Change these latitude and longitude values for any location you want. next_pass = iss.get_next_pass(37.788052, -122.391472) if next_pass: scheduler.enqueue_at(next_pass, iss.send_email, '[email protected]', '[email protected]', 'Look up! The ISS is flying above you!') This is just a quick script that calls the other functions you wrote, one to find out when the ISS is passing by your location next, and another that will send you an email. In this example, I'm using the coordinates for the Twilio office in San Francisco, but you can change the latitude and longitude to be wherever you are. Before being able to run this code, you have to make sure you're running a Redis server, an RQ worker, and the RQ Scheduler process all in other terminal windows or as background processes. a notification: python schedule_notification.py Now all you have to do is wait... Traveling through time This is great, but it's understandable if you don't feel like waiting around just to see if your code works. If you want instant gratification, we can use the time traveling method. On Unix based systems, you can change your system time using the date command. If the space station is supposed to fly by on December 5th, 2019 at 4:02, then on Linux you can run date -s "12/05/2019 03:02:00". On OSX you would run date 1205160219 (you can even use the -u argument if you want to use a UTC time zone, which corresponds to the datetime your Python code is printing). If all else fails, there are also GUI options to change your computer's time on most operating systems. On OSX you can set (and reset) this by opening "Date & Time" in your System Preferences If you want to receive notifications every time the ISS passes by instead of just once, you can schedule another notification after each message by modifying your send_email function in iss.py and adding the appropriate import statements: from redis import Redis from rq_scheduler import Scheduler scheduler = Scheduler(connection=Redis()) # Get a scheduler for the "default" queue) scheduler.enqueue_at(next_pass, iss.send_email, '[email protected]', '[email protected]', 'Look up! The ISS is flying above you!') To Infinity and Beyond Now that you can receive emails whenever the International Space Station flies by, you can use RQ Scheduler for all of your Python scheduling needs. The possibilities are endless. For another example of a project that uses RQ, check out this post on how to create a phone number that plays computer generated music that sounds like the soundtracks of old Nintendo games. - Twitter: @Sagnewshreds - GitHub: Sagnew - Twitch (streaming live code): Sagnewshreds
https://www.twilio.com/blog/scheduling-international-space-station-emails-in-python-with-sendgrid-and-redis-queue
CC-MAIN-2022-05
refinedweb
927
60.04
Investing The Foresight saga, continued Shares have had their worst year for decades. Interest rates are close to record lows. Where should you invest? STOCKMARKETS fell for the third year running in 2002. Measured by the decline in the ratio of equity wealth to GDP, the current bear market is the deepest in history. Yet not all investors have lost their shirts. Remember Felicity Foresight? In December 1999 The Economist revealed how this little-known but brilliant investor had used an infallible investment strategy to become the world's richest person. Her secret? Perfect foresight. When Ms Foresight was born in America in January 1900, her parents invested $1 on her behalf in a basket of shares. If it had been left there it would now be worth about $9,000. But Felicity reckoned she could do much better. Discovering at an early age that she could foresee the performance of financial markets perfectly, she would predict at the beginning of each year which asset around the world would bring the highest total dollar return (income plus capital gain) over the following 12 months. She would put all her wealth into that asset, and reinvest her income, before making a new forecast and shifting her money the following January. By her 100th birthday Ms Foresight had turned her initial $1 into an amazing $1.3 quadrillion—ie, 13 followed by 14 noughts—even after deducting dealing costs and taxes. Since then, despite the three-year bear market, she has enjoyed a post-tax average annual return of 29% thanks to some canny selections (Israeli, Russian and Czech shares in succession). Her nest-egg is now $2.7 quadrillion. Had she not had dealing costs and taxes to pay, she would, thanks to the effects of compounding, be worth an astonishing $27.5 quintillion (ignoring the fact that no market in the world is big enough to absorb such wealth). In 2002, when S&P 500 shares yielded a total loss (including dividends) of 22%, Felicity enjoyed a 44% dollar return on her Czech shares. In theory, she could have made an even heftier profit in Pakistan, but she limits herself to stockmarkets covered in the back pages of The Economist. Gold, up 26% as investors sought a safe haven in an uncertain world, and London residential property (up by over 30% including an assumed net rental yield of 4%) would also have produced handsome rewards (see chart). Indeed, The Economist's global house-price indices suggest that, once rental income is included, housing in most countries (apart from Germany and Japan) yielded double-digit returns last year. At the other extreme, Argentine shares saw the biggest drop. Among developed markets German shares fared worst, with a 33% dollar loss, even though the euro rose by 18% against the dollar. During Ms Foresight's 103 years, American shares have outperformed all other assets, returning 9.3% on average since 1900 compared with 4.8% on long-term Treasury bonds. Gold has yielded a dismal 2.8%, even less than the 4.1% return on cash (measured by the yield on American Treasury bills). Over the past ten years, however, the main equity markets have offered a lower return than either property or British government bonds (see chart). The best ten-year investment in our portfolio was London residential property with a total dollar return of 16%, well above the 9% earned on American stocks. Despite the recent spurt in its price, gold's long-term performance has been lacklustre, offering a ten-year average return of only 0.4%. But the worst investment of all has been Chinese equities. China may be the fastest growing big economy, with massive potential, yet Chinese shares have suffered an average annual loss of 16%. Emerging stockmarkets more broadly have proved a poor bet over the decade, even though in 14 of the past 15 years one of these markets has topped the global investment league. What goes up has then usually tumbled down. This is where Henry Hindsight, an old flame of Felicity's, has been badly caught out. He invests each January in the previous year's best-performing asset. Henry is like a typical small investor who tends to follow fashion, buying East Asian shares or internet shares in the 1990s after they had already risen sharply—and usually just before they began to slide. Felicity's best prediction was that marriage to Henry would never work. Future imperfect What is Felicity's hot tip for 2003? Naturally, she is not saying. However, she has hinted that she is ignoring the advice of most American analysts, who reckon, on average, that the S&P 500 will gain 20% by the end of 2003. After all, share prices are still far from cheap. In previous bear markets equities have always undershot before staging a full recovery; this time valuations have remained well above their long-term average. The price/earnings (p/e) ratio for the S&P 500 using historic reported profits is currently around 30, compared with a 50-year average of 16 and lows of ten or less in previous bear markets. Even if forecast profits are used to estimate earnings, the p/e ratio still looks high. Moreover, analysts' consensus forecast of a 15% rise in American corporate profits in 2003 is almost certainly far too high, given that nominal GDP is likely to grow by only 4%. Markets were cheered by the upward revision of America's third-quarter GDP growth to 4% at an annual rate. However, a deeper delve into the figures shows that corporate profits actually fell by 7%. Even in a recovering economy, excess capacity and weak pricing power continue to squeeze profits. Even if the American economy avoids another recession in 2003, real GDP growth is likely to remain below trend for a prolonged period because the large debts of firms and consumers will cramp spending. If so, the output gap (the gap between actual and potential GDP) will remain large, pushing inflation lower. Inflation is already low, with the GDP deflator rising by only 0.8% in the past 12 months. In the non-financial corporate sector prices have fallen by 1.2% in the past year. The Fed has now woken up to the risks, but even if full-blown deflation is avoided, the pace of growth in nominal income—and hence profits—will be sluggish for some time. If Ms Foresight believes that American share prices could fall for a fourth year—the longest such decline since the Great Depression—where might she be investing? Interest rates of only 1% on bank deposits are hardly enticing. Long-term government bonds look attractive if you expect inflation and interest rates to fall further, but with bond yields already near historic lows, potential gains are limited. Henry Hindsight's decision is much easier to predict. He is ploughing all his money into Czech shares, gold and residential property. But to Felicity's experienced eye many housing markets, from London to Washington to Sydney, may look horribly bubble-like. Asked whether she still favours emerging markets, the old lady smiles coyly. According to the Bank Credit Analyst, a Canadian research firm, emerging stockmarket valuations are currently at their cheapest during the 15 years for which figures are available, offering a p/e ratio of nine compared with a long-term average of 15. Whatever happens this year, Felicity will strike lucky again. But sooner or later her run of double-digit returns may come to an end. In a world of near-zero inflation, where stockmarkets in different countries are moving ever closer in step and where risk and uncertainty are on the rise, the returns that Felicity is accustomed to will be elusive. She would be wise to recall 1931, when the best performing asset was cash, offering 1% interest.
http://www.economist.com/node/1515323/print
CC-MAIN-2014-35
refinedweb
1,317
62.17
setitimer() Set the value of an interval timer Synopsis: #include <sys/time.h> int setitimer ( int which, const struct itimerval *value, struct itimerval *ovalue ); Arguments: - which - The interval time whose value you want to set. Currently, this must be ITIMER_REAL. - value - A pointer to a itimerval structure that specifies the value that you want to set the interval timer to. - ovalue - NULL, or a pointer to a itimerval structure where the function can store the old value of the interval timer. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The system provides each process with interval timers, defined in <sys/time.h>. The setitimer() function sets the value of the timer specified by which to the value specified in the structure pointed to by value, and if ovalue isn't NULL, stores the previous value of the timer in the structure it points to. A timer value is defined by the itimerval structure, which includes the following members: struct timeval it_interval; /* timer interval */ struct timeval it_value; /* current value */ Each struct timeval contains the following members: - time_t tv_sec — the number of seconds since the start of the Unix Epoch. - suseconds_t tv_usec — the number of microseconds.zero). Time values smaller than the resolution of the system clock are rounded up to the resolution of the system clock. The only supported timer is ITIMER_REAL, which decrements in real time. A SIGALRM signal is delivered when this timer expires. The SIGALRM so generated isn't maskable on this bound thread by any signal-masking function, pthread_sigmask(), or sigprocmask(). Returns: Errors: - EINVAL - The specified number of seconds is greater than 100,000,000, the number of microseconds is greater than or equal to 1,000,000, or the which argument is unrecognized. Classification: Caveats: All flags to setitimer() other than ITIMER_REAL behave as documented only with bound threads. Their ability to mask the signal works only with bound threads. If the call is made using one of these flags from an unbound thread, the system call returns -1 and sets errno to EACCES. These behaviors are the same for bound or unbound POSIX threads. A POSIX thread with system-wide scope, created by the call: pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM); is equivalent to a Solaris bound thread. A POSIX thread with local process scope, created by the call: pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS); is equivalent to a Solaris unbound thread. The microseconds field shouldn't be equal to or greater than one second. The setitimer() function is independent of alarm() . Don't use setitimer(ITIMER_REAL) with the sleep() routine. A sleep() call wipes out knowledge of the user signal handler for SIGALRM. The granularity of the resolution of the alarm time is platform-dependent.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/setitimer.html
CC-MAIN-2019-47
refinedweb
458
63.39
Quickstart¶ The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern. How to sample a multi-dimensional Gaussian¶ We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by: where \(\vec{\mu}\) is an N-dimensional vector position of the mean of the density and \(\Sigma\) is the square N-by-N covariance matrix. The first thing that we need to do is import the necessary modules: import numpy as np import emcee Then, we’ll code up a Python function that returns the density \(p(\vec{x})\) for specific values of \(\vec{x}\), \(\vec{\mu}\) and \(\Sigma^{-1}\). In fact, emcee actually requires the logarithm of p. We’ll call it lnprob: def lnprob(x, mu, icov): diff = x-mu return -np.dot(diff,np.dot(icov,diff))/2.0 It is important that the first argument of the probability function is the position of a single walker (a N dimensional numpy array). The following arguments are going to be constant every time the function is called and the values come from the args parameter of our EnsembleSampler that we’ll see soon. Now, we’ll set up the specific values of those “hyperparameters” in 50 dimensions: ndim = 50 means = np.random.rand(ndim) cov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim)) cov = np.triu(cov) cov += cov.T - np.diag(cov.diagonal()) cov = np.dot(cov,cov) and where cov is \(\Sigma\). Before going on, let’s compute the inverse of cov because that’s what we need in our probability function: icov = np.linalg.inv(cov) It’s probably overkill this time but how about we use 250 walkers? Before we go on, we need to guess a starting point for each of the 250 walkers. This position will be a 50-dimensional vector so the initial guess should be a 250-by-50 array—or a list of 250 arrays that each have 50 elements. It’s not a very good guess but we’ll just guess a random number between 0 and 1 for each component: nwalkers = 250 p0 = np.random.rand(ndim * nwalkers).reshape((nwalkers, ndim)) Now that we’ve gotten past all the bookkeeping stuff, we can move on to the fun stuff. The main interface provided by emcee is the EnsembleSampler object so let’s get ourselves one of those: sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=[means, icov]) Remember how our function lnprob required two extra arguments when it was called? By setting up our sampler with the args argument, we’re saying that the probability function should be called as: lnprob(p, means, icov) where p is the position of a single walker. If we didn’t provide any args parameter, the calling sequence would be lnprob(p) instead. It’s generally a good idea to run a few “burn-in” steps in your MCMC chain to let the walkers explore the parameter space a bit and get settled into the maximum of the density. We’ll run a burn-in of 100 steps (yep, I just made that number up... it’s hard to really know how many steps of burn-in you’ll need before you start) starting from our initial guess p0: pos, prob, state = sampler.run_mcmc(p0, 100) sampler.reset() You’ll notice that I saved the final position of the walkers (after the 100 steps) to a variable called pos. You can check out what will be contained in the other output variables by looking at the documentation for the EnsembleSampler.run_mcmc() function. The call to the EnsembleSampler.reset() method clears all of the important bookkeeping parameters in the sampler so that we get a fresh start. It also clears the current positions of the walkers so it’s a good thing that we saved them first. Now, we can do our production run of 1000 steps (again, this is probably overkill... it’s generally very silly to take way more samples than you need to but never mind that for now): sampler.run_mcmc(pos, 1000) The sampler now has a property EnsembleSampler.chain that is a numpy array with the shape (250, 1000, 50). Take note of that shape and make sure that you know where each of those numbers come from. A much more useful object is the EnsembleSampler.flatchain which has the shape (250000, 50) and contains all the samples reshaped into a flat list. You can see now that we now have 250 000 unbiased samples of the density \(p(\vec{x})\). You can make histograms of these samples to get an estimate of the density that you were sampling: import matplotlib.pyplot as pl for i in range(ndim): pl.figure() pl.hist(sampler.flatchain[:,i], 100, color="k", histtype="step") pl.title("Dimension {0:d}".format(i)) pl.show() Another good test of whether or not the sampling went well is to check the mean acceptance fraction of the ensemble using the EnsembleSampler.acceptance_fraction() property: print("Mean acceptance fraction: {0:.3f}" .format(np.mean(sampler.acceptance_fraction))) This number should be between approximately 0.25 and 0.5 if everything went as planned. Well, that’s it for this example. You’ll find the full, unadulterated sample code for this demo here.
https://emcee.readthedocs.io/en/stable/user/quickstart.html
CC-MAIN-2019-13
refinedweb
904
64.3
This is a guest repost by Ron Pressler, the founder and CEO of Parallel Universe, a Y Combinator company building advanced middleware for real-time applications.? Many modern web applications are composed of multiple (often many) HTTP services (this is often called a micro-service architecture). This architecture has many advantages in terms of code reuse and maintainability, scalability and fault tolerance. In this post I’d like to examine one particular bottleneck in the approach, which hinders scalability as well as fault tolerance, and various ways to deal with it (I am using the term “scalability” very loosely in this post to refer to software’s ability to extract the most performance out of the available resources). We will begin with a trivial example, analyze its problems, and explore solutions offered by various languages, frameworks and libraries. Let’s suppose we have an HTTP service accessed directly by the client (say, web browser or mobile app), which calls various other HTTP services to complete its task. This is how such code might look in Java: import ...; @Path("myservice") public class MyRestResource { private static final Client httpClient = ClientBuilder.newClient(); @GET @Produces(MediaType.TEXT_HTML) public String getIt() { int failures = 0; // call foo (synchronous) String fooResponse = null; try { fooResponse = httpClient.target("").request().get().readEntity(String.class); } catch(ProcessingException e) { failures++; } // call bar (synchronous) String barResponse = null; try { barResponse = httpClient.target("").request().get().readEntity(String.class); } catch(ProcessingException e) { failures++; } monitorOperation(failures); return combineResponses(fooResponse, barResponse); } } To define a REST service, our example uses JAX-RS, though a plain Servlet or any other framework could have been used. To invoke other services, we use a JAX-RS client, though other libraries could have been used (JAX-RS client is powerful and general, and supports integrating other libraries to perform the actual HTTP request; e.g. Jersey’s JAX-RS client integrates with Netty, Jetty or Grizzly clients). To keep the example simple, instead of calling many “micro services”, we call just two, foo and bar, but we keep in mind that a real application might call many more. We also want to monitor failed service calls, so we count and report them. Now let’s find the bottlenecks in our approach. Little’s Law is a mathematical theorem useful in determining the capacity of a system such as ours that receives and processes external requests. For a stable system, it ties the average request arrival rate, λ, the average time each request is processed by the system, W, and the number of concurrent requests pending in the system, L, in a neat little formula: L = λW What’s remarkable about this result is that it does not depend on the precise distribution of the requests, the order in which requests are processed or any other variable that might have conceivably affected the result. Here’s an example: if 1000 requests, on average, arrive each second, and each takes 0.5 seonds to process, on average, then our system is required to handle 1000*0.5 = 500 requests concurrently. Normally, however, the system’s capacity, L, is a given, and the request processing time, W, is a feature of the software and depends on its complexity and internal latencies. If we know L and W we can figure out the rate of requests we can support: λ = L/W To handle more requests, we need to increase L, our capacity, or decrease W, our processing time, or latency. What happens if requests arrive at a greater rate than λ? The system will no longer be stable. Requests will start queuing up. At first, they will experience much increased latency, but quickly system resources will be exhausted and the server will become unavailable. L is a feature of the environment (hardware, OS, etc.) and its limiting factors. It is the minimum of all limits constraining the number of concurrent requests. What are those limits? Well, first, we have the number of concurrent TCP connections the server can support. Normally, a server can support several tens-of-thousands of concurrent TCP connections, and some shops have had success maintaining over 2 million open connections. Second, we have bandwidth. Unless we are streaming HD video, the requests and responses travelling back and forth over the LAN are no more than a few kilobytes in length. If the total “chit-chat” volume of a single request is under 1MB (usually, it is well under), given today’s high-bandwidth LANs, our network could support anywhere between 100K to over a million concurrent requests (remember, we are talking about concurrent requests, not requests per second; services that require large message volumes usually take longer to process, so we’re those requests take longer than a second or even several seconds, and the network can handle that). Third, there’s RAM. The number of concurrent requests that can fit in RAM depends on how much memory each request consumes, but assuming we can keep this to well below 1MB, and given the low cost of RAM, this number is probably well over 1 million, and certainly over several hundreds-of-thousands. Fourth is the CPU. Just how many concurrent requests the CPU can support depends on the application logic, but given that most processing is done by the microservices and that most of the time our requests just wait for the microservices to responsd and don’t waste CPU, this number is anywhere between several hundreds of thousands and several millions. Indeed, productions systems employing similar architectures rarely report CPU as their bottleneck in practice. So far, we have reason to believe we can keep L somewhere between 100K and 1 million. Sounds great, huh? But there is one more limiting factor: the OS. In our example, we employ the simple and familiar thread-per-request model. A request runs on a single OS thread to completion, and when it’s done, the web server is free to use that thread to serve other requests. So the number of concurrent request we can handle is also limited by the number of threads the OS can handle. How do those threads behave? Well, they do some processing and then they block, waiting for a microservice to respond. Then they might do some more processing and block again. So the threads are not very busy (that’s why the CPU isn’t saturated), but they’re not just sitting there idle, either: the OS is required to schedule each of them anywhere between 2 and a few dozen times to complete the request. So, how many such threads could the OS handle concurrently? That depends on the OS, but it is usually somewhere between 2K and 15K. Beyond that, thread scheduling will add significant latency to the requests, and once latency grows, W increases and λ drops again. Allowing the software to spawn thread willy-nilly may bring our application to its knees, so we usually set a hard limit on the number of threads we let the application spawn. This number is somewhere between 500 and 15K, but rarely more than that. Because L is the minimum of all these limits, the OS scheduler suddenly dropped our capacity, L, from the high 100Ks-low millions, to well under 20,000! In conclusion, if we use the thread-per-request model on good-enough hardware, L is completely dominated by the number of threads the OS can support without adding latency. We could, of course, buy more servers, but those cost money and incur many other hidden costs. We might be particularly reluctant to buy extra servers when we realize that software is the problem, and those servers we already have are under-utilized. Before we look at other models that might work around this problem (and the issues they introduce), lets turn to examine W, the processing latency. Suppose our two micro-services, foo and bar, each take 500ms on average to return a response (including network latency). Since we call them sequentially (for the time being) our web service’s request processing time, or processing latency, is 1 second. That’s our W. Now, suppose we’ve allowed the web server to spawn up to 2000 threads (that’s now our L). According to Little’s law, we can handle up to λ = L/W = 2000/1 = 2000 requests per second before we become unstable and crash. We figure that even taking into account traffic spikes that number is good enough. Problem is, this calculation is only valid when both foo and bar are healthy. What happens if one of them experiences trouble which increases its latency to 10 seconds? From W = 0.5 + 0.5 = 1 we’ve now gone to W = 0.5 + 10 = 10.5 (let’s call it a round 10). What happened to λ? From 2000 requests per second it now dropped to 200, which we deem unacceptable. So, to make our service fault-tolerant, we set timeouts for the service calls: @Path("myservice") public class MyRestResource { private static final Client httpClient; static { ClientConfig configuration = new ClientConfig(); configuration.property(ClientProperties.CONNECT_TIMEOUT, TimeUnit.SECONDS.toMillis(2)); configuration.property(ClientProperties.READ_TIMEOUT, TimeUnit.SECONDS.toMillis(2)); httpClient = ClientBuilder.newClient(configuration); } // ... } We’ve assigned our HTTP client a timeout parameter of 2 seconds to give it some leeway. This means that our maximum latency, even in the presence of failure, is 4 seconds, which yields a maximum request rate λ of 2000/4 = 500 per second. In fact, we can do better. If foo goes bad and consistently times-out, there’s no need to try reaching it again and again, waiting for twho whole seconds each time. We can install a “circuit breaker” that trips if a service fails and prevents subsequent requests from attempting to call it. Ocassionally, a side process can sample foo to see if it has recovered, and if so, close the circuit again. This can bring our latency back to under 1 second, and our request handling capacity back to 2000 requests per second even in the event of a failure, at the cost of added complexity. This kind of circuit breaker mechanism is exactly what Netflix’s open source Hystrix library provides. Its circuit breakers help prevent W from rising when something goes wrong. Also, instead of giving the server a single cap on the number of threads it can spawn, Hystrix makes it easy to allocate various capped thread-pools for different kinds of operations as a form of bulkheading failure (so that one operation that goes awry won’t exhaust all threads). Now, assuming all our services are healthy or protected with circuit breakers, can we reduce W further? As a matter of fact we can, and quite easily, in fact. In our example, ee call bar only after foo returns, so their latencies are compounded. This might be OK for our little example, but if we call 20 services instead of 2, this could be a serious issue. We notice that we don’t need the result of foo to call bar, so we can issue the two (or twenty) calls at the same time, let both them do their business in parallel, and absorb their latencies into one another. Here’s how we can do it in Java: @GET @Produces(MediaType.TEXT_HTML) public String getIt() throws InterruptedException{); } Futures let us dispatch both requests at once, and then wait for their results. If all goes well, we’ve just reduced W, our processing latency, from 1 second to 500ms. The same applies for 20 or more service calls. The result is that our server can now handle up to 4000 requests per second, even if we call quite a few micro services. Is that the best we can do? Unfortunately yes. Short of somehow greatly optimizing both foo and bar, this is pretty much it, even though our hardware is still severely under-utilized. 4000 requests per second is the best we can do, and it might be much worse than that if we allow latency greater than 500ms for any of the micro services. What if this is not enough? So we’ve taken down W as much as we could, but L is still constrained by the number of threads the OS can efficiently handle. The only thing left for us to do now is somehow increase L, and to do that we have no option other than abandon the thread-per-request model. Node.js is a server side JavaScript framework used (primarily) for web applications. JavaScript is single-threaded, so Node has no choice when it comes to not using the OS thread scheduler. Let’s see how Node.js handles our problem: var http = require('http'); http.createServer(myService).listen(8080); function myService(request, response) { var completed = 0; var failures = 0; var fooResponse; var barResponse; function checkCompletion() { if (completed == 2) { monitorOperation(failures); response.write(combineResponses(fooResponse, barResponse)); response.end(); } } http.get({host: '', port: 80, path: '/foo'}, function(resp){ resp.on('data', function(chunk){ fooResponse = chunk; completed++; checkCompletion(); }); }).on("error", function(e){ completed++; failures++; checkCompletion(); }); http.get({host: '', port: 80, path: '/bar'}, function(resp){ resp.on('data', function(chunk){ fooResponse = chunk; completed++; checkCompletion(); }); }).on("error", function(e){ completed++; failures++; checkCompletion(); }); } Node request handlers do not have a thread for themselves and they don’t block. Instead, they run for a while, and when they need to wait for, say, a service call, they give the framework a callback to execute when the call completes. By doing that, Node has turned JavaScript’s lack of threading into an advantage. Because it can’t use a thread-per-request model, Node uses asynchronous callbacks which do not suffer from the OS thread limitation. But while this approach completely eschews the OS thread limit problem, it introduces several others. First, any accidental blocking of a handler function, or even if it happens to be running a lengthy computation, effectively blocks the entire Node.js instance; no other requests can be processed. This is often mitigated by running several Node instances on one machine and load-balancing them, which also helps take advantage of all CPU cores, but it wastes RAM and makes parallelizing certain computational tasks difficult (the latter might or might not matter, depending on what you want to accomplish). More importantly, however, it forces an asynchronous, callback-based programming style. This style is harder to write and harder to reason about, as your code is not executed in the order it is written. With complex, dependent operations, this style is the nightmare colloquially known as callback hell (Node.js is trying to make this simpler by adopting a more comprehensively-functional style with something called promises; we’ll look at a similar approach right away). One of the things working in Node’s favor, however, is that it’s single threaded. Why is that good? Because it keeps our example code simpler. While the calls to foo and bar are submitted asynchronously and their callbacks can be executed at any time and any order (once they complete), we can still safely increment the failures and completed variables, because whenever the callbacks run, they will all run on the same thread – never concurrently. So while we have to wrap our heads around callbacks, thankfully we don’t need to factor concurrency races into the equation as well. But the JVM does support multiple threads, and if you have them, why not use them? Let’s see how Play, a JVM web framework, handles the problem. Again, our goal is to avoid hogging an OS thread for the duration of the request: public static Promise<Result> getIt() { Promise<String> fooPromise = WS.url("").get().map( result -> result.toString(); ); Promise<String> barPromise = WS.url("").get().map( result -> result.toString(); ); // Not actually waiting Promise<List<ResultType>> results = Promise.waitAll(fooPromise, barPromise); return async(results.map((List<String> rs) -> { String fooResponse = rs.get(0); String barResponse = rs.get(1); // monitoring ???? return ok(combineResponses(fooResponse, barResponse)); } }); } Play is written in Scala, which supports (among other styles), and sometimes encourages, a functional programming style, and this is reflected in Play’s Java API as well. I’ve written the code sample above in Java 8 because prior versions of Java were verbose to the point of exhaustion when written in the functional style, while the Scala API requires understanding of Scala for-comprehensions. In practice, the functional style is callback-based, but it usually contains rigorous mechanisms for combining callbacks, which, once you learn them, can make your code much more methodic. So, the functions in lines 3 and 7 will be executed when their calls (to foo and bar respectively) complete, and the one in line 14 will after when both complete, because the two promises (essentially futures) were combined with Promise.waitAll. But what about our failure count? We cannot keep it in a local variable or even in a class field because the two callbacks can be called on any thread, even concurrently, and if they were to modify any shared state, race conditions are bound to happen. To solve this, we need those promises to return not just a string, but a string and some other additional monitoring values (yes, I know failures specifically are reported differently, but this holds true for any other monitoring value we may want to collect), combined in a new class we need to define. In addition, because the callbacks are not executed in the same thread as the original request handler (or the same thread as one another) we cannot use any library functions that rely on ThreadLocal state. Threads are gone. If you don’t like Play’s API (which certainly feels very foreign to Java), the Netflix RxJava project offers a functional API that operates along the same principles, not tied to a single web framework, and more familiar to Java programmers (and a lot nicer, IMO). So while the functional style is elegant and helps compose callbacks nicely and in an idiomatic manner, using it entails a pretty serious learning curve, and it might bring about some baffling problems and race-conditions if you happen to interact with code that does not play well with the functional bits. In short, once you go functional you might find that you need to go functional all the way (at least within the service), or risk some serious head scratchers, especially if you’re not programming in a language that restricts shared mutable state, like Clojure or Erlang (or Haskell, if you’re so inclined). … aaaand that’s it. We’re no longer using threads to guide the flow of our code and delineate requests, so L is no longer dominated by the OS’s scheduling capacity. It is solely determined by our hardware capacity and whatever (hopefully small) overhead the frameworks/libraries we use incur. We only need to buy more hardware if our hardware is saturated. All is well and good except for one thing: we’ve lost the thread-per-request model. Nobody likes nesting callbacks, and while some may argue that a functional style is the way to go no matter what you do, it is unfamiliar, arguably more difficult to reason about in some circumstances, and as yet unproven in the industry. Also, threads are nice. They give us a clear program flow along with a stack for intermediate state. Must we completely throw this wonderful abstraction out the window? Are scalability (and fault tolerance), and simple, easy to follow code mutually exclusive? Luckily, they are not. If you remember how we started, it was by realizing that the server’s capacity, or L in Little’s formula, is dominated by the OS’s thread scheduling capability. Becasue we had to cap our threads at some relatively small number, they became a precious resource. The circuit-breakers and the functional programming style were required because threads are expensive. But what if they didn’t have to be? Some languages, most notably Erlang and Go, provide lightweight threads (processes in Erlang; goroutines in Go). The open-source Quasar library provides them on the JVM (where they’re called fibers). These lightweigh threads are not scheduled by the OS but by the language or library runtime. The runtime can often do a far better job than the OS at scheduling those lightweight threads because it knows more about their purpose. In particular it knows that they run in very short bursts and block very often; this is not generally true of heavyweight threads. The runtime scheduler usually employs what is known as M:N scheduling, where M lightweight threads are mapped onto N OS threads, with M » N. Just like anything else, lightweight threads aren’t magic, and they have their own tradeoffs. Their main problem is that all blocking function calls you issue on a lightweight thread must be integrated with the scheduler. If a library function is not aware of the scheduler it might block the entire OS thread (Quasar monitors all running fibers and issues a warning when this happens; also, it easily handles non-frequent blocking of OS threads). In many circumstances, though, this is a very acceptable price to pay for keeping your code simple while gaining scalability and fault-tolerance. This is also not too hard to enforce, as blocking calls usually perform one of a small set of operations – network IO, file IO, or database calls – so making sure to only use libraries that are aware of your lightweight threads is usually easy (in fact, Quasar makes it very easy to turn any asynchronous, callback-based, API into a fiber-blocking API with a simple mechanism). And if Google’s proposed user-level threads make it into the Linux kernel (which will then allow user code to schedule OS threads), Quasar will integrate with those and further reduce, or even completely remove, any possible conflict when integrating with non-fiber-aware blocking libraries. The new open-source Comsat library is a set of standard Java API implementations (like JAX-RS and JDBC) that integrate with Quasar fibers. So let’s see how we can re-write our original example to be scalable and fault-tolerant, this time using lightweight threads via Comsat: import ...; import co.paralleluniverse.fibers.ws.rs.client.AsyncClientBuilder; import co.paralleluniverse.fibers.SuspendExecution; @Path("myservice") public class MyRestResource { private static final Client httpClient = AsyncClientBuilder.newClient(); @GET @Produces(MediaType.TEXT_HTML) public String getIt() throws InterruptedException, SuspendExecution {); } } You’ll notice that this is exactly the original example (after adding futures to absorb the two services’ latencies into each other) with some very minor changes! The first is adding throws SuspendExecution to our service method to designate it as a fiber-blocking method (alternatively you can annotate it with the @Suspendable annotation). The second is that we use the AsyncClientBuilder provided by Comsat, which provides the same JAX-RS client API, only with an implementation that is fiber-aware. What about circuit-breakers? They’re not as critical now. We could add timeouts if we want to quickly respond back with a failure if one of the services takes too long, but other than that, we don’t mind the increased latency. Sure W might grow but L is now only contrained by the hardware. Fibers are cheap, and we can handle hundreds-of-thousands of them, or even millions, at once (we might still want to use a library like Hystrix to prevent an unbounded number of fibers from piling up, but even without it our server can recover gracefully from a short-term failure). So we can have our cake and eat it, too! When you combine the awesome performance, stability, unparalleled tooling and monitoring of the JVM with the power and simplicity of lightweight threads, you can make the absolute most of your hardware while keeping your code simple, easy to understand, and familiar. You don’t even need to learn new APIs. Both Clojure and Scala provide fiber-like functionality with Scala Async and Clojure’s wonderful core.async. But those are limited for use by their respective languages (i.e. they cannot integrate with other JVM languages), and even there, because they are based on macros, they are restricted to a single syntactical form: you can only explicitely block in the outermost “fiber” function – you can’t call another function that blocks. That’s great! We like all of these concepts, but believe they should be used where they make sense as a computational model – when they make programming easier – not as a convoluted way to work around OS limitations. That’s why Quasar has Go-like channels complete with “reactive extensions” (or Rx) for good measure, as well as a full, Erlang-like actor system, all of which are built on top of the solid fiber foundation. They are great for making your business-logic, not just your service endpoints, scalable and fault tolerant. Quasar’s Clojure API, Pulsar is even compatible with core.async. And while Comsat provides fiber-aware implementation of standard Java APIs, it also offers an optional API called Web Actors. Web Actors let you write web applications using the actor model, popularized by Erlang. Web Actors give you excellent scalability and fault-tolerance, and are particularly fun to use in interactive web application, those that use WebSockets, or other server-push technologies (such as comet or SSE). Web Actors were discussed in our previous blog post. Little’s law determines the load (request rate) a server can withstand given its (concurrent request) capacity and processing latency. We learned that when using the simple thread-per-request, the OS severly limits the server capacity. To maintain scalability and fault tolerance you must work around this limitation by either forgoing the simple thread-per-request model and adopting a functional programming style, or by using a language or a library that provides lightweight threads for your platform. If you’re developing for the JVM, Quasar gives you lightweight threads (fibers), and Comsat gives you fiber-aware implementations to standard Java APIs.
http://highscalability.com/blog/2014/2/5/littles-law-scalability-and-fault-tolerance-the-os-is-your-b.html?printerFriendly=true
CC-MAIN-2019-22
refinedweb
4,346
51.89
5. The import system¶ Python standard builtin __import__() is called. Other mechanisms for invoking the import system (such as importlib.import_module()) may choose to subvert __import__() and use its own solution to implement import semantics. When a module is first imported, Python searches for the module and if found, it creates a module object [1], initializing it. If the named module cannot be found, an ModuleNotFoundEr). 5.1. importlib¶ The importlib module provides a rich API for interacting with the import system. For example importlib.import_module() provides a recommended, simpler API than built-in __import__() for invoking the import machinery. Refer to the importlib library documentation for additional detail. 5.2. Packages¶ that subpackage called 5.2.1. Regular packages¶. 5.2.2. Namespace packages¶. 5.3. Searching¶ ModuleNotFoundError is raised. 5.3.1. The module cache¶ ModuleNotFoundEr ModuleNotFoundError. Beware though, as if you keep a reference to the module object, invalidate its cache entry in sys.modules, and then re-import the named module, the two module objects will not be the same. By contrast, importlib.reload() will reuse the same module object, and simply reinitialise the module contents by rerunning the module’s code. 5.3.2. Finders and loaders¶. 5.3.3. Import hooks¶. 5.3.4. The meta path¶ a ModuleNotFoundError is raised. Any other exceptions raised are simply propagated up, aborting the import process. The find_spec() method of meta path finders is called with two or three ModuleNotFoundEr).). Changed in version 3.4: The find_spec() method of meta path finders replaced find_module(), which is now deprecated. While it will continue to work without change, the import machinery will try it only if the finder does not implement find_spec(). 5.4. Loading¶'): # It is assumed 'exec_module' will also be defined on the loader.before the loader executes the module code. This is crucial because the module code may (directly or indirectly) import itself; adding it to sys.modulesbeforehand prevents unbounded recursion in the worst case and multiple loading in the best. - If loading fails, the failing module – and only the failing module – gets removed from sys.modules. Any module already in the sys.modulescache,. 5.4.1. Loaders¶ method returns None,before the loader executes the module code, to prevent unbounded recursion or multiple loading. - If loading fails, the loader must remove any modules it has inserted into sys.modules, but it must remove only the failing module(s), and only if the loader itself has loaded the module(s) explicitly. Changed in version 3.5: A DeprecationWarning is raised when exec_module() is defined but create_module() is not. Starting in Python 3.6 it will be an error to not define create_module() on a loader attached to a ModuleSpec. 5.4.2. Submodules¶ When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a binding is placed in the parent module’s namespace to the submodule object. For example, if package spam has a submodule foo, after importing spam.foo, spam will have an attribute foo which is bound to the submodule. Let’s say you have the following directory structure: spam/ __init__.py foo.py bar.py and spam/__init__.py has the following lines in it: from .foo import Foo from .bar import Bar then executing the following puts a name binding to foo and bar in the spam module: >>> import spam >>> spam.foo <module 'spam.foo' from '/tmp/imports/spam/foo.py'> >>> spam.bar <module 'spam.bar' from '/tmp/imports/spam/bar.py'> Given Python’s familiar name binding rules this might seem surprising, but it’s actually a fundamental feature of the import system. The invariant holding is that if you have sys.modules['spam'] and sys.modules['spam.foo'] (as you would after the above import), the latter must appear as the foo attribute of the former. 5.4.3. Module spec¶. 5.4.5. module.__path__¶ import machinery automatically sets __path__ correctly for the namespace package. 5.4.6. Module reprs¶. 5.5. The Path Based Finder¶ As mentioned previously, Python comes with several default meta path finders. One of these, called the path based finder ( PathFinder),. 5.5.1. Path entry finders¶_spec()_spec() method as described previously. When the path argument to find_spec() ( PathEntryFinder)_spec() spec, which is then used when loading the module. The current working directory – denoted by an empty string – is handled slightly differently from other entries on sys.path. First, if the current working directory is found to not exist, no value is stored in sys.path_importer_cache. Second, the value for the current working directory is looked up fresh for each module lookup. Third, the path used for sys.path_importer_cache and returned by importlib.machinery.PathFinder.find_spec() will be the actual current working directory and not the empty string. 5.5.2. Path entry finder protocol¶. However, if find_spec() is implemented on the path entry finder, the legacy methods are ignored.. If both find_loader() and find_module() exist on a path entry finder, the import system will always call find_loader() in preference to find_module(). 5.6. Replacing the standard import system¶ ModuleNoFoundError directly from find_spec() instead of returning None. The latter indicates that the meta path search should continue, while raising an exception terminates it immediately. 5.7. Special considerations for __main__¶ The __main__ module is a special case relative to Python’s import system. As noted elsewhere, the __main__ module is directly initialized at interpreter startup, much like sys and builtins. However, unlike those two, it doesn’t strictly qualify as a built-in module. This is because the manner in which __main__ is initialized depends on the flags and other options with which the interpreter is invoked. 5. 5.8. Open issues¶. XXX Add more explanation regarding the different ways in which __main__ is initialized? XXX Add more info on __main__ quirks/pitfalls (i.e. copy from PEP 395). 5.9. References¶.
http://docs.activestate.com/activepython/3.6/python/reference/import.html
CC-MAIN-2018-34
refinedweb
991
51.04
After procrastinating, I finally decided to sit down and learn to program in F#, since functional programming has always intrigued me. Although I had some experience with Prolog many moons ago, I have done most of my programming using imperative rather than functional or declarative languages, with recent work mostly in C#. Whenever I've learned new languages, there is always an effort required to "get up to speed", but with F#, I found the initial steps were more difficult than I anticipated, due to the lack of a mature IDE, on top of learning a new language and adjusting to a different programming paradigm. I'm still no expert in F#, but perhaps, some of my early stumbling might be of use to others, so they don't have to go through the same pain that I did. This article describes my steps in getting F# to work for desktop applications. First, I'll describe how I got F# installed with VS 2008, and note some general program and architecture related issues that I ran across. Next, I'll present a simple Windows Forms application that can be used as a template for desktop applications written in F#. Finally, I'll summarize my perceptions of F# as a language, mention the strengths and weaknesses of the current implementation, and explain where I think F# fits in my personal .NET programming toolbox. As an example application, I put together a simple parser test to be used in an expert system based on fuzzy logic. The parser simply accepts text input, and defines variables and fuzzy sets. The application allows text to be saved into and read from a text file, and is used only in my testing, so it isn't much useful in terms of fuzzy logic. For anyone interested, the actual parser and fuzzy set code is contained in the sample project, but won't be discussed here, since it is beyond the scope of this article. Perhaps, in a future article, I'll describe the rest of the system and show a real word application. By way of background, the parser is meant to parse a fuzzy set and variable definitions. A variable is declared using either of these forms: Variable Name of Context [=] Value (i.e. Variable Depth of Water = 250) Variable Context Name [=] Value (i.e. Variable Water Depth 250) and a fuzzy set is declared by the form: FuzzySet Name Context [=] Values (i.e. FuzzySet Deep Water (0,0) (900,1) In both of the definitions, both a Name and a Context are specified so that ideas with similar names can be differentiated, for example, a Hot Day vs. a Hot Volcano. Both can be measured in temperature units, and have the same name, but the ranges and the meaning of the "Hot" concept is very different. Name Context To get started with F#, I downloaded the Microsoft F# CTP (Version 1.9.6.2) from the Microsoft F# website[^], saving the .msi file and then running it to install F#. I accepted all the installation defaults, and it installed without any obvious problems. Once installed, I opened VS 2008 Professional, and tried to activate the F# add-in according to the Microsoft instructions, but the F# add-in was not on the list under Tools | Add-in Manager. After uninstalling and reinstalling with the same results, I finally discovered that by simply typing Alt+Ctrl+F, the F# Interactive window opened, and apparently the installation had worked just fine the first time, an hour or so earlier. The F# add-in still doesn't show up in the Add-in Manager, but it seems to work just fine. Once installed and working, using the VS editor with the F# Interactive tool was easy. By simply typing code in the editor, highlighting it, and pressing Alt+Enter, the highlighted code is copied into the interactive F# window, compiled, and run. It's beautiful for trying, testing, and interactively debugging code snippets. Also, in the editor, Intellisense seems to work for most, but not all, of the F# code that I needed. In trying to code anything more than some trivial tests, however, I ran into two problems which I haven't really found mentioned elsewhere online. First, even though I was able to pop-up a Windows form, I always had the black DOS/Command window show up as well. Since the F# system doesn't allow you to add a Windows Forms project, you have to manually go to the Project | Properties and set the application type to Windows Application. It appears that the default F# project is always a Console Application. Second, even though I added references to the standard .NET namespaces (open System.Windows.Forms, etc.), the F# compiler always complained about not finding things like System.Windows.Forms, System.Windows.Drawing, etc. Apparently, VS doesn't automatically add the common references, and those had to be added manually under Solution Explorer | References. With those two items out of the way, using F# was fairly easy, with the rest of my headaches being due to learning a new language and programming paradigm. open System.Windows.Forms System.Windows.Forms System.Windows.Drawing Once my project got to the point where I needed several source code files to keep things organized, I ran into several other problems. First, there is no obvious entry point, main function, or other obvious starting place in F# programs. So, how does the compiler know where to start? Apparently, the F# system simply runs all of the executable statements in the last file compiled. This has two immediate implications: To work around these issues, I took the approach of placing each type in a separate file, pretty much like in C#, and creating a simple file that always is last in the file list called project_name.fs, where project_name is replaced by the actual name of my project. This file is very simple, containing only the following lines, where MyNamespace and MyMainForm() are the namespace and main form names used in a particular application. Note that if you need to do additional things before actually displaying the main form, the code for those can also be inserted before the do Application.Run statement, and might require additional open statements to reference any needed namespaces or modules. MyNamespace MyMainForm() do Application.Run open #light open System open System.Windows.Forms open MyNamespace [<STAThread>] do Application.Run(new MyMainForm()) Note that the [<STAThread>] line is a .NET attribute that defines the application to run in a Single Thread Apartment. This is required when some of the .NET dialogs are used, such as FileOpenDialog and FileSaveDialog, because they apparently use COM Interop behind the scenes. If you don't use any of those, that attribute won't be needed. [<STAThread>] FileOpenDialog FileSaveDialog In working with F#, I truly began to appreciate the designers that are available for C#, VB, and other languages supported by VS 2008 and other IDEs. Since form designers are not available for F#, I had to code all windows by hand. To keep things simple (for me), I adopted a coding style illustrated below for part of the MainForm that represents the application. MainForm #light namespace MyNamespace open System open System.Windows.Forms open System.Drawing type MainForm() as form = inherit Form() // Define private variables let mutable fuzzySets = [] let mutable fuzzyRules = [] let mutable fuzzyVariables = [] let mutable fileName = "" // Define the controls for this form let mainMenu = new MainMenu() let mnuFile = new MenuItem() let mnuFileOpen = new MenuItem() let mnuFileSave = new MenuItem() let mnuFileSaveAs = new MenuItem() let mnuFileExit = new MenuItem() let mnuHelp = new MenuItem() let mnuHelpAbout = new MenuItem() let label1 = new Label() let label2 = new Label() let label3 = new Label() let lstFuzzySets = new ListBox() let lstVariables = new ListBox() let txtInput = new RichTextBox() let btnCalculate = new Button() let dlgFileOpen = new OpenFileDialog() let dlgFileSave = new SaveFileDialog() let HomeDir = Application.ExecutablePath // Private functions let rec getVariable ((lst:(Variable list)), vName:string, vContext:string) = match lst with | [] -> failwith (sprintf "Variable %s.%s not found" vName vContext) | x::_ when (x.Name = vName) && (x.Context = vContext) -> x | _::t -> getVariable(t, vName, vContext) let rec getFuzzySet ((lst:(FuzzySet list)), vName:string, vContext:string) = match lst with | [] -> failwith (sprintf "FuzzySet %s.%s not found" vName vContext) | x::_ when (x.Name = vName) && (x.Context = vContext) -> x | _::t -> getFuzzySet(t, vName, vContext) let rec prtVars (s:(Variable list)) = match s with | [] -> "" | x::y -> (sprintf "[%s %s = %f]" x.Name x.Context x.Value)^(prtVars y) let rec prtFSets (s:(FuzzySet list)) = match s with | [] -> "" | x::y -> (sprintf "[%s %s = %A]" x.Name x.Context x.Def)^(prtFSets y) // The constructor simply initializes the form do form.InitializeForm // member definitions member this.InitializeForm = // Set Form attributes this.FormBorderStyle <- FormBorderStyle.Sizable this.Text <- "Fuzzy Logic Parser F# Test" this.Width <- 300 this.Height <- 300 // Declare Form events this.Load.AddHandler(new System.EventHandler (fun s e -> this.Form_Loading(s, e))) this.Closed.AddHandler(new System.EventHandler (fun s e -> this.Form_Closing(s, e))) // MainMenu mnuFile.Text <- "&File" mnuFileOpen.Text <- "&Open" mnuFileOpen.Click.AddHandler(new System.EventHandler (fun s e -> this.mnuFileOpen_Click(s, e))) mnuFileSave.Text <- "&Save" mnuFileSave.Click.AddHandler(new System.EventHandler (fun s e -> this.mnuFileSave_Click(s, e))) mnuFileSaveAs.Text <- "Save &As" mnuFileSaveAs.Click.AddHandler(new System.EventHandler (fun s e -> this.mnuFileSaveAs_Click(s, e))) mnuFileExit.Text <- "E&xit" mnuFileExit.Click.AddHandler(new System.EventHandler (fun s e -> this.mnuFileExit_Click(s, e))) mnuFile.MenuItems.AddRange([| mnuFileOpen; mnuFileSave; mnuFileSaveAs; mnuFileExit |]) mnuHelp.Text <- "&Help" mnuHelpAbout.Text <- "&About" mnuHelpAbout.Click.AddHandler(new System.EventHandler (fun s e -> this.mnuHelpAbout_Click(s, e))) mnuHelp.MenuItems.AddRange([| mnuHelpAbout |]) mainMenu.MenuItems.AddRange([| mnuFile; mnuHelp |]) this.Menu <- mainMenu // label1 label1.Text <- "Fuzzy Sets" label1.Location <- new Point(5,2) label1.Dock <- DockStyle.None label1.AutoSize <- true // lstFuzzySets lstFuzzySets.Location <- new Point(5,19) lstFuzzySets.Width <- 137 lstFuzzySets.Height <- 120 lstFuzzySets.MouseDoubleClick.AddHandler(new MouseEventHandler (fun s e -> this.lstFuzzySets_Click(s, e))) ... // Add controls to form this.Controls.AddRange([| (label1:> Control); (lstFuzzySets:> Control); (label2:> Control); (lstVariables:> Control); (label3:> Control); (txtInput:> Control); (btnCalculate:> Control) |]) member this.Form_Loading(sender : System.Object, e : EventArgs) = lstFuzzySets.Items.Clear() lstVariables.Items.Clear() member this.Form_Closing(sender : System.Object, e : EventArgs) = null member this.mnuFileOpen_Click(sender : System.Object, e : EventArgs) = dlgFileOpen.DefaultExt <- "txt" if dlgFileOpen.ShowDialog() = DialogResult.OK then fileName <- dlgFileOpen.FileName txtInput.Clear() txtInput.LoadFile(fileName) else null ... member this.mnuHelpAbout_Click(sender : System.Object, e : EventArgs) = (new AboutForm()).ShowDialog() |> ignore member this.lstFuzzySets_Click(sender : System.Object, e : MouseEventArgs) = let s = String.split ['.'] (lstFuzzySets.SelectedItem.ToString()) let text (x:FuzzySet) = sprintf "%s = %A" (x.Name^"."^x.Context) (x.Def) MessageBox.Show(text (getFuzzySet(fuzzySets, s.Head, s.Tail.Head))) |> ignore ... Note that the form is defined by inheriting a standard .NET Form. I then defined any private "variables" that will be needed, then all of the controls that are placed on the form. After the controls, I placed the constructor code that is to be executed; in this case, it simply calls the form.InitializeForm function to initialize the form. Finally come all of the member functions that are needed, including all of the event handlers. Form form.InitializeForm Inside the InitializeForm member, each control is set up as needed, and finally all of the controls are added to the form. In addition, all events are defined as members so that they may be organized, coded separately, and called from outside the form. This is similar to how things are automatically organized by the C# and VB designers. Presumably, when designers for F# become available, they would take care of organizing things in a similar way. I have to admit that after doing the controls layout manually, I certainly appreciate the ability to use the designers available for other languages! InitializeForm There are a couple of things to note in the above code. First, the event handlers are added to the form and each control using the standard .NET AddHandler, and are defined using anonymous functions in F#. Most of the event handlers take two parameters corresponding to the sender and the EventArgs, denoted by s and e in the anonymous functions. AddHandler sender EventArgs s e Second, all of the event handlers expect a unit (void in C#) to be returned. In cases where the result of an F# function is a value other than unit, it's necessary to either explicitly return a unit value using the null keyword or throw away a value using |> ignore. unit void null |> ignore As you can see, there is really no true functional programming contained in the above code; it's all pretty much straight imperative code, similar to what would be used in C#, with a different syntax. That is mainly because of the dependence on the .NET framework that is inherently imperative. Of course, the ability of F# to handle both imperative and declarative code is one of its strengths, and the ability to program Windows forms shows the imperative side of F#. For anyone interested, the actual parser and fuzzy set code is contained in the sample project, but won't be discussed here, since it is beyond the scope of this article. Perhaps, in a future article, I'll describe those parts of the system. After experimenting with F# for awhile and getting somewhat up-to-speed, I have definitely formed some opinions. First, I have to say that F# appears to be a very nice, declarative, functional language, and should find wide application in many areas of scientific and mathematical computing. Since it also allows imperative programming, it is fairly easy to access the objects and features in the underlying .NET Framework, which is a big plus. Unfortunately, F# in its current implementation is substantially lacking in what I would call "developer tools". Specifically, the lack of form designers and complete Intellisense support in Visual Studio makes it somewhat tedious to use. In addition, some of the default project settings are not very intuitive, and not extremely well documented. Hopefully, both of those problems will be addressed in the near future and F# will be able to take its place along with the other .NET languages as a serious development language. Aside from the lack of developer tools, programming in F# takes some getting used to, especially if you are used to using imperative languages, as I am. The shift in paradigm from imperative to declarative can be difficult, and F# seems to have so many detailed oddities, that it indeed takes some concerted effort to master. I certainly have not yet mastered it, but I hope to get a book or two on the language on my next trip home and pursue F# much more in the future, because I do definitely see where it will be a handy tool to have in my programming toolbox. I would recommend that anyone with even the slightest interest learn about F#. It never hurts to expand one's resume, or to stretch the brain into thinking about things from a different perspective. In addition, I sincerely hope that Microsoft will continue to improve F#, and that additional project templates, form designers, and complete Intellisense support will be provided in the near future. And for my wish list, I'd have to add that being able to design and implement forms in C# (or even VB?), yet call functions written in F# would be wonderful. So far, the only way I've been able to do that has been to compile F# code to a library (*.dll) and then call it from C#. Being able to easily include both C# and F# files in a project, each compiled with the respective compiler and then linked into one application, would allow one to enjoy the best of both worlds. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Quote:The order of the files in the Solution Explorer makes a difference. Make sure the starting point for your program is in the last file in the project list btnOk.Click.AddHandler(new System.EventHandler (fun s e -> this.btnOk_Click(s, e))) ... member this.btnOk_Click(sender : System.Object, e : EventArgs) = this.Close() btnOk.Click.Add(fun _ -> this.btnOk_Click ()) ... member this.btnOk_Click () = this.Close() btnOk.Click.Add(fun _ -> this.Close()) let lblVersion = new Label(Text="Version 1.0.0", Left=10, Top=26, AutoSize=true) member this.Value v:float = let rec find (lst:(float * float) list) value:float = match lst with | (x:float, y:float)::_ when value <= x -> y | (x:float, y:float)::[] when value >= x -> y | (x1:float, y1:float)::(x2:float, y2:float)::_ when (value <= x2) && (value >= x1) -> y1 + (v - x1)*(y2 - y1)/(x2 - x1) | (_, _)::(x:float, y:float)::t when value > x -> find t value | _ -> 0.0 find def v match lst with (x, y)::_ -> y when value <= x y1 + (v - x1)*(y2 - y1)/(x2 - x1) General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/30414/Getting-Started-in-Fsharp-A-Windows-Forms-Applicat
CC-MAIN-2018-05
refinedweb
2,856
56.66
Getting Started with Java ™ VERSI ON 8 Borland Software Corporation 100 Enterprise Way, Scotts Valley, CA 95066-3249 Borland ® JBuilder ® Refer to the file deploy.html located in the redist directory of your JBuilder product for a complete list of files that you can distribute in accordance with the JBuilder License Statement and Limited Warranty.. C OPYRIGHT © 1997–2002 United States and other countries. All other marks are the property of their respective owners. For third-party conditions and disclaimers, see the Release Notes on your JBuilder product CD. Printed in the U.S.A. JBE0080WW21000gsjava 4E4R1002 0203040506-9 8 7 6 5 4 3 2 1 PDF i Chapter 1 Introduction 1-1 Chapter 2 Java language elements 2-1 Terms . . . . . . . . . . . . . . . . . . . . . . . . 2-1 Identifier. . . . . . . . . . . . . . . . . . . . . 2-1 Data type . . . . . . . . . . . . . . . . . . . . 2-2 Primitive data types . . . . . . . . . . . . 2-2 Composite data types . . . . . . . . . . . 2-3 Strings . . . . . . . . . . . . . . . . . . . . . . 2-3 Arrays . . . . . . . . . . . . . . . . . . . . . . 2-4 Variable . . . . . . . . . . . . . . . . . . . . . 2-4 Literal . . . . . . . . . . . . . . . . . . . . . . 2-4 Applying concepts. . . . . . . . . . . . . . . . . 2-5 Declaring variables. . . . . . . . . . . . . . . 2-5 Methods. . . . . . . . . . . . . . . . . . . . . 2-5 Chapter 3 Java language structure 3-1 Terms . . . . . . . . . . . . . . . . . . . . . . . . 3-1 Keywords . . . . . . . . . . . . . . . . . . . . 3-1 Operators . . . . . . . . . . . . . . . . . . . . 3-2 Comments. . . . . . . . . . . . . . . . . . . . 3-3 Statements. . . . . . . . . . . . . . . . . . . . 3-5 Code blocks. . . . . . . . . . . . . . . . . . . 3-5 Understanding scope . . . . . . . . . . . . . 3-5 Applying concepts. . . . . . . . . . . . . . . . . 3-7 Using operators. . . . . . . . . . . . . . . . . 3-7 Arithmetic operators . . . . . . . . . . . . 3-7 Logical operators . . . . . . . . . . . . . . 3-9 Assignment operators . . . . . . . . . . .3-10 Comparison operators . . . . . . . . . . .3-11 Bitwise operators . . . . . . . . . . . . . .3-11 ?:, the ternary operator. . . . . . . . . . .3-12 Using methods . . . . . . . . . . . . . . . . .3-13 Using arrays. . . . . . . . . . . . . . . . . . .3-13 Using constructors . . . . . . . . . . . . . . .3-14 Member access . . . . . . . . . . . . . . . . .3-15 Arrays . . . . . . . . . . . . . . . . . . . .3-15 Chapter 4 Java language control 4-1 Terms. . . . . . . . . . . . . . . . . . . . . . . . .4-1 String handling . . . . . . . . . . . . . . . . .4-1 Type casting and conversion. . . . . . . . . .4-2 Return types and statements. . . . . . . . . .4-3 Flow control statements . . . . . . . . . . . .4-3 Applying concepts . . . . . . . . . . . . . . . . .4-4 Escape sequences . . . . . . . . . . . . . . . .4-4 Strings. . . . . . . . . . . . . . . . . . . . .4-4 Determining access . . . . . . . . . . . . . . .4-5 Handling methods . . . . . . . . . . . . . . .4-7 Using type conversions. . . . . . . . . . . . .4-8 Implicit casting. . . . . . . . . . . . . . . .4-8 Explicit conversion . . . . . . . . . . . . . . .4-8 Flow control . . . . . . . . . . . . . . . . . . .4-9 Loops . . . . . . . . . . . . . . . . . . . . .4-9 Loop control statements. . . . . . . . . . 4-11 Conditional statements . . . . . . . . . . . . 4-12 Handling exceptions . . . . . . . . . . . . . 4-13 Chapter 5 The Java class libraries 5-1 Java 2 Platform editions . . . . . . . . . . . . . .5-1 Standard Edition. . . . . . . . . . . . . . . . .5-2 Enterprise Edition. . . . . . . . . . . . . . . .5-2 Micro Edition . . . . . . . . . . . . . . . . . .5-2 Java 2 Standard Edition packages. . . . . . . . .5-3 The Language package: java.lang . . . . . . .5-4 The Utility package: java.util. . . . . . . . . .5-5 The I/O package: java.io . . . . . . . . . . . .5-5 The Text package: java.text. . . . . . . . . . .5-5 The Math package: java.math . . . . . . . . .5-6 The AWT package: java.awt . . . . . . . . . .5-6 The Swing package: javax.swing . . . . . . .5-6 The Javax packages: javax . . . . . . . . . . .5-7 The Applet package: java.applet. . . . . . . .5-7 The Beans package: java.beans. . . . . . . . .5-8 The Reflection package: java.lang.reflect . . .5-9 XML processing . . . . . . . . . . . . . . . . .5-9 The SQL package: java.sql . . . . . . . . . . 5-10 The RMI package: java.rmi. . . . . . . . . . 5-10 The Networking package: java.net . . . . . 5-11 The Security package: java.security . . . . . 5-11 Contents ii Chapter 6 Object-oriented programming in Java 6-1 Classes . . . . . . . . . . . . . . . . . . . . . . . 6-2 Declaring and instantiating classes. . . . . . 6-2 Data members . . . . . . . . . . . . . . . . . 6-2 Class methods . . . . . . . . . . . . . . . . . 6-3 Constructors and finalizers . . . . . . . . . . 6-3 Case study: A simple OOP example . . . . . 6-4 Class inheritance . . . . . . . . . . . . . . . . 6-8 Calling the parent’s constructor. . . . . .6-11 Access modifiers . . . . . . . . . . . . . . . .6-11 Access from within class’s package. . . .6-12 Access outside of a package . . . . . . . .6-12 Accessor methods . . . . . . . . . . . . . . .6-12 Abstract classes. . . . . . . . . . . . . . . . .6-16 Polymorphism . . . . . . . . . . . . . . . . . . .6-17 Using interfaces. . . . . . . . . . . . . . . . .6-17 Adding two new buttons . . . . . . . . . . .6-21 Running your application. . . . . . . . . . .6-23 Java packages. . . . . . . . . . . . . . . . . . . .6-24 The import statement . . . . . . . . . . . . .6-24 Declaring packages. . . . . . . . . . . . . . .6-24 Chapter 7 Threading techniques 7-1 The lifecycle of a thread. . . . . . . . . . . . . . 7-1 Customizing the run() method . . . . . . . . 7-2 Subclassing the Thread class . . . . . . . 7-2 Implementing the Runnable interface . . 7-3 Defining a thread. . . . . . . . . . . . . . . . 7-5 Starting a thread . . . . . . . . . . . . . . . . 7-5 Making a thread not runnable . . . . . . . . 7-6 Stopping a thread. . . . . . . . . . . . . . . . 7-6 Thread priority. . . . . . . . . . . . . . . . . . . 7-7 Time slicing . . . . . . . . . . . . . . . . . . . 7-7 Synchronizing threads. . . . . . . . . . . . . . . 7-7 Thread groups . . . . . . . . . . . . . . . . . . . 7-8 Chapter 8 Serialization 8-1 Why serialize? . . . . . . . . . . . . . . . . . . . 8-1 Java serialization. . . . . . . . . . . . . . . . . . 8-2 Using the Serializable interface. . . . . . . . 8-2 Using output streams . . . . . . . . . . . . . . . 8-4 ObjectOutputStream methods . . . . . . . . 8-5 Using input streams . . . . . . . . . . . . . . . .8-5 ObjectInputStream methods . . . . . . . . . .8-7 Writing and reading object streams. . . . . . . .8-7 Chapter 9 An introduction to the Java Virtual Machine 9-1 Java VM security . . . . . . . . . . . . . . . . . .9-3 The security model . . . . . . . . . . . . . . .9-3 The Java verifier . . . . . . . . . . . . . . .9-3 The Security Manager and the java.security Package . . . . . . . . . . .9-4 The class loader . . . . . . . . . . . . . . .9-6 What about Just-In-Time compilers? . . . . .9-6 Chapter 10 Working with the Java Native Interface (JNI) 10-1 How JNI works . . . . . . . . . . . . . . . . . . 10-2 Using the native keyword . . . . . . . . . . . . 10-2 Using the javah tool. . . . . . . . . . . . . . . . 10-3 Chapter 11 Java language quick reference 11-1 Java 2 platform editions . . . . . . . . . . . . . 11-1 Java class libraries. . . . . . . . . . . . . . . . . 11-1 Java keywords. . . . . . . . . . . . . . . . . . . 11-2 Data and return types and terms . . . . . . 11-3 Packages, classes, members, and interfaces . . . . . . . . . . . . . . . . . . . 11-3 Access modifiers. . . . . . . . . . . . . . . . 11-4 Loops and flow controls . . . . . . . . . . . 11-4 Exception handling . . . . . . . . . . . . . . 11-5 Reserved . . . . . . . . . . . . . . . . . . . . 11-5 Converting and casting data types . . . . . . . 11-5 Primitive to primitive. . . . . . . . . . . . . 11-6 Primitive to String. . . . . . . . . . . . . . . 11-7 Primitive to reference . . . . . . . . . . . . . 11-8 String to primitive. . . . . . . . . . . . . . .11-10 Reference to primitive . . . . . . . . . . . .11-12 Reference to reference. . . . . . . . . . . . .11-14 Escape sequences . . . . . . . . . . . . . . . . .11-19 iii Operators . . . . . . . . . . . . . . . . . . . . . 11-20 Basic operators . . . . . . . . . . . . . . . . 11-20 Arithmetic operators. . . . . . . . . . . . . 11-21 Logical operators. . . . . . . . . . . . . . . 11-21 Assignment operators . . . . . . . . . . . . 11-22 Comparison operators. . . . . . . . . . . . 11-22 Bitwise operators. . . . . . . . . . . . . . . 11-23 Ternary operator . . . . . . . . . . . . . . . 11-23 Chapter 12 Learning more about Java 12-1 Online glossaries. . . . . . . . . . . . . . . . . .12-1 Books . . . . . . . . . . . . . . . . . . . . . . . .12-2 Index I-1 iv I n t r o d u c t i o n 1-1 C h a p t e r 1 Chapter1 Introduction Java is an object-oriented programming language. Switching to object-oriented programming (OOP) from other programming paradigms can be difficult. Java focuses on creating objects (data structures or behaviors) that can be assessed and manipulated by the program. Like other programming languages, Java provides support for reading and writing data to and from different input and output devices. Java uses processes that increase the efficiency of input/output, facilitate internationalization, and provide better support for non-UNIX platforms. Java looks over your program as it runs and automatically deallocates memory that is no longer required. This means you don’t have to keep track of memory pointers or manually deallocate memory. This feature means a program is less likely to crash and that memory can’t be intentionally misused. This book is intended to serve programmers who use other languages as a general introduction to the Java programming language. The Borland Community site provides an annotated list of books on Java programming and related subjects at 3,00.html. Examples of applications, APIs, and code snippets are at. 1-2 Ge t t i n g S t a r t e d w i t h J a v a I n t r o d u c t i o n This book includes the following chapters: • Java syntax: Chapter 2, “Java language elements,” Chapter 3, “Java language structure,” and Chapter 4, “Java language control.” These three chapters define basic Java syntax and introduce you to object-oriented programming concepts. Each section is divided into two main parts: “Terms” and “Applying concepts.” “Terms” builds vocabulary, adding to concepts you already understand. “Applying concepts” demonstrates the use of concepts presented up to that point. Some concepts are revisited several times, at increasing levels of complexity. • Chapter 5, “The Java class libraries” This chapter presents an overview of the Java 2 class libraries and the Java 2 Platform editions. • Chapter 6, “Object-oriented programming in Java” This chapter introduces the object-oriented features of Java. You will create Java classes, instantiate objects, and access member variables in a short tutorial. You will learn to use inheritance to create new classes, use interfaces to add new capabilities to your classes, use polymorphism to make related classes respond in different ways to the same message, and use packages to group related classes together. • Chapter 7, “Threading techniques” A thread is a single sequential flow of control within a program. One of the powerful aspects of the Java language is you can easily program multiple threads of execution to run concurrently within the same program. This chapter explains how to create multithreaded programs, and provides links to other resources with more in-depth information. • Chapter 8, “Serialization” Serialization saves and restores a Java object’s state. This chapter describes how to serialize objects using Java. It describes the Serializable interface, how to write an object to disk, and how to read the object back into memory again. • Chapter 9, “An introduction to the Java Virtual Machine” The JVM is the native software that allows a Java program to run on a particular machine. This chapter explains the JVM’s general structure and purpose. It discusses the major roles of the JVM, particularly in Java security. It goes into more detail about three specific security features: the Java verifier, the Security Manager, and the Class Loader. I n t r o d u c t i o n 1-3 I n t r o d u c t i o n • Chapter 10, “Working with the Java Native Interface (JNI)” This chapter explains how to invoke native methods in Java applications using the Java Native Method Interface (JNI). It begins by explaining how the JNI works, then discusses the native keyword and how any Java method can become a native method. Finally, it examines the JDK’s javah tool, which is used to generate C header files for Java classes. • Chapter 11, “Java language quick reference” This chapter contains a partial list of class libraries and their main functions, a list of the Java2 platform editions, a complete list of Java keywords as of JDK 1.3, extensive tables of data type conversions between primitive and reference types, Java escape sequences, and extensive tables of operators and their actions. 1-4 Ge t t i n g S t a r t e d w i t h J a v a J a v a l a n g u a g e e l e me n t s 2-1 C h a p t e r 2 Chapter2 Java language elements This section provides you with foundational concepts about the elements of the Java programming language that will be used throughout this chapter. It assumes you understand general programming concepts, but have little or no experience with Java. Terms The following terms and concepts are discussed in this chapter: • “Identifier” on page 2-1 • “Data type” on page 2-2 • “Strings” on page 2-3 • “Arrays” on page 2-4 • “Variable” on page 2-4 • “Literal” on page 2-4 Identifier The identifier is the name you choose to call an element (such as a variable or a method). Java will accept any valid identifier, but for reasons of usability, it’s best to use a plain-language term that’s modified to meet the following requirements: • It should start with a letter. Strictly speaking, it can begin with a Unicode currency symbol or an underscore (_), but some of these symbols may be used in imported files or internal processing. They are best avoided. 2-2 Ge t t i n g S t a r t e d w i t h J a v a T e r m s • After that, it may contain any alphanumeric characters (letters or numbers), underscores, or Unicode currency symbols (such as pound sterling or $), but no other special characters. • It must be all one word (no spaces or hyphens). Capitalization of an identifier depends on the kind of identifier it is. Java is case-sensitive, so be careful of capitalization. Correct capitalization styles are mentioned in context. Data type Data types classify the kind of information that certain Java programming elements can contain. Data types fall into two main categories: • Primitive or basic • Composite or reference Naturally, different kinds of data types can hold different kinds and amounts of information. You can convert the data type of a variable to a different type, within limits: you cannot cast to or from the boolean type, and you cannot cast an object to an object of an unrelated class. Java will prevent you from risking your data. This means it will easily let you convert a variable or object to a larger type, but will try to prevent you from converting it to a smaller type. When you change a data type with a larger capacity to one with a smaller capacity, you must use a type of statement called a type cast. Primitive data types Primitive, or basic, data types are classified as Boolean (specifying an on/ off state), character (for single characters and Unicode characters), integer (for whole numbers), or floating-point (for decimal numbers). In code, primitive data types are all lower case. The Boolean data type is called boolean, and takes one of two values: true or false. Java doesn’t store these values numerically, but uses the boolean data type to store these values. The character data type is called char and takes single Unicode characters with values up to 16 bits long. In Java, Unicode characters (letters, special characters, and punctuation marks) are put between single quotation marks: 'b'. Java’s Unicode default value is \u0000, ranging from \u0000 to \uFFFF. Briefly, the Unicode numbering system takes numbers from 0 to 65535, but the numbers must be specified in hexadecimal notation, preceded by the escape sequence \u. J a v a l a n g u a g e e l e me n t s 2-3 T e r ms Not all special characters can be represented in this way. Java provides its own set of escape sequences, many of which can be found in the table of “Escape sequences” on page 11-19. In Java, the size of primitive data types is absolute, rather than platform- dependent. This improves portability. Different numeric data types take different kinds and sizes of numbers. Their names and capacities are listed below: Composite data types Each of the data types above accepts one number, one character, or one state. Composite, or reference, data types consist of more than a single element. Composite data types are of two kinds: classes and arrays. Class and array names start with an upper case letter and are camel-capitalized (that is, the first letter of each natural word is capitalized within the name, for instance, NameOfClass). A class is a complete and coherent piece of code that defines a logically unified set of objects and their behavior. For more information on classes, see Chapter 6, “Object-oriented programming in Java.” Any class can be used as a data type once it has been created and imported into the program. Because the String class is the class most often used as a data type, we will focus on it in this chapter. Strings The String data type is actually the String class. The String class stores any sequence of alphanumeric characters, spaces, and normal punctuation (termed strings), enclosed in double quotes. Strings can contain any of the Unicode escape sequences and require \" to put double quotes inside of a string, but, generally, the String class itself tells the program how to interpret the characters correctly. Type Attributes Range double Java’s default. A floating-point type that takes an 8- byte number to about fifteen decimal places. +/- 9.00x10 18 int Most common option. An integer type that takes a 4- byte whole number. +/- 2x10 9 long An integer type that takes an 8-byte whole number.+/- 9x10 18 float A floating-point type that takes a 4-byte number to about seven decimal places. +/- 2.0x10 9 short An integer type that takes a 2-byte whole number.+/- 32768 byte An integer type that takes a 1-byte whole number.+/- 128 2-4 Ge t t i n g S t a r t e d w i t h J a v a T e r m s Arrays An array is a data structure containing a group of values of the same data type. For instance, an array can accept a group of String values, a group of int values, or a group of boolean values. As long as all of the values are of the same data type, they can go into the same array. Arrays are characterized by a pair of square brackets. When you declare an array in Java, you can put the brackets either after the identifier or after the data type: int studentID[]; char[] grades; Note that the array size is not specified. Declaring an array does not allocate memory for that array. In most other languages the array’s size must be included in its declaration, but in Java you don’t specify its size until you use it. Then the appropriate memory is allocated. Variable A variable is a value that a programmer names and defines. Variables need an identifier and a value. Literal A literal is the actual representation of a number, a character, a state, or a string. A literal represents the value of an identifier. Alphanumeric literals include strings in double quotes, single char characters in single quotes, and boolean true/false values. Integer literals may be stored as decimals, octals, or hexadecimals, but think about your syntax: any integer with a leading 0 (as in a date) will be interpreted as an octal. Floating point literals can only be expressed as decimals. They will be treated as double unless you specify the type. For a more detailed explanation of literals and their capacities, see The Java Handbook by Patrick Naughton. J a v a l a n g u a g e e l e me n t s 2-5 A p p l y i n g c o n c e p t s Applying concepts The following sections demonstrate how to apply the terms and concepts introduced earlier in this chapter. Declaring variables The act of declaring a variable sets aside memory for the variable you declare. Declaring a variable requires only two things: a data type and an identifier, in that order. The data type tells the program how much memory to allocate. The identifier labels the allocated memory. Declare the variable only once. Once you have declared the variable appropriately, just refer to its identifier in order to access that block of memory. Variable declarations look like this: Methods Methods in Java are equivalent to functions or subroutines in other languages. The method defines an action to be performed on an object. Methods consist of a name and a pair of parentheses: getData() Here, getData is the name and the parentheses tell the program that it is a method. boolean isOn;The data type boolean can be set to true or false. The identifier isOn is the name that the programmer has given to the memory allocated for this variable. The name isOn has meaning for the human reader as something that would logically accept true/false values. int studentsEnrolled;The data type int tells you that you will be dealing with a whole number of less than ten digits. The identifier studentsEnrolled suggests what the number will signify. Since students are whole people, the appropriate data type calls for whole numbers. float creditCardSales;The data type float is appropriate because money is generally represented in decimals. You know that money is involved because the programmer has usefully named this variable creditCardSales. 2-6 Ge t t i n g S t a r t e d w i t h J a v a A p p l y i n g c o n c e p t s If the method needs particular information in order to get its job done, what it needs goes inside the parentheses. What’s inside the parentheses is called the argument, or arg for short. In a method declaration, the arg must include a data type and an identifier: drawString(String remark) Here, drawString is the name of the method, and String remark is the data type and variable name for the string that the method must draw. You must tell the program what type of data the method will return, or if it will return anything at all. This is called the return type. You can make a method return data of any primitive type. If the method doesn’t need to return anything (as in most action-performing methods), the return type must be void. Return type, name, and parentheses with any needed args give a very basic method declaration: String drawString(String remark); Your method is probably more complex than that. Once you have typed and named it and told it what args it will need (if any), you must define it completely. You do this below the method name, nesting the body of the definition in a pair of curly braces. This gives a more complex method declaration: String drawString(String remark) { //Declares the method. String remark = "My, what big teeth you have!" //Defines what's in the method. } //Closes the method body. Once you have defined the method, you only need to refer to it by its name and pass it any args it needs to do its job right then: drawString(remark); J a v a l a n g u a g e s t r u c t u r e 3-1 C h a p t e r 3 Chapter3 Java language structure This section provides you with foundational concepts about the structure of the Java programming language that will be used throughout this chapter. It assumes you understand general programming concepts, but have little or no experience with Java. Terms The following terms and concepts are discussed in this chapter: • “Keywords” on page 3-1 • “Operators” on page 3-2 • “Statements” on page 3-5 • “Code blocks” on page 3-5 • “Understanding scope” on page 3-5 Keywords Keywords are reserved Java terms that modify other syntax elements. Keywords can define an object’s accessibility, a method’s flow, or a variable’s data type. Keywords can never be used as identifiers. Many of Java’s keywords are borrowed from C/C++. Also, as in C/C++, keywords are always written in lowercase. Generally speaking, Java’s 3-2 Ge t t i n g S t a r t e d w i t h J a v a T e r m s keywords can be categorized according to their functions (examples are in parentheses): • Data and return types and terms (int, void, return) • Package, class, member, and interface (package, class, static) • Access modifiers (public, private, protected) • Loops and loop controls (if, switch, break) • Exception handling (throw, try, finally) • Reserved words — not used yet, but unavailable (goto, const) Some keywords are discussed in context in these chapters. For a complete list of keywords and what they mean, see “Java keywords” on page 11-2. Operators Operators allow you to access, manipulate, relate, or refer to Java language elements, from variables to classes. Operators have properties of precedence and associativity. When several operators act on the same element (or operand), the operators’ precedence determines which operator will act first. When more than one operator has the same precedence, the rules of associativity apply. These rules are generally mathematical; for instance, operators will usually be used from left to right, and operator expressions inside parentheses will be evaluated before operator expressions outside parentheses. Operators generally fall into six categories: assignment, arithmetic, logical, comparison, bitwise, and ternary. Assignment means storing the value to the right of the = inside the variable to the left of it. You can either assign a value to a variable when you declare it or after you have declared it. The machine doesn’t care; you decide which way makes sense in your program and your practice: double bankBalance; //Declaration bankBalance = 100.35; //Assignment double bankBalance = 100.35; //Declaration with assignment In both cases, the value of 100.35 is stored inside the memory reserved by the declaration of the bankBalance variable. Assignment operators allow you to assign values to variables. They also allow you to perform an operation on an expression and then assign the new value to the right-hand operand, using a single combined expression. Arithmetic operators perform mathematical calculations on both integer and floating-point values. The usual mathematical signs apply: + adds, - subtracts, * multiplies, and /divides two numbers. J a v a l a n g u a g e s t r u c t u r e 3-3 T e r ms Logical, or Boolean, operators allow the programmer to group boolean expressions in a useful way, telling the program exactly how to determine a specific condition. Comparison operators evaluate single expressions against other parts of the code. More complex comparisons (like string comparisons) are done programmatically. Bitwise operators act on the individual 0s and 1s of binary digits. Java’s bitwise operators can preserve the sign of the original number; not all languages do. The ternary operator, ?:, provides a shorthand way of writing a very simple if-then-else statement. The first expression is evaluated; if it’s true, the second expression is evaluated; if the second expression is false, the third expression is used. Below is a partial list of other operators and their attributes: Commenting code is excellent programming practice. Good comments can help you scan your code more quickly, keep track of what you’ve done as you build a complex program, and remind you of things you Operator Operand Behavior .object member Accesses a member of the object. (<type>) data type Casts a data type. 1 1.It’s important to distinguish between operation and punctuation. Parentheses are used around args as punctuation. They are used around a data type in an operation that changes a variable’s data type to the one inside the parentheses. + String Joins up strings (concatenator). number Adds. - number This is the unary 2 minus (reverses number sign). 2.A unary operator takes a single operand, a binary operator takes two operands, and a ternary operator takes three operands. number Subtracts. !boolean This is the boolean NOT operator. & integer, boolean This is both the bitwise (integer) and boolean AND operator. When doubled (&&), it is the boolean conditional AND. = most elements with variables Assigns an element to another element (for instance, a value to a variable, or a class to an instance). This can be combined with other operators to perform the other operation and assign the resulting value. For instance, += adds the left-hand value to the right, then assigns the new value to the right-hand side of the expression. 3-4 Ge t t i n g S t a r t e d w i t h J a v a T e r m s want to add or tune. You can use comments to hide parts of code that you want to save for special situations or keep out of the way while you work on something that might conflict. Comments can help you remember what you were doing when you return to one project after working on another, or when you come back from vacation. In a team development environment or whenever code is passed between programmers, everything you comment on, without having to parse out every bit of it to be sure they understand. Java uses three kinds of comments: single-line comments, multi-line Here are some examples: /* You can put as many lines of discussion or as many pages of boilerplate as you like between these two tags. */ /* Note that, if you really get carried away, you can nest single-line comments //inside of the multi-line comments and the compiler will have no trouble with it at all. */ Comment Tag Purpose Single- line// ...Suitable for brief remarks on the function or structure of a statement or expression. They require only an opening tag: as soon as you start a new line, you’re back into code. Multi- line/* ... */Good for any comment that will cover more than one line, as when you want to go into some detail about what’s happening in the code or when you need to embed legal notices in the code. It requires both opening and closing Javadoc/** ... */This is a multi-line comment that the JDK’s Javadoc utility can read and turn into HTML documentation. Javadoc has tags you can use to extend its functionality. It’s used to provide help for APIs, generate to do lists, and embed flags in code. It requires both opening and closing tags. To learn more about the Javadoc tool, go to Sun’s Javadoc page at javadoc/. J a v a l a n g u a g e s t r u c t u r e 3-5 T e r ms /* Just don't try nesting /* multi-line types of comments */ /** of any sort */ because that will generate a compiler error. */ /**Useful information about what the code does goes in Javadoc tags. Special tags such as @todo can be used here to take advantage of Javadoc's helpful features. */ Statements A statement is a single command. One command can cover many lines of code, but the compiler reads the whole thing as one command. Individual (usually single-line) statements end in a semicolon (;), and group (multi-line) statements end in a closing curly brace (}). Multi-line statements are generally called code blocks. By default, Java runs statements in the order in which they’re written, but Java allows forward references to terms that haven’t been defined yet. Code blocks A code block is everything between the curly braces, and includes the expression that introduces the curly brace part: class GettingRounder { ... } Understanding scope Scope rules determine where in a program a variable is recognized. Variables fall into two main scope categorizes: • Global variables: Variables that are recognized across an entire class. • Local variables: Variables that are recognized only in the code block where they were declared. 3-6 Ge t t i n g S t a r t e d w i t h J a v a T e r m s Scope rules are tightly related to code blocks. The one general scope rule is: a variable declared in a code block is visible only in that block and any blocks nested inside it. The following code illustrates this: class Scoping { int x = 0; void method1() { int y; y = x; // This works. method1 can access y. } void method2() { int z = 1; z = y; // This does not work: // y is defined outside method2's scope. } } This code declares a class called scoping, which has two methods: method1() and method2(). The class itself is considered the main code block, and the two methods are its nested blocks. The x variable is declared in the main block, so it is visible (recognized by the compiler) in both method1() and method2(). Variables y and z, on the other hand, were declared in two independent, nested blocks; therefore, attempting to use y in method2() is illegal since y is not visible in that block. Note A program that relies on global variables can be error-prone for two reasons: 1 Global variables are difficult to keep track of. 2 A change to a global variable in one part of the program can have an unexpected side effect in another part of the program. Local variables are safer to use since they have a limited life span. For example, a variable declared inside a method can be accessed only from that method, so there is no danger of it being misused somewhere else in the program. End every simple statement with a semicolon. Be sure every curly brace has a mate. Organize your curly braces in some consistent way (as in the examples above) so you can keep track of the pairs. Many Java IDEs (such as JBuilder) automatically nest the curly braces according to your settings. J a v a l a n g u a g e s t r u c t u r e 3-7 A p p l y i n g c o n c e p t s Applying concepts The following sections demonstrate how to apply the terms and concepts introduced earlier in this chapter. Using operators Review There are six basic kinds of operators (arithmetic, logical, assignment, comparison, bitwise, and ternary), and operators affect one, two, or three operands, making them unary, binary, or ternary operators. They have properties of precedence and associativity, which determine the order they’re processed in. Operators are assigned numbers that establish their precedence. The higher the number, the higher the order of precedence (that is, the more likely it is to be evaluated sooner than others). An operator of precedence 1 (the lowest) will be evaluated last, and an operator with a precedence of 15 (the highest) will be evaluated first. Operators with the same precedence are normally evaluated from left to right. Precedence is evaluated before associativity. For instance, the expression a + b - c * d will not be evaluated from left to right; multiplication has precedence over addition, so c * d will be evaluated first. Addition and subtraction have the same order of precedence, so associativity applies: a and b will be added first, then that sum will be subtracted from the product of c * d. It’s good practice to use parentheses around mathematical expressions you want evaluated first, regardless of their precedence, for instance: (a + b) - (c * d). The program will evaluate this operation the same way, but for the human reader, this format is clearer. Arithmetic operators Java provides a full set of operators for mathematical calculations. Java, unlike some languages, can perform mathematical functions on both integer and floating-point values. You will probably find these operators familiar. 3-8 Ge t t i n g S t a r t e d w i t h J a v a A p p l y i n g c o n c e p t s Here are the arithmetic operators: Use pre- or post-increment/decrement depending on when you want the new value to be assigned: int y = 3, x; //1. variable declarations int b = 9; //2. int a; //3. x = ++y; //4. pre-increment a = b--; //5. post-decrement In statement 4, pre-increment, the y variable’s value is incremented by 1, and then its new value (4) is assigned to x. Both x and y originally had a value of 3; now they both have the value of 4. In statement 5, post-decrement, b’s current value (9) is assigned to a and then the value of b is decremented (to 8). b originally had a value of 9 and a had no value assigned; now a is 9 and b is 8. The modulus operator requires an explanation to those who last studied math a long time ago. Remember that when you divide two numbers, they rarely divide evenly. What is left over after you have divided the numbers (without adding any new decimal places) is the remainder. For instance, 3 goes into 5 once, with 2 left over. The remainder (in this case, 2) is what the modulus operator evaluates for. Since remainders recur in a division cycle on a predictable basis (for instance, an hour is modulus 60), the modulus operator is particularly useful when you want to tell a program to repeat a process at specific intervals. Operator Definition Prec. Assoc. ++/-- Auto-increment/decrement: Adds one to, or subtracts one from, its single operand. If the value of i is 4, ++i is 5. A pre-increment (++i) increments the value by one and assigns the new value to the original variable i. A post-increment (i++) increments the value but leaves the original variable i with the original value. See below for more information. 1 Right +/- Unary plus/minus: sets or changes the positive/negative value of a single number. 2 Right * Multiplication.4 Left /Division.4 Left % Modulus: Divides the first operand by the second operand and returns the remainder. See below for a brief mathematical review. 4 Left +/- Addition/subtraction 5 Left J a v a l a n g u a g e s t r u c t u r e 3-9 A p p l y i n g c o n c e p t s Logical operators Logical (or Boolean) operators allow the programmer to group boolean expressions to determine certain conditions. These operators perform the standard Boolean operations AND, OR, NOT, and XOR. The following table lists the logical operators: The evaluation operators always evaluate both operands. The conditional operators, on the other hand, always evaluate the first operand, and if that determines the value of the whole expression, they don’t evaluate the second operand. For example: if ( !isHighPressure && (temperature1 > temperature2)) { ... } //Statement 1: conditional boolean1 = (x < y) || ( a > b); //Statement 2: conditional boolean2 = (10 > 5) & (5 > 1); //Statement 3: evaluation The first statement evaluates !isHighPressure first. If !isHighPressure is false (that is, if the pressure is high; note the logical double-negative of ! and false), the second operand, temperature1 > temperature2, doesn’t need to be evaluated. && only needs one false value in order to know what value to return. Operator Definition Prec. Assoc. !Boolean NOT (unary) Changes true to false or false to true. Because of its low precedence, you may need to use parentheses around this statement. 2 Right & Evaluation AND (binary) Yields true only if both operands are true. Always evaluates both operands. Rarely used as a logical operator. 9 Left ^ Evaluation XOR (binary) Yields true if only one operand is true. Evaluates both operands. 10 Left | Evaluation OR (binary) Yields true if one or both of the operands is true. Evaluates both operands. 11 Left && Conditional AND (binary) Yields true only if both operands are true. Called “conditional” because it only evaluates the second operand if the first operand is true. 12 Left || Conditional OR (binary) Yields true if either one or both operands is true; returns false if both are false. Doesn’t evaluate second operand if first operand is true. 13 Left 3-10 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s In the second statement, the value of boolean1 will be true if x is less than y. If x is more than y, the second expression will be evaluated; if a is less than b, the value of boolean1 will still be true. In the third statement, however, the compiler will compute the values of both operands before assigning true or false to boolean2, because & is an evaluation operator, not a conditional one. Assignment operators You know that the basic assignment operator (=) lets you assign a value to a variable. With Java’s set of assignment operators, you can perform an operation on either operand and assign the new value to a variable in one step. The following table lists assignment operators: The first operator is familiar by now. The rest of the assignment operators perform an operation first, and then store the result of the operation in the operand on the left side of the expression. Here are some examples: int y = 2; y *= 2; //same as (y = y * 2) boolean b1 = true, b2 = false; b1 &= b2; //same as (b1 = b1 & b2) Operator Definition Prec. Assoc. = Assign the value on the right to the variable on the left. 15 Right += Add the value on the right to the value of the variable on the left; assign the new value to the original variable. 15 Right -= Subtract the value on the right from the value of the variable on the left; assign the new value to the original variable. 15 Right *= Multiply the value on the right with the value of the variable on the left; assign the new value to the original variable. 15 Right /= Divide the value on the right from the value of the variable on the left; assign the new value to the original variable. 15 Right J a v a l a n g u a g e s t r u c t u r e 3-11 A p p l y i n g c o n c e p t s Comparison operators Comparison operators allow you to compare one value to another. The following table lists the comparison operators: The equality operator can be used to compare two object variables of the same type. In this case, the result of the comparison is true only if both variables refer to the same object. Here is a demonstration: m1 = new Mammal(); m2 = new Mammal(); boolean b1 = m1 == m2; //b1 is false m1 = m2; boolean b2 = m1 == m2; //b2 is true The result of the first equality test is false because m1 and m2 refer to different objects (even though they are of the same type). The second comparison is true because both variables now represent the same object. Note Most of the time, however, the equals() method in the Object class is used instead of the comparison operator. The comparing class must be subclassed from Object before objects of the comparing class can be compared using equals(). Bitwise operators Bitwise operators are of two types: shift operators and Boolean operators. The shift operators are used to shift the binary digits of an integer to the right or the left. Consider the following example (the short integer type is used instead of int for conciseness): short i = 13; //i is 0000000000001101 i = i << 2; //i is 0000000000110100 In the second line, the bitwise left shift operator shifted all the bits of i two positions to the left. Note The shifting operation is different in Java than in C/C++ in how it is used with signed integers. A signed integer is one whose left-most bit is used to indicate the integer’s positive or negative sign: the bit is 1 if the integer is negative, 0 if positive. In Java, integers are always signed, whereas in C/ C++ they are signed only by default. In most implementations of C/C++, Operator Definition Prec. Assoc. < Less than 7 Left > Greater than 7 Left <= Less than or equal to 7 Left >= Greater than or equal to 7 Left == Equal to 8 Left != Not equal to 8 Left 3-12 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s a bitwise shift operation does not preserve the integer’s sign; the sign bit would be shifted out. In Java, however, the shift operators preserve the sign bit (unless you use the >>> to perform an unsigned shift). This means that the sign bit is duplicated, then shifted. For example, right shifting 10010011 by 1 is 11001001. The following table lists Java’s bitwise operators: ?:, the ternary operator ?: is a ternary operator that Java borrowed from C. It provides a handy shortcut to create a very simple if-then-else kind of statement. This is the syntax: <expression 1> ? <expression 2> : <expression 3>; expression 1 is evaluated first. If it is true, then expression 2 is evaluated. If expression 2 is false, expression 3 is used. For example: int x = 3, y = 4, max; max = (x > y) ? x : y; Here, max is assigned the value of x or y, depending on which is greater. Operator Definition Prec. Assoc. ~ Bitwise NOT Inverts each bit of the operand, so each 0 becomes 1 and vice versa. 2 Right << Signed left shift Shifts the bits of the left operand to the left, by the number of digits specified in the right operand, with 0’s shifted in from the right. High-order bits are lost. 6 Left >> Signed right shift Shifts the bits of the left operand to the right, by the number of digits specified on the right. If the left operand is negative, 0’s are shifted in from the left; if it is positive, 1’s are shifted in. This preserves the original sign. 6 Left >>> Zero-fill right shift Shifts right, but always fills in with 0’s. 6 Left & Bitwise AND Can be used with = to assign the value. 9 Left | Bitwise OR Can be used with = to assign the value. 10 Left ^ Bitwise XOR Can be used with = to assign the value. 11 Left <<= Left-shift with assignment 15 Left >>= Right-shift with assignment 15 Left >>>= Zero-fill right shift with assignment 15 Left J a v a l a n g u a g e s t r u c t u r e 3-13 A p p l y i n g c o n c e p t s Using methods You know that methods are what get things done. Methods cannot contain other methods, but they can contain variables and class references. Here is a brief example to review. This method helps a music store with its inventory: //Declare the method: return type, name, args: public int getTotalCDs(int numRockCDs, int numJazzCDs, in numPopCDs) { //Declare the variable totalCDs. The other three variables are //declared elsewhere: int totalCDs = numRockCDs + numJazzCDs + numPopCDs; //Make it do something useful. In this case, print this line on the screen: System.out.println("Total CDs in stock = " + totalCDs); } In Java, you can define more than one method with the same name, as long as the different methods require different arguments. For instance, both public int getTotalCDs(int numRockCDs, int numJazzCDs, in numPopCDs) and public int getTotalCDs(int salesRetailCD, int salesWholesaleCD) are legal in the same class. Java will recognize the different patterns of arguments (the method signatures) and apply the correct method when you make a call. Assigning the same name to different methods is called method overloading. To access a method from other parts of a program, you must first create an instance of the class the method resides in, and then use that object to call the method: //Create an instance totalCD of the class Inventory: Inventory totalCD = new Inventory(); //Access the getTotalCDs() method inside of Inventory, storing the value in total: int total = totalCD.getTotalCDs(myNumRockCDs, myNumJazzCDs, myNumPopCDs); Using arrays Note that the size of an array is not part of its declaration. The memory an array requires is not actually allocated until you initialize the array. To initialize the array (and allocate the needed memory), you must use the new operator as follows: int studentID[] = new int[20]; //Creates array of 20 int elements. char[] grades = new char[20]; //Creates array of 20 char elements. float[][] coordinates = new float[10][5]; //2-dimensional, 10x5 array of //float elements. Note In creating two-dimensional arrays, the first array number defines number of rows and the second array number defines number of columns. 3-14 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s Java counts positions starting with 0. This means the elements of a 20-element array will be numbered from 0 to 19: the first element will be 0, the second will be 1, and so on. Be careful how you count when you’re working with arrays. When an array is created, the value of all its elements is null or 0; values are assigned later. Note The use of the new operator in Java is similar to that of the malloc command in C and the new operator in C++. To initialize an array, specify the values of the array elements inside a set of curly braces. For multi-dimensional arrays, use nested curly braces. For example: char[] grades = {'A', 'B', 'C', 'D', 'F'); float[][] coordinates = {{0.0, 0.1}, {0.2, 0.3}}; The first statement creates a char array called grades. It initializes the array’s elements with the values 'A' through 'F'. Notice that we did not have to use the new operator to create this array; by initializing the array, enough memory is automatically allocated for the array to hold all the initialized values. Therefore, the first statement creates a char array of 5 elements. The second statement creates a two-dimensional float array called coordinates, whose size is 2 by 2. Basically, coordinates is an array consisting of two array elements: the array’s first row is initialized to 0.0 and 0.1, and the second row to 0.2 and 0.3. Using constructors A class is a full piece of code, enclosed in a pair of curly braces, that defines a logically coherent set of variables, attributes, and actions. A package is a logically associated set of classes. Note that a class is just a set of instructions. It doesn’t do anything itself. It’s analogous to a recipe: you can make a cake from the right recipe, but the recipe is not the cake, it’s only the instructions for it. The cake is an object you have created from the instructions in the recipe. In Java, we would say that we have created an instance of cake from the recipe Cake. The act of creating an instance of a class is called instantiating that object. You instantiate an object of a class. To instantiate an object, use the assignment operator (=), the keyword new, and a special kind of method called a constructor. A call to a constructor is the name of the class being instantiated followed by a pair of parentheses. Although it looks like a method, it takes a class’s name; that’s why it’s capitalized: <ClassName> <instanceName> = new <Constructor()>; J a v a l a n g u a g e s t r u c t u r e 3-15 A p p l y i n g c o n c e p t s For example, to instantiate a new object of the Geek class and name the instance thisProgrammer: Geek thisProgrammer = new Geek(); A constructor sets up a new instance of a class: it initializes all the variables in that class, making them immediately available. It can also perform any start-up routines required by the object. For example, when you need to drive your car, the first thing you do is open the door, climb in, put the clutch in, and start the engine. (After that, you can do all the things normally involved in driving, like getting into gear and using the accelerator.) The constructor handles the programmatic equivalents of the actions and objects involved in getting in and starting the car. Once you have created an instance, you can use the instance name to access members of that class. For more information on constructors, see “Case study: A simple OOP example” on page 6-4. Member access The access operator (.) is used to access members inside of an instantiated object. The basic syntax is: <instanceName>.<memberName> Precise syntax of the member name depends on the kind of member. These can include variables (<memberName>), methods (<memberName>()), or subclasses (<MemberName>). You can use this operation inside of other syntax elements wherever you need to access a member. For example: setColor(Color.pink); This method needs a color to do its job. The programmer used an access operation as an arg to access the variable pink within the class Color. Arrays Array elements are accessed by subscripting, or indexing, the array variable. To index an array variable, follow the array variable’s name with the element’s number (index) surrounded by square brackets. Arrays are always indexed starting from 0. If you have an array with 9 elements, the first element is the 0 index and the last element is the 8 index. (Coding as if elements were numbered from 1 is a common mistake.) In the case of multi-dimensional arrays, you must use an index for each dimension to access an element. The first index is the row and the second index is the column. 3-16 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s For example: firstElement = grades[0]; //firstElement = 'A' fifthElement = grades[4]; //fifthElement = 'F' row2Col1 = coordinates[1][0]; //row2Col1 = 0.2 The following snippet of code demonstrates one use of arrays. It creates an array of 5 int elements called intArray, then uses a for loop to store the integers 0 through 4 in the elements of the array: int[] intArray = new int [5]; int index; for (index = 0; index < 5; index++) intArray [index] = index; This code increments the index variable from 0 to 4, and at every pass, it stores its value in the element of intArray indexed by the variable index. J a v a l a n g u a g e c o n t r o l 4-1 C h a p t e r 4 Chapter4 Java language control This section provides you with foundational concepts about control of the Java programming language that will be used throughout this chapter. It assumes you understand general programming concepts, but have little or no experience with Java. Terms The following terms and concepts are discussed in this chapter: • “String handling” on page 4-1 • “Type casting and conversion” on page 4-2 • “Return types and statements” on page 4-3 • “Flow control statements” on page 4-3 String handling The String class provides methods that allow you to get substrings or to index characters within a string. However, the value of a declared String can’t be changed. If you need to change the String value associated with that variable, you must point the variable to a new value: String text1 = new String("Good evening."); // Declares text1 and assigns a value. text1 = "Hi, honey, I'm home!" // Assigns a new value to text1. Indexing allows you to point to a particular character in a string. Java counts each position in a string starting from 0, so that the first position is 0, the second position is 1, and so on. This gives the eighth position in a string an index of 7. 4-2 Ge t t i n g S t a r t e d w i t h J a v a T e r m s The StringBuffer class provides a workaround. It also offers several other ways to manipulate a string’s contents. The StringBuffer class stores your string in a buffer (a special area of memory) whose size you can explicitly control; this allows you to change the string as much as necessary before you have to declare a String and make the string permanent. Generally, the String class is for string storage and the StringBuffer class is for string manipulation. Type casting and conversion Values of data types can be converted from one type to another. Class values can be converted from one class to another in the same class hierarchy. Note that conversion does not change the original type of that value, it only changes the compiler’s perception of it for that one operation. Obvious logical restrictions apply. A widening conversion — from a smaller type to a larger type — is easy, but a narrowing conversion — converting from a larger type (for instance, double or Mammal) to a smaller type (for instance, float or Bear) — risks your data, unless you’re certain that your data will fit into the parameters of the new type. A narrowing conversion requires a special operation called a cast. The following table shows widening conversions of primitive values. These won’t risk your data: To cast a data type, put the type you want to cast to in parentheses immediately before the variable you want to cast: (int)x. This is what it looks like in context, where x is the variable being cast, float is the original data type, int is the target data type, and y is the variable storing the new value: float x = 1.00; //declaring x as a float int y = (int)x; //casting x to an int named y This assumes that the value of x would fit inside of int. Note that x’s decimal values are lost in the conversion. Java rounds decimals down to the nearest whole number. Original Type Converts to Type byte short, char, int, long, float, double short int, long, float, double char int, long, float, double int long, float, double long float, double float double J a v a l a n g u a g e c o n t r o l 4-3 T e r ms Return types and statements You know that a method declaration requires a return type, just as a variable declaration requires a data type. The return types are the same as the data types (int, boolean, String, and so on), with the exception of void. void is a special return type. It signifies that the method doesn’t need to give anything back when it’s finished. It is most commonly used in action methods that are only required to do something, not to pass any information on. All other return types require a return statement at the end of the method. You can use the return statement in a void method to leave the method at a certain point, but otherwise it’s needless. A return statement consists of the word return and the string, data, variable name, or concatenation required: return numCD; It’s common to use parentheses for concatenations: return ("Number of files: " + numFiles); Flow control statements Flow control statements tell the program how to order and use the information that you give it. With flow control, you can reiterate statements, conditionalize statements, create recursive loops, and control loop behavior. Flow control statements can be grouped into three kinds of statements: iteration statements such as for, while, and do-while, which create loops; selection statements such as switch, if, if-else, if-then-else, and if-else-if ladders, which conditionalize the use of statements; and the jump statements break, continue, and return, which shift control to another part of your program. A special form of flow control is exception handling. Exception handling provides a structured means of catching runtime errors in your program and making them return meaningful information about themselves. You can also set the exception handler to perform certain actions before allowing the program to terminate. 4-4 Ge t t i n g S t a r t e d w i t h J a v a A p p l y i n g c o n c e p t s Applying concepts The following sections demonstrate how to apply the terms and concepts introduced earlier in this chapter. Escape sequences A special type of character literal is called an escape sequence. Like C/C++, Java uses escape sequences to represent special control characters and characters that cannot be printed. An escape sequence is represented by a backslash (\) followed by a character code. The following table summarizes these escape sequences: Non-decimal numeric characters are escape sequences. An octal character is represented by three octal digits, and a Unicode character is represented by lowercase u followed by four hexadecimal digits. For example, the decimal number 57 is represented by the octal code \071 and the Unicode sequence \u0039. The sample string in the following statement prints out the words Name and "Hildegaard von Bingen" separated by two tabs on one line, and prints out ID and "1098", also separated by two tabs, on the second line: String escapeDemo = new String("Name\t\t\"Hildegaard von Bingen\"\nID\t\t\"1098\""); Strings The string of characters you specify in a String is a literal; the program will use exactly what you specify, without changing it in any way. However, the String class provides the means to chain strings together (called string concatenation), see and use what’s inside of strings (compare strings, Character Escape Sequence Backslash\\ Backspace\b Carriage return\r Double quote\" Form feed\f Horizontal tab\t New line\n Octal character\DDD Single quote\' Unicode character\uHHHH J a v a l a n g u a g e c o n t r o l 4-5 A p p l y i n g c o n c e p t s search strings, or extract a substring from a string), and convert other kinds of data to strings. Some examples follow: • Declare variables of the String type and assign values: String firstNames = "Joseph, Elvira and Hans"; String modifier = " really "; String tastes = "like chocolate."; • Get a substring from a string, selecting from the eighth column to the end of the string: String sub = firstNames.substring(8); // "Elvira and Hans" • Compare part of the substring to another string, convert a string to capital letters, then concatenate it with other strings to get a return value: boolean bFirst = firstNames.startsWith("Emine"); // Returns false in // this case. String caps = modifier.toUpperCase(); // Yields " REALLY " return firstNames + caps + tastes; // Returns the line: // Elvira and Hans // REALLY like chocolate. For more information on how to use the String class, see Sun’s API documentation at String.html. StringBuffer If you want more control over your strings, use the StringBuffer class. This class is part of the java.lang package. StringBuffer stores your strings in a buffer so that you don’t have to declare a permanent String until you need it. Some of the advantages to this are that you don’t have to redeclare a String if its content changes. You can reserve a size for the buffer larger than what is already in there. StringBuffer provides methods in addition to those in String that allow you to modify the contents of strings in new ways. For instance, StringBuffer's setCharAt() method changes the character at the index specified in the first parameter, to the new value specified in the second parameter: StringBuffer word = new StringBuffer ("yellow"); word.setCharAt (0, 'b'); //word is now "bellow" Determining access By default, classes are available to all of the members inside them, and the members within the class are available to each other. However, this access can be widely modified. 4-6 Ge t t i n g S t a r t e d w i t h J a v a A p p l y i n g c o n c e p t s Access modifiers determine how visible a class’s or member’s information is to other members and classes. Access modifiers include: • public: A public member is visible to members outside the public member’s scope, as long as the parent class is visible. A public class is visible to all other classes in all other packages. • private: A private member’s access is limited to the member’s own class. • protected: A protected member can be accessed by other members of its class and by members of classes in the same package (as long as the member’s parent class is accessible), but not from other packages. A protected class is available to other classes in the same package, but not to other packages. • If no access modifier is declared, the member is available to all classes inside the parent package, but not outside the package. Let’s look at this in context: class Waistline { private boolean invitationGiven = false; // This is private. private int weight = 170 // So is this. public void acceptInvitation() { // This is public. invitationGiven = true; } //Class JunkFood is declared and object junkFood is instantiated elsewhere: public void eat(JunkFood junkFood) { /*This object only accepts more junkFood if it has an invitation * and if it is able to accept. Notice that isAcceptingFood() * checks to see if the object is too big to accept more food: */ if (invitationGiven && isAcceptingFood()) { /*This object's new weight will be whatever its current weight * is, plus the weight added by junkFood. Weight increments * as more junkFood is added: */ weight += junkFood.getWeight(); } } //Only the object knows if it's accepting food: private boolean isAcceptingFood() { // This object will only accept food if there's room: return (isTooBig() ? false : true); } J a v a l a n g u a g e c o n t r o l 4-7 A p p l y i n g c o n c e p t s //Objects in the same package can see if this object is too big: protected boolean isTooBig() { //It can accept food if its weight is less than 185: return (weight > 185) ? true : false; } } Notice that isAcceptingFood() and invitationGiven are private. Only members inside this class know if this object is capable of accepting food or if it has an invitation. isTooBig() is protected. Only classes inside this package can see if this object’s weight exceeds its limit or not. The only methods that are exposed to the outside are acceptInvitation() and eat(). Any class can perceive these methods. Handling methods The main() method deserves special attention. It is the point of entry into a program (except an applet). It’s written like this: public static void main(String[] args) { ... } There are specific variations allowed inside the parentheses, but the general form is consistent. The keyword static is important. A static method is always associated with its entire class, rather than with any particular instance of that class. (The keyword static can also be applied to classes. All of the members of a static class are associated with the class’s entire parent class.) static methods are also called class methods. Since the main() method is the starting-point within the program, it must be static in order to remain independent of the many objects the program may generate from its parent class. static’s class-wide association affects how you call a static method and how you call other methods from within a static method. static members can be called from other types of members by simply using the name of the method, and static members can call each other the same way. You don’t need to create an instance of the class in order to access a static method within it. To access non static members of a nonstatic class from within a static method, you must instantiate the class of the member you want to reach and use that instance with the access operator, just as you would for any other method call. Notice that the arg for the main() method is a String array, with other args allowed. Remember that this method is where the compiler starts 4-8 Ge t t i n g S t a r t e d w i t h J a v a A p p l y i n g c o n c e p t s working. When you pass an arg from the command line, it’s passed as a string to the String array in the declaration of the main() method, and uses that arg to start running the program. When you pass a data type other than a String, it will still be received as a string. You must code into the body of the main() method the required conversion from String to the data type needed. Using type conversions Review Type conversion is the process of converting the data type of a variable for the duration of a specific operation. The standard form for a narrowing conversion is called a cast; it may risk your data. Implicit casting There are times when a cast is performed implicitly by the compiler. The following is an example: if (3 > 'a') { ... } In this case, the value of 'a' is converted to an integer value (the ASCII value of the letter a) before it is compared with the number 3. Explicit conversion Syntax for a widening cast is simple: <nameOfOldValue> = (<new type>) <nameOfNewValue> Java doesn’t want you to make a narrowing conversion, so you must be more explicit when doing so: floatValue = (float)doubValue; // To float "floatValue" // from double "doubValue". longValue = (long)floatValue; // To long "longValue" // from float "floatValue". // This is one of four possible constructions. (Note that decimals are rounded down by default.) Be sure you thoroughly understand the syntax for the types you want to cast; this process can get messy. For more information, see “Converting and casting data types” on page 11-5. J a v a l a n g u a g e c o n t r o l 4-9 A p p l y i n g c o n c e p t s Flow control Review There are three types of loop statements: iteration statements (for, while, and do-while) create loops, selection statements (switch and all the if statements) tell the program under what circumstances the program will use statements, and jump statements (break, continue, and return) shift control out to another part of the program. Loops Each statement in a program is executed once. However, it is sometimes necessary to execute statements several times until a condition is met. Java provides three ways to loop statements: while, do and for loops. • The while loop The while loop is used to create a block of code that will execute as long as a particular condition is met. This is the general syntax of the while loop: while ( <boolean condition statement> ) { <code to execute as long as that condition is true> } The loop first checks the condition. If the condition’s value is true, it executes the entire block. It then reevaluates the condition, and repeats this process until the condition becomes false. At that point, the loop stops executing. For instance, to print “Looping” 10 times: int x = 0; //Initiates x at 0. while (x < 10){ //Boolean condition statement. System.out.println("Looping"); //Prints "Looping" once. x++; //Increments x for the //next iteration. } When the loop first starts executing, it checks whether the value of x is less than 10. Since it is, the body of the loop is executed. In this case, the word “Looping” is printed on the screen, and then the value of x is incremented. This loop continues until the value of x equals 10, when the loop stops executing. Unless you intend to write an infinite loop, make sure there is some point in the loop where the condition’s value becomes false and the loop terminates. You can also terminate a loop by using the return, continue, or break statements. 4-10 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s • The do-while loop The do-while loop is similar to the while loop, except that it evaluates the condition after the statements instead of before. The following code shows the previous while loop converted to a do loop: int x = 0; do{ System.out.println("Looping"); x++; } while (x < 10); The main difference between the two loop constructs is that the do-while loop is always going to execute at least once, but the while loop won’t execute at all if the initial condition is not met. • The for loop The for loop is the most powerful loop construct. Here is the general syntax of a for loop: for ( <initialization> ; <boolean condition> ; <iteration> ) { <execution code> } The for loop consists of three parts: an initialization expression, a Boolean condition expression, and an iteration expression. The third expression usually updates the loop variable initialized in the first expression. Here is the for loop equivalent of the previous while loop: for (int x = 0; x < 10; x++){ System.out.println("Looping"); } This for loop and its equivalent while loop are very similar. For almost every for loop, there is an equivalent while loop. The for loop is the most versatile loop construct, but still very efficient. For example, a while loop and a for loop can both add the numbers one through twenty, but a for loop can do it in one line less. While: int x = 1, z = 0; while (x <= 20) { z += x; x++; } For: int z = 0; for (int x=1; x <= 20; x++) { z+= x; } J a v a l a n g u a g e c o n t r o l 4-11 A p p l y i n g c o n c e p t s We can tweak the for loop to make the loop execute half as many times: for (int x=1,y=20, z=0; x<=10 && y>10; x++, y--) { z+= x+y; } Let’s break this loop up into its four main sections: a The initialization expression: int x =1, y=20, z=0 b The Boolean condition: x<=10 && y>10 c The iteration expression: x++, y-- d The main body of executable code: z+= x + y Loop control statements These statements add control to the loop statements. • The break statement The break statement will allow you to exit a loop structure before the test condition is met. Once a break statement is encountered, the loop immediately terminates, skipping any remaining code. For instance: int x = 0; while (x < 10){ System.out.println("Looping"); x++; if (x == 5) break; else ... //do something else } In this example, the loop will stop executing when x equals 5. • The continue statement The continue statement is used to skip the rest of the loop and resume execution at the next loop iteration. for ( int x = 0 ; x < 10 ; x++){ if(x == 5) continue; //go back to beginning of loop with x=6 System.out.println("Looping"); } This example will not print "Looping" if x is 5, but will continue to print for 6, 7, 8, and 9. 4-12 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s Conditional statements Conditional statements are used to provide your code with decision- making capabilities. There are two conditional structures in Java: the if- else statement, and the switch statement. • The if-else statement The syntax of an if-else statement is as follows: if (<condition1>) { ... //code block 1 } else if (<condition2>) { ... //code block 2 } else { ... //code block 3 } The if-else statement is typically made up of multiple blocks. Only one of the blocks will execute when the if-else statement executes, based on which of the conditions is true. The else-if and else blocks are optional. Also, the if-else statement is not restricted to three blocks: it can contain as many else-if blocks as needed. The following examples demonstrate the use of the if-else statement: if ( x % 2 == 0) System.out.println("x is even"); else System.out.println("x is odd"); if (x == y) System.out.println("x equals y"); else if (x < y) System.out.println("x is less than y"); else System.out.println("x is greater than y"); • The switch statement The switch statement is similar to the if-else statement. Here is the general syntax of the switch statement: switch (<expression>){ case <value1>: <codeBlock1>; break; case <value2>: <codeBlock2>; break; default : <codeBlock3>; } J a v a l a n g u a g e c o n t r o l 4-13 A p p l y i n g c o n c e p t s Note the following: • If there is only one statement in a code block, the block does not need to be enclosed in braces. • The default code block corresponds to the else block in an if-else statement. • The code blocks are executed based on the value of a variable or expression, not on a condition. • The value of <expression> must be of an integer type, or a type that can be safely converted to int, such as char. • The case values must be constant expressions that are of the same data type as the original expression. • The break keyword is optional. It is used to end the execution of the switch statement once a code block executes. If it’s not used after codeBlock1, then codeBlock2 executes right after codeBlock1 finishes executing. • If a code block should execute when expression is one of a number of values, each of the values must be specified like this: case <value>:. Here is an example, where c is of type char: switch (c){ case '1': case '3': case '5': case '7': case '9': System.out.println("c is an odd number"); break; case '0': case '2': case '4': case '6': case '8': System.out.println("c is an even number"); break; case ' ': System.out.println("c is a space"); break; default : System.out.println("c is not a number or a space"); } The switch will evaluate c and jump to the case statement whose value is equal to c. If none of the case values equal c, the default section will be executed. Notice how multiple values can be used for each block. Handling exceptions Exception handling provides a structured means of catching run-time errors in your program and making them return meaningful information about themselves. You can also set the exception handler to perform certain actions before allowing the program to terminate. Exception handling uses the keywords try, catch, and finally. A method can declare an exception by using the throws and throw keywords. 4-14 Ge t t i n g S t a r t e d wi t h J a v a A p p l y i n g c o n c e p t s In Java, an exception can be a subclass of the class java.lang.Exception or java.lang.Error. When a method declares that an exception has occurred, we say that it throws an exception. To catch an exception means to handle an exception. Exceptions that are explicitly declared in the method declaration must be caught, or the code will not compile. Exceptions that are not explicitly declared in the method declaration could still halt your program when it runs, but it will compile. Note that good exception handling makes your code more robust. To catch an exception, you enclose the code which might cause the exception in a try block, then enclose the code you want to use to handle the exception in a catch block. If there is important code (such as clean-up code) that you want to make sure will run even if an exception is thrown and the program gets shut down, enclose that code in a finally block at the end. Here is an example of how this works: try { ... // Some code that might throw an exception goes here. } catch( Exception e ) { ... // Exception handling code goes here. // This next line outputs a stack trace of the exception: e.printStackTrace(); } finally { ... // Code in here is guaranteed to be executed, // whether or not an exception is thrown in the try block. } The try block should be used to enclose any code that might throw an exception that needs to be handled. If no exception is thrown, all of the code in the try block will execute. If, however, an exception is thrown, then the code in the try block stops executing at the point where the exception is thrown and the control flows to the catch block, where the exception is handled. You can do whatever you need to do to handle the exception in one or more catch blocks. The simplest way to handle exceptions is to handle all of them in one catch block. To do this, the argument in parentheses after catch should indicate the class Exception, followed by a variable name to assign to this exception. This indicates that any exception which is an instance of java.lang.Exception or any of its subclasses will be caught; in other words, any exception. If you need to write different exception handling code depending on the type of exception, you can use more than one catch block. In that case, instead of passing Exception as the type of exception in the catch argument, you indicate the class name of the specific type of exception you want to catch. This may be any subclass of Exception. Keep in mind that the catch J a v a l a n g u a g e c o n t r o l 4-15 A p p l y i n g c o n c e p t s block will always catch the indicated type of exception and any of its subclasses. Code in the finally block is guaranteed to be executed, even if the try block code does not complete for some reason. For instance, the code in the try block might not complete if it throws an exception, but the code in the finally block will still execute. This makes the finally block a good place to put clean-up code. If you know that a method you’re writing is going to be called by other code, you might leave it up to the calling code to handle the exception that your method might throw. In that case, you would simply declare that the method can throw an exception. Code that might throw an exception can Log in to post a comment
https://www.techylib.com/en/view/squawkpsychotic/getting_started_with_java
CC-MAIN-2018-09
refinedweb
13,705
62.78
0 Hello all, This is my first post here, this board seems to contain interesting stuff regarding C++, that's why I subscribed into it. Ok, let's go the problem... I have a class with two methods into it: floodLog writeToLog floodLog is intended to create threads that will execute the method writeToLog. The problem is that I can't compile it, I'm receiving the following error: "testing.cpp", line 54: Error: Cannot assign void*(Testing::*)(void*) to void*(*)(void*). Below you can see the source code: #include <iostream> using namespace std; #include <sys/types.h> #include <sys/signal.h> #include <sys/stat.h> #ifndef AIX #include <sys/siginfo.h> #endif #if defined sun #include <sys/statvfs.h> struct statvfs info; #elif defined HPUX11 #include <sys/vfs.h> struct statfs info; #elif defined AIX #include <sys/vfs.h> struct statfs info; #else #include <sys/mount.h> struct statfs info; #endif #include <cLog.h> #include <pthread.h> const char *mpszFileVersion[] = {"a", "b"}; const char *mpszEnvironment[] = {"c", "d"}; char *SCCS_LONGVERSION ; #define LOGLEVEL 4 class Testing { private: cLog* m_Log; pthread_t tid; void* (*fPtr)( void* ); public: Testing( ); ~Testing( ); void floodLog( int nThreads ); private: void* writeToLog( void* ); }; Testing::Testing( ) { cout << "Called constructor of Testing...\n"; m_Log = new cLog("TESTING", LOGLEVEL, 5242880); fPtr = writeToLog; } void* Testing::writeToLog( void* ) { m_Log->Write( LOGLEVEL, "Another thread writting...\n" ); return NULL; } void Testing::floodLog( int nThreads ) { int i = 0; cout << "Flooding cLog class with " << i << " threads.\n"; for( i = 0; i < nThreads; i++ ) { cout << "Created Thread: " << ( i + 1 ) << endl; pthread_create( &tid, NULL, fPtr, NULL ); } cout << "Finished flooding.\n"; } int main( ) { Testing test; cout << "Now in main.\n"; test.floodLog( 10 ); return 0; } Thanks in advance, Regards, Caio. Edited by happygeek: fixed formatting
https://www.daniweb.com/programming/software-development/threads/115838/pthread-create-in-class-method
CC-MAIN-2018-43
refinedweb
284
68.57
The aim of this and the following section is to introduce you to some of the most common C++ code methods in BaBar modules. With just a few basic syntax snippets, you will be well on your way to being able to edit a BaBar module for your analysis - or even to write your own module. BaBar code is written in C++, and modules are implemented as a C++ class. So editing module code means working with C++. If you have not yet learned C++, now is the time to do it. You do not need to become an expert programmer, but you will should try to become familiar with how C++ variables, functions and classes work. Here are some useful references for beginners: There are also many many C++ books available from libraries and bookstores. Your colleagues probably have some lying around. A module named MyModule consists of two files: the header file MyModule.hh, and the implementation file MyModule.cc. The header file contains the declarations of all the module's member objects and member functions. That is, it provides the compiler with a list of module's member functions and objects. The implementation file contains the definitions of the member functions. That is, it tells the compiler what the module's functions do. The following are the skeleton .hh and .cc files for a module. This is the absolute minimum code that you need in a module. Right now, this module does precisely nothing. To make a useful module, you need to add code into the skeleton framework. #ifndef MYMODULE_HH #define MYMODULE_HH #include "Framework/AppModule.hh" #include "AbsEvent/AbsEvent.hh" class MyModule : public AppModule { public: // Constructors MyModule( const char* const theName, const char* const theDescription ); // Destructor virtual ~MyModule( ); // Operations virtual AppResult beginJob( AbsEvent* anEvent ); virtual AppResult event( AbsEvent* anEvent ); virtual AppResult endJob ( AbsEvent* anEvent ); protected: private: }; #endif #include "BaBar/BaBar.hh" #include "MyPackage/MyModule.hh" // Constructors MyModule::MyModule( const char* const theName, const char* const theDescription ) : AppModule( theName, theDescription ) { } // Destructor MyModule::~MyModule( ) { } // Operations AppResult MyModule::beginJob( AbsEvent* anEvent ) { return AppResult::OK; } AppResult MyModule::endJob( AbsEvent* anEvent ) { return AppResult::OK; } AppResult MyModule::event( AbsEvent* anEvent ) { return AppResult::OK; } There are 5 main locations where you will add code: In the .hh file: The BaBar environment package for histograms and ntuples is the HepTuple package. The histogram class is HepHistogram, and the ntuple class is HepTuple. To create and use either a histogram or an ntuple from within the BaBar framework, you also need a HepTupleManager. The analysis module QExample class (from the sample analysis job) books a histogram of the number of tracks per event. If an analysis module is to use classes from the HepTuple histogramming package, its header file must #include the Histogram header file. In QExample.hh, you have: #include "HepTuple/Histogram.h" HepHistogram* _numTracksHisto; The next step is to create a histogram manager. First, the analysis module's .cc file needs to #include the defining header files: #include "AbsEnv/AbsEnv.hh" #include "GenEnv/GenEnv.hh" #include "HepTuple/TupleManager.h" HepTupleManager* manager = gblEnv->getGen()->ntupleManager(); _numTrkHisto = manager->histogram("Tracks per Event", 20, 0., 20. ); event accumulate _numTrkHisto->accumulate( trkList->length() ); Managing ntuples is very similar to managing histograms. Once again, the first step is to #include the required header in the module's .hh file, #include "HepTuple/Tuple.h" in the header file. HepTuple* _ntuple; In the .cc file, #include the header files needed for the HepTupleManager: #include "HepTuple/TupleManager.h" #include "AbsEnv/AbsEnv.hh" #include "GenEnv/GenEnv.hh" In beginJob(), create a HepTuple manager: HepTupleManager* manager = gblEnv->getGen()->ntupleManager(); In beginJob(), initialize your ntuple: _ntuple = manager->ntuple("MyTuple"); This ntuple is declared with only one argument: the name of the ntuple. This completes the declaration and the definition of the ntuple. The ntuple can be filled for each event (from within the event function of the analysis module) with a call to the column function: _ntuple->column("Benergy", Benergy, -99.0); The variable can be an integer, a double, a float, or a boolean. There are also several other supported types. In this example, Benergy is a double, the B meson energy that was (presumably) calculated or determined earlier in the event() function. If in some event the column function is not called for some reason (but has been called in previous events), the ntuple will be filled with the default value for that event. So it is a good idea to set it to a non-physical value, so that this case easy to identify. Here, the default is set to -99.0, an very unphysical energy. The name of the variable is the name that will be used in the ntuple to refer to the variable. You should always give your variables names that make sense - like Benergy for B meson energy, pleptonCM for lepton momentum in the center-of-mass frame, and so on. The last but very important step is to dump all of the event's data into the ntuple. Add the line: _ntuple->dumpData(); As described in the Event Information section of the Workbook, one of the most common data structures is a list of particle candidates. The list could be a standard list from the Event Store, or a run-time list created at run-time. Either way, one of the most common tasks that a module needs to perform is to access and use a particle candidate list. BaBar's C++ class for lists is called HepAList. You can have a HepAList of lots of types of pointers, but the most important type of HepAList is the HepAList of BtaCandidate*s (pointers to BtaCandidates): HepAList<BtaCandidate>* To begin, #include the required header files in the module's .cc file: #include "Beta/BtaCandidate.h" #include "CLHEP/Alist/AList.h" #include "ProxyDict/Ifd.hh" Like all event information, the Event Store's HepALists of BtaCandidates must be accessed from the module's event function. The syntax is: QExample::event(AbsEvent* anEvent) { ... HepAList<BtaCandidate> *trkList = Ifd<HepAList< BtaCandidate > >::get(anEvent, "ChargedTracks"); The Ifd<T>::get function requires two arguments: The name of the list must be one of the available lists in the Event Store. (In other words, this is not a name that you assign - it is one that you use to access the list.) To see the particle candidate lists available, see the Table of BtaCandidate lists from the Workbook's Event Information section. The most common way to use a HepAList of BtaCandidates is to loop over it. That is, you examine each BtaCandidate on the HepAlist, one at a time. To loop over a HepAList, you need another C++ object, called a HepAListIterator. The first step is to #include the required header files in the module's .cc file: #include "CLHEP/Alist/AIterator.h" The syntax to create the HepAListIterator is: HepAListIterator<BtaCandidate> iterTrk(*trkList); Here, iterTrk is the name you assign to your iterator, and trkList is the (pointer to) the HepAList of BtaCandidates. (By the way, if trkList were just a HepAList, rather than a pointer to a HepAList, then you would not need the dereferencing asterix * above.) To loop over your HepAList<BtaCandidate>*, you need a "placeholder" BtaCandidate*. This BtaCandidate* will take on the identity of each BtaCandidate* in the list, one at a time. BtaCandidate* trk(0); int nTrkHard(0); while ( trk = iterTrk()) { double etrk = trk->energy(); if (etrk > 1.0) nTrkHard++; } There are 2 steps to add a module into the Framework: Every analysis package comes with a file called AppUserBuild.cc. The AppUserBuild object gives the Framework its list of modules. So to put your module on this list, you need to add it in the AppUserBuild file. Unfortunately, the BetaMiniUser package (which is the one we are using) has its own peculiar AppUserBuild method, and this method is different from the method used for all other BaBar packages. So first I will show you the syntax used in most packages, and then I will show you how to add your module in BetaMiniUser. In most packages, there are two lines to add in AppUserBuild.cc. First, you need to #include the module's header file: #include "ThePackage/MyModule.hh" Next, you need to add the module in the AppUserBuild constructor: add( new MyModule( "MyModule", "Skeleton module" ) ); path list module list In BetaMiniUser, AppUserBuild's job has been divided among four AppUserBuild files: The file that you need to modify to add your module to the Framework is AppUserBuildBase.cc. First, #include your module's header file: #include "BetaMiniUser/MyModule.hh" Then, in the AppUserBuildBase constructor, add the module: void AppUserBuildBase(AppUserBuild* theBuild, AppFramework* theFramework) { ... theBuild->add(new MyModule("MyModule", "Skeleton module")); Now you have created your new module and added it to the Framework's list of modules. But you must also add it to the path, or else the Framework will ignore it. To add your module to the analysis path, you need to modify one of the package's tcl files. In the tcl file, you must append your module to the main analysis path, or to one of the smaller paths or sequences that make up the main analysis path. The basic syntax to add (append) your module to a path is: path append path_name MyModule sequence append sequence_name MyModule Exactly where to put your new module depends on the package. The best way to proceed is to examine how another module in the package is appended, and follow the same procedure for your module. For example, the BetaMiniUser package comes with the module MyMiniAnalysis. To find the tcl file where MyMiniAnalysis is appended, use grep to search the BetaMiniUser package: BetaMiniUser > grep MyMiniAnalysis * | grep append MyMiniAnalysis.tcl:path append Everything MyMiniAnalysis You see that MyMiniAnalysis is appended to a path called "Everything" in the tcl file MyMiniAnalysis.tcl. So now you know how to edit MyMiniAnalysis.tcl to append your module as well: path append Everything MyMiniAnalysis path append Everything MyModule This procedure ensures that if MyMiniAnalysis is on the analysis path, then MyModule will be there, too. A word of warning, however: for a given package there is usually more than one possible analysis path. (Different tcl configurations lead to different paths.) So if you run one of the analysis jobs that does NOT use MyMiniAnalysis, then MyModule will not be there, either. Now it is time to put together everything you've learned. The following examples will give you some practice in modifying and writing code in modules. As a first step in becoming familiar with some of the analysis code, you can look at the annotated header and implementation files for the Quicktour's QExample analysis module. This module is used to generate the number-of-tracks-per-event histogram. The comments inserted for these purposes are in blue. Quick links: The easiest way to edit analysis code is to modify an existing module. As a second example, you will modify the QExample module and add a track momentum histogram in addition to its number-of-tracks histogram. Open your QExample file in a text editor, such as emacs. (Don't worry about making mistakes as you edit the code. If you get stuck and can't find or fix your error, you can always re-copy the original code from $BFROOT/www/doc/workbook/examples/ex0/.) When you modify an existing module, the required header files are often already #included. Normally, the first step in adding a histogram would be to #include "HepTuple/Histogram.h". However, QExample.hh already has that #include statement. So you can move onto the next step: declaring the histogram. Declare the new HepHistogram as a private data member in the header file: HepHistogram* _pHisto; The reason that histograms and ntuples are added as class members is because you want the histograms to retain all the information they store over multiple events, so the histogram needs to be of greater scope than the event( ) function. Making them private ensures that other modules cannot interfere with them. Now QExample now has a new HepHistogram pointer, but it has not yet been initialized. Member objects, like the histogram and the histogram manager, should be initialized in the module's beginJob function. beginJob In our case, a histogram manager has already been initialized for QExample's _numTracksHisto. You can use the same histogram manager for _pHisto: _pHisto = manager->histogram("Momentum", 25, 0., 1. ); The first argument (in quotes) is the title of the histogram, the second argument (an integer) is the number of bins, the third and fourth arguments (doubles) are the low and high values of the x-axis. Finally, you are ready to fill the momentum histogram. In your number-of-tracks histogram, all you needed was the length of the tracks list, which is a property of the tracks list as a whole. But momentum is a property of a single track. So you need to iterate (loop) over the BtaCandidates in trkList, and store the momentum for each one: // Loop over track candidates to plot momentum HepAListIterator<BtaCandidate> iterTrk(*trkList); BtaCandidate* trk(0); while ( trk = iterTrk()) { _pHisto->accumulate( trk->p() ); } The original QExample did not use a HepAListIterator, so the HepAListIterator header still needs to be included: CLHEP, Class Library for High Energy Physics, is a package that contains general utility classes. If you are looking for the home of a class or function named HEPsomething, it's probably there. That's all there is to it! Working examples of the modified QExample module can be found in: $BFROOT/www/doc/workbook/examples/ex2/ or viewed here, where the added code is in blue. However, the C++ code must be re-compiled and re-linked (gmake all) before the changes will be incorporated into the executable. You will do this in the next section of the workbook: Compile and Link. This third example will take you step-by-step through the process of creating a new module from the basic module skeleton. This example is independent of the rest of the Workbook, and it is optional. If you choose to try it, you have two options: you can just read through the example, or you can copy the skeleton code and follow along. Either way, this will be a useful reference if someday you need to create your own module. If you want to follow along, then copy the skeleton code for a module to your analysis package: Package> cp -r $BFROOT/www/doc/workbook/examples/ex3/skeleton/* . (For the Quicktour and the QExample module, the analysis package is BetaMiniUser, but this example is intended to be general so that it will work in any analysis package.) Now you are ready to begin: At this point you have learned the basics of what you need to know to modify BaBar code, and maybe even write your own module. You have seen how to manage the two main data-storage objects, HepHistograms and HepTuples, and how to access the main data-storage object, the HepAList of BtaCandidates*. In the next section, you will learn how to compile and link your code, so that your modified or new module will be incorporated into a new BetaMiniApp. Last modified: January 2008
http://www.slac.stanford.edu/BFROOT/www/doc/workbook/editing/editing.html
CC-MAIN-2015-11
refinedweb
2,533
54.73
Bow is an open source library for Typed Functional Programming in Swift. let single: IO<Never, [Void]> = effects.traverse { effect in let action = IO<Never, Action>.var() return binding( continueOn(.global(qos: .userInitiated)), action <- effect, continueOn(.main), |<-IO.invoke { self.send(action.get) }, yield: ()) }^ single.unsafeRunAsync(on: .global(qos: .userInitiated)) { _ in } traversewill get you a single IO describing the effects of your array of IO, and collecting all results in an array. You can also use parTraverseif you'd like the execution of the effects to be run in parallel. sendcould be something like: public func send(_ action: Action) { let effects = self.reducer(&self.value, action) let single: UIO<[Void]> = effects.traverse { effect in let action = UIO<Action>.var() return binding( continueOn(.global(qos: .userInitiated)), action <- effect, continueOn(.main), |<-UIO.invoke { self.send(action.get) }, yield: ()) }^ single.unsafeRunAsync(on: .global(qos: .userInitiated)) { _ in } }. hey channel, I recently started looking at Bow and it looks absolutely awesome. Kudos to the maintainers for putting it together. I have a question about pattern matching data types in Bow. I'd like to apologize in advance for it, as the question is very basic. let's say I have a simple Option<String>. How do I pattern match it? The following code: let myObject: Option<String> = .some("42") switch myObject { case let .some(c): // ... } produces the following compilation error Pattern variable binding cannot appear in an expression Is there a way to use pattern matching with Options and other data types in Bow? Hi Kyrill! Thanks for your nice words, it helps us be motivated to continue moving this project forward! Regarding your question, it is not possible to use pattern matching on the types provided in Bow. The reason is the emulation of Higher Kinded Types we do; we need our types to be classes in order to be able to use them as HKTs. Only enums can be used in pattern matching. Nevertheless, all types provide a method fold, where you can pass closures to handle each of the cases: let myObject: Option<String> = .some("42") myObject.fold( { /* Handle none case */ }, { c in /* Handle some case */ } ) This is possible with any other data type in Bow, like Either: let myObject: Either<Int, String> = .right("42") myObject.fold( { left in /* Handle left case */ }, { right in /* Handle right case */ } ) You can read more about HKTs and how they are emulated in Bow here, and feel free to ask any questions you may have! If you decide to use our recently released library Bow Lite, we got rid of the emulation of HKTs. With this, we lost a lot of abstraction power, but one of the things we got back is pattern matching. Therefore, if you use Bow Lite, you can still use fold on all types, but you can also pattern match: import BowLite let myObject: Either<Int, String> = .right("42") switch myObject { case let .left(l): /* Handle left case */ case let .right(r): /* Handle right case */ } I hope this clarifies your question! Hi @dsabanin! Thanks for your kind words! We made Bow Lite in order to ease the learning process for newcomers to FP. It takes a while until you fully grasp HKTs, and since they are not native to the language (yet), we have observed many people have issues understanding and using the library. It is also a big commitment to add HKT emulation into your project, especially if you are working with others and they are not very familiar with FP. Having said that, of course, Bow is much more versatile thanks to HKTs. You can see how much we could save being able to use more powerful abstractions by checking the implementations we had to make in Bow Lite, where there is plenty of duplication that cannot be abstracted out. We were even able to implement Bow Arch on top of the HKT emulation, which lets us replace a few types and get a whole new architecture with different properties. My recommendation is that, if you and your team are familiar with FP and HKTs, go ahead with Bow, and only choose Bow Lite if you want a lightweight library to practice FP without the heavy features. Regarding benchmarking, I unfortunately haven't been able to do anything about it, but it is definitely something interesting to measure. I'd be interested in measuring which parts of the library are less performant and improving them. Hacktoberfest is coming, so if you are interested in doing this type of project, you are really welcome!
https://gitter.im/bowswift/bow?at=5e98143b85b01628f050e366
CC-MAIN-2021-10
refinedweb
756
65.01
I am having a difficult time writing what started out as a simple vending machine program. As I go on though, it seems that I'm overcomplicating it and it is getting messy, so I have resorted to the forum before I really get off the path. Any advice on how to clean it up? I'm going to list a few of the problems I am having with the code below: 1. The user is supposed to enter his/her money and the program is to read each value separately (Do-While loop) and keep a running total of the money entered. I can't seem to get the right code format to do this. I was trying to do this with the variable total and so that is why the total exists in the switch statement, but it did not work. 2. In the second Do-While statement, is there a way to kick back an error when their are insufficient funds to purchase an item? Instead of getting a negative change return. And any other advice on how I can clean up this code would be great! Thanks a lot and sorry about the mess. Code : package practice; import java.util.Scanner; public class Practice { public static void main(String[] args) { double count, total; int item; //Display Available Options To Customer Scanner input = new Scanner(System.in); System.out.println("*VENDING MACHINE*"); System.out.println("1. Snickers"); System.out.println("2. 100 Grand"); System.out.println("3. Pay Day"); System.out.println("4. Milky Way"); System.out.println("5. Kit Kat\n"); System.out.println("We accept coins, $1 and $5 bills."); //User Input do{ count = input.nextDouble(); total = count; System.out.println("Please insert your money now. (0 To Exit)"); System.out.printf("Amount Entered: $%.2f\n" , count); if(count >= 0.01 && count <=5){ //System.out.printf("Amount Entered: $%.2f\n" , count); total = count; } else if(count > 5) System.out.println("Please Enter Up To $5.00"); else break; } while(count!=0 || count <= 5); //Loop That Enables User To Make Multiple Purchases And Record Running Total do{ System.out.println("Enter item number (0 to exit):\t"); item = input.nextInt(); switch(item){ case 1: total -= 1.00; System.out.printf("%s%.2f\n","Amount remaining: $" , total); break; case 2: count -= 1.00; System.out.printf("Amount remaining: $%.2f\n" , count); break; case 3: count -= 0.50; System.out.printf("Amount remaining: $%.2f\n" , count); break; case 4: count -= 1.25; System.out.printf("Amount remaining: $%.2f\n" , count); break; case 5: count -= 0.75; System.out.printf("Amount remaining: $%.2f\n" , count); break; case 0: break; } } while(item!=0 && count > 0); System.out.println("Please make another selection, or press 0"); System.out.printf("Change: $%.2f\n" , count); System.out.println("Have a nice day."); } }
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35338-vending-machine-printingthethread.html
CC-MAIN-2018-09
refinedweb
469
62.04
IRC log of ws-addr on 2007-03-19 Timestamps are in UTC. 19:53:48 [RRSAgent] RRSAgent has joined #ws-addr 19:53:49 [RRSAgent] logging to 19:54:05 [bob] zakim, this will be ws_addrwg 19:54:05 [Zakim] ok, bob; I see WS_AddrWG()4:00PM scheduled to start in 6 minutes 19:54:35 [bob] meeting: Web Services Addressing WG Teleconference 19:54:50 [bob] rrsagent, make logs public 19:55:01 [bob] chair: Bob Freund 19:56:32 [bob] agenda: 19:57:49 [Zakim] WS_AddrWG()4:00PM has now started 19:57:56 [Zakim] +Gilbert_Pilz 19:58:04 [gpilz] gpilz has joined #ws-addr 19:58:32 [Zakim] +Bob_Freund 20:00:45 [Rama] Rama has joined #ws-addr 20:00:49 [Zakim] +Mark_Little 20:01:29 [monica] monica has joined #ws-addr 20:02:04 [Zakim] +??P7 20:02:15 [MrGoodner] MrGoodner has joined #ws-addr 20:02:20 [Zakim] +Tom_Rutt 20:02:25 [Zakim] +m2 20:02:27 [anish] anish has joined #ws-addr 20:02:37 [bob] zakim ??P7 is MrGoodner 20:02:42 [TonyR] TonyR has joined #ws-addr 20:03:36 [TomR] TomR has joined #ws-addr 20:03:50 [monica] Monica Martin, Sun 20:04:20 [Zakim] +Anish_Karmarkar 20:04:38 [yinleng] yinleng has joined #ws-addr 20:05:40 [Zakim] +??P10 20:05:59 [yinleng] zakim, ??P10 is me 20:05:59 [Zakim] +yinleng; got it 20:06:05 [bob] zakim ??P10 is yinleng 20:06:40 [Zakim] +Paco:Francisco_Curbera 20:07:00 [Zakim] +Paul_Knight 20:07:15 [Zakim] +[Microsoft] 20:07:29 [PaulKnight] PaulKnight has joined #ws-addr 20:07:50 [bob] zakim, [Microsoft] is ram 20:07:50 [Zakim] +ram; got it 20:09:09 [bob] scribe: TomR 20:09:24 [TomR] Topic: Agenda 20:09:29 [Ram] Ram has joined #ws-addr 20:09:55 [TomR] Resolution: New issue concerning version of policy namespace accepted 20:10:14 [TomR] Topic: Minutes from last meeting 20:10:24 [TomR] Resolution: Minutes of last meeting accepted 20:10:32 [Paco] Paco has joined #ws-addr 20:10:35 [TomR] Topic: LC2 Issue 1 ws policy comments 20:10:55 [TomR] Proposals for E, with New proposal F. 20:11:44 [TomR] Proposal E: Parameters 20:12:08 [TomR] Proposal F: This new alternative F takes the approach of nested support 20:12:08 [TomR] > assertions, however 20:12:08 [TomR] > non presence of a nested policy assertion now implies that the 20:12:08 [TomR] > associated response mode is not supported. 20:13:41 [bob] 20:14:00 [gpilz] q+ 20:15:18 [Zakim] +David_Hull 20:15:34 [Zakim] -Mark_Little 20:16:15 [bob] ack gpil 20:16:44 [anish] q+ to ask about composibility with rm assertion 20:17:11 [TomR] Gil: my concern is with nonAnonymous assertion. Non presense of non anonmous means non anymous supported. Due to wide open definition of non anonymous 20:18:57 [dhull] dhull has joined #ws-addr 20:20:00 [bob] ack anish 20:20:00 [Zakim] anish, you wanted to ask about composibility with rm assertion 20:20:25 [TomR] The policy expression can compose with ws make connection, by also asserting nonAnonymous support 20:23:44 [TomR] Anish: rm is optional, use anon back channel. Need two alternatives, anon another non anon as well as rm 20:24:14 [TomR] We can do examples for all of these cases. 20:25:42 [TomR] Composition with RX should be shown to everyone's happyness. 20:29:53 [TomR] Gil: Reply to anon, fault to is non anonymous. These assertions cover responses as a Bob: can we enumerate use cases for an email discussion. 20:30:01 [MrGoodner] q+ 20:30:13 [MrGoodner] 20:30:24 [bob] ack mrg 20:34:01 [TomR] Marc G: we have been talking about another alternative with nested assertions, (alternative G). This mail is differnent says addressing without nested policy says only ws addressing is supported. It retains required language, but they cannot be used on same alternative. 20:34:16 [TomR] Tom what is difference between new G and the old alternative A. 20:34:27 [TomR] Bob: I did not see your mail Marc. 20:36:10 [gpilz] q+ 20:36:32 [bob] ack gpil 20:36:34 [Zakim] +??P0 20:36:38 [TomR] General agreement on use of nested policy. 20:36:43 [TonyR] zakim, ??p0 is me 20:36:43 [Zakim] +TonyR; got it 20:36:44 [MrGoodner] q+ 20:37:09 [TomR] Gil Client can look at parameters to determine if it can interact with a server 20:37:14 [Zakim] -Gilbert_Pilz 20:37:19 [monica] Need to also ensure we understand how this is handled in the default intersection algorithm. Consider how such assumptions may have an influence with other composable specifications. Particularly if you wish to leverage was is defined in WS-Policy. 20:37:23 [TomR] Paco: I agree with that. 20:37:33 [monica] c/was/what 20:37:41 [Zakim] +Gilbert_Pilz 20:37:44 [MrGoodner] q+ 20:38:45 [MrGoodner] q+ 20:39:04 [bob] ack mrg 20:39:24 [TomR] Tom: we need to understand the requirements before we can decide on parameters of nested policy assertions. 20:39:37 [anish] i don't understand the schema ns issue. i.e., why is it an issue 20:39:47 [TomR] Marc G: we worked on this new proposal G, since we find value in doing the intersection matching. 20:40:04 [gpilz] anish - because then we have to argue about what version of WS-Policy we reference 20:40:21 [plh] plh has joined #ws-addr 20:40:23 [gpilz] s/we have to/we are likely to/ 20:40:25 [Zakim] +Plh 20:40:38 [gpilz] q+ 20:40:48 [gpilz] q- 20:41:22 [MrGoodner] q+ 20:42:10 [TomR] Gil: if we used nested parameters we wuld not need the wsp namespace in our schema. 20:42:38 [TomR] Anish: this metadata spec in w3c has w3c restrictions. anyway. 20:42:38 [bob] ack mrg 20:43:15 [TomR] Marc G: I agree with anish. and the fix to the new issue is to change to CR reference. 20:44:22 [TomR] Bob: can we agree that alternative e is unacceptable, since intersection mechanism. 20:44:29 [TomR] No objection: 20:44:49 [TomR] Bob: can Marc explain the differences. 20:45:57 [TomR] Marc : G stays with requirements semantics. F has empty addressing meaing no responses, G empty means addressing is fully supported. for mixed mode F has both, for G use unqualivied addressing assertion. 20:46:39 [TomR] Marc G: both compose with make connectib 20:46:53 [TomR] Tony: what about no responses. 20:47:45 [TomR] Marc G: G does not allow saying no responses. But non is allowed in both 20:47:59 [bob] s/non/none 20:48:50 [TomR] Tom: how important is use case for no responses supported. 20:49:05 [TomR] Bob: is more time required. 20:49:25 [TomR] general agreement to discuss both over email. 20:50:13 [TomR] Bob: Is one week ok for next meeting? 20:50:48 [Zakim] +Paul_Knight.a 20:51:03 [TomR] Bob: April 2 for next meeting 20:51:10 [TomR] Agreed, next meeting April 2. 20:51:23 [plh] regrets for next meeting 20:52:15 [TomR] Topic: New issue on namespace 20:52:27 [TomR] Marc: it is a simple bug fix. 20:53:29 [TomR] Bob: does everyone agree to resolve by plugging in the new namespace from CR version of WS Policy. 20:53:53 [TomR] Resolution: New issue resolved by fixing to CR version of WS Policy. 20:54:28 [TomR] Agreed to have use case discussion for proposals F or G on email. 20:54:44 [monica] 20:56:23 [TomR] Bob: Policy interop in May in Ottawa. IBM can test metadata interop but we need at least one other company. 20:57:03 [Zakim] -Paco:Francisco_Curbera 20:57:05 [TomR] Bob: Other companies should come forward, so we can meet our interop requirement of 2. 20:57:27 [Zakim] -Gilbert_Pilz 20:57:30 [Zakim] -Plh 20:57:31 [Zakim] -Tom_Rutt 20:57:32 [Zakim] -m2 20:57:33 [Zakim] -Paul_Knight.a 20:57:34 [Zakim] -ram 20:57:36 [Zakim] -??P7 20:57:37 [Zakim] -yinleng 20:57:38 [Zakim] -David_Hull 20:57:39 [Zakim] -Anish_Karmarkar 20:57:40 [yinleng] yinleng has left #ws-addr 20:57:41 [TomR] TomR has left #ws-addr 20:57:41 [Zakim] -Bob_Freund 20:57:48 [bob] rrsagent, generate minutes 20:57:48 [RRSAgent] I have made the request to generate bob 20:57:57 [TonyR] TonyR has left #ws-addr 20:58:02 [Zakim] -TonyR 20:58:11 [Zakim] +[IPcaller] 20:58:44 [Katy] Katy has joined #ws-addr 21:00:04 [David_Illsley] David_Illsley has joined #ws-addr 21:00:33 [Zakim] +David_Illsley 21:05:33 [David_Illsley] zakim, who is here? 21:05:34 [Zakim] On the phone I see Paul_Knight, [IPcaller], David_Illsley 21:05:38 [Zakim] On IRC I see David_Illsley, Katy, plh, dhull, Ram, MrGoodner, Rama, RRSAgent, Zakim, bob 21:06:24 [bob] david, sorry about our summertime switch 21:06:42 [Zakim] -Paul_Knight 21:07:44 [David_Illsley] so it's all over? 21:07:54 [bob] yes, we wrapped up. 21:07:58 [David_Illsley] :-) 21:08:12 [David_Illsley] when's the next call? 21:08:15 [bob] Next week decide between alternative F and G 21:08:21 [bob] Next call is April 2 21:08:31 [bob] s/week/call 21:08:34 [Katy] so now I can go and watch tele! 21:08:38 [David_Illsley] ok, thanks, speak to you then 21:08:39 [Katy] :o) 21:08:42 [Zakim] -David_Illsley 21:08:44 [bob] sure, what's on? 21:08:53 [Katy] I'll take a look - not sure 21:08:54 [David_Illsley] David_Illsley has left #ws-addr 21:08:58 [Katy] sorry we missed the call 21:09:00 [bob] noting good here 21:09:06 [bob] NP 21:09:26 [Katy] bye for now - we are back to summertime next week 21:09:29 [Katy] thanks 21:09:33 [bob] bye 21:09:48 [bob] bob has left #ws-addr 21:10:32 [Zakim] -[IPcaller] 21:10:33 [Zakim] WS_AddrWG()4:00PM has ended 21:10:34 [Zakim] Attendees were Gilbert_Pilz, Bob_Freund, Mark_Little, Tom_Rutt, m2, Anish_Karmarkar, yinleng, Paco:Francisco_Curbera, Paul_Knight, ram, David_Hull, TonyR, Plh, [IPcaller], 21:10:36 [Zakim] ... David_Illsley 21:13:09 [Rama] Rama has left #ws-addr
http://www.w3.org/2007/03/19-ws-addr-irc
CC-MAIN-2016-40
refinedweb
1,767
66.88
The System.Threading.Channels namespace contains types that you can use to implement a producer-consumer scenario, which speeds up processing by allowing producers and consumers to perform their tasks concurrently. This namespace contains a set of synchronization types that can be used for asynchronous data exchange between producers and consumers. This article discusses how we can work with the System.Threading.Channels library in .Net Core. Dataflow blocks vs. channels The System.Threading.Tasks.Dataflow library encapsulates both storage and processing, and it is focused primarily on pipelining. By contrast, the System.Threading.Tasks.Channels library is focused primarily on storage. Channels are much faster than Dataflow blocks but they are specific to producer-consumer scenarios. That means they don’t support some of the control flow features that you get with Dataflow blocks. Why use System.Threading.Channels? To continue reading this article register now Learn More Existing Users Sign In
https://www.infoworld.com/article/3445156/how-to-use-systemthreadingchannels-in-net-core.html
CC-MAIN-2019-47
refinedweb
153
52.26
Stream.CanWrite Property When overridden in a derived class, gets a value indicating whether the current stream supports writing. [Visual Basic] Public MustOverride ReadOnly Property CanWrite As Boolean [C#] public abstract bool CanWrite {get;} [C++] public: __property virtual bool get_CanWrite() = 0; [JScript] public abstract function get CanWrite() : Boolean; Property Value true if the stream supports writing; otherwise, false.. If a class derived from Stream does not support writing, a call to Write, BeginWrite, or WriteByte throws a NotSupportedException. If the stream is closed, this property returns false. Example [Visual Basic, C#, C++] The following is an example of using the CanWrite property. [Visual Basic]. [C#]. [C++] #using <mscorlib.dll> using namespace System; using namespace System::IO; int main() { FileStream* fs = new FileStream(S"MyFile.txt", FileMode::OpenOrCreate, FileAccess::Write); if (fs->CanRead && fs->CanWrite) { Console::WriteLine(S"MyFile.txt can be both written to and read from."); } else if (fs->CanWrite) { Console::WriteLine(S"MyFile.txt is writable."); } } //This code outputs "MyFile.txt is writable." //To get the output message "MyFile.txt can be both written to and read from.", //change the FileAccess parameter to ReadWrite in the FileStream constructor.
http://msdn.microsoft.com/en-us/library/a83yafhb(v=vs.71)
CC-MAIN-2014-52
refinedweb
190
52.26
#include <UT_InfoTree.h> This is a tree in which each node can have as many children as it likes, and at each node we can store data in the form of strings (which are stored in arrays/rows, in case we want to display each row in a spreadsheet table). Definition at line). Adds a child branch with "Property" and "Value" headings.. Create a new empty row. HOW TO USE: You may add up to 4 properties to a single row at once using these methods.*. Definition at line 215 of file UT_InfoTree.h. Definition at line 230 of file UT_InfoTree.h. Definition at line 247 of file UT_InfoTree.h. Definition at line 266 of file UT_InfoTree.h. Clears all child branches. Doesn't affect our properties. Clears all our row and column information. Doesn't affect child branches. branch's "type" idwentifier. Return this branch's name. Get a pointer to my parent. Get a pointer to the very top node in the tree. In case we decide to change the "type" of this branch. In case we decide to change the name of this branch.
https://www.sidefx.com/docs/hdk/class_u_t___info_tree.html
CC-MAIN-2020-34
refinedweb
187
87.42
Message schema generation Discussion in 'XML' started by WideBoy, DTD generation from XML SchemaRob Exley, Jan 26, 2004, in forum: XML - Replies: - 2 - Views: - 459 - Rob Exley - Jan 26, 2004 [XML Schema] Including a schema document with absent target namespace to a schema with specified tarStanimir Stamenkov, Apr 22, 2005, in forum: XML - Replies: - 3 - Views: - 1,264 - Stanimir Stamenkov - Apr 25, 2005 w3c Schema naming patterns and template-based schema generationSteve Jorgensen, Aug 9, 2005, in forum: XML - Replies: - 0 - Views: - 559 - Steve Jorgensen - Aug 9, 2005 HTML Generation (Next Generation CGI)John W. Long, Nov 22, 2003, in forum: Ruby - Replies: - 4 - Views: - 341 - John W. Long - Nov 24, 2003
http://www.thecodingforums.com/threads/message-schema-generation.303814/
CC-MAIN-2014-41
refinedweb
112
51.55
What problem does modularity solve With the increasing size of JavaScript projects, teamwork is inevitable. In order to better manage and test the code, the concept of modularity is gradually introduced into the front end. Modularization can reduce the cost of collaborative development and reduce the amount of code. At the same time, it is also the basis of "high cohesion and low coupling". Modularization mainly solves two problems: - name conflict - File dependency: for example, jquery needs to be introduced into bootstrap, and the location of jquery file must be introduced before bootstrap.js. How did people in ancient times solve modularization Before various modularization specifications came out, people used anonymous closure functions to solve the problem of modularization. var num0 = 2; // Notice the semicolon here (function () { var num1 = 3 var num2 = 5 var add = function () { return num0 + num1 + num2 } console.log(add()) // 10 })() // console.log(num1) // num1 is not defined The advantage of this is that you can use global and local variables inside functions without worrying about local variables polluting global variables. This way of wrapping anonymous functions in parentheses is also called immediate execution function (IIFE). All function internal code is in a closure. It provides privacy and state throughout the application lifecycle. CommonJS specification CommonJS treats each file as a module. Variables in each module are private variables by default. Define the external output interface of the current module through module.exports, and load the module through require. (1) Usage: circle.js const { PI } = Math exports.area = (r) => PI * r ** 2 exports.circumference = (r) => 2 * PI * r app.js const circle = require('./circle.js') console.log(circle.area(4)) (2) Principle: in the process of compiling js files, node will use the following function wrapper to wrap it Module wrapper: (function (exports, require, module, __filename, __dirname) { const circle = require('./circle.js') console.log(circle.area(4)) }) This is why these variables that are not explicitly defined can be used in the node environment. Among them__ filename and__ dirname is passed in after being analyzed in the process of finding the file path. The module variable is the module object itself, and exports is an empty object initialized in the module constructor. For more details, please refer to node modules You can refer to when to use exports and when to use module.exports exports shutcut (3) Advantages vs. disadvantages CommonJS can avoid global namespace pollution and clarify the dependencies between codes. However, the module loading of CommonJS is synchronous. If a module references three other modules, the three modules need to be fully loaded before the module can run. This is not a problem on the server side (node), but it is not so efficient on the browser side. After all, reading network files takes more time than local files. AMD AMD The full name is Asynchronous Module Definition. The module is loaded asynchronously. The module loading does not affect the execution of the following statements, and the callback function is used to run the code after the module is loaded. (1) Mode of use Define a module of myModule, which depends on jQuery module: define('myModule', ['jQuery'], function ($) { // $is the output module of jQuery $('#app').text('Hello World') }) The first parameter represents the module id, which is an optional parameter, and the second parameter represents the module dependency, which is also an optional parameter. Using the myModule module: require(['myModule', function (myModule) {}]) requirejs It is an implementation of AMD specification. For detailed usage, you can view the official documents. CMD CMD specification comes from seajs As a whole, CMD is very similar to AMD. The differences between AMD and CMD can be seen from the similarities and differences with RequireJS]() (1) Usage: // CMD define(function(require, exports, module) { var a = require('./a') a.doSomething() // ... var b = require('./b') // Dependence can be written nearby b.doSomething() // ... }) CMD advocates dependency proximity, which can be written into any line of your code. AMD is a dependency front. Before parsing and executing the current module, the module must indicate the module on which the current module depends. UMD UMD (Universal Module Definition) is not a specification, but a more general JS module solution combining AMD and CommonJS. This is often seen when packaging modules: output: { path: path.resolve(__dirname, '../dist'), filename: 'vue.js', library: 'Vue', libraryTarget: 'umd' }, Indicates that the packaged module is a umd module, which can run on both the server side (node) and the browser side. Let's look at the vue packaged source code vue.js (function (global, factory) { typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : typeof define === 'function' && define.amd ? define(factory) : (global.Vue = factory()); }(this, (function () { 'use strict'; // ... }))) The code is translated as: - First, judge whether it is a node environment: exports is an object and the module exists. - If it is a node environment, use module.exports = factory() to export Vue (refer to it through require('vue ')). - If it is not a node environment, judge whether AMD is supported: define is function and define.amd exists. - If AMD is supported, define the module (referenced through require(['vue')). - Otherwise, vue will be directly bound to the global variable (referenced by window.vue). ES6 Finally, in the era of ES6, JS supports modularization from the language level, and supports native es modules from node8.5. However, there are two limitations: - Module name (file name) must be mjs - Add -- experimental modules to the startup parameters If there is a.mjs, it is as follows: export default { name: 'Jack' } In b.mjs, you can refer to: import a from './a.mjs' console.log(a) // { name: 'Jack' } Chrome 61 also supports JS module from the beginning. You only need to add type="module" to the script attribute. <script type="module" src="module.js"></script> <script type="module"> import { sayHello } from './main.js' sayHello() </script> // main.js export function sayHello () { console.info('Hello World') } ES6 module details ES6 module is mainly composed of two commands: export and import. (1) export command // Output variable export let num = 123 export const name = 'Leo' // Output a set of variables let num = 123 let name = 'Leo' export { num, name } // Output function export function foo (x, y) { return x ** y } // Use alias function a () {} function b () {} export { a as name, b as value } // When referencing, reference by alias import { name, value } from '..' It should be noted that the export command can only output external interfaces, and the following output methods are wrong: // report errors export 1 var m = 1 export m function f () {} export f // Correct writing export var m = 1 var m = 1 export { m } export { m as n} export function f () {} function f () {} export { f } The value output by export is dynamically bound, which is different from CommonJS. CommonJS outputs the value cache and there is no dynamic update. How to delete the node cache? let config setInterval(() => { delete require.cache[require.resolve('./config')] config = require('./config') console.log(config) }, 3000) The export command must be at the top level of the module. If it is within the scope of the block level, an error will be reported. (2) import command // The import variable is read-only import { a } from './a.js' a = 33 // Syntax Error : 'a' is read-only // However, its attribute value can be modified a.name = 'Jack' // The import command can promote variables foo() import { foo } from './a.js' // import is a static execution, and expressions and variables cannot be used import { a + b } from './a.ls' // report errors // report errors let module = 'my_module' import { foo } from module // report errors if (x === 1) { import { foo } from 'module1' } else { import { foo } from 'module2' } // Load the entire module import * as utils from './utils.js' (3) export default // a.js export default function () { console.log('Hello World') } // Citation a import a from 'a.js' a() // b.js export default funtion foo () { console.log('Hello World') } // Reference b import foo from 'b.js' foo() The difference between export default and the module exported by export is whether to package variables with {}. function add (x, y) { return x + y } export { add as default } // quote import { default as foo } from 'a.js' // correct export var a = 1 // correct var a = 1; export default a // error export default var a = 1 (4) Compound writing of export and import export { foo, bar } from 'a.js' // amount to import { foo, bar } from 'a.js' export { foo, bar }
https://programmer.help/blogs/front-end-modularization-commonjs-amd-cmd-es6.html
CC-MAIN-2021-49
refinedweb
1,392
57.57
> On Thu, Oct 24, 2002 at 09:24:30AM -0700, Lee, Jung-Ik wrote: > > > > ACPI is a must for PHP enumeration/configuration and > resource management for > > DIG64/ACPI compliant platforms. ACPI method is optional for > controller/slot > > operations(event management). intcphp driver conforms to > ACPI resource > > management. I didn't enable ACPI based event management for > PHP since it is > > optional and provides less feature than controller based > solution - LED, > > MRL, Bus/Adapter speeds/capabilities, etc. > > But as the code you sent me isn't enabled for ACPI, I don't see how it > ties into the existing Compaq based PCI driver. How does this work? > Are you trying to have 1 driver handle both the PCI and ACPI > functionality? Let me try again :) **resource management** Non-ACPI platforms uses $HRT/EBDA, pcibios_*(), SMBIOS, etc. for slot enumeration/configuration. DIG64/ACPI, and SHPC requires ACPI for this. IPF platforms only have ACPI _CRS, _PRT, _HPP, _BBN, _STA, _ADR, _SUN, etc on the namespace for PHP, and we have to use them. (as a side note, this functionality is common for other hotplug-* as mentioned in first mail. No API will be common for hotplug-everything, but functionality is common and has not to be duplicated) **event management in terms of controller/slot operations ** ACPI provides only _EJ0, _PS?, _STA, etc for slot operations but these are not mandatory. That means, we can use either ACPI method or controller driver. intcphp driver has not enabled ACPI method based solution but uses controller driver. intcphp driver is also capable of performing ACPI method based solution since it works on ACPI namespace. This is why acpiphp and intcphp could be sharing resource management and event management. > > > > As an example from your patch: > > > > > > +enum php_ctlr_type phphpc_get_ctlr_type() > > > +{ > > > + return PCI; > > > +} > > > > > > It never returns any other type, so the ACPI or ISA > sections of the > > > driver will never get called. Or am I missing something? > > > > > This is because this release only addresses PCI version > only. I can take > > this out too, with ease :) > > As this means there is a lot of "dead code" in the driver, you should > take all of it out. Well, I removed many dead codes from the base driver. This is not dead code but needed to support other types. However, if it's not acceptable, I'll remove them. > Hm, directly trying to sneak something by me, by just renaming a > #define. Not nice. :( No intention like that at all. Please regard this as bright side of engineers simple version control w/ commonly used #ifdefs :) Anyway, they'll be removed per your requirement. > > > We need this driver as it's the only solution for DIG64 > compliant IPF > > platforms. > > Can we work in parallel ? - Make this driver available and > we all work > > together for enhanced pci_hotplug core. > > No, does your driver _have_ to be in the 2.5 kernel tree right now for > some reason? Can't anyone who _has_ to have PCI Hotplug support for > these types of machines (and I don't think there are any > types of these > machines currently shipping, right? And they wouldn't be > running a 2.5 > kernel on them, right?) just grab your driver if they really need it? It's not my area to answer. It's product marketting. I do not want to skrew marketting :) We want the driver available before they compalint it :| BTW, do we know how far is 2.6, even roughly ? > > > Also I'll talk with cpqphp owner over the integration of the two. > > You're talking to him :) Ah. I now know why you are the one who talks about this :) OK, then, I'll send you a patch against your cpqphp driver asap. Thanks, J.I.Received on Thu Oct 24 10:40:16 2002 This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:10 EST
http://www.gelato.unsw.edu.au/archives/linux-ia64/0210/4032.html
CC-MAIN-2016-18
refinedweb
644
75.1
Advertising --- Comment #8 from MZMcBride <[email protected]> --- (In reply to comment #7) > The first example was > >> * Posting to discussion pages of articles under a common category. > > What Tilman is asking is whether MassMessage would always post to discussion > pages of articles within a category, never to the article itself. Oh, sorry about that. Posting to article talk pages was never considered as part of the design or architecture of MassMessage. You'd have to play around with it or read the code to be sure, but I imagine it won't work well without a bit of additional coding. As I recall, if you try to post to the article namespace, it'll just get logged as a skip currently. On the other hand, I think talk pages are often categorized. You could simply use those categories instead. > For most cases MM always wants to past in the Talk pages, isn't it. The only > exceptions I can think of are village pumps and the like, where the page > itself is dedicated to discuss. There are other quirks. For example, Meta-Wiki actually has a few discussion boards in the main (article) namespace. But generally speaking talk pages are for talk, yes. :-) -- You are receiving this mail because: You are the assignee for the bug. You are on the CC list for the bug. _______________________________________________ Wikibugs-l mailing list [email protected]
https://www.mail-archive.com/[email protected]/msg326477.html
CC-MAIN-2018-13
refinedweb
235
66.64
DS18B20 Temperature Module: The temperature module is used to sense the temperature at a much accurate level than the other temperature sensors. The temperature module consists of a one wire temperature sensor (DS18B20) also known as (Dallas). This module can be used with any if the microcontroller like raspberry pi and Arduino but we are using the Arduino. The temperature module has wide range accuracy. It is accurate up to ± 0.5 ᴼC over the range of -10 ᴼC to +85 ᴼC. You can locate these sensors at a much larger distance from your Arduino, almost 100 meter away from your Arduino. If you will use this sensor alone, then you will have to use a 5 K ohm pull up resistor but in our module you do not need to use it. It has built in pull up resistor in it.It has a resolution of 12 bit. It gets powers up in a low power idle state. To measure the temperature and to do the A to D conversion, it should get a convert T [ 44h ] command from the master. After the conversion, the data will be stored in the 2-byte temperature register and the sensor will return to idle state. Page Contents Pin out of DS18B20 Temperature Module The DS18B20 temperature module has only three pins so it is very easy to use but it highly accurate as compared to other sensors. The three pins of the Temperature module are - Ground : This will be connected to the ground of Arduino - VCC : This will be connected to the 5v of Arduino - Signal : This will be connected to any digital pin of Arduino. Connection of DS18B20 Temperature Module with arduino The connection scheme is very easier. We have only three pins to connect. Connect the temperature module to the Arduino as shown in the figure. Installing the Library for DS18B20 Temperature Module code Before uploading the code you will have to install the library of the DS18B20 temperature sensor. Download the library from here and place it in your Arduino library folder. Copy the file from the ZIP folder and place it in the library folder. After placing the files in the library folder, the Arduino library folder should have new folders named onewire and Dallas temperature containing. After that copy the following code in the Arduino IDE and upload the code. Code of DS18B20 Temperature Module with arduino // This code is for the DS1820 Temperature module. // Do not forget to install library before running the code. #include < OneWire.h > // Including the library of DS1820 Temperature module #include < DallasTemperature.h > // Including the library of DS1820 Temperature module #define ONE_WIRE_BUS 2 // Initializing the Arduino pin 2 for temperature module OneWire ourWire(ONE_WIRE_BUS); // Declaring a variable named our wire DallasTemperature sensors ( &ourWire ) ; // Asking the Dallas temperature library to use the one wire library void setup ( ) // Void setup runs only one. So the code written in it will run only one time { Delay ( 1000 ) ; // Wait for one second Serial.begin ( 9600 ) ; // Setting the baud rate at 9600 Serial.println ( " Microcontrollerlab.com : This is the test code " ) ; Serial.println ( " Temperature Sensor : DS18B20 " ) ; Delay ( 1000 ) ; // Wait for one second sensors.begin ( ) ; // The sensor will start working here void loop ( ) // Void loop runs repeatedly. So the code written in it will run repeatedly { Serial.println ( ) ; // This will give some space in the output Serial.print ( " Waiting for the temperature module to give value ... " ) ; // This will print “Waiting for the temperature module to give value … ” on the display. sensors.requestTemperatures ( ) ; // Sending the commands to get the temperature values from sensor Serial.println ( " DONE " ) ; // This will print “ done “ on the display Serial.print ( " Temperature in degree C is : " ) ; // This will print " Temperature in degree C is :" on the display Serial.print ( sensors.getTempCByIndex ( 0 ) ) ; // This will show the temperature in degree C on the display Serial.println ( " Degrees C " ) ; // This will print " Degrees C " on the display Serial.print ( " Temperature in degree Fahrenheit is : " ) ; // This will print " Temperature in degree Fahrenheit is : " on display Serial.print ( sensors.getTempFByIndex ( 0 ) ) ; // This will show the temperature in Fahrenheit on display Serial.println ( " Degrees F " ) ; // This will print " Degrees F " on the display Delay ( 5000 ) ; // Waiting for 5 seconds. } If you sensor is working alright then the output should look like this microcontrollerslab.com: This is the test code Temperature Sensor: DS18B20 Waiting for the temperature module to give value ... DONE Temperature in degree C is : 19.12 Degrees C Temperature in degree Fahrenheit is : 60.22 Degrees F While if you are seeing this on the output then there is a connection problem or sensor problem. microcontrollerslab.com: This is the test code Temperature Sensor: DS18B20 Waiting for the temperature module to give value ... DONE Temperature in degree C is : -127.00 Degrees C Temperature in degree Fahrenheit is : -196.00 Degrees F you can also add feature of gsm interfacing in this article to make it wireless temperature sensor. This is all about using DS18B20 Temperature sensor and its code. If you have any issue after reading this article comment on this post with your issues. We will try our best to resolve your issue.
http://microcontrollerslab.com/ds18b20-temperature-module-interfacing-arduino/
CC-MAIN-2017-04
refinedweb
858
56.45
Surface GeomFillSurface Description Surface GeomFillSurface creates a parametric surface from two, three, or four boundary edges, trying to create a smooth transition between them. Left: edges that are used to generate a surface with the GeomFillSurface tool, 4 connected edges, 3 connected edges, and 2 disconnected edges. Right: resulting surface from using the 4, 3, and 2 edges, respectively. Usage - Press the Fill boundary curves button. - Select edges in the 3D view. The edges must connect together so that they formed a closed profile. - Press OK. Note: once created, it is not possible to apply additional constraints to the created surface. Options Fill type: Stretch, Coons, or Curved. Properties A Surface GeomFillSurface ( Surface::GeomFillSurface class) is derived from the basic Part Feature ( Part::Feature class, through the Part::Spline subclass), therefore it shares all the latter's properties. In addition to the properties described in Part Feature, the Surface Filling has the following properties in the property editor. Data Base - DataFill Type ( Enumeration): the applied filling algorithm; Stretch, the style with the flattest patches; Coons, a rounded style with less depth than Curved; Curved, the style with the most rounded patches. - DataBoundary List ( LinkSubList): a list of edges that will be used to build the surface. - Data (hidden)Reversed List ( BoolList): View Base - View. A surface that twists on itself will probably have self-intersections, and thus will be an invalid Shape; this can be verified with Part CheckGeometry. For example, if two curves have the points curve1 = [a, b, c, d] curve2 = [e, f, g] and the resulting surface after using GeomFillSurface or Sections is a twisted surface, you may create a third curve that is equal to one of the two original curves but with a reversed list of points. Either curve1 = [a, b, c, d] curve3 = [g, f, e] or curve3 = [d, c, b, a] curve2 = [e, f, g] should work to generate a surface that doesn't twist. In practical terms this means that all edges used to generate a surface should be created preferably in the same clockwise or anti-clockwise direction. Following this simple rule usually guarantees that the surface will follow the smoothest direction and won't twist. When the surface's ViewLighting property is One side, a face will be painted completely black if its normal direction points into the 3D view (away from the current viewer), indicating a flipped face with respect to the other colored faces. Left: the boundary edges are oriented in the same direction, and thus the generated surface is smooth. Right: the boundary edges have opposite directions, and thus the generated surface twists on itself, resulting in self-intersections. Scripting See also: FreeCAD Scripting Basics. The Surface GeomFillSurface tool can be used in macros and from the Python console by adding the Surface::GeomFillSurface object. - The edges to be used to define the surface must be assigned as a LinkSubList to the BoundaryListproperty of the object. - The type of algorithm must be assigned like a string to the FillTypeproperty. - All objects with edges need to be computed before they can be used as input for the properties of the GeomFillSurface object. import FreeCAD as App import Draft doc = App.newDocument() a = App.Vector(-140, -100, 0) b = App.Vector(175, -108, 0) c = App.Vector(200, 101, 0) d = App.Vector(-135, 107, 70) points1 = [a, App.Vector(-55, -91, 65), App.Vector(35, -85, -5), b] obj1 = Draft.make_bspline(points1) points2 = [b, App.Vector(217, -45, 55), App.Vector(217, 35, -15), c] obj2 = Draft.make_bspline(points2) points3 = [c, App.Vector(33, 121, 55), App.Vector(0, 91, 15), App.Vector(-80, 121, -40), d] obj3 = Draft.make_bspline(points3) points4 = [d, App.Vector(-140, 0, 45), a] obj4 = Draft.make_bspline(points4) doc.recompute() surf = doc.addObject("Surface::GeomFillSurface", "Surface") surf.BoundaryList = [(obj1, "Edge1"), (obj2, "Edge1"), (obj3, "Edge1"), (obj4, "Edge1")] doc.recompute() >
https://wiki.freecadweb.org/Surface_GeomFillSurface
CC-MAIN-2020-45
refinedweb
645
58.38
A common object used to be passed in N-tier applications in .NET environment is a DataSet object. Moreover in order to ease the use of the DataSet object, we usually define it as a strongly typed DataSet (using XML schema). A common scenario is to create (instantiate) the typed DataSet in one tier and then to pass it to the other tiers for further logic implementations. The creation of a typed DataSet is actually very expensive. I was amazed to realize (using various profiling tools) that around 15%-20% of my application was wasted on typed DataSets' ctors. Most of the applications use the following flow: a newly created DataSet is populated with data (either by the user or from the DB) updated with some logic, saved to DB and then finally discarded. The cycle then repeats itself. (Refer the Remarks section for additional information of what can be done in cases where more than one instance of such a DataSet is needed.) If the DataSet would be created only once, it would improve the performance significantly. Therefore the proposed solution is as follows: create once the desired typed DataSet, save it to a cache, and once needed, return it just after clearing its data using the DataSet.Clear() method. Is it essentially faster? Yes, I have tested it using various profilers and the DataSet.Clear() method is faster by 2-10 (!!!) times than a typed DataSet's ctor. Below is the source code for a typed DataSet proxy which controls the creation of the desired typed DataSet. namespace TypedDSProxy { ///<summary> /// Controls creation of typed DataSet. /// Singleton. ///</summary> public class DSProxy { ///<summary> /// Initial size of typed DS cache. ///</summary> private const int InitialCacheSize = 2; ///<summary> /// Typed DS cache. ///</summary> private Hashtable cache; ///<summary> /// Instance variable for Singleton pattern. ///</summary> private static DSProxy DSProxyInstance; ///<summary> /// Default Constructor. ///</summary> private DSProxy() { cache = new Hashtable(InitialCacheSize); } ///<summary> /// Instance method for Singleton pattern. ///</summary> ///<returns>DSProxy</returns> public static DSProxy Instance() { if(DSProxyInstance == null) DSProxyInstance = new DSProxy(); return DSProxyInstance; } ///<summary> /// Gets the namespace from the given type. ///(The namespace derived from the beginning /// of the type till the first "." char.) ///</summary> ///<param name="dsType">The string representation /// of typed DS's type.</param> ///<returns>Typed DS's namespace.</returns> private string GetNamespace(string dsType) { try { return dsType.Substring(0, dsType.IndexOf(".")); } catch(Exception e) { // write e.Message to log. ... } } ///<summary> /// Returns an empty typed DataSet according to a given type. ///</summary> ///<param name="dsType">The string representation /// of typed DS's type.</param> ///<returns>Empty typed DS.</returns> public DataSet GetDS(string dsType) { try { DataSet ds; // if the DataSet wasn't created. if(cache[dsType] == null) { // create it using its assembly. Assembly asm = Assembly.Load(GetNamespace(dsType)); ds = (DataSet)asm.CreateInstance(dsType, true); cache.Add(dsType, ds); } else { // else clear it. ds = (DataSet)cache[dsType]; ds.Clear(); } return ds; } catch(Exception e) { // write e.Message to log. ... } } } } The client uses the typed DataSet proxy as follows: class clsClient { ... public void SomeOperationWithTypedDS() { OrdersDS ds = (OrdersDS)DSProxy.Instance().GetDS(typeof(OrdersDS).ToString()); ds.Orders.AddOrdersRow("a", 1, System.DateTime.Now, System.DateTime.Now, System.DateTime.Now, 1, decimal.MinValue, "a", "a", "a", "a", "a", "a"); } ... } DataSets can be used. (A pool of objects also solves cases where more than one typed DataSetobject is needed.) A good article about managed pools of objects can be found here: Object Pooling with Managed Code - By Scott Sherer. (When thread safety is not an issue and still more than one typed DataSetobject is needed, a simpler pool - without object context - can be used.) DataSet�s type. Another approach is to use the ConstructorInfo object which can be extracted from the DataSet�s Type in order to create the instance. Use ConstructorInfo.Invoke() method in order to accomplish the task. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/grid/typeddsproxy.aspx
crawl-002
refinedweb
637
52.05
I am using Ant 1.2 with JDK 1.3 under Windows 98. I read about the IllegalAccessError problem in the archives, but it didn't give me an adequate explanation. I have a public class, let's call it Public, with a static main which i use the java task to execute. In the constructor of this class I create an instance of another class who's constructor is protected, let's call this Protected. Both Public and Protected are in the same package, so Public should be able to create an instance of Protected. Alas, I cannot do so in Ant. I understand why Ant must use its own classloader, because I am specifying my own class path on the fly which consists of JAR files that are created in the build file. But I have created my own class loaders before and I have not run into this type of problem, this is perfectly legal stuff. Is this problematic because Ant must use reflection to call the static main method? I cannot use forking since I need standard input and I would prefer not to run an additional JVM anyway. This seems like pretty reasonable limitation. Am I missing something? -> richard hall
http://mail-archives.apache.org/mod_mbox/ant-user/200102.mbox/%[email protected]%3E
CC-MAIN-2017-34
refinedweb
205
73.98
It looks like you're new here. If you want to get involved, click one of these buttons! hi , i am suppost to read in a user defined text file then, sort it using bubble sort then print the results into a newfile called sorted.txt created within the program, but when i compile i get error with strcpy, and sayas its expects char** but argument is of type char here is my code void Bubble_sort(char name [], int n); typedef enum {false,true} bool; { int main() { FILE *pFile; FILE *fptr; int i; char FileName[MAX_FILENAME+1]; char name[SIZE]; int last; printf("Enter the name of the student records to be sorted\n"); scanf("%s",&FileName); pFile == fopen(FileName,"r"); /* read from the file */ if(pFile!=NULL) { printf("File Open Successful\n"); for(i = 0 ; !feof(pFile ) ;i++ ) { /* Read the number of elements in the input file / fscanf(pFile,"%s",name[i]); last = i-1; if (i > SIZE){/exit(EXIT_FAILURE); */ exit(0); } } } fclose (pFile); fptr == fopen("C:\\sorted.txt","w"); if(fptr==NULL){ printf("Error!"); exit(1); } BUBBLESORT ALGORITHM void Bubble_sort(char name [SIZE], int i); { int i,j; char temp; bool done = false ; while (!done) { done = true; for (i = last; i > 0; i--) for (j = 0; j <= i; j++) if (strcmp(name[j],name[j-1]) < 0) { strcpy(temp,name[j]); strcpy(name[j],name[j - 1]); strcpy(name[j - 1], temp); done = false; } } for (i = 0 ; i <= last ; i++) fprintf(fptr,"%s",name[i]); } fclose(fptr); return (0); } } The str cpy/cmp error is caused by name being an array of characters. You're trying to compare single characters, but the functions want strings. If you're supposed to read many names and store them in an array in order to sort them, then you need another dimension in the declaration of name and in the function prototype. If you are just sorting an array of characters, then don't use str cmp/cpy yeah im trying to sort an array of names thanks for your help
http://programmersheaven.com/discussion/433896/error-with-my-bubble-sort-program-dont-know-where-im-going-wrong
CC-MAIN-2017-09
refinedweb
340
56.29
Re: workbook_open no longer occurs - From: Steve the large <Stevethelarge@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Fri, 22 Jun 2007 06:40:03 -0700 Sorry, I must have described the problem wrongly. The properties functioning was added several weeks ago, long before the problem occurred. The older version worked well, workbook_open() worked just fine as I was adding in that functionality. In my previous posts, I was copying the non-working version to a new workbook, first by copying sheets over, correcting the defined names as I went. With macros OFF, I'd copy one or two sheets, save off the new file as a separate version, close everything down. THEN open the "saved off" copy with macros enabled -and see if my "heartbeat" code (the msgbox routine in the workbook_open()) would fire off. If it did, then I closed the "saved off" file. REopen the new target file with macros disabled, reopen the broken file (macros disabled), and copy a few more sheets over to the target, and then save off a different version, close up and re-open & look for heartbeat. I worked my way through all the sheets this way -no problem. Then I imported the modules, one at a time, saving off, closing, opening the saved off, checking for msg. Then I imported the forms -still working. Then I manually added the four, little properties manually to the target program (the very last thing) by opening the files/properties menu. Viola -the target program heartbeat stopped occurring. To make sure, I closed the target, re-opened the broken file, and *JUST* deleted the four custom properties. I made no changes to any code, just removed the props from via the file/properties... dialog. The 'broken' version started working -the workbook_open() routine was firing off and I started seeing the msgbox message. Weird, hunh? It's still possible I have bad code in there that is re-corrupting the file somehow, after twenty years as a professional programmer, I've seen everything. Or it could be a corrupted registry or some such related nonsense. Being without Admin privileges, I'm limited in how far I can dig. Anyway, thanks for the support and advice. "Mark Lincoln" wrote: You know you've got a tough problem when you start wishing it was your. fault.... ;) I'm afraid I don't have any better idea of what's happening than you do. But this has never stopped me from guessing. You mentioned that the problem occurred after you added the custom properties, and that this was the last thing you did. Have you tried adding those to the previous version before making any other changes? I'm thinking that if the Workbook_Open() code still works afterward, then you can make your changes one at a time and test each change for failure of Workbook_Open(). This sounds like it could be quite tedious for you, but if there's some weird interaction between new code and custom properties then that's possibly the only way to find it. Good luck. I'm hoping to read about a happy resolution soon. Mark Lincoln On Jun 21, 2:46 pm, Steve the large <Stevethela...@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote: Excellent suggestion, I'm under some limitations, though. This has to be a single workbook, because that is what I am updating. Some background is in order.... I'm doing this as a favor to another dept. at a company to which I'm consulting. Not charging for it. They had a single workbook with multiple sheets, each sheet being used as a "dashboard". Multiple people access this single workbook through eRoom. Opening it up, modifying their dashboard, and putting it back. In the original workbook they would keep a blank dashboard (***) as a template.. They would manually copy this work*** and fill in tab name and project information into the copy. This was all done manually. The group managing this presentation process has two "facilitators" who are charged with ensuring that the PMs update their sheets. The PMs then give presentations once a month to the muckies with the dashboads as a presentation tool. Almost all these people are very unsophisticated Excel & eRoom users, with little training. I want to make their life easier, and that means as few changes over what they were doing before as possible. They are using eroom because the PMs do not all have access to a common server. I am creating some automation for the facilitators, so that the dashboard is created automatically, and there is a tracking/control work*** where the facilitators can see all projects at a glance. Along the way I've added some bells and whistles to make their life easier, but I'm afraid I'm limited to a single workbook. There are usually no more than 30-40 projects going on simultaneously so I wasn't expecting a problem. There are six summary worksheets for different areas of the company, and when a new dashboard is required, they just click on a menu choice that brings up a simple dialog to put in project manager, area, facilitator, etc. Then the template is copied and links are automatically added to the summary sheets. Not really that huge in concept. During this round of debugging, I copied the sheets over to a new workbook, which had a single call to msgbox routine in the workbook_open( ) sub. Periodically I would save off to an intermediate file and open that with macros enabled to see if workbook_open event was functioning. When I copied sheets over to a new workbook, the defined names (about twenty or so) get really big because the complete path & file name is pre-pended to the defined name "linking" the new workbook to the old workbook. I would then have to go into the defined names and manually remove this file path data. My concern is that the size of memory allocated to the storage of the defined names may be dynamically re-allocated beyond some limit, overwriting code execution space, and that the "properties" may be linked to this memory storage allocation. These are just guesses at this point. I programmed Apps in C for 13 years on PCs, and I saw weird stuff like this many times.. I wish I had some decent debbugging tools for this, but sigh. I would be ecstatic to discover something as mundane as a coding reason for this weirdness, then I could definitly fix it. Until I find the reason for this weird occurance, I don't trust the app I've designed using Excel. -Not a good place for my head, or the people I'm trying to help. Thanks for the suggestion though, if I had more leeway, I would prefer to keep the workbooks smaller with more separation of code from data. "Mark Lincoln" wrote: Okay, another guess: You mention copying over worksheets to a new workbook (with the resulting defined names "becoming quite long"). I can imagine that anything getting ever-larger over time is going to eventually become a problem. I wonder if it would be better to save the "template" workbook under a new name using Save As and work with that, rather than copying sheets to a new workbook each time. Mark Lincoln On Jun 20, 12:26 pm, Steve the large <Stevethela...@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote: Mark, the custom properties were working fine for several weeks in previous versions of the file. (The workbook_open routine had been working exactly as expected in these previous versions). Something happened (either there was a file corruption, I hit an internal and as-yet undocumented limit on namespaces, or an interaction is occurring with an add-in (because of which I removed all Add-Ins, just to simplify/be sure)) and the workbook_open() event seemed to stop occurring. To answer your question, I created the custom properties through the file.properties dialog, under the "custom properties" tab, and am accessing them through the code. The App I'm working on has 5 modules, 4 forms, one template work*** with several controls, 5 "user interaction" worksheets with controls, and several summary report sheets. The template is used to create dashboards for various projects. The overall file size can vary from 1 to 2.5 Meg, and has worked flawlessly until the workbook_open stopped executing. When I copy the sheets over to a new sheet, the defined names become quite long, and I'm wondering if anyone knows the limit on the defined name namespace. Does anyone know the hard limits on the amount of space available for custom properties names? Are they using the same bucket within the code to hold these variables? Right now, I'm leaning towards some type of overwrite occurring from that namespace into executing code. I'm going to pursue this and see if I can get the problem to re-occur just through this method. At this point I'm starting to think it is a bug in Excel, rather than a coding problem. "Mark Lincoln" wrote: Since I was unaware of custom properties prior to reading this thread, I created a workbook and put the following code from VBA Help file into a Workbook_Open Sub in the ThisWorkbook module... Private Sub Workbook_Open() Dim wksSheet1 As Worksheet Set wksSheet1 = Application.ActiveSheet ' Add metadata to work***. wksSheet1.CustomProperties.Add _ Name:="Market", Value:="Nasdaq" ' Display metadata. With wksSheet1.CustomProperties.Item(1) MsgBox .Name & vbTab & .Value End With End Sub ....and it runs just as expected. You may want to try something similar. If it works, it then begs the question: Where are you defining your custom properties? Have you used custom properties before with success, or is this your first use of them? (You may want to post an example of your code.) Which version of Excel are you using? Mark Lincoln On Jun 19, 5:14 pm, Steve the large <Stevethela...@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote: Update -I can get workbook_open event to fire now, but it's a weird interaction. I just finished copying over the code modules, names, forms, worksheets to a brand new file. Every so often I saved off a copy of the document and opened the new document with macros enabled. Each time I would see my message box text being from within Workbook_open. Until I did the very last thing.... I have four custom properties defined, they are "debug", "version", "userID", and "AutoSense". They are Y/N, Text, Text, Y/N. They are not linked to content. When I added these, my message stopped displaying. Note that there is no code running that interacts with these properties. The workbook_open() sub contains only a msgbox call. I went back to the original file, deleted all the custom properties, and the workbook_open() sub started running there, also. I added the custom properties back, (now named gDeb, gVer, gUser, gAS), workbook_open() stopped working again. I tried linking them to content in the workbook. This made no difference. At this point I am mightily confused, I have a work around (don't use custom properties) but I don't like it. Anyone have any thoughts? "Steve the large" wrote: Yeah, that's what I was thinking about. The process takes about an hour and I was hoping I was doing something either stupid, obvious, or known. I can also just go back to the previous version and make the changes there, but there were a lot of changes and it doesn't tell me what went wrong. Thanks for the advice, I'll try it and get back to the thread. "Tom Ogilvy" wrote: actually, you said two workbookis: I created a brand new workbook, copied version 18's sheets, modules, forms, names and properties over, and the workbook_open still doesnt work. Now you need to repeat this, but don't copy over everything. Copy over parts of the workbook until you find which part cause it to stop working. -- Regards, Tom Ogilvy "Steve the large" wrote: Susan, thanks for the reply, I did try that, by putting the enableevents = true in another routine. But my macro events are running along fine. Its just the workbook_open() routine that is causing the problem. It just stopped running in a single workbook. Other workbooks (previous versions, which I keep, and unrelated workbooks) still work perfectly fine. "Susan" wrote: obviously that should have been application.calculation=xlautomatic duh susan On Jun 19, 1:09 pm, Steve the large- Hide quoted text - - Show quoted text -... read more ; - Follow-Ups: - Re: workbook_open no longer occurs - From: Mark Lincoln - References: - Re: workbook_open no longer occurs - From: Susan - Re: workbook_open no longer occurs - From: Tom Ogilvy - Re: workbook_open no longer occurs - From: Steve the large - Re: workbook_open no longer occurs - From: Steve the large - Re: workbook_open no longer occurs - From: Mark Lincoln - Re: workbook_open no longer occurs - From: Steve the large - Re: workbook_open no longer occurs - From: Mark Lincoln - Re: workbook_open no longer occurs - From: Steve the large - Re: workbook_open no longer occurs - From: Mark Lincoln - Prev by Date: RE: Combo drop-down edit box - Next by Date: Re: Enable/Disable ComboBox - Previous by thread: Re: workbook_open no longer occurs - Next by thread: Re: workbook_open no longer occurs - Index(es):
http://www.tech-archive.net/Archive/Excel/microsoft.public.excel.programming/2007-06/msg03492.html
crawl-002
refinedweb
2,200
71.44
Not sure how pycassa does it, but it a simple case of... - get_slice with start="", finish="" and count = 100,001 - pop the last column and store it's name - get_slice with start as the last column name, finish="" and count = 100,001 repeat. A On 20 Oct, 2010,at 03:08 PM, Wayne <[email protected]> wrote: Thanks Done in 0.75460100174 Done2 in 0.314303874969 {978} > python decode_test.py 600000 Done in 13.2945489883 Done2 in 7.32861185074 My general advice is to pull back less data in a single request. Aaron On 20 Oct, 2010,at 11:30 AM, Wayne <[email protected]> wrote: I am not sure how many bytes,? To us it is like the MySQL client we use in python. It is really C wrapped in python and adds almost zero overhead to the time it takes mysql to return the data. That is the expectation we have and the performance we are looking to get to. Disk I/O + 20%. We are returning one big row and this is not our normal use case but a requirement for us to use Cassandra. We need to get all data for a specific value, as this is a secondary index. It is like getting all users in the state of CA. CA is the key and there is a column for every user id. We are testing with 600,000 but this will grow to 10+ million in the future. We can not test .7 as we are only using .6.6. We are trying to evaluate Cassandra and stability is one concern so .7 is definitely not for us at this point. Thanks. On Tue, Oct 19, 2010 at 4:27 PM, Aaron Morton <[email protected]> wrote: Just wondering how many bytes you are returning to the client to get an idea of how slow it is. The call to fastbinary is decoding the wireformat and creating the Python objects. When you ask for 600,000 columns your are creating a lot of python objects. Each column will be a ColumnOrSuperColumn, wrapping a Column, which has probably 2 Strings. So 2.4 million python objects. Here's my rough test script. def go(count): start = time.time() buffer = [ ttypes.ColumnOrSuperColumn(column=ttypes.Column( "column_name_%s" % i, "row_size of something something", 0, 0)) for i in range(count) ] print "Done in %s" % (time.time() - start) On my machine that takes 13 seconds for 600,000 and 0.04 for 10,000. The fastbinary module is running a lot faster because it's all in c. It's not a great test but I think it gives an idea of what you are asking for. I think there is an element of python been slower than other languages. But IMHO you are asking for a lot of data. Can you ask for less data? Out of interest are you able to try the avro client? It's still experimental (0.7 only) but may give you something to compare it against. Aaron On 20 Oct, 2010,at 07:23 AM, Wayne <[email protected]> wrote:thrift_spec))) so I am not sure what other options we have. >> >> Thanks for your help. >> > > > > -- > Jonathan Ellis > Project Chair, Apache Cassandra > co-founder of Riptano, the source for professional Cassandra support >
http://mail-archives.apache.org/mod_mbox/cassandra-user/201010.mbox/%[email protected]%3E
CC-MAIN-2020-40
refinedweb
550
86.2
This Dezi is a search platform based on Apache Lucy, Swish3, Search::OpenSearch and Search::Query. Dezi integrates several CPAN search libraries into one easy-to-use interface. This document is a placeholder for the namespace and documentation only. You s...KARMAN/Dezi-0.004002 - 30 Apr 2015 22:01:11 GMT - Search in distribution - dezi - Dezi server app - Dezi::Server - Dezi Plack server - Dezi::Config - Dezi server configuration - 3 more results from Dezi » JDB/Win32-OLE-0.1712 (5 reviews) - 14 May 2014 22:54:06 GMT - Search in distribution *Sereal* is an efficient, compact-output, binary and feature-rich serialization protocol. The Perl encoder is implemented as the Sereal::Encoder module, the Perl decoder correspondingly as Sereal::Decoder. They are distributed separately to allow for...YVES/Sereal-3.005 (1 review) - 05 Jan 2015 14:37:46 GMT - Search in distribution This script uses the nmap security scanner with the Nmap::Parser module in order to run quick scans against specific hosts, and gather all the information that is required to know about that specific host which nmap can figure out. This script can be...APERSAUD/Nmap-Parser-1.31 (2 reviews) - 05 Apr 2013 23:27:03 GMT - Search in distribution - nmap2db - store nmap scan data into entries in SQLite/MySQL database - nmap2sqlite - turn nmap scan data into entries in SQLite DB - Nmap::Parser - parse nmap scan data with perl (3 reviews) - 11 Aug 2015 06:50:14 GMT - Search in distribution Alien is a package that exists just to hold together an idea, the idea of Alien:: packages, so there is no code here, just motivation for Alien. The intent of Alien is to provide a mechanism for specifying, installing and using non-native dependencie...PLICEASE/Alien-0.93 - 14 Sep 2015 13:57:34 GMT - Search in distribution Nama performs multitrack recording, effects processing, editing, mixing, mastering, live performance and general-purpose audio processing, using the Ecasound realtime audio engine. Audio Functionality Audio projects may be developed using tracks, bus...GANGLION/Audio-Nama-1.204 - 09 Aug 2015 03:00:26 GMT - Search in distribution DDUMONT/App-Cme-1.006 - 19 Jul 2015 12:18:00 SHLOMIF/App-Sky-0.2.1 - 17 Oct 2014 12:59:26 GMT - Search in distribution - App::Sky::Module - class that does the heavy lifting. - App::Sky::CmdLine - command line program - App::Sky::Results - results of an upload. - 3 more results from App-Sky » PERLANCAR/phpBB2-Simple-0.03 - 04 Sep 2015 00:39:04 GMT - Search in distribution Type: % deziapp -h for a full usage statement. See the Dezi::CLI class documentation for details....KARMAN/Dezi-App-0.013 - 02 Sep 2014 02:41:46 GMT - Search in distribution - Dezi::CLI - command-line interface to Dezi::App - Dezi::App - build Dezi search applications - Dezi::Role - common attributes for Dezi classes - 22 more results from Dezi-App » Background Let's start by listing some of the resources at the disposal of the Perl community. * CPAN is a wonderful archive and distribution system for Perl modules. * search.cpan.org is a very nice web interface for searching and browsing the CPAN ...ITUB/AnnoCPAN-0.22 - 02 Aug 2005 04:33:32 GMT - Search in distribution Since Perl 5.8, thread programming has been available using a model called *interpreter threads* which provides a new Perl interpreter for each thread, and, by default, results in no data or state information being shared between threads. (Prior to P...JDHEDDEN/threads-2.02 - 13 Jun 2015 10:36:17 CORION/DBD-WMI-0.07 (1 review) - 14 Jan 2015 18:55:00 GMT - Search in distribution - Win32::WQL - DBI-like wrapper for the WMI The DBD::CSV module is yet another driver for the DBI (Database independent interface for Perl). This one is based on the SQL "engine" SQL::Statement and the abstract DBI driver DBD::File and implements access to so-called CSV files (Comma Separated ...HMBRAND/DBD-CSV-0.48 (1 review) - 11 Feb 2015 20:51:10 GMT - Search in distribution Forums can be thought of as channels or discussion area on which Posts are made. A Forum can be linked to the Site that hosts it. Forums will usually discuss a certain topic or set of related topics, or they may contain discussions entirely devoted t...GEEWIZ/SIOC-v1.0.0 - 21 Mar 2008 21:25:06 GMT - Search in distribution - SIOC::Site - SIOC Site class - SIOC::Post - SIOC Post class - SIOC::Container - SIOC Container class - 1 more result from SIOC »
https://metacpan.org/search?q=CPAN-Forum
CC-MAIN-2015-40
refinedweb
752
52.6
Spark, an alternative for fast data analytics. ). We'll look at examples of these two operations shortly, but first, let's get acquainted with the Scala language. Brief introduction to Scala Scala may be one of the Internet's best-kept secrets. You can find Scala in production at some of the Internet's busiest websites, including Twitter, LinkedIn, and Foursquare (with its web application framework, called Lift). There's also evidence to suggest that financial institutions have taken an interest in the performance of Scala (such as EDF Trading's use for derivative pricing). Scala is a multi-paradigm language in that it supports language features associated with imperative, functional, and object-oriented languages in a smooth and comfortable way. From the perspective of object-orientation, every value in Scala is an object. Similarly, from the functional perspective, every function is a value. Scala is also statically typed with a type system both expressive and safe. In addition, Scala is a virtual machine (VM) language and runs directly on the Java™ Virtual Machine (JVM) using the Java Runtime Environment version 2 through byte codes that the Scala compiler generates. This setup allows Scala to run almost everywhere the JVM runs (with the requirement of an additional Scala run time library). It also allows Scala to exploit the vast catalog of Java libraries that exist, along with your existing Java code. Finally, Scala is extensible. The language (which actually stands for Scalable Language) was defined for simple extensions that integrate cleanly into the language. Scala illustrated Let's look at some examples of the Scala language in action. Scala comes with its own interpreter, allowing you to experiment with the language in an interactive way. A useful treatment of Scala is beyond the scope of this article, but you can find links to more information in Related topics. Listing 1 begins our quick tour of the Scala language through its interpreter. After starting Scala, you're greeted with its prompt, through which you can interactively evaluate expressions and programs. Begin by creating two variables—one being immutable ( vals, called single assignment) and one being mutable ( vars). Note that when you try to change b (your var), you succeed, but an error is returned when you attempt the change to your val. Listing 1. Simple variables in Scala $ scala Welcome to Scala version 2.8.1.final (OpenJDK Client VM, Java 1.6.0_20). Type in expressions to have them evaluated. Type :help for more information. scala> val a = 1 a: Int = 1 scala> var b = 2 b: Int = 2 scala> b = b + a b: Int = 3 scala> a = 2 <console>6: error: reassignment to val a = 2 ^ Next, create a simple method that calculates and returns the square of an Int. Defining a method in Scala begins with def, followed by the method name and a list of parameters, then you set it to number of statements (in this example, one). No return value is specified, as it can be inferred from the method itself. Note how this is similar to assigning a value to a variable. I demonstrate this process on an object called 3 and a result variable called res0 (which the Scala interpreter creates for you automatically). This is all shown in Listing 2. Listing 2. A simple method in Scala scala> def square(x: Int) = x*x square: (x: Int)Int scala> square(3) res0: Int = 9 scala> square(res0) res1: Int = 81 Next, let's look at the construction of a simple class in Scala (see Listing 3). You define a simple Dog class that accepts a String argument (your name constructor). Note here that the class takes the parameter directly (with no definition of the class parameter in the body of the class). A single method exists that emits a string when called. You create a new instance of your class, and then invoke your method. Note that the interpreter inserts the vertical bars: They are not part of the code. Listing 3. A simple class in Scala scala> class Dog( name: String ) { | def bark() = println(name + " barked") | } defined class Dog scala> val stubby = new Dog("Stubby") stubby: Dog = Dog@1dd5a3d scala> stubby.bark Stubby barked scala> When you're done, simply type :quit to exit the Scala interpreter. Installing Scala and Spark The first step is to download and configure Scala. The commands shown in Listing 4 illustrate downloading and preparing the Scala installation. Use the 2.8 version of Scala, because this is what Spark is documented as needing. Listing 4. Installing Scala $ wget $ sudo tar xvfz scala-2.8.1.final.tgz --directory /opt/ To make Scala visible, add the following lines to your .bashrc (if you're using Bash as your shell): export SCALA_HOME=/opt/scala-2.8.1.final export PATH=$SCALA_HOME/bin:$PATH You can then test your installation, as illustrated in Listing 5. This set of commands loads the changes to the bashrc file, and then does a quick test of the Scala interpreter shell. Listing 5. Configuring and running interactive Scala $ scala Welcome to Scala version 2.8.1.final (OpenJDK Client VM, Java 1.6.0_20). Type in expressions to have them evaluated. Type :help for more information. scala> println("Scala is installed!") Scala is installed! scala> :quit $ As shown, you should now see a Scala prompt. You can exit by typing :quit. Note that Scala executes within the context of the JVM, so you'll need that also. I'm using Ubuntu, which comes with OpenJDK by default. Next, get the latest copy of the Spark framework. To do so, use the script shown in Listing 6. Listing 6. Downloading and installing the Spark framework wget mesos-spark-0.3-scala-2.8-0-gc86af80.tar.gz $ sudo tar xvfz mesos-spark-0.3-scala-2.8-0-gc86af80.tar.gz Next, set up the spark configuration in ./conf/spar-env.sh with the following line for the Scala home directory: export SCALA_HOME=/opt/scala-2.8.1.final The final step in setup is to update your distribution using the simple build tool ( sbt). sbt is a build tool for Scala and has been used with the Spark distribution. You perform the update and compile step in the mesos-spark-c86af80 subdirectory as: $ sbt/sbt update compile Note that you'll need to be connected to the Internet when you perform this step. When complete, run a quick test of Spark, as shown in Listing 7. In this test, you request to run the SparkPi example, which calculates an estimation of pi (through random point sampling in the unit square). The format shown requests the sample program (spark.examples.SparkPi) and the host parameter, which defines the Mesos master (in this case, your localhost, because it's a single-node cluster) and the number of threads to use. Notice that in Listing 7, two tasks are executed, but they are serialized (task 0 starts and finishes before task 1 begins). Listing 7. Performing a quick test of Spark $ ./run spark.examples.SparkPi local[1] 11/08/26 19:52:33 INFO spark.CacheTrackerActor: Registered actor on port 50501 11/08/26 19:52:33 INFO spark.MapOutputTrackerActor: Registered actor on port 50501 11/08/26 19:52:33 INFO spark.SparkContext: Starting job... 11/08/26 19:52:33 INFO spark.CacheTracker: Registering RDD ID 0 with cache 11/08/26 19:52:33 INFO spark.CacheTrackerActor: Registering RDD 0 with 2 partitions 11/08/26 19:52:33 INFO spark.CacheTrackerActor: Asked for current cache locations 11/08/26 19:52:33 INFO spark.LocalScheduler: Final stage: Stage 0 11/08/26 19:52:33 INFO spark.LocalScheduler: Parents of final stage: List() 11/08/26 19:52:33 INFO spark.LocalScheduler: Missing parents: List() 11/08/26 19:52:33 INFO spark.LocalScheduler: Submitting Stage 0, which has no missing ... 11/08/26 19:52:33 INFO spark.LocalScheduler: Running task 0 11/08/26 19:52:33 INFO spark.LocalScheduler: Size of task 0 is 1385 bytes 11/08/26 19:52:33 INFO spark.LocalScheduler: Finished task 0 11/08/26 19:52:33 INFO spark.LocalScheduler: Running task 1 11/08/26 19:52:33 INFO spark.LocalScheduler: Completed ResultTask(0, 0) 11/08/26 19:52:33 INFO spark.LocalScheduler: Size of task 1 is 1385 bytes 11/08/26 19:52:33 INFO spark.LocalScheduler: Finished task 1 11/08/26 19:52:33 INFO spark.LocalScheduler: Completed ResultTask(0, 1) 11/08/26 19:52:33 INFO spark.SparkContext: Job finished in 0.145892763 s Pi is roughly 3.14952 $ By increasing the number of threads, you can not only increase the parallelization of thread execution but also execute the job in less time (as shown in Listing 8). Listing 8. Another quick test of Spark with two threads $ ./run spark.examples.SparkPi local[2] 11/08/26 20:04:30 INFO spark.MapOutputTrackerActor: Registered actor on port 50501 11/08/26 20:04:30 INFO spark.CacheTrackerActor: Registered actor on port 50501 11/08/26 20:04:30 INFO spark.SparkContext: Starting job... 11/08/26 20:04:30 INFO spark.CacheTracker: Registering RDD ID 0 with cache 11/08/26 20:04:30 INFO spark.CacheTrackerActor: Registering RDD 0 with 2 partitions 11/08/26 20:04:30 INFO spark.CacheTrackerActor: Asked for current cache locations 11/08/26 20:04:30 INFO spark.LocalScheduler: Final stage: Stage 0 11/08/26 20:04:30 INFO spark.LocalScheduler: Parents of final stage: List() 11/08/26 20:04:30 INFO spark.LocalScheduler: Missing parents: List() 11/08/26 20:04:30 INFO spark.LocalScheduler: Submitting Stage 0, which has no missing ... 11/08/26 20:04:30 INFO spark.LocalScheduler: Running task 0 11/08/26 20:04:30 INFO spark.LocalScheduler: Running task 1 11/08/26 20:04:30 INFO spark.LocalScheduler: Size of task 1 is 1385 bytes 11/08/26 20:04:30 INFO spark.LocalScheduler: Size of task 0 is 1385 bytes 11/08/26 20:04:30 INFO spark.LocalScheduler: Finished task 0 11/08/26 20:04:30 INFO spark.LocalScheduler: Finished task 1 11/08/26 20:04:30 INFO spark.LocalScheduler: Completed ResultTask(0, 1) 11/08/26 20:04:30 INFO spark.LocalScheduler: Completed ResultTask(0, 0) 11/08/26 20:04:30 INFO spark.SparkContext: Job finished in 0.101287331 s Pi is roughly 3.14052 $ Building a simple Spark application with Scala To build a Spark application, you need Spark and its dependencies in a single Java archive (JAR) file. Create this JAR in Spark's top-level directory with sbt as: $ sbt/sbt assembly The result is the file ./core/target/scala_2.8.1/"Spark Core-assembly-0.3.jar"). Add this file to your CLASSPATH so that it's accessible. In this example, you won't use this JAR, because you'll run it with the Scala interpreter instead of compiling it. For this example, use the standard MapReduce transformation (shown in Listing 9). The example begins with the necessary imports for the Spark classes. Next, you define your class ( SparkTest), with its main method, which parses the arguments for later use. These arguments define the environment from which Spark will be executed (in this case, a single-node cluster). Next, create your SparkContext object, which tells Spark how to access your cluster. This object requires two parameters: the Mesos master name (passed in) and the name that you assign the job ( SparkTest). Parse the number of slices from the command line, which tells Spark how many threads to use for the job. The last remaining item for setup is specifying the text file to use for the MapReduce operation. Finally, you get to the real meat of the Spark example, which consists of a set of transformations. With your file, invoke the flatMap method to return an RDD (through the specified function to split up the text line into tokens). This RDD is then passed through the map method (which creates the key-value pairs) and finally through the ReduceByKey method, which aggregates your key-value pairs. It does this by passing the key-value pairs to the _ + _ anonymous function. This function simply takes two parameters (the key and value) and returns the result by appending them together (a String and an Int). This value is then emitted as a text file (to the output directory). Listing 9. MapReduce in Scala/Spark (SparkTest.scala) import spark.SparkContext import SparkContext._ object SparkTest { def main( args: Array[String]) { if (args.length == 0) { System.err.println("Usage: SparkTest <host> [<slices>]") System.exit(1) } val spark = new SparkContext(args(0), "SparkTest") val slices = if (args.length > 1) args(1).toInt else 2 val myFile = spark.textFile("test.txt") val counts = myFile.flatMap(line => line.split(" ")) .map(word => (word, 1)) .reduceByKey(_ + _) counts.saveAsTextFile("out.txt") } } SparkTest.main(args) To execute your script, simply request execution with: $ scala SparkTest.scala local[1] You can find the MapReduce test file in the output directory (as output/part-00000). Other big data analytics frameworks Since Hadoop was developed, a number of other big data analytics platforms have arrived that may be worthy of a look. These platforms range from simple script-based offerings to production environments similar to Hadoop. One of the simplest is called bashreduce, which as the name suggests allows you to perform MapReduce-type operations across multiple machines in the Bash environment. bashreduce relies on Secure Shell (password-less) for the cluster of machines you plan to use, and then exists as a script through which you request jobs via UNIX®-style tools ( sort, awk, netcat, and the like). GraphLab is another interesting implementation of the MapReduce abstraction that focuses on parallel implementation of machine learning algorithms. In GraphLab, the Map stage defines computations that can be performed independently in isolation (on separate hosts), and the Reduce stage combines the results. Finally, a newcomer to the big data scene is Storm from Twitter (through the acquisition of BackType). Storm is defined as the "Hadoop of real-time processing" and is focused on stream processing and continuous computation (stream results out as they're computed). Storm is written in Clojure (a modern dialect of the Lisp language) but supports applications written in any language (such as Ruby and Python). Twitter released Storm as open source in September 2011. See Related topics for more information. Going further Spark is an interesting addition to the growing family of big data analytics solutions. It provides not only an efficient framework for the processing of distributed datasets but does so in an efficient way (through simple and clean Scala scripts). Both Spark and Scala are under active development. However, with their adoption at key Internet properties, it appears that both have transitioned from interesting open source software to foundational web technologies. Downloadable resources Related topics - In this practice session, Data analysis and performance with Spark, explore multi-thread and multi-node performance with Spark and Mesos and its tunable parameters.(M. Tim Jones, developerWorks, February 2012). - EDF Trading: Implementing a domain-specific language for derivative pricing with Scala: Scala has found adoption in a variety of industries, including stock trading. Learn about one example by watching this video. - Application virtualization, past and future (M. Tim Jones, developerWorks, May 2011) presents an introduction to virtual machine languages and their implementations. - Ceylon: True advance, or just another language? (M. Tim Jones, developerWorks July 2011) explores another interesting (work-in-progress) VM language that relies on the JVM. - First Steps to Scala is a great introduction to the Scala language (written in part by Martin Odersky, the designer of Scala). This lengthy introduction from 2007 covers many aspects of the language. Another useful example is Code Examples for Programming in Scala, which provides Scala recipes for a large variety of code patterns. - Distributed computing with Linux and Hadoop (Ken Mann and M. Tim Jones, developerWorks, December 2008) provides an introduction to the architecture of Hadoop, including the basics of the MapReduce paradigm for distributed processing of bulk data. - Distributed data processing with Hadoop (M. Tim Jones, developerWorks 2010): Find a practical introduction to Hadoop, including how to set up and use a single-node Hadoop cluster, how to set up and use a multi-node cluster, and how to develop map and reduce applications within the Hadoop environment. - developerWorks on Twitter: Follow us for the latest news. You can also follow this author on Twitter at M. Tim Jones. - Spark introduces an in-memory data analytics solution written and supported by the Scala language. - The simple build tool is the build solution adopted by the Scala language. It offers a simple method for small projects as well as advanced features for complex builds. - Lift is the web application framework for Scala, similar to the Rails framework for Ruby. You can find Lift in action at Twitter and Foursquare. - Mesos Project: Spark doesn't support distribution of workloads natively but instead relies on this cluster manager that provides resource isolation and sharing across a network for distributed applications. bashreduce(a Bash script-based implementation) and Storm (acquired by Twitter from BackType, a real-time distributed stream processing system written in Clojure): Hadoop kicked off a number of big data analytics platforms. Other than Spark, you can implement parallel computing architectures with these three offerings.
https://www.ibm.com/developerworks/library/os-spark/index.html
CC-MAIN-2018-34
refinedweb
2,917
58.08
DBI is OK A recent article on Perl.com recommended that most Perl programs use the DBIx::Recordset module as the standard database interface. The examples cast an unfavorable light on DBI (which DBIx::Recordset uses internally). While choosing an interface involves trade-offs, the venerable DBI module, used properly, is a fine choice. This response attempts to clear up some misconceptions and to demonstrate a few features that make DBI, by itself, powerful and attractive. Since its inception in 1992, DBI has matured into a powerful and flexible module. It runs well on many platforms, includes drivers for most popular database systems and even supports flat files and virtual databases. Its popularity means it is well-tested and well-supported, and its modular design provides a consistent interface across the supported backends. Is Table Mutation a Big Problem? The previous article described an occurrence called ``table mutation,'' where the structure of a table changes. Mr. Brannon says that the DBI does not handle this gracefully. One type of mutation is field reordering. For example, a table named 'user' may originally be: name char(25) email char(25) phone char(11) At some point, the developers will discover that the 'name' field is a poor primary key. If a second ``John Smith'' registers, the indexing scheme will fail. To ensure uniqueness, the coders could add an 'id' field using an auto-incremented integer or a globally unique identifier built into the database. The updated table might resemble: id bigint name char(25) email char(25) phone char(11) Mr. Brannon contends that all code that generates a SQL statement that resolves to 'SELECT * FROM user' will break with this change. He is correct. Any database request that assumes, but does not specify, the order of results is susceptible. The situation is not as bad as it seems. First, DBI's fetchrow_hashref() method returns results keyed on field names. Provided the existing field names do not change, code using this approach will continue to work. Unfortunately, this is less efficient than other fetching methods. More importantly, explicitly specifying the desired fields leads to clearer and more secure code. It is easier to understand the purpose of code that selects the 'name' and 'email' fields from the 'user' table above than code that assumes an order of results that may change between databases. This can also improve performance by eliminating unnecessary data from a request. (The less work the database must do, the better. Why retrieve fields that won't be used?) From a pragmatic approach, the example program will need to change when the 'id' field is added. The accessor code must use the new indexing approach. Whether the accessors continue to function in the face of this change is irrelevant -- someone must update the code! The same arguments apply to destructive mutations, where someone deletes a field from a table. While less likely than adding a field, this can occur during prototyping. (Anyone who deletes a field in a production system will need an approved change plan, an extremely good excuse or a recent CV.) A change of this magnitude represents a change in business rules or program internals. Any system that can handle this reliably, without programmer intervention, is a candidate for Turing testing. It is false laziness to assume otherwise. Various classes of programs will handle this differently. My preference is to die immediately, noisily alerting the maintainer. Other applications and problem domains might prefer to insert or to store potentially tainted data for cleansing later. It's even possible to store metadata as the first row of a table or to examine the table structure before inserting data. Given the hopefully rare occurrence of these mutations and the wide range of options in handling them, the DBI does not enforce one solution over another. Contrary to the explanation in the prior article, this is not a failing of the DBI. (See a November 2000 PerlMonks discussion at for more detail.) Making Queries Easier Another of Mr. Brannon's disappointments with the DBI is that it provides no generalized mechanism to generate SQL statements automatically. This allows savvy users to write intricate queries by hand, while database neophytes can use modules to create their statements for them. The rest of us can choose between these approaches. SQL statements are plain text, easily manipulated with Perl. An example from the previous article created an INSERT statement with multiple fields and a hash containing insertable data. Where the example was tedious and hard to maintain, a bit of editing makes it general and powerful enough to become a wrapper function. Luckily, the source hash keys correspond to the destination database fields. It takes only a few lines of code and two clever idioms to produce a sane and generalized function to insert data into a table. my $table = 'uregisternew'; my @fields = qw( country firstname lastname userid password address1 city state province zippostal email phone favorites remaddr gender income dob occupation age ); my $fields = join(', ', @fields); my $values = join(', ', map { $dbh->quote($_) } @formdata{@fields}); $sql = "INSERT into $table ($fields) values ($values)"; $sth = $dbh->prepare($sql); $sth->execute(); $sth->finish(); We'll assume that %formdata has been declared and contains data already. We've already created a database handle, stored in $dbh, and it has the RaiseError attribute set. The first two lines declare the database table to use and the fields into which to insert data. These could just as well come from function arguments. The join() lines transforms lists of fields and values into string snippets used in the SQL statement. The map block simply runs each value through the DBI's quote() method, quoting special characters appropriately. Don't quote the fields, as they'll be treated as literals and will be returned directly. (Be sure to check the DBD module for your chosen database for other notes regarding quote().) The only tricky construct is @formdata{@fields}. This odd fellow is known as a hash slice. Just as you can access a single value with a scalar ($formdata{$key}), you can access a list of values with a list of keys. Not only does this reduce the code that builds $values, using the same list in @fields ensures that the field names and the values appear in the same order. Placeholders A relational database must parse each new statement, preparing the query. (This occurs when a program calls the prepare() method). High-end systems often run a query analyzer to choose the most efficient path. Because many queries are repeated, some databases cache prepared queries. DBI can take advantage of this with placeholders (also known as 'bind values'). This is especially handy when inserting multiple rows. Instead of interpolating each new row into a unique statement and forcing the database to prepare a new statement each time, adding placeholders to an INSERT statement allows us to prepare the statement once, looping around the execute() method. my $fields = join(', ', @fields); my $places = join(', ', ('?') x @fields); $sql = "INSERT into $table ($fields) values ($places)"; $sth = $dbh->prepare($sql); $sth->execute(@formdata{@fields}); $sth->finish(); Given @fields containing 'name', 'phone', and 'email', the generated statement will be: INSERT into users (name, phone, email) values (?, ?, ?) Each time we call execute() on the statement handle, we need to pass the appropriate values in the correct order. Again, a hash slice comes in handy. Note that DBI automatically quotes values with this technique. This example only inserts one row, but it could easily be adapted to loop over a data source, repeatedly calling execute(). While it takes slightly more code than interpolating values into a statement and calling do(), the code is much more robust. Additionally, preparing the statement only once confers a substantial performance benefit. Best of all, it's not limited to INSERT statements. Consult the DBI documentation for more details. Binding Columns In a similar vein, the DBI also supports a supremely useful feature called 'binding columns.' Instead of returning a list of row elements, the DBI stores the values in bound scalars. This is very fast, as it avoids copying returned values, and can simplify code greatly. From the programmer's side, it resembles placeholders, but it is a function of the DBI, not the underlying database. Binding columns is best illustrated by an example. Here, we loop through all rows of the user table, displaying names and e-mail addresses: my $sql = "SELECT name, email FROM users"; my $sth = $dbh->prepare($sql); $sth->execute(); my ($name, $email); $sth->bind_columns(\$name, \$email); while ($sth->fetch()) { print "$name <$email>\n"; } $sth->finish(); With each call to fetch(), $name and $email will be updated with the appropriate values for the current row. This code does have the flaw of depending on field ordering hardcoded in the SQL statement. Instead of giving up on this flexibility and speed, we'll use the list-based approach with a hash slice: my $table = 'users'; my @fields = qw( name email ); my %results; my $fields = join(', ', @fields); my $sth = $dbh->prepare("SELECT $fields FROM $table"); $sth->execute(); @results{@fields} = (); $sth->bind_columns(map { \$results{$_} } @fields); while ($sth->fetch()) { print "$results{name} <$results{email}>\n"; } $sth->finish(); It only takes two lines of magic to bind hash values to the result set. After declaring the hash, we slice %results with @fields to initialize the keys we'll use. Their initial value (undef) doesn't matter, as it is only necessary that they exist. The map block in the bind_columns() call creates a reference to the hash value associated with each key in @fields. (This is the only required step of the example, but the value initialization in the previous line makes it more clear.) If we only display names and addresses, this is no improvement over binding simple lexicals. The real power comes with more complicated tasks. This technique may be used in a function: sub bind_hash { my ($table, @fields) = @_; my $sql = 'SELECT ' . join(', ', @fields) . " FROM $table"; my $sth = $dbh->prepare($sql); $sth->execute(); my %results; @results{@fields} = (); $sth->bind_columns(map { \$results{$_} } @fields); return (\%results, sub { $sth->fetch() }); } Calling code could resemble: my ($res, $fetch) = bind_hash('users', qw( name email )); while ($fetch->()) { print "$res->{name} >$res->{email}>\n"; } Other options include passing in references to populate or returning an object that has a fetch() method of its own. Modules Built on DBI The decision to use one module over another depends on many factors. For certain classes of applications, the nuts and bolts of the underlying database structure is less important than ease of use or rapid development. Some coders may prefer a higher level of abstraction to hide tedious details for simple requirements. The drawbacks are lessened flexibility and slower access. It is up to the programmer to analyze each situation, choosing the appropriate approach. Perl itself encourages this. As mentioned above, DBI does not enforce any behavior of SQL statement generation or data retrieval. When the techniques presented here are too onerous and using a module such as Tangram or DBIx::Recordset makes the job easier and more enjoyable, do not be afraid to use them. Conversely, a bit of planning ahead and abstraction can provide the flexibility needed for many other applications. There is no single best solution, but Perl and the CPAN provide many workable options, including the DBI.
http://www.perl.com/pub/2001/03/dbiokay.html
CC-MAIN-2013-20
refinedweb
1,886
54.52
This is the second part of An Introduction to GameplayKit. If you haven't yet gone through the first part, then I recommend reading that tutorial first before continuing with this one. Introduction In this tutorial, I am going to teach you about two more features of the GameplayKit framework you can take advantage of: - agents, goals, and behaviors - pathfinding By utilizing agents, goals, and behaviors, we are going to build in some basic artificial intelligence (AI) into the game that we started in the first part of this series. The AI will enable our red and yellow enemy dots to target and move towards our blue player dot. We are also going to implement pathfinding to extend on this AI to navigate around obstacles. For this tutorial, you can use your copy of the completed project from the first part of this series or download a fresh copy of the source code from GitHub. 1. Agents, Goals, and Behaviors In GameplayKit, agents, goals and behaviors are used in combination with each other to define how different objects move in relation to each other throughout your scene. For a single object (or SKShapeNode in our game), you begin by creating an agent, represented by the GKAgent class. However, for 2D games, like ours, we need to use the concrete GKAgent2D class. The GKAgent class is a subclass of GKComponent. This means that your game needs to be using an entity- and component-based structure as I showed you in the first tutorial of this series. Agents represent an object's position, size, and velocity. You then add a behavior, represented by the GKBehaviour class, to this agent. Finally, you create a set of goals, represented by the GKGoal class, and add them to the behavior object. Goals can be used to create many different gameplay elements, for example: - moving towards an agent - moving away from an agent - grouping close together with other agents - wandering around a specific position Your behavior object monitors and calculates all of the goals that you add to it and then relays this data back to the agent. Let's see how this works in practice. Open your Xcode project and navigate to PlayerNode.swift. We first need to make sure the PlayerNode class conforms to the GKAgentDelegate protocol. class PlayerNode: SKShapeNode, GKAgentDelegate { ... Next, add the following code block to the PlayerNode class.)) } } We start by adding a property to the PlayerNode class so that we always have a reference to the current player's agent object. Next, we implement the two methods of the GKAgentDelegate protocol. By implementing these methods, we ensure that the player dot displayed on screen will always mirror the changes that GameplayKit makes. The agentWillUpdate(_:) method is called just before GameplayKit looks through the behavior and goals of that agent to determine where it should move. Likewise, the agentDidUpdate(_:) method is called straight after GameplayKit has completed this process. Our implementation of these two methods ensures that the node we see on screen reflects the changes GameplayKit makes and that GameplayKit uses the last position of the node when performing its calculations. Next, open ContactNode.swift and replace the file's contents with the following implementation: import UIKit import SpriteKit import GameplayKit class ContactNode: SKShapeNode, GKAgentDelegate {)) } } } By implementing the GKAgentDelegate protocol in the ContactNode class, we allow for all of the other dots in our game to be up to date with GameplayKit as well as our player dot. It's now time to set up the behaviors and goals. To make this work, we need to take care of three things: - Add the player node's agent to its entity and set its delegate. - Configure agents, behaviors, and goals for all of our enemy dots. - Update all of these agents at the correct time. Firstly, open GameScene.swift and, at the end of the didMoveToView(_:) method, add the following two lines of code: playerNode.entity.addComponent(playerNode.agent) playerNode.agent.delegate = playerNode With these two lines of code, we add the agent as a component and set the agent's delegate to be the node itself. Next, replace the implementation of the initialSpawn method with the following implementation: func initialSpawn() { agent.mass = 0.01 agent.maxSpeed = 50 agent.maxAcceleration = 1000 } node!.position = point node!.strokeColor = UIColor.clearColor() node!.physicsBody!.contactTestBitMask = 1 self.addChild(node!) } } The most important code that we've added is located in the if statement that follows the switch statement. Let's go through this code line by line: - We first add the agent to the entity as a component and configure its delegate. - Next, we assign the agent's position and add the agent to a stored array, agents. We'll add this property to the GameSceneclass in a moment. - We then create a GKBehaviorobject with a single GKGoalto target the current player's agent. The weightparameter in this initializer is used to determine which goals should take precedence over others. For example, imagine that you have a goal to target a particular agent and a goal to move away from another agent, but you want the targeting goal to take preference. In this case, you could give the targeting goal a weight of 1and the moving away goal a weight of 0.5. This behavior is then assigned to the enemy node's agent. - Lastly, we configure the mass, maxSpeed, and maxAccelerationproperties of the agent. These affect how fast the objects can move and turn. Feel free to play around with these values and see how it affects the movement of the enemy dots. Next, add the following two properties to the GameScene class: var agents: [GKAgent2D] = [] var lastUpdateTime: CFTimeInterval = 0.0 The agents array will be used to keep a reference to the enemy agents in the scene. The lastUpdateTime property will be used to calculate the time that has passed since the scene was last updated. Finally, replace the implementation of the update(_:) method of the GameScene class with the following implementation: override func update(currentTime: CFTimeInterval) { /* Called before each frame is rendered */ self.camera?.position = playerNode.position if self.lastUpdateTime == 0 { lastUpdateTime = currentTime } let delta = currentTime - lastUpdateTime lastUpdateTime = currentTime playerNode.agent.updateWithDeltaTime(delta) for agent in agents { agent.updateWithDeltaTime(delta) } } In the update(_:) method, we calculate the time that has passed since the last scene update and then update the agents with that value. Build and run your app, and begin moving around the scene. You will see that the enemy dots will slowly begin moving towards you. As you can see, while the enemy dots do target the current player, they do not navigate around the white barriers, instead they try to move through them. Let's make the enemies a bit smarter with pathfinding. 2. Pathfinding With the GameplayKit framework, you can add complex pathfinding to your game by combining physics bodies with GameplayKit classes and methods. For our game, we are going to set it up so that the enemy dots will target the player dot and at the same time navigate around obstacles. Pathfinding in GameplayKit begins with creating a graph of your scene. This graph is a collection of individual locations, also referred to as nodes, and connections between these locations. These connections define how a particular object can move from one location to another. A graph can model the available paths in your scene in one of three ways: - A continuous space containing obstacles: This graph model allows for smooth paths around obstacles from one location to another. For this model, the GKObstacleGraphclass is used for the graph, the GKPolygonObstacleclass for obstacles, and the GKGraphNode2Dclass for nodes (locations). - A simple 2D grid: In this case, valid locations can only be those with integer coordinates. This graph model is useful when your scene has a distinct grid layout and you do not need smooth paths. When using this model, objects can only move horizontally or vertically in a single direction at any one time. For this model, the GKGridGraphclass is used for the graph and the GKGridGraphNodeclass for nodes. - A collection of locations and the connections between them: This is the most generic graph model and is recommended for cases where objects move between distinct spaces, but their specific location within that space is not essential to the gameplay. For this model, the GKGraphclass is used for the graph and the GKGraphNodeclass for nodes. Because we want the player dot in our game to navigate around the white barriers, we are going to use the GKObstacleGraph class to create a graph of our scene. To begin, replace the spawnPoints property in the GameScene class with the following: let spawnPoints = [ CGPoint(x: 245, y: 3900), CGPoint(x: 700, y: 3500), CGPoint(x: 1250, y: 1500), CGPoint(x: 1200, y: 1950), CGPoint(x: 1200, y: 2450), CGPoint(x: 1200, y: 2950), CGPoint(x: 1200, y: 3400), CGPoint(x: 2550, y: 2350), CGPoint(x: 2500, y: 3100), CGPoint(x: 3000, y: 2400), CGPoint(x: 2048, y: 2400), CGPoint(x: 2200, y: 2200) ] var graph: GKObstacleGraph! The spawnPoints array contains some altered spawn locations for the purposes of this tutorial. This is because currently GameplayKit can only calculate paths between objects that are relatively close to each other. Due to the large default distance between dots in this game, a couple of new spawn points must be added to illustrate pathfinding. Note that we also declare a graph property of type GKObstacleGraph to keep a reference to the graph we will create. Next, add the following two lines of code at the start of the didMoveToView(_:) method: let obstacles = SKNode.obstaclesFromNodePhysicsBodies(self.children) graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 0.0) In the first line, we create an array of obstacles from the physics bodies in the scene. We then create the graph object using these obstacles. The bufferRadius parameter in this initializer can be used to force objects to not come within a certain distance of these obstacles. These lines need to be added at the start of the didMoveToView(_:) method, because the graph we create is needed by the time the initialSpawn method is called. Finally, replace the initialSpawn method with the following implementation: func initialSpawn() { let endNode = GKGraphNode2D(point: float2(x: 2048.0, y: 2048.0)) self.graph.connectNodeUsingObstacles(endNode)*/ /*** BEGIN PATHFINDING ***/]) /*** END PATHFINDING ***/ agent.mass = 0.01 agent.maxSpeed = 50 agent.maxAcceleration = 1000 } node!.position = point node!.strokeColor = UIColor.clearColor() node!.physicsBody!.contactTestBitMask = 1 self.addChild(node!) } self.graph.removeNodes([endNode]) } We begin the method by creating a GKGraphNode2D object with the default player spawn coordinates. Next, we connect this node to the graph so that it can be used when finding paths. Most of the initialSpawn method remains unchanged. I have added some comments to show you where the pathfinding portion of the code is located in the first if statement. Let's go through this code step by step: - We create another GKGraphNode2Dinstance and connect this to the graph. - We create a series of nodes which make up a path by calling the findPathFromNode(_:toNode:)method on our graph. - If a series of path nodes has been created successfully, we then create a path from them. The radiusparameter works similar to the bufferRadiusparameter from before and defines how much an object can move away from the created path. - We create two GKGoalobjects, one for following the path and another for staying on the path. The maxPredictionTimeparameter allows for the goal to calculate as best it can ahead of time whether anything is going to interrupt the object from following/staying on that particular path. - Lastly, we create a new behavior with these two goals and assign this to the agent. You will also notice that we remove the nodes we create from the graph once we are finished with them. This is a good practice to follow as it ensures that the nodes you have created do not interfere with any other pathfinding calculations later on. Build and run your app one last time, and you will see two dots spawn very close to you and begin moving towards you. You may have to run the game multiple times if they both spawn as green dots. Important! In this tutorial, we used GameplayKit's pathfinding feature to enable enemy dots to target the player dot around obstacles. Note that this was just for a practical example of pathfinding. For an actual production game, it would be best to implement this functionality by combining the player targeting goal from earlier in this tutorial with an obstacle-avoiding goal created with the init(toAvoidObstacles:maxPredictionTime:) convenience method, which you can read more about in the GKGoal Class Reference. Conclusion In this tutorial, I showed you how you can utilize agents, goals, and behaviors in games that have an entity-component structure. While we only created three goals in this tutorial, there are many more available to you, which you can read more about in the GKGoal Class Reference. I also showed you how to implement some advanced pathfinding in your game by creating a graph, a set of obstacles, and goals to follow these paths. As you can see, there is a vast amount of functionality made available to you through the GameplayKit framework. In the third and final part of this series, I will teach you about GameplayKit's random value generators and how to create your own rule system to introduce some fuzzy logic into your game. As always, please be sure to leave your comments and feedback below. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/an-introduction-to-gameplaykit-part-2--cms-24528
CC-MAIN-2018-22
refinedweb
2,281
53.41
Difference between revisions of "BIRT/FAQ/Charts2.2" Revision as of 07:02, 12 September 2008 Contents - 1 Download - 2 Features - 3 Integration & Deployment - Terminology - 5 Chart Model API - 5.1 How to get started? - 5.2 How is the Chart Model working? - 5.3 Which Chart class should I start with? - 5.4 How to navigate through the model structure? - 5.5 How is the chart library structured? - 5.6 What is a device renderer? - 5.7 What is a display server? - 5.8 What is a series renderer? - 5.9 What are blocks? - 5.10 How are defaults assigned to the model? - 5.11 How do I create an equivalent of a ’Hello Chart’ first chart example with a plot containing axes? - 5.12 How do I set the dimension (i.e. 2D, 2D with depth, 3D) of the chart? - 5.13 How does transposing of the chart model work? - 5.14 How do I set a solid color for various attributes (e.g. foreground, background, line edge)? - 5.15 How do I set a gradient as the background fill of any region? - 5.16 How do I set a tiled image as the background of the plot area? - 5.17 How do I set a line or border edge style and thickness? - 5.18 How do I set the edge color to represent a darker shade of a solid color? - 5.19 How do I set text font face, size, styles, rotation, etc (for axis labels, title, data point values, etc)? - 5.20 How do I apply insets within the text bounds of any label? - 5.21 How do I draw a border with a custom line style around a label? - 5.22 How do I add a shadow behind a rendered label? - 5.23 How does ’auto axis scaling’ work? - 5.24 How are axes and series defined in a ChartWithAxes? - 5.25 How do I define the axis origin or the intersecting point between two perpendicular axes? - 5.26 How are series defined in a chart without axes? - 5.27 How do I create a combination chart? - 5.28 How do I show data point labels next to my graphic elements in each chart type? How do I format these data point labels? How do I position these labels for the various supported chart types? - 5.29 How do I define chart interaction via API for the generated output? - 5.30 How do I create a chart and perform a minimal refresh to show updated scrolling data values using animation? - 5.31 How do I set an axis scale as logarithmic? - 5.32 How do I set a series as stacked? Are all series types stackable? - 5.33 How do I set an axis scale as a percent scale? - 5.34 Can I combine stacked, logarithmic and/or percent properties on a single scale? - 5.35 How do I add marker lines and marker ranges to the plot? - 5.36 How do I define a chart structure and populates it with data? As an example, provide a code snippet for bar chart creation. - 6 Chart Engine API - 6.1 How do I render a chart on an SWT Canvas? - 6.2 How do I render a chart on a SWING JPanel? - 6.3 How do I render a chart into an image? - 6.4 How do I externalize text that needs to retrieve a runtime value from a locale specific message file? - 6.5 How do I intercept the chart generation/rendering process and alter the content of the chart using custom business logic? - 7 Chart Engine Extensions - 8 Troubleshooting -: - org.eclipse.birt.chart.render.BaseRenderer -: - Axis label attributes + title attributes - Axis attributes - Grid attributes (major + minor) - Data point label attributes -<font color="#0000FF">_LITERAL</font>);, and legend order are transposed... ChartWithAxes cwa = …; // BUILD THE CHART STRUCTURE; } Chart Engine API How do I render a chart on an SWT Canvas? A chart, once created (perhaps the bar chart from the previous Q), may be rendered on an SWT Canvas using the code snippet shown below. [Note that imports have been omitted for improved readability and to reduce clutter. Also, the createMyChart() method needs to provide a chart structure. Ensure that the chart runtime library is included in the build path.] public class ChartViewerSWT implements PaintListener { private IDeviceRenderer idr = null; private Chart cm = null; /** * The program entry point */) { DefaultLoggerImpl.instance().log(pex); } cm = createSimpleBarChart(); } /** * imports have been omitted for improved readability and to reduce clutter. Also, the createMyChart() method needs to provide a chart structure. Ensure that the chart runtime library is included in the build path.] public class ChartViewer extends JPanel { private boolean bNeedsGeneration = true; private GeneratedChartState gcs = null; private Chart cm = null; private IDeviceRenderer idr = null; /** * The program entry point */.
http://wiki.eclipse.org/index.php?title=BIRT/FAQ/Charts2.2&diff=118111&oldid=118077
CC-MAIN-2017-13
refinedweb
796
67.86
Java Files File handling is an important part of any application. Java has several methods for creating, reading, updating, and deleting files. Java File Handling The File class from the java.io package, allows us to work with files. To use the File class, create an object of the class, and specify the filename or directory name: Example import java.io.File; // Import the File class File myObj = new File("filename.txt"); // Specify the filename If you don't know what a package is, read our Java Packages Tutorial. The File class has many useful methods for creating and getting information about files. For example: You will learn how to create, write, read and delete files in the next chapters:
https://www.w3schools.com/java/java_files.asp
CC-MAIN-2022-27
refinedweb
120
67.04
A powerful programming language developed at Bell Labs by Brian Kerninghan and Dennis Ritchie in the early 1970s. C has wide application, including central office switches and voice processing systems. C operates under Unix, MS-DOS, Windows (all flavors) and other operating systems. Symbol for capacitance. See also Capacitance . Symbol for Celsius. A portion of the electromagnetic spectrum used heavily for both terrestrial microwave and non-terrestrial (i.e., satellite) microwave radio transmission. In terrestrial microwave applications, the C Band is in the range of 4-6 GHz. Satellites split the C Band into an uplink channel at approximately 6 GHz and a downlink channel at 4 GHz. Traditionally, C-Band satellites ranged in power from five to 11 watts per transponder , requiring receive antennas of five to 12 feet in diameter. The fleet of C-Band satellites is being gradually replaced by higher- powered (10-17 watt) satellites, which allow the size of the dish to be reduced to 90 inches in diameter. Traditional applications include voice communications, videoconferencing, and broadcast TV and radio. The large dish size and associated high cost of such dishes have contributed to their lack of popularity for TV reception by individuals; Ku band dishes largely have replaced them in support of DBS (Direct Broadcast Satellite) TV reception . Contrast with KU Band and KA Band. See also C-Band. Large (6- to 10- foot ) satellite dish antenna, usually motorized , used to intercept signals from C band satellites. Many big dish antennas today receive both C band and Ku band signals. A source of low potential used in the grid circuit of a vacuum tube to cause operation to take place at the desired point on the characteristic curve. Signaling and control bits used in certain T-carrier systems. Classic M13 multiplexers serve to multiplex 28 T-1s into a T-3 signal. M13 multiplexers accomplish this process by first combining four T-1s, at a signaling speed of 1.544 Mbps, into a T-2, at a signaling speed of 6.312 Mbps. At this stage, 136 Kbps of overhead is added, including justification, or bit stuffing . (Note that 1.544 Mbps x 4 = 6.176 + .136 = 6.312). At the next stage, seven T-2s, are combined into a T-3, at a total signaling speed of 44.736 Mbps, with 552 Kbps of bit stuffing. (Note: 6.312 x 7 = 44.184 + .552 = 44.736) The stuff bits added during this last stage are known as C Bits, as A & B Bits are used in older channel banks for signaling and control purposes at the T-1 level. C Bits are used for a variety of signaling and control purposes, including synchronization, and parity checking for error control. See also A & B Bits. T-3 framing structure that uses the traditional management overhead bits (X,P,M,F), but differs in that the control bits (C bits) are used for additional functions, e.g. FID, FEAC, FEBE,TDL and CP. See C Bit for a longer explanation. A type of line conditioning which controls attenuation, distortion and delay so that they lie within specified limits. See Conditioning. Also called a female amp connector or 25-pair female connector. The male version is called a P connector. Clamp used to fasten aerial wire to buildings . The third of three wires which make up trunk lines between central office switches. There are three wires ” positive, negative, and the "c lead." The purpose of the "c lead" is to control the grounding , holding and releasing of trunks. An SS7 term . Cross links used between mated pairs of Signal Transfer Points (STPs). They are primarily used for STP to STP communications or for network management messages. If there is congestion or a failure in the network, this is the link that the STPs use to communicate with each other. See A, B and D Links. This definition from James Harry Green, author of the excellent Dow Jones Handbook of Telecommunications. C Message Weighting is a factor in noise measurements to approximate the lesser annoying effect on the human ear of high and low-frequency noise compared to mid-range noise. The control plane within the ISDN protocol architecture; these protocols provide the transfer of information for the control of user connections and the allocation/deal- location of network resources; C Plane functions include call establishment, call termination, modifying service characteristics during the call (e.g. alternate speech/unrestricted 64 kbps data), and requesting supplementary services. C Wire is what the phone company calls the last piece of its wire that comes into your house or office. It is typically the piece of underground cable that comes in from its pedestal on the street to your network interface box on the side of your house or building. Computers and Communications. An NEC slogan which focused on the deployment of computer and telephony elements to create an integrated environment. Later on, NEC changed it to "Computing and Communicating" and expanded it into a "Fusion" strategy. See Fusion. A high-level programming language developed by Bjarne Stroustrup at AT&T's Bell laboratories. Combining all the advantages of the C language with those of object-oriented programming, C++ has been adopted as the standard house programming language by several major software vendors . Conventional Wavelength Band. The optical wavelength band from 1490 to 1570nm (nanometers). The C-Band has been used by conventional WDM (Wavelength Division Multiplexing) optical fiber systems since about 1993. Most WDM and DWDM (Dense WDM) systems, therefore, currently make use of this band. According to ITU-T standards, the C-Band will support 8 optical channels at a frequency spacing of 400 GHz, or wavelength spacing of 3.2nm; 16 channels at 200 GHz, or 1.6nm; 40 channels at 100 GHz, or 0.8nm; 80 channels at 50 GHz, or 0.4nm; and 160 at 25 GHz, or 0.25nm. See also C Band, L-Band and S-Band. A 30-MHz PCS carrier serving a basic trading area in the frequency block 1895-1910 MHz paired with 1975-1990 MHz. Character mode Data Terminal Equipment. A term to describe most PCs (personal computers) and printer-terminals that use asynchronous signals for data communications. A signaling link used to connect mated pairs of Signal Transfer Points (STPs). An Ericsson term. A type of telephone weighting network that allows for equal attenuation of all frequencies within the voice band in the same manner as it appears to be attenuated by the media. The C-message frequency-weighted noise on a channel with a holding tone that is removed at the measuring end through a notch (very narrow band) filter. Fremont, California, June 30, 1999 - The Enterprise Computer Telephony Forum (ECTF) announced the release of its latest Interoperability Agreement, C.100. This agreement specifies that certain packages of Java Telephony API (JTAPI) 1.3 constitute a portable, object-oriented, call control API that meet all ECTF interoperability requirements. C.100 (available for downloading from the ECTF web site at) serves as a formal reference to the core, call center, call control, and private data packages of JTAPI 1.3 ” see Sun Microsystem's JTAPI web site at. According to Sun, "JTAPI was designed to be simple to implement. Application developers must still be knowledgeable about telephony, but they will not need implementation-specific knowledge to successfully develop their applications. It can be implemented without existing telephony APIs, but it was also designed to allow layering above APIs such as TAPI, TSAPI and others." The specific JTAPI packages covered by C.100 are: JTAPI Core , Call Center, Call Control and Private Data. See CARE. See CERT. The standard Clear/Acquisition GPS (Global Positioning Code) ” a sequence of 1023 pseudo-random, binary biphase modulations on the GPS carrier at the chip rate of 1,023 MHz. Also known as the "civilian code." See GPS. Command Response. A Frame Relay term defining a 1-bit portion of the frame Address Field. Reserved for the use of FRADs, the C/R is applied to the transport of data involving polled protocols such as SNA. Polled protocols require a command/response process for signaling and control during the communications process.. See C3. Command, Control and Communications. The capabilities required by military commanders to exercise command and control of their forces. See C4. Command, Control, Communications, and Computers. Once it was C2, then C3, now C4. It basically refers to all the computers and telecommunications which the U.S. military needs to run itself. European equivalent of the North American System Signaling 7. C7 is not 100% compatible with North American System Signaling 7 and that's where gateway and signaling conversion switches come in. These switches convert the signaling between one and the other and do it in real time. See Signaling System 7. Call Appearance. Canada, as in a Web address, i.e.. Do not type Corel.com, thinking the CA is a mistake. It's not. See URL and Web address. Certificate Authority. See Certificate Authority. Compound Annual Average Growth Rate. A container that may enclose connection devices, terminations, apparatus, wiring, and equipment. In telecommunications, an enclosure used for terminating telecommunications cables, wiring and connection devices that has a hinged cover, usually flush mounted in the wall. May refer to a number of different types of wires or groups of wires capable of carrying voice or data transmissions. The most common interior telephone cable has been two pair. It's typically called quad wiring. It consists of four separate wires each covered with plastic insulation and with all four wires wrapped in an outer plastic covering. Quad wiring is falling into disrepute as it is increasingly obvious that it does not have the capacity to carry data at high speeds. The wire and cable business is immense. The assortment of stuff it produces each year is mind-boggling. In telecommunications, there is one rule: The quality of a circuit is only as good as its weakest link. Often that "weak link" is the quality of the wiring or cabling (we used the words interchangeably) that the user himself puts in. Please put in decent quality wiring. Don't skimp. See Category of Performance. An Act passed by Congress that deregulated most of the CATV industry including subscriber rates, required programming and fees to municipalities. The FCC was left with virtually no jurisdiction over cable television except among the following areas: (1) registration of each community system prior to the commencement of operations; (2) ensuring subscribers had access to an A-B switch to permit the receipt of off-theair broadcasts as well as cable programs; (3) carriage of television broadcast programs in full without alteration or deletion; (4) non-duplication of network programs; (5) fines or imprisonment for carrying obscene material; and (6) licensing for receive-only earth stations for satellite-delivered via pay cable. The FCC could impose fines on CATV systems violating the rules. This Act was superseded by the Cable Reregulation Act of 1992. A completed cable that typically is terminated with connectors and plugs. It is ready to install. Lots of cable arranged like bays in a harbor. Cable bend radius during installation infers that the cable is experiencing a tensile load. Free bend implies a small allowable bend radius since it is at a condition of no load. In the telephone network, multiple insulated copper pairs are bundled together into a cable called a cable binder. Each binder group contains 25 cable pairs, which are color -coded in order to make it easier to splice and terminate them properly. That way you won't get the tip and ring reversed , or connect your boss' phone to his administrative assistant's cable. See Denial of Service Attack. A local area network term. The overall length of cable allowed between the DTEs located farthest apart within a common collision domain. A magazine on cabling run by Steve Paulov and family in Mesquite (Dallas), TX. A great magazine. 214-270-0860. The number assigned to a television channel carried by a cable television system. Cable channels 2 through 13 are assigned to the same frequencies as broadcast channels 2 through 13; cable channels above Channel 13 are not. Cable channel assignments are specified in EIA Interim Standard EIA/IS-132, and are incorporated by reference into the FCC's cable television rules. Equipment often provided by a cable company in a subscriber's home that allows access to cable TV services. Service outage caused by cutting or damaging a cable. For a cabled single-mode optical fiber, Cable Cutoff Wavelength specifies a complex inter-relation of specified length, bend, and deployment conditions. It is the wavelength at which the fiber's second order mode is attenuated a measurable amount when compared to a multimode reference fiber or to a tightly bent single-mode fiber. Expressed in millimeters or inches. Affects space occupied, allowable bend radius, reel size, length on a reel and reel weight. Also affects selection of pulling grips. The interchange point between the regional fiber network and the cable plant. At the hub, the cable modem termination system (CMTS) converts data from a wide-area network (WAN) protocol, such as packet over SONET (POS), into digital signals that are modulated for transmission over HFC plant and then demodulated by the cable modem in the home or business. The CMTS provides a dedicated 27 Mbps downstream data channel that is shared by the 500 to 1,000 homes served by a fiber node, or group of nodes. Upstream bandwidth per node typically ranges from two Mbps to ten Mbps. Slang expression. In the West, lifelong cable installer who seeks no upward mobility. In the East, worker who deals with underground cable. An amplifier , usually in a gain of 2, suitable for driving the low resistance of a double terminated cable. Load resistance = 150 ohms for video, 100 ohms for instrumentation. The segment of cable that typically runs from the street or a telephone pole into the home. A type of connector. I cannot describe it. Go to this web site. There's a photo of one type of cable gland connector.. The point where a marine cable connects to terrestrial facilities. The cable headend connects the cable network with dishes that receive both satellite and traditional broadcast TV signals, and with cable modems linking to the Internet. This forum is a organization of the cable television, telephone, computer and switching network industries. The Forum was created to promote greater communication between vendors in the information technology industry, cable television companies and CableLabs, the research and development consortium serving most of the cable operators in North America. The Convergence Forum, based in Louisville, CO, was conceived and sponsored by CableLabs. Companies that have agreed to join the Forum include Apple Computer, Bay Networks, Cisco Systems, Compaq, Digital Equipment Corporation, Fore Systems and LANCity. See also CableLabs. Internet access via traditional cable television networks with upstream capability supported by either telco return (in nonupgraded one-way cable plants) or RF return (in upgraded two-way cable plants). See Sheath. Just what it sounds like. The buildings where undersea cables begin and end. The amount of radio frequency (RF) signal attenuated (lost) while it travels on a cable. There are many reasons for cable loss, including the cable's shape, its type, its size, its length and what it's made of. For coaxial cable, higher frequencies have greater loss than lower frequencies and follow a logarithmic function. Cable losses are usually calculated for the highest frequency carried on the cable. See Attenuation. Companies have oodles of telephone cables in and around their buildings. Cable management is the science and art of managing those cables. Typically, cable management covers keeping track of where the cables are (maps are useful), what type and quality the cable is, and what is attached to either end of it. Cable management for corporations is critical, since stringing, laying and snaking of new cable can be inordinately expensive. Cable mapping is the task of trying to track every single pair of wire or circuit from beginning to end. You will need to know where all cables reside, not just the circuits that are in use. Cable mapping is critical for any organization ” from company to university ” which has a lot of cables floating around. Installing more of it ” when there are plenty of spare pairs ” is stupid and expensive. Thus, the need for cable mapping. Also known as sheath mile. The measurement, in miles, of fiber optic cable that is deployed. Contrast with fiber mile and route mile. A cable modem is a device that will let you transmit and receive computer information over your cable TV line ” just as a phone modem will let you transmit and receive computer information over your dialup telephone line. A dial-up modem on your local analog phone line will provide online Internet access through the public telephone network at up 53,000 bits per second. A cable modem will give you Internet access through your cable TV network at more than one million bits per second, or about 20 times faster. When a cable modem unit is installed next to your computer, a splitter is placed between the coaxial cable coming in from CATV provider. One side of the splitter will go to your cable set-top box and the other to your cable modem. Your cable modem will connect to your computer through a standard 10Base-T Ethernet RJ-45 interface. Data is transmitted between the cable modem and computer at standard Ethernet local area network speeds of 10 million bits per second. You connect your cable modem directly to your computer using a standard Ethernet NIC (network Interface card) card ” the exact same card you use to connect to your office's local area network. Here's how it works. A cable television system typically has 60 or more TV channels. Most of them are used for receiving channels like ABC, CBS, NBC, CNN, ESPN and HBO. In a typical installation, the cable company chooses one TV channel and sets it aside for data transmission. That one channel can deliver 27 million bits per second downstream and and 10 mbps of upstream capacity. Typically the cable company organizes that that capacity is shared by a cluster of homes or apartments. Because data traffic is bursty , several hundred cable modem users can surf roughly at the same time. If speeds begin to fall off due to heavy traffic (we're all sharing one line), the cable operator will eventually allocate more channel space. A device called a cable modem termination system (CMTS) is located at the local cable operator's network hub and controls access to cable modems on the network. Each of the cable modems have their own network numbers ” just as each NIC card has its own network number. Your carrier's access software can turn off or on service to the your cable modem. Traffic is routed from the CMTS to the backbone of a cable Internet service provider (ISP), such as @Home or Road Runner, which, in turn , connects to the Internet. With newer cable modem systems, all traffic from the CMTS to the cable modem is encrypted to ensure privacy and security for users. Some cable modem ISPs, such as @Home and Road Runner use proxy and caching servers to store copies of popular Web sites closer to their subscribers. The upshot: A customer with a cable modem connection isn't forced to travel across the Internet. The problem: sometimes a cable modem subscriber can be fed an old web site. Several million cable modems have been installed in the U.S. and Canada. The hardware and software supporting those connections aren't always completely interoperable, or able to work together. If, for example, a cable company uses Motorola network equipment, at one stage is compatible, as dial-up modems are. CableLabs tests DOCSIS cable modems, stamping the products that pass the test "CableLabs Certified." My personal experience with cable modems has been excellent ” far better than my experience with DSL lines. I've learned two things about getting cable modems to work. First, when you first install your cable modem, it's best to replace the entire cable from and within your house to the pole coming in from the cable company. The fewer splitters in that line the better and the newer the cable the better. Second, your cable modem and attached router will occasionally lock up. Before you call Repair, unplug the cable modem and the router from electricity, count to ten, then plug them back in again. 99% of the time, your system will then work fine. See DOSCIS and the next definition also. CMTS. To deliver data services over a cable network, one six-MHz television channel (in the 50-750 MHz range) is typically allocated for downstream traffic to homes and another channel (in the 5-42 MHz band) is used to carry upstream signals. A head end CMTS communicates through these channels with cable modems located in subscriber homes to create a virtual LAN connection. See also Cable Modem, CableLabs Certified, and DOCSIS. A mechanism incorporated into a consumer television receiver which allows the user to select the channel assignment plan. In older receivers, this mechanism is usually a physical switch; in newer receivers, it is usually incorporated as an option in the setup menu. All cable/normal switches allow two choices: "standard" (some- times called "normal" or "off air") which tunes to the channel assignments used by broadcast stations for over-the-air transmission; and "cable" (sometimes called "CATV" or "STD") which tunes to cable channels. Many receivers also include a third option called HRC or Harmonically-Related Carriers. As defined by the Cable Act, a cable operator is a CATV (Community Antenna TeleVision) system operator that provides video programming using closed transmission paths and using public rights-of-way. Not included are open video systems, MMDS (Multichannel Multipoint Distribution Systems), or DBS (Direct Broadcast Satellite). See also Cable Act of 1984. A term which refers to the physical connection media (optical fiber, copper wiring, connectors, splicers, etc.) in a local area network. It is a term also used less frequently by the telephone company to mean all its outside cables ” those going from the central office to the subscribers' offices. Any tier of cable television programming except: - The basic tier (see Basic Cable Television Service). - Any programming offered on a per-channel or per-program basis. There are three basic types of protection in addition to standard plastic cladding: ElectroMagnetic (EM) Shielding: Prevents passive coupling. EM shielding can be a metallic conduit or metal wrapping-with appropriate grounding-on the wires. Penetration-Resistant Conduit: Used to secure the cable from cutting or tapping. Note, however, not all penetration-resistant conduits provide EM shielding. Pressurized Conduit: Detects intrusion by monitoring for pressure loss. Fiber optic cable is extremely difficult to tap and if tapped, the intrusion can be detected through signal attenuation. But since fiber optic cable can be cut, penetration-resistant conduit is recommended to protect the cable. Framework fastened to bays to support cabling between them. Cable Reregulation Bill 1515 passed Congress in October 1992, forcing the FCC to reregulate cable television and cable television rates (after the Cable Act of 1984 effectively de- regulated the cable TV industry). After the Act was passed, the FCC forced the industry to reduce its rates by 10% in 1993 and then again by 7% in 1994. A fancy way of saying that you have a communications system which can pump a lot of bits over some cables that weren't originally meant to carry that many bits. For example let's say you put digital subscriber line electronics on a standard phone line. That would be called a "cable relief system." Cable running vertically in a multi-story building to serve the upper floors. Conduit used to run cables through a building. Also, path taken by a cable or group of cables. A device which tests coaxial, twisted-pair, and fiber-optic cable. It measures the length of a cable segment, tests for opens and shorts, and can report on the distance to the problem so the problem can be found and fixed. Many scanners also indicate if a cable segment has RFI or EMI. A covering over the conductor assembly that may include one or more metallic members, strength members , or jackets. See Cable Shield. A metallic component of the cable sheath which prevents outside electrical interference and drains off current induced by lightning. Excessive levels of radio frequency (RF) energy that leak from cable television systems. Leak can cause interference to communications users, including safety service users such as aviation, police and fire departments. FCC rules specify the maximum RF leakage, and require that cable television systems be operated within certain guidelines. Tool used to strip the jackets off ALPETH and lead-jacketed telephone cable. Cable strippers include cable knives and snips. A professional or amateur stripper who appears on X-rated, or community access channels. Quality varies widely. Pay is often non-existent. Cable Telephony is transmitting anything other than TV pictures over a cable TV system. That "anything" might be anything from a data connection to the Internet to simple, standard, analog voice phone calls ” local, long distance and international. Typically transmitting anything other than TV over the standard coaxial cable CATV providers install at your house requires a cable modem. See Cable Modem.
http://flylib.com/books/en/4.118.1.30/1/
CC-MAIN-2013-20
refinedweb
4,265
55.95
This could've been a very scary topic.. Or maybe it is already??? Posted 05 March 2014 - 04:39 PM This could've been a very scary topic.. Or maybe it is already??? Check out Tabletop Simulator. No goats, but we have table flipping! Posted 07 March 2014 - 11:24 AM You mean if you prefer your privates on the top or bottom? Posted 09 March 2014 - 10:00 AM Bottom for classes (first divided in public, protected and private, each of one in "bottom style" ie ctors, functions, inner classes/datastructures, consts, variables... static things before non static things) Top for data structures (yes, sometimes I add some little functions to structures) I like placing the private stuff first in classes, since that is the one thing that makes sense. Putting the public interface (or anything public) first is nonsensical. A class is a struct where the members default to being private rather than public (in layman's wording, the standard's wording is slightly more elegant). Therefore, if you write class and immediately follow with public: you are being nonsensical. You're the madman who puts salt on his bananas and throws them away because he doesn't like salted banana. I try not to write nonsensical code, if I can help it. If one wants the public members first, one should write struct and declare the non-public members private. Of course nobody does that... so it's private stuff first In C++ private as default class access modifier was a style choice with a solid reason, and not salt on banana:. p14 Edited by Alessio1989, 09 March 2014 - 10:13 AM. Posted 09 March 2014 - 10:25 PM I do top because of the class being private by default. My method is to do private members on top, protected if I have any, public methods, followed by any operator overloading I'm doing and the constructors/destructors. Posted 09 March 2014 - 11:36 PM Learn all about my current projects and watch some of the game development videos that I've made. Posted 10 March 2014 - 08:12 AM Reading his most recent version of his The C++ Programming Language book makes it look like Bjarne does top too. His first code from 16.2.3 Access Control: class Date{ int d, m, y; public: void init(int dd, int mm, int yy); // initialize void add_year(int n); // add n years void add_month(int n); // add n months void add_day(int n); // add n days }; I don't know his reasoning behind it, but I do know mine. My reasoning is, since the default is private why change it to public just to turn around and make it private again. Edited by BHXSpecter, 10 March 2014 - 08:12 AM. Posted 10 March 2014 - 11:21 AM In C++ private as default class access modifier was a style choice with a solid reason, and not salt on banana: Well yes, exactly. that is what I'm saying. Hence the pittoresque example When you write class foo, you are implicitly requesting from the compiler: class foo { private: The private: is implied, since that is what class is about (in contrast to struct). If, on the other hand, you put the private members first, you are requesting from the compiler: class foo { private: public: It's legal, but absurd. Nonsensical, like putting salt on a banana and throwing it away. Or, like producing a 24k gold zippo and painting it black because you don't like gold (no kidding, one well-known designer actually did that in the 1980s...). You ask for private and immediately throw it away. The logically correct thing (leaving protected out of consideration for simplicity) would be to either: struct foo { // here be public stuff private: // blah }; or class foo { // here be private stuff public: // blah }; Of course, "struct" does not sound nearly as cool as "class", so nobody uses struct (other than to bundle together 2-3 PODs without methods, maybe). But actually it would be the correct thing to do, if that is the behavior you want. Edited by samoth, 10 March 2014 - 11:21 AM. Posted 10 March 2014 - 01:54 PM I don't know Samoth, I see your point but I would imagine most people don't use class or struct based on implicit accessibility (and go for the more traditional class = complex container with logic, struct = POD approach), so from that point of view it's not nonsensical at all, as the default accessibility doesn't factor into the decision either way, i.e. the programmer did not "ask for private/public", but for a struct or for a class. The fact that the language has some rules about what the default accessibility is is just irrelevant from that perspective. To me this appears like a style decision, depending on how you interpret the more subtle parts of the language. In your case (the pedantic view, if you will) you prefer to honor the default accessibility settings. In the other case, the programmer prefers not to rely on them (or even know they exist at all) and to explicitly define his own accessibility fields, rearranging them to his liking. The final code is the same, and each style is equally correct "philosophically" from the corresponding perspective. “If I understand the standard right it is legal and safe to do this but the resulting value could be anything.” Posted 10 March 2014 - 03:40 PM Well, it sure isn't wrong to do one or the other. Clearly, this is pretty much an opinion-based (or preference-based) decision. And yes, I certainly agree that the common understanding is that a class is rather the complex thing with built-in logic, and a struct is a few PODs tied together in one lump. Which is likely the reason (apart from the fact that "class" sounds more classy) why everybody writes class for classes (regardless of layout or accessibility). Still, for some reason, Mr. Stroustrup decided that struct is public by default, and class is private by default. I wouldn't know his personal reasons for that decision, but it's apparently something that made sense in some context. Maybe he wanted to make everything private by default, but finally decided to leave struct as it was because C++ was just C with classes then. Or, something else, who knows. Either way, as it stands, there's the way that I personally find "the correct way" due to these defaults, and there's the way that I personally find "nonsensical", since it does the exact opposite. That doesn't have to mean that my opinion is the only correct one, but it's how I feel about it. In the end, like all languages, programming languages, too, are for expressing an intent, and as long as the compiler (and the people around you) understands what you want, it's fine. Posted 10 March 2014 - 04:26 PM structs are public by default because that was most likely how they were in C and he just kept it in C++ since C is a subset of C++. He made classes private by default because they were designed with data hiding in mind.That is what I was told in my programming class years ago. Posted 10 March 2014 - 05:21 PM C is a subset of C++Hahahaha... careful, never say that aloud without looking over your shoulder first. You risk being burned at the stake by two angry mobs But yes, I agree that this was likely the reason for several design decisions (including this one) a long, long time ago when C++ was C with classes. Posted 10 March 2014 - 06:45. Posted 10 March 2014 - 06:47. It does say "with minor exceptions", though... which I guess is true, even if I don't personally consider them minor. “If I understand the standard right it is legal and safe to do this but the resulting value could be anything.” Posted 10 March 2014 - 07:16 PM For me: I prefer to not have ambiguity: it keeps me from having to wonder. As for why... I am not sure. I feel that types are first. For functions, I read documentation, but for types, I feel that gleaning the header file is more likely, since there are few parameters or other such intricacies, unlike with functions. So, using declarations and types at the top. I put member variables in the middle, before functions; chances are, you aren't supposed to pay attention to those, so having them in the center, while types and the interface are on the extremeties, makes sense to me, and because of the old C (and other languagues') paradigm of putting variables at the top of the block, before everything. I feel that private should go before public, because of the default member access of classes, and habit. I sort member functions by their purpose, as noted above. Free functions generally go last, due to reliance on types declared earlier in the translation unit. I also sort type and variable declarations. ...What? Edited by Ectara, 10 March 2014 - 08:24 PM. Posted 03 April 2014 - 10:52 AM I generally will place private before public. For me it just makes more sense to do it that way. Posted 04 April 2014 - 06:56 AM Hmmm... my C# style does not fit into either category neatly. I do: - Fields - Properties - Constructors - Methods - Explicit interface implementations (I don't sort any of them by access modifier) Same kindda - const - fields (that have no related properties) - properties and fields, with collections and generics first - delegates and event handlers - constructors - event methods - methods organised by #regions based on interface - and if destructors are needed it goes together with the disposer method (not property) in a #region at the bottom Although I doubt this is common practice, to keep statics (when necessary) separate I make use of partial keeping statics separate but in the same partial class. As for C++ it is straight up bottom Edited by Strix_Overflow, 04 April 2014 - 06:57 AM. Posted 08 April 2014 - 09:34 AM Since I prefer coding constructor forced (defined) objects , for security production reasons, I put public stuff first as I put constructor first. Though it bothers me becouse primarily I develop private members as a developer of the class, the public members are rather frequented by others, not by me who designs the class. A header file with private stuff being first also does me good (do not know why though). But I stick to bottom, since it is common coding culture and better for users of my code (though I doubt this too) Posted 17 April 2014 - 05:54 AM I think my code is generally ordered/grouped by relevance to surrounding code. This means that I pretty much have to use IDE tools to locate a method/field/whatever, but once I do I can usually find helpers and nested calls right next to it, helping me to avoid jumping around alot when reading code. Public/private has no impact on my ordering. Posted 18 April 2014 - 10:41 AM I tend to organize my classes not so much in terms of visibility, but in terms of complexity. Fields both public and private go at the top. Then properties or any relevant getters/setters (when I use them) go next, because they generally don't do much except set or retrieve the fields. Then the constructors, destructors, and other boiler-plate like copy-constructors, move constructors, etc. THEN methods, arranged by length and importance. Important methods go first, while long methods come last. Also, if I'm adding something new to a class, the new methods go at the bottom, new fields go at the bottom of the field list, etc. Within each of these categories, I tend to put public things before protected before private, but I'm not picky on that particular front. The reason for this is that often-times I'll spend most of my work in the longer and more complex methods, and they become easier to find when they're at the bottom - since I just have to scroll to the bottom to find the thing I'm working on. Of course, that isn't as big of a deal with IDE code navigation help, but it lets me get a sense by looking at the code file where complicated things are. Edited by Oberon_Command, 18 April 2014 - 10:46 AM. Posted 19 April 2014 - 08:05 AM public class MagicalGirl { private Top boobs; public Top grab(){ return boobs; } private Bottom datAss; public Bottom grep(String g){ int count = 0; while (g.contains("jizz")){ count++; } return datAss.stack().get(count); } public void setBottom(Bottom huge){ datAss = huge; } } I have to say, it's a hard decision. Edited by tom_mai78101, 19 April 2014 - 08:07 AM.
http://www.gamedev.net/topic/653871-top-or-bottom/page-2
CC-MAIN-2016-50
refinedweb
2,161
68.6
TimeInterval, Date, and DateInterval Nestled between Madrid’s Centro and Salamanca districts, just a short walk from the sprawling Buen Retiro Park, The Prado Museum boasts an extensive collection of works from Europe’s most celebrated painters. But if, during your visit, you begin to tire of portraiture commissioned by 17th-century Spanish monarchs, consider visiting the northernmost room of the 1st floor — Sala 002. There you’ll find this Baroque era painting by the French artist Simon Vouet. You’d be forgiven for wondering why this pair of young women, brandishing a hook and spear, stand menacingly over a cowering old man while a mob of cherubim tears at his back. It is, of course, allegorical: reading the adjacent placard, you’ll learn that this piece is entitled Time defeated by Hope and Beauty. The old man? That’s Time. See the hourglass in his hand and scythe at his feet? Take a moment, standing in front of this painting, to reflect on the enigmatic nature of time. Think now about how our limited understanding of time is reflected in — or perhaps exacerbated by — the naming of the Foundation date and time APIs. It’s about time we got them straight. Seconds are the fundamental unit of time. They’re also the only unit that has a fixed duration. Months vary in length (“30 days hath September”), as do years (“53 weeks hath 71 years every cycle of 400”). Certain years pick up an extra day (leap years are misnamed if you think about it), and days gain and lose an hour from daylight saving time (thanks, Benjamin Franklin). And that’s to say nothing of leap seconds, which are responsible for such oddities as the 61 second minute, the 3601 second hour, and, of course, the 1209601 second fortnight. Time (née NSTime) is a typealias for Double that represents duration as a number of seconds. You’ll see it as a parameter or return type for APIs that deal with a duration of time. Being a double-precision floating-point number, Time can represent submultiples in its fraction, (though for anything beyond millisecond precision, you’ll want to use something else). Date and TimeDate and Time It’s unfortunate that the Foundation type representing time is named Date. Colloquially, one typically distinguishes “dates” from “times” by saying that the former has to do with calendar days and the latter has more to do with the time of day. But Date is entirely orthogonal from calendars, and contrary to its name represents an absolute point in time. Another source of confusion for Date is that, despite representing an absolute point in time, it’s defined by a time interval since a reference date: public struct Date : Reference Convertible, Comparable, Equatable { public typealias ReferenceConvertible, Comparable, Equatable { public typealias Reference Type = NSDate fileprivate var _time: TimeType = NSDate fileprivate var _time: Time Interval … }Interval … } The reference date, in this case, is the first instant of January 1, 2001, Greenwich Mean Time (GMT). Date Intervals and Time IntervalsDate Intervals and Time Intervals Date is a recent addition to Foundation. Introduced in iOS 10 and macOS Sierra, this type represents a closed interval between two absolute points in time (again, in contrast to Time, which represents a duration in seconds). So what is this good for? Consider the following use cases: Getting the Date Interval of a Calendar UnitGetting the Date Interval of a Calendar Unit In order to know the time of day for a point in time — or what day it is in the first place — you need to consult a calendar. From there, you can determine the range of a particular calendar unit, like a day, month, or year. The Calendar method date makes this really easy to do: let calendar = Calendar.current let date = Date() let date Interval = calendar.dateInterval = calendar.date Interval(of: .month, for: date)Interval(of: .month, for: date) Because we’re invoking Calendar, we can be confident in the result that we get back. Look how it handles daylight saving transition without breaking a sweat: let dst Components = DateComponents = Date Components(year: 2018, month: 11, day: 4) calendar.dateComponents(year: 2018, month: 11, day: 4) calendar.date Interval(of: .day, for: calendar.date(from: dstInterval(of: .day, for: calendar.date(from: dst Components)!)?.duration // 90000 secondsComponents)!)?.duration // 90000 seconds It’s 2018. Don’t you think that it’s time you stopped hard-coding seconds? Calculating Intersections of Date IntervalsCalculating Intersections of Date Intervals For this example, let’s return to The Prado Museum and admire its extensive collection of paintings by Rubens — particularly this apparent depiction of the god of Swift programming. Rubens, like Vouet, painted in the Baroque tradition. The two were contemporaries, and we can determine the full extent of how they overlap in the history of art with the help of Date: import Foundation let calendar = Calendar.current // Simon Vouet // 9 January 1590 – 30 June 1649 let vouet = Date Interval(start: calendar.date(from: DateInterval(start: calendar.date(from: Date Components(year: 1590, month: 1, day: 9))!, end: calendar.date(from: DateComponents(year: 1590, month: 1, day: 9))!, end: calendar.date(from: Date Components(year: 1649, month: 6, day: 30))!) // Peter Paul Rubens // 28 June 1577 – 30 May 1640 let rubens = DateComponents(year: 1649, month: 6, day: 30))!) // Peter Paul Rubens // 28 June 1577 – 30 May 1640 let rubens = Date Interval(start: calendar.date(from: DateInterval(start: calendar.date(from: Date Components(year: 1577, month: 6, day: 28))!, end: calendar.date(from: DateComponents(year: 1577, month: 6, day: 28))!, end: calendar.date(from: Date Components(year: 1640, month: 5, day: 30))!) let overlap = rubens.intersection(with: vouet)! calendar.dateComponents(year: 1640, month: 5, day: 30))!) let overlap = rubens.intersection(with: vouet)! calendar.date Components([.year], from: overlap.start, to: overlap.end) // 50 yearsComponents([.year], from: overlap.start, to: overlap.end) // 50 years According to our calculations, there was a period of 50 years where both painters were living. We can even take things a step further and use Date to provide a nice representation of that time period: let formatter = Date IntervalInterval Formatter() formatter.timeFormatter() formatter.time Style = .none formatter.dateStyle = .none formatter.date Template = "%Y" formatter.string(from: overlap) // "1590 – 1640"Template = "%Y" formatter.string(from: overlap) // "1590 – 1640" Beautiful. You might as well print this code out, frame it, and hang it next to The Judgement of Paris. The fact is, we still don’t really know what time is (or if it even actually exists). But I’m hopeful that we, as developers, will find the beauty in Foundation’s Date APIs, and in time, learn how to overcome our lack of understanding. That does it for this week’s article. See you all next time.
https://nshipster.com/timeinterval-date-dateinterval/
CC-MAIN-2022-27
refinedweb
1,127
56.76
. A Visual Basic 6.0 DLL is Unmanaged, meaning it is not being generated by Common Language Runtime. But we can make this VB DLL to interoperate with C#, by converting the same into .NET compatible version. Following example shows how to create a simple server by using Visual Basic 6.0 and implementing it in a C# client program. Creating an ActiveX DLL using Visual Basic 6.0 Public Function Show() MsgBox("Message box created by using Visual Basic") End Function Environment Variables: tlbimp Csharpcorner.dll /out:Csharp.dll A new .NET compatible file called Csharp.dll will be placed in the appropriate directory. Type in the following C# client program and execute as usual: using Csharp;using System;. C# and ActiveX DLL Creating an Excel Spreadsheet Programmatically I face the problem to use ocx in .net web serviceyour post help me a lot.. Thank you. dear sir,i wnat you to tell me can i create activex using C# and when open the browser activex appear and ask for installneed help quicly plz hi i'm student of computer science,i have problem while running the code in mobile simulator. plz help me out how to run the code using simulator in C# thanks
http://www.c-sharpcorner.com/UploadFile/anandnarayanswamy/CSNActiveX11082005230943PM/CSNActiveX.aspx
crawl-003
refinedweb
206
59.19
Frequently Asked Questions Stuck on a particular problem? Check some of these common gotchas first in the FAQ. If you still can't find what you're looking for, you can refer to our support page. Material-UI is awesome. How can I support the project? There are many ways to support Material-UI: - Spread the word. Evangelize Material-UI by linking to material-ui.com on your website, every backlink matters. Follow us on Twitter, like and retweet the important news. Or just talk about us with your friends. - Give us feedback. Tell us what we're doing well or where we can improve. Please upvote (👍) the issues that you are the most interested in seeing solved. - Help new users. You can answer questions on StackOverflow. - Make changes happen. - Edit the documentation. Every page has an "EDIT THIS PAGE" link in the top right. - Report bugs or missing features by creating an issue. - Review and comment on existing pull requests and issues. - Help translate the documentation. - Improve our documentation, fix bugs, or add features by submitting a pull request. - Support us financially on OpenCollective. If you use Material-UI Material-UI home page. Why aren't my components rendering correctly in production builds? The #1 reason this likely happens is due to class name conflicts once your code is in a production bundle. For Material-UI to work, the className values of all components on a page must be generated by a single instance of the class name generator. To correct this issue, all components on the page need to be initialized such that there is only ever one class name generator among them. You could end up accidentally using two class name generators in a variety of scenarios: - You accidentally bundle two versions of Material-UI. You might have a dependency not correctly setting Material-UI as a peer dependency. - You are using StylesProviderfor a subset of your React tree. - You are using a bundler and it is splitting code in a way that causes multiple class name generator instances to be created. If you are using webpack with the SplitChunksPlugin, try configuring the runtimeChunksetting under optimizations. Overall, it's simple to recover from this problem by wrapping each Material-UI application with StylesProvider components at the top of their component trees and using a single class name generator shared among them. Why do the fixed positioned elements move when a modal is opened? Scrolling is blocked as soon as a modal is opened. This prevents interacting with the background when the modal should be the only interactive content. However, removing the scrollbar can make your fixed positioned elements move. In this situation, you can apply a global .mui-fixed class name to tell Material-UI to handle those elements. How can I disable the ripple effect globally? The ripple effect is exclusively coming from the BaseButton component. You can disable the ripple effect globally by providing the following in your theme: import { createTheme } from '@mui/material'; const theme = createTheme({ components: { // Name of the component ⚛️ MuiButtonBase: { defaultProps: { // The props to apply disableRipple: true, // No more ripple, on the whole application 💣! }, }, }, }); How can I disable transitions globally? Material-UI uses the same theme helper for creating all its transitions. Therefore you can disable all transitions by overriding the helper in your theme: import { createTheme } from '@mui/material'; const theme = createTheme({ transitions: { // So we have `transition: none;` everywhere create: () => 'none', }, }); It can be useful to disable transitions during visual testing or to improve performance on low-end devices. You can go one step further by disabling all transitions and animations effects: import { createTheme } from '@mui/material'; const theme = createTheme({ components: { // Name of the component ⚛️ MuiCssBaseline: { styleOverrides: { '*, *::before, *::after': { transition: 'none !important', animation: 'none !important', }, }, }, }, }); Notice that the usage of CssBaseline is required for the above approach to work. If you choose not to use it, you can still disable transitions and animations by including these CSS rules: *, *::before, *::after { transition: 'none !important'; animation: 'none !important'; } Do I have to use emotion to style my app? No, it's not required. But if you are using the default styled engine ( @mui/styled-engine) the emotion dependency comes built in, so carries no additional bundle size overhead. Perhaps, however, you're adding some Material-UI components to an app that already uses another styling solution, or are already familiar with a different API, and don't want to learn a new one? In that case, head over to the Style Library Interoperability section, where we show how simple it is to restyle Material-UI components with alternative style libraries. When should I use inline-style vs. CSS? As a rule of thumb, only use inline-styles for dynamic style properties. The CSS alternative provides more advantages, such as: - auto-prefixing - better debugging - media queries - keyframes How do I use react-router? We detail the integration with third-party routing libraries like react-router, Gatsby or Next.js in our guide. How can I access the DOM element? All Material-UI components that should render something in the DOM forward their ref to the underlying DOM component. This means that you can get DOM elements by reading the ref attached to Material-UI components: // or a ref setter function const ref = React.createRef(); // render <Button ref={ref} />; // usage const element = ref.current; If you're not sure if the Material-UI component in question forwards its ref you can check the API documentation under "Props" e.g. the Button API includes The ref is forwarded to the root element. indicating that you can access the DOM element with a ref. I have several instances of styles on the page If you are seeing a warning message in the console like the one below, you probably have several instances of @mui/styles initialized on the page. It looks like there are several instances of @mui/stylesinitialized in this application. This may cause theme propagation issues, broken class names, specificity issues, and make your application bigger without a good reason. Possible reasons There are several common reasons for this to happen: - You have another @mui/styleslibrary somewhere in your dependencies. - You have a monorepo structure for your project (e.g, lerna, yarn workspaces) and @mui/stylesmodule is a dependency in more than one package (this one is more or less the same as the previous one). - You have several applications that are using @mui/stylesrunning on the same page (e.g., several entry points in webpack are loaded on the same page). Duplicated module in node_modules If you think that the issue may be in the duplication of the @mui/styles module somewhere in your dependencies, there are several ways to check this. You can use npm ls @mui/styles, yarn list @mui/styles or find -L ./node_modules | grep /@mui/styles/package.json commands in your application folder. If none of these commands identified the duplication, try analyzing your bundle for multiple instances of @mui/styles.. If you are using webpack, you can change the way it will resolve the @mui/styles module. You can overwrite the default order in which webpack will look for your dependencies and make your application node_modules more prioritized than default node module resolution order: resolve: { + alias: { + "@mui/styles": path.resolve(appFolder, "node_modules", "@mui/styles"), + } } Usage with Lerna One possible fix to get @mui/styles to run in a Lerna monorepo across packages is to hoist shared dependencies to the root of your monorepo file. Try running the bootstrap option with the --hoist flag. lerna bootstrap --hoist Alternatively, you can remove @mui/styles from your package.json file and hoist it manually to your top-level package.json file. Example of a package.json file in a Lerna root folder { "name": "my-monorepo", "devDependencies": { "lerna": "latest" }, "dependencies": { "@mui/styles": "^4.0.0" }, "scripts": { "bootstrap": "lerna bootstrap", "clean": "lerna clean", "start": "lerna run start", "build": "lerna run build" } } Running multiple applications on one page If you have several applications running on one page, consider using one @mui/styles module for all of them. If you are using webpack, you can use CommonsChunkPlugin to create an explicit vendor chunk, that will contain the @mui/styles module: module.exports = { entry: { + vendor: ["@mui/styles"], app1: "./src/app.1.js", app2: "./src/app.2.js", }, plugins: [ + new webpack.optimize.CommonsChunkPlugin({ + name: "vendor", + minChunks: Infinity, + }), ] } My App doesn't render correctly on the server If it doesn't work, in 99% of cases it's a configuration issue. A missing property, a wrong call order, or a missing component – server-side rendering is strict about configuration. The best way to find out what's wrong is to compare your project to an already working setup. Check out the reference implementations, bit by bit. CSS works only on first load then is missing The CSS is only generated on the first load of the page. Then, the CSS is missing on the server for consecutive requests. Action to Take The styling solution relies on a cache, the sheets manager, to only inject the CSS once per component type (if you use two buttons, you only need the CSS of the button one time). You need to create a new sheets instance for each request. Example of fix: -// Create a sheets instance. -const sheets = new ServerStyleSheets(); function handleRender(req, res) { + // Create a sheets instance. + const sheets = new ServerStyleSheets(); //… // Render the component to a string. const html = ReactDOMServer.renderToString( React class name hydration mismatch Warning: Prop className did not match. There is a class name mismatch between the client and the server. It might work for the first request. Another symptom is that the styling changes between initial page load and the downloading of the client scripts. Action to Take The class names value relies on the concept of class name generator. The whole page needs to be rendered with a single generator. This generator needs to behave identically on the server and on the client. For instance: You need to provide a new class name generator for each request. But you shouldn't share a createGenerateClassName()between different requests: Example of fix: -// Create a new class name generator. -const generateClassName = createGenerateClassName(); function handleRender(req, res) { + // Create a new class name generator. + const generateClassName = createGenerateClassName(); //… // Render the component to a string. const html = ReactDOMServer.renderToString( You need to verify that your client and server are running the exactly the same version of Material-UI. It is possible that a mismatch of even minor versions can cause styling problems. To check version numbers, run npm list @mui/materialin the environment where you build your application and also in your deployment environment. You can also ensure the same version in different environments by specifying a specific MUI version in the dependencies of your package.json. example of fix (package.json): "dependencies": { ... - "@mui/material": "^4.0.0", + "@mui/material": "4.0.0", ... }, You need to make sure that the server and the client share the same process.env.NODE_ENVvalue. Why are the colors I am seeing different from what I see here? The documentation site is using a custom theme. Hence, the color palette is different from the default theme that Material-UI ships. Please refer to this page to learn about theme customization. Why does component X require a DOM node in a prop instead of a ref object? Components like the Portal or Popper require a DOM node in the container or anchorEl prop respectively. It seems convenient to simply pass a ref object in those props and let Material-UI access the current value. This works in a simple scenario: function App() { const container = React.useRef(null); return ( <div className="App"> <Portal container={container}> <span>portaled children</span> </Portal> <div ref={container} /> </div> ); } where Portal would only mount the children into the container when container.current is available. Here is a naive implementation of Portal: function Portal({ children, container }) { const [node, setNode] = React.useState(null); React.useEffect(() => { setNode(container.current); }, [container]); if (node === null) { return null; } return ReactDOM.createPortal(children, node); } With this simple heuristic Portal might re-render after it mounts because refs are up-to-date before any effects run. However, just because a ref is up-to-date doesn't mean it points to a defined instance. If the ref is attached to a ref forwarding component it is not clear when the DOM node will be available. In the example above, the Portal would run an effect once, but might not re-render because ref.current is still null. This is especially apparent for React.lazy components in Suspense. The above implementation could also not account for a change in the DOM node. This is why we require a prop with the actual DOM node so that React can take care of determining when the Portal should re-render: function App() { const [container, setContainer] = React.useState(null); const handleRef = React.useCallback( (instance) => setContainer(instance), [setContainer], ); return ( <div className="App"> <Portal container={container}> <span>Portaled</span> </Portal> <div ref={handleRef} /> </div> ); } What's the clsx dependency for? clsx is a tiny utility for constructing className strings conditionally, out of an object with keys being the class strings, and values being booleans. Instead of writing: // let disabled = false, selected = true; return ( <div className={`MuiButton-root ${disabled ? 'Mui-disabled' : ''} ${ selected ? 'Mui-selected' : '' }`} /> ); you can do: import clsx from 'clsx'; return ( <div className={clsx('MuiButton-root', { 'Mui-disabled': disabled, 'Mui-selected': selected, })} /> );
https://next.material-ui.com/getting-started/faq/
CC-MAIN-2021-39
refinedweb
2,233
57.37
6.3. settings — Persistent application settings¶ Settings are stored in a non-volatile memory (NVM). In other words, settings are perserved after a board reset or power cycle. Application settings are defined in an ini-file that is used to generate the c source code. A setting has a type, a size, an address and a default value, all defined in the ini-file. Supported types are: int8_tAn 8 bits signed integer. int16_tA 16 bits signed integer. int32_tA 32 bits signed integer. stringAn ASCII string. The size is the number of bytes of the value. For the standard integer types the size must be the value returned by sizeof(). For strings it is the length of the string, including null termination. The address for each setting is defined by the user, starting at address 0 and increasing from there. The build system variable SETTINGS_INI contains the path to the ini-file used by the build system. Set this variable to the path of yours application ini-file and run make settings-generate to generate four files; settings.h, settings.c, settings.little-endian.bin and settings.big-endian.bin. Also add this to the Makefile: SRC += settings.c and include settings.h in the source files that accesses the settings. 6.3.1. Debug file system commands¶ Four debug file system commands are available, all located in the directory oam/settings/. Example output from the shell: $ oam/settings/list NAME TYPE SIZE VALUE version int8_t 1 1 value_1 int16_t 2 24567 value_2 int32_t 4 -57 value_3 string 16 foobar $ oam/settings/read value_1 24567 $ oam/settings/write value_1 -5 $ oam/settings/read value_1 -5 $ oam/settings/reset $ oam/settings/list NAME TYPE SIZE VALUE version int8_t 1 1 value_1 int16_t 2 24567 value_2 int32_t 4 -57 value_3 string 16 foobar 6.3.2. Example¶ In this example the ini-file has one setting defined, foo. The type is int8_t, the address is 0x00, the size is 1 and the default value is -4. [types] foo = int8_t [addresses] foo = 0x00 [sizes] foo = 1 [values] foo = -4 The settings can be read and written with the functions settings_read() and settings_write(). Give the generated defines SETTING_FOO_ADDR and SETTING_FOO_SIZE as arguments to those functions. int my_read_write_foo() { int8_t foo; /* Read the foo setting. */ if (settings_read(&foo, SETTING_FOO_ADDR, SETTING_FOO_SIZE) != 0) { return (-1); } foo -= 1; /* Write the foo setting. */ if (settings_write(SETTING_FOO_ADDR, &foo, SETTING_FOO_SIZE) != 0) { return (-1); } return (0); } Source code: src/oam/settings.h, src/oam/settings.c Test code: tst/oam/settings/main.c Test coverage: src/oam/settings.c Enums Functions - int settings_module_init(void)¶ Initialize the settings module. This function must be called before calling any other function in this module. The module will only be initialized once even if this function is called multiple times. - Return - zero(0) or negative error code. - ssize_t settings_read(void *dst_p, size_t src, size_t size)¶ Read the value of given setting by address. - Return - Number of words read or negative error code. - Parameters dst_p- The read value. src- Setting source address. size- Number of words to read. - ssize_t settings_write(size_t dst, const void *src_p, size_t size)¶ Write given value to given setting by address. - Return - Number of words written or negative error code. - Parameters dst- Destination setting address. src_p- Value to write. size- Number of bytes to write. - ssize_t settings_read_by_name(const char *name_p, void *dst_p, size_t size)¶ Read the value of given setting by name. - Return - Number of words read or negative error code. - Parameters name_p- Setting name. dst_p- The read value. size- Size of the destination buffer. - ssize_t settings_write_by_name(const char *name_p, const void *src_p, size_t size)¶ Write given value to given setting by name. - Return - Number of words read or negative error code. - Parameters name_p- Setting name. src_p- Value to write. size- Number of bytes to write. - struct setting_t¶ Public Members FAR const char* setting_t::name_p - setting_type_t type¶
https://simba-os.readthedocs.io/en/10.2.0/library-reference/oam/settings.html
CC-MAIN-2019-30
refinedweb
642
60.92
- NAME - SYNOPSIS - DESCRIPTION - Plugins and rule ordering - Data your dispatch routines has access to - Things your dispatch routine might do - under $match => $rule - on $match => $rule - before $match => $rule - after $match => $rule - when {...} => $rule - run {...} - stream {...} - set $arg => $val - default $arg => $val - del $arg - show $component - dispatch $path - next_rule - last_rule - abort $code - redirect $uri - tangent $uri - plugin - app - import - rules STAGE - new - handle_request - _handle_stage NAME, EXTRA_RULES - _handle_rules RULESET - _handle_rule RULE - _do_under - _do_when - _do_before - _do_on - _do_after - already_run - _do_redirect PATH - _do_tangent PATH - _do_stream CODE - _do_abort - _do_show [PATH] - _do_dispatch [PATH] - _match CONDITION - _match_method METHOD - _match_https - _match_http - _compile_condition CONDITION - _compile_glob METAEXPRESSION - import_plugins NAME Jifty::Dispatcher - The Jifty Dispatcher SYNOPSIS ... DESCRIPTIONand abortdirectives, though "after" rules will still run. - onor abort. - after after rules let you clean up after rendering your page. Delete your cache files, write your transaction logs, whatever. At this point, it's too late to show, redirect, tangentor abortpage. Plugins and rule ordering); after app,. after app inserts the following RULES after the application's dispatcher rules, and is identical to, but hopefully clearer than, after plugin Jifty => RULES. RULES may either be a single before, on, under, or after rule to change the ordering of, or an array reference of rules to reorder. Data your dispatch routines has access to request The current Jifty::Request object. $Dispatcher The current dispatcher object. get $arg Return the argument value. Things your dispatch routine might do under $match => $rule - GET - - - PUT - - OPTIONS - - DELETE - - HEAD - - HTTPS - - HTTP - on $match => $rule Like under, except it has to match the whole path instead of just the prefix. Does not set current directory context for its rules. before $match => $rule Just like on, except it runs before actions are evaluated. after $match => $rule Just like on, except it runs after the page is rendered. when {...} => $rule Like under, except using an user-supplied test condition. You can stick any Perl you want inside the {...}; it's just an anonymous subroutine. run {...} Run a block of code unconditionally; all rules are allowed inside a run block, as well as user code. You can think of the {...} as an anonymous subroutine. stream {...} Run a block of code unconditionally, which should return a coderef that is a PSGI streamy response. set $arg => $val Adds an argument to what we're passing to our template, overriding any value the user sent or we've already set. default $arg => $val Adds an argument to what we're passing to our template, but only if it is not defined currently. del $arg Deletes an argument we were passing to our template. show $component Display the presentation component. If not specified, use the request path as the default page. dispatch $path Dispatch again using $path as the request path, preserving args. next_rule Break out from the current run block and go on the next rule. last_rule Break out from the current run block and stop running rules in this stage. abort $code Abort the request; this skips straight to the cleanup stage. If $code is specified, it's used as the HTTP status code. redirect $uri Redirect to another URI. tangent $uri Take a continuation here, and tangent to another URI. plugin app See "Plugins and rule ordering", above. import rules STAGE Returns an array of all the rules for the stage STAGE. Valid values for STAGE are - SETUP - - RUN - - CLEANUP - new Creates a new Jifty::Dispatcher object. You probably don't ever want to do this. (Jifty.pm does it for you) handle_request Actually do what your dispatcher does. For now, the right thing to do is to put the following two lines first: require MyApp::Dispatcher; MyApp::Dispatcher->handle_request; _handle_stage NAME, EXTRA_RULES Handles the all rules in the stage named NAME. Additionally, any other arguments passed after the stage NAME are added to the end of the rules for that stage. This is the unit which calling "last_rule" skips to the end of. _handle_rules RULESET When handed an arrayref or array of rules (RULESET), walks through the rules in order, executing as it goes. _handle_rule RULE. _do_under This method is called by the dispatcher internally. You shouldn't need to. _do_when This method is called by the dispatcher internally. You shouldn't need to. _do_before This method is called by the dispatcher internally. You shouldn't need to. _do_on This method is called by the dispatcher internally. You shouldn't need to. _do_after This method is called by the dispatcher internally. You shouldn't need to. already_run; # ... }; _do_redirect PATH This method is called by the dispatcher internally. You shouldn't need to. Redirect the user to the URL provided in the mandatory PATH argument. _do_tangent PATH This method is called by the dispatcher internally. You shouldn't need to. Take a tangent to the URL provided in the mandatory PATH argument. (See Jifty::Manual::Continuation for more about tangents.) _do_stream CODE The method is called by the dispatcher internally. You shouldn't need to. Take a coderef that returns a PSGI streamy response code. _do_abort This method is called by the dispatcher internally. You shouldn't need to. Don't display any page. just stop. _do_show [PATH] This method is called by the dispatcher internally. You shouldn't need to. Render a template. If the scalar argument "PATH" is given, render that component. Otherwise, just render whatever we were going to anyway. _do_dispatch [PATH]. _match CONDITIONicographical order) and succeeded, the value associated with the '' key is matched again as the condition. _match_method METHOD Takes an HTTP method. Returns true if the current request came in with that method. _match_https Returns true if the current request is under SSL. _match_http Returns true if the current request is not under SSL. _compile_condition CONDITION Takes a condition defined as a simple string and return it as a regex condition. _compile_glob METAEXPRESSION Private function. Turns a metaexpression containing *, ? and # into a capturing regex pattern. Also supports the non-capturing [] and {} notations. The rules are: A *between two /characters, or between a /and end of string, should match one or more non-slash characters: /foo/*/bar /foo/*/ /foo/* /* All other *can match zero or more non-slash characters: /*bar /foo*bar * Two stars ( **) can match zero or more characters, including slash: /**/bar /foo/** ** Consecutive ?marks are captured together: /foo???bar # One capture for ??? /foo??* # Two captures, one for ?? and one for * The #character captures one or more digit characters. Brackets such as [a-z]denote character classes; they are not captured. Braces such as {xxx,yyy}]denote alternations; they are not captured. import_plugins Imports rules from "plugins" in Jifty into the main dispatcher's space.
https://metacpan.org/pod/Jifty::Dispatcher
CC-MAIN-2015-40
refinedweb
1,097
68.06
Hi all, I am new to programming and algorithms in general, so please bear with me. I just started programming my first algo over at Quantopian (QP) this week when I realized that QC is probably the better place to be at the moment. I am now trying to simply migrate my basic algo over to QC, and was hoping to find some kind help on this forum. My algo is very simple for obvious reasons, but I still find it hard to figure out how to go about converting it to QC standards. The algo is based on two technical indicators (ADX and Parabolic SAR). The algo goes long when the ADX crosses a certain threshold and when the parabolic SAR is indicating an up-trend, It exists the long posistion and switches to a second asset at exit when similar conditions are met by the SAR and ADX indicator. The aglo from QP currently imports Talib for the mentioned indicators, and this is the difficult part to migrate. The code from QP currently looks like this: # --------------------------------------------------------------------- # ADX, PLUS_DI, MINUS_DI, SAR import talib # --------------------------------------------------------------------- stock, bond, period = symbol('dia'), symbol('tlt'), 14; bars = period*2 # --------------------------------------------------------------------- def initialize(context): schedule_function(trade , date_rules.every_day(), time_rules.market_open(minutes = 65)) set_benchmark(symbol('dia')) set_slippage(slippage.FixedSlippage(spread=0.02)) set_commission(commission.PerShare(cost=0.01, min_trade_cost=1)) def trade (context, data): if get_open_orders(): return H = data.history(stock,'high', bars, '1d').dropna() L = data.history(stock,'low', bars, '1d').dropna() C = data.history(stock,'close', bars, '1d').dropna() adx = talib.ADX(H, L, C, period)[-1] mdi = talib.MINUS_DI(H, L, C, period)[-1] pdi = talib.PLUS_DI(H, L, C, period)[-1] sar = talib.SAR(H, L, 0.02, 0.2)[-1] if (adx < 15 and mdi > pdi and sar > C[-1]): wt_stk, wt_bnd = 0, 1 elif (adx > 15 and mdi < pdi): wt_stk, wt_bnd = 3, 0 else: return if all(data.can_trade([stock, bond])): order_target_percent(stock, wt_stk) order_target_percent(bond, wt_bnd) def before_trading_start(context,data): x = context.account.leverage y = context.account.net_leverage record(leverage = x,net_leverage = y) # --------------------------------------------------------------------- I hope this community can help me get started with learning how to program on this platform. Thank you all in advance!
https://www.quantconnect.com/forum/discussion/6358/migrating-my-first-algo-from-quantopian-to-qc/p1
CC-MAIN-2021-31
refinedweb
365
50.73
Drop-in App Engine OAuth client handlers for many popular sites. Project description oauth-dropins Drop-in OAuth for Python App Engine! About This is a collection of drop-in Google App Engine Python request handlers for the initial OAuth client flows for many popular sites, including Blogger, Disqus, Dropbox, Facebook, Flickr, GitHub, Google, IndieAuth, Instagram, Medium, Tumblr, Twitter, and WordPress.com. - Available on PyPi. Install with pip install oauth-dropins. - A demo app is deployed at oauth-dropins.appspot.com. Requires either the App Engine Python SDK or the Google Cloud SDK (aka gcloud) with the gcloud-appengine-python and gcloud-appengine-python-extras components. All other dependencies are handled by pip and enumerated in requirements.txt. We recommend that you install with pip in a virtualenv. App Engine details here. If you clone the repo directly or want to contribute, see Development for setup instructions. This software is released into the public domain. See LICENSE for details. Quick start Here's a full example of using the Facebook drop-in. Make sure you have either the App Engine Python SDK version 1.9.15 or later (for vendor support) or the Google Cloud SDK (aka gcloud) installed and on your $PYTHONPATH, e.g. export PYTHONPATH=$PYTHONPATH:/usr/local/google_appengine. oauth-dropins's setup.pyfile needs it during installation. Install oauth-dropins into a virtualenv somewhere your App Engine project's directory, e.g. local/: source local/bin/activate pip install oauth-dropins Add this to the appengine_config.pyfile in your project's root directory (background): from google.appengine.ext import vendor vendor.add('local') from oauth_dropins.appengine_config import * Put your Facebook application's ID and secret in two plain text files in your app's root directory, facebook_app_idand facebook_app_secret. (If you use git, you'll probably also want to add them to your .gitignore.) Create a facebook_oauth.pyfile with these contents: from oauth_dropins import facebook import webapp2 application = webapp2.WSGIApplication([ ('/facebook/start_oauth', facebook.StartHandler.to('/facebook/oauth_callback')), ('/facebook/oauth_callback', facebook.CallbackHandler.to('/next'))] Add these lines to app.yaml: - url: /facebook/(start_oauth|oauth_callback) script: facebook_oauth.application secure: always Voila! Send your users to /facebook/start_oauth when you want them to connect their Facebook account to your app, and when they're done, they'll be redirected to /next?access_token=... in your app. All of the sites provide the same API. To use a different one, just import the site module you want and follow the same steps. The filenames for app keys and secrets also differ by site; appengine_config.py has the full list. Usage details There are three main parts to an OAuth drop-in: the initial redirect to the site itself, the redirect back to your app after the user approves or declines the request, and the datastore entity that stores the user's OAuth credentials and helps you use them. These are implemented by StartHandler, CallbackHandler, and auth entities, respectively. The request handlers are full WSGI applications and may be used in any Python web framework that supports WSGI (PEP 333). Internally, they're implemented with webapp2. StartHandler This HTTP request handler class redirects you to an OAuth-enabled site so it can ask the user to grant your app permission. It has two useful methods: to(callback_path, scopes=None)is a factory method that returns a request handler class you can use in a WSGI application. The argument should be the path mapped to CallbackHandlerin your application. This also usually needs to match the callback URL in your app's configuration on the destination site. If you want to add OAuth scopes beyond the default one(s) needed for login, you can pass them to the scopeskwarg as a string or sequence of strings, or include them in the scopesquery parameter in the POST request body. This is currently supported with Facebook, Google, Blogger, and Instagram. Some of the sites that use OAuth 1 support alternatives. For Twitter, StartHandler.totakes an additional access_typekwarg that may be reador write. It's passed through to Twitter x_auth_access_type. For Flickr, the start handler accepts a permsPOST query parameter that may be read, writeor delete; it's passed through to Flickr unchanged. (Flickr claims it's optional, but sometimes breaks if it's not provided.) redirect_url(state=None)returns the URL to redirect to at the destination site to initiate the OAuth flow. StartHandlerwill redirect here automatically if it's used in a WSGI application, but you can also instantiate it and call this manually if you want to control that redirect yourself: class MyHandler(webapp2.RequestHandler): def get(self): ... handler_cls = facebook.StartHandler.to('/facebook/oauth_callback') handler = handler_cls(self.request, self.response) self.redirect(handler.redirect_url()) However, this is not currently supported for Google and Blogger. Hopefully that will be fixed in the future. CallbackHandler This class handles the HTTP redirect back to your app after the user has granted or declined permission. It also has two useful methods: to(callback_path)is a factory method that returns a request handler class you can use in a WSGI application, similar to StartHandler. The callback path is the path in your app that users should be redirected to after the OAuth flow is complete. It will include a statequery parameter with the value provided by the StartHandler. It will also include an OAuth token in its query parameters, either access_tokenfor OAuth 2.0 or access_token_keyand access_token_secretfor OAuth 1.1. It will also include an auth_entityquery parameter with the string key of an auth entity that has more data (and functionality) for the authenticated user. If the user declined the OAuth authorization request, the only query parameter besides statewill be declined=true. finish(auth_entity, state=None)is run in the initial callback request after the OAuth response has been processed. auth_entityis the newly created auth entity for this connection, or Noneif the user declined the OAuth authorization request. By default, finishredirects to the path you specified in to(), but you can subclass CallbackHandlerand override it to run your own code inside the OAuth callback instead of redirecting: class MyCallbackHandler(facebook.CallbackHandler): def finish(self, auth_entity, state=None): self.response.write('Hi %s, thanks for connecting your %s account.' % (auth_entity.user_display_name(), auth_entity.site_name())) However, this is not currently supported for Google and Blogger. Hopefully that will be fixed in the future. Auth entities Each site defines an App Engine datastore ndb.Model class that stores each user's OAuth credentials and other useful information, like their name and profile URL. The class name is of the form SiteAuth, e.g. FacebookAuth. Here are the useful methods: site_name()returns the human-readable string name of the site, e.g. "Facebook". user_display_name()returns a human-readable string name for the user, e.g. "Ryan Barrett". This is usually their first name, full name, or username. access_token()returns the OAuth access token. For OAuth 2 sites, this is a single string. For OAuth 1.1 sites (currently just Twitter, Tumblr, and Flickr), this is a (string key, string secret)tuple. The following methods are optional. Auth entity classes usually implement at least one of them, but not all. api()returns a site-specific API object. This is usually a third party library dedicated to the site, e.g. Tweepy or python-instagram. See the site class's docstring for details. urlopen(data=None, timeout=None)wraps urllib2.urlopen()and adds the OAuth credentials to the request. Use this for making direct HTTP request to a site's REST API. Some sites may provide get()instead, which wraps requests.get(). http()returns an httplib2.Httpinstance that adds the OAuth credentials to requests. Troubleshooting/FAQ If you get this error: bash: ./bin/easy_install: ...bad interpreter: No such file or directory You've probably hit this open virtualenv bug (fixed but not merged): virtualenv doesn't support paths with spaces. The easy fix is to recreate the virtualenv in a path without spaces. If you can't do that, then after creating the virtualenv, but before activating it, edit the activate, easy_install and pip files in local/bin/ to escape any spaces in the path. For example, in activate, VIRTUAL_ENV=".../has space/local" becomes VIRTUAL_ENV=".../has\ space/local", and in pip and easy_install the first line changes from #!".../has space/local/bin/python" to #!".../has\ space/local/bin/python". This should get virtualenv to install in the right place. If you do this wrong at first, you'll have installs in /usr/local/lib/python2.7/site-packages that you need to delete, since they'll prevent virtualenv from installing into the local site-packages. If you're using Twitter, and import requestsor something similar fails with: ImportError: cannot import name certs or you see an exception like: File ".../site-packages/tweepy/auth.py", line 68, in _get_request_token raise TweepError(e) TweepError: must be _socket.socket, not socket ...you need to configure App Engine's SSL. Add this to your app.yaml: libraries: - name: ssl version: latest If you use dev_appserver, you'll also need to apply this workaround (more background). Annoying, I know. If you see errors importing or using tweepy, it may be because six.pyisn't installed. Try pip install sixmanually. tweepydoes include sixin its dependencies, so this shouldn't be necessary. Please let us know if it happens to you so we can debug! If you get an error like this: File "oauth_dropins/webutil/test/__init__.py", line 5, in <module> import dev_appserver ImportError: No module named dev_appserver ... InstallationError: Command python setup.py egg_info failed with error code 1 in /home/singpolyma/src/bridgy/src/oauth-dropins-master ...you either don't have /usr/local/google_appengine in your PYTHONPATH, or you have it as a relative directory. pip requires fully qualified directories. If you get an error like this: Running setup.py develop for gdata ... error: option --home not recognized ... InstallationError: Command /usr/bin/python -c "import setuptools, tokenize; __file__='/home/singpolyma/src/bridgy/src/gdata/setup.py'; exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" develop --no-deps --home=/tmp/tmprBISz_ failed with error code 1 in .../src/gdata ...you may be hitting Pip bug 1833. Are you passing -t to pip install? Use the virtualenv instead, it's your friend. If you really want -t, try removing the -e from the lines in requirements.freeze.txt that have it. Changelog 2.2 - 2019-11-01 - Add LinkedIn and Mastodon! - Add Python 3.7 support, and improve overall Python 3 compatibility. - Add new button_html()method to all StartHandlerclasses. Generates the same button HTML and styling as on oauth-dropins.appspot.com. - Blogger: rename module from blogger_v2to blogger. The blogger_v2module name is still available as an alias, implemented via symlink, but is now deprecated. - Dropbox: fix crash with unicode header value. - Google: fix crash when user object doesn't have namefield. - Facebook: upgrade Graph API version from 2.10 to 4.0. - Update a number of dependencies. - Switch from Python's built in jsonmodule to ujson(built into App Engine) to speed up JSON parsing and encoding. 2.0 - 2019-02-25 - Breaking change: switch from Google+ Sign-In (which shuts down in March) to Google Sign-In. Notably, this removes the googleplusmodule and adds a new google_signinmodule, renames the GooglePlusAuthclass to GoogleAuth, and removes its api()method. Otherwise, the implementation is mostly the same. - webutil.logs: return HTTP 400 if start_timeis before 2008-04-01 (App Engine's rough launch window). 1.14 - 2018-11-12 - Fix dev_appserver in Cloud SDK 219 / app-engine-python1.9.76 and onward. Background. - Upgrade google-api-python-clientfrom 1.6.3 to 1.7.4 to stop using the global HTTP Batch endpoint. - Other minor internal updates. 1.13 - 2018-08-08 - IndieAuth: support JSON code verification responses as well as form-encoded (snarfed/bridgy#809). 1.12 - 2018-03-24 - More Python 3 updates and bug fixes in webutil.util. 1.11 - 2018-03-08 - Add GitHub! - Pass stateto the initial OAuth endpoint directly, instead of encoding it into the redirect URL, so the redirect can match the Strict Mode whitelist. - Add Python 3 support to webutil.util! - Add humanize dependency for webutil.logs. 1.10 - 2017-12-10 Mostly just internal changes to webutil to support granary v1.10. 1.9 - 2017-10-24 Mostly just internal changes to webutil to support granary v1.9. - Flickr: - Handle punctuation in error messages. 1.8 - 2017-08-29 - Upgrade Graph API from v2.6 to v2.10. - Flickr: - Fix broken FlickrAuth.urlopen()method. - Medium: - Bug fix for Medium OAuth callback error handling. - IndieAuth: - Store authorization endpoint in state instead of rediscovering it from meparameter, which is going away. 1.7 - 2017-02-27 - Updates to bundled webutil library, notably WideUnicode class. 1.6 - 2016-11-21 - Add auto-generated docs with Sphinx. Published at oauth-dropins.readthedocs.io. - Fix Dropbox bug with fetching access token. 1.5 - 2016-08-25 1.4 - 2016-06-27 - Upgrade Facebook API from v2.2 to v2.6. 1.3 - 2016-04-07 1.2 - 2016-01-11 - Flickr: - Add upload method. - Improve error handling and logging. - Bug fixes and cleanup for constructing scope strings. - Add developer setup and troubleshooting docs. - Set up CircleCI. 1.1 - 2015-09-06 - Flickr: split out flickr_auth.py file. - Add a number of utility functions to webutil. 1.0 - 2015-06-27 - Initial PyPi release. Development You'll need the App Engine Python SDK version 1.9.15 or later (for vendor support) or the Google Cloud SDK (aka gcloud) with the gcloud-appengine-python and gcloud-appengine-python-extras components. Add them to your $PYTHONPATH, e.g. export PYTHONPATH=$PYTHONPATH:/usr/local/google_appengine, and then run: git submodule init git submodule update virtualenv local source local/bin/activate pip install -r requirements setup.py test Most dependencies are clean, but we've made patches to gdata-python-client below that we haven't (yet) tried to push upstream. If we ever switch its submodule repo for, make sure the patches are included! To deploy: python -m unittest discover && git push && gcloud -q app deploy oauth-dropins *.yaml. Release instructions Here's how to package, test, and ship a new release. (Note that this is largely duplicated in granary's readme too.) - Run the unit tests. source local/bin/activate.csh python2 -m unittest discover deactivate source local3/bin/activate.csh python3 -m unittest oauth_dropins.webutil.tests.test_util. python3 setup.py clean build sdist twine upload -r pypitest dist/oauth-dropins-X.Y.tar.gz - Install from test.pypi.org, both Python 2 and 3. cd /tmp virtualenv local source local/bin/activate.csh # mf2py 1.1.2 on test.pypi.org is broken :( pip install mf2py pip install -i --extra-index-url oauth-dropins deactivate python3 -m venv local3 source local3/bin/activate.csh pip3 install --upgrade pip # mf2py 1.1.2 on test.pypi.org is broken :( pip3 install mf2py pip3 install -i --extra-index-url oauth-dropins deactivate - Smoke test that the code trivially loads and runs, in both Python 2 and 3. source local/bin/activate.csh python2 # run test code below deactivate source local3/bin/activate.csh python3 # run test code below deactivateTest code to paste into the interpreter: from oauth_dropins.webutil import util util.__file__ util.UrlCanonicalizer()('') # should print '' exit() - Tag the release in git. In the tag message editor, delete the generated comments at bottom, leave the first line blank (to omit the release "title" in github), put ### Notable changeson the second line, then copy and paste this version's changelog contents below it. git tag -a vX.Y --cleanup=verbatim git push git push --tags - Click here to draft a new release on GitHub. Enter vX.Yin the Tag version box. Leave Release title empty. Copy ### Notable changesand the changelog contents into the description text box. - Upload to pypi.org! twine upload dist/oauth-dropins-X.Y.tar.gz Related work TODO - Google and Blogger need some love: - handle declines - allow overriding CallbackHandler.finish() - support StartHandler.redirect_url() - allow more than one CallbackHandlerper app - clean up app key/secret file handling. (standardize file names? put them in a subdir?) - implement CSRF protection for all sites - implement Blogger's v3 API Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/oauth-dropins/
CC-MAIN-2020-10
refinedweb
2,726
52.76
Hi all, just joined this forum and looking for a little advice with my code. My project was to create an array holding 10 integers and populate the array with 10 random numbers. Then ask the user to guess the number. Prompt the user with a while loop if their input is out of range. Determine if the users number is in the array, and display which index location the number is in. I got most of the code done but am having trouble displaying the index location. Any help is appreciated. Code : import javax.swing.*; public class Homework4 { public static void main(String[] args) { int[] numarray = new int [10]; char repeatcode = 'y'; for ( int i = 0 ; i < numarray.length ; i++ ) { while ( repeatcode == 'y' ) { numarray[i] = (int) (Math.random() * 100); int userinput = Integer.parseInt(JOptionPane.showInputDialog ( "Guess a number between 1 and 100" )); if ( userinput < 1 || userinput > 100 ) { JOptionPane.showMessageDialog(null, "Your number wasn't in the parameters" ); } else { if ( userinput != numarray[i] ) { JOptionPane.showMessageDialog(null, "THE NUMBER IS WRONG! Thanks for playing" ); } // ends if else { JOptionPane.showMessageDialog(null, "WINNER!!!!\n\nYou guessed the number! Thanks for playing " + "\nThe number is stored at position " + numarray.length[i] ); } // ends else } repeatcode = JOptionPane.showInputDialog ( "Do you want to play again? (y/n)" ).charAt(0); } // ends while loop } // ends for loop } // ends public static void main } // ends public class Homework4
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35696-need-help-displaying-location-array-index-printingthethread.html
CC-MAIN-2015-22
refinedweb
227
61.12
- NAME - SYNOPSIS - DESCRIPTION - METHODS - USAGE DETAILS - TODO - Perlmonks - AUTHOR - BUGS - SEE ALSO NAME CGI::Safe - Safe method of using CGI.pm. This is pretty much a two-line change for most CGI scripts. SYNOPSIS use CGI::Safe qw/ taint /; my $q = CGI::Safe->new; DESCRIPTION to initiate a DOS attack. To prevent this, we're regularly warned to include the following two lines at the top of our CGI scripts: $CGI::DISABLE_UPLOADS = 1; # Disable uploads $CGI::POST_MAX = 512 * 1024; # limit posts to 512K max As long as those are their before you instantiate a CGI object (or before you access param and related CGI functions with the function oriented interface), you have pretty safely plugged this problem. However, most CGI scripts don't have these lines of code. Some suggest changing these settings directly in CGI.pm. I dislike this for two reasons: If you upgrade CGI.pm, you might forget to make the change to the new version. You may break a lot of existing code (which may or may not be a good thing depending upon the security implications). Hence, the CGI::Safe module. It will establish the defaults for those variables and require virtually no code changes. Additionally, it will delete %ENV variables listed in perlsec as dangerous. The $ENV{ PATH } and $ENV{ SHELL } are explicitly set in the INIT method to ensure that they are not tainted. These may be overriden by passing named args to the CGI::Safe's constructor or by setting them manually. METHODS new my $cgi = CGI::Safe->new; my $cgi = CGI::Safe->new( %args ); Contructor for a new CGI::Safe object. See "USAGE DETAILS" for more information about which arguments are accepted and how they are used. set CGI::Safe->set( DISABLE_UPLOADS => 0, POST_MAX => 1_024 * 1_024 ); my $cgi = CGI::Safe->new; Class method which sets the value for DISABLE_UPLOADS and POST_MAX. Calling this method after the constructor is effectively a no-op. get_path my $path = $cgi->get_path; Returns the original $ENV{'PATH'} value. This value is tainted. get_shell my $path = $cgi->get_path; Returns the original $ENV{'SHELL'} value. This value is tainted. USAGE DETAILS Some people prefer the object oriented interface for CGI.pm and others prefer the function oriented interface. Naturally, the CGI::Safe module allows both. use CGI::Safe qw/ taint /; my $q = CGI::Safe->new( DISABLE_UPLOADS => 0 ); Or: use CGI::Safe qw/ :standard taint /; $CGI::DISABLE_UPLOADS = 0; Uploads and Maximum post size As mentioned earlier, most scripts that do not need uploading should have something like the following at the start of their code to disable uploads: $CGI::DISABLE_UPLOADS = 1; # Disable uploads $CGI::POST_MAX = 512 * 1024; # limit posts to 512K max The CGI::Safe sets these values in an BEGIN{} block. If necessary, the programmer can override these values two different ways. When using the function oriented interface, if needing file uploads and wanting to allow up to a 1 megabyte upload, they would set these values directly before using any of the CGI.pm CGI functions: use CGI::Safe qw/ :standard taint /; $CGI::DISABLE_UPLOADS = 0; $CGI::POST_MAX = 1_024 * 1_024; # limit posts to 1 meg max If using the OO interface, you can set these explicitly or pass them as parameters to the CGI::Safe constructor: use CGI::Safe qw/ taint /; my $q = CGI::Safe->new( DISABLE_UPLOADS => 0, POST_MAX => 1_024 * 1_024 ); CGI.pm objects from input files and other sources You can instantiate a new CGI.pm object from an input file, properly formatted query string passed directly to the object, or even a has with name value pairs representing the query string. To use this functionality with the CGI::Safe module, pass this extra information in the source key: use CGI::Safe qw/ taint /; my $q = CGI::Safe->new( source = $some_file_handle ); Alternatively: use CGI::Safe qw/ taint /; my $q = CGI::Safe->new( source => 'color=red&name=Ovid' ); CGI::Safe::set As of CGI::Safe::VERSION 1.1, this is a new method which allows the client a cleaner method of setting the $CGI::POST_MAX and $CGI::DISABLE_UPLOADS variables. As expected, you may use this with both the OO or function-oriented interface. When used with the OO interface, it should be treated as a class method and called before instantiation of the CGI object. use CGI::Safe qw/ taint /; CGI::Safe->set( DISABLE_UPLOADS => 0, POST_MAX => 1_024 * 1_024 ); my $q = CGI::Safe->new; This is equivalent to the following: use CGI::Safe qw/ taint /; my $q = CGI::Safe->new( DISABLE_UPLOADS => 0, POST_MAX => 1_024 * 1_024 ); When using the function oriented interface, the set method is imported into the client's namespace whenever :standard or :cgi is imported. The set must be called prior to using the cgi methods. use CGI::Safe qw/:standard taint /; set( POST_MAX => 512 * 1024 ); Since the set method is imported into your namespace, you should be aware of the possibility of namespace collisions. If you already have a subroutine named set, you should either rename the subroutine or consider using the OO interface to CGI::Safe. admin If you are running your own Web server and you find deleting the $ENV{PATH} and $ENV{SHELL} variables too restrictive, you can declare yourself to be the administrator and have those variables restored. Simply add admin to the import list: use CGI::Safe we/admin taint/; Those variables will be restored, but they will still be tainted and it is your responsibility to ensure that this is done properly. Don't use this feature unless you know exactly what you are doing. Period. CGI::get_shell and CGI::get_path These two methods/functions will return the original shell and path, respectively. These are, of course, tainted. If either :standard or :cgi is specified in the import list, these will be exported into the caller's namespace. These are provided in case you need them. Once again, don't use 'em if you're unsure of yourself. TODO You've probably noticed by now that all instances of CGI::Safe list taint in the import list. This is because the next major release of this module is intended to allow for much easier untainting of form data and cookies. Specifying taint in the import list, in that release, will tell CGI::Safe that nothing is to be untainted. As that is the default behavior, at the present time, I wanted you to get used to it so future releases wouldn't break your code. Perlmonks Many thanks to the wonderful Monks at Perlmonks.org for holding my hand while I learned Perl. There are far too many to name here. Two, however, deserve special thanks: Ben Tilly.. I thought I was a good programmer until I started reading his stuff. I've learned more about programming from him than almost any other source. Tye McQueen. [email protected]. Tye, in addition to being an excellent programmer, gave me good feedback about this module and future releases will be heavily incorporating some of his suggestions. Tye is also sometimes known as "Lord Throll, Konqueror of..."... oh, wait, he told me not to say that. Copyright (c) 2001 Curtis "Ovid" Poe. All rights reserved. This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself AUTHOR Curtis "Ovid" Poe <[email protected]> Address bug reports and comments to: [email protected]. When sending bug reports, please provide the version of CGI.pm, the version of CGI::Safe, the version of Perl, and the version of the operating system you are using. BUGS 2001/07/13 There are no known bugs at this time. However, I am somewhat concerned about the use of this module with the function oriented interface. CGI.pm uses objects internally, even when using the function oriented interface (which is part of the reason why the function oriented interface is not faster than the OO version).
https://metacpan.org/pod/CGI::Safe
CC-MAIN-2017-17
refinedweb
1,307
62.27
Today, innovation in .NET is to integrate all Microsoft development tools, libraries, languages, technologies, purposes, under the same framework, that is useful to the developer or the company. In this sense, there are several tools and controls available for performing applications with .NET. These tools make it easy for us to build applications by using their already established controls and components. One of these tools is DotVVM. In a previous article, we were able to learn how to design forms with the base controls that DotVVM offers: Web forms with DotVVM controls. This time we will learn in the same way how to design web forms with DotVVM (with C# and HTML), but using the premium controls that DotVVM offers, in this case, called as Business Pack. Design Pattern: Model, View, ViewModel - MVVM DotVVM Business Pack DotVVM Business Pack is nothing more than a private NuGet, in which, we can make use of the competent premiums already established for the construction of web applications in the business field. To install the Business Pack version, in addition to obtaining the corresponding license (there is a trial version), it is necessary to perform a configuration of a few minutes to be able to use these functionalities. The steps to add the private NuGet can be consulted at the following link:. Create a DotVVM project with Business Pack in Visual Studio 2019 To create a project with the Business Pack option in DotVVM from Visual Studio 2019, we'll start by creating the DotVVM project like any other: By specifying the name and continuing, the project initialization wizard will give us a number of options to add certain functionality to the project. In this case the functionality we are interested in is the DotVVM Business Pack located in the DotVVM Commercial Extensions section: DotVVM Form with Business Pack To exemplify the use of DotVVM Business Pack components, we have a small application like this: Considering that this website in DotVVM consists of a view and its corresponding model view, let's look at the most important parts as a whole of these elements. View @viewModel BPFormControls.ViewModels.DefaultViewModel, BPFormControls @masterPage Views/MasterPage.dotmaster <dot:Content <h1 align="center"> <img src="UserIcon.png" width="20%" height="20%" /> <br /> <b>{{value: Title}}</b> </h1> <div align="center"> <bp:Window <h1 align="center"><p><b>Pearson Form</b>.</p></h1> <div Validator. <b>Username:</b> <br /> <bp:TextBox </div> <p /> <div Validator. <b>EnrollmentDate:</b> <br /> <bp:DateTimePicker </div> <p /> <div Validator. <b>Gender:</b> <br /> <bp:RadioButton <bp:RadioButton </div> <p /> <b>About:</b> <br /> <bp:TextBox <p /> <b>Profile Picture:</b> <br /> <bp:FileUpload <p /> <bp:Button <p /> </bp:Window> <bp:Button </div> </dot:Content> Viewmodel public class DefaultViewModel : MasterPageViewModel { public string Title { get; set; } public PersonModel Person { get; set; } = new PersonModel { EnrollmentDate = DateTime.UtcNow.Date }; public bool IsWindowDisplayed { get; set; } public UploadData ProfilePicture { get; set; } = new UploadData(); public DefaultViewModel() { Title = "Person Form"; } public void Process() { this.IsWindowDisplayed = false; String script = "alert('" + "Welcome" + " " + Person.Username + " to Web App :) ')"; Context.ResourceManager.AddStartupScript(script); } public void ProcessFile() { // do what you have to do with the uploaded files String script = "alert('" + "ProcessFile() was called.')"; Context.ResourceManager.AddStartupScript(script); } } Model public class PersonModel { [Required] public string Username { get; set; } [Required] public DateTime EnrollmentDate { get; set; } [Required] public string Gender { get; set; } public string About { get; set; } } The first element we will analyze is the Window component, which represents a modal dialog window, as in HTML. This control allows us to customize directly from its attributes as the window will be displayed. If we were working with DotVVM's base controls, to achieve this functionality we would have to make use of some Javascript functionalities directly to set the window. In this example, the window title can be assigned. We can also customize certain properties, such as not allowing the window to close by pressing the Escape key or clicking outside the window box. In this example, the Boolean attribute IsWindowDisplayed, according to its value of true or false, will allow whether or not to display the set window. <bp:Window This is the definition of the IsWindowDisplayed attribute for the window display: public bool IsWindowDisplayed { get; set; } Learn more about the Window component here:. To display the window, a button is used. This button is also another of the Business Pack components. The premium version allows us to make certain customizations in terms of its styles, for example, enable/disable the button, assign an icon, among other functionalities:. The result is as follows: When loading the window, we see how the cursor is positioned in the text box intended for the Person's Username attribute, this property is called AutoFocus, and is part of the premium component TextBox. <bp:TextBox Among the features of this component, one of the most relevant is the Type property. This allows us to specify whether the component is of type text, password, or textarea. For example, in the case of the application, the About section of the Person through the MultiLine type looks like this: <bp:TextBox Learn more about the TextBox component here:. Continuing the analysis of this form, we will now review the DatePicker component, a premium control that allows us to work with dates and times through established designs. In this case, the component looks something like this: Business Pack components generally have a greater number of properties that allow us to customize controls. In this case, in the DateTimePicker, we can adjust for example the time format (AM/PM or 24-hour format), set icons, adjust the start day of the week, among others. To learn more about the DateTimePicker component, we can access the following address:. Finally, the last component we will analyze is FileUpload, another of the exclusive controls of the Business Pack tools that allows us to upload files from a form, to process them and fulfill some specific objective. For this case, the control can be useful for loading the profile picture of a particular user. To work with files, an image for example can be dragged to the upload section or through search by means of the operating system directories. <bp:FileUpload There are certain functionalities that can be considered in this control, for example, specifying whether one or more files can be loaded through the AllowMultipleFiles property, performing a particular action when the file has been uploaded through the UploadCompleted property, among others: What's next? With this article, we learned certain features of DotVVM premium components through its Business Pack. We've also seen how to create dynamic forms by implementing views and models of views with DotVVM-defined controls for working with web pages. The source code for this implementation is available in this repository: DotVVM Business Pack. Additional resources Want to know the steps to create a DotVVM app? To do this you can check this article: Steps to create an MVVM application (Model-View-Viewmodel) with DotVVM and ASP.NET Core. Want to take your first steps in developing web applications with ASP.NET Core and DotVVM? Learn more in this tutorial: DotVVM and ASP.NET Core: Implementing CRUD operations. Thank you! If you have any concerns or need help in something particular, it will be a pleasure to be able to help. Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotvvm/web-forms-with-dotvvm-business-pack-gl1
CC-MAIN-2022-05
refinedweb
1,217
50.67
MKNOD(2) BSD Programmer's Manual MKNOD(2) mknod - make a special file node #include <sys/stat.h> int mknod(const char *path, mode_t mode, dev_t dev);user privileges. Upon successful completion a value of 0 is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error. mknod() will fail and the file will be not created if: user. . [EINVAL] The process is running within an alternate root directory, as created by chroot(2). This value is only returned if the currently running kernel has the following option set: option HARD_CHROOT chmod(2), chroot(2), stat(2), umask(2) A mknod() function call appeared in Version 6 AT&T UN.
https://www.mirbsd.org/htman/i386/man2/mknod.htm
CC-MAIN-2014-10
refinedweb
116
56.55
#include "llvm/IR/ModuleSummaryIndex.h" Definition at line 126 of file ModuleSummaryIndex.h. Definition at line 127 of file ModuleSummaryIndex.h. The GlobalValue corresponding to this summary. This is only used in per-module summaries and when the IR is available. E.g. when module analysis is being run, or when parsing both the IR and the summary from assembly. Definition at line 138 of file ModuleSummaryIndex.h. Referenced by NameOrGV(). Summary string representation. This StringRef points to BC module string table and is valid until module data is stored in memory. This is guaranteed to happen until runThinLTOBackend function is called, so it is safe to use this field during thin link. This field is only valid if summary index was loaded from BC file. Definition at line 145 of file ModuleSummaryIndex.h. Referenced by NameOrGV().
https://llvm.org/doxygen/unionllvm_1_1GlobalValueSummaryInfo_1_1NameOrGV.html
CC-MAIN-2022-21
refinedweb
137
53.07
Unity 2017.3b feature preview: Assembly Definition Files and Transform Tool As the release of Unity 2017.3 is drawing near, we’d like to show you two nifty features that you might like. The first feature is Assembly Definition Files. With this feature, you’ll get the option to define your own managed assemblies inside a folder. With well-defined dependencies you’ll ensure that only the required assemblies will be rebuilt, when you make a change in a script, reducing compilation time. The larger your project grows, the greater the compilation time will inevitably end up being. When iterating on a project this can easily become a nuisance, so setting up proper assembly definition files should help you work more efficiently and waste less time waiting for scripts to compile. The second feature is the new Transform Tool. The Transform Tool is a mix of the three current tools; Move, Rotate and Scale suitable for those situations, where easy access to all three is required. A small thing, but also a potential time saver. You can test out the above features (and more) right now while waiting for the full release of Unity 2017.3, by simply downloading the 2017.3 beta here. Remember to take backups before upgrading your project. Script Compilation – Assembly Definition Files Unity automatically defines how scripts compile to managed assemblies. Typically, compilation times in the Unity Editor for iterative script changes increase as you add more scripts to the Project. When you are going through an iterative process you want the builds to compile as fast and smooth as possible. You can use an assembly definition file to define your own managed assemblies based upon scripts inside a folder. If you separate Project scripts into multiple assemblies with well-defined dependencies, you’ll ensure that only required assemblies are rebuilt when making changes in a script. This reduces compilation times, so you can think of each managed assembly as a single library within the Unity Project. The figure above illustrates how you can split the Project scripts into several assemblies. If you are only changing scripts in Main.dll, none of the other assemblies will need to recompile. Since Main.dll contains fewer scripts, it also compiles faster than Assembly-CSharp.dll. Similarly, script changes you make in Stuff.dll will only cause Main.dll and Stuff.dll to recompile. How to use assembly definition files Assembly definition files are Asset files that you create by going to Assets > Create > Assembly Definition. They have the extension .asmdef . You can add an assembly definition file to a folder in a Unity Project to compile all the scripts in the folder into an assembly, and then set the name of the assembly in the Inspector. Note: The name of the folder in which the assembly definition file resides and the filename of the assembly definition file have no effect on the name of the assembly. Add references to other assembly definition files in the Project using the Inspector too. The references are used when compiling the assemblies and define the dependencies between the assemblies. Unity uses the references to compile the assemblies and also defines the dependencies between the assemblies. You can set the platform compatibility for the assembly definition files in the Inspector. You have the option to exclude or include specific platforms. Multiple assembly definition files inside a folder hierarchy If you have multiple assembly definition files (extension: .asmdef) inside a folder hierarchy it will cause each script to be added to the assembly definition file with the shortest path distance. Example: If you have a Assets/ExampleFolder/MyLibrary.asmdef and a Assets/ExampleFolder/ExampleFolder2/Utility.asmdef file, then: - Any scripts inside the Assets > ExampleFolder > ExampleFolder2 folder will be compiled into the Assets/ExampleFolder/ExampleFolder2/Utility.asmdef defined assembly. - Any files in the Assets > ExampleFolder folder that are not inside Assets > ExampleFolder > ExampleFolder2 folder would be compiled into the Assets/ExampleFolder/MyLibrary.asmdef defined assembly. Assembly definition files are not build system files **Note** : The assembly definition files are not assembly build files. They do not support conditional build rules typically found in build systems. This is also the reason why the assembly definition files do not support setting of preprocessor directives (defines), as those are static at all times. Backwards compatibility and implicit dependencies Assembly definition files are backwards compatible with the existing Predefined Compilation System in Unity. This means that the predefined assemblies always depend on every assembly definition file’s assemblies. This is similar to how all scripts are dependent on all precompiled assemblies (plugins / .dlls) compatible with the active build target in Unity. The diagram in Figure 3 illustrates the dependencies between predefined assemblies, assembly definition files assemblies and precompiled assemblies. Unity gives priority to the assembly definition files over the Predefined Compilation System. This means that having any of the special folder names from the predefined compilation inside an assembly definition file folder will not have any effect on the compilation. Unity treats these as regular folders without any special meaning. It is highly recommended that you use assembly definition files for all the scripts in the Project, or not at all. Otherwise, the scripts that are not using assembly definition files always recompile every time an assembly definition file recompiles. This reduces the benefit of using assembly definition files in your project. API In the namespace UnityEditor.Compilation there is a static CompilationPipeline class that you use to retrieve information about assembly definition files and all assemblies built by Unity. File Format Assembly definition files are JSON files. They have the following fields: The fields includePlatforms and excludePlatforms cannot be used together in the same assembly definition file. Retrieve the platform names by using Examples You can find the Assembly Definition Files Example Project on our forum, and download the 2017.3 beta here. Transform Tool One of the other useful features you’ll find in the 2017.3 beta is the new Transform Tool. The Transform Tool is a multitool that contains the functionality of the standard three Move, Rotate and Scale tools. The Transform Tool isn’t meant to replace the three standard tools, but to provide a tool for situations where you would want all of the present without having to switch back and forth between them. To get early access and try out the new Transform Tool coming soon in 2017.3, you can download the Unity 2017.3 beta. Once installed, you’ll have access to this and all of the upcoming features in the next version of Unity. Enabling the Transform Tool To begin using the Transform Tool, click the Transform Tool button in the toolbar or press “Y”. You’ll now have access to the Transform Tool gizmo. press and keep the “V” key pressed, the gizmos enters vertex snapping mode. This way, you can translate your gameobject so one of its vertex is placed on the vertex of another object. Get early access to the features now Both Assembly Definition Files and the Transform Tool are a part of the Unity 2017.3 beta together with new improvements to the particle system, an updated Crunch texture compression library and much more. Test compatibility with your current project or play around with the new features by downloading the beta. You also still have time to be a part of this round of beta sweepstakes too for the chance of winning one of three Acer Windows Mixed Reality headsets with Motion Controllers, sponsored by our friends at Microsoft – read more about that here. Related posts 26 CommentsSubscribe to comments Nam DuongDecember 10, 2017 at 3:58 am Programmer here – the assembly definition file is exactly what we needed! However, the assembly generated always use the unity Assets directory as its root, i.e. if I create an assembly def file in /project-root/assets/MyModule, I want the assembly to point to /project-root/assets/MyModule instead of /project-root/assets; This makes the assembly files harder to browse as you have to click and expand some extra folders. Great feature! I love it! AndreNovember 29, 2017 at 5:18 pm That transform tool ist great. Saves a lot of clicking. LevonNovember 28, 2017 at 10:55 pm How about this, stop integrating and go back to the 4.6.8 style editor. There was nothing wrong with it now we have all this crap piled up on top of crap instead of fixing tons of issues. I can see adding more stuff into the engine such as a better renderer take a look at Unreal and get some ideas on how you can improve that. Another thing Vertex Snapping that’s a new feature?? We had that back in 4.6.8 anyone on 5.x.x try it Shift V and move your object around super new.. Also whats the point of having a zoom in the game window? That’s what the scene view was for I thought. Its not all bad though Unity3d is amazing your just overcompensating in the wrong areas. KonradNovember 25, 2017 at 9:53 pm Will we have the option to use the old style transform tool? biloNovember 24, 2017 at 6:38 pm Please add option for pivot point change on mesh. CloverNovember 24, 2017 at 5:41 pm Instead of a new cluttered transform tool, wouldn’t it be far more productive to instead have keyboard shortcuts for moving, rotating and scaling in specific axes? Say, something like what’s in Blender? How about letting me directly input the transformation values by using those keyboard shortcuts? And how about making a nicer gizmo for the 2D mode, and a 2D mode for the transform inspector? I get that I can probably create some of these tools for Unity myself (In fact, I’ve found some assets that already enable some of those features, and I even created a transform inspector that allows you to switch between 2D and 3D modes: would you like to hire me to do this for you?), but I’d much rather have some of them already built into Unity, so I don’t have to manually add all of these custom editor scripts into all of my projects, or (And I’d love to see this implemented regardless) allow us to more easily/further modify the editor GUI. How about letting us extend or modify all of the existing editors? I’ve been struggling with Your extremely user unfriendly Animator editor with no zooming feature or curved transition lines for literally 5 years now. CloverNovember 24, 2017 at 6:09 pm Ah, interesting, looks like Unity 2017.3 Beta breaks my 2D/3D transform inspector. Thanks! Good job, Team Unity, Good job! ImiNovember 23, 2017 at 4:06 pm The “splitting up compilation into different DLLs” thing is superseeded by the immense speed improvement you get from just ditching mono compiler completely and compile all source code with MS Visual Studio. For us, this got compile times down from about 30 seconds to under 2 seconds. So I don’t see a point about the assembly definition files, except for “we want to support Editor – Mac/Linux compilation of code”, which is of no interest to probably 95% of devs? About the transform tool: I never had problems with the current tool, but I guess you guys had a reason to change it…? Probably? dasNovember 24, 2017 at 12:12 am I’m interested… how do you compile in VS? MaikNovember 27, 2017 at 1:54 pm How do you go about compiling with Visual Studio only? How do you set it up? Thanks! Vlad MJanuary 20, 2018 at 11:09 pm @IMI can you please describe how you did that? I would really appreciate this. Thank you vulgerstalNovember 23, 2017 at 6:27 am Add RectTransform to Transform Multitool, please. dasNovember 23, 2017 at 3:28 am assembly definition is very nice but the new transform tool … sould be better. No axial scale gizmo when in global or screen mode and the gizmo is drawn on top of the object with no indication of depth – confusing. Ciro ContinisioNovember 23, 2017 at 11:21 am Hey das, out of curiosity: why do you want to scale in screen mode? If you think about it (simple example), rotating a cube and then scaling in screen space would skew it, making it another shape. That’s… doable, I guess, but it would require to modify the underlying mesh because you can’t describe that just by using the Transform values. Same goes for scaling in global space. So, I’m curious: what’s your use case? Abion47November 23, 2017 at 12:33 pm Well, strictly speaking, you CAN do a skew transform using the same underlying transformation matrix that objects have. That being said, I can’t even begin to imagine the math entailed in converting a screen-space skew operation into an object-level transformation matrix. And I do have to agree, why would you even want to? dasNovember 24, 2017 at 12:11 am don’t skew it, just project the non local scale values to local space, like the rect tool does JustinNovember 22, 2017 at 6:43 pm If assembly definition files work for the switch between editor and play mode, and can continue to run through recompiles in editor mode, we could simplify Scene Fusion, where it will allow people to drop off for these events while keeping everyone connected. Emil "AngryAnt" JohansenNovember 22, 2017 at 4:57 pm Curious: Why a new file format when the folder meta file is right there? Ruben AsterNovember 22, 2017 at 9:11 pm External tools can use this system and I guess using only the meta of the folders may work great within Unity but will cause a lot of confusion when using some “advanced” workflows, e Lukasz PaczkowskiNovember 23, 2017 at 12:11 pm The .meta file format is not standardized nor documented. We want to standardize and document the .asmdef file format to enable third party tooling to read and write the files. Emil "AngryAnt" JohansenNovember 28, 2017 at 3:25 pm Adding yet another file format is preferable to documenting the existing one? JaimiNovember 22, 2017 at 4:46 pm The new Transform tool is a great addition! It’s really improved the workflow. Kudos to whomever thought of this and implemented it! PaheNovember 22, 2017 at 3:47 pm Nice features. One question about the Script Assemblies. How will GUID referencing work in that case? For example, I have a component which references the original script from the folder and another reference which references the script from the generated DLL. René DammNovember 23, 2017 at 3:01 am GUID referencing is actually unaffected. The GUID reference goes to the C# script asset which is still there. So from the perspective of the asset database, nothing has changed. It isn’t aware of what the script compilation pipeline does with the C# scripts it is being fed. paheNovember 23, 2017 at 10:29 am Thanks Rene. I’ll give it a try then. Had some problems in the past with exactly that, but it was in experimential state back then, so maybe it has been improved (or I did it wrong ^^). Alan MattanoNovember 22, 2017 at 3:34 pm Sometimes simple improvements are useful than complex ones. Finally a super awesome all in one Gizmo tool in Unity! Thx Unity Team!
https://blogs.unity3d.com/2017/11/22/unity-2017-3b-feature-preview-assembly-definition-files-and-transform-tool/?replytocom=364932
CC-MAIN-2018-47
refinedweb
2,589
63.59
I have been helped here once before. I got the info I needed to solve my issues. I'm a beginner and this online class I'm taking is killing me. This assignment involves the use of loops, most likely do-while, and possibly others. Essentially the program is to have the user guess what the total of his roll of two dice will be (2 to 12). Then the two dice are rolled. The user has three rolls to achieve a WIN, if not he loses and must get the opportunity to play again. If he matches his guess, the game tells him he's a winner and the game is over then - no three rolls. He also must be given the opportunity to play again. I've validated the user's guess (input). That part works great. This program uses a class that rolls and displays the dice - that too works ok. From there, I've gotten completely confused and frustrated. Here is my code, most of which does not work. Can anyone help me? I'm such a beginner, the book doesn't always help, and there is no online course assistance. Thanks in advance for any hints or help: //******************************************************************** // DiceGame.java // This program tells user to predict what the pair of dice will roll. // Dice will roll a maximum of 3 times, using the Dice class already created. // If the dice rolls the user's prediction in the 3 rolls, user wins // If not, user loses. Either way user has option to play again. //******************************************************************** import java.util.Scanner; public class DiceGame { public static void main (String[] args) { // All variables including char variable that holds y and n. int rolls, roll1, roll2, guessRoll; int rollSum; char repeat = ' '; String input; Scanner keyboard = new Scanner(System.in); // Create two separate dice objects to create a pair of dice Dice die1 = new Dice(); Dice die2 = new Dice(); // get the user to enter the what user predicts his/her pair of dice will roll System.out.print ("Enter what your pair of dice will roll (a number 2 through twelve please: "); guessRoll = keyboard.nextInt(); // Validate user's input - page 261 while (guessRoll < 2 || guessRoll > 12) { System.out.println("Incorrect number entered. Please enter a number from 2 to to 12 only."); System.out.println("Enter correct number here: "); guessRoll = keyboard.nextInt(); } { do { roll1 = die1.diceRoll(); roll2 = die2.diceRoll(); rollSum = roll1 + roll2; // Set the number of times the loop can repeat itself. for (rolls = 1; rolls < 3; rolls++) { System.out.println("Your Dice rolled: " + rollSum +"."); System.out.println("NO MATCH"); } while (repeat == 'Y' || repeat == 'y') { System.out.println("Play again? Enter Y or N: "); input = keyboard.next(); repeat = input.charAt(0); } } while (guessRoll != rollSum); } do { //This will run if the correct number is guessed System.out.println("_______________Roll #: " + rolls + " _________________"); roll1 = die1.diceRoll(); roll2 = die2.diceRoll(); rollSum = roll1 + roll2; System.out.println("Your Dice rolled: " + rollSum +"."); System.out.println("IT'S a MATCH - YOU WIN!!"); while (repeat == 'Y' || repeat == 'y') { System.out.println("Play again? Enter Y or N: "); input = keyboard.next(); repeat = input.charAt(0); } } while (guessRoll == rollSum); } }
https://www.daniweb.com/programming/software-development/threads/112006/beginner-help-w-java-loops-etc
CC-MAIN-2016-50
refinedweb
517
70.29
Hi..I got stuck with file handling.. how can I get the each line's string from the file..?? I mean..for example, if I have file which contain 2--->how many countries participate canada united states 2,1-->these are score for the game then, how can I store each line into array?? like ["2","canada","united states", "2,1"] the file format always like that, but I should handle 2 team to 256 teams..so..anyway..anybody knows how to handle the file like this?? Here is my code.... Code:#include<stdio.h> int main(int argc, char *argv[]){ char *file[512]; int line = 0; if (argc == 2){ FILE *fp; fp = fopen(argv[1], "r"); while (fgets(file[line], 81, fp) != 0){ printf("file[%d] is [%s] \n", line, file[line]); line++; } fclose(fp); return (0); }else { printf("theres no input file\n"); return (-1); } } [code][/code]tagged by Salem
http://cboard.cprogramming.com/c-programming/27626-what-wrong-code-can-u-help.html
CC-MAIN-2014-15
refinedweb
152
81.22
NDArray - Imperative tensor operations on CPU/GPU¶ In MXNet, NDArray is the core data structure for all mathematical computations. An NDArray represents a multidimensional, fixed-size homogenous array. If you’re familiar with the scientific computing python package NumPy, you might notice that mxnet.ndarray is similar to numpy.ndarray. Like the corresponding NumPy data structure, MXNet’s NDArray enables imperative computation. So you might wonder, why not just use NumPy? MXNet offers two compelling advantages. First, MXNet’s NDArray supports fast execution on a wide range of hardware configurations, including CPU, GPU, and multi-GPU machines. MXNet also scales to distributed systems in the cloud. Second, MXNet’s NDArray executes code lazily, allowing it to automatically parallelize multiple operations across the available hardware. An NDArray is a multidimensional array of numbers with the same type. We could represent the coordinates of a point in 3D space, e.g. [2, 1, 6] as a 1D array with shape (3). Similarly, we could represent a 2D array. Below, we present an array with length 2 along the first axis and length 3 along the second axis. [[0, 1, 2] [3, 4, 5]] Note that here the use of “dimension” is overloaded. When we say a 2D array, we mean an array with 2 axes, not an array with two components. Each NDArray supports some important attributes that you’ll often want to query: - ndarray.shape: The dimensions of the array. It is a tuple of integers indicating the length of the array along each axis. For a matrix with nrows and mcolumns, its shapewill be (n, m). - ndarray.dtype: A numpytype object describing the type of its elements. - ndarray.size: The total number of components in the array - equal to the product of the components of its shape - ndarray.context: The device on which this array is stored, e.g. cpu()or gpu(1). (set in the GPUs section of this tutorial) to mx.cpu(). Array Creation¶ There are a few different ways to create an NDArray. - We can create an NDArray from a regular Python list or tuple by using the arrayfunction: import mxnet as mx # create a 1-dimensional array with a python list a = mx.nd.array([1,2,3]) # create a 2-dimensional array with a nested python list b = mx.nd.array([[1,2,3], [2,3,4]]) {'a.shape':a.shape, 'b.shape':b.shape} - We can also create an MXNet NDArray from a numpy.ndarrayobject: import numpy as np import math c = np.arange(15).reshape(3,5) # create a 2-dimensional array from a numpy.ndarray object a = mx.nd.array(c) {'a.shape':a.shape} We can specify the element type with the option dtype, which accepts a numpy type. By default, float32 is used: # float32 is used by default a = mx.nd.array([1,2,3]) # create an int32 array b = mx.nd.array([1,2,3], dtype=np.int32) # create a 16-bit float array c = mx.nd.array([1.2, 2.3], dtype=np.float16) (a.dtype, b.dtype, c.dtype) If we know the size of the desired NDArray, but not the element values, MXNet offers several functions to create arrays with placeholder content: # create a 2-dimensional array full of zeros with shape (2,3) a = mx.nd.zeros((2,3)) # create a same shape array full of ones b = mx.nd.ones((2,3)) # create a same shape array with all elements set to 7 c = mx.nd.full((2,3), 7) # create a same shape whose initial content is random and # depends on the state of the memory d = mx.nd.empty((2,3)) Printing Arrays¶ When inspecting the contents of an NDArray, it’s often convenient to first extract its contents as a numpy.ndarray using the asnumpy function. Numpy uses the following layout: - The last axis is printed from left to right, - The second-to-last is printed from top to bottom, - The rest are also printed from top to bottom, with each slice separated from the next by an empty line. b = mx.nd.arange(18).reshape((3,2,3)) b.asnumpy() Basic Operations¶ When applied to NDArrays, the standard arithmetic operators apply elementwise calculations. The returned value is a new array whose content contains the result. a = mx.nd.ones((2,3)) b = mx.nd.ones((2,3)) # elementwise plus c = a + b # elementwise minus d = - c # elementwise pow and sin, and then transpose e = mx.nd.sin(c**2).T # elementwise max f = mx.nd.maximum(a, c) f.asnumpy() As in NumPy, * represents element-wise multiplication. For matrix-matrix multiplication, use dot. a = mx.nd.arange(4).reshape((2,2)) b = a * a c = mx.nd.dot(a,a) print("b: %s, \n c: %s" % (b.asnumpy(), c.asnumpy())) The assignment operators such as += and *= modify arrays in place, and thus don’t allocate new memory to create a new array. a = mx.nd.ones((2,2)) b = mx.nd.ones(a.shape) b += a b.asnumpy() Indexing and Slicing¶ The slice operator [] applies on axis 0. a = mx.nd.array(np.arange(6).reshape(3,2)) a[1:2] = 1 a[:].asnumpy() We can also slice a particular axis with the method slice_axis d = mx.nd.slice_axis(a, axis=1, begin=1, end=2) d.asnumpy() Shape Manipulation¶ Using reshape, we can manipulate any arrays shape as long as the size remains unchanged. a = mx.nd.array(np.arange(24)) b = a.reshape((2,3,4)) b.asnumpy() The concat method stacks multiple arrays along the first axis. Their shapes must be the same along the other axes. a = mx.nd.ones((2,3)) b = mx.nd.ones((2,3))*2 c = mx.nd.concat(a,b) c.asnumpy() Reduce¶ Some functions, like sum and mean reduce arrays to scalars. a = mx.nd.ones((2,3)) b = mx.nd.sum(a) b.asnumpy() We can also reduce an array along a particular axis: c = mx.nd.sum_axis(a, axis=1) c.asnumpy() Broadcast¶ We can also broadcast an array. Broadcasting operations, duplicate an array’s value along an axis with length 1. The following code broadcasts along axis 1: a = mx.nd.array(np.arange(6).reshape(6,1)) b = a.broadcast_to((6,4)) # b.asnumpy() It’s possible to simultaneously broadcast along multiple axes. In the following example, we broadcast along axes 1 and 2: c = a.reshape((2,1,1,3)) d = c.broadcast_to((2,2,2,3)) d.asnumpy() Broadcasting can be applied automatically when executing some operations, e.g. * and + on arrays of different shapes. a = mx.nd.ones((3,2)) b = mx.nd.ones((1,2)) c = a + b c.asnumpy() Copies¶ When assigning an NDArray to another Python variable, we copy a reference to the same NDArray. However, we often need to make a copy of the data, so that we can manipulate the new array without overwriting the original values. a = mx.nd.ones((2,2)) b = a b is a # will be True The copy method makes a deep copy of the array and its data: b = a.copy() b is a # will be False The above code allocates a new NDArray and then assigns to b. When we do not want to allocate additional memory, we can use the copyto method or the slice operator [] instead. b = mx.nd.ones(a.shape) c = b c[:] = a d = b a.copyto(d) (c is b, d is b) # Both will be True Advanced Topics¶ MXNet’s NDArray offers some advanced features that differentiate it from the offerings you’ll find in most other libraries. GPU Support¶ By default, NDArray operators are executed on CPU. But with MXNet, it’s easy to switch to another computation resource, such as GPU, when available. Each NDArray’s device information is stored in ndarray.context. When MXNet is compiled with flag USE_CUDA=1 and the machine has at least one NVIDIA GPU, we can cause all computations to run on GPU 0 by using context mx.gpu(0), or simply mx.gpu(). When we have access to two or more GPUs, the 2nd GPU is represented by mx.gpu(1), etc. Note In order to execute the following section on a cpu set gpu_device to mx.cpu(). gpu_device=mx.gpu() # Change this to mx.cpu() in absence of GPUs. def f(): a = mx.nd.ones((100,100)) b = mx.nd.ones((100,100)) c = a + b print(c) # in default mx.cpu() is used f() # change the default context to the first GPU with mx.Context(gpu_device): f() We can also explicitly specify the context when creating an array: a = mx.nd.ones((100, 100), gpu_device) a Currently, MXNet requires two arrays to sit on the same device for computation. There are several methods for copying data between devices. a = mx.nd.ones((100,100), mx.cpu()) b = mx.nd.ones((100,100), gpu_device) c = mx.nd.ones((100,100), gpu_device) a.copyto(c) # copy from CPU to GPU d = b + c e = b.as_in_context(c.context) + c # same to above {'d':d, 'e':e} Serialize From/To (Distributed) Filesystems¶ MXNet offers two simple ways to save (load) data to (from) disk. The first way is to use pickle, as you might with any other Python objects. NDArray is pickle-compatible. import pickle as pkl a = mx.nd.ones((2, 3)) # pack and then dump into disk data = pkl.dumps(a) pkl.dump(data, open('tmp.pickle', 'wb')) # load from disk and then unpack data = pkl.load(open('tmp.pickle', 'rb')) b = pkl.loads(data) b.asnumpy() The second way is to directly dump to disk in binary format by using the save and load methods. We can save/load a single NDArray, or a list of NDArrays: a = mx.nd.ones((2,3)) b = mx.nd.ones((5,6)) mx.nd.save("temp.ndarray", [a,b]) c = mx.nd.load("temp.ndarray") c It’s also possible to save or load a dict of NDArrays in this fashion: d = {'a':a, 'b':b} mx.nd.save("temp.ndarray", d) c = mx.nd.load("temp.ndarray") c The load and save methods are preferable to pickle in two respects - When using these methods, you can save data from within the Python interface and then use it later from another language’s binding. For example, if we save the data in Python: a = mx.nd.ones((2, 3)) mx.nd.save("temp.ndarray", [a,]) we can later load it from R: a <- mx.nd.load("temp.ndarray") as.array(a[[1]]) ## [,1] [,2] [,3] ## [1,] 1 1 1 ## [2,] 1 1 1 - When a distributed filesystem such as Amazon S3 or Hadoop HDFS is set up, we can directly save to and load from it. mx.nd.save('s3://mybucket/mydata.ndarray', [a,]) # if compiled with USE_S3=1 mx.nd.save('hdfs///users/myname/mydata.bin', [a,]) # if compiled with USE_HDFS=1 Lazy Evaluation and Automatic Parallelization¶ MXNet uses lazy evaluation to achieve superior performance. When we run a=b+1 in Python, the Python thread just pushes this operation into the backend engine and then returns. There are two benefits to this approach: - The main Python thread can continue to execute other computations once the previous one is pushed. It is useful for frontend languages with heavy overheads. - It is easier for the backend engine to explore further optimization, such as auto parallelization. The backend engine can resolve data dependencies and schedule the computations correctly. It is transparent to frontend users. We can explicitly call the method wait_to_read on the result array to wait until the computation finishes. Operations that copy data from an array to other packages, such as asnumpy, will implicitly call wait_to_read. import time def do(x, n): """push computation into the backend engine""" return [mx.nd.dot(x,x) for i in range(n)] def wait(x): """wait until all results are available""" for y in x: y.wait_to_read() tic = time.time() a = mx.nd.ones((1000,1000)) b = do(a, 50) print('time for all computations are pushed into the backend engine:\n %f sec' % (time.time() - tic)) wait(b) print('time for all computations are finished:\n %f sec' % (time.time() - tic)) Besides analyzing data read and write dependencies, the backend engine is able to schedule computations with no dependency in parallel. For example, in the following code: a = mx.nd.ones((2,3)) b = a + 1 c = a + 2 d = b * c the second and third lines can be executed in parallel. The following example first runs on CPU and then on GPU: n = 10 a = mx.nd.ones((1000,1000)) b = mx.nd.ones((6000,6000), gpu_device) tic = time.time() c = do(a, n) wait(c) print('Time to finish the CPU workload: %f sec' % (time.time() - tic)) d = do(b, n) wait(d) print('Time to finish both CPU/GPU workloads: %f sec' % (time.time() - tic)) Now we issue all workloads at the same time. The backend engine will try to parallel the CPU and GPU computations. tic = time.time() c = do(a, n) d = do(b, n) wait(c) wait(d) print('Both as finished in: %f sec' % (time.time() - tic))
https://mxnet.apache.org/versions/0.12.1/tutorials/basic/ndarray.html
CC-MAIN-2022-33
refinedweb
2,207
59.7
Send a request to the composition manager to add new buffers to a window. #include <screen/screen.h> int screen_create_window_buffers(screen_window_t win, int count) The handle of the window for which the new buffers must be allocated. The number of buffers to be created for this window. Function Type: Flushing Execution This function adds buffers to a window. Windows need at least one buffer in order to be visible. Buffers cannot be created using screen_create_window_buffers() if at some point prior, buffers were attached to this window using screen_attach_window_buffers(). Buffers will be created with the size of SCREEN_PROPERTY_BUFFER_SIZE as set for the window. 0 if new buffers were created for the specified window, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details). Note that the error may also have been caused by any delayed execution function that's just been flushed.
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.screen/topic/screen_create_window_buffers.html
CC-MAIN-2018-47
refinedweb
149
57.06
I'm hoping anybody could help me with the following. I have 2 lists of arrays, which should be linked to each-other. Each list stands for a certain object. arr1 arr2 import numpy as np arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])] arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])] 1 arr1 20 arr2 array([[ 0, 20, 50, 30], [ 0, 50, 50, 0], [ 0, 0, 75, 25]]) Here's an almost* vectorized approach - lens = np.array([len(i) for i in arr1]) N = len(arr1) row_idx = np.repeat(np.arange(N),lens) col_idx = np.concatenate(arr1) M = col_idx.max()+1 out = np.zeros((N,M),dtype=int) out[row_idx,col_idx] = np.concatenate(arr2) *: Almost because of the loop comprehension at the start, but that should be computationally negligible as it doesn't involve any computation there.
https://codedump.io/share/aGnqYrGqSwtT/1/combine-list-of-numpy-arrays-and-reshape
CC-MAIN-2017-43
refinedweb
150
64.37
On Thu, Aug 10, 2006 at 12:01:32PM -0400, Geir Magnusson Jr wrote: > > > Dalibor Topic wrote: > > On Thu, Aug 10, 2006 at 01:35:34PM +0100, Tim Ellison wrote: > >> Dalibor Topic wrote: > >>> On Thu, Aug 10, 2006 at 10:27:42AM +0100, Tim Ellison wrote: > >>>> Mikhail Loenko wrote: > >>>>> The problem is Base64 functionality is unavailable through the > >>>>> standard API, so people have a choice either use unportable sun.*, > >>>>> o.a.h.*, etc or create the coder from scratch > >>>> Understood, I think people are 'driven' to using the non-API types > >>>> though necessity rather than doing so by mistake. > >>> Hardly. Many replacements for Base64 exists, for example GNU Classpath > >>> recommends using Apache Commons Codec for projects compatible with the > >>> Apache license. > >>> > >>> Amateur developers use non-standard classes because they are too lazy > >>> to think for themselves, and accordingly copy broken code around projects. > >>> Alas, a language designed to appeal to the masses naturally attracts a > >>> lot of people who'd have trouble producing portable code in any programming > >>> language. :) > >> I was being charitable. For sure many such usages can be easily > >> avoided, but in other cases no so easily; like CharToByteConverter would > >> require duplicating a wad of data, and I don't know of any third-party > >> impl. of Unsafe. > > > > Is there something that CharToByteConverter does that > > CharsetEncoder(ICU) can't? I don't think there is, but I haven't seen > > code using CharToByteConverter in years. > > > > Code using Unsafe, of course, is fundamentally tied to a VM, anyway, thanks > > to the JVM JSR not being ahead thinking enough to specify an API for low-level > > operations. It may or may not work depedending on how a VM interprets > > the non-existent spec for that class, so it is practically useless, > > unless it is running on the VM it was written on, i.e. Sun's. It may or > > may not work by chance, rather than by virtue. > > I think you are being too harsh here, but that notwithstanding, one of > the things we can do as open implementations is petition the EG to > actually define these in the spec once we show that there's a good > reason to do so, namely the independent implementations. The independant implementations are not using sun.misc.Unsafe. Why would anyone want to use an undocumented implementation-specific class from another implementation in their own? That makes no sense. There is no point in petitioning the EC to specify sun.misc.Unsafe since no one in their right mind uses it, except Sun, in the interna of their implementation. That's like asking the EC to specify org.kaffe.util.Ptr, or something equivalently pointless from DLRVM. What I want is a java.util.concurrent.VMCompareAndSwap class in j.u.c, with a sane API that does not require computing field offsets manually, like sun.misc.Unsafe seems to do. ;) The only reason why sun.misc.Unsafe matters for us is, IMHO, a rather simple bug in Doug Lea's java.util.concurrent implementation: using Sun-specific classes directly, rather than delegating to some java.util.concurrent.VMCompareAndSwap interface layer. That means anyone adopting the code needs to repeat the mistakes of Sun's implementation (which has, coincidentally, had a very straightforward remote execution exploit involving direct use of sun.misc.Unsafe a while ago) or fork it. Yay. :/ Other than Doug's code, sun.misc.Unsafe does not matter at all, unless you want "remote-execution-exploit-for-remote-execution-exploit" compatibility. I believe that's too much to ask for. ;) > > > > > That holds in general for any code using implementation-specific > > interfaces. > > > >> We should appeal to app developers to change implementation dependent > >> code (even provide a recipe book of recommended solutions), but be > >> pragmatic about the need to run the current version. In many cases it > >> may be beyond the apps immediate sphere of influence (i.e. dependent > >> libraries). If everyone responded as quickly and effectively as Martin > >> we would have no worries. > > > > We've been doing that for years. See the GNU Classpath migration page in > > the Wiki that describes how to fix such broken code in many cases. Being > > 'pragmatic' solves nothing, it just encourages people to continue behaving > > in dumb ways, because their code still may run, somehow, even if there > > is no way for Harmony to ensure that it will behave as they expect from > > whatever buggy Sun JDK they are using to run it usually in the corner > > cases, because there is no spec, and there are no tests. > > > > Awarding incompetence doesn't solve the problem. That being said, kudos > > to Martin for fixing the bug in his code. Had we had a Base64 class, > > that bug wouldn't have showed up, and his code would have failed > > elsewhere, or behaved differently on another VM. With the fix, his code > > is portable, behaves in the same way on any VM, and may even be faster > > than a vm-specific 'just for compatibility' implementation. > > > > There is no downside to simply fixing the buggy code. > > Sure, but again, not everyone will be as wise as Martin - they'll just > bail on us. They may even realize that there's a problem with their > code, but there may be nothing they can do about it. People bailing on us is not a problem, as people *stuck with Sun-specific code* will (and should, IMHO) only use Sun's VM anyway. People blindly trusting us because we pretend to run some Sun-specific code according to some unspecified interface is a problem, both for them, as their code will obviously behave differently from what it expects, in subtle ways, if they are very unlucky, and for us, as we have to chase unreproducible behaviour as Sun changes their internal interfaces around. See the sad story of Ant's chasing around of sun.tools.javac.Main tail over and over again. The 'works 100% like Sun's VM out-of-the-box for Sun-specific code' niche is already taken by Sun. There is no point in trying to compete with them on that, or for the audience that wants and expects that. Been there, done that, etc. ;) > > > >>> On a side note, I seem to recall reading on Sun's javac engineer's blog > >>> that javac won't allow access to sun internal classes sooner or later, > >>> so idiot-proofing class libraries may not be very useful, anyway, > >>> as people will have to rewrite such code, or preferrably, throw it away. > >> It will be interesting to see what havoc is unleashed by attempts to > >> reduce the visibility of sun internal packages by the compiler and at > >> runtime. > > > > I assume it will just cause unmaintained libraries to be substituted by > > maintained ones, as people trade up to higher quality implementations of > > functionality they need. Code that uses sun.* classes is a bad smell, to > > invoke Fowler. If it doesn't get fixed, just get rid of it, and use > > something else that doesn't stink. > > Yah, but I think it's insane to enforce restrictions on one company's > package namespace in the compiler. This should be a general feature :) Go ahead and start a JSR ... I'd propose com.microsoft for a start. ;) cheers, dalibor topic >
http://mail-archives.apache.org/mod_mbox/harmony-dev/200608.mbox/%[email protected]%3E
CC-MAIN-2016-22
refinedweb
1,204
54.12