text | url | dump | source | word_count | flesch_reading_ease
---|---|---|---|---|---
- NAME
- DESCRIPTION
- VERSION CAVEAT
- DYNAMIC VERSUS STATIC
- EXAMPLE 1
- EXAMPLE 2
- WHAT HAS GONE ON?
- WRITING GOOD TEST SCRIPTS
- EXAMPLE 3
- WHAT'S NEW HERE?
- INPUT AND OUTPUT PARAMETERS
- THE XSUBPP COMPILER
- THE TYPEMAP FILE
- WARNING
- EXAMPLE 4
- WHAT HAS HAPPENED HERE?
- SPECIFYING ARGUMENTS TO XSUBPP
- THE ARGUMENT STACK
- EXTENDING YOUR EXTENSION
- DOCUMENTING YOUR EXTENSION
- INSTALLING YOUR EXTENSION
- SEE ALSO
- Author
- Last Changed
NAME
perlXStut - Tutorial for XSUBs
DESCRIPTION
This tutorial will educate the reader on the steps involved in creating a Perl extension. The reader is assumed to have access to perlguts and perlxs.
This tutorial starts with very simple examples and becomes more complex, with each new example adding new features. Certain concepts may not be completely explained until later in the tutorial to ease the reader slowly into building extensions.
VERSION CAVEAT
This tutorial tries hard to keep up with the latest development versions of Perl. This often means that it is sometimes in advance of the latest released version of Perl, and that certain features described here might not work on earlier versions. This section will keep track of when various features were added to Perl 5.
In versions of Perl 5.002 prior to the gamma version, the test script in Example 1 will not function properly. You need to change the "use lib" line to read:
use lib './blib';
In versions of Perl 5.002 prior to version beta 3, the line in the .xs file about "PROTOTYPES: DISABLE" will cause a compiler error. Simply remove that line from the file.
In versions of Perl".
DYNAMIC VERSUS STATIC
It is commonly thought that if a system does not have the capability to load a library dynamically, you cannot build XSUBs. This is incorrect. You can build them, but you must link the XSUBs with the rest of Perl, creating a new executable.

EXAMPLE 1
Our first extension will be very simple: the h2xs tool generates a skeleton Mytest.pm and Mytest.xs for us. The generated Mytest.pm should look something like this:

        package Mytest;

        require Exporter;
        require DynaLoader;

        @ISA = qw(Exporter DynaLoader);
        # Items to export into caller's namespace by default.
        @EXPORT = qw();
        $VERSION = '0.01';

        bootstrap Mytest $VERSION;

        # Preloaded methods go here.

        # Autoload methods go after __END__, and are processed by the autosplit program.

        1;
        __END__
        # Below is the stub of documentation for your module. You better edit it!
And the Mytest.xs file should look something like this:
#ifdef __cplusplus
extern "C" {
#endif
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#ifdef __cplusplus
}
#endif

PROTOTYPES: DISABLE

MODULE = Mytest		PACKAGE = Mytest
Let's edit the .xs file by adding this to the end of the file:
void hello()
        CODE:
        printf("Hello, world!\n");

Now run "perl Makefile.PL" followed by "make". The output from make looks something like this (shortened for clarity):
% make
umask 0 && cp Mytest.pm ./blib/Mytest.pm
perl xsubpp -typemap typemap Mytest.xs >Mytest.tc && mv Mytest.tc Mytest.c
Now, although there is already a test.pl template ready for us, for this example only, we'll create a special test script. Create a file called hello that looks like this:
#! /opt/perl5/bin/perl

use ExtUtils::testlib;

use Mytest;

Mytest::hello();
Now we run the script and we should see the following output:
% perl hello
Hello, world!
%
EXAMPLE 2
Now let's add to our extension a subroutine that will take a single argument and return 1 if the argument is even, 0 if the argument is odd.
Add the following to the end of Mytest.xs:
int
is_even(input)
        int     input
        CODE:
        RETVAL = (input % 2 == 0);
        OUTPUT:
        RETVAL
There does not need to be white space at the start of the "int input" line, but it is useful for improving readability. The semi-colon at the end of that line is also optional.
Any white space may be between the "int" and "input". It is also okay for the four lines starting at the "CODE:" line to not be indented. However, for readability purposes, it is suggested that you indent them 8 spaces (or one normal tab stop).
Now rerun make to rebuild our new shared library.
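If you want the test suite to exercise the new routine as well, lines along these lines can be added to test.pl (the test numbers here are illustrative; remember to adjust the "1..N" count at the top of the script accordingly):

        print &Mytest::is_even(0) == 1 ? "ok 2" : "not ok 2", "\n";
        print &Mytest::is_even(1) == 0 ? "ok 3" : "not ok 3", "\n";
        print &Mytest::is_even(2) == 1 ? "ok 4" : "not ok 4", "\n";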
Now perform the same steps as before, generating a Makefile from the Makefile.PL file, and running make.

The files <extension>.pm and <extension>.xs contain the meat of the extension. The .xs file holds the C routines that make up the extension. The .pm file contains routines that tell Perl how to load your extension.
Generating and invoking the Makefile created a directory called blib (which stands for "build library") in the current working directory; this directory will contain the shared library that we build. The variables that are set in the .pm file are very important. The @ISA array contains a list of other packages in which to search for methods (or subroutines) that do not exist in the current package. The @EXPORT array tells Perl which of the extension's routines should be placed into the calling package's namespace.
It's important to select what to export carefully. Do NOT export method names and do NOT export anything else by default without a good reason.
As a general rule, if the module is trying to be object-oriented then don't export anything. If it's just a collection of functions then you can export any of the functions via another array, called @EXPORT_OK.

When you add test scripts of your own, place them in a directory called "t" and ensure all your test files end with the suffix ".t". The Makefile will properly run all these test files.
You might be wondering if you can round a constant. To see what happens, add the following line to test.pl temporarily:
&Mytest::round(3);
Run "make test" and notice that Perl dies with a fatal error. Perl won't let you change the value of constants!
WHAT'S NEW HERE?
Two things are new here. First, we've made some changes to Makefile.PL. In this case, we've specified an extra library to link in, the math library libm. We'll talk later about how to write XSUBs that can call every routine in a library.
Second, the value of the function is being passed back not as the function's return value, but through the same variable that was passed into the function.
INPUT AND OUTPUT PARAMETERS
You specify the parameters that will be passed into the XSUB just after you declare the function return value and name. Each parameter line starts with optional white space, and may have an optional terminating semicolon.
The list of output parameters occurs after the OUTPUT: directive. The use of RETVAL tells Perl that you wish to send this value back as the return value of the XSUB function. In Example 3, the value we wanted returned was contained in the same variable we passed in, so we listed it (and not RETVAL) in the OUTPUT: section.
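Example 3's round() routine (not reproduced above) is an XSUB of roughly this shape, using libm's floor() and ceil(); a sketch:

        void
        round(arg)
                double  arg
                CODE:
                if (arg > 0.0) {
                        arg = floor(arg + 0.5);
                } else if (arg < 0.0) {
                        arg = ceil(arg - 0.5);
                } else {
                        arg = 0.0;
                }
                OUTPUT:
                arg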
THE XSUBPP COMPILER
The compiler xsubpp takes the XS code in the .xs file and converts it into C code, placing it in a file whose suffix is .c. The C code created makes heavy use of the C functions within Perl.
THE TYPEMAP FILE
The xsubpp compiler uses rules to convert from Perl's data types (scalar, array, etc.) to C's data types (int, char *, etc.). These rules are stored in the typemap file ($PERLLIB/ExtUtils/typemap). This file is split into three parts.
The first part attempts to map various C data types to a coded flag, which has some correspondence with the various Perl types. The second part contains C code which xsubpp uses for input parameters. The third part contains C code which xsubpp uses for output parameters. We'll talk more about the C code later.
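For instance, the stock typemap's treatment of a plain int looks something like this (a simplified excerpt):

        TYPEMAP
        int     T_IV

        INPUT
        T_IV
                $var = ($type)SvIV($arg)

        OUTPUT
        T_IV
                sv_setiv($arg, (IV)$var);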
Let's now take a look at a portion of the .c file created for our extension.
In general, it's not a good idea to write extensions that modify their input parameters, as in Example 3. However, in order to better accommodate calling pre-existing C routines, which often do modify their input parameters, this behavior is tolerated. The next example will show how to do this.
EXAMPLE 4
In this example, we'll now begin to write XSUBs that will interact with pre-defined C libraries. To begin with, we will build a small library of our own. In a new subdirectory called mylib (inside the extension's directory), create a file mylib.h that looks like this:
#define TESTVAL 4

extern double foo(int, long, const char*);
Also create a file mylib.c that looks like this:
#include <stdlib.h>
#include "./mylib.h"

double
foo(a, b, c)
int             a;
long            b;
const char *    c;
{
        return (a + b + atof(c) + TESTVAL);
}

The mylib directory also needs a Makefile.PL of its own, whose replacement MY::top_targets subroutine tells make to build a static archive:

sub MY::top_targets {
        '
static ::       libmylib$(LIB_EXT)

libmylib$(LIB_EXT): $(O_FILES)
        $(AR) cr libmylib$(LIB_EXT) $(O_FILES)
        $(RANLIB) libmylib$(LIB_EXT)

';
}

Back in the extension's top-level Makefile.PL, add the following key-value pair to the WriteMakefile call:
'MYEXTLIB' => 'mylib/libmylib$(LIB_EXT)',
and a new replacement subroutine too:
sub MY::postamble {
        '
$(MYEXTLIB): mylib/Makefile
        cd mylib && $(MAKE) $(PASTHRU)
';
}
(Note: Most makes will require that there be a tab character indenting the line "cd mylib && $(MAKE) $(PASTHRU)", and similarly for the corresponding rule in the subdirectory's Makefile.PL.)

Next, edit the .pm file and change the lines setting @EXPORT to @EXPORT_OK (there are two: one in the line beginning "use vars" and one setting the array itself).

Finally, create a file called typemap in the extension's top-level directory and place the following in it:
const char * T_PV
Now run perl on the top-level Makefile.PL. Notice that it also created a Makefile in the mylib directory. Run make and see that it builds the library in that subdirectory as well as the extension itself.

When a test checks a floating-point return value, it is often useful not to check for equality, but rather that the difference is below a certain epsilon factor.

Note also that the .xs file's #include declaration has to be updated with the full path to the mylib.h header file.
There's now some new C code that's been added to the .xs file. The purpose of the constant routine is to make the values that are #define'd in the header file available to the Perl script (in this case, by calling &main::TESTVAL). There's also some XS code to allow calls to the constant routine.
The .pm file has exported the name TESTVAL in the @EXPORT array. This could lead to name clashes. A good rule of thumb is that if the #define is going to be used by only the C routines themselves, and not by the user, they should be removed from the @EXPORT array. Alternately, if you don't mind using the "fully qualified name" of a variable, you could remove most or all of the items in the @EXPORT array.
If our include file contained #include directives, these would not be processed at all by h2xs. There is no good solution to this right now.
We've also told Perl about the library that we built in the mylib subdirectory. That required only the addition of the MYEXTLIB variable to the WriteMakefile call and the replacement postamble subroutine, which descends into the subdirectory and runs make there. The mylib Makefile.PL, in turn, specified simply that the library to be created here was a static archive (as opposed to a dynamically loadable library) and provided the commands to build it.
SPECIFYING ARGUMENTS TO XSUBPP

When you specify arguments to routines in the .xs file, you are really passing three pieces of information for each one listed. The first piece is the order of that argument relative to the others (first, second, etc.). The second is the type of argument, and consists of the type declaration of the argument (e.g., int, char*, etc.). The third piece is the exact way in which the argument should be used in the call to the library function from this XSUB: whether or not to place a "&" before the argument, meaning the argument expects to be passed the address of the specified data type.
There is a difference between the two arguments in this hypothetical function:
int
foo(a,b)
        char    &a
        char *  b
The first argument to this function would be treated as a char and assigned to the variable a, and its address would be passed into the function foo. The second argument would be treated as a string pointer and assigned to the variable b. The value of b would be passed into the function foo. The actual call to the function foo that xsubpp generates would look like this:
foo(&a, b);
Xsubpp will identically parse the following function argument lists:
char &a
char&a
char & a
However, to help ease understanding, it is suggested that you place a "&" next to the variable name and away from the variable type, and place a "*" near the variable type but away from the variable name (as in the complete example above).

THE ARGUMENT STACK
The "ST" seen in the generated C code is actually a macro that points to the n'th argument on the argument stack; ST(0) is thus the first argument passed to the XSUB, ST(1) the second, and so on.

EXTENDING YOUR EXTENSION
Simple routines can also be added directly to the .pm file; whether such a routine is loaded at startup or only when called depends on where in the .pm file the subroutine definition is placed.
1996/7/10 | https://metacpan.org/pod/release/MICB/perl5.004_66/pod/perlxstut.pod | CC-MAIN-2017-17 | refinedweb | 1,978 | 65.01 |
Hi, Marko Lindqvist wrote: > All the other references to po_lex_iconv are guarded by HAVE_ICONV, > but one in woe32dll/gettextsrc-exports.c is not. So it tries to export > non-existent symbol. > > Trivial patch attached. > @@ -66,7 +66,9 @@ > VARIABLE(po_error_at_line) > VARIABLE(po_gram_lval) > VARIABLE(po_lex_charset) > +#if HAVE_ICONV > VARIABLE(po_lex_iconv) > +#endif > VARIABLE(po_lex_weird_cjk) > VARIABLE(po_multiline_error) > VARIABLE(po_multiline_warning) Thanks for the patch, but it is incorrect: <config.h> is not included in this compilation unit, therefore the #if HAVE_ICONV is in fact equivalent to a #if 0, and will make the compiles _with_ iconv break. Also, it is simply not supported to build gettext without iconv support on Woe32. Sorry, the doc was not clear about it. I'm adding this paragraph to the README.woe32. Thanks for reporting it. *** README.woe32 17 Oct 2007 19:11:47 -0000 1.12 --- README.woe32 22 May 2008 23:49:17 -0000 *************** *** 32,34 **** --- 32,40 ---- The -I and -L option are so that packages previously built for the same environment are found. The --host option tells the various tools that you are building for mingw, not cygwin. + + Dependencies: + + This package depends on GNU libiconv. (See the file DEPENDENCIES.) Before + building this package, you need to build GNU libiconv, in the same development + environment, with the same configure options, and install it ("make install"). | http://lists.gnu.org/archive/html/bug-gnu-utils/2008-05/msg00017.html | CC-MAIN-2015-22 | refinedweb | 219 | 59.19 |
NAME
mcheck, mcheck_check_all, mcheck_pedantic, mprobe - heap consistency checking

SYNOPSIS
#include <mcheck.h>
int mcheck(void (*abortfunc)(enum mcheck_status mstatus));
int mcheck_pedantic(void (*abortfunc)(enum mcheck_status mstatus));
void mcheck_check_all(void);
enum mcheck_status mprobe(void *ptr);
DESCRIPTION
The mcheck() function installs hooks into the malloc(3) family of functions so that consistency checks are performed on the heap. If an inconsistency is detected, the function pointed to by abortfunc is called with a status value describing the problem; passing NULL installs a default handler that prints a diagnostic and aborts. mcheck_pedantic() is like mcheck(), but additionally checks all allocated blocks on every call into the allocation functions (which is very slow). mcheck_check_all() forces an immediate check of all allocated blocks, and mprobe() performs a consistency check on just the block pointed to by ptr, returning an mcheck_status value.

RETURN VALUE
mcheck() and mcheck_pedantic() return 0 on success, or -1 on error.

VERSIONS
The mcheck_pedantic() and mcheck_check_all() functions are available since glibc 2.2. The mcheck() and mprobe() functions are present since at least glibc 2.0.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
These functions are GNU extensions.

NOTES
Linking a program with -lmcheck and using the MALLOC_CHECK_ environment variable (described in mallopt(3)) cause the same kinds of errors to be detected. But, using MALLOC_CHECK_ does not require the application to be relinked.
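For example, assuming a program built without any special flags, the environment-variable route might be used like this (1 prints diagnostics, 2 aborts, 3 does both):

    $ cc -o myprog myprog.c        # no -lmcheck, no relinking needed
    $ MALLOC_CHECK_=3 ./myprog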
EXAMPLE
The program below calls mcheck() with a NULL argument and then frees the same block of memory twice. The following shell session demonstrates what happens when running the program:
$ ./a.out
About to free

About to free a second time
block freed twice
Aborted (core dumped)
Program source
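A minimal program consistent with the session shown above might look like this (a sketch rather than the verbatim manual-page listing):

    #include <mcheck.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char *argv[])
    {
        char *p;

        /* Install the default consistency-checking handler. */
        if (mcheck(NULL) != 0) {
            fprintf(stderr, "mcheck() failed\n");
            exit(EXIT_FAILURE);
        }

        p = malloc(1000);

        fprintf(stderr, "About to free\n");
        free(p);

        fprintf(stderr, "\nAbout to free a second time\n");
        free(p);               /* double free: detected by the mcheck hooks */

        exit(EXIT_SUCCESS);
    }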
https://manpages.debian.org/buster-backports/manpages-dev/mcheck.3.en.html | CC-MAIN-2019-47 | refinedweb | 200 | 51.55
I don't like the way Swing does menus. In my experience, it's more convenient to have a component ask a set of menu item providers whether they have any menu items, at the time the popup trigger is actuated. One of the great things about this scheme is that it's trivial to have, as with NSTextView, a set of default items (such as "Cut", "Copy", and "Paste"), a set of items corresponding to a misspelling at the cursor (the guesses, "Ignore Spelling", and "Learn Spelling), and a set of items that appear if there's a non-empty selection ("Search in Spotlight", "Search in Google", and "Look Up in Dictionary").
We have a class in salma-hayek called EPopupMenu that takes care of this, and also makes sure you're using AWT menus on Mac OS and Swing menus everywhere else. (Swing's menus look terribly unrealistic on Mac OS, and Linux's AWT menus are Motif, which is like a bad flashback to the early 1990s.) PTextArea uses EPopupMenu, so all we need to do is register a new provider that checks for a non-empty selection, and write a few new actions.
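To give an idea of the shape of such a provider, here is a sketch; the MenuItemProvider interface name and its method signature are assumptions for illustration, not necessarily salma-hayek's actual API:

private class SelectionMenuItemProvider implements MenuItemProvider {
    // Assumed callback: invoked each time the popup trigger is actuated.
    public void provideMenuItems(MouseEvent e, Collection<Action> actions) {
        String selection = textArea.getSelectedText();
        if (selection != null && selection.trim().length() > 0) {
            actions.add(new SearchInSpotlightAction());
            actions.add(new SearchInGoogleAction());
            actions.add(new LookUpInDictionaryAction());
        }
    }
}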
The Search in Google action is trivial. BrowserLauncher is from Eric J. Albert's sourceforge project of the same name, slightly modified to use /usr/bin/sensible-browser (I kid you not!) instead of the antiquated netscape.
private class SearchInGoogleAction extends AbstractAction {
public SearchInGoogleAction() {
super("Search in Google");
}
public void actionPerformed(ActionEvent e) {
try {
String encodedSelection =
StringUtilities.urlEncode(textArea.getSelectedText().trim());
BrowserLauncher.openURL("" +
encodedSelection + "&ie=UTF-8&oe=UTF-8");
} catch (Exception ex) {
Log.warn("Exception launching browser", ex);
}
}
}
The Look Up in Dictionary action is also pretty simple, though there are a couple of gotchas:
private class LookUpInDictionaryAction extends AbstractAction {
public LookUpInDictionaryAction() {
super("Look Up in Dictionary");
setEnabled(GuiUtilities.isMacOs());
}
public void actionPerformed(ActionEvent e) {
try {
// We need to rewrite spaces as "%20" for them to find their
// way to Dictionary.app unmolested. The usual url-encoded
// form ("+") doesn't work, for some reason.
String encodedSelection =
textArea.getSelectedText().trim().replaceAll("\\s+", "%20");
// In Mac OS 10.4.1, a dict: URI that causes Dictionary.app to
// start doesn't actually cause the definition to be shown, so
// we need to ask twice. If we knew the dictionary was already
// open, we could avoid the flicker. But we may as well wait
// for Apple to fix the underlying problem.
BrowserLauncher.openURL("dict:///");
BrowserLauncher.openURL("dict:///" + encodedSelection);
} catch (Exception ex) {
Log.warn("Exception launching browser", ex);
}
}
}
I was interested to find while testing this that the equivalent NSTextView menu item doesn't work outside of Safari. I don't know if it's just my machine, but I can't get it to work in Dictionary, Mail, or Text Edit.
The Search in Spotlight action is a bit trickier, though. I had to write a little Objective-C++ program to do the hard part of calling NSPerformService:
#include <Cocoa/Cocoa.h>
#include <iostream>
void doService(const std::string& service, const std::string& text) {
NSPasteboard* pb = [NSPasteboard pasteboardWithUniqueName];
[pb declareTypes:[NSArray arrayWithObject:NSStringPboardType]
owner:nil];
[pb setString:[NSString stringWithUTF8String:text.c_str()]
forType:NSStringPboardType];
NSString* serviceString = [NSString stringWithUTF8String:service.c_str()];
BOOL success = NSPerformService(serviceString, pb);
if (success == NO) {
NSLog(@"NSPerformService failed.");
exit(1);
}
}
static void usage(std::ostream& os, const std::string& name) {
os << "Usage: " << name << " <service> <text>" << std::endl;
os << "Examples:" << std::endl;
os << " " << name << " Spotlight blackberry" << std::endl;
os << " " << name << " 'Mail/Send To' root@localhost" << std::endl;
}
int main(int argCount, char* args[]) {
if (--argCount != 2) {
usage(std::cerr, args[0]);
exit(1);
}
NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
[NSApplication sharedApplication];
doService(args[1], args[2]);
[pool release];
return 0;
}
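On the Java side, the matching action can simply run that helper; the helper's installed name used here ("do-service") is an assumption:

private class SearchInSpotlightAction extends AbstractAction {
    public SearchInSpotlightAction() {
        super("Search in Spotlight");
        setEnabled(GuiUtilities.isMacOs());
    }
    public void actionPerformed(ActionEvent e) {
        try {
            String selection = textArea.getSelectedText().trim();
            // "do-service" is an assumed name for the compiled Objective-C++ helper.
            new ProcessBuilder("do-service", "Spotlight", selection).start();
        } catch (Exception ex) {
            Log.warn("Exception invoking the Spotlight service", ex);
        }
    }
}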
So there you have it: anything NSTextView can do, your Java application can do better. Better in that you can offer some of the functionality on other platforms, and better in that Search in Google works even if your application isn't called Safari!
As usual, the latest version of all this stuff is in salma-hayek, along with the PTextArea I mentioned. | http://elliotth.blogspot.com/2005/05/searching-spotlightgoogledictionaryapp.html | CC-MAIN-2017-22 | refinedweb | 678 | 53.41 |
> COS0.0.1.rar > tss.h
/* tss.h - TSS definitions

   Created: 31/08/04
   Last Modified: 02/09/04

   This stuff is taken from GeekOS, see "COPYING-GEEKOS"
   Modified for COS by Paul Barker. */

#ifndef _COS_TSS_H_
#define _COS_TSS_H_

/* The TSS struct is taken directly from GeekOS, my only addition is the typedef.
   Source for GeekOS: "Protected Mode Software Architecture" by Tom Shanley,
   ISBN 020155447X. */

// NOTE: all reserved fields must be set to zero.
// I should change the types but there's no real point

typedef struct _tss {
	// Link to nested task. For example, if an interrupt is handled
	// by a task gate, the link field will contain the selector for
	// the TSS of the interrupted task.
	unsigned short link;
	unsigned short reserved1;

	// Stacks for privilege levels. esp0/ss0 specifies the kernel stack.
	unsigned long esp0;
	unsigned short ss0;
	unsigned short reserved2;
	unsigned long esp1;
	unsigned short ss1;
	unsigned short reserved3;
	unsigned long esp2;
	unsigned short ss2;
	unsigned short reserved4;

	// Page directory register.
	unsigned long cr3;

	// General processor registers.
	unsigned long eip;
	unsigned long eflags;
	unsigned long eax;
	unsigned long ecx;
	unsigned long edx;
	unsigned long ebx;
	unsigned long esp;
	unsigned long ebp;
	unsigned long esi;
	unsigned long edi;

	// Segment registers and padding.
	unsigned short es;
	unsigned short reserved5;
	unsigned short cs;
	unsigned short reserved6;
	unsigned short ss;
	unsigned short reserved7;
	unsigned short ds;
	unsigned short reserved8;
	unsigned short fs;
	unsigned short reserved9;
	unsigned short gs;
	unsigned short reserved10;

	// GDT selector for the LDT descriptor.
	unsigned short ldt;
	unsigned short reserved11;

	// The debug trap bit causes a debug exception upon a switch
	// to the task specified by this TSS.
	unsigned int debugTrap : 1;
	unsigned int reserved12 : 15;

	// Offset in the TSS specifying where the io map is located.
	unsigned short ioMapBase;
} tss_t;

typedef struct tss_segment {
	tss_t main_tss;
} tss_segment_t;

#endif // !_COS_TSS_H_
| http://read.pudn.com/downloads59/sourcecode/os/207771/include/cos/tss.h__.htm
getsubopt - parse suboption arguments from a string
#include <stdlib.h>
int getsubopt(char **optionp, char * const *keylistp, char **valuep);

The getsubopt() function shall parse suboption arguments in a flag argument that was initially parsed by getopt(). These suboption arguments shall be separated by <comma> characters and each may consist of either a single token, or a token-value pair separated by an <equals-sign>.
The keylistp argument shall be a pointer to a vector of strings. The end of the vector is identified by a null pointer. Each entry in the vector is one of the possible tokens that might be found in *optionp. Since <comma> characters delimit suboption arguments in optionp, they should not appear in any of the strings pointed to by keylistp. Similarly, because an <equals-sign> separates a token from its value, the application should not include an <equals-sign> in any of the strings pointed to by keylistp. The getsubopt() function shall not modify the keylistp vector.
The valuep argument is the address of a value string pointer.
If a <comma> appears in optionp, it shall be interpreted as a suboption separator. After <comma> characters have been processed, if there are one or more <equals-sign> characters in a suboption string, the first <equals-sign> in any suboption string shall be interpreted as a separator between a token and a value. Subsequent <equals-sign> characters in a suboption string shall be interpreted as part of the value.
If the string at *optionp contains only one suboption argument (equivalently, no <comma> characters), <equals.
Parsing Suboptions
The following example uses the getsubopt() function to parse a value argument in the optarg external variable returned by a call to getopt().

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int do_all;
const char *type;
int read_size;
int write_size;
int read_only;

enum
{
    RO_OPTION = 0,
    RW_OPTION,
    READ_SIZE_OPTION,
    WRITE_SIZE_OPTION
};

const char *mount_opts[] =
{
    [RO_OPTION]         = "ro",
    [RW_OPTION]         = "rw",
    [READ_SIZE_OPTION]  = "rsize",
    [WRITE_SIZE_OPTION] = "wsize",
    NULL
};

int main(int argc, char *argv[])
{
    char *subopts, *value;
    int opt;

    while ((opt = getopt(argc, argv, "at:o:")) != -1)
        switch(opt)
        {
        case 'a':
            do_all = 1;
            break;
        case 't':
            type = optarg;
            break;
        case 'o':
            subopts = optarg;
            while (*subopts != '\0')
            {
                char *saved = subopts;
                switch(getsubopt(&subopts, (char **)mount_opts, &value))
                {
                case RO_OPTION:
                    read_only = 1;
                    break;
                case RW_OPTION:
                    read_only = 0;
                    break;
                case READ_SIZE_OPTION:
                    if (value == NULL)
                        abort();
                    read_size = atoi(value);
                    break;
                case WRITE_SIZE_OPTION:
                    if (value == NULL)
                        abort();
                    write_size = atoi(value);
                    break;
                default:
                    /* Unknown suboption. */
                    printf("Unknown suboption `%s'\n", saved);
                    abort();
                }
            }
            break;
        default:
            abort();
        }
/* Do the real work. */
    return 0;
}
If the above example is invoked with:

    program -o ro,rsize=512

then after option parsing, the variable do_all will be 0, type will be a null pointer, read_size will be 512, write_size will be 0, and read_only will be 1. If it is invoked with:

    program -o oops

it will print:

    "Unknown suboption `oops'"
before aborting.
The value of *valuep when getsubopt() returns -1 is unspecified. Historical implementations provide various incompatible extensions to allow an application to access the suboption text that was not found in the keylistp array.
The keylistp argument of getsubopt() is typed as char * const * to match historical practice. However, the standard is clear that implementations will not modify either the array or the strings contained in the array, as if the argument had been typed const char * const *.
None.
getopt
XBD <stdlib.h>
First released in Issue 4, Version 2.
Moved from X/OPEN UNIX extension to BASE.
IEEE Std 1003.1-2001/Cor 1-2002, item XSH/TC1/D6/26 is applied, correcting an editorial error in the SYNOPSIS.
The getsubopt() function is moved from the XSI option to the Base.
POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0260 [196], XSH/TC1-2008/0261 [196], XSH/TC1-2008/0262 [196], XSH/TC1-2008/0263 [196], XSH/TC1-2008/0264 [196], XSH/TC1-2008/0265 [196], XSH/TC1-2008/0266 [196], XSH/TC1-2008/0267 [196], XSH/TC1-2008/0268 [196], and XSH/TC1-2008/0269 [196] are applied.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/getsubopt.html | CC-MAIN-2014-42 | refinedweb | 540 | 55.74
by Flavio De Stefano
How I built the SiriWaveJS library: a look at the math and the code
It was 4 years ago when I had the idea to replicate the Apple® Siri wave-form (introduced with the iPhone 4S) in the browser using pure Javascript.
During the last month, I updated this library by doing a lot of refactoring using ES6 features and reviewed the build process using RollupJS. Now I’ve decided to share what I've learned during this process and the math behind this library.
To get an idea what the output will be, visit the live example; the whole codebase is here.
Additionally, you can download all plots drawn in this article in GCX (OSX Grapher format): default.gcx and ios9.gcx.
The classic wave style
Initially, this library only had the classic wave-form style that all of you remember using in iOS 7 and iOS 8.
It’s no hard task to replicate this simple wave-form, only a bit of math and basic concepts of the Canvas API.
You’re probably thinking that the wave-form is a modification of the Sine math equation, and you're right…well, almost right.
Before starting to code, we’ve got to find our linear equation that will be simply applied afterwards. My favourite plot editor is Grapher; you can find it in any OSX installation under Applications > Utilities > Grapher.app.
We start by drawing the well known:
Perfecto! Now, let’s add some parameters (Amplitude [A], Time coordinate[t] and Spatial frequency [k]) that will be useful later (Read more here:).
Now we have to "attenuate" this function on plot boundaries, so that for |x| > 2 the y values tend to 0. Let's draw separately an equation g(x) that has these characteristics.
This seems to be a good equation to start with. Let’s add some parameters here too to smooth the curve for our purposes:
Now, by multiplying our f(x, …) and g(x, …), and by setting precise parameters to the other static values, we obtain something like this.
- A = 0.9 set the amplitude of the wave to max Y = A
- k = 8 set the spatial frequency and we obtain “more peaks” in the range [-2, 2]
- t = -π/2 set the phase translation so that f(0, …) = 1
- K = 4 set the factor for the “attenuation equation” so that the final equation is y = 0 when |x| ≥ 2
It looks good!
Now, if you notice on the original wave we have other sub-waves that will give a lower value for the amplitude. Let’s draw them for A = {0.8, 0.6, 0.4, 0.2, -0.2, -0.4, -0.6, -0.8}
In the final canvas composition the sub-waves will be drawn with a decreasing opacity tending to 0.
Basic code concepts
What do we do now with this equation?
We use the equation to obtain the Y value for an input X.
Basically, by using a simple for loop from -2 to 2, (the plot boundaries in this case), we have to draw point by point the equation on the canvas using the beginPath and lineTo API.
const ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.strokeStyle = 'white';

for (let i = -2; i <= 2; i += 0.01) {
  const x = _xpos(i);
  const y = _ypos(i);
  ctx.lineTo(x, y);
}
ctx.stroke();
Probably this pseudo-code will clear up these ideas. We still have to implement our _xpos and _ypos functions.
But… hey, what is 0.01⁉️ That value represents how many pixels you move forward in each iteration before reaching the right plot boundary… but what is the correct value?
If you use a really small value (<0.01), you’ll get an insanely precise rendering of the graph but your performance will decrease because you’ll get too many iterations.
Instead, if you use a really big value (> 0.1) your graph will lose precision and you’ll notice this instantly.
You can see that the final code is actually similar to the pseudo-code:
Implement _xpos(i)
You may argue that if we’re drawing the plot by incrementing the x, then _xpos may simply return the input argument.
This is almost correct, but our plot is always drawn from -B to B (B = Boundary = 2).
So, to draw on the canvas via pixel coordinates, we must translate -B to 0, and B to 1 (simple transposition of [-B, B] to [0,1]); then multiply [0,1] and the canvas width (w).
_xpos(i) = w * [ (i + B) / 2B ]
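In code that transposition is a one-liner, with w and B named as above:

function _xpos(i) {
  // w = canvas width in pixels, B = plot boundary (2)
  return w * ((i + B) / (2 * B));
}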
Implement _ypos
To implement _ypos, we should simply write our equation obtained before (closely).
const K = 4;const FREQ = 6;
function _attFn(x) { return Math.pow(K / (K + Math.pow(x, K)), K);}
function _ypos(i) { return Math.sin(FREQ * i - phase) * _attFn(i) * canvasHeight * globalAmplitude * (1 / attenuation);}
Let’s clarify some parameters.
- canvasHeight is Canvas height expressed in PX
- i is our input value (the x)
- phase is the most important parameter, let’s discuss it later
- globalAmplitude is a static parameter that represents the amplitude of the total wave (composed by sub-waves)
- attenuation is a static parameter that changes for each line and represents the amplitude of a wave
Phase
Now let’s discuss about the phase variable: it is the first changing variable over time, because it simulates the wave movement.
What does it mean? It means that for each animation frame, our base controller should increment this value. But to avoid this value throwing a buffer overflow, let’s modulo it with 2π (since Math.sin dominio is already modulo 2π).
phase = (phase + (Math.PI / 2) * speed) % (2 * Math.PI);
We multiply speed and Math.PI so that with speed = 1 we have the maximum speed (why? because sin(0) = 0, sin(π/2) = 1, sin(π) = 0, … ?)
Finalizing
Now that we have all code to draw a single line, we define a configuration array to draw all sub-waves, and then cycle over them.
return [
  { attenuation: -2, lineWidth: 1.0, opacity: 0.1 },
  { attenuation: -6, lineWidth: 1.0, opacity: 0.2 },
  { attenuation: 4, lineWidth: 1.0, opacity: 0.4 },
  { attenuation: 2, lineWidth: 1.0, opacity: 0.6 },
  // basic line
  { attenuation: 1, lineWidth: 1.5, opacity: 1.0 },
];
The iOS 9+ style
Now things start to get complicated. The style introduced with iOS 9 is really complex and the reverse engineering to simulate it is not easy at all! I'm not fully satisfied with the final result, but I'll continue to improve it until I get the desired result.
As previously done, let’s start to obtain the linear equations of the waves.
As you can notice:
- we have three different specular equations with different colours (green, blue, red)
- a single wave seems to be a sum of sine equations with different parameters
- all other colours are a composition of these three base colours
- there is a straight line at the plot boundaries
By picking again our previous equations, let’s define a more complex equation that involves translation. We start by defining again our attenuation equation:
Now, define h(x, A, k, t) function, that is the sine function multiplied for attenuation function, in its absolute value:
We now have a powerful tool.
With h(x), we can now create the final wave-form by summing different h(x) with different parameters involving different amplitudes, frequency and translations. For example, let’s define the red curve by putting random values.
If we do the same with a green and blue curve, this is the result:
This is not quite perfect, but it could work.
To obtain the specular version, just multiply everything by -1.
In the coding side, the approach is the same, we have only a more complex equation for _ypos.
const K = 4;
const NO_OF_CURVES = 3;

// These parameters should be generated randomly
const widths = [ 0.4, 0.6, 0.3 ];
const offsets = [ 1, 4, -3 ];
const amplitudes = [ 0.5, 0.7, 0.2 ];
const phases = [ 0, 0, 0 ];

function _globalAttFn(x) {
  return Math.pow(K / (K + Math.pow(x, 2)), K);
}

function _ypos(i) {
  let y = 0;
  for (let ci = 0; ci < NO_OF_CURVES; ci++) {
    const t = offsets[ci];
    const k = 1 / widths[ci];
    const x = (i * k) - t;
    y += Math.abs( amplitudes[ci] * Math.sin(x - phases[ci]) * _globalAttFn(x) );
  }
  y = y / NO_OF_CURVES;
  return canvasHeightMax * globalAmplitude * y;
}
There’s nothing complex here. The only thing that changed is that we cycle NO_OF_CURVES times over all pseudo-random parameters and we sum all y values.
Before multiplying it for canvasHeightMax and globalAmplitude that give us the absolute PX coordinate of the canvas, we divide it for NO_OF_CURVES so that y is always ≤ 1.
Composite operation
One thing that actually matters here is the globalCompositeOperation mode to set in the Canvas. If you notice, in the original controller, when there’s a overlap of 2+ colors, they’re actually mixed in a standard way.
The default is set to source-over, but the result is poor, even with an opacity set.
You can see all examples of the various globalCompositeOperation modes here:
By setting globalCompositeOperation to "lighter", you notice that the intersection of the colours is nearest to the original.
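Applying it is a single assignment on the 2D context before the curves are drawn:

ctx.globalCompositeOperation = 'lighter';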
Build with RollupJS
Before refactoring everything, I wasn’t satisfied at all with the codebase: old prototype-like classes, a single Javascript file for everything, no uglify/minify and no build at all.
Using the new ES6 feature like native classes, spread operators and lambda functions, I was able to clean everything, split files, and decrease lines of unnecessary code.
Furthermore, I used RollupJS to create a transpiled and minified build in various formats.
Since this is a browser-only library, I decided to create two builds: an UMD (Universal Module Definition) build that you can use directly by importing the script or by using CDN, and another one as an ESM module.
The UMD module is built with this configuration:
{ input: 'src/siriwave.js', output: { file: pkg.unpkg, name: pkg.amdName, format: 'umd' }, plugins: [ resolve(), commonjs(), babel({ exclude: 'node_modules/**' }), ]}
An additional minified UMD module is built with this configuration:
{ input: 'src/siriwave.js', output: { file: pkg.unpkg.replace('.js', '.min.js'), name: pkg.amdName, format: 'umd' }, plugins: [ resolve(), commonjs(), babel({ exclude: 'node_modules/**' }), uglify()]}
Benefiting of UnPKG service, you can find the final build on this URL served by a CDN:
This is the “old style Javascript way” — you can just import your script and then refer in your code by using SiriWave global object.
To provide a more elegant and modern way, I also built an ESM module with this configuration:
{ input: ‘src/siriwave.js’, output: { file: pkg.module, format: ‘esm’ }, plugins: [ babel({ exclude: ‘node_modules/**’ }) ]}
We clearly don’t want the resolve or commonjs RollupJS plugins because the developer transplier will resolve dependencies for us.
You can find the final RollupJS configuration here:
Watch and Hot code reload
Using RollupJS, you can also take advantage of rollup-plugin-livereload and rollup-plugin-serve plugins to provide a better way to work on scripts.
Basically, you just add these plugins when you’re in “developer” mode:
import livereload from 'rollup-plugin-livereload';import serve from 'rollup-plugin-serve';
if (process.env.NODE_ENV !== 'production') { additional_plugins.push( serve({ open: true, contentBase: '.' }) ); additional_plugins.push( livereload({ watch: 'dist' }) );}
We finish by adding these lines into the package.json:
"module": "dist/siriwave.m.js","jsnext:main": "dist/siriwave.m.js","unpkg": "dist/siriwave.js","amdName": "SiriWave","scripts": { "build": "NODE_ENV=production rollup -c", "dev": "rollup -c -w"},
Let’s clarify some parameters:
- module / jsnext:main: path of dist ESM module
- unpkg: path of dist UMD module
- amdName: name of the global object in UMD module
Thanks a lot RollupJS!
Hope that you find this article interesting, see you soon! | https://www.freecodecamp.org/news/how-i-built-siriwavejs-library-maths-and-code-behind-6971497ae5c1/ | CC-MAIN-2019-43 | refinedweb | 1,964 | 63.49
Author: Keats in post puberty
introduction
It is said that StringBuilder is more efficient than String in handling String splicing, but sometimes there may be some deviation in our understanding. Recently, when I tested the efficiency of data import, I found that my previous understanding of StringBuilder was wrong. Later, I found out the logic of this piece by means of practical test + finding principle. Now let's share the process
test case
There are generally two cases when our code splices strings in a loop
- The first is to splice several fields in the object into a new field each time, and then assign a value to the object
- The second operation is to create a string object outside the loop and splice new content to the string each time. After the loop ends, the spliced string is obtained
For both cases, I created two control groups
Group 1:
Concatenate strings in each For loop, that is, use them and destroy them when they are used up. Use String and StringBuilder to splice respectively
/** * String concatenates strings in a loop and is destroyed after one loop */ public static void useString(){ for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { String str = str1 + i + str2 + i + str3 + i + str4 ; } } /** * String builder is used to splice strings in the loop and destroy them after one loop */ public static void useStringBuilder(){ for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { StringBuilder sb = new StringBuilder(); String s = sb.append(str1).append(i).append(str2).append(i).append(str3).append(i).append(str4).toString(); } }
Group 2:
Multiple For loops splice a String. The String is used at the end of the loop and recycled by the garbage collector. It is also spliced using String and StringBuilder respectively
/** * Concatenate multiple loops into a String using String */ public static void useStringSpliceOneStr (){ String str = ""; for (int i = 0; i < CYCLE_NUM_LOWER; i++) { str += str1 + str2 + str3 + str4 + i; } } /** * Concatenate multiple loops into a string with StringBuilder */ public static void useStringBuilderSpliceOneStr(){ StringBuilder sb = new StringBuilder(); for (int i = 0; i < CYCLE_NUM_LOWER; i++) { sb.append(str1).append(str2).append(str3).append(str4).append(i); } }
In order to ensure test quality, the thread sleeps for 2 s before each test item, each method is then run a few times to warm up, and the final figure is the average over 5 timed runs.
public static int executeSometime(int kind, int num) throws InterruptedException { Thread.sleep(2000); int sum = 0; for (int i = 0; i < num + 5; i++) { long begin = System.currentTimeMillis(); switch (kind){ case 1: useString(); break; case 2: useStringBuilder(); break; case 3: useStringSpliceOneStr(); break; case 4: useStringBuilderSpliceOneStr(); break; default: return 0; } long end = System.currentTimeMillis(); if(i > 5){ sum += (end - begin); } } return sum / num; }
Main method
public class StringTest { public static final int CYCLE_NUM_BIGGER = 10_000_000; public static final int CYCLE_NUM_LOWER = 10_000; public static final String str1 = "Zhang San"; public static final String str2 = "Li Si"; public static final String str3 = "Wang Wu"; public static final String str4 = "Zhao Liu"; public static void main(String[] args) throws InterruptedException { int time = 0; int num = 5; time = executeSometime(1, num); System.out.println("String Splicing "+ CYCLE_NUM_BIGGER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(2, num); System.out.println("StringBuilder Splicing "+ CYCLE_NUM_BIGGER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(3, num); System.out.println("String Splice a single string "+ CYCLE_NUM_LOWER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(4, num); System.out.println("StringBuilder Splice a single string "+ CYCLE_NUM_LOWER +" Times," + num + "Time average:" + time + " ms"); } }
test result
The test results are as follows
Result analysis
first group
With 10_000_000 iterations of loop splicing, the efficiency of using String and StringBuilder in the loop is the same! Why?
Use javap -c StringTest.class to decompile and view the files compiled by the two methods:
You can see that, after compiler optimization, the String version also uses StringBuilder for the splicing, so use case 1 and use case 2 have the same efficiency.
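In source-level terms, the rewrite the compiler performs on the body of useString() is roughly equivalent to this (a sketch of the desugaring, not the actual bytecode):

String str = new StringBuilder()
        .append(str1).append(i)
        .append(str2).append(i)
        .append(str3).append(i)
        .append(str4)
        .toString();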
Group 2
The result of the second group is much more telling: because 10_000_000 iterations of String splicing would be far too slow, I used 10_000 splices for this analysis.
Analysis of case 3: although the compiler optimizes the String splicing into StringBuilder calls, it creates a new StringBuilder object inside the loop and destroys it on every iteration, only to create another one in the next cycle. Compared with this, use case 4 avoids those n object creations and destructions, as well as n - 1 conversions of the StringBuilder into a String. The lower efficiency of case 3 is therefore no surprise.
extend
There is another way to write the test of the first group:
/**
 * StringBuilder is created outside the loop and cleared at the start of each iteration
 */
public static void useStringBuilderOut(){
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < CYCLE_NUM_BIGGER; i++) {
        // sb.setLength(0);
        sb.delete(0, sb.length());
        String s = sb.append(str1).append(i).append(str2).append(i).append(str3).append(i).append(str4).toString();
    }
}
Create a StringBuilder outside the loop, empty its contents at the beginning of each iteration, and then splice. Whether sb.setLength(0) or sb.delete(0, sb.length()) is used, this turns out to be slower than creating the String/StringBuilder directly inside the loop. I didn't understand why it is slower; I guessed that creating a new object should be slower than clearing an existing one, so I tested the following:
public static void createStringBuider() { for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { StringBuilder sb = new StringBuilder(); } } public static void cleanStringBuider() { StringBuilder sb = new StringBuilder(); for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { sb.delete(0, sb.length()); } }
But the result is that cleanStringBuider() is faster, and I can't figure out why. If any expert reading this understands the reason, please help analyze it.
conclusion
- The compiler will optimize String splicing to use StringBuilder, but there are still some defects. It is mainly reflected in the use of String splicing in the loop. The compiler will not create a single StringBuilder for reuse
- For the need to splice a String in multiple loops: StringBuilder is fast because it avoids n operations of new object and object destruction, and n - 1 operations of converting StringBuilder into String
- StringBuilder splicing is not applicable to the operation mode of each splicing in the loop. Because the compiler optimized String splicing also uses StringBuilder, the efficiency of both is the same. The latter is easy to write
-END- | https://programmer.help/blogs/do-you-still-use-concatenation-strings-in-the-for-loop-tomorrow-is-not-for-work.html | CC-MAIN-2021-49 | refinedweb | 1,066 | 54.86 |
Is there a better way to do this? I'm passing in arguments to func that are only used by inside_func:
def inside_func(arg1,arg2):
print arg1, arg2
return
def func(arg1, arg2):
inside_func(arg1,arg2)
return
Of course it is.
Your outer function provides a service, and to do its job it may need inputs to work with. How it uses those inputs is up to the function. If it needs another function to do its job and passes the arguments on verbatim, that is an implementation detail.
You are doing nothing more than standard encapsulation and modularisation here. This would be correct programming practice in any language, not just Python.
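If repeating the parameter list itself bothers you, the wrapper can also forward its arguments generically; a small sketch:

def inside_func(arg1, arg2):
    print(arg1, arg2)

def func(*args, **kwargs):
    # func stays a thin service layer; only inside_func knows the details
    inside_func(*args, **kwargs)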
The Python standard library is full of examples; it is often used to provide a simpler interface for quick use-cases. The
textwrap.wrap() function for example:
def wrap(text, width=70, **kwargs):
    """Wrap a single paragraph of text, returning a list of wrapped lines.

    Reformat the single paragraph in 'text' so it fits in lines of no more
    than 'width' columns, and return a list of wrapped lines.  By default,
    tabs in 'text' are expanded with string.expandtabs(), and all other
    whitespace characters (including newline) are converted to space.  See
    TextWrapper class for available keyword args to customize wrapping
    behaviour.
    """
    w = TextWrapper(width=width, **kwargs)
    return w.wrap(text)
This does nothing else but pass the arguments on to other callables, just so your code doesn't have to remember how to use the TextWrapper() class for a quick one-off text wrapping job. | https://codedump.io/share/xo5eO6c7OL4I/1/is-it-pythonic-to-passed-in-arguments-in-a-function-that-will-be-used-by-a-function-inside-it | CC-MAIN-2017-30 | refinedweb | 253 | 64
I appreciate that I may have made a simple error when trying to implement this however...
I have attached an example of the issue I am getting...
REQUIRED INFORMATION
Ext version tested:
Ext 4.0.2a
Ext 4.0.7
Browser versions tested against:
REQUIRED INFORMATION
Ext version tested:
Ext 4.0.7
hi,
Is there some reason why some of the ext-lang-* files in /locale folder are missing values for 'okText'/'cancelText' setting in the parameters...
If you load the grid filter feature and set the "filterable: true" attribute on a grid column (without defining a filters object array), the filter...
Is it just me or are the labels completely misplaced on the top bar chart in Chrome (18)?
...
Ext version tested:
Ext 4.1.0 Beta 2
Browser versions tested against:
FF 10.0.1
Chrome 17
DOCTYPE tested against:
REQUIRED INFORMATION
Ext version tested:
Ext 4.0.7 rev ???
Browser versions tested against:
Chrome, Firefox
DOCTYPE tested against:
HTML5
ItemSelector should return if no items are selected, it did so in 4.0.7.
Ext.Loader.setConfig({enabled: true});
Ext.Loader.setPath('Ext.ux',...
It renders fine, but on scroll I get this in the console. Try buffered sample, switch one column to locked : true.
Uncaught TypeError: Object has...
Maybe I'm reading this Ext.grid.header.Container code wrong, but shouldn't this:
initComponent: function() {
var me = this;
...
At the moment the Ext.Element.repaint method always repaints a DOM element twice if called. (once by adding the "x-repaint" class and once by...
This is broken in 4.1B, has worked fine since Ext 3 - 4.0.7.
var tpl = new Ext.XTemplate(
'<tpl for="kids">',
...
GridPanel doesn't scroll back up to the top row when using a trackpad or mouse wheel scroll action. Can be repeated on a GridPanel with infinity...
I place a grid in the Ext.window.Window. and set animateTarget of the window to a button.
when resizing the window, a loadmask appears, and cannot...
I'd like to point out a certain design inefficiency in the way sprites are added to and removed from groups.
I have a drawing application...
REQUIRED INFORMATION Ext version tested:
Ext 4.0.1
Browser versions tested against:
FF4
Description:
I am using Ext.ClassManager.setNamespace to define various properties. However, if I call it twice with the same namespace but different value, the...
Checkbox Column (Ext.selection.CheckboxModel) does not have a width property, which is causing several issues throughout our application. Namely, the...
Siesta reported this :)
toggleSpinners: function(){
var me = this,
value = me.getValue();
valueIsNull = value...
http://www.sencha.com/forum/forumdisplay.php?80-Ext-Bugs/page233&order=desc | CC-MAIN-2014-52 | refinedweb | 460 | 61.53
Migrating from SDK2 to SDK3 API
The 3.0 API breaks the existing 2.0 APIs in order to provide a number of improvements. Collections and Scopes are introduced. The Document class and structure has been completely removed from the API, and the returned value is now
Result. Retry behaviour is more proactive, and lazy bootstrapping moves all error handling to a single place. Individual behaviour changes across services are explained here.
Fundamentals
The Couchbase SDK team takes semantic versioning seriously, which means that API should not be broken in incompatible ways while staying on a certain major release. This has the benefit that most of the time upgrading the SDK should not cause much trouble, even when switching between minor versions (not just bugfix releases). The downside though is that significant improvements to the APIs are very often not possible, save as pure additions — which eventually lead to overloaded methods.
To support new server releases and prepare the SDK for years to come, we have decided to increase the major version of each SDK and as a result take the opportunity to break APIs where we had to. As a result, migration from the previous major version to the new major version will take some time and effort — an effort to be counterbalanced by improvements to coding time, through the simpler API, and performance. The new API is built on years of hands-on experience with the current SDK as well as with a focus on simplicity, correctness, and performance.
Before this guide dives into the language-specific technical component of the migration, it is important to understand the high level changes first. As a migration guide, this document assumes you are familiar with the previous generation of the SDK and does not re-introduce SDK 2.0 concepts. We recommend familiarizing yourself with the new SDK first by reading at least the getting started guide, and browsing through the other chapters a little.
Terminology
The concept of a
Cluster and a
Bucket remain the same, but a fundamental new layer is introduced into the API:
Collections and their
Scopes.
Collections are logical data containers inside a Couchbase bucket that let you group similar data just like a Table does in a relational database — although documents inside a collection do not need to have the same structure.
Scopes allow the grouping of collections into a namespace, which is very useful when you have multiple tenants accessing the same bucket.
Couchbase Server is including support for collections as a developer preview in version 6.5 — in a future release, it is hoped that collections will become a first class concept of the programming model.
To prepare for this, the SDKs include the feature from SDK 3.0.
In the previous SDK generation, particularly with the
KeyValue API, the focus has been on the codified concept of a
Document.
Documents were read and written and had a certain structure, including the
id/
key, content, expiry (
ttl), and so forth.
While the server still operates on the logical concept of documents, we found that this model in practice didn’t work so well for client code in certain edge cases.
As a result we have removed the
Document class/structure completely from the API.
The new API follows a clear scheme: each command takes required arguments explicitly, and an option block for all optional values.
The returned value is always of type
Result.
This avoids method overloading bloat in certain languages, and has the added benefit of making it easy to grasp APIs evenly across services.
As an example here is a KeyValue document fetch:
$getResult = $collection->get("key", (new GetOptions())->timeout(3000000));
Compare this to a N1QL query:
$queryResult = $cluster->query("select 1=1", (new QueryOptions())->timeout(3000000));
Since documents also fundamentally handled the serialization aspects of content, two new concepts are introduced: the
Serializer and the
Transcoder.
Out of the box the SDKs ship with a JSON serializer which handles the encoding and decoding of JSON.
You'll find the serializer exposes the options for methods like N1QL queries and KeyValue subdocument operations.
The KV API extends the concept of the serializer to the
Transcoder.
Since you can also store non-JSON data inside a document, the
Transcoder allows the writing of binary data as well.
It handles the object/entity encoding and decoding, and if it happens to deal with JSON makes uses of the configured
Serializer internally.
See the Serialization and Transcoding section below for details.
What to look out for
The SDKs are more proactive in retrying with certain errors and in certain situations, within the timeout budget given by the user — as an example, temporary failures or locked documents are now being retried by default — making it even easier to program against certain error cases.
This behavior is customizable in a
RetryStrategy, which can be overridden on a per operation basis for maximum flexibility if you need it.
Note, most of the bootstrap sequence is now lazy (happening behind the scenes). For example, opening a bucket is not raising an error anymore, but it will only show up once you perform an actual operation. The reason behind this is to spare the application developer the work of having to do error handling in more places than needed. A bucket can go down 2ms after you opened it, so you have to handle request failures anyway. By delaying the error into the operation result itself, there is only one place to do the error handling. There will still be situations why you want to check if the resource you are accessing is available before continuing the bootstrap; for this, we have the diagnostics and ping commands at each level which allow you to perform those checks eagerly.
Language Specifics
Now that you are familiar with the general theme of the migration, the next sections dive deep into the specifics. First, installation and configuration are covered, then we talk about exception handling, and then each service (i.e. Key/Value, Query,…) is covered separately.
Installation and Configuration
As with 2.x release, the primary source of artifacts is the release notes page, where we publish links to pre-built binaries, as well as to source tarballs.
SDK 3.x supports PHP interpreters from 7.2 onwards. | https://docs.couchbase.com/php-sdk/3.0/project-docs/migrating-sdk-code-to-3.n.html | CC-MAIN-2020-40 | refinedweb | 1,047 | 50.77
Concepts revolutionise the way we think about and use generic programming. They didn't make it into C++11 or C++17, but with C++20 we will get them with high probability.
Before I write about the use of concepts, I want to make a general remark.
Until C++20 we have in C++ two diametral ways to think about functions or user-defined types (classes). Functions or classes can be defined on specific types or on generic types. In the second case, we call them function or class templates. What are the downsides of each way?
It's quite a job to define for each specific type a function or a class. To avoid that burden, type conversion comes often to our rescue but is also part of the problem. Let's see what I mean.
You have a function getInt(int a) which you can invoke with a double. Now, narrowing conversion takes place.
// narrowingConversion.cpp
#include <iostream>
void needInt(int i){
std::cout << "int: " << i << std::endl;
}
int main(){
std::cout << std::endl;
double d{1.234};
std::cout << "double: " << d << std::endl;
needInt(d);
std::cout << std::endl;
}
I assume that is not the behaviour you wanted. You started with a double and ended with an int.
But conversion works also the other way around.
You have a user-defined type MyHouse. An instance of MyHouse can be constructed in two ways. When invoked without an argument (1), its attribute family is set to an empty string. This means the house is still empty. To easily check if the house is empty or full, I implemented a conversion operator to bool (2). Fine, right? No!
// conversionOperator.cpp
#include <iostream>
#include <string>
struct MyHouse{
MyHouse() = default; // (1)
MyHouse(const std::string& fam): family(fam){}
operator bool(){ return !family.empty(); } // (2)
std::string family = "";
};
void needInt(int i){
std::cout << "int: " << i << std::endl;
}
int main(){
std::cout << std::boolalpha << std::endl;
MyHouse firstHouse;
if (!firstHouse){
std::cout << "The firstHouse is still empty." << std::endl;
};
MyHouse secondHouse("grimm");
if (secondHouse){
std::cout << "Family grimm lives in secondHouse." << std::endl;
}
std::cout << std::endl;
needInt(firstHouse); // (3)
needInt(secondHouse); // (3)
std::cout << std::endl;
}
Now, instances of MyHouse can be used, when an int is required. Strange!
Due to the overloaded operator bool (2), instances of MyHouse can be used as an int and can, therefore, be used in arithmetic expressions: auto res = MyHouse() + 5. This was not my intention! Just for completeness. With C++11 you can declare conversion operators as explicit. Therefore implicit conversions are not allowed.
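Here is a minimal sketch of that C++11 fix, reusing the class from above (illustrative only):

    struct MyHouse{
      MyHouse() = default;
      MyHouse(const std::string& fam): family(fam){}

      explicit operator bool(){ return !family.empty(); }  // no implicit conversion chain any more

      std::string family = "";
    };

    MyHouse house("grimm");
    if (house) { /* fine: contextual conversion to bool is still allowed */ }
    // auto res = house + 5;   // error now: no implicit conversion to int
    // needInt(house);         // error now as well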
My strong belief is that we need the whole magic of conversions in C/C++ mainly for convenience: to deal with the fact that functions accept only specific argument types.
Are templates the cure? No!
Generic functions or classes can be invoked with arbitrary values. If the values do not satisfy the requirements of the function or class, you will get a compile-time error. That sounds fine, but look at what happens.
// gcd.cpp

#include <iostream>

template<typename T>
T gcd(T a, T b){
  if( b == 0 ){ return a; }
  else{
    return gcd(b, a % b);
  }
}

int main(){

  std::cout << std::endl;

  std::cout << gcd(100, 10) << std::endl;      // fine: int supports %
  std::cout << gcd(3.5, 4.0) << std::endl;     // error: % is not defined for double
  std::cout << gcd("100", "10") << std::endl;  // error: % is not defined for C-strings

  std::cout << std::endl;

}
What is the problem with the error message the compiler produces for this program?
Of course, it is quite long and quite difficult to understand, but my crucial concern is a different one. The compilation fails because neither double nor the C-strings support the % operator. This means the error is caused by the failed template instantiation for double and for C-strings. This is too late and, therefore, really bad: no template instantiation for double or C-strings should have been attempted in the first place. The requirements on the arguments should be part of the function declaration and not a side effect of an erroneous template instantiation.
Now concepts come to our rescue.
With concepts, we get something in between. With them, we can define functions or classes that act on semantic categories. The arguments of such functions or classes are neither too specific nor too generic; they are constrained by named sets of requirements such as Integral. A small sketch follows below.
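As a first taste, here is a sketch of the same gcd constrained with the standard std::integral concept from C++20 (a hand-written Integral concept would be used the same way):

    #include <concepts>

    template <std::integral T>      // the requirement is now part of the declaration
    T gcd(T a, T b){
      if( b == 0 ) return a;
      return gcd(b, a % b);
    }

    // gcd(100, 10);     // fine
    // gcd(3.5, 4.0);    // rejected directly at the call site: constraints not satisfied,
    //                   // instead of a deep template-instantiation error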
Sorry for this short post, but one week before my multithreading workshop at CppCon I had neither the time nor, in particular, the resources (no connectivity in the national parks in Washington state) to write a full post. My next post will be special, because I will write about CppCon. Afterward, I will continue my story about generics and, in particular, about concepts.
I am using Flash Builder 4.0.1. I have installed the Flex 4.5 SDK and am trying to run AIR apps from within Flash Builder, but I get the error message below. The ADL debugger is being fed a -runtime parameter pointing to the wrong SDK (4.1.0 instead of 4.5.1). Where do I configure this to point to the new SDK?
When you create a Flex project in Flash Builder, there is a Flex SDK configuration option on the first page of the wizard. If you choose to use a specific SDK rather than the default, you will be able to choose from the SDKs that are included with Flash Builder or select one that you have on your filesystem. However, it sounds like you should be using Flash Builder 4.5 or above rather than Flash Builder 4.0.1.
If you want to alter this value on an existing project, then you can open the Project Properties and look at the 'Flex SDK version' control.
Sounds like you may need to update the namespace version in your -app.xml file.
Flex 4.5.1:
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<application xmlns="">
...
Flex 4.6:
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<application xmlns="">
...
No, that's not the issue. I have already updated the xmlns in the -app.xml file.
The issue is that, after correctly compiling the app against the Flex 4.5 SDK, FB4 then tries to run it through ADL from the Flex 4.1 SDK (see the "-runtime" parameter in the screenshot in my original post above), which causes the error. I want to know how I can tell FB4 to debug via ADL from the Flex 4.5 SDK instead.
This should be a setting in Flash Builder or in a config file somewhere, right??? | https://forums.adobe.com/thread/914186 | CC-MAIN-2017-51 | refinedweb | 313 | 84.27 |
13 November 2013 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Wednesday's midday Americas markets summary:
CRUDE: Dec WTI: $94.18/bbl, up $1.14; Dec Brent: $106.93/bbl, up $1.12
NYMEX WTI crude futures rose in early trading as support from Libyan supply outages outweighed concerns about the US Federal Reserve reducing its economic stimulus.
RBOB: Dec: $2.6403/gal, up by 5.39 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures were higher in early trading on expectations that Thursday’s inventory report will show a decline in supplies. Analysts expect a draw of about 700,000 bbl.
NATURAL GAS: Dec: $3.602/MMBtu, down 1.5 cents
The December front month on the NYMEX natural gas futures market slid through Wednesday morning trading despite the strong demand outlook as traders weighed ongoing high production levels and the likelihood of another rise in inventories being reported by the Energy Information Administration (EIA) in its latest storage report due on Thursday.
ETHANE: steady at 24.75 cents/gal
Ethane spot prices were steady in early trading as demand from petrochemical plants remains steady.
AROMATICS: benzene higher at $4.14-4.20/gal FOB
US November benzene spot prices moved up to $4.14-4.20/gal FOB on Wednesday from $4.10-4.14/gal FOB at the close of the previous day.
OLEFINS: ethylene bid higher at 51.75 cents/lb, PGP offered at 66.25 cents/lb
US November ethylene bid levels moved up to 51.75 cents/lb, compared with 51.00 cents/lb the previous day, against no fresh offers. US polymer-grade propylene (PGP) was offered at 66.25 cents/lb, higher than the previously reported trade at 64.00 cents/lb.
WPF Learner's Guide to Head First C#
Good news! I just approved your request to upgrade your desktop to Windows…

There are many projects in Head First C# where you build Windows Store apps that require Windows 8. In this appendix, you'll use WPF to build them as desktop apps instead.

appendix ii: Windows Presentation Foundation

WPF Learner's Guide to Head First C#

Suzie got her office desktop upgraded in JUST sixteen months. A new company record!

Not running Windows 8? Not a problem. We wrote many chapters in the third edition of Head First C# using the latest technology available from Microsoft, which requires Windows 8 and Visual Studio 2013. But what if you're using this book at work, and you can't install the latest version? That's where Windows Presentation Foundation (or WPF) comes in. It's an older technology, so it works with Visual Studio 2010 and 2008 running on much older editions of Windows. But it's also a core C# technology, so even if you're running Windows 8 it's a good idea to get some experience with WPF. In this appendix, we'll guide you through building most of the Windows Store projects in the book using WPF.
2 same programs new technology Why you should learn WPF Windows Presentation Foundation, or WPF, is a technology that s used to build user interfaces for programs written in.net. WPF programs typically run on the Windows desktop and display their user interfaces in windows. WPF is one of the most popular technologies for developing Windows software, and familiarity with WPF is considered by many employers to be a required skill for professional C# and.net developers. WPF programs use XAML (Extensible Application Markup Language) to lay out their UIs. This is great news for Head First C# readers who have been reading about Windows Store apps. Most of the Windows Store projects in the book can be built for WPF with few or no modifications to the XAML code. I m running Windows 8 and Visual Studio 2013, so I don t care about WPF... right? Some things, like app bars and page navigation, are specific to Windows Store apps. In this appendix, we show you WPF alternatives wherever possible. Every C# developer should work with WPF. Almost every programming language can be used in lots of different environments and operating systems, and C# is no exception. If your goal is to improve as a C# developer, you should go out of your way to work with as many different technologies as possible. And WPF in particular is especially important for C# developers, because there are many programs that use WPF in companies, and this will continue for a long time. If your goal is to use C# in a professional environment, WPF is technology you ll want to list on your resumé. Learning WPF is also great for a hobby programmer who s using Windows 8 and can build all of the code in Head First C#. One of the most effective learning tools you have as a developer is seeing the same problem solved in different ways. This appendix will guide you through building many of the projects in Head First C# using WPF. Seeing those projects built in WPF and Windows 8 will give you valuable perspective, and that s one of the things that helps turn good programmers into great developers. You can download the code for all of the projects in this appendix. Go to the Head First Labs website for more information: 2 Appendix ii
3 windows presentation foundation Build WPF projects in Visual Studio Creating a new WPF application in Visual Studio works just like creating other kinds of desktop applications. If you re using Visual Studio Express 2013, make sure you re using Visual Studio 2013 Express for Desktop (the edition for Windows 8 will not create WPF projects). You can also create programs using Visual Studio 2013 Professional, Premium, or Ultimate. When you create a new project, Visual Studio displays a New Project dialog. Make sure you select Visual C#, and then choose : You can also create C# WPF applications using all editions of Visual Studio 2010, Visual C# 2010 Express, and Visual Studio Note that if you use the Express editions of Visual Studio 2010 or 2008, the project files are initially created in a temporary folder and are not saved to the location specified in the New Project dialog until you use Save or Save All to save your files. WPF can also be used to build XAML browser applications that run inside Internet Explorer and other browsers. We won t be covering it in this appendix, but you can learn more about it here: Microsoft has yet another technology that also uses XAML. It s called Silverlight, and you can read about it here: Did you find an error in this appendix? Please submit it using the Errata page for Head First C# (3rd edition) so we can fix it as quickly as possible! you are here 4 3
4 let s get started How to use this appendix This appendix contains complete replacements for pages in Head First C# (3rd edition). We ve divided this appendix up into individual guides for each chapter, starting with an overview page that has specific instructions for how to work through that chapter: what pages to replace in the chapter, what to read in it, and any specific instructions to help you get the best learning experience. If you re using an old version of Visual Studio, you ll be able to do these projects... but things will be a little harder for you. The team at Microsoft did a really good job of improving the user interface of Visual Studio 2013, especially when it comes to editing XAML. One important feature of Head First C# is its use of the Visual Studio IDE as a tool for teaching, learning, and exploration. This is why we strongly recommend that you use the latest version of Visual Studio if possible. However, we do understand that some readers cannot install Visual Studio (For example, a lot of our readers are using a computer provided by an employer, and do not have administrative privileges to install new software.) We still want you to be able to use our book, even if you re stuck using an old version of Visual Studio! We ll do our best to give you as much guidance as we can. But we also need to strike a balance here, because we re being careful not to compromise the learning for the majority of our readers who are using the latest version of Visual Studio. If you re using Visual Studio 2010 or earlier, and you find yourself stuck because the IDE s user interface doesn t look right or menu options aren t where you expect them to be, we recommend that you enter the XAML and C# code by hand or even better, copy it and paste it into Visual Studio. Once the XAML is correct, it s often easier to track down the feature in the IDE that generated it. We ve made all of the source code in the book available for download, and we encourage you to copy and paste it into your programs anytime you get stuck. Go to the book s website() for more details and links to the source code. You can download the source code directly from com/ but for the replacement chapters in this appendix, make sure that you sure you download the code from the WPF folder. If you try to use the Windows Store code in a WPF project, you'll get frustrating errors. One more thing. This appendix has replacements for pages that you ll find in the printed or PDF version this book, and you can find those pages using their page numbers. However, if you re using a Kindle or another ebook reader, you might not be able to use the page numbers. Instead, just use the section heading to look up the section to replace. For example, this appendix has replacements for pages 72 and 73 section called Build an app from the ground up, which you can find in your ebook reader s Table of Contents underneath Chapter 2. (Exercises like the one on page 83 and the solution on page 85 might not show up in your reader s Table of Contents, but you ll get to the exercises as you go through each chapter.) This will be much easier for you if you download the PDF of this appendix from the book s website. 4 Appendix ii
5 windows presentation foundation Chapter 1 You can build the entire Save the Humans game in WPF using these replacements for pages Build a game, and get a feel for the IDE. The first project in the book walks you through building a complete and fun! video game. The goal of the project is to help you get used to creating user interfaces and writing C# code using the Visual Studio IDE. We recommend that you read through page 11 in the main part of the book, and then flip to the next page in this appendix. We designed pages in this appendix so that they can be 100% replacements for the corresponding pages in the book. Once you ve finished building the WPF version of Save the Humans, you can go on to Chapter 2 in the book. The screenshots in this chapter are from Visual Studio 2013 for Windows Desktop, the latest version of Visual Studio available at this time. If you re using Visual Studio 2010, some of the menu options and windows in the IDE will be different. We ll give you guidance to help you find the right menu options. We worked really hard to keep the page flipping to a minimum, because by reducing distractions we make it easier for you to learn important C# concepts. After you read the first 11 pages of Chapter 1, you won't have to flip back to the main part of the book at all for the rest of the chapter. Then there are just five pages that you need in this appendix for Chapter 2. After that, the book concentrates on building desktop applications, which you can build with any version of Windows. You won't need this appendix again until you get to Chapter 10. you are here 4 5
6 fill in the blanks Start with a blank application Every great app starts with a new project. Choose New Project from the File menu. Make sure you have Visual C# Windows selected and choose WPF Application as the project type. Type Save the Humans as the project name.. 1 Your starting point is the Designer window. Double-click on MainWindow.xaml in the Solution Explorer to bring it up (if it's not already displayed). Find the zoom drop-down in the lower-left corner of the designer and choose Fit all to zoom it out. The designer shows you a preview of the window that you re working on. It looks like a blank window with a default white background. You won t see these buttons in older versions of Visual Studio, only in 2013 (and 2012). Use these three buttons to turn on the grid lines, turn on snapping (which automatically lines up your controls to each other), and turn on snapping to grid lines (which aligns them with the grid). 12 Appendix ii
You are here!

[Project roadmap: Main window (XAML) with its Grid, Canvas, and StackPanel containers; WPF UI controls (Rectangle, Ellipse, ProgressBar, Start button); C# code: Start button Click event handler, target timer Tick event handler, enemy timer Tick event handler, and the StartGame(), AddEnemy(), AnimateEnemy(), and EndTheGame() methods. Right now you are at the Main window.]

The bottom half of the Designer window shows you the XAML code. It turns out your blank window isn't blank at all: it contains a XAML grid. The grid works a lot like a table in an HTML page or Word document. We'll use it to lay out our windows in a way that lets them grow or shrink to different screen sizes and shapes.

You can see the XAML code for the blank window that the IDE generated for you. Keep your eyes on it; we'll add some columns and rows in a minute.

These are the opening and closing tags for a grid that contains controls. When you add rows, columns, and controls to the grid, the code for them will go between these opening and closing tags.

This part of the project has steps numbered 1 to 3. Flip the page to keep going!

This project closely follows Chapter 1. We want to give you a solid learning foundation, so we've designed this project so that it can replace the corresponding pages of Head First C#. Other projects in this appendix will give you all the information that you need to adapt the material in the book. So even when we don't give you one-to-one page replacements, we'll make sure you get all the information you need to do the projects.
8 not so blank after all 2 Hover over the border of the window until an orange triangle and line appear......then click to create a bottom row in the grid. Your app will be a grid with two rows and three columns, with one big cell in the middle that will contain the play area. Start defining rows by hovering over the border of the window until a line and triangle appear: You might need to click inside the window in order to see the triangles for adding rows and columns. WPF apps often need to adapt to different window sizes displayed at different screen resolutions. Over the next few pages you ll explore a lot of different features in the Visual Studio IDE, because we ll be using the IDE as a powerful tool for learning and teaching. You ll use the IDE throughout the book to explore C#. That s a really effective way to get it into your brain! Laying out the window using a grid s columns and rows allows your program to automatically adjust to the window size. After the row is added, the line will change to blue and you ll see the heights of both rows in the border. The height of each row will be a number followed by a star. Don t worry about the numbers for now. Q: But it looks like I already have many rows and columns in the grid. What are those gray lines? A: The gray lines are just Visual Studio giving you a grid of guidelines to help you lay your controls out evenly in the window. You can turn them on and off with the button. None of the lines you see in the designer show up when you run the app outside of Visual Studio. But when you clicked and created a new row, you actually altered the XAML, which will change the way the app behaves when it s compiled and executed. Q: Wait a minute. I wanted to learn about C#. Why am I spending all this time learning about XAML? A: Because WPF apps built in C# almost always start with a user interface that s designed in XAML. That s also why Visual Studio has such a good XAML editor to give you the tools you need to build stunning user interfaces. Throughout the book, you ll learn how to build other types of programs with C#: Windows Store apps, which use XAML, and desktop applications and console applications, which don t. Seeing all of these different technologies will give you a deeper understanding of programming with C#. 14 Appendix ii
9 windows presentation foundation 3 Do the same thing along the top border of the window except this time create two columns, a small one on the left-hand side and another small one on the right-hand side. Don t worry about the row heights or column widths they ll vary depending on where you click. We ll fix them in a minute. Don t worry if your row heights or column widths are different; you ll fix them on the next page. When you re done, look in the XAML window and go back to the same grid from the previous page. Now the column widths and row heights match the numbers on the top and side of your window. Here s the width of the left column you created in step 3 the width matches the width that you saw in the designer. That s because the IDE generated this XAML code for you. Your grid rows and columns are now added! XAML grids are container controls, which means they hold other controls. Grids consist of rows and columns that define cells, and each cell can hold other XAML controls that show buttons, text, and shapes. A grid is a great way to lay out a window, because you can set its rows and columns to resize themselves based on the size of the screen. The humans are preparing. We don t like the looks of this. you are here 4 15
10 let s size up the competition Set up the grid for your window Your program needs to be able to work on different sized windows, and using a grid is a great way to do that. You can set the rows and columns of a grid to a specific pixel height. But you can also use the Star setting, which keeps them the same size proportionally to one another and also to the window no matter how big the window or resolution of the display. If you don t see the numbers like 120* and 19* along the border of your window, click outside the window in the designer. 1 Set the width of the LEFT column. Hover over the number above the leftmost column until a drop-down menu appears. Choose Pixel to change the star to a lock, and then click on the number to change it to 140. Your column s number should now look like this: 2 Repeat for the right column and the bottom row. Make the right column 160 pixels and the bottom row 150 by choosing Pixel and typing 160 or 150 into the box. Set your columns or rows to Pixel to give them a fixed width or height. The Star setting lets a row or column grow or shrink proportionally to the rest of the grid. Use this setting in the designer to alter the Width or Height property in the XAML. If you remove the Width or Height property, it s the same as setting the property to 1*. When you switch the column to pixels, the number changes from a proportional width to the actual pixel width. It s OK if you re not a pro at app design...yet. We ll talk a lot more about what goes into designing a good app later on. For now, we ll walk you through building this game. By the end of the book, you ll understand exactly what all of these things do! 16 Appendix ii
11 3 If you accidentally changed the center column s width to Pixels, you can change it back to 1*. Make the center column the default size. Make sure that the center column width is set to. If it isn t, click on the number above the center column and enter 1. Don t use the drop-down (leave it star) so it looks like the picture below. Then make sure to look back at the other columns to make sure the IDE didn t resize them. If it did, just change them back to the widths you set in steps 1 and 2. windows presentation foundation XAML and C# are case sensitive! Make sure your uppercase and lowercase letters match example code. When you enter 1* into the box, the IDE sets the column to its default width. It might adjust the other columns. If it does, just reset them back to 160 pixels. 4 Look at your XAML code! Click on the grid to make sure it s selected, then look in the XAML window to see the code that you built. The <Grid> line at the top means everything that comes after it is part of the grid. This is how a column is defined for a XAML grid. You added three columns and two rows, so there are three ColumnDefinition tags and two RowDefinition tags. You used the designer to set the height of the bottom row to 150 pixels. In a minute, you ll be adding controls to your grid, which will show up here, after the row and column definitions. You used the column and row drop-downs to set the Width and Height properties. If you re using Visual Studio 2010, the IDE looks different. When you hover over a column size, you ll see this box to select pixel or star: It s possible to edit the column sizes in the designer using the older versions of the IDE, but it s not nearly as easy to do. We recommend that if you re using an older version of the IDE, you create the columns you are and here 4 17 rows, and then edit the XAML row and column definitions by hand.
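If you want to double-check your work at this point, the row and column definitions in your XAML should end up looking roughly like this sketch (the attribute order may differ, based on the widths used above):

    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="140"/>
            <ColumnDefinition Width="*"/>
            <ColumnDefinition Width="160"/>
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="150"/>
        </Grid.RowDefinitions>

        <!-- the controls you add later will go here, after the row and column definitions -->
    </Grid>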
12 take control of your program Add controls to your grid Ever notice how programs are full of buttons, text, pictures, progress bars, sliders, drop-downs, and menus? Those are called controls, and it s time to add some of them to your app inside the cells defined by your grid s rows and columns. If you don t see the toolbox in the IDE, you can open it using the View menu. Use the pushpin to keep it from collapsing. 1 Expand the Common WPF Controls section of the toolbox and drag a into the bottom-left cell of the grid. Then look at the bottom of the Designer window and have a look at the XAML tag that the IDE generated for you. You ll see something like this your margin numbers will be different depending on where in the cell you dragged it, and the properties might be in a different order. The XAML for the button starts here, with the opening tag. When you pin the Toolbox, you can use this tab to open it These are properties. Each property has a name, followed by an equals sign, followed by its value. 2 Drag a into the lower-right cell of the grid. Your XAML will look something like this. See if you can figure out how it determines which row and column the controls are placed in. 18 Appendix ii Click on Pointer in the toolbox, then click on the TextBlock and move it around and watch the IDE update the Margin property in the XAML. We added line breaks to make the XAML easier to read. You can add line breaks, too. Give it a try! If you don t see the toolbox, try clicking on the word Toolbox that shows up in the upper-left corner of the IDE. If it s not there, select Toolbox from the View menu to make it appear.
13 windows presentation foundation 3 Next, expand the All WPF Controls section of the toolbox. Drag a into the bottom-center cell, a into the bottom-right cell (make sure it s below the TextBlock you already put in that cell), and a into the top center cell. Your window should now have controls on it (don t worry if they re placed differently than the picture below; we ll fix that in a minute): When you add the Canvas control, it looks like an empty box. We ll fix that shortly. Here s the button you added in step 1. Here s the TextBlock control you added in step 2. You dragged a ContentControl into the same cell. You just added this ProgressBar. Here s the ContentControl. What do you think it does? 4 You ve got the Canvas control currently selected, since you just added it. (If not, use the pointer to select it again.) Look in the XAML window: It s showing you the XAML tag for the Canvas control. It starts with <Canvas and ends with />, and between them it has properties like Grid.Column="1" (to put the Canvas in the center column) and Height="100" (to set its height in pixels). Try clicking in both the grid and the XAML window to select different controls.... Try clicking this button. It brings up the Document Outline window. Can you figure out how to use it? You ll learn more about it in a few pages. When you drag a control out of the toolbox and onto your window, the IDE automatically generates XAML to put it where you dragged it. you are here 4 19
14 your app s property value is going up Use properties to change how the controls look The Visual Studio IDE gives you fine control over your controls. The Properties window in the IDE lets you change the look and even the behavior of the controls on your window. When you re editing text, use the Escape key to finish. This works for other things in the IDE, too. 1 Change the text of the button. Right-click on the button control that you dragged onto the grid and choose Edit Text from the menu. Change the text to: Start! and see what you did to the button s XAML: Use the Name box to change the name of the control to startbutton When you edit the text in the button, the IDE updates the Content property in the XAML. Use the Properties window to modify the button. Make sure the button is selected in the IDE, and then look at the Properties window in the lower-right corner of the IDE. Use it to change the name of the control to startbutton and center the control in the cell. Once you ve got the button looking right, right-click on it and choose View Source to jump straight to the <Button> tag in the XAML window. You might need to expand the Common and Layout sections. These little squares tell you if the property has been set. A filled square means it s been set; an empty square means it s been left with a default value. When you used Edit Text on the right-click menu to change the button s text, the IDE updated the Content property. Use the and buttons to set the HorizontalAlignment and VerticalAlignment properties to Center and center the button in the cell. Older versions of the IDE use the word Center instead of icons like this. When you dragged the button onto the window, the IDE used the Margin property to place it in an exact position in the cell. Click on the square and choose Reset from the menu to reset the margins to 0. Go back to the XAML window in the IDE and have a look at the XAML that you updated! Use the buttons to set the Width and Height to Auto. 20 Appendix ii The properties may be in a different order. That s OK!
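After those edits, the Button tag in your XAML should look something like this sketch (your property order may differ, and the designer sometimes adds extra attributes):

    <Button x:Name="startbutton" Content="Start!" Grid.Row="1"
            HorizontalAlignment="Center" VerticalAlignment="Center"/>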
15 You can use Edit Undo (or Ctrl-Z) to undo the last change. Do it several times to undo the last few changes. If you selected the wrong thing, you can choose Select None from the Edit menu to deselect. You can also hit Escape to deselect the control. If it s living inside a container like a StackPanel or Grid, hitting Escape will select the container, so you may need to hit it a few times. XAML Main Window and Containers Main window windows presentation foundation Canvas WPF UI Controls Rectangle StackPanel ProgressBar Target timer Tick event handler C# Code Start button Click event handler Enemy timer Tick event handler 3 Change the size and title of the window. Select any of the controls. Then hit Escape, and keep hitting Escape until the outer <Window> tag is displayed in the XAML editor: Grid methods StartGame() AddEnemy() AnimateEnemy() EndTheGame() You are here! Ellipse Rectangle Click in the XAML editor. The <Window> tag has properties for Height and Width. Look for their corresponding values in the Properties window in the IDE: Your TextBlock and ContentControl are in the lower-right cell of the grid. Set the width to 1000 and height to 700, and the window immediately resizes itself to the new size. You can use the Fit all option in the Zoom drop-down to show the whole window in the designer. Notice how the center column and top row resized themselves to fit the new window, while the other rows and columns kept their pixel sizes. Then expand the Common section in the Properties window and set the Title property to Save the Humans. You ll see the window title get updated. 4 5 Update the TextBlock to change its text and its font size. Use the Edit Text right-mouse menu option to change the TextBlock so it says Avoid These (hit Escape to finish editing the text). Then expand the Text section of the Properties window and change the font size to 18 px. This may cause the text to wrap and expand to two lines. If it does, drag the TextBlock to make it wider. Use a StackPanel to group the TextBlock and ContentControl. Make sure that the TextBlock is near the top of the cell, and the ContentControl is near the bottom. Click and drag to select both the TextBlock and ContentControl, and then right-click. Choose from the pop-up menu, then choose. This adds a new control to your form: a StackPanel control. You can select the StackPanel by clicking between the two controls. The StackPanel is a lot like the Grid and Canvas: its job is to hold other controls (it s called a container ), so it s not visible on the form. But since you dragged the TextBlock to the top of the cell and the ContentControl to the bottom, the IDE created the StackPanel so it fills up most of the cell. Click in the middle of the StackPanel to select it, then right-click and choose and to quickly reset its properties, which will set its vertical and horizontal alignment to Stretch. Right-click on the TextBox and ContentControl to reset their properties as well. While you have the ContentControl selected, set its vertical and horizontal alignments to Center. A box appears around the StackPanel if you hover over it. Right-click and reset the layout of the StackPanel, TextBlock, and ContentControl. you are here 4 21
16 you want your game to work, right? Controls make the game work Controls aren t just for decorative touches like titles and captions. They re central to the way your game works. Let s add the controls that players will interact with when they play your game. Here s what you ll build next: You ll create a play area with a gradient background......and you ll work on the bottom row. The user interface for editing colors in earlier versions of Visual Studio is not as advanced, but you should still be able to set the colors so they look correct. The Document Outline window is also a little more primitive, but it still works. However, there is not an easy way to visually create a template in Visual Studio The easiest way to do this in the old version of the IDE is to copy the entire <Window.Resources> section (up through the closing </Window.Resources> tag) from the downloadable source code and paste it into your XAML just above the opening <Grid> tag. Make sure you download the code from the WPF folder! Then you can select the ContentControl and use the Properties window to set the Template property to EnemyTemplate. Your enemies will already look like evil aliens, so make sure you still read pages 44 and 45. You ll make the ProgressBar as wide as its column......and you ll use a template to make your enemy look like this. 1 Update the ProgressBar. Right-click on the ProgressBar in the bottom-center cell of the grid, choose the Layout menu option, and then choose Reset All to reset all the properties to their default values. Use the Height box in the Layout section of the Properties window to set the Height to 20. The IDE stripped all of the properties from the XAML, and then added the new Height: You can also get to the Document Outline by choosing the View Other Windows menu. You can also open the Document Outline by clicking the tab on the side of the IDE. 2 Turn the Canvas control into the gameplay area. Remember that Canvas control that you dragged into the center square? It s hard to see it right now because a Canvas control is invisible when you first drag it out of the toolbox, but there s an easy way to find it. Click the very small the XAML window to bring up the Document Outline. Click on select the Canvas control. Make sure the Canvas control is selected, then use the Name box in the Properties window to set the name to playarea. 22 Appendix ii Click on the left-hand tab, then on the starting color for the gradient. Then click on the right-hand tab and choose the ending color. button above to Once you change the name, it ll show up as playarea instead of [Canvas] in the Document Outline window. After you ve named the Canvas control, you can close the Document Outline window. Then use the and buttons in the Properties window to set its vertical and horizontal alignments to Stretch, reset the margins, and click both buttons to set the Width and Height to Auto. Then set its Column to 0, and its ColumnSpan (next to Column) to 3. Finally, open the Brush section of the Properties window and use the button to give it a gradient. Choose the starting and ending colors for the gradient by clicking each of the tabs at the bottom of the color editor and then clicking a color.
17 3 Create the enemy template. windows presentation foundation Your game will have a lot of enemies bouncing around the screen, and you re going to want them all to look the same. Luckily, XAML gives us templates, which are an easy way to make a bunch of controls look alike. Next, right-click on the ContentControl in the Document Outline window. Choose Edit Template, then choose Create Empty... from the menu. Name it EnemyTemplate. The IDE will add the template to the XAML. You re flying blind for this next bit the designer won t display anything for the template until you add a control and set its height and width so it shows up. Don t worry; you can always undo and try again if something goes wrong. 4 Your newly created template is currently selected in the IDE. Collapse the Document Outline window so it doesn t overlap the Toolbox. Your template is still invisible, but you ll change that in the next step. If you accidentally click out of the control template, you can always get back to it by opening the Document Outline, right-clicking on the Content Control, and choosing Edit Template Edit Current. Edit the enemy template. Add a red circle to the template: Double-click on in the Toolbox to add an ellipse. Set the ellipse s Height and Width properties to 100, which will cause the ellipse to be displayed in the cell. Reset the Margin, HorizontalAlignment, and VerticalAlignment properties by clicking their squares and choosing Reset. Go to the Brush section of the Properties window and click on to select a solid-color brush. Color your ellipse red by clicking in the color selector and dragging to the upper-right corner. The XAML for your ContentControl now looks like this: You can also use the Document Outline window to select the grid if it gets deselected. Make sure you don t click anywhere else in the designer until you see the ellipse. That will keep the template selected. Click in this color selector and drag to the upper-right corner. 5 Use the Document Outline to modify the StackPanel, TextBlock, and Grid controls. Go back to the Document Outline (if you see at the top of the Document Outline window, just click to get back to the Window outline). Select the StackPanel control, make sure its vertical and horizontal alignments are set to center, and clear the margins. Then do the same for the TextBlock, and use the Properties window to set the Foreground property to white using the color selector. Finally, select the Grid, then open the Brush section of properties and click Scroll around your window s XAML window and see if you can find where EnemyTemplate is defined. It should be right below the AppName resource. Click here and use the color selector to make the TextBlock white. to give it a black Background. you are here 4 23 You re almost done laying out the form! Flip the page for the last steps...
18 check out the window you built 6 If you used the designer to create your human, make sure its source matches this XAML Appendix ii Add the human to the Canvas. You ve got two options for adding the human. The first option is to follow the next three paragraphs. The second, quicker option is to just type the four lines of XAML into the IDE. It s your choice! Select the Canvas control, and then open the All XAML Controls section of the toolbox and double-click on Ellipse to add an Ellipse control to the Canvas. Select the Canvas control again and double-click on Rectangle. The Rectangle will be added right on top of the Ellipse, so drag the Rectangle below it. Hold down the Shift key and click on the Ellipse so both controls are selected. Right-click on the Ellipse, choose Group Into, and then StackPanel. Select the Ellipse, use the solid brush property to change its color to white, and set its Width and Height properties to 10. Then select the Rectangle, make it white as well, and change its Width to 10 and its Height to 25. Use the Document Outline window to select the Stack Panel (make sure you see at the top of the Properties window). Reset its margins, then click both buttons to set the Width and Height to Auto. Then use the Name box at the top of the window to set its name to human. Here s the XAML you generated: You might also see a Stroke property on the Ellipse and Rectangle set to "Black". (If you don't see one, try adding it. What happens?) Go back to the Document Outline window to see how your new controls appear: If human isn't indented underneath playarea, click and drag human onto it. Add the Game Over text. When your player s game is over, the game will need to display a Game Over message. You ll do it by adding a TextBlock, setting its font, and giving it a name: Select the Canvas, and then drag a TextBlock out of the toolbox and onto it. Use the Name box in the Properties window to change its name to gameovertext. Use the Text section of the Properties window to change the font to Arial, change the size to 100 px, and make it Bold and Italic. Click on the TextBlock and drag it to the middle of the Canvas. Edit the text so it says Game Over. If you choose to type this into the XAML window of the IDE, make sure you do it directly above the </Canvas> tag. That s how you indicate that the human is contained in the Canvas. You gave the Canvas control the name playarea in step 2, so it shows up in the Document Outline window. Try hovering over the controls in it. When you drag a control around a Canvas, its Left and Top properties are changed to set its position. If you change the Left and Top properties, you move the control.
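The four lines of XAML mentioned in step 6 are not reproduced here, but they should look roughly like this sketch, placed just above the closing </Canvas> tag (exact brushes and any extra attributes the designer adds may differ):

    <StackPanel x:Name="human" Orientation="Vertical">
        <Ellipse Fill="White" Stroke="Black" Height="10" Width="10"/>
        <Rectangle Fill="White" Stroke="Black" Height="25" Width="10"/>
    </StackPanel>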
19 windows presentation foundation 8 Add the target portal that the player will drag the human onto. There s one last control to add to the Canvas: the target portal that your player will drag the human into. (It doesn t matter where in the Canvas you drag it.) Select the Canvas control, and then drag a Rectangle control onto it. Use the button in the Brushes section of the Properties window to give it a gradient. Set its Height and Width properties to 50. Turn your rectangle into a diamond by rotating it 45 degrees. Open the Transform section of the Properties window to rotate the Rectangle 45 degrees by clicking and setting the angle to 45. Finally, use the Name box in the Properties window to give it the name target. 9 Take a minute and double-check a few things. Open the Document Outline window and make sure that the human StackPanel, gameovertext TextBlock, and target Rectangle are indented underneath the playarea Canvas control, which is indented under the second [Grid]. Select the playarea Canvas control and make sure its Height and Width are set to Auto. These are all things that could cause bugs in your game that will be difficult to track down. Your Document Outline window should look like this: Congratulations you ve finished building the window for your app! We collapsed human to make it obvious that it s indented underneath playarea, along with gameovertext and target. It s okay if the controls are in a different order (you can even drag them up an down!), as long as the indenting is correct that s how you know which controls are inside other container controls. you are here 4 25
20 you took control to change text displayed inside your control Solution on page 35 Here s a hint: you can use the Search box in the Properties window to find properties but some of these properties aren t on every type of control. 26 Appendix ii
21 windows presentation foundation You ve set the stage for the game Your window is now all set for coding. You set up the grid that will serve as the basis of your window, and you added controls that will make up the elements of the game.() The first step you did was to create the project and set up the grid. Then you added controls to your window. The next step is to write code that uses them. Visual Studio gave you useful tools for laying out your window, but all it really did was help you create XAML code. You re the one in charge! you are here 4 27
22 keep your stub for re-entry What you ll do next Now comes the fun part: adding the code that makes your game work. You ll do it in three stages: first you ll animate your enemies, then you ll let your player interact with the game, and finally you ll add polish to make the game look better. First you ll animate the enemies... The first thing you ll do is add C# code that causes enemies to shoot out across the play area every time you click the Start button. A lot of programmers build their code in small increments, making sure one piece works before moving on to the next one. That s how you ll build the rest of this program. You ll start by creating a method called AddEnemy() that adds an animated enemy to the Canvas control. First you ll hook it up to the Start button so you can fill your window up with bouncing enemies. That will lay the groundwork to build out the rest of the game....then you ll add the gameplay... To make the game work, you ll need the progress bar to count down, the human to move, and the game to end when the enemy gets him or time runs out. You used a template to make the enemies look like red circles. Now you ll update the template to make them look like evil alien heads....and finally, you ll make it look good. 28 Appendix ii
23 windows presentation foundation Add a method that does something It s time to start writing some C# code, and the first thing you ll do is add a method and the IDE can give you a great starting point by generating code. When you re editing a window in the IDE, double-clicking on any of the toolbox controls causes the IDE to automatically add code to your project. Make sure you ve got the window designer showing in the IDE, and then double-click on the Start button. The IDE will add code to your project that gets run anytime a user clicks on the button. You should see some code pop up that looks like this: When you double-clicked the button control, the IDE created this method. It will run when a user clicks the Start! button in the running application. Use the IDE to create your own method Click between the { brackets and type this, including the parentheses and semicolon: The red squiggly line is the IDE telling you there s a problem, and the blue box is the IDE telling you that it might have a solution. Notice the red squiggly line underneath the text you just typed? That s the IDE telling you that something s wrong. If you click on the squiggly line, a blue box appears, which is the IDE s way of telling you that it might be able to help you fix the error. Hover over the blue box and click the icon that pops up. You ll see a box asking you to generate a method stub. What do you think will happen if you click it? Go ahead and click it to find out! The IDE also added this to the XAML. See if you can find it. You ll learn more about what this is in Chapter 2. Q: What s a method? A: A method is just a named block of code. We ll talk a lot more about methods in Chapter 2. Q: And the IDE generated it for me? A: Yes...for now. A method is one of the basic building blocks of programs you ll write a lot of them, and you ll get used to writing them by hand. you are here 4 29
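Putting those two steps together, the code should end up looking roughly like this (the handler name depends on what you named your button, and the generated stub throws NotImplementedException until you fill it in):

    private void startbutton_Click(object sender, RoutedEventArgs e)
    {
        AddEnemy();
    }

    private void AddEnemy()
    {
        throw new NotImplementedException();   // the generated stub; you'll replace this next
    }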
24 intelligent and sensible Fill in the code for your method It s time to make your program do something, and you ve got a good starting point. The IDE generated a method stub for you: the starting point for a method that you can fill in with code. 1 Delete the contents of the method stub that the IDE generated for you. C# code must be added exactly as you see it here. It s really easy to throw off your code. When you re adding C# code to your program, the capitalization has to be exactly right, and make sure you get all of the parentheses, commas, and semicolons. If you miss one, your program won t work! 2 Select this and delete it. You ll learn about exceptions in Chapter 12. Start adding code. Type the word Content into the method body. The IDE will pop up a window called an IntelliSense Window with suggestions. Choose ContentControl from the list. 3 Finish adding the first line of code. You ll get another IntelliSense window after you type new. 30 Appendix ii This line creates a new ContentControl object. You ll learn about objects and the new keyword in Chapter 3, and reference variables like enemy in Chapter 4.
25 windows presentation foundation 4 Before you fill in the AddEnemy() method, you ll need to add a line of code near the top of the file. Find the line that says public partial class MainWindow : Window and add this line after the bracket ({): This is called a field. You ll learn more about how it works in Chapter 4. 5 Finish adding the method. You ll see some squiggly red underlines. The ones Do you see a squiggly underline under AnimateEnemy() will go away when you generate its method stub. under playarea? Go back to the XAML editor and make sure you set the name of the Canvas control to playarea. This line adds your new enemy control to a collection called Children. You ll learn about collections in Chapter 8. 6 If you need to switch between the XAML and C# code, use the tabs at the top of the window. Use the blue box and the button to generate a method stub for AnimateEnemy(), just like you did for AddEnemy(). This time it added four parameters called enemy, p1, p2, and p3. Edit the top line of the method to change the last three parameters. Change the property p1 to from, the property p2 to to, and the property p3 to propertytoanimate. Then change any int types to double. You ll learn about methods and parameters in Chapter 2. The IDE may generate the method stub with int types. Change them to double. You ll learn about types in Chapter 4. Flip the page to see your program run! you are here 4 31
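For reference, here is a sketch of roughly where this page ends up; if anything doesn't match what you typed, the downloadable WPF code for this appendix is the authoritative version:

    Random random = new Random();

    private void AddEnemy()
    {
        ContentControl enemy = new ContentControl();
        enemy.Template = Resources["EnemyTemplate"] as ControlTemplate;
        AnimateEnemy(enemy, 0, playarea.ActualWidth - 100, "(Canvas.Left)");
        AnimateEnemy(enemy, random.Next((int)playarea.ActualHeight - 100),
                     random.Next((int)playarea.ActualHeight - 100), "(Canvas.Top)");
        playarea.Children.Add(enemy);   // adds the new enemy to the Canvas's Children collection
    }

    private void AnimateEnemy(ContentControl enemy, double from, double to, string propertyToAnimate)
    {
        throw new NotImplementedException();   // generated stub; you'll fill this in on the next page
    }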
26 ok, that s pretty cool Finish the method and run your program Your program is almost ready to run! All you need to do is finish your AnimateEnemy() method. Don t panic if things don t quite work yet. You may have missed a comma or some parentheses when you re programming, you need to be really careful about those things! 1 Statements like these let you use code from.net libraries that come with C#. You ll learn more about them in Chapter 2. 2 Add a using statement to the top of the file. Scroll all the way to the top of the file. The IDE generated several lines that start with using. Add one more to the bottom of the list: You ll need this line to make the next bit of code work. You can use the IntelliSense window to get it right and don t forget the semicolon at the end. This using statement lets you use animation code from the.net Framework in your program to move the enemies on your screen. Add code that creates an enemy bouncing animation. You generated the method stub for the AnimateEnemy() method on the previous page. Now you ll add its code. It makes an enemy start bouncing across the screen. Still seeing red? The IDE helps you track down problems. If you still have some of those red squiggly lines, don t worry! You probably just need to track down a typo or two. If you re still seeing squiggly red underlines, it just means you didn t type in some of the code correctly. We ve tested this chapter with a lot of different people, and we didn t leave anything out. All the code you need to get your program working is in these pages. You ll learn about object initializers like this in Chapter 4. And you ll learn about animation in Chapter 16. This code makes the enemy you created move across playarea. If you change 4 and 6, you can make the enemies move slower or faster. 3 Look over your code. You shouldn t see any errors, and your Error List window should be empty. If not, double-click on the error in the Error List. The IDE will jump your cursor to the right place to help you track down the problem. If you can t see the Error List window, choose Error List from the View menu to show it. You ll learn more about using the error window and debugging your code in Chapter Appendix ii
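The animation code described in step 2 should come out looking roughly like this sketch (again, the downloadable code is authoritative):

    private void AnimateEnemy(ContentControl enemy, double from, double to, string propertyToAnimate)
    {
        Storyboard storyboard = new Storyboard() { AutoReverse = true, RepeatBehavior = RepeatBehavior.Forever };
        DoubleAnimation animation = new DoubleAnimation()
        {
            From = from,
            To = to,
            Duration = new Duration(TimeSpan.FromSeconds(random.Next(4, 6)))  // change 4 and 6 to speed enemies up or slow them down
        };
        Storyboard.SetTarget(animation, enemy);
        Storyboard.SetTargetProperty(animation, new PropertyPath(propertyToAnimate));
        storyboard.Children.Add(animation);
        storyboard.Begin();
    }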
27 windows presentation foundation Here s a hint: if you move too many windows around your IDE, you can always reset by choosing Reset Window Layout from the Window menu. 4 Start your program. Find the button at the top of the IDE. This starts your program running. This button starts your program. 5 Now your program is running! When you start your program, the main window will be displayed. Click the Start! button a few times. Each time you click it, a circle is launched across your canvas. You built something cool! And it didn t take long, just like we promised. But there s more to do to get it right. If the enemies aren t bouncing, or if they leave the play area, double-check the code. You may be missing parentheses or keywords. 6 Stop your program. Press Alt-Tab to switch back to the IDE. The button in the toolbar has been replaced with to break, stop, and restart your program. Click the square to stop the program running. you are here 4 33
28 what you ve done, where you re going Here s what you ve done so far Congratulations! You ve built a program that actually does something. It s not quite a playable game, but it s definitely a start. Let s look back and see what you built.() We ve gotten a good start by building the user interface... but we still need the rest of the C# code to make the game actually work. This step is where we actually write C# code that makes the gameplay run. Visual Studio can generate code for you, but you need to know what you want to build BEFORE you start building it. It won t do that for you! 34 Appendix ii
29 Here s the solution for the Who Does What exercise on page 28. We ll give you the answers to the pencil-and-paper puzzles and exercises, but they won t always be on the next page. windows presentation foundation solution text or graphics in your control Remember how you set the Name of the Canvas control to playarea? That set its x:name property in the XAML, which will come in handy in a minute when you write C# code to work with the Canvas. you are here 4 35
30 tick tick tick Add timers to manage the gameplay Let s build on that great start by adding working gameplay elements. This game adds more and more enemies, and the progress bar slowly fills up while the player drags the human to the target. You ll use timers to manage both of those things. The MainWindow.Xaml.cs file you ve been editing contains the code for a class called MainWindow. You ll learn about classes in Chapter Add another line to the top of your C# code. You ll need to add one more using line right below the one you added a few pages ago: Then go up to the top of the file where you added that Random line. Add three more lines: Add a method for one of your timers. Find this code that the IDE generated: This using statement lets you use DispatcherTimers. Add these three lines below the one you added before. These are fields, and you ll learn about them in Chapter 4. Tick Tick Tick Put your cursor right after the semicolon, hit Enter two times, and type enemytimer. (including the period). As soon as you type the dot, an IntelliSense window will pop up. Choose Tick from the IntelliSense window and type the following text. As soon as you enter += the IDE pops up a box: 36 Appendix ii Press the Tab key. The IDE will pop up another box: Press Tab one more time. Here s the code the IDE generated for you: The IDE generated a method for you called an event handler. You ll learn about event handlers in Chapter 15. Timers tick every time interval by calling methods over and over again. You ll use one timer to add enemies every few seconds, and the other to end the game when time expires.
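A sketch of where this page leaves you (the name of the bool field is an assumption; any flag meaning "the player is currently dragging the human" will do):

    DispatcherTimer enemytimer = new DispatcherTimer();
    DispatcherTimer targettimer = new DispatcherTimer();
    bool humanCaptured = false;   // assumed name

    public MainWindow()
    {
        InitializeComponent();

        enemytimer.Tick += enemytimer_Tick;     // adds an enemy on every tick
        targettimer.Tick += targettimer_Tick;   // advances the progress bar on every tick
    }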
31 3 It s normal to add parentheses () when writing about a method. Finish the MainWindow() method. You ll add another Tick event handler for the other timer, and you ll add two more lines of code. Here s what your finished MainWindow() method and the two methods the IDE generated for you should look like: windows presentation foundation Right now your Start button adds bouncing enemies to the play area. What do you think you ll need to do to make it start the game instead? Try changing these numbers once your game is finished. How does that change the gameplay? The IDE generated these lines as placeholders when you pressed Tab to add the Tick event handlers. You ll replace them with code that gets run every time the timers tick. 4 Did the IDE keep trying to capitalize the P in progressbar? That s because there was no lowercase-p progressbar, and the closest match it could find was the type of the control. Add the EndTheGame() method. Go to the new targettimer_tick() method, delete the line that the IDE generated, and add the following code. Type EndTheGame() and generate a method stub for it, just like before: Notice how progressbar has an error? That s OK. We did this on purpose (and we re not even sorry about it!) to show you what it looks like when you try to use a control that doesn t have a name, or has a typo in the name. Go back to the XAML code (it s in the other tab in the IDE), find the ProgressBar control that you added to the bottom row, and change its name to progressbar. Next, go back to the code window and generate a method stub for EndTheGame(), just like you did a few pages ago for AddEnemy(). Here s the code for the new method: If gameovertext comes up as an error, it means you didn t set the name of the Game Over TextBlock. Go back and do it now. If you closed the Designer tab that had the XAML code, double-click on MainWindow.xaml in the Solution Explorer window to bring it up. This method ends the game by stopping the timers, making the Start button visible again, and adding the GAME OVER text to the play area. you are here 4 37
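Pulling the pieces on this page together, here is a sketch of the two extra constructor lines and the two methods. The interval values are consistent with the gameplay described later (a new enemy every two seconds, a progress bar that fills slowly); tweak them to taste, and treat the downloadable code as authoritative:

    // the two extra lines in MainWindow(), after the Tick handlers are hooked up:
    enemytimer.Interval = TimeSpan.FromSeconds(2);     // a new enemy every two seconds
    targettimer.Interval = TimeSpan.FromSeconds(.1);   // the progress bar updates ten times a second

    private void targettimer_Tick(object sender, EventArgs e)
    {
        progressbar.Value += 1;
        if (progressbar.Value >= progressbar.Maximum)
            EndTheGame();
    }

    private void EndTheGame()
    {
        if (!playarea.Children.Contains(gameovertext))
        {
            enemytimer.Stop();
            targettimer.Stop();
            startbutton.Visibility = Visibility.Visible;
            playarea.Children.Add(gameovertext);
        }
    }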
Make the Start button work

Remember how you made the Start button fire circles into the Canvas? Now you'll fix it so it actually starts the game.

Make the Start button start the game. Find the code you added earlier to make the Start button add an enemy, and change that line so it calls StartGame() instead. When you change this line, you make the Start button start the game instead of just adding an enemy to the playArea Canvas.

Add the StartGame() method. Generate a method stub for the StartGame() method and fill in the stub method that the IDE added. (You'll learn about IsHitTestVisible in Chapter 15.) Did you forget to set the names of the target Rectangle or the human StackPanel? You can look a few pages back to make sure you set the right names for all the controls.

Make the enemy timer add the enemy. Find the enemyTimer_Tick() method that the IDE added for you and replace its contents with a single call to AddEnemy(). A sketch of all three changes follows below.

Ready Bake Code: We're giving you a lot of code to type in. By the end of the book, you'll know what all this code does; in fact, you'll be able to write code just like it on your own. For now, your job is to make sure you enter each line accurately and to follow the instructions exactly. This will get you used to entering code and will help give you a feel for the ins and outs of the IDE. If you get stuck, you can download working versions of MainWindow.xaml and MainWindow.xaml.cs, or copy and paste XAML or C# code for each individual method. One more thing... if you download code for this project (or anything else in this appendix), make sure you get it from the WPF folder! If you try to use Windows Store code with your WPF project, it won't work.

Once you're used to working with code, you'll be good at spotting those missing parentheses, semicolons, etc. Are you seeing errors in the Error List window that don't make sense? One misplaced comma or semicolon can cause two, three, four, or more errors to show up. Don't waste your time trying to track down every typo! Just go to the Head First Labs web page; we made it really easy for you to copy and paste all the code in this program. There's also a link to the Head First C# forum, which you can check for tips to get this game working!
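A sketch of all three changes is below. The method bodies are illustrative (the downloadable code is the authority); the names startButton_Click, human, target, playArea, and progressBar are assumed from the controls and handlers you created earlier.

private void startButton_Click(object sender, RoutedEventArgs e)
{
    StartGame();   // instead of the AddEnemy() call it used to make
}

private void StartGame()
{
    human.IsHitTestVisible = true;
    humanCaptured = false;
    progressBar.Value = 0;
    startButton.Visibility = Visibility.Collapsed;   // hide the button while the game runs
    playArea.Children.Clear();                       // clear out any leftover enemies
    playArea.Children.Add(target);
    playArea.Children.Add(human);
    enemyTimer.Start();
    targetTimer.Start();
}

private void enemyTimer_Tick(object sender, EventArgs e)
{
    AddEnemy();   // replaces the placeholder line the IDE generated
}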
33 windows presentation foundation Run the program to see your progress Your game is coming along. Run it again to see how it s shaping up. When you press the Start! button, it disappears, clears the enemies, and starts the progress bar filling up. The play area slowly starts to fill up with bouncing enemies. Alert! Our spies have reported that the humans are building up their defenses! When the progress bar at the bottom fills up, the game ends and the Game Over text is displayed. The target timer should fill up slowly, and the enemies should appear every two seconds. If the timing is off, make sure you added all the lines to the MainWindow() method. What do you think you ll need to do to get the rest of your game working? Flip the page to find out! you are here 4 39
Add code to make your controls interact with the player

You've got a human that the player needs to drag to the target, and a target that has to sense when the human's been dragged to it. It's time to add code to make those things work. Make sure you switch back to the IDE and stop the app before you make more changes to the code.

Go to the XAML designer and use the Document Outline window to select human (remember, it's the StackPanel that contains a circle and a rectangle). Then go to the Properties window and press the button to switch it to show event handlers. You can use these buttons to switch between showing properties and event handlers in the Properties window; you'll learn more about the event handlers in the Properties window in Chapter 4. Find the MouseDown row and double-click in the empty box.

Now go back and check out what the IDE added to your XAML for the StackPanel. It also generated a method stub for you. Right-click on human_MouseDown in the XAML and choose Navigate to Event Handler to jump straight to the C# code.

Fill in the C# code. If you go back to the designer and click on the StackPanel again, you'll see that the IDE filled in the name of the new event handler method. You'll be adding more event handler methods the same way.
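A sketch of the MouseDown handler is below. The enemyTimer check and the humanCaptured field are assumptions that line up with how the game behaves; your typed-in or downloaded code may differ in the details.

// (This needs using System.Windows.Input; at the top of the file for MouseButtonEventArgs.)
private void human_MouseDown(object sender, MouseButtonEventArgs e)
{
    // Only let the player pick up the human while a game is running.
    if (enemyTimer.IsEnabled)
    {
        humanCaptured = true;
        human.IsHitTestVisible = false;   // so MouseEnter events reach the target and enemies underneath
    }
}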
Make sure you add the right event handler! You added a MouseDown event handler to the human, but now you're adding a MouseEnter event handler to the target. Use the Document Outline window to select the Rectangle named target, and then use the event handlers view of the Properties window to add a MouseEnter event handler and fill in its code. When the Properties window is in the mode where it displays event handlers, double-clicking on an empty event handler box causes the IDE to add a method stub for it. You'll need to switch your Properties window back to show properties instead of event handlers.

Now you'll add two more event handlers, this time to the playArea Canvas control. You'll need to find the [Grid] in the Document Outline, select it, and set its name to grid. Then you can add the methods that handle the MouseMove and MouseLeave events for the Canvas. That's a lot of parentheses! Be really careful and get them right. The two vertical bars (||) are a logical operator; you'll learn about them in Chapter 2. You can make the game more or less sensitive by changing the 3s to a lower or higher number. Make sure you put the right code in the correct event handler! Don't accidentally swap them. A sketch of these handlers appears below.
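Here's a sketch of what these three handlers can look like. Treat it as illustrative rather than exact (for example, it asks WPF for the pointer position relative to the playArea Canvas directly); the || operator and the 3s mentioned above belong to the sensitivity check in the MouseMove handler.

private void target_MouseEnter(object sender, MouseEventArgs e)
{
    if (targetTimer.IsEnabled && humanCaptured)
    {
        // The human made it: reset the progress bar, move the target somewhere new,
        // and release the human so he can be picked up again.
        progressBar.Value = 0;
        Canvas.SetLeft(target, random.Next(100, (int)playArea.ActualWidth - 100));
        Canvas.SetTop(target, random.Next(100, (int)playArea.ActualHeight - 100));
        humanCaptured = false;
        human.IsHitTestVisible = true;
    }
}

private void playArea_MouseMove(object sender, MouseEventArgs e)
{
    if (humanCaptured)
    {
        Point position = e.GetPosition(playArea);

        // Drag too fast (more than 3 times the human's size away) and you lose him.
        if ((Math.Abs(position.X - Canvas.GetLeft(human)) > human.ActualWidth * 3)
            || (Math.Abs(position.Y - Canvas.GetTop(human)) > human.ActualHeight * 3))
        {
            humanCaptured = false;
            human.IsHitTestVisible = true;
        }
        else
        {
            Canvas.SetLeft(human, position.X - human.ActualWidth / 2);
            Canvas.SetTop(human, position.Y - human.ActualHeight / 2);
        }
    }
}

private void playArea_MouseLeave(object sender, MouseEventArgs e)
{
    // In this sketch, dragging the human out of the play area ends the game.
    if (humanCaptured)
    {
        EndTheGame();
    }
}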
Dragging humans onto enemies ends the game

When the player drags the human into an enemy, the game should end. Let's add the code to do that. Go to your AddEnemy() method and add one more line of code to the end. Find the last line of your AddEnemy() method, put your cursor at the end of the line, and hit Enter to add the new line of code. Start typing enemy. and as soon as you enter the dot, an IntelliSense window will pop up. Keep typing to jump down to the right entry in the list, and choose MouseEnter. (If you choose the wrong one, don't worry; just backspace over it to delete everything past the dot, then enter the dot again to bring up the IntelliSense window.)

Next, add an event handler, just like you did before. Type += and then press Tab, then press Tab again to generate the stub for your event handler. You'll learn all about how event handlers like this work in Chapter 15. Now you can go to the new method that the IDE generated for you and fill in the code.
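The new line and the handler it hooks up can look something like this. It's a sketch: 'enemy' is the ContentControl that AddEnemy() creates, and the handler body assumes the humanCaptured field used throughout this lab.

// Added as the last line of AddEnemy(), after the enemy is put into the play area:
//
//     enemy.MouseEnter += enemy_MouseEnter;

private void enemy_MouseEnter(object sender, MouseEventArgs e)
{
    // The aliens only catch humans that are being dragged.
    if (humanCaptured)
    {
        EndTheGame();
    }
}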
37 windows presentation foundation Your game is now playable Run your game it s almost done! When you click the Start button, your play area is cleared of any enemies, and only the human and target remain. You have to get the human to the target before the progress bar fills up. Simple at first, but it gets harder as the screen fills with dangerous alien enemies! Drag the human to safety! The aliens spend their time patrolling for moving humans, so the game ends only if you drag a human onto an enemy. Once you release the human, he s temporarily safe from aliens. Get him to the target before time s up... Look through the code and find where you set the IsHitTestVisible property on the human. When it s on, the human intercepts the PointerEntered event because the human s StackPanel control is sitting between the enemy and the pointer....but drag too fast, and you ll lose your human! you are here 4 43
38 bells whistles aliens Make your enemies look like aliens Red circles aren t exactly menacing. Luckily, you used a template. All you need to do is update it. 1 Go to the Document Outline, right-click on the ContentControl, choose Edit Template, and then Edit Current to edit the template. You ll see the template in the XAML window. Edit the XAML code for the ellipse to set the width to 75 and the fill to Gray. Then add to add a black outline. Here s what it should look like (you can delete any additional properties that may have inadvertently been added while you worked on it): Seeing events instead of properties? You can toggle the Properties window between displaying properties or events for the selected control by clicking the wrench or lightning bolt icons. 2 Drag another Ellipse control out of the toolbox on top of the existing ellipse. Change its Fill to black, set its width to 25, and its height to 35. Set the alignment and margins like this: You can also eyeball it (excuse the pun) by using the mouse or arrow keys to drag the ellipse into place. Try using Copy and Paste in the Edit menu to copy the ellipse and paste another one on top of it. 3 Use the button in the Transforms section of the Properties window to add a Skew transform: 4 Drag one more Ellipse control out of the toolbox on top of the existing ellipse. Change its fill to Black, set its width to 25, and set its height to 35. Set the alignment and margins like this: and add a skew like this: Now your enemies look a lot more like human-eating aliens. 44 Appendix ii
39 windows presentation foundation Here s the final XAML for the updated enemy ControlTemplate you created: See if you can get creative and change the way the human, target, play area, and enemies look. And don t forget to step back and really appreciate what you built. Good job! There s just One more thing you need to do... Play your game! you are here 4 45
40 Chapter 2 The first few projects in Chapter 2 use XAML and Windows Store apps. We ve got replacements for them. Start diving into code with WPF projects. The second chapter gets you started writing C# code, and most of the chapter is focused around building Windows Store apps. We recommend that you do the following: Read Chapter 2 in the main part of the book through page 68. We provide a replacement for page 69 in this appendix. After that, you can read pages 70, 71, and 72 in the book. Then there are replacements for pages 73 and 74, where you build a program from scratch. You can follow the rest of the project in the book. The book will work just fine for you through page 82. There s an exercise on page 83, and its solution is on page 85. We provide replacements for those pages in this PDF. Once you finish that exercise, the chapter no longer requires any Windows Store apps or Windows 8. You ll be able to continue on in the book through Chapter 9, and you can do the first and second labs. 46 Appendix ii
Use the debugger to see your variables change

The debugger is a great tool for understanding how your programs work. You can use it to see the code on the previous page in action.

Debug this!

Create a new WPF Application project. Creating a new WPF Application project will tell the IDE to create a new project with a blank window. You might want to name it something like UseTheDebugger (to match the header of this page); you'll be building a whole lot of programs throughout the book, and you may want to go back to them later. Drag a TextBlock onto your page and give it the name output. Then add a button and double-click it to add a method called Button_Click(). The IDE will automatically open that method in the code editor. Enter all the code on the previous page into the method. Comments (which either start with two or more slashes or are surrounded by /* and */ marks) show up in the IDE as green text. You don't have to worry about what you type in between those marks, because comments are always ignored by the compiler.

Insert a breakpoint on the first line of code. Right-click on the first line of code (int number = 15;) and choose Insert Breakpoint from the Breakpoint menu. (You can also click on it and choose Debug >> Toggle Breakpoint or press F9.) When you set a breakpoint on a line of code, the line turns red and a red dot appears in the margin of the code editor. When you debug your code by running it inside the IDE, as soon as your program hits a breakpoint it'll pause and let you inspect and change the values of all the variables.

Flip back to page 70 in the book and keep going!
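If you don't have the previous page handy, here's a stand-in version of the method you can step through. Only int number = 15; and the output TextBlock come from the steps above; the rest of the statements are made up so you have something to watch change.

private void Button_Click(object sender, RoutedEventArgs e)
{
    int number = 15;              // the breakpoint goes on this line
    number = number + 10;         // watch 'number' change to 25 as you step with F10
    number = number * 3;          // ...and then to 75

    /* Comments like this one are ignored by the compiler,
       so you can type whatever you want between the marks. */
    string message = "number is now " + number;
    output.Text = message;        // show the result in the TextBlock named output
}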
Build an app from the ground up

The real work of any program is in its statements. You've already seen how statements fit into a window. Now let's really dig into a program so you can understand every line of code. Make sure you choose a sensible name for this project, because you'll refer back to it later in the book.

Start by creating a new Visual C# WPF Application project. Open the main window and use the IDE to modify it by adding three rows and two columns to the grid, and then adding four button controls and a TextBlock to the cells. When you see these sneakers, it means that it's time for you to come up with code on your own.

Build this window. The window has a grid with three rows and two columns. Each row definition has its height set to 1*, which gives it a <RowDefinition/> without any properties. The column widths work the same way. The window has four button controls, one in each cell of the top two rows. Use the Content property to set their text to Show a message, If/else, Another conditional test, and A loop. Each button is centered in its cell. Use the Grid.Row and Grid.Column properties to set the row and column (they default to 0). Use the x:Name property to name the buttons button1, button2, button3, and button4. Once they're named, double-click on each of them to add an event handler method.

You don't see anything here, but there's actually a TextBlock control. It doesn't have any text, so it's invisible. It's centered and in the bottom row, with ColumnSpan set to 2 so it spans both columns. The bottom cell has a TextBlock control named myLabel. Use its Style property to set the style to BodyTextStyle. If you need to use the Edit Style right-mouse menu to set this but you're having trouble selecting the control, you can right-click on the TextBlock control in the Document Outline and choose Edit Style from there.
44 Here s our solution to the exercise. Does your solution look similar? Are the line breaks different, or the properties in a different order? If so, that s OK! A lot of programmers don t use the IDE to create their XAML they build it by hand. If we asked you to type in the XAML by hand instead of using the IDE, would you be able to do it? Here are the row and column definitions: three rows and two columns. Here s the <Window> and <Grid> tags that the IDE generated for you when you created the WPF application. When you double-clicked on each button, the IDE generated a method with the name of the button followed by _Click. This button is in the second column and second row, so these properties are set to 1. Try removing the HorizontalAlignment or VerticalAlignment property from one of the buttons. It expands to fill the entire cell horizontally or vertically if the alignment isn t set. Why do you think the left column and top row are given the number 0, not 1? Why is it OK to leave out the Grid.Row and Grid.Column properties for the top-left cell? 74 Appendix ii
We'll give you a lot of exercises like this throughout the book. We'll give you the answer in a couple of pages. If you get stuck, don't be afraid to peek at the answer; it's not cheating!

Time to get some practice using if/else statements. Can you build this program? You'll be creating a lot of applications throughout this book, and you'll need to give each one a different name. We recommend naming this one PracticeUsingIfElse. It helps to put programs from a chapter in the same folder.

Build this window. It's got a grid with two rows and two columns, it's 150 pixels tall and 450 pixels wide, and it's got the window title Fun with if/else statements. If you create two rows and set one row's height to 1* in the IDE, it seems to disappear because it's collapsed to a tiny size. Just set the other row to 1* and it'll show up again.

Add a button and a checkbox. You can find the checkbox control in the toolbox, just below the button control. Set the Button's name to changeText and the checkbox's name to enableCheckBox. Use the Edit Text right-click menu option to set the text for both controls (hit Escape to finish editing the text). Right-click on each control and choose Reset Layout >> All, then make sure both of them have their VerticalAlignment and HorizontalAlignment set to Center.

Add a TextBlock. It's almost identical to the one you added to the bottom of the window in the last project. This time, name it labelToChange and set its Grid.Row property to "1".

Set the TextBlock to this message if the user clicks the button but the box IS NOT checked. Here's the conditional test to see if the checkbox is checked: enableCheckBox.IsChecked == true. If that test is NOT true, then your program should execute two statements (hint: you'll put this code in the else block): labelToChange.Text = "Text changing is disabled"; labelToChange.HorizontalAlignment = HorizontalAlignment.Center;

If the user clicks the button and the box IS checked, change the TextBlock so it either shows on the left-hand side or on the right-hand side. If the label's Text property is currently equal to "Right" then the program should change the text to "Left" and set its HorizontalAlignment property to HorizontalAlignment.Left. Otherwise, set its text to "Right" and its HorizontalAlignment property to HorizontalAlignment.Right. This should cause the program to flip the label back and forth when the user presses the button, but only if the checkbox is checked.
Time to get some practice using if/else statements. Can you build this program?

Here's the XAML code for the grid:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition/>
        <RowDefinition/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
</Grid>

We added line breaks as usual to make it easier to read on the window. If you double-clicked the button in the designer before you set its name, it may have created a Click event handler method called Button_Click_1() instead of changeText_Click().

<Button x: <CheckBox x: <TextBlock x:

And here's the C# code for the button's Click event handler.
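A handler that satisfies the exercise requirements looks like this. The control names changeText, enableCheckBox, and labelToChange come from the exercise itself; details like brace placement may differ from the printed solution.

private void changeText_Click(object sender, RoutedEventArgs e)
{
    if (enableCheckBox.IsChecked == true)
    {
        // The box is checked, so flip the label between the right and left sides.
        if (labelToChange.Text == "Right")
        {
            labelToChange.Text = "Left";
            labelToChange.HorizontalAlignment = HorizontalAlignment.Left;
        }
        else
        {
            labelToChange.Text = "Right";
            labelToChange.HorizontalAlignment = HorizontalAlignment.Right;
        }
    }
    else
    {
        // The box is not checked, so text changing is disabled.
        labelToChange.Text = "Text changing is disabled";
        labelToChange.HorizontalAlignment = HorizontalAlignment.Center;
    }
}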
47 You won't use XAML for the next part of the book. The rest of Chapter 2 doesn't require Windows 8 and can be done with Visual Studio 2010, or using a Windows operating system as early as Windows You won t need to replace any pages in the book until you get to Chapter 10. That s because the next part of the book uses Windows Forms Application (or WinForms) projects. These C# projects use an older technology for building desktop apps. You ll use WinForms as a teaching and learning tool, just like you ve been using the IDE to learn and explore C# and XAML. Have a look at page 87, which explains why switching to WinForms is a good tool for getting C# concepts into your brain. This applies to WPF, too! Building these WinForms projects will help get core C# concepts into your brain faster, and that's the quickest route to learning WPF. Did you say that I won't need either Windows 8 or WPF until Chapter 10? Why aren't you using more current technology? Sometimes older technologies make great learning tools. If you want to build a desktop app, WPF is a superior tool for doing it. But if you want to learn C#, a simpler technology can make it easier to make concepts stick. And there s another important reason for using WinForms. When you see the same thing done in more than one way, you learn a lot from seeing what they have in common, and also what s different between them like on page 88, when you rebuild the WPF you just built using WinForms. We ll get back to XAML in Chapter 10, and by that time you ll have laid down a solid foundation that will make it much easier for those WPF concepts to stick. Some chapters use C# features introduced in.net 4.0 that are not supported by Visual Studio If you re using Visual Studio 2008, you may run into a few problems once you reach the end of Chapter 3. That s because the latest version of the.net Framework available in 2008 was 3.5. And that s a problem, because the book uses features of C# that were only introduced in.net 4.0. In Chapter 3 we ll teach you about object initializers, and in Chapter 8 you ll learn about collection initializers and covariance and if you re using Visual Studio 2008, the code for those examples won t compile because in 2008 those things hadn t been added to C# yet! If you absolutely can t install a newer version of Visual Studio, you ll still be able to do almost all the exercises, but you won t be able to use these features of C#.
49 windows presentation foundation Chapter 10 In this chapter, you'll dive into WPF development by redesigning some familiar programs as WPF apps. You can port your WinForms apps to WPF. If you ve completed chapters 3 9 and finished all the exercises and labs so far, then you ve written a lot of code. In this chapter, you ll revisit some of that code and use it as a springboard for learning WPF. Here s how we recommend that you work through Chapter 10: We recommend that you follow the chapter in the main part of the book through page 497. This includes doing everything on page 489, the Sharpen your Pencil exercises, and the Do this! exploration project on page 497. This appendix has replacement pages for pages , so use those instead. Page 506 applies only to Windows Store projects, so you can read it but it won t help you with WPF. After that, use pages from this appendix. Finally, read pages 514 and 515 in the book. Once you ve read them, you can replace the rest of the chapter (pages ) with pages in this appendix.
WPF applications use XAML to create UI objects

Do this! When you use XAML to build the user interface for a WPF application, you're building out an object graph. And just like with WinForms, you can explore it with the IDE's Watch window. Open the fun with if-else statements program from Chapter 2. Then open MainWindow.xaml.cs, place a breakpoint in the constructor on the call to InitializeComponent(), and use the IDE to explore the app's UI objects.

Start debugging, then press F10 to step over the method. Open a Watch window using the Debug menu. Start by choosing Debug >> Windows >> Watch >> Watch 1, and add a watch for this. (labelToChange is an instance of TextBlock.)

Now have another look at the XAML that defines the window:

<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
        <RowDefinition/>
        <RowDefinition/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    <Button x: <CheckBox x: <TextBlock x:
</Grid>

The XAML that defines the controls in a window is turned into a Window object with fields and properties that contain references to UI controls.
Add some of the labelToChange properties to the Watch window. The app automatically sets the properties based on your XAML. But try putting labelToChange.Grid or labelToChange.ColumnSpan into the Watch window. The control is a System.Windows.Controls.TextBlock object, and that object doesn't have those properties. Can you guess what's going on with those XAML properties?

Stop your program, open MainWindow.xaml.cs, and find the class declaration for MainWindow. Take a look at the declaration: it's a subclass of Window. Hover over Window so the IDE shows you its full class name. Now start your program again and press F10 to step over the call to InitializeComponent(). Go back to the Watch window and expand this >> base >> base to traverse back up the inheritance hierarchy and see the superclasses. Expand Content and explore its [System.Windows.Controls.Grid] node. Take a little time and explore the objects that your XAML generated. We'll dig into some of these objects later on in the book. For now, just poke around and get a sense of how many objects are behind your app.
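You can poke at the same object graph from code, too. Here's a hypothetical version of the constructor you could use for a quick experiment; the Debug.WriteLine calls are only for exploration, and they also answer the question above: Grid.Row and Grid.ColumnSpan are attached properties that belong to Grid, not to TextBlock.

using System.Diagnostics;
using System.Windows;
using System.Windows.Controls;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // After InitializeComponent() runs, every control with an x:Name in the XAML
        // is a field on this window. These lines print what the Watch window shows you.
        Debug.WriteLine(labelToChange.GetType().FullName);   // System.Windows.Controls.TextBlock
        Debug.WriteLine(Grid.GetRow(labelToChange));         // the Grid.Row value from the XAML
        Debug.WriteLine(Grid.GetColumnSpan(labelToChange));  // Grid.Row and Grid.ColumnSpan are attached
                                                             // properties of Grid, not TextBlock properties
    }
}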
52 old becomes new Redesign the Go Fish! form as a WPF application The Go Fish! game that you built in Chapter 8 would make a great WPF application. Open Visual Studio and create a new WPF Application project (just like you did for Save the Humans). Over the next few pages, you ll redesign it in XAML, with a main window that adjusts its content as it s resized. Instead of using Windows Forms controls on a form, you ll use WPF XAML controls. Do this! This becomes a <TextBox/> This becomes a <Button/> We ll use a horizontal StackPanel to group the TextBox and Button controls so they can go into the same cell in the grid. This becomes a <ListBox/> This becomes a <ScrollViewer/> 500 Appendix ii This becomes a <ScrollViewer/> This is another control in the toolbox. It displays a string of text, adding vertical and/or horizontal scrollbars if the text grows larger than the window control. This becomes a <Button/>
53 windows presentation foundation Here s how those controls will look on the app s main window: <TextBox/> <Button/> <ScrollViewer/> Most of the code to manage the gameplay will remain the same, but the UI code will change. <ScrollViewer/> <ListBox/> <Button/> The controls will be contained in a grid, with rows and columns that expand or contract based on the size of the window. This will allow the game to shrink or grow if the user resizes the window: The game will be playable no matter what the window dimensions are. you are here 4 501
54 now that s a page Page layout starts with controls WPF apps and WinForms have one thing in common: they both rely on controls to lay out your page. The Go Fish! page has two buttons, a ListBox to show the hand, a TextBox for the user to enter the name, and four TextBlock labels. It also has two ScrollViewer controls with a white background to display the game progress and books If the window is made very tall, this ScrollViewer should grow to fill up the extra vertical space. It should display scrollbars if the text gets too big. This ScrollViewer needs to be tall enough to show various books that have been discovered, and it should also display scrollbars if needed. This ListBox should also grow to fill up the extra vertical space if the window is made taller. 6 The XAML for the main window starts with an opening <Window> tag. The title property sets the title of the window to Go Fish! Setting the Height and Width property changes the window size and you ll see the size change in the designer as soon as you change those properties. Use the Background property to give it a gray background. Here s the updated <Window> opening tag. We named our project GoFish if you use a different name, the first line will have that name in its x:class property. The window title and starting width and height are set using properties in the <Window> tag. <Window x: <Grid Margin="10" > We ll use a StackPanel to put the TextBox for the player s name and the Start button in one cell: 1 <TextBlock Text="Your Name" /> <StackPanel Orientation="Horizontal" Grid. <TextBox x: 2 <Button x: </StackPanel> This Margin property sets the left and right margins for the button to 5, and the top and bottom margins to 0. We could also have set it to 5,0,0,0 to set just the left margin and left the right margin zero.
55 windows presentation foundation Each label on the page ( Your name, Game progress, etc.) is a TextBlock. Use the Margin property to add a 10-pixel margin above the label: <TextBlock Text="Game progress" Grid. A ScrollViewer control displays the game progress, with scrollbars that appear if the text is too big for the window: 3 <ScrollViewer Grid. Here s another TextBlock and ScrollViewer to display the books. The default vertical and horizontal alignment for the ScrollViewer is Stretch, and that s going to be really useful. We ll set up the rows and columns so the ScrollViewer controls expand to fit any screen size. 4 <TextBlock Text="Books" Margin="0,10,0,0" Grid. <ScrollViewer FontSize="12" Background="White" Foreground="Black" Grid. We used a small 40-pixel column to add space, so the ListBox and Button controls need to go in the third column. The ListBox spans rows 2 6, so we gave it Grid. <ListBox x: The Ask for a card button has its horizontal and vertical alignment set to Stretch so that it fills up the cell. The 20-pixel margin at the bottom of the ListBox adds a small gap. 6 <Button x: We ll finish this grid on the next page you are here 4 503
56 it grows, it shrinks it s all good Rows and columns can resize to match the page size Grids are very effective tools for laying out windows because they help you design pages that can be displayed on many different devices. Heights or widths that end in * adjust automatically to different screen geometries. The Go Fish! window has three columns. The first and third have widths of 5* and 2*, so they will grow or shrink proportionally and always keep a 5:2 ratio. The second column has a fixed width of 40 pixels to keep them separated. Here s how the rows and columns for the window are laid out (including the controls that live inside them): <RowDefinition Height="Auto"/> <TextBlock/> <ColumnDefinition Width="5*"/> Row= 1 means the second row, because row numbers start at 0. <ColumnDefinition Width="40"/> <ColumnDefinition Width="2*"/> <TextBlock Grid. <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <StackPanel Grid. <TextBlock/> <Button/> </StackPanel> <TextBlock Grid. <RowDefinition/> <ScrollViewer Grid. <RowDefinition Height="Auto"/> <TextBlock Grid. This row is set to the default height of 1*, and the ScrollViewer in it is set to the default vertical and horizontal alignment of Stretch so it grows or shrinks to fill up the page. <ListBox Grid. This ListBox spans five rows, including the fourth row which will grow to fill any free space. This makes the ListBox expand to fill up the entire righthand side of the page. <RowDefinition Height="Auto" MinHeight="150"/> <RowDefinition Height="Auto"/> <ScrollViewer Grid. This ScrollViewer has a row span of 2 to span these two rows. We gave the sixth row (which is row number 5 in XAML because numbering starts at 0) a minimum height of 150 to make sure the ScrollViewer doesn t get any smaller than that. <Button Grid. XAML row and column numbering start at 0, so this button s row is 6 and its column is 2 (to skip the middle column). Its vertical and horizontal alignment are set to Stretch so the button takes up the entire cell. The row has a height of Auto, so its height is based on the contents (the button plus its margin). 504 Appendix ii
57 windows presentation foundation Here s how the row and column definitions make the window layout work: "/> The first column will always be 2.5 times as wide as the third (a 5:2 ratio), with a 40-pixel column to add space between them. The ScrollViewer and ListBox controls that display data have HorizontalAlignment set to Stretch to fill up the columns. The fourth row has the default height of 1* to make it grow or shrink to fill up any space that isn t taken up by the other rows. The ListBox and first ScrollViewer span this row, so they will grow and shrink, too. <RowDefinition Height="Auto" MinHeight="150" /> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> You can add the row and column definitions above or below the controls in the grid. We added them below this time. </Grid> </Window> Here s the closing tag for the grid, followed by the closing tab for the window. You ll bring this all together at the end of the chapter when you finish porting the Go Fish! game to a WPF app. Almost all the row heights are set to Auto. There s only one row that will grow or shrink, and any control that spans this row will also grow or shrink. you are here 4 505
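If it helps to see what those values mean outside of XAML, here's a hypothetical helper method (not part of the Go Fish! project, and the method name is made up) that builds the same kinds of column and row definitions in code. It assumes using System.Windows; and using System.Windows.Controls; at the top of the file.

private Grid BuildColumnsAndRowsLikeGoFish()
{
    var grid = new Grid();

    grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(5, GridUnitType.Star) });  // 5 shares of the leftover width
    grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(40) });                    // fixed 40 pixels
    grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(2, GridUnitType.Star) });  // 2 shares, so a 5:2 ratio

    grid.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto });                   // sized to its contents
    grid.RowDefinitions.Add(new RowDefinition());                                              // default height is 1*
    grid.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto, MinHeight = 150 });  // never shrinks below 150 pixels

    return grid;
}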
58 those programs look familiar. Use a Border control to draw a border around ScrollViewers. If you look in the Properties window or look at the IntelliSense window, you ll see that the ScrollViewer control has BorderBrush and BorderThickness properties. This is a little misleading, because these properties don t actually do anything. ScrollViewer is a subclass of ContentControl, and it inherits those properties from ContentControl but doesn t actually do anything with them. Luckily, there s an easy way to draw a border around a ScrollViewer, or any other control, by using a Border control. Here s XAML code that you can use in the Breakfast for Lumberjacks window: Use the BorderThickness and BorderBrush properties to set the thickness and color of the border. You can also add a background, round the corners, and make other visual changes. The Border control can contain one other control. If you want to put more than one control inside it, use a StackPanel, Grid, Canvas, or other container. 508 Appendix ii
59 windows presentation foundation Use StackPanels to design this window. Its height is set to 300, its width is 525, and its ResizeMode property is set to NoResize. It uses two <Border> controls, one to draw a border around the top StackPanel and one to draw a border around the ScrollViewer. This is a <ComboBox>, and its items are <ComboBoxItem/> tags with the Content property set to the item name. This button is rightaligned with FontSize set to 18 and 20 pixel top and right margin. <StackPanel Margin= 5 > <TextBlock/> <StackPanel Orientation= Horizontal > <StackPanel> <StackPanel> <TextBlock/> <TextBlock/> <ComboBox> <TextBox/> <ComboBoxItem/> </StackPanel> <ComboBoxItem/>... 4 more... </ComboBox> </StackPanel> <Button/> Use the Content property to add text to this ScrollViewer. will add line breaks. Give it a 2-pixel white border using BorderThickness and BorderBrush, and a height of 250. Use a Grid to design this form. It has seven rows with height set to Auto so they expand to fit their contents, and one with the default height (which is the same as 1*) so that row expands with the grid. Use StackPanels to put multiple controls in the same row. Each TextBlock has a 5-pixel margin below it, and the bottom two TextBlocks each have a 10-pixel margin above them. Use the <Window> properties This is a ListBox. It uses <ListBoxItem/> tags the same way the ComboBox uses <ComboBoxItem/> tags. Set its VerticalAlignment to Stretch so when its row grows and shrinks, the ListBox does too. Set the window's ResizeMode to CanResizeWithGrip" to display this sizing grip. Get your pages to look just like these screenshots by adding dummy data to the controls that would normally be filled in using the methods and properties in your classes. <Button/> <TextBlock/> <ScrollViewer/> </StackPanel> <Grid Grid.Row= 1 Margin= 5 > <TextBlock/> <TextBox/> <TextBlock/> Set the ComboBox control s SelectedIndex property to 0 so it displays the first item. Use these <Window> properties to set the initial and minimum size for the window, then resize the window to make sure they work: Height= 400" MinHeight= 350" Width= 525" MinWidth= 300" <ListBox VerticalAlignment= Stretch > <ListBoxitem/> <ListBoxitem/>... 4 more... </ListBox> <TextBlock> <StackPanel Orientation= Horizontal > <TextBox/> <ComboBox>... 4 items... </ComboBox> <Button/> </StackPanel> <ScrollViewer/> <StackPanel Orientation= Horizontal > <Button/> <Button/> </StackPanel> Set this row to the default height 1* and make all the other row heights Auto so this row grows and shrinks when the window is resized. you are here 4 509
60 A. <Window x: Here s the margin we gave you. Specifying just one number (5) sets the top, left, bottom, and right margins to the same value. <StackPanel Margin="5"> <TextBlock Text="Worker Bee Assignments" Margin="0,0,0,5" /> <Border BorderThickness="1" BorderBrush="Black"> <StackPanel Orientation="Horizontal" Margin="5"> <StackPanel Margin="0,0,10,0"> <TextBlock Text="Job"/> <ComboBox SelectedIndex="0" > <ComboBoxItem Content="Baby bee tutoring"/> <ComboBoxItem Content="Egg care"/> <ComboBoxItem Content="Hive maintenance"/> <ComboBoxItem Content="Honey manufacturing"/> <ComboBoxItem Content="Nectar collector"/> <ComboBoxItem Content="Sting patrol"/> </ComboBox> </StackPanel> <StackPanel> <TextBlock Text="Shifts" /> <TextBox/> </StackPanel> <Button Content="Assign this job to a bee" VerticalAlignment="Bottom" Margin="10,0,0,0" /> </StackPanel> </Border> <Button Content="Work the next shift" Margin="0,20,20,0" FontSize="18" HorizontalAlignment="Right" /> <TextBlock Text="Shift report" Margin="0,10,0,5"/> <Border BorderBrush="Black" BorderThickness="1" Height="100"> <ScrollViewer Content=" Report for shift #20 Worker #1 will be done with 'Nectar collector' after this shift Worker #2 finished the job Worker #2 is not working Worker #3 is doing 'Sting patrol' for 3 more shifts Worker #4 is doing 'Baby bee tutoring' for 6 more shifts "/> </Border> </StackPanel> </Window> Does your XAML code look different from ours? There are many ways to display very similar (or even identical) pages in XAML. And don t forget that XAML is very flexible about tag order. You can put many of these tags in a different order and still create the same object graph for your window. This Border control draws a border around the ScrollViewer. Here s the dummy data we used to populate the shift report. The Content property ignores line breaks we added them to make the solution easier to read. 510 Appendix ii
61 <Window x: <Grid Grid. <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition /> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> </Grid> </Window> <TextBlock Text="Lumberjack name" Margin="0,0,0,5" /> <TextBox Grid. <TextBlock Grid. <ListBox Grid. <ListBoxItem Content="1. Ed"/> <ListBoxItem Content="2. Billy"/> <ListBoxItem Content="3. Jones"/> <ListBoxItem Content="4. Fred"/> <ListBoxItem Content="5. Johansen"/> <ListBoxItem Content="6. Bobby, Jr."/> </ListBox> <TextBlock Grid. <StackPanel Grid. <TextBox Text="2" Margin="0,0,10,0" Width="30"/> <ComboBox SelectedIndex="0" Margin="0,0,10,0"> <ComboBoxItem Content="Crispy"/> <ComboBoxItem Content="Soggy"/> <ComboBoxItem Content="Browned"/> <ComboBoxItem Content="Banana"/> </ComboBox> <Button Content="Add flapjacks" /> </StackPanel> windows presentation foundation Here are the Window properties that set the initial window size to 525x400, and set a minimum size of 300x350. You can set the ResizeMode property to NoResize to prevent all resizing, CanMinimize to allow only minimizing, CanResize to allow all resizing, or CanResizeWithGrip to display a sizing grip in the lower right-hand corner of the window. Just to be 100% clear, we asked you to add these dummy items as part of the exercise, to make the form look like it s being used. You re about to learn how to bind controls like this ListBox to properties in your classes. <Border BorderThickness="1" BorderBrush="Gray" Grid. <ScrollViewer Content="Ed has 7 flapjacks" BorderThickness="2" BorderBrush="White" MinHeight="50"/> </Border> More dummy content... <StackPanel Grid. <Button Content="Add Lumberjack" Margin="0,0,10,0" /> <Button Content="Next Lumberjack" /> </StackPanel> you are here 4 511
62 sloppy joe meets windows store Use data binding to build Sloppy Joe a better menu Remember Sloppy Joe from Chapter 4? Well, he s heard that you're becoming an XAML pro, and he wants a WPF app for his sandwich menu. Let s build him one. Here s the window we re going to build. It uses one-way data binding to populate a ListView and a Run inside a TextBlock, and it uses two-way data binding for a TextBox, using one of its <Run> tags to do the actual binding. <StackPanel Grid. <StackPanel Orientation="Horizontal"> <StackPanel> <TextBlock/> <TextBox Text="{Binding NumberOfItems, Mode=TwoWay"/> </StackPanel> <Button/> </StackPanel> <ListView ItemsSource="{Binding Menu"/> <TextBlock> <Run/> <Run Text="{Binding GeneratedDate"/> </TextBlock> </StackPanel> We ll need an object with properties to bind to. The Window object will have an instance of the MenuMaker class, which has three public properties: an int called NumberOfItems, an ObservableCollection of menu items called Menu, and a DateTime called GeneratedDate. TextBlock object ListView object MenuMaker NumberOfItems Menu GeneratedDate UpdateMenu() TextBox object 516 Appendix ii
63 The Window object creates an instance of MenuMaker and uses it for the data context. The constructor for the Page object will set the StackPanel s DataContext property to an instance of MenuMaker. The binding will all be done in XAML. StackPanel object Window object The ListView and TextBlock objects are also bound to properties in the MenuMaker object. TextBlock object StackPanel object ListView object Menu MenuItem Meat Condiment Bread override ToString() MenuMaker object GeneratedDate StackPanel object Button object MenuItems are simple data objects, overriding the ToString() method to set the text in the ListView. ObservableCollection The TextBox uses two-way binding to set the number of menu items. That means the TextBox doesn t need an x:name property. Since it s bound to the NumberOfItems property in the MenuMaker object, we don t need to write any C# code that refers to it. TextBox object windows presentation foundation MenuItem object TextBlock object MenuItem object MenuItem object MenuItem MenuItem object object NumberOfItems The button tells the MenuMaker to update. The two-way binding for the TextBox means that it gets initially populated with the value in the NumberOfItems property, and then updates that property whenever the user edits the value in the TextBox. The button calls the MenuMaker s UpdateMenu() method, which updates its menu by clearing the ObservableCollection and then adding new MenuItems to it. The ListView will automatically update anytime the ObservableCollection changes. Here s a coding challenge. Based on what you ve read so far, how much of the new and improved Sloppy Joe app can you build before you flip the page and see the code for it? you are here 4 517
Do this!

1. Create the project. Create a new WPF Application project. You'll keep the default window size. Set the window title to Welcome to Sloppy Joe's.

2. Add the new and improved MenuMaker class. Just right-click on the project name in the Solution Explorer and add a new class, just like you did with other projects. You've come a long way since Chapter 4. Let's build a well-encapsulated class that lets you set the number of items with a property. You'll create an ObservableCollection of MenuItem in its constructor, which is updated every time UpdateMenu() is called. That method will also update a DateTime property called GeneratedDate with a timestamp for the current menu. You'll use data binding to display data from these properties on your page, and you'll also use two-way binding to update NumberOfItems. Add this MenuMaker class to your project:

using System.Collections.ObjectModel;

class MenuMaker {
    private Random random = new Random();
    private List<String> meats = new List<String>() {
        "Roast beef", "Salami", "Turkey", "Ham", "Pastrami" };
    private List<String> condiments = new List<String>() {
        "yellow mustard", "brown mustard", "honey mustard", "mayo", "relish", "french dressing" };
    private List<String> breads = new List<String>() {
        "rye", "white", "wheat", "pumpernickel", "italian bread", "a roll" };

    public ObservableCollection<MenuItem> Menu { get; private set; }
    public DateTime GeneratedDate { get; set; }
    public int NumberOfItems { get; set; }

    public MenuMaker() {
        Menu = new ObservableCollection<MenuItem>();
        NumberOfItems = 10;
        UpdateMenu();
    }

    private MenuItem CreateMenuItem() {
        string randomMeat = meats[random.Next(meats.Count)];
        string randomCondiment = condiments[random.Next(condiments.Count)];
        string randomBread = breads[random.Next(breads.Count)];
        return new MenuItem(randomMeat, randomCondiment, randomBread);
    }

    public void UpdateMenu() {
        Menu.Clear();
        for (int i = 0; i < NumberOfItems; i++) {
            Menu.Add(CreateMenuItem());
        }
        GeneratedDate = DateTime.Now;
    }
}

You'll need that using line because ObservableCollection<T> is in that namespace. The new CreateMenuItem() method returns MenuItem objects, not just strings; that will make it easier to change the way items are displayed if we want. Take a closer look at how UpdateMenu() works: it never actually creates a new MenuItem collection. It updates the current one by clearing it and adding new items. What happens if NumberOfItems is set to a negative number?

Use DateTime to work with dates. You've already seen the DateTime type that lets you store a date. You can also use it to create and modify dates and times. It has a static property called Now that returns the current time. It also has methods like AddSeconds() for adding and converting seconds, milliseconds, days, etc., and properties like Hour and DayOfWeek to break down the date. How timely!
3. Add the MenuItem class. You've already seen how you can build more flexible programs if you use classes instead of strings to store data. Here's a simple class to hold a menu item; add it to your project, too:

class MenuItem {
    public string Meat { get; set; }
    public string Condiment { get; set; }
    public string Bread { get; set; }

    public MenuItem(string meat, string condiment, string bread) {
        Meat = meat;
        Condiment = condiment;
        Bread = bread;
    }

    public override string ToString() {
        return Meat + " with " + Condiment + " on " + Bread;
    }
}

The three strings that make up the item are passed into the constructor and held in automatic properties. Override the ToString() method so the MenuItem knows how to display itself.

4. Build the XAML page. Here's the screenshot. Can you build it using StackPanels? The TextBox has a width of 100. The bottom TextBlock has the style BodyTextStyle, and it has two <Run> tags (the second one just holds the date). Don't add dummy data this time. We'll let data binding do that for us.

This is a ListView control. It's a lot like the ListBox control; in fact, it inherits from the same base class as ListBox, so it has the same item selection functionality. But the ListView gives you much more flexibility to customize the way your items are displayed by letting you specify a data template for each item. You'll learn more about that later in the chapter. Can you build this page on your own just from the screenshot before you see the XAML?
5. Add object names and data binding to the XAML. Here's the XAML that gets added to MainWindow.xaml. We used a StackPanel to lay it out, so you can replace the opening <Grid> and closing </Grid> tags with the XAML below. We named the button newMenu. Since we used data binding for the ListView, TextBlock, and TextBox, we didn't need to give them names. (Here's a shortcut: we didn't even really need to name the button; we did it just to get the IDE to automatically add an event handler named newMenu_Click when we double-clicked it in the IDE. Try it out!) Here's that ListView control. Try swapping it out for ListBox to see how it changes your window.

<StackPanel Margin="5" x:
    <StackPanel Orientation="Horizontal" Margin="0,0,0,10">
        <StackPanel Margin="0,0,10,0">
            <TextBlock Text="Number of items" Margin="0,0,0,5" />
            <TextBox Width="100" HorizontalAlignment="Left" Text="{Binding NumberOfItems, Mode=TwoWay}" />
        </StackPanel>
        <Button x:
    </StackPanel>
    <ListView ItemsSource="{Binding Menu}" Margin="0,0,20,0" />
    <TextBlock>
        <Run Text="This menu was generated on " />
        <Run Text="{Binding GeneratedDate}"/>
    </TextBlock>
</StackPanel>

We need two-way data binding to both get and set the number of items with the TextBox. This is where <Run> tags come in handy: you can have a single TextBlock but bind only part of its text.

6. Add the code-behind to MainWindow.xaml.cs. The window's constructor creates the MenuMaker instance (which creates the menu collection) and sets the data context for the controls that use data binding. It also needs a MenuMaker field called menuMaker.

MenuMaker menuMaker = new MenuMaker();

public MainWindow() {
    this.InitializeComponent();
    pageLayoutStackPanel.DataContext = menuMaker;
}

Your main window's class in MainWindow.xaml.cs gets a MenuMaker field, which is used as the data context for the StackPanel that contains all the bound controls. You just need to set the data context for the outer StackPanel; it will pass that data context on to all the controls contained inside it.

Finally, double-click on the button to generate a method stub for its Click event handler. Here's the code for it; it just updates the menu:

private void newMenu_Click(object sender, RoutedEventArgs e) {
    menuMaker.UpdateMenu();
}

There's an easy way to rename an event handler so that it updates XAML and C# code at the same time. Flip to leftover #8 in Appendix I to learn more about the refactoring tools in the IDE.
67 windows presentation foundation Now run your program! Try changing the TextBox to different values. Set it to 3, and it generates a menu with three items: Now you can play with binding to see just how flexible it is. Try entering xyz or no data at all into the TextBox. Nothing happens! When you enter data into the TextBox, you re giving it a string. The TextBox is pretty smart about what it does with that string. It knows that its binding path is NumberOfItems, so it looks in its data context to see if there are any properties with that name, and then does its best to convert the string to whatever that property s type is. Keep your eye on the generated date. It s not updating, even though the menu updates. Hmm, maybe there s still something we need to do. My Text property s bound to NumberOfItems. And, look, my data context has a NumberOfItems property! Can I stick this string 3 into that property? Looks like I can! TextBox object Hmm, my data context says NumberOfItems is an int, and I don t know how to convert the string xyz to an int. Guess I won t do anything at all. TextBox object you are here 4 521
Use static resources to declare your objects in XAML

When you build a window with XAML, you're creating an object graph with objects like StackPanel, Grid, TextBlock, and Button. And you've seen that there's no magic or mystery to any of that: when you add a <TextBox> tag to your XAML, your window object will have a TextBox field with a reference to an instance of TextBox. And if you give it a name using the x:Name property, your code-behind C# code can use that name to access the TextBox. You can do exactly the same thing to create instances of almost any class and make them part of your window by adding a static resource to your XAML. And data binding works particularly well with static resources, especially when you combine it with the visual designer in the IDE. Let's go back to your program for Sloppy Joe and move the MenuMaker to a static resource. When you use XAML to add a static resource to a Window, you can access it using its FindResource() method.

1. Delete the MenuMaker field from the code-behind. You're going to be setting up the MenuMaker class and the data context in the XAML, so delete these lines from your C# code:

MenuMaker menuMaker = new MenuMaker();

public MainWindow() {
    this.InitializeComponent();
    pageLayoutStackPanel.DataContext = menuMaker;
}

2. Add your project's namespace to the XAML. Look at the top of the XAML code for your window, and you'll see that the opening tag has a set of xmlns properties. Each of these properties defines a namespace. Start adding a new xmlns property. This is an XML namespace property: it consists of xmlns: followed by an identifier, in this case local. Here's what you'll end up with:

xmlns:local="clr-namespace:SloppyJoeChapter10"

When the namespace value starts with clr-namespace: it refers to one of the namespaces in the project. It can also start with http:// to refer to a standard XAML namespace. You'll use the local identifier to create objects in your project's namespace. Since we named our app SloppyJoeChapter10, the IDE created this namespace for us. Find the namespace that corresponds to your app, because that's where your MenuMaker lives.
3. Add the static resource to your XAML and set the data context. Add a <Window.Resources> tag to the top of the XAML (just under the opening tag), and add a closing </Window.Resources> tag for it. Then type <local: between them to pop up an IntelliSense window. The window shows all the classes in the namespace that you can use. Choose MenuMaker. Then give it the resource key menuMaker using the x:Key XAML property:

<local:MenuMaker x:Key="menuMaker"/>

Now your window has a static MenuMaker resource with the key menuMaker. You can add static resources only if their classes have parameterless constructors. This makes sense! If the constructor has a parameter, how would the XAML know what arguments to pass to it?

4. Set the data context for your StackPanel and all of its children. Go to the outermost StackPanel and set its DataContext property:

<StackPanel Margin="5" DataContext="{StaticResource ResourceKey=menuMaker}">

Finally, modify the button's Click event handler to find the static resource and call its method to update the menu:

private void newMenu_Click(object sender, RoutedEventArgs e) {
    MenuMaker menuMaker = FindResource("menuMaker") as MenuMaker;
    menuMaker.UpdateMenu();
}

Your program will still work, just like before. But did you notice what happened in the IDE when you added the data context to the XAML? As soon as you added it, the IDE created an instance of MenuMaker and used its properties to populate all the controls that were bound to it. You got a menu generated immediately, right there in the designer, before you even ran your program. Neat! The menu shows up in the designer immediately, even before you run your program.

Hmm, something's not quite right. It updates the menu items when the button is clicked, but the date doesn't change. What's going on?
Use a data template to display objects

When you show items in a list, you're showing the contents of ListViewItem (which you use for ListViews), ListBoxItem, or ComboBoxItem controls, which get bound to objects in an ObservableCollection. Each ListViewItem in the Sloppy Joe menu generator is bound to a MenuItem object in its Menu collection. The ListViewItems call the MenuItem objects' ToString() methods by default, but you can use a data template that uses data binding to display data from the bound object's properties.

Modify the <ListView> tag to add a basic data template. This is a really basic data template, and it looks just like the default one used to display the ListViewItems: it uses the basic {Binding} to call the item's ToString(). Leave the ListView tag intact, but replace /> with > and add a closing </ListView> tag at the bottom. Then add the ListView.ItemTemplate tag to contain the data template. Adding a {Binding} without a path just calls the ToString() method of the bound object.

<ListView ItemsSource="{Binding Menu}" Margin="0,0,20,0">
    <ListView.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding}"/>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

Change your data template to add some color to your menu. Replace the <DataTemplate>, but leave the rest of the ListView intact. You can bind individual Run tags, and you can change each tag's color, font, and other properties, too.

<DataTemplate>
    <TextBlock>
        <Run Text="{Binding Meat}" Foreground="Blue"/><Run Text=" on "/>
        <Run Text="{Binding Bread}" FontWeight="Light"/><Run Text=" with "/>
        <Run Text="{Binding Condiment}" Foreground="Red" FontWeight="ExtraBold"/>
    </TextBlock>
</DataTemplate>

Go crazy! The data template can contain any controls you want. The DataTemplate object's Content property can hold only one object, so if you want multiple controls in your data template, you'll need a container like StackPanel.

<DataTemplate>
    <StackPanel Orientation="Horizontal">
        <StackPanel>
            <TextBlock Text="{Binding Bread}"/>
            <TextBlock Text="{Binding Bread}"/>
            <TextBlock Text="{Binding Bread}"/>
        </StackPanel>
        <Ellipse Fill="DarkSlateBlue" Height="Auto" Width="10" Margin="10,0"/>
        <Button Content="{Binding Condiment}" FontFamily="Segoe Script"/>
    </StackPanel>
</DataTemplate>
71 Q: So I can use a StackPanel or a Grid to lay out my page. I can use XAML static resources, or I can use fields in codebehind. I can set properties on controls, or I can use data binding. Why are there so many ways to do the same things? A: Because C# and XAML are extremely flexible tools for building apps. That flexibility makes it possible to design very detailed pages that work on many different devices and displays. This gives you a very large toolbox that you can use to get your pages just right. So don t look at it as a confusing set of choices; look at it as many different options that you can choose from. Q: I m still not clear on how static resources work. What happens when I add a tag inside <Window.Resources>? A:When you add that tag, it updates the Window object and adds static resources. In this case, it created an instance of MenuMaker and added it to the Window object s resources. The Window object contains a dictionary called Resources, and if you use the debugger to explore the Window object after you add the tag you can find that it contains an instance of MenuMaker. When you declared the resource, you used x:key to assign the resource a key. That allowed you to use that key to look up your MenuMaker object in the window's static resources with the FindResource() method. Q: I used x:key to set my MenuMaker resource s key. But earlier in the chapter, I used x:name to give names to my controls. What s the difference? Why did I have to use FindResources() to look up the MenuMaker object couldn't I give it a name instead? A: When you add a control to a WPF window, it actually adds a field to the Window object that s created by the XAML. When you use the x:name property, you give it a name that you can use in your code. If you don t give it a name, the control object is still created as part of the Window object s graph. However, if you give it a name, then the XAML object is given a field with that name with a reference to that control. You can see this in your code by putting a breakpoint in the button s event handler and adding newmenu to the Watch window. You ll see that it refers to a System.Windows.Controls. Button object whose Content property is set to Make a new menu. Resources are treated differently: they re added to a dictionary in the Window object. The FindResource() method uses the key specified in the x:key markup. Set the same breakpoint and try adding this.resources["menumaker"] to the Watch window. This time, you ll see a reference to your MenuMaker object, because you re looking it up in the Resources dictionary. Q: Does my binding path have to be a string property? A: No, you can bind a property of any type. If it can be converted between the source and property types, then the binding will work. If not, the data will be ignored. And remember, not all properties on your controls are text, either. Let s say you ve got a bool in your data context called EnableMyObject. You can bind it to any Boolean property, like IsEnabled. This will enable or disable the control based on the value of the EnableMyObject property: IsEnabled="{Binding EnableMyObject" Of course, if you bind it to a text property it ll just print True or False (which, if you think about it, makes perfect sense). Q: Why did the IDE display the data in my form when I added the static resource and set the data context in XAML, but not when I did it in C#? A: Because the IDE understands your XAML, which has all the information that it needs to create the objects to render your page. 
As soon as you added the MenuMaker resource to your XAML code, the IDE created an instance of MenuMaker. But it couldn't do that from the new statement in its constructor, because there could be many other statements in the constructor, and they would need to be run. The IDE runs the code-behind C# code only when the program is executed. But if you add a static resource to the page, the IDE will create it, just like it creates instances of TextBlock, StackPanel, and the other controls on your page. It sets the controls' properties to show them in the designer, so when you set up the data context and binding paths, those got set as well, and your menu items showed up in the IDE's designer. The static resources in your page are instantiated when the page is first loaded and can be used at any time by the objects in the application.

The name "static resource" is a little misleading. Static resources are definitely created for each instance; they're not static fields!
INotifyPropertyChanged lets bound objects send updates

When the MenuMaker class updates its menu, the ListView that's bound to it gets updated. But the MenuMaker updates the GeneratedDate property at the same time. Why doesn't the TextBlock that's bound to it get updated, too?

The reason is that every time an ObservableCollection changes, it fires off an event to tell any bound control that its data has changed. This is just like how a Button control raises a Click event when it's clicked, or a Timer raises a Tick event when its interval elapses. Whenever you add, remove, or delete items from an ObservableCollection, it raises an event.

You can make your data objects notify their target properties and bound controls that data has changed, too. All you need to do is implement the INotifyPropertyChanged interface, which contains a single event called PropertyChanged. Just fire off that event whenever a property changes, and watch your bound controls update themselves automatically.

[Diagram: the data object in the data context fires a PropertyChanged event to notify any control that it's bound to that a property has changed. The binding connects the source property on the data object to the target property on the control; the control receives the event and refreshes its target property by reading the data from the source property that it's bound to.]

Collections work almost the same way as data objects. The ObservableCollection<T> object doesn't actually implement INotifyPropertyChanged. Instead, it implements a closely related interface called INotifyCollectionChanged that fires off a CollectionChanged event instead of a PropertyChanged event. The control knows to look for this event because ObservableCollection implements the INotifyCollectionChanged interface. Setting a ListView's DataContext to an INotifyCollectionChanged object will cause it to respond to these events.
Modify MenuMaker to notify you when the GeneratedDate property changes

INotifyPropertyChanged is in the System.ComponentModel namespace, so start by adding this using statement to the top of the MenuMaker class file:

using System.ComponentModel;

Update the MenuMaker class to implement INotifyPropertyChanged, and then use the IDE to automatically implement the interface. This will be a little different from what you saw in Chapters 7 and 8. It won't add any methods or properties. Instead, it will add an event:

public event PropertyChangedEventHandler PropertyChanged;

Next, add this OnPropertyChanged() method, which you'll use to raise the PropertyChanged event. This is the first time you're raising events. You've been writing event handler methods since Chapter 1, but this is the first time you're firing an event. You'll learn all about how this works and what's going on in Chapter 15. For now, all you need to know is that an interface can include an event, and that your OnPropertyChanged() method is following a standard .NET pattern for raising events to other objects.

private void OnPropertyChanged(string propertyName)
{
    PropertyChangedEventHandler propertyChangedEvent = PropertyChanged;
    if (propertyChangedEvent != null)
    {
        propertyChangedEvent(this, new PropertyChangedEventArgs(propertyName));
    }
}

Now all you need to do to notify a bound control that a property has changed is to call OnPropertyChanged() with the name of the property that's changing. We want the TextBlock that's bound to GeneratedDate to refresh its data every time the menu is updated, so all we need to do is add one line to the end of UpdateMenu():

public void UpdateMenu()
{
    Menu.Clear();
    for (int i = 0; i < NumberOfItems; i++)
    {
        Menu.Add(CreateMenuItem());
    }
    GeneratedDate = DateTime.Now;
    OnPropertyChanged("GeneratedDate");
}

Now the date should change when you generate a menu. Don't forget to implement INotifyPropertyChanged. Data binding updates work only when your data objects implement that interface. If you leave : INotifyPropertyChanged out of the class declaration, your bound controls won't get updated even if the data object fires PropertyChanged events.
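A related pattern you'll see in a lot of WPF code is to raise PropertyChanged from the property's setter, so every assignment notifies bound controls without extra calls sprinkled around the class. This isn't required for the exercise; it's just a sketch of a common alternative, and the Clock class name here is made up.

using System;
using System.ComponentModel;

// Sketch of a data object that raises PropertyChanged from a property setter.
// A common .NET pattern, not the exact code from the MenuMaker exercise.
class Clock : INotifyPropertyChanged
{
    private DateTime generatedDate;
    public DateTime GeneratedDate
    {
        get { return generatedDate; }
        set
        {
            generatedDate = value;
            OnPropertyChanged("GeneratedDate");   // any control bound to GeneratedDate refreshes
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler propertyChangedEvent = PropertyChanged;
        if (propertyChangedEvent != null)
            propertyChangedEvent(this, new PropertyChangedEventArgs(propertyName));
    }
}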
74 go fish goes xaml Finish porting the Go Fish! game to a WPF application. You ll need to modify the XAML from earlier in this chapter to add data binding, copy all the classes and enums from the Go Fish! game in Chapter 8 (or download them from our website), and update the Player and Game classes. 1 Add the existing class files and change their namespace to match your app. Add these files to your project from the Chapter 8 Go Fish! code: Values.cs, Suits.cs, Card.cs, Deck.cs, CardComparer_bySuit.cs, CardComparer_byValue.cs, Game.cs, and Player.cs. You can use the Add Existing Item option in the Solution Explorer, but you ll need to change the namespace in each of them to match your new projects (just like you did with multipart projects earlier in the book). Try building your project. You should get errors in Game.cs and Player.cs that look like this: 2 Remove all references to WinForms classes and objects; add using lines to Game. You re not in the WinForms world anymore, so delete using System.Windows.Forms; from the top of Game.cs and Player.cs. You ll also need to remove all mentions of TextBox. You ll need to modify the Game class to use INotifyPropertyChanged and ObservableCollection<T>, so add these using lines to the top of Game.cs: using System.ComponentModel; using System.Collections.ObjectModel; 3 4 Add an instance of Game as a static resource and set up the data context. Modify your XAML to add an instance of Game as a static resource and use it as the data context for the grid that contains the Go Fish! page you built earlier in the chapter. Here s the XAML for the static resource: <local:game x: and you re going to need a new constructor because you can include only resources that have parameterless constructors: public Game() { PlayerName = "Ed"; Hand = new ObservableCollection<string>(); ResetGame(); Add public properties to the Game class for data binding. Here are the properties you ll be binding to properties of the controls in the page: public bool GameInProgress { get; private set; public bool GameNotStarted { get { return!gameinprogress; public string PlayerName { get; set; public ObservableCollection<string> Hand { get; private set; public string Books { get { return DescribeBooks(); public string GameProgress { get; private set; Make sure you add the <Window.Resources> section to the top of your XAML, and you ll also need to add the xmlns:local tag, exactly like you did on pages 522 and Appendix ii
75 windows presentation foundation 5 You ll need two of each of these. 6 7 Use binding to enable or disable the TextBox, ListBox, and Buttons. You want the Your Name TextBox and the Start the game! Button to be enabled only when the game is not started, and you want the Your hand ListBox and Ask for a card Button to be enabled only when the game is in progress. You ll add code to the Game class to set the GameInProgress property. Have a look at the GameNotStarted property. Figure out how it works, and then add the following property bindings to the TextBox, ListBox, and two Buttons: IsEnabled="{Binding GameInProgress" IsEnabled="{Binding GameInProgress" IsEnabled="{Binding GameNotStarted" IsEnabled="{Binding GameNotStarted" Modify the Player class so it tells the Game to display the game s progress. The WinForms version of the Player class takes a TextBox as a parameter for its constructor. Change that to take a reference to the Game class and store it in a private field. (Look at the StartGame() method below to see how this new constructor is used when adding players.) Find the lines that use the TextBox reference and replace them with calls to the Game object s AddProgress() method. Modify the Game class. Change the PlayOneRound() method so that it s void instead of returning a Boolean, and have it use the AddProgress() method instead of the TextBox to display progress. If a player won, display that progress, reset the game, and return. Otherwise, refresh the Hand collection and describe the hands. You ll also need to add/update these four methods and figure out what they do and how they work."); public void ClearProgress() { GameProgress = String.Empty; OnPropertyChanged("GameProgress"); public void AddProgress(string progress) { GameProgress = progress + Environment.NewLine + GameProgress; OnPropertyChanged("GameProgress"); You ll also need to implement the INotifyPropertyChanged interface and add the same OnPropertyChanged() method that you used in the MenuMaker class. The updated methods use it, and your modified PullOutBooks() method will also use it. public void ResetGame() { GameInProgress = false; OnPropertyChanged("GameInProgress"); OnPropertyChanged("GameNotStarted"); books = new Dictionary<Values, Player>(); stock = new Deck(); Hand.Clear(); you are here 4 529
76 exercise solution Game game; Here s all the code-behind that you had to write. public MainWindow() { InitializeComponent(); A game = this.findresource("game") as Game; private void startbutton_click(object sender, RoutedEventArgs e) { game.startgame(); private void askforacard_click(object sender, RoutedEventArgs e) { if (cards.selectedindex >= 0) game.playoneround(cards.selectedindex); private void cards_mousedoubleclick(object sender, MouseButtonEventArgs e) { if (cards.selectedindex >= 0) game.playoneround(cards.selectedindex); These are the changes needed for the Player class: class Player { private string name; public string Name { get { return name; private Random random; private Deck cards; private Game game; public Player(String name, Random random, Game game) { this.name = name; this.random = random; this.game = game; this.cards = new Deck(new Card[] { ); game.addprogress(name + " has just joined the game"); public Deck DoYouHaveAny(Values value) { Deck cardsihave = cards.pulloutvalues(value); game.addprogress(name + " has " + cardsihave.count + " " + Card.Plural(value)); return cardsihave; public void AskForACard(List<Player> players, int myindex, Deck stock, Values value) { game.addprogress(name + " asks if anyone has a " + value); int totalcardsgiven = 0; for (int i = 0; i < players.count; i++) { if (i!= myindex) { Player player = players[i]; Deck CardsGiven = player.doyouhaveany(value); totalcardsgiven += CardsGiven.Count; while (CardsGiven.Count > 0) cards.add(cardsgiven.deal()); if (totalcardsgiven == 0) { game.addprogress(name + " must draw from the stock."); cards.add(stock.deal()); //... the rest of the Player class is the same Appendix ii
77 windows presentation foundation These are the changes needed for the XAML: <Grid Margin="10" DataContext="{StaticResource ResourceKey=game"> The TextBox has a twoway binding to PlayerName. <TextBlock Text="Your Name" /> <StackPanel Orientation="Horizontal" Grid. <TextBox x: <Button x: </StackPanel> </Grid> <TextBlock Text="Game progress" Grid. <ScrollViewer Grid. <TextBlock Text="Books" Margin="0,10,0,0" Grid. <ScrollViewer FontSize="12" Background="White" Foreground="Black" Grid. <TextBlock Text="Your hand" Grid. <ListBox x: <Button x: "/> <RowDefinition Height="Auto" MinHeight="150" /> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> The data context for the grid is the Game class, since all of the binding is to properties on that class. The IsEnabled property enables or disables the control. It s a Boolean property, so you can bind it to a Boolean property to turn the control on or off based on that property. Here s the Click event handler for the Start button. The Game Progress and Books ScrollViewers bind to the Progress and Books properties. you are here 4 531
78 exercise solution A Here s everything that changed in the Game class, including the code we gave you with the instructions. These properties are used by the XAML data binding. These methods make the game progress data binding work. New lines are added to the top so the old activity scrolls off the bottom of the ScrollViewer. Here s the StartGame() method we gave you. It clears the progress, creates the players, deals the cards, and then updates the progress and books. 532 Appendix ii using System.ComponentModel; using System.Collections.ObjectModel; class Game : INotifyPropertyChanged { private List<Player> players; private Dictionary<Values, Player> books; private Deck stock; public bool GameInProgress { get; private set; public bool GameNotStarted { get { return!gameinprogress; public string PlayerName { get; set; public ObservableCollection<string> Hand { get; private set; public string Books { get { return DescribeBooks(); public string GameProgress { get; private set; public Game() { PlayerName = "Ed"; Hand = new ObservableCollection<string>(); ResetGame(); public void AddProgress(string progress) { GameProgress = progress + Environment.NewLine + GameProgress; OnPropertyChanged("GameProgress"); public void ClearProgress() { GameProgress = String.Empty; OnPropertyChanged("GameProgress"); You need these lines for INotifyPropertyChanged and ObservableCollection."); Here s the new Game constructor. We create only one collection and just clear it when the game is reset. If we created a new object, the form would lose its reference to it, and the updates would stop. Every program you ve written in the book so far can be adapted or rewritten as a WPF application using XAML. But there are so many ways to write them, and that s especially true when you re using XAML! That s why we gave you so much of the code for this exercise.
79 windows presentation foundation public void PlayOneRound(int selectedplayercard) { Values cardtoaskfor = players[0].peek(selectedplayercard).value; for (int i = 0; i < players.count; i++) { if (i == 0) players[0].askforacard(players, 0, stock, cardtoaskfor); else players[i].askforacard(players, i, stock); if (PullOutBooks(players[i])) { AddProgress(players[i].Name + " drew a new hand"); int card = 1; while (card <= 5 && stock.count > 0) { players[i].takecard(stock.deal()); card++; OnPropertyChanged("Books"); players[0].sorthand(); if (stock.count == 0) { AddProgress("The stock is out of cards. Game over!"); AddProgress("The winner is... " + GetWinnerName()); ResetGame(); return; Hand.Clear(); foreach (String cardname in GetPlayerCardNames()) Hand.Add(cardName); if (!GameInProgress) AddProgress(DescribePlayerHands()); public void ResetGame() { GameInProgress = false; OnPropertyChanged("GameInProgress"); OnPropertyChanged("GameNotStarted"); books = new Dictionary<Values, Player>(); stock = new Deck(); Hand.Clear(); public event PropertyChangedEventHandler PropertyChanged; private void OnPropertyChanged(string propertyname) { This used to return a Boolean value so the form could update its progress. Now it just needs to call AddProgress, and data binding will take care of the updating for us. PropertyChangedEventHandler propertychangedevent = PropertyChanged; if (propertychangedevent!= null) { propertychangedevent(this, new PropertyChangedEventArgs(propertyName)); //... the rest of the Game class is the same... The books changed, and the form needs to know about the change so it can refresh its ScrollViewer. Here are the modifications to the PlayOneRound() method that update the progress when the game is over, or update the hand and the books if it s not. This is the ResetGame() method from the instructions. It clears the books, stock, and hand. This is the standard PropertyChanged event pattern from earlier in the chapter. you are here 4 533
80 Are you getting a strange XAML error about a class not existing in the namespace? Make sure that ALL your C# code compiles and that every control's event handler method is declared in the code-behind. Sometimes you ll get an error like this when you declare a static resource, even though you definitely have a class called MyDataClass in the namespace MyWpfApplication: This is often caused by either an error in the code-behind or a missing event handler for a XAML control. This can be a little misleading, because the IDE is telling you that there s an error on the tag that declares the static resource, when the error is actually somewhere else in the code. You can reproduce this yourself: create a new WPF project called MyWpfApplication, add a data class called MyDataClass, add it as a static resource to your page s <Window.Resources>, and add a button to your page. Then add Click="Button_Click" to the XAML to add an event handler for the button, but don t add the Button_Click() method. When you try to rebuild your code, you should see the error above. You can make it go away by adding the Button_Click() method to the code-behind. Sometimes the error message becomes a little clearer if you rightclick on the project in the Solution Explorer, click Unload Project to unload it, and then right-click it again and choose Reload Project to load it again. This may cause the IDE to show you a different error message that might be more helpful. 534 Appendix ii
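For reference, here is a minimal sketch of the code-behind that satisfies a Click="Button_Click" attribute and makes the error above go away. The empty handler body is just a placeholder, and the class assumes the default MainWindow that the WPF project template generates.

using System.Windows;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    // The handler that the XAML's Click="Button_Click" attribute expects.
    // Its body can be empty; it just has to exist for the project to build.
    private void Button_Click(object sender, RoutedEventArgs e)
    {
    }
}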
Chapter 11

Even though a lot of this chapter works only with Windows Store apps, you can still get the core learning with WPF. Windows Store was built for asynchronous programming, and WPF can still use it... but not all the tools are there.

Read through pages 536 and 537 in the main part of the book. See how Brian is shocked (shocked!) to find that his familiar file classes from Chapter 9 aren't there? Well, WPF apps don't have that problem. That's a good thing, because it means you can keep using the file classes and serialization that you're used to. But it also means that your WPF apps can't take advantage of the new asynchronous file and dialog classes that come with the .NET Framework for Windows Store. In this appendix, we'll give you two replacement projects to show you how to use the async and await keywords and data contract serialization with WPF apps.

Here's how we recommend that you work through Chapter 11:
- Pages 538 and 539 have replacements in this appendix. Use the replacements in place of the book pages.
- Pages are specific to Windows Store apps. Skip them.
- Read pages 546 and 547 to learn about data contract serialization.
- Skip pages 548, 549, and 550; they apply only to Windows Store apps.
- Read page 551 in the book. Then follow the Do this! project on the replacement pages in this appendix.

The rest of the chapter has you build a Windows Store replacement for Brian's excuse manager. The goal of this project is to learn about the file tools in the Windows.Storage namespace for Windows Store apps. We don't have a WPF alternative for this project, because those classes are specific to Windows Store apps.
82 don t keep me waiting C# programs can use await to be more responsive What happens when you call MessageBox.Show() from a WinForms program? Everything stops, and your program freezes until the dialog disappears. That s literally the most unresponsive that a program can be! Windows Store apps should always be responsive, even when they re waiting for feedback from a user. But some things like waiting for a dialog, or reading or writing all the bytes in a file take a long time. When a method sits there and makes the rest of the program wait for it to complete, programmers call that blocking, and it s one of the biggest causes of program unresponsiveness. Windows Store apps use the await operator and the async modifier to keep from becoming unresponsive during operations that block. You can see how it works by looking at an example of how a WPF could call a define task that blocks, but can be called asynchronously: Declare the method using the async modifier to indicate that it can be called asynchronously. private async Task LongTaskAsync() { await Task.Delay(5000); The Task class is in the System.Threading.Tasks namespace. Its Delay() method blocks for a specified number of milliseconds. That method is really similar to the Thread.Sleep() method that you used in Chapter 2, but it s defined with the async modifier so it can be called asynchronously with await. The await operator causes the method that s running this code to stop and wait until the ShowAsync() method completes and that method will block until the user chooses one of the commands. In the meantime, the rest of the program will keep responding to other events. As soon as the LongTaskAsync() method returns, the method that called it will pick up where it left off (although it may wait until after any other events that started up in the meantime have finished). If your method uses the await operator, then it must be declared with the async modifier: private async void countbutton_click(object sender, RoutedEventArgs e) { //... some code... await LongTaskAsync(); //... some more code: Notice how this is a Click event handler. Since it uses await, it also needs to be declared with the async modifier. When a method is declared with async, you have some options with how you call it. If you call the method as usual, then as soon as it hits the await statement it returns, which keeps the blocking call from freezing your app. 538 Appendix ii
83 windows presentation foundation You can see exactly how this works by creating a new WPF application with the following main window XAML: <Window x: <Grid> <StackPanel> <CheckBox x: <Button x: <TextBlock x: </StackPanel> </Grid> </Window> Here s the code-behind: using System.Threading; using System.Windows.Threading; public partial class MainWindow : Window { DispatcherTimer timer = new DispatcherTimer(); public MainWindow() { InitializeComponent(); We named our project WpfAndAsync. If you named your project something else, you ll need to change this line to match its namespace: x:class="wpfandasync.mainwindow" Do this! timer.tick += timer_tick; timer.interval = TimeSpan.FromSeconds(.1); int i = 0; void timer_tick(object sender, EventArgs e) { progress.text = (i++).tostring(); private async void countbutton_click(object sender, RoutedEventArgs e) { countbutton.isenabled = false; timer.start(); if (useawaitasync.ischecked == true) await LongTaskAsync(); else LongTask(); countbutton.isenabled = true; private void LongTask() { Thread.Sleep(5000); timer.stop(); private async Task LongTaskAsync() { await Task.Delay(5000); timer.stop(); The button s event handler uses the CheckBox s IsChecked property. If the box is checked, the event handler calls await LongTaskAsync(), which is asynchronous. The method is called with await, so the event handler method pauses and lets the rest of the program continue to run. Try adding other buttons to the window that change properties or print output to the console. You ll be able to use them while the timer ticks. If the CheckBox is not checked, IsChecked is false and the button s event handler calls LongTask(), which blocks. This causes the event handler method to block, which makes the entire program become unresponsive, and if you add other buttons they won t respond either. Make sure the box is checked, and then click the button. You ll see the numbers increase, and the form is responsive: the button disables itself, and you can move and resize the form. Then uncheck the box and click the button now the form freezes. you are here 4 539
84 those guys get around Stream some Guy objects to a file Do this! Here s a project to help you experiment with data contract serialization. Create a new WPF application. Then add both classes with the data contracts from page 551 in the book (you ll need using System. Runtime.Serialization in each of them). And add the familiar Suits and Values enums, too (for the Card class). Here s the window you ll build next: 1 Before you start coding, you ll need to right-click on References in the Solution Explorer and choose Add Reference from the menu. Click on Framework, scroll down to System.Runtime.Serialization, check it, and click OK: This will allow your WPF application to use the System.Runtime.Serialization namespace. You can also add an empty GuyManager class to get rid of the IDE error on the <local:guymanager> tag when you add the XAML in step 2. You ll fill in the GuyManager in step 3 when you flip the page. 552 Appendix ii
85 2 Here s the XAML for the page. windows presentation foundation We named this project GuySerializer. If your project has a different namespace, make sure you change these lines to match it. <Window x: <Window.Resources> <local:guymanager x: </Window.Resources> The grid's data context is the GuyManager static resource. <Grid DataContext="{StaticResource guymanager" Margin="5"> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="4*"/> <RowDefinition Height="3*"/> </Grid.RowDefinitions> <StackPanel> <Button x: <TextBlock Text="{Binding Joe" Margin="0,0,10,20" TextWrapping="Wrap"/> </StackPanel> <StackPanel Grid. <Button x: <TextBlock Text="{Binding Bob" Margin="0,0,0,20" TextWrapping="Wrap"/> </StackPanel> <StackPanel Grid. <Button x: <TextBlock Text="{Binding Ed" Margin="0,0,0,20" TextWrapping="Wrap"/> </StackPanel> <StackPanel Grid. <TextBlock>Last filename written</textblock> <TextBox Text="{Binding GuyFile, Mode=TwoWay" TextWrapping="Wrap" Height="60" Margin="0,0,0,20"/> </StackPanel> <StackPanel Grid. <Button x: <StackPanel> <TextBlock Text="New guy:"/> <TextBlock TextWrapping="Wrap" Text="{Binding NewGuy"/> </StackPanel> </StackPanel> </Grid> </Window> The page has three columns and two rows. Each column in the top row has a StackPanel with a TextBlock and a Button. ThisTextBlock is bound to the Ed property in GuyManager. The first cell in the bottom row spans two columns. It has several controls bound to properties. Why do you think we used a TextBox for the path? you are here We re not done yet flip the page!
86 think about separation of concerns 3 Add the GuyManager class. using System.ComponentModel; using System.IO; using System.Runtime.Serialization; class GuyManager : INotifyPropertyChanged { private Guy joe = new Guy("Joe", 37, M); public Guy Joe { get { return joe; private Guy bob = new Guy("Bob", 45, 4.68M); public Guy Bob { get { return bob; private Guy ed = new Guy("Ed", 43, 37.51M); public Guy Ed { get { return ed; public Guy NewGuy { get; set; public string GuyFile { get; set; This program uses TextBoxes that are bound to read-only properties that have only get accessors. If you try to bind to a property that has a public get accessor with a private set accessor, you ll get an error. Luckily, a backing field will work just fine. There are three read-only Guy properties with private backing fields. The XAML has a TextBlock bound to each of them. A fourth TextBlock is bound to this Guy property, which is set by the ReadGuy() method. public void ReadGuy() { if (String.IsNullOrEmpty(GuyFile)) return; 554 Appendix ii using (Stream inputstream = File.OpenRead(GuyFile)) { DataContractSerializer serializer = new DataContractSerializer(typeof(Guy)); NewGuy = serializer.readobject(inputstream) as Guy; OnPropertyChanged("NewGuy"); The ReadGuy() method uses familiar System.IO methods to open a stream and read from it. But instead of using a BinaryFormatter, it uses a DataContractSerializer to serialize data from an XML file.
87 If the file exists, it's deleted, then recreated using a file stream. It's serialized using the data contract serializer. public void WriteGuy(Guy guytowrite) { GuyFile = Path.GetFullPath(guyToWrite.Name + ".xml"); windows presentation foundation if (File.Exists(GuyFile)) File.Delete(GuyFile); using (Stream outputstream = File.OpenWrite(GuyFile)) { DataContractSerializer serializer = new DataContractSerializer(typeof(Guy)); serializer.writeobject(outputstream, guytowrite); OnPropertyChanged("GuyFile"); public event PropertyChangedEventHandler PropertyChanged; This uses the GetFullPath() method in the Path class (in System.IO) to get the full path of the filename to write. 4 private void OnPropertyChanged(string propertyname) { PropertyChangedEventHandler propertychangedevent = PropertyChanged; if (propertychangedevent!= null) { propertychangedevent(this, new PropertyChangedEventArgs(propertyName)); Here s the code-behind for MainWindow.xaml.cs: public partial class MainWindow : Window { GuyManager guymanager; Here's the same code you used earlier to implement INotifyPropertyChanged and fire off PropertyChanged events. public MainWindow() { InitializeComponent(); guymanager = FindResource("guyManager") as GuyManager; private void WriteJoe_Click(object sender, RoutedEventArgs e) { guymanager.writeguy(guymanager.joe); private void WriteBob_Click(object sender, RoutedEventArgs e) { guymanager.writeguy(guymanager.bob); private void WriteEd_Click(object sender, RoutedEventArgs e) { guymanager.writeguy(guymanager.ed); private void ReadNewGuy_Click(object sender, RoutedEventArgs e) { guymanager.readguy(); you are here 4 555
88 serializing guys Take your Guy Serializer for a test drive Use the Guy Serializer to experiment with data contract serialization: Write each Guy object to the files they ll be written to the bin\debug folder in your projects folder. Click the ReadGuy button to read the guy that was just written. It uses the path in the TextBox to read the file, so try updating that path to read a different guy. Try reading a file that doesn t exist. What happens? Open up the Simple Text Editor you built earlier in the chapter. You added XML files as options for the open and save file pickers, so you can use it to edit Guy files. Open one of the Guy files, change it, save it, and read it back into your Guy Serializer. What happens if you add invalid XML? What if you change the card suit or value so it doesn t match a valid enum value? Try adding or removing the DataMember names ([DataMember(Name="...")]). What does that do to the XML? What happens when you update the contract and then try to load a previously saved XML file? Can you fix the XML file to make it work? Try changing the namespace of the Card data contract. What happens to the XML? Q: Sometimes I make a change in my XAML or my code, and the IDE s designer gives me a message that I need to rebuild. What s going on? A: The XAML designer in the IDE is really clever. It s able to show you an updated page in real time as you make changes to your XAML code. You already know that when the XAML uses static resources, that adds object references to the Page class. Well, those objects need to get instantiated in order for them to be displayed in the designer. If you make a change to the class that s being used for a static resource, the designer doesn t get updated until you rebuild that class. That makes sense the IDE rebuilds your project only when you ask it to, and until you do that it doesn t actually have the compiled code in memory that it needs to instantiate the static resources. You can use the IDE to see exactly how this works. Open your Guy Serializer and edit the Guy.ToString() method to add some extra words to the return value. Then go back to the main page designer. It s still showing the old output. Now choose Rebuild from the Build menu. The designer will update itself as soon as the code finishes rebuilding. Try making another change, but don t rebuild yet. Instead, add another TextBlock that s bound to a Guy object. The IDE will use the old version of the object until you rebuild. 556 Appendix ii Q: I m confused about namespaces. How is the namespace in the program different from the one in an XML file? A: Let s take a step back and understand why namespaces are necessary. C#, XML files, the Windows filesystem, and web pages all use different (but often related) naming systems to give each class, XML document, file, or web page its own unique name. So why is this important? Well, let s say back in Chapter 9, you created a class called KnownFolders to help Brian keep track of excuse folders. Uh-oh! Now you find out that the.net Framework for Windows Store already has a KnownFolders class. No worries. The.NET KnownFolders class is in the Windows.Storage namespace, so it can exist happily alongside your class with the same name, and that s called disambiguation. Data contracts also need to disambiguate. You ve seen several different versions of a Guy class throughout this book. What if you wanted to have two different contracts to serialize different versions of Guy? You can put them in different namespaces to disambiguate them. 
And it makes sense that these namespaces would be separate from the ones for your classes, because you can't really confuse classes and contracts. One more thing: your WPF applications can use the same OpenFileDialog and SaveFileDialog classes that you used in your WinForms projects. Here's an MSDN page that has more information and code samples:
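Coming back to data contract namespaces for a moment, here is a minimal sketch of two Guy contracts told apart by their contract namespaces. The namespace URIs and the OldContracts/NewContracts C# namespaces are made up for illustration; the real Guy contract is the one from page 551 of the book.

using System.Runtime.Serialization;

// Two contracts with the same class name, disambiguated by their data contract namespaces.
namespace OldContracts
{
    [DataContract(Namespace = "http://example.com/guys/v1")]
    public class Guy
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public int Age { get; set; }
    }
}

namespace NewContracts
{
    [DataContract(Namespace = "http://example.com/guys/v2")]
    public class Guy
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public int Age { get; set; }
        [DataMember] public decimal Cash { get; set; }
    }
}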
Chapter 12

Remember Brian's excuse manager from Chapter 9? Well, it's got a few bugs, and you'll fix them in this chapter.

Exception handling works the same in WPF as it does in WinForms and Windows Store. If you flip through the replacement pages for Chapter 12, you'll notice that there's no XAML. That's because the material on exception handling that we cover in Head First C# is basically the same whether you're working on a WPF application, a WinForms program, a Windows Store app, or even a console application.

Here's how you should use this appendix for Chapter 12:
- Read through page 575 in the book, including the Sharpen your Pencil exercise.
- Use the appendix replacement pages for 576 and 577.
- Read pages 578 and 579 in the book.
- Follow pages in this appendix, and skip 591 in the main part of the book.
- Finish the rest of the chapter in the book. Then do all of Chapter 13 in the book, too!

Once you're done with this chapter, you can go straight through Chapter 13 in the book. It doesn't depend on Windows 8 or Windows Store apps at all.
90 nobody expects the Brian s code did something unexpected When Brian wrote his Excuse Manager, he never expected the user to try to pull a random excuse out of an empty directory. This appendix depends on the Excuse Manager WinForms app that you built in Chapter 9. If your code doesn t match the code in the appendix, you can download it from 1 The problem happened when Brian pointed his Excuse Manager program at an empty folder on his laptop and clicked the Random Excuse button. Let s take a look at it and see if we can figure out what went wrong. Here s the unhandled exception window that popped up when he ran the program in the IDE: Do this! 2 OK, that s a good starting point. It s telling us that there s some value that doesn t fall inside some range. Clicking the Break button drops the IDE back into the debugger, with the execution halted on a specific line of code: public Excuse(Random random, string folder) { string[] filenames = Directory.GetFiles(folder, "*.excuse"); OpenFile(fileNames[random.Next(fileNames.Length)]); 3 Let s use the Watch window to track down the problem. Add a watch for filenames.length. Looks like that returns 0. Try adding a watch for random.next(filenames.length). That returns 0, too. So add a watch for filenames[random.next(filenames.length)]. This time the Value column in the Watch window has the same error message that you saw in step 1: Out of bounds array index. You can call methods and use indexers in the Watch window. When one of those things throws an exception, you ll see that exception in the Watch window, too. 576 Appendix ii
91 windows presentation foundation 4 So what happened? It turns out that Directory.GetFiles() returns an empty array when you point it at an empty folder. So filenames.length is zero, and passing 0 to Random.Next() will always return 0 as well. Try to get the 0th element of an empty array and your program will throw a System. IndexOutOfRangeException, with the message Index was outside the bounds of the array. Now that we know what the problem is, we can fix it. All we need to do is check to see if the selected folder has excuses in it before we try to load a random excuse from it: private void randomexcuse_click(object sender, EventArgs e) { if (Directory.GetFiles(selectedFolder).Length == 0) MessageBox.Show("There are no excuse files in the selected folder."); else if (CheckChanged()) { currentexcuse = new Excuse(random, selectedfolder); UpdateForm(false); What do you think about that solution? Does it make the most sense to put it in the form, or would it be better to find a way to encapsulate it inside the Excuse class? By checking for excuse files in the folder before we create the Excuse object, we can prevent the exception from being thrown and display a helpful dialog, too. Oh, I get it. Exceptions aren t always bad. Sometimes they identify bugs, but a lot of the time they re just telling me that something happened that was different from what I expected. That s right. Exceptions are a really useful tool that you can use to find places where your code acts in ways you don t expect. A lot of programmers get frustrated the first time they see an exception. But exceptions are really useful, and you can use them to your advantage. When you see an exception, it s giving you a lot of clues to help you figure out when your code is reacting to a situation that you didn t anticipate. And that s good for you: it lets you know about a new scenario that your program has to handle, and it gives you an opportunity to do something about it. you are here 4 577
92 you don t know where that watch has been Use the IDE s debugger to ferret out exactly what went wrong in the Excuse Manager Let s use the debugger to take a closer look at the problem that we ran into in the Excuse Manager. You ve probably been using the debugger a lot over the last few chapters, but we ll go through it step by step anyway to make sure we don t leave out any details. Debug this 1 Add a breakpoint to the Random button s event handler. You ve got a starting point the exception happens when the Random Excuse button is clicked after an empty folder is selected. So open up the button s event handler and use Debug Toggle Breakpoint (F9) to add a breakpoint to the first line of the method. Start debugging, choose an empty folder, and then click the Random button to make your program break at the breakpoint: 2 Step into the Excuse constructor. We want to reproduce the problem, but we already added code to get past it. No problem. Right-click on the line currentexcuse = new Excuse(random, selectedfolder); and choose Set Next Statement (Ctrl+Shift+F10). Then use Step Into (F11) to step into the constructor: You used the debugger to skip past the workaround that you added to avoid the exception, so now the Excuse constructor is about to throw the exception again. 580 Appendix ii
93 windows presentation foundation 3 Step through the program until it throws the exception. You ve already seen how handy the Watch window is. Now we ll use it to reproduce the exception. Choose Step Over (F10) twice to get your program to throw the exception. Then use the IDE to select filenames.length, right-click on it, and choose to add a watch. Then do it again for random.next(filenames.length) and filenames[random.next(filenames.length)]: The Watch window has another very useful feature. It lets you change the value of variables and fields that it s displaying, and it even lets you execute methods and create new objects. When you do, it displays its reevaluate icon that you can click to tell it to execute that method again. 4 Add a watch for the Exception object. Debugging is a little like performing a forensic crime scene investigation on your program. You don t necessarily know what you re looking for until you find it, so you need to use your debugger CSI kit to follow clues and track down the culprit. One important tool is adding $exception to the Watch window, because it shows you the contents of the Exception object that s been thrown: When you get an exception, you can go back and reproduce it in the debugger and use the Exception object to help you fix your code. you are here 4 581
94 make a break for it Q: How do I know where to put a breakpoint? A: That s a really good question, and there s no one right answer. When your code throws an exception, it s always a good idea to start with the statement that threw it. But usually, the problem actually happened earlier in the program, and the exception is just fallout from it. For example, the statement that throws a divide-by-zero error could be dividing values that were generated 10 statements earlier but just haven t been used yet. So there s no one good answer to where you should put a breakpoint, because every situation is different. But as long as you ve got a good idea of how your code works, you should be able to figure out a good starting point. Q: Can I run any method in the Watch window? A: Yes. Any statement that s valid in your program will work inside the Watch window, even things that make absolutely no sense to run inside a Watch window. Here s an example. Bring up a program, start it running, break it, and then add this to the Watch window: System.Threading. Thread.Sleep(2000). That method causes your program to delay for two seconds.there s no reason you d ever do that in real life, but it s interesting to see what happens: the IDE will block and you ll get a wait cursor for two seconds while the method evaluates. Then, since Sleep() has no return value, the Watch window will display the value Expression has been evaluated and has no value to let you know that it didn t return anything. But it did evaluate it. Not only that, but it displays IntelliSense pop-ups to help you type code into the window. That s useful because it shows the available properties and methods for objects currently in memory. Q: Wait, so isn t it possible for me to run something in the Watch window that ll change the way my program runs? A: Yes! Not permanently, but it can definitely affect your program s output. But even better, just hovering over fields inside the debugger can cause your program to change its behavior, because hovering over a property executes its get accessor. If you have a property that has a get accessor that executes a method, then hovering over that property will cause that method to execute. And if that method sets a value in your program, then that value will stay set if you run the program again. And that can cause some pretty unpredictable results inside the debugger. Programmers have a name for results that seem to be unpredictable and random: they re called heisenbugs (which is a joke that makes sense to physicists and cats trapped in boxes). When you run your program inside the IDE, an unhandled exception will cause it to break as if it had run into a breakpoint. 582 Appendix ii
95 windows presentation foundation Uh-oh the code s still got problems Brian was happily using his Excuse Manager when he accidentally chose a folder full of files that weren t created by the Excuse Manager. Let s see what happens when he tries to load one of them... No, not again! 1 You can re-create Brian s problem. Take a random file that isn t a serialized excuse and give it the.excuse file extension. 2 Pop open the Excuse Manager in the IDE and open up the file you created. It throws an exception! Look at the message, then click the Break button to start investigating. 3 Open up the Locals window and expand $exception (you can also enter it into the Watch window). Take a close look at its members to see if you can figure out what went wrong. Do you see why the program threw the exception? Does it make sense for the program to crash if it encounters an invalid Excuse XML file? Can you think of anything you can do about this? you are here 4 583
Wait a second. Of course the program's gonna crash. I gave it a bad file. Users screw up all the time. You can't expect me to do anything about that... right?

Actually, there is something you can do about it. Yes, it's true that users screw up all the time. That's a fact of life. But that doesn't mean you can't do anything about it. There's a name for programs that deal with bad data, malformed input, and other unexpected situations gracefully: they're called robust programs. And C# gives you some really powerful exception handling tools to help you make your programs more robust. Because while you can't control what your users do, you can make sure that your program doesn't crash when they do it.

ro-bust, adj. sturdy in construction; able to withstand or overcome adverse conditions. After the Tacoma Narrows Bridge disaster, the civil engineering team looked for a more robust design for the bridge that would replace it.

Serializers will throw an exception if there's anything at all wrong with a serialized file. It's easy to get the Excuse Manager to throw a SerializationException: just feed it any file that's not a serialized Excuse object. When you try to deserialize an object from a file, DataContractSerializer expects the file to contain a serialized object that matches the contract of the class that it's trying to read. If the file contains anything else, almost anything at all, then the ReadObject() method will throw a SerializationException. The BinaryFormatter class will also throw a SerializationException if you give it a file that doesn't contain exactly the right serialized object. It's even more finicky than DataContractSerializer!
Handle exceptions with try and catch

In C#, you can basically say, "Try this code, and if an exception occurs, catch it with this other bit of code." The part of the code you're trying is the try block, and the part where you deal with exceptions is called the catch block. In the catch block, you can do things like print a friendly error message instead of letting your program come to a screeching halt:

private void OpenFile(string excusePath)
{
    try
    {
        this.excusePath = excusePath;
        ...

This is the try block. You start exception handling with try. In this case, we'll put the existing code in it. You'll also need to add these lines to the top of Excuse.cs:

using System.Runtime.Serialization;
using System.Windows.Forms;

What happens if you leave out this last line of code? Can you figure out why we included it in the catch block?

Put the code that might throw an exception inside the try block. If no exception happens, it'll get run exactly as usual, and the statements in the catch block will be ignored. But if a statement in the try block throws an exception, the rest of the try block won't get executed. The catch keyword means that the block immediately following it contains an exception handler. You'll recognize the code here because we surrounded the entire method with this try block. When an exception is thrown, the program immediately jumps to the catch statement and starts executing the catch block. This is the simplest kind of exception handling: stop the program, write out the exception message, and keep running.

If throwing an exception makes your code automatically jump to the catch block, what happens to the objects and data you were working with before the exception happened?
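Only the start of the try block survives above, so here is a rough sketch of how the whole handled method might look. This is not the book's exact code: the Description property, the use of DataContractSerializer, the SerializationException filter, and the message text are assumptions based on the surrounding pages.

using System.IO;
using System.Runtime.Serialization;
using System.Windows.Forms;

class Excuse
{
    public string Description { get; set; }

    private string excusePath;

    // Sketch of the try/catch pattern described above, applied to reading a serialized excuse file.
    private void OpenFile(string excusePath)
    {
        try
        {
            this.excusePath = excusePath;
            using (Stream inputStream = File.OpenRead(excusePath))
            {
                DataContractSerializer serializer = new DataContractSerializer(typeof(Excuse));
                Excuse loaded = serializer.ReadObject(inputStream) as Excuse;
                if (loaded != null)
                    Description = loaded.Description;
            }
        }
        catch (SerializationException ex)
        {
            // Tell the user what went wrong instead of letting the program crash.
            MessageBox.Show("This file doesn't contain a valid excuse: " + ex.Message);
        }
    }
}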
What happens when a method you want to call is risky?

Users are unpredictable. They feed all sorts of weird data into your program and click on things in ways you never expected. And that's just fine, because you can handle unexpected input with good exception handling.

1. Let's say your user is using your code ("I wonder what happens if I click here...") and gives some input that it didn't expect to a method in a class you wrote:

public class Data {
    public void Process(Input i) {
        if (i.IsBad()) {
            Explode();
        }
    }
}

2. That method does something risky, something that might not work at runtime. "My Process() method will blow up if it gets bad input data!" (Runtime just means while your program is running. Some people refer to exceptions as runtime errors.)

3. You need to know that the method you're calling is risky. If you can come up with a way to do a less risky thing that avoids throwing the exception, that's the best possible outcome! But some risks just can't be avoided, and that's when you want to do this.

4. You then write code that can handle the failure if it does happen. You need to be prepared, just in case. Now your program's more robust, and the user thinks, "Wow, this program's really stable!"

public class Data {
    public void Process(Input i) {
        try {
            if (i.IsBad()) {
                Explode();
            }
        } catch {
            HandleIt();
        }
    }
}

Your class, now with exception handling.
99 windows presentation foundation Q: So when do I use try and catch? A: Anytime you re writing risky code, or code that could throw an exception. The trick is figuring out which code is risky, and which code is safer. You ve already seen that code that uses input provided by a user can be risky. Users give you incorrect files, words instead of numbers, and names instead of dates, and they pretty much click everywhere you could possibly imagine. A good program will take all that input and work in a calm, predictable way. It might not give the users a result they can use, but it will let them know that it found the problem and hopefully suggest a solution. Q: How can a program suggest a solution to a problem it doesn t even know about in advance? A: That s what the catch block is for. A catch block is executed only when code in the try block throws an exception. It s your chance to make sure the user knows that something went wrong, and to let the user know that it s a situation that might be corrected. If the Excuse Manager simply crashes when there s bad input, that s not particularly useful. But if it tries to read the input and displays garbage in the form, that s also not useful in fact, some people might say that it s worse. But if you have the program display an error message telling the user that it couldn t read the file, then the user has an idea of what went wrong, and information that he can use to fix the problem. Q: So the debugger should really only be used to troubleshoot exceptions then? A: No. As you ve already seen many times throughout the book, the debugger s a really useful tool that you can use to examine any code you ve written. Sometimes it s useful to step through your code and check the values of certain fields and variables like when you ve got a really complex method and you want to make sure it s working properly. But as you may have guessed from the name debugger, its most common use is to track down and remove bugs. Sometimes those bugs are exceptions that get thrown. But a lot of the time, you ll be using the debugger to try to find other kinds of problems, like code that gives a result that you don t expect. Q: I m not sure I totally got what you did with the Watch window. A: When you re debugging a program, you usually want to pay attention to how a few variables and fields change. That s where the Watch window comes in. If you add watches for a few variables, the Watch window updates their values every time you step into, out of, or over code. That lets you monitor exactly what happens to them after every statement, which can be really useful when you re trying to track down a problem. The Watch window also lets you type in any statement you want, and even call methods, and the IDE will evaluate it and display the results. If the statement updates any of the fields and variables in your program, then it does that, too. That lets you change values while your program is running, which can be another really useful tool for reproducing exceptions and other bugs. Any changes you make in the Watch window just affect the data in memory, and last only as long as the program is running. Restart your program, and values that you changed will be undone. The catch block is executed only when code in the try block throws an exception. It gives you a chance to make sure your user has the information to fix the problem. you are here 4 587
100 go with the flow Use the debugger to follow the try/catch flow An important part of exception handling is that when a statement in your try block throws an exception, the rest of the code in the block gets short-circuited. The program s execution immediately jumps to the first line in the catch block. But don t take our word for it... Debug this 1 Add the try/catch from a few pages ago to your Excuse Manager app s ReadExcuseAsync() method. Then place a breakpoint on the opening bracket { in the try block. 2 Start debugging your app and open up a file that s not a valid excuse file (but still has the.excuse extension). When the debugger breaks on your breakpoint, click the Step Over button (or F10) five times to get to the statement that calls ReadObject() to deserialize the Excuse object. Here s what your debugger screen should look like: Put the breakpoint on the opening bracket of the try block. Step over the statements until your yellow next statement bar shows that the next statement to get executed will read the Excuse object from the stream. 588 Appendix ii
101 windows presentation foundation 3 Step over the next statement. As soon as the debugger executes the Deserialize() statement, the exception is thrown and the program short-circuits right past the rest of the method and jumps straight to the catch block. The debugger will highlight the catch statement with its yellow next statement block, but it shows the rest of the block in gray to show you that it s about to execute the whole thing. 4 Start the program again by pressing the Continue button (or F5). It ll begin running the program again, starting with whatever s highlighted by the yellow next statement block in this case, the catch block. It will just display the dialog and then act as if nothing happened. The ugly crash has now been handled. Here s a career tip: a lot of C# programming job interviews include a question about how you deal with exceptions in a constructor. Keep risky code out of the constructor! You ve noticed by now that a constructor doesn t have a return value, not even void. That s because a constructor doesn t actually return anything. Its only purpose is to initialize an object which is a problem for exception handling inside the constructor. When an exception is thrown inside the constructor, then the statement that tried to instantiate the class won t end up with an instance of the object. you are here 4 589
If you have code that should ALWAYS run, use a finally block

When your program throws an exception, a couple of things can happen. If the exception isn't handled, your program will stop processing and crash. If the exception is handled, your code jumps to the catch block. But what about the rest of the code in your try block? What if you were closing a stream, or cleaning up important resources? That code needs to run, even if an exception occurs, or you're going to make a mess of your program's state. That's where the finally block comes in really handy. It comes after the try and catch blocks. The finally block always runs, whether or not an exception was thrown.

If there is no exception thrown during the try block, the code in the finally block will execute after the try block completes. If there's an exception handled by a catch block, then it will short-circuit as usual, and then run the finally block after the catch block.

private void OpenFile(string excusePath)
{
    try
    {
        this.excusePath = excusePath;
        ...
    }
    finally
    {
        // Any code here will get executed no matter what
    }
}

Always catch specific exceptions like SerializationException. You typically follow a catch statement with a specific kind of exception telling it what to catch. It's valid C# code to just have catch (Exception), and you can even leave the exception type out and just use catch. When you do that, it catches all exceptions, no matter what type of exception is thrown. But it's a really bad practice to have a catch-all exception handler like that. Your code should always catch as specific an exception as possible.
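Here is a compact, self-contained sketch of that flow. It's generic code rather than the Excuse Manager's, and the file name and messages are made up; the point is that the finally block closes the stream whether the read succeeds or throws.

using System;
using System.IO;

class FinallyDemo
{
    static void Main()
    {
        Stream input = null;
        try
        {
            input = File.OpenRead("excuses.xml");   // hypothetical file name
            int firstByte = input.ReadByte();       // skipped if OpenRead() threw
            Console.WriteLine("First byte: " + firstByte);
        }
        catch (IOException ex)
        {
            // Runs only if the try block threw an IOException (FileNotFoundException derives from it).
            Console.WriteLine("Couldn't read the file: " + ex.Message);
        }
        finally
        {
            // Runs whether or not an exception was thrown, so the stream never stays open.
            if (input != null)
                input.Close();
        }
    }
}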
Reminder: Once you finish Chapter 12, you can go straight through Chapter 13 in the book. It doesn't depend on Windows 8 or Windows Store apps at all.

Chapter 14

In Chapter 14, you'll see a bunch of LINQ queries. In the book you'll combine them into a single Windows Store app. We'll show you how to build a WPF application instead. LINQ works with any kind of C# program.

When you read Chapter 14 in the main part of the book, you'll see that it's structured differently from other chapters. It has a series of increasingly complex LINQ queries, and small console apps to demonstrate each of them. Throughout the chapter, you'll also see exercises to build a Windows Store app that combines all the queries into a single user interface. Over the next few pages of this appendix, we'll show you how to build a WPF application that executes those same queries. Here's how we recommend you use this appendix with Chapter 14:

Read through page 657 in the book. Even though pages in the chapter through 665 are about building a Windows Store app, read them, especially the parts about anonymous types. It will help to get a sense of how the Comic, ComicQuery, and ComicQueryManager classes work.

Pages 666 and 667 describe more LINQ queries. You can skim pages 668 and 669, because those are more Windows Store-related pages.

Read pages , but don't do the exercise on page 679.

You can skip the rest of the chapter, because it's related to Windows Store apps. Instead, follow the replacement pages
104 Build a WPF comic query application When you read through Chapter 14 in the book, you saw that we built a Windows Store app to execute the LINQ queries throughout the chapter. Since we followed the principle of separation of concerns, the classes for managing data and issuing queries were separated from the code that created the user interface. That let us reuse the same data and query management classes to build another app using the Visual Studio Split App template. Now we ll be able to take advantage of the same separation of concerns and build a WPF application using the same data and query classes. Do this! 1 Create a new WPF application and Add existing classes and images from the Comic app. Before you start this project, you ll need to download source code to the JimmysComics app from Chapter 14. See the Head First Labs website () for a link to the source code. Once you ve got the source code, you ll build a new WPF application called JimmysComics. Then right-click on the project name in the Solution Explorer and choose Add Existing Item to add the following items from the Windows Store app we built in the book (you can download the source from the book s website): Purchase.cs Comic.cs ComicQuery.cs ComicQueryManager.cs PriceRange.cs. The following files are in the Assets folder: bluegray_250x250.jpg, bluegray_250x250.jpg, captain_ amazing_250x250.jpg, captain_amazing_zoom_250x250.jpg add them to the root level of your WPF application so they re alongside your XAML and C# files. Your Solution Explorer should look like this: If you give your project a different name, make sure you change the namespace for the C# files you added to match your project's namespace. You ll also need to select each image file in the Solution Explorer and use the Properties window to set Build Action to Content and Copy to Output Directory to Copy always. Here s what it looks like make sure you do this for each of the.jpg files that you added: 680 Appendix ii
105 windows presentation foundation 2 Make two modifications to ComicQueryManager.cs. There are two small changes you ll need to make to ComicQueryManager.cs. WPF applications cannot use the Windows.UI namespace because it s only part of the.net Framework for Windows Store. You ll need to change the using statements at the top to replace Windows.UI with System.Windows : using System.Collections.ObjectModel; using System.Windows.Media.Imaging; And WPF applications load images slightly differently from Windows Store apps, so you ll need to change the CreateImageFromAssets() method in ComicQueryManager. Here s the new method: private static BitmapImage CreateImageFromAssets(string imagefilename) { try { Uri uri = new Uri(imageFilename, UriKind.RelativeOrAbsolute); return new BitmapImage(uri); catch (System.IO.IOException) { return new BitmapImage(); You copied the.jpg files into your project's top-level folder. This new CreateImageFromAssets() method will load those files. 3 Add code-behind for the main window. Here s all the code-behind you ll need for MainWindow.xaml.cs. public partial class MainWindow : Window { ComicQueryManager comicquerymanager; public MainWindow() { InitializeComponent(); comicquerymanager = FindResource("comicQueryManager") as ComicQueryManager; comicquerymanager.updatequeryresults(comicquerymanager.availablequeries[0]); private void ListView_SelectionChanged(object sender, SelectionChangedEventArgs e) { if (e.addeditems.count >= 1 && e.addeditems[0] is ComicQuery) { comicquerymanager.currentqueryresults.clear(); comicquerymanager.updatequeryresults(e.addeditems[0] as ComicQuery); The ListView control fires its SelectionChanged event whenever the user selects or deselects items. The items that were selected can be found in the e.addeditems collection. you are here 4 681
106 4 Add the XAML for the main window. Here s the XAML for the main window. Remember, if you used a different project name, make sure you change JimmysComics to match your project s namespace. This ListView's SelectionMode is set to Single so only one query can be selected at a time. <Window x: <Window.Resources> <local:comicquerymanager x: </Window.Resources> <Grid DataContext="{StaticResource ResourceKey=comicQueryManager"> <Grid.ColumnDefinitions> The ListView on the right has an item template that displays information about each query. The ListView on the right has an item template that shows individual items in the query results. </Grid> </Window> <ColumnDefinition Width="2*"/> <ColumnDefinition Width="3*"/> </Grid.ColumnDefinitions> <ListView SelectionMode="Single" ItemsSource="{Binding AvailableQueries" SelectionChanged="ListView_SelectionChanged"> <ListView.ItemTemplate> <DataTemplate> <Grid Height="55" Margin="6"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Border Width="55" Height="55"> <Image Source="{Binding Image" Stretch="UniformToFill"/> </Border> <StackPanel Grid. <TextBlock Text="{Binding Title" TextWrapping="NoWrap"/> <TextBlock Text="{Binding Subtitle" TextWrapping="NoWrap"/> <TextBlock Text="{Binding Description" TextWrapping="NoWrap"/> </StackPanel> </Grid> </DataTemplate> </ListView.ItemTemplate> </ListView> <ListView Grid. <ListView.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <Image Source="{Binding Image" Margin="0,0,20,0" Stretch="UniformToFill" Width="25" Height="25" VerticalAlignment="Top" HorizontalAlignment="Right"/> <StackPanel> <TextBlock Text="{Binding Title" /> </StackPanel> </StackPanel> </DataTemplate> </ListView.ItemTemplate> </ListView> 682 Appendix ii
107 windows presentation foundation When you run the app, the queries appear on the left, and the results of the selected query appear on the right. Queries that return comic books have additional information: price, synopsis, even a cover image. Can you figure out how to get the comic queries to display all the information about each comic? You'll need to add the comic book cover images to the project. You'll find some helpful XAML code in the chapter on pages 689 and 690. you are here 4 683
Chapter 15

There are only a few pages in this chapter that are specific to Windows Store apps. You should read them anyway! Events are useful for any app, but especially important for understanding XAML.

Events can be simple and straightforward, because you've been using them throughout the book. But there's a lot more depth to them than you might expect. This chapter helps you understand events in more detail. Here's what we recommend for this chapter:

Read the chapter in the book through page 711.

Use the replacement pages in this appendix for the exercise on pages and its solution on pages

Read pages in the book.

Pages are specific to Windows Store apps, but we recommend that you read them anyway. They give you some insight not just into Windows Store apps, but also into some basic features of Windows 8. We provide replacement pages for pages in this appendix.

Read the rest of the chapter in the book. The only pages you should skip are the top of page 740, and pages
110 Pitcher object put it all together 2 It s time to put what you ve learned so far into practice. Your job is to complete the Ball and Pitcher classes, add a Fan class, and make sure they all work together with a very basic version of your baseball simulator. 1 Complete the Pitcher class. Below is what we ve got for Pitcher. Add the CatchBall() and CoverFirstBase() methods. Both should create a string saying that the catcher has either caught the ball or run to first base and add that string to a public ObservableCollection<string> called PitcherSays. class Pitcher { public Pitcher(Ball ball) { ball.ballinplay += new EventHandler(ball_BallInPlay); void ball_ballinplay(object sender, EventArgs e) { if (e is BallEventArgs){ BallEventArgs balleventargs = e as BallEventArgs; if ((balleventargs.distance < 95) && (balleventargs.trajectory < 60)) CatchBall(); else CoverFirstBase(); You ll need to implement these two methods to add a string to the PitcherSays ObservableCollection. 2 Write a Fan class. Create another class called Fan. Fan should also subscribe to the BallInPlay event in its constructor. The fan s event handler should see if the distance is greater than 400 feet and the trajectory is greater than 30 (a home run), and grab for a glove to try to catch the ball if it is. If not, the fan should scream and yell. Everything that the fan screams and yells should be added to an ObservableCollection<string> called FanSays. Look at the output on the facing page to see exactly what it should print. Fan object? 712 Appendix ii
111 windows presentation foundation 3 Build a very simple simulator. If you didn t do it already, create a new WPF Application and add the following BaseballSimulator class. Then add it as a static resource to the page. using System.Collections.ObjectModel; class BaseballSimulator { private Ball ball = new Ball(); private Pitcher pitcher; private Fan fan; public ObservableCollection<string> FanSays { get { return fan.fansays; public ObservableCollection<string> PitcherSays { get { return pitcher.pitchersays; public int Trajectory { get; set; public int Distance { get; set; public BaseballSimulator() { pitcher = new Pitcher(ball); fan = new Fan(ball); public void PlayBall() { BallEventArgs balleventargs = new BallEventArgs(Trajectory, Distance); ball.onballinplay(balleventargs); 4 Build the main window. Can you come up with the XAML just from looking at the screenshot to the right? The two TextBox controls are bound to the Trajectory and Distance properties of the BaseballSimulator static resource, and the pitcher and fan chatter are ListView controls bound to the two ObservableCollections. See if you can make your simulator generate the above fan and pitcher chatter with three successive balls put into play. Write down the values you used to get the result below: Ball 1: Trajectory: Distance: Ball 2: Trajectory: Distance: Don t forget the Click event handler for the button. Ball 3: Trajectory: Distance: you are here 4 713
112 Read-only automatic properties work really well in event arguments because the event handlers read only the data passed to them. The fan s BallInPlay event handler looks for any ball that s high and long. Here are the Ball and BallEventArgs from earlier, and the new Fan class that needed to be added: class Ball { public event EventHandler BallInPlay; public void OnBallInPlay(BallEventArgs e) { EventHandler ballinplay = BallInPlay; if (ballinplay!= null) ballinplay(this, e); class BallEventArgs : EventArgs { public int Trajectory { get; private set; public int Distance { get; private set; public BallEventArgs(int trajectory, int distance) { this.trajectory = trajectory; this.distance = distance; using System.Collections.ObjectModel; class Fan { public ObservableCollection<string> FanSays = new ObservableCollection<string>(); private int pitchnumber = 0; public Fan(Ball ball) { ball.ballinplay += new EventHandler(ball_BallInPlay); void ball_ballinplay(object sender, EventArgs e) { pitchnumber++; if (e is BallEventArgs) { BallEventArgs balleventargs = e as BallEventArgs; if (balleventargs.distance > 400 && balleventargs.trajectory > 30) FanSays.Add("Pitch #" + pitchnumber + ": Home run! I'm going for the ball!"); else FanSays.Add("Pitch #" + pitchnumber + ": Woo-hoo! Yeah!"); Here s the code-behind for the page: public partial class MainWindow : Window { BaseballSimulator baseballsimulator; public MainWindow() { InitializeComponent(); The OnBallInPlay() method just raises the BallInPlay event but it has to check to make sure it s not null; otherwise, it ll throw an exception. The Fan object s constructor chains its event handler onto the BallInPlay event. baseballsimulator = FindResource("baseballSimulator") as BaseballSimulator; private void Button_Click(object sender, RoutedEventArgs e) { baseballsimulator.playball(); 714 Appendix ii
113 Here s the XAML for the page. It also needs: <local:baseballsimulator x: windows presentation foundation <Window.Resources> <local:baseballsimulator x: </Window.Resources> <Grid Margin="5" DataContext="{StaticResource ResourceKey=baseballSimulator"> <Grid.ColumnDefinitions> <ColumnDefinition Width="200" /> <ColumnDefinition/> </Grid.ColumnDefinitions> <StackPanel Margin="0,0,10,0"> <TextBlock Text="Trajectory" Margin="0,0,0,5"/> <TextBox Text="{Binding Trajectory, Mode=TwoWay" Margin="0,0,0,5"/> <TextBlock Text="Distance" Margin="0,0,0,5"/> <TextBox Text="{Binding Distance, Mode=TwoWay" Margin="0,0,0,5"/> <Button Content="Play ball!" Click="Button_Click"/> </StackPanel> <StackPanel Grid. <TextBlock Text="Pitcher says" Margin="0,0,0,5"/> <ListView ItemsSource="{Binding PitcherSays" Height="125"/> <TextBlock Text="Fan says" Margin="0,0,0,5"/> <ListView ItemsSource="{Binding FanSays" Height="125"/> </StackPanel> </Grid> And here s the Pitcher class (it needs using System.Collections.ObjectModel; at the top): class Pitcher { public ObservableCollection<string> PitcherSays = new ObservableCollection<string>(); private int pitchnumber = 0; public Pitcher(Ball ball) { ball.ballinplay += ball_ballinplay; void ball_ballinplay(object sender, EventArgs e) { pitchnumber++; if (e is BallEventArgs) { BallEventArgs balleventargs = e as BallEventArgs; if ((balleventargs.distance < 95) && (balleventargs.trajectory < 60)) CatchBall(); else CoverFirstBase(); private void CatchBall() { PitcherSays.Add("Pitch #" + pitchnumber + ": I caught the ball"); private void CoverFirstBase() { PitcherSays.Add("Pitch #" + pitchnumber + ": I covered first base"); Ball 1: Trajectory: Distance: Ball 2: Trajectory: Distance: Make sure you also add the xmlns:local property to the <Window> tag. We gave you the pitcher s BallInPlay event handler. It looks for any low balls. Ball 3: Trajectory: Distance: Here are the values we used to get the output. Yours might be a little different you are here 4 715
XAML controls use routed events

Flip to page 722 in the main part of the book and have a closer look at the IntelliSense window that pops up when you type override into the IDE. Yes, it's for a Windows Store app, but the exact same principle applies to WPF. Two of the names of the event argument types are a little different from the others. The DoubleTapped event's second argument has the type DoubleTappedRoutedEventArgs, and the GotFocus event's is a RoutedEventArgs. The reason is that the DoubleTapped and GotFocus events are routed events. These are like normal events, except for one difference: when a control object responds to a routed event, first it fires off the event handler method as usual. Then it does something else: if the event hasn't been handled, it sends the routed event up to its container. The container fires the event, and then if it isn't handled, it sends the routed event up to its container. The event keeps bubbling up until it's either handled or it hits the root, the container at the very top.

Here's a typical routed event handler method signature:

    private void EventHandler(object sender, RoutedEventArgs e)

The RoutedEventArgs object has a property called Handled that the event handler can use to indicate that it's handled the event. Setting this property to true stops the event from bubbling up.

In both routed and standard events, the sender parameter always contains a reference to the object that called the event handler. So if an event is bubbled up from a control to a container like a Grid, then when the Grid calls its event handler, sender will be a reference to the Grid control. But what if you want to find out which control fired the original event? No problem. The RoutedEventArgs object has a property called OriginalSource that contains a reference to the control that initially fired the event. If OriginalSource and sender point to the same object, then the control that called the event handler is the same control that originated the event and started it bubbling up.

IsHitTestVisible determines if an element is visible to the pointer or mouse

Typically, any element on the page can be hit by the pointer or mouse as long as it meets certain criteria. It needs to be visible (which you can change with the Visibility property), it has to have a Background or Fill property that's not null (but can be Transparent), it must be enabled (with the IsEnabled property), and it has to have a height and width greater than zero. If all of these things are true, then the IsHitTestVisible property will return True, and that will cause it to respond to pointer or mouse events.

This property is especially useful if you want to make your controls invisible to the mouse. If you set IsHitTestVisible to False, then any pointer taps or mouse clicks will pass right through the control. If there's another control below it, that control will get the event instead.

You can see a list of input events that are routed events here:

The structure of controls that contain other controls that in turn contain yet more controls is called an object tree, and routed events bubble up the tree from child to parent until they hit the root element at the top.
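Here's a small stand-alone sketch of a container's handler using both OriginalSource and Handled. The window class, the handler name, and the "grayRectangle" name are placeholders invented for this sketch; they are not the names used in the explorer app on the next pages.

    using System.Windows;
    using System.Windows.Input;

    // A stand-alone sketch, not the routed-events explorer app that follows.
    class RoutedEventSketchWindow : Window
    {
        // Imagine this handler is attached to an outer Grid's MouseDown event in XAML.
        private void OuterGrid_MouseDown(object sender, MouseButtonEventArgs e)
        {
            // sender is whichever element is currently running the handler (here, the Grid)...
            var container = sender as FrameworkElement;

            // ...while OriginalSource is the element the user actually clicked.
            var clicked = e.OriginalSource as FrameworkElement;

            if (container != null && clicked != null && clicked.Name == "grayRectangle")
            {
                // Setting Handled to true stops the event from bubbling any further up the tree.
                e.Handled = true;
            }
        }
    }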
115 windows presentation foundation Create an app to explore routed events Here s a WPF application that you can use to experiment with routed events. It s got a StackPanel that contains a Border, which contains a Grid, and inside that grid are an Ellipse and a Rectangle. Have a look at the screenshot. See how the Rectangle is on top of the Ellipse? If you put two controls into the same cell, they ll stack on top of each other. But both of those controls have the same parent: the Grid, whose parent is the Border, and the Border s parent is the StackPanel. Routed events from the Rectangle or Ellipse bubble up through the parents to the root of the object tree. You ve already seen the CheckBox control, which you can use to toggle a value on and off. The Content property sets the label for the control. The IsChecked property is a Nullable<bool> because in addition to on and off, it can also have a third indeterminate state <Grid Margin="5"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition/> </Grid.ColumnDefinitions> <StackPanel x: <Border BorderThickness="10" BorderBrush="Blue" Width="155" x: Routed events bubble up the object tree. <Grid x: <Ellipse Fill="Red" Width="100" Height="100" MouseDown="Ellipse_MouseDown"/> <Rectangle Fill="Gray" Width="50" Height="50" MouseDown="Rectangle_MouseDown" x: </Grid> </Border> <ListBox BorderThickness="1" Width="250" Height="140" x: </StackPanel> <StackPanel Grid. <CheckBox Content="Border sets handled" x: <CheckBox Content="Grid sets handled" x: <CheckBox Content="Ellipse sets handled" x: <CheckBox Content="Rectangle sets handled" x: <Button Content="Update Rectangle IsHitTestVisible" Click="UpdateHitTestButton" Margin="0,20,20,0"/> <CheckBox IsChecked="True" Content="New IsHitTestVisible value" x: </StackPanel> </Grid> IsChecked defaults to False. This CheckBox has it set to True because controls always have IsHitTestVisible set to true by default. Flip the page to finish the app you are here 4 725
116 climbing the object tree You ll need this ObservableCollection to display output in the ListBox. Make a field called outputitems and set the ListBox.ItemsSource property in the page constructor. And don t forget to add the using System.Collections.ObjectModel; statement for ObservableCollection<T>. public partial class MainWindow : Window { ObservableCollection<string> outputitems = new ObservableCollection<string>(); public MainWindow() { this.initializecomponent(); output.itemssource = outputitems; Here s the code-behind. Each control s MouseDown event handler clears the output if it s the original source, and then it adds a string to the output. If its handled toggle switch is on, it uses e.handled to handle the event. private void Ellipse_MouseDown(object sender, MouseButtonEventArgs e) { if (sender == e.originalsource) outputitems.clear(); outputitems.add("the ellipse was pressed"); if (ellipsesetshandled.ischecked == true) e.handled = true; private void Rectangle_MouseDown(object sender, MouseButtonEventArgs e) { if (sender == e.originalsource) outputitems.clear(); outputitems.add("the rectangle was pressed"); if (rectanglesetshandled.ischecked == true) e.handled = true; private void Grid_MouseDown(object sender, MouseButtonEventArgs e) { if (sender == e.originalsource) outputitems.clear(); outputitems.add("the grid was pressed"); if (gridsetshandled.ischecked == true) e.handled = true; private void Border_MouseDown(object sender, MouseButtonEventArgs e) { if (sender == e.originalsource) outputitems.clear(); outputitems.add("the border was pressed"); if (bordersetshandled.ischecked == true) e.handled = true; private void StackPanel_MouseDown(object sender, MouseButtonEventArgs e) { if (sender == e.originalsource) outputitems.clear(); outputitems.add("the panel was pressed"); private void UpdateHitTestButton(object sender, RoutedEventArgs e) { grayrectangle.ishittestvisible = (bool)newhittestvisiblevalue.ischecked; 726 Appendix ii The Click event handler for the button uses the IsOn property of the toggle switch to turn IsHitTestVisible on or off for the Rectangle control.
117 windows presentation foundation Here s the object graph for your main window. The Mainwindow class is at the root of the object tree. When you create the new WPF application, the MainWindow.xaml and MainWindow.xaml.cs files create an object that extends the Window class. Window object StackPanel object Grid object This is the Grid that you added to the XAML, which holds the other controls. ToggleSwitch Button object StackPanel object Here s the StackPanel that contains the Border, Grid, Ellipse, and Rectangle. ToggleSwitch objects This Grid can receive routed MouseDown events, but it won t raise them. Its IsHitTestVisible property defaults to False because it doesn t have a Background or Fill property. If you update the XAML to add a Background property, its IsHitTestVisible property will default to true even if you set that property to Transparent. That will cause it to respond to pointer presses. Ellipse object Border object Grid object Rectangle object Flip the page to use your new app to explore routed events you are here 4 727
Run the app and click or tap the gray Rectangle. You should see the output in the screenshot to the right. You can see exactly what's going on by putting a breakpoint on the first line of Rectangle_MouseDown(), the Rectangle control's MouseDown event handler.

Click the gray rectangle again; this time the breakpoint should fire. Use Step Over (F10) to step through the code line by line. First you'll see the if block execute to clear the outputItems ObservableCollection that's bound to the ListBox. This happens because sender and e.OriginalSource reference the same Rectangle control, which is true only inside the event handler method for the control that originated the event (in this case, the control that you clicked or tapped), so sender == e.OriginalSource is true.

When you get to the end of the method, keep stepping through the program. The event will bubble up through the object tree, first running the Rectangle's event handler, then the Grid's event handler, then the Border's, then the Panel's, and finally it runs an event handler method that's part of LayoutAwarePage; this is outside of your code and not part of the routed event, so it will always run. Since none of those controls are the original source for the event, none of their senders will be the same as e.OriginalSource, so none of them clear the output.

Turn IsHitTestVisible off, press the Update button, and then click or tap the rectangle. You should see this output.

Wait a minute! You pressed the Rectangle, but the Ellipse control's MouseDown event handler fired. What's going on?

When you pressed the button, its Click event handler updated the Rectangle control's IsHitTestVisible property to false, which made it invisible to pointer presses, clicks, and other pointer events. So when you tapped the Rectangle, your tap passed right through it to the topmost control underneath it on the page that has IsHitTestVisible set to true and has a Background property that's set to a color or Transparent. In this case, it finds the Ellipse control and fires its MouseDown event.
Check the "Grid sets handled" box and click or tap the gray Rectangle. You should see this output.

So why did only two lines get added to the output ListBox? Step through the code again to see what's going on. This time, gridSetsHandled.IsChecked was true because you checked the "Grid sets handled" box, so the last line in the Grid's event handler set e.Handled to true. As soon as a routed event handler method does that, the event stops bubbling up. As soon as the Grid's event handler completes, the app sees that the event has been handled, so it doesn't call the Border or Panel's event handler method, and instead skips to the event handler method in LayoutAwarePage that's outside of the code you added.

Use the app to experiment with routed events. Here are a few things to try:

Click on the gray Rectangle and the red Ellipse and watch the output to see how the events bubble up.

Check each of the boxes, starting at the top, to cause the event handlers to set e.Handled to true. Watch the events stop bubbling when they're handled.

Set breakpoints and debug through all of the event handler methods.

Try setting a breakpoint in the Ellipse's event handler method, and then turn the gray Rectangle's IsHitTestVisible property on and off by toggling the bottom checkbox and pressing the button. Step through the code for the Rectangle when IsHitTestVisible is set to false.

Stop the program and add a Background property to the Grid to make it visible to pointer hits.

A routed event first fires the event handler for the control that originated the event, and then bubbles up through the control hierarchy until it hits the top or an event handler sets e.Handled to true.
Chapter 16

When you build your apps using the Model-View-ViewModel pattern, your code is easier to build today... and to manage tomorrow.

Great developers follow design patterns. In this chapter, you'll learn about Model-View-ViewModel (MVVM), a design pattern for building effective WPF apps. Along the way, you'll learn what a design pattern is, and you'll learn how to use XAML controls to create great animations. Here's how we recommend that you work through Chapter 16:

Read through page 749.

Follow our replacement pages for

Read pages

Start the Stopwatch project on page 762 in the book, and continue it using a combination of book pages and appendix replacement pages 765, 768, , and

Read page 788 in the book. The rest of Chapter 16 is replaced with pages in this appendix.

There's information on page 806 about how to do Lab #3.
122 apply the pattern Use the MVVM pattern to start building the basketball roster app Create a new WPF application and make sure it s called BasketballRoster (because we ll be using the namespace BasketballRoster in the code, and this will make sure your code matches what s on the next few pages). Do this 1 Create the Model, View, and ViewModel folders in the project. Right-click on the project in the Solution Explorer and choose New Folder from the Add menu: When you use the Solution Explorer to add a new folder to your project, the IDE creates a new namespace based on the folder name. This causes the Add Class... menu option to create classes with that namespace. So if you add a class to the Model folder, the IDE will add BasketballRoster.Model to the namespace line at the top of the class file. Add a Model folder. Then do it two more times to add the View and ViewModel folders, so your project looks like this: These folders will hold the classes, controls, and windows for your app. 750 Appendix ii
123 windows presentation foundation 2 Start building the model by adding the Player class. Right-click on the Model folder and add a class called Player. When you add a class into a folder, the IDE updates the namespace to add the folder name to the end. Here s the Player class: namespace BasketballRoster.Model { class Player { public string Name { get; private set; public int Number { get; private set; public bool Starter { get; private set; When you add a class file into a folder, the IDE adds the folder name to the namespace. Player Name: string Number: int Starter: bool MODEL 3 public Player(string name, int number, bool starter) { Name = name; Number = number; Starter = starter; Different classes concerned with different things? This sounds familiar... Finish the model by adding the Roster class Next, add the Roster class to the Model folder. Here s the code for it. namespace BasketballRoster.Model { class Roster { public string TeamName { get; private set; These classes are small because they re only concerned with keeping track of which players are in each roster. None of the classes in the Model are concerned with displaying the data, just managing it. Roster TeamName: string Players: IEnumerable<string> The _ tells you that this field is private. private readonly List<Player> _players = new List<Player>(); public IEnumerable<Player> Players { get { return new List<Player>(_players); public Roster(string teamname, IEnumerable<Player> players) { TeamName = teamname; _players.addrange(players); Your Model folder should now look like this: We added an underscore to the beginning of the name of the _players field. Adding an underscore to the beginning of private fields is a very common naming convention. We re going to use it throughout this chapter so you can get used to seeing it. We ll add the view on the next page you are here 4 751
124 take control of your controls 4 Add a new main window to the View folder. Right-click on the View folder and add a new Window called LeagueWindow.xaml. VIEW Your project s View folder should now have a XAML window in it called LeagueWindow.xaml. This is just like the MainWindow.xaml window that you ve been working with throughout the book. It s still a Window object with a graph that s defined with XAML. The only difference is that it s called LeagueWindow instead of MainWindow. 5 Delete the main window and replace it with your new window. Delete the MainWindow.xaml file from the project by right-clicking on it and choosing Delete. Now try building and running your project you ll get an exception when the program starts: Well, that makes sense, since you deleted MainWindow.xaml. When a WPF application starts up, it shows the window specified in the StartupUri property in the <Application> tag App.xaml: Open App.xaml and edit StartupUri so your program pops up the window you just added: <Application x: Once you make that change, rebuild and rerun your program. Now it should start and show your newly added window. 752 Appendix ii
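If you prefer to set the startup window in code rather than in App.xaml, here's a hedged alternative sketch: override OnStartup in App.xaml.cs and show LeagueWindow yourself (and remove the StartupUri attribute so you don't open two windows). It assumes LeagueWindow picked up the BasketballRoster.View namespace when you added it to the View folder; the book's approach of editing StartupUri works just as well.

    using System.Windows;

    namespace BasketballRoster
    {
        public partial class App : Application
        {
            // Alternative to StartupUri: create and show the main window in code.
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                var window = new View.LeagueWindow();
                window.Show();
            }
        }
    }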
125 windows presentation foundation User controls let you create your own controls Take a look at the basketball roster program that you re building. Each team gets an identical set of controls: a TextBlock, another TextBlock, a ListView, another TextBlock, and another ListView, all wrapped up by a StackPanel inside a Border. Do we really need to add two identical sets of controls to the page? What if we want to add a third and fourth team that s going to mean a whole lot of duplication. And that s where user controls come in. A user control is a class that you can use to create your own controls. You use XAML and code-behind to build a user control, just like you do when you build a page. Let s get started and add a user control to your BasketballRoster project Before you flip the page, see if you can figure out what XAML should go into the new RosterControl by looking at the Windows Store app screenshot on page 746. Add a new user control to your View folder. Right-click on the View folder and add a new item. Choose the dialog and call it RosterControl.xaml. Look at the code-behind for the new user control. Open up RosterControl.xaml.cs. Your new control extends the UserControl base class. Any code-behind that defines the user control s behavior goes here. Look at the XAML for the new user control. It will have a <StackPanel> to stack up the controls that live inside a blue <Border>. Can you figure out which property gives a Border control rounded corners? It has two ListView controls that display data for players, so it also needs a <UserControl.Resources> section that contains a DataTemplate. We called it PlayerItemTemplate. Bind the ListView items to properties called Starters and Bench, and the top TextBlock to a property called TeamName. The Border control lives inside a <Grid> with a single row that has Height="Auto" to keep it from expanding past the bottom of the ListView controls to fill up the entire page. from The IDE added a user control with an empty <Grid>. Your XAML will go here. UserControl is a base class that gives you a way to encapsulate controls that are related to each other, and lets you build logic that defines the behavior of the control. Teach a man to fish... We re nearing the end of the book, so we want to challenge you with problems that are similar to ones you ll face in the real world. A good programmer takes a lot of educated guesses, so we re giving you barely enough information about how a UserControl works. You don t even have binding set up, so you won t see data in the designer! How much of the XAML can you build before you flip the page to see the code for RosterControl? you are here 4 753
126 model view viewmodel 4 Finish the RosterControl XAML. Here s the code for the RosterControl user control that you added to the View folder. Did you notice how we gave you properties for binding, but no data context? That should make sense. The two controls on the page show different data, so the page will set different data contexts for each of them. <UserControl x: <UserControl.Resources> <DataTemplate x: <TextBlock> <Run Text="{Binding Name, Mode=OneWay"/> <Run Text=" #"/> <Run Text="{Binding Number, Mode=OneWay"/> </TextBlock> </DataTemplate> </UserControl.Resources> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> You already know that controls change size based on their Height and Width properties. You can change these numbers to alter how the control is displayed in the IDE s Designer window when you re modifying it. You can use the CornerRadius property to give a Border rounded corners. <Border BorderThickness="2" BorderBrush="Blue" CornerRadius="6" Background="Black"> <StackPanel Margin="20"> <TextBlock Foreground="White" FontFamily="Segoe" FontSize="20px" FontWeight="Bold" Text="{Binding TeamName" /> <TextBlock Foreground="White" FontFamily="Segoe" FontSize="16px" Text="Starting Players" Margin="0,5,0,0"/> <ListView Background="Black" Foreground="White" Margin="0,5,0,0" ItemTemplate="{StaticResource PlayerItemTemplate" ItemsSource="{Binding Starters" /> <TextBlock Foreground="White" FontFamily="Segoe" FontSize="16px" Text="Bench Players" Margin="0,5,0,0"/> <ListView Background="Black" Foreground="White" ItemsSource="{Binding Bench" ItemTemplate="{StaticResource PlayerItemTemplate" Margin="0,5,0,0"/> </StackPanel> </Border> </Grid> </UserControl> Both ListView controls use the same template defined as a static resource. We put the data template for the ListView items in its own static resource. Then, instead of having a <ListView. ItemTemplate> section we used the static resource using the ItemTemplate property in the ListView tag: ItemTemplate="{StaticResource PlayerItemTemplate" 754 Appendix ii
127 windows presentation foundation 1 2 Build the ViewModel for the BasketballRoster app by looking at the data in the Model and the bindings in the View, and figuring out what plumbing the app needs to connect them together. Add the Roster controls to LeagueWindow.xaml. First add these xmlns properties to the page so it recognizes the new namespaces: xmlns: </Window.Resources> Now you can add a StackPanel with two RosterControls to the page: VIEW MODEL <StackPanel Orientation="Horizontal" Margin="5" VerticalAlignment="Center" HorizontalAlignment="Center" DataContext="{StaticResource ResourceKey=LeagueViewModel" > <view:rostercontrol <view:rostercontrol </StackPanel> Create the ViewModel classes. Create these three classes in the ViewModel folder. Make sure you created the classes and pages in the right folders; otherwise, the namespaces won t match the code in the solution. PlayerViewModel Name: string Number: int RosterViewModel TeamName: string Starters: ObservableCollection <PlayerViewModel> Bench: ObservableCollection <PlayerViewModel> constructor: RosterViewModel(Model.Roster) private UpdateRosters() LeagueViewModel JimmysTeam: RosterViewModel BriansTeam: RosterViewModel private GetBomberPlayers(): Model.Roster private GetAmazinPlayers(): Model.Roster 3 See page 748 for a hint about the LINQ query... Make the ViewModel classes work. The PlayerViewModel class is a simple data object with two properties. The LeagueViewModel class has two private methods to create dummy data for the page. It creates Model.Roster objects for each team that get passed to the RosterViewModel constructor. The RosterViewModel class has a constructor that takes a Model.Roster object. It sets the TeamName property, and then it calls its private UpdateRosters() method, which uses LINQ queries to extract the starting and bench players and update the Starters and Bench properties. Add using Model; to the top of the classes so you can use objects in the Model namespace. If the IDE gives you an error message in the XAML designer that LeagueViewModel does not exist in the ViewModel namespace, but you re 100% certain you added it correctly, try right-clicking on the BasketballRoster project and choosing Unload Project, and then right-click again and choose Reload Project to reload it. But make sure you don t have any errors in any of the C# code files. you are here 4 755
128 exercise solution v LeagueViewModel exposes RosterViewModel objects that a RosterControl can use as its data context. It creates the Roster model object for the RosterViewModel to use. This private method generates dummy data for the Bombers by creating a new List of Player objects. You use classes from the View to store your data, which is why this method returns Player objects and not PlayerViewModel objects. The ViewModel for the BasketballRoster app has three classes: LeagueViewModel, PlayerViewModel, and RosterViewModel. They all live in the ViewModel folder. namespace BasketballRoster.ViewModel { using Model; using System.Collections.ObjectModel; class LeagueViewModel { public RosterViewModel BriansTeam { get; set; public RosterViewModel JimmysTeam { get; set; public LeagueViewModel() { Roster briansroster = new Roster("The Bombers", GetBomberPlayers()); BriansTeam = new RosterViewModel(briansRoster); Roster jimmysroster = new Roster("The Amazins", GetAmazinPlayers()); JimmysTeam = new RosterViewModel(jimmysRoster); private IEnumerable<Player> GetBomberPlayers() { List<Player> bomberplayers = new List<Player>() { new Player("Brian", 31, true), new Player("Lloyd", 23, true), new Player("Kathleen",6, true), new Player("Mike", 0, true), new Player("Joe", 42, true), new Player("Herb",32, false), new Player("Fingers",8, false), ; return bomberplayers; private IEnumerable<Player> GetAmazinPlayers() { List<Player> amazinplayers = new List<Player>() { new Player("Jimmy",42, true), new Player("Henry",11, true), new Player("Bob",4, true), new Player("Lucinda", 18, true), new Player("Kim", 16, true), new Player("Bertha", 23, false), new Player("Ed",21, false), ; return amazinplayers; namespace BasketballRoster.ViewModel { class PlayerViewModel { public string Name { get; set; public int Number { get; set; If you left out the using Model; line then you d have to use Model.Roster instead of Roster everywhere. Dummy data typically goes in the ViewModel because the state of an MVVM application is managed using instances of the Model classes that are encapsulated inside the ViewModel objects. Here s the PlayerViewModel. It s just a simple data object with properties for the data template to bind to. public PlayerViewModel(string name, int number) { Name = name; Number = number; 756 Appendix ii
129 windows presentation foundation namespace BasketballRoster.ViewModel { using Model; using System.Collections.ObjectModel; using System.ComponentModel; In a typical MVVM app, only classes in the ViewModel implement INotifyPropertyChanged because those are the only objects that XAML controls are bound to. class RosterViewModel { public ObservableCollection<PlayerViewModel> Starters { get; set; public ObservableCollection<PlayerViewModel> Bench { get; set; private Roster _roster; private string _teamname; public string TeamName { get { return _teamname; set { _teamname = value; public RosterViewModel(Roster roster) { _roster = roster; Starters = new ObservableCollection<PlayerViewModel>(); Bench = new ObservableCollection<PlayerViewModel>(); TeamName = _roster.teamname; UpdateRosters(); private void UpdateRosters() { var startingplayers = from player in _roster.players where player.starter select player; This is where the app stores its state in Roster objects encapsulated inside the ViewModel. The rest of the class translates the Model data into properties that the View can bind to. Whenever the TeamName property changes, the RosterViewModel fires off a PropertyChanged event so any object bound to it will get updated. foreach (Player player in startingplayers) Starters.Add(new PlayerViewModel(player.Name, player.number)); This LINQ query finds all the starting players and adds them to the Starters ObservableCollection property. var benchplayers = from player in _roster.players where player.starter == false select player; Here s a similar LINQ query to find the bench players. foreach (Player player in benchplayers) Bench.Add(new PlayerViewModel(player.Name, player.number)); In a typical MVVM app, only classes in the ViewModel implement INotifyPropertyChanged. That's because the ViewModel contains the only objects that XAML controls are bound to. In this project, however, we didn t need to implement INotifyPropertyChanged because the bound properties are updated in the constructor. If you wanted to modify the project to let Brian and Jimmy change their team names, you'd need to fire a PropertyChanged event in the TeamName set accessor. you are here 4 757
There is one change you'll need to make to get the ViewModel code on pages 766 and 767 in the book to work. On page 766 you're given three using statements, including this one:

    using Windows.UI.Xaml;

You'll need to replace it with this using statement:

    using System.Windows.Threading;

The Windows.UI.Xaml namespace is part of the .NET Framework for Windows Store, so you don't use it for WPF applications. But you need System.Windows.Threading because your ViewModel has a DispatcherTimer. Other than that change, the code is identical.

This is a good example of decoupled layers in the Model-View-ViewModel pattern: since you used identical C# code (except for that one using statement) for the ViewModel and Model, you could reuse those classes to port the stopwatch app to WPF.
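For reference, here's a minimal sketch of how a WPF DispatcherTimer is typically wired up inside a ViewModel. The class name, property, and interval are made up for illustration; the book's StopwatchViewModel on pages 766 and 767 is the real version to follow.

    using System;
    using System.ComponentModel;
    using System.Windows.Threading;   // WPF's DispatcherTimer lives here

    class TickerViewModel : INotifyPropertyChanged
    {
        private readonly DispatcherTimer _timer = new DispatcherTimer();
        private int _ticks;

        public int Ticks
        {
            get { return _ticks; }
        }

        public TickerViewModel()
        {
            _timer.Interval = TimeSpan.FromMilliseconds(100);   // arbitrary interval
            _timer.Tick += TimerTick;
            _timer.Start();
        }

        private void TimerTick(object sender, EventArgs e)
        {
            _ticks++;
            OnPropertyChanged("Ticks");   // tells any bound controls to refresh
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

Because the timer runs on the UI thread's dispatcher, the Tick handler can safely update properties that the View is bound to.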
131 windows presentation foundation Build the view for a simple stopwatch Here s the XAML for a simple stopwatch control. Add a WPF user control to the View folder called BasicStopwatch.xaml and add this code. The control has TextBlock controls to display the elapsed and lap times, and buttons to start, stop, reset, and take the lap time. VIEW <UserControl x: The ViewModel has read-only properties for Hours, Minutes, Seconds, etc. WPF requires one-way binding for read-only properties. <UserControl.Resources> <viewmodel:stopwatchviewmodel x: </UserControl.Resources> <Grid DataContext="{StaticResource ResourceKey=viewModel"> <StackPanel> <TextBlock> <Run>Elapsed time: </Run> </StackPanel> </Grid> </UserControl> <Run Text="{Binding Hours, Mode=OneWay"/> <Run>:</Run> <Run Text="{Binding Minutes, Mode=OneWay"/> <Run>:</Run> <Run Text="{Binding Seconds, Mode=OneWay"/> </TextBlock> <TextBlock> <Run>Lap time: </Run> <Run Text="{Binding LapHours, Mode=OneWay"/> <Run>:</Run> <Run Text="{Binding LapMinutes, Mode=OneWay"/> <Run>:</Run> <Run Text="{Binding LapSeconds, Mode=OneWay"/> </TextBlock> <StackPanel Orientation="Horizontal"> <Button Click="StartButton_Click" Margin="0,0,5,0">Start</Button> <Button Click="StopButton_Click" Margin="0,0,5,0">Stop</Button> <Button Click="ResetButton_Click" Margin="0,0,5,0">Reset</Button> <Button Click="LapButton_Click">Lap</Button> </StackPanel> Here s a hint: use a DispatcherTimer to constantly check the Model and update the properties. You ll need this xmlns property to add the namespace. We called our project Stopwatch, so the ViewModel namespace is Stopwatch.ViewModel. This user control stores an instance of the ViewModel as a static resource and uses it as its data context. It doesn t need its container to set a data context. It keeps track of its own state. This TextBlock is bound to properties in the ViewModel that return the elapsed time. This TextBlock is bound to properties that expose the lap time. You ll need to add Click event handlers to the control and a StopwatchViewModel class to the ViewModel namespace for this to compile. The ViewModel must be firing off PropertyChanged events to keep these values up to date. The code for the ViewModel is on pages 766 and 767 in the book. How much of the ViewModel code can you build just from the View and Model code before you flip the page? Add a BasicStopwatch control to the main window and see how far you can get. But be really careful and don t assume the IDE is necessarily wrong. Sometimes an error in the XAML for one page (like a broken xmlns property) can cause all the designers to break. you are here 4 765
132 tick tick tick Finish the stopwatch app There are just a few more loose ends to tie together. Your BasicStopwatch user control doesn t have event handlers, so you need to add them. And then you just need to add the control to your main window. 1 First, go back to BasicStopwatch.xaml.cs and add these event handlers to the code-behind: ViewModel.StopwatchViewModel viewmodel; public Basic(); The buttons in the view just call methods in the ViewModel. This is a pretty typical pattern for the View. 2 Here s all the XAML for MainWindow.xaml: <Window x: <Grid> <view:basicstopwatch </Grid> </Window> All the behavior is in the user control, so there s no code-behind for the main window. Your app should now run. Click the Start, Stop, Reset, and Lap buttons to see your stopwatch work. 768 Appendix ii
134 useful tools for viewmodels Converters automatically convert values for binding Anyone with a digital clock knows that it typically shows the minutes with a leading zero. Our stopwatch should also show the minutes with two digits. And it should show the seconds with two digits, and round to the nearest hundredth of a second. We could modify the ViewModel to expose string values that are formatted properly, but that would mean that we d need to keep adding more and more properties each time we wanted to reformat the same data. That s where value converters come in very handy. A value converter is an object that the XAML binding uses to modify data before it s passed to the control. You can build a value converter by implementing the IValueConverter interface (which is in the System.Windows.Data namespace). Add a value converter to your stopwatch now. 1 2 using System.Windows.Data; class TimeNumberFormatConverter : IValueConverter { public object Convert(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { This converter knows how to convert decimal and int values. For int values, you can optionally pass in a parameter. Add the TimeNumberFormatConverter class to the ViewModel folder. Add using System.Windows.Data; to the top of the class, and then have it implement the IValueConverter interface. Use the IDE to automatically implement the interface. This will add two method stubs for the Convert() and ConvertBack() methods. Implement the Convert() method in the value converter. The Convert() method takes several parameters we ll use two of them. The value parameter is the raw value that s passed into the binding, and parameter lets you specify a parameter in XAML. if (value is decimal) return ((decimal)value).tostring("00.00"); else if (value is int) { if (parameter == null) return ((int)value).tostring("d1"); else return ((int)value).tostring(parameter.tostring()); return value; The ConvertBack() method is used for two-way binding. We re not using that in this project, so you can leave the method stub as is. VIEW MODEL Converters are useful tools for building your ViewModel. 770 public object ConvertBack(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); Is it a good idea to leave this NotImplementedException in your code? For this project, this is code that is never supposed to be run. If it does get run, is it better to fail silently, so the user never sees it? Or is it better to throw an exception so that you can track down the problem? Which of those gives you a more robust app? There s not necessarily one right answer.
135 windows presentation foundation 3 4 Add the converter to your stopwatch control as a static resource. It should go right below the ViewModel object: <UserControl.Resources> <viewmodel:stopwatchviewmodel x: <viewmodel:timenumberformatconverter x: </UserControl.Resources> Update the XAML code to use the value converter. Modify the {Binding markup by adding the Converter= to it in each of the <Run> tags. <TextBlock> <Run>Elapsed time: </Run> <Run Text="{Binding Hours, Mode=OneWay, Converter={StaticResource timenumberformatconverter"/> <Run>:</Run> <Run Text="{Binding Minutes, Mode=OneWay, Converter={StaticResource timenumberformatconverter, ConverterParameter=d2"/> <Run>:</Run> <Run Text="{Binding Seconds, Mode=OneWay, </TextBlock> <TextBlock> Converter={StaticResource timenumberformatconverter"/> <Run>Lap time: </Run> <Run Text="{Binding LapHours, Mode=OneWay, Converter={StaticResource timenumberformatconverter"/> <Run>:</Run> <Run Text="{Binding LapMinutes, Mode=OneWay, Converter={StaticResource timenumberformatconverter, ConverterParameter=d2"/> <Run>:</Run> <Run Text="{Binding LapSeconds, Mode=OneWay, </TextBlock> If there s no parameter specified, don t forget the extra closing bracket. Converter={StaticResource timenumberformatconverter"/> VIEW The designer may make you rebuild the solution after you add this line. In rare cases, you might even need to unload and reload the project. Use the ConverterParameter syntax to pass a parameter into the converter. Now the stopwatch runs the values through the converter before passing them into the TextBlock controls, and the numbers are formatted correctly on the page. you are here 4 771
136 converting different types Converters can work with many different types TextBlock and TextBox controls work with text, so binding strings or numbers to the Text property makes sense. But there are many other properties, and you can bind to those as well. If your ViewModel has a Boolean property, it can be bound to any true/false property. You can even bind properties that use enums the IsVisible property uses the Visibility enum, which means you can also write value converters for it. Let s add Boolean and Visibility binding and conversion to the stopwatch. Here are two converters that will come in handy. Sometimes you want to bind Boolean properties like IsEnabled so that a control is enabled if the bound property is false. We ll add a new converter called BooleanNotConverter, which uses the! operator to invert a Boolean target property. IsEnabled="{Binding Running, Converter={StaticResource notconverter" You ll often want to have controls show or hide themselves based on a Boolean property in the data context. You can only bind the Visibility property of a control to a target property that s of the type Visibility (meaning it returns values like Visibility.Collapsed). We ll add a converter called BooleanVisibilityConverter that will let us bind a control s Visibility property to a Boolean target property to make it visible or invisible. Visibility="{Binding Running, Converter={StaticResource visibilityconverter" 1 Modify the ViewModel s Tick event handler. Modify the DispatcherTimer s Tick event handler to raise a PropertyChanged event if the value of the Running property has changed: int _lasthours; int _lastminutes; decimal _lastseconds; bool _lastrunning; void TimerTick(object sender, object e) { if (_lastrunning!= Running) { _lastrunning = Running; OnPropertyChanged("Running"); if (_lasthours!= Hours) { _lasthours = Hours; OnPropertyChanged("Hours"); if (_lastminutes!= Minutes) { _lastminutes = Minutes; OnPropertyChanged("Minutes"); if (_lastseconds!= Seconds) { _lastseconds = Seconds; OnPropertyChanged("Seconds"); We added the Running check to the timer. Would it make more sense to have the Model fire an event instead? VIEW MODEL 772 Appendix ii
137 2 3 4 Add a converter that inverts Boolean values. Here s a value converter that converts true to false and vice versa. You can use it with Boolean properties on your controls like IsEnabled. using System.Windows.Data; class BooleanNotConverter : IValueConverter { public object Convert(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { if ((value is bool) && ((bool)value) == false) return true; else return false; public object ConvertBack(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); using System.Windows; using System.Windows.Data; class BooleanVisibilityConverter : IValueConverter { public object Convert(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { if ((value is bool) && ((bool)value) == true) return Visibility.Visible; else return Visibility.Collapsed; public object ConvertBack(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); windows presentation foundation Add a converter that converts Booleans to Visibility enums. You ve already seen how you can make a control visible or invisible by setting its Visibility property to Visible or Collapsed. These values come from an enum in the System.Windows namespace called Visibility. Here s a converter that converts Boolean values to Visibility values: Modify your basic stopwatch control to use the converters. Modify BasicStopwatch.xaml to add instances of these converters as static resources: <viewmodel:booleanvisibilityconverter x: <viewmodel:booleannotconverter x: Now you can bind the controls IsEnabled and Visibility properties to the ViewModel s Running property: <StackPanel Orientation="Horizontal"> > <TextBlock Text="Stopwatch is running" Visibility="{Binding Running, Converter={StaticResource visibilityconverter"/> you are here This causes a TextBlock to become visible when the stopwatch is running. VIEW MODEL VIEW This enables the Start button only if the stopwatch is not running.
139 Build an analog stopwatch using the same ViewModel windows presentation foundation The MVVM pattern decouples the View from the ViewModel, and the ViewModel from the Model. This is really useful if you need to make changes to one of the layers. Because of that decoupling, you can be very confident that the changes you make will not cause the shotgun surgery effect and ripple into the other layers. So did we do a good job decoupling the stopwatch program s View from its ViewModel? There s one way to be sure: let s build an entirely new View without changing the existing classes in the ViewModel. The only change you ll need in the C# code is a new converter in the ViewModel that converts minutes and seconds into angles. 1 using System.Windows.Data; class AngleConverter : IValueConverter { public object Convert(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { double parsedvalue; if ((value!= null) && double.tryparse(value.tostring(), out parsedvalue) Add a converter to convert time to angles. Add the AngleConverter class to the ViewModel folder. You ll use it for the hands on the face. && (parameter!= null)) switch (parameter.tostring()) { case "Hours": return parsedvalue * 30; case "Minutes": case "Seconds": return parsedvalue * 6; return 0; An hour value ranges from 0 to 11, so to convert to an angle it s multiplied by 30. Minutes and seconds range from 0 to 60, so the angle conversion means multiplying by 6. public object ConvertBack(object value, Type targettype, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); Remember how you used the data classes you built for Jimmy s Comics in Chapter 14 and reused them to create a Split App without making any changes? This is the same idea. Do this! VIEW MODEL 2 Add the new UserControl. Add a new WPF user control called AnalogStopwatch to the View folder and add the ViewModel namespace to the <UserControl> tag. Also, change the design width and height: d: And add the ViewModel, two converters, and a style to the user control s static resources. <UserControl.Resources> <viewmodel:stopwatchviewmodel x: <viewmodel:booleannotconverter x: <viewmodel:angleconverter x: </UserControl.Resources> VIEW you are here 4 781
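(If you want to convince yourself that the multipliers are right, you can call the converter directly; this is just a hedged throwaway check, not part of the project.)

    using System.Globalization;

    AngleConverter converter = new AngleConverter();

    // 15 seconds * 6 degrees per second = 90.0 (a quarter turn of the second hand)
    object secondsAngle = converter.Convert(15, typeof(double), "Seconds", CultureInfo.InvariantCulture);

    // 3 hours * 30 degrees per hour = 90.0
    object hoursAngle = converter.Convert(3, typeof(double), "Hours", CultureInfo.InvariantCulture);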
140 transform your controls Setting the column width keeps it from expanding to fill whatever container it s in. Here s the minute hand. There are two yellow hands for the lap time. 3 <Grid x: <Grid.ColumnDefinitions> <ColumnDefinition Width="400"/> </Grid.ColumnDefinitions> <Ellipse Width="300" Height="300" Stroke="Black" StrokeThickness="2"> <Ellipse.Fill> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="#FFB03F3F"/> <GradientStop Color="#FFE4CECE" Offset="1"/> </LinearGradientBrush> </Ellipse.Fill> </Ellipse> <Rectangle RenderTransformOrigin="0.5,0.5" Width="2" Height="150" Fill="Black"> <Rectangle.RenderTransform> <TransformGroup> <TranslateTransform Y="-60"/> <RotateTransform Angle="{Binding Seconds, Converter={StaticResource ResourceKey=angleConverter, ConverterParameter=Seconds"/> </TransformGroup> </Rectangle.RenderTransform> </Rectangle> <Rectangle RenderTransformOrigin="0.5,0.5" Width="4" Height="100" Fill="Black"> <Rectangle.RenderTransform> <TransformGroup> <TranslateTransform Y="-50"/> <RotateTransform Angle="{Binding Minutes, Converter={StaticResource ResourceKey=angleConverter, ConverterParameter=Minutes"/> </TransformGroup> </Rectangle.RenderTransform> </Rectangle> <Rectangle RenderTransformOrigin="0.5,0.5" Width="1" Height="150" Fill="Yellow"> <Rectangle.RenderTransform> <TransformGroup> <TranslateTransform Y="-60"/> <RotateTransform Angle="{Binding LapSeconds, Converter={StaticResource ResourceKey=angleConverter, ConverterParameter=Seconds"/> </TransformGroup> </Rectangle.RenderTransform> </Rectangle> <Rectangle RenderTransformOrigin="0.5,0.5" Width="2" Height="100" Fill="Yellow"> <Rectangle.RenderTransform> <TransformGroup> <TranslateTransform Y="-50"/> <RotateTransform Angle="{Binding LapMinutes, Converter={StaticResource ResourceKey=angleConverter, ConverterParameter=Minutes"/> </TransformGroup> </Rectangle.RenderTransform> </Rectangle> <Ellipse Width="10" Height="10" Fill="Black"/> </Grid> 782 Appendix ii Add the face and hands to the Grid. Modify the <Grid> tag to add the stopwatch face, using four rectangles for hands. VIEW This is the face of the stopwatch. It has a black outline and a grayish gradient background. Here s the second hand. It s a long, thin rectangle with a translate and rotate transform. Every control can have one RenderTransform section. The TransformGroup tag lets you apply multiple transforms to the same control. This draws an extra circle in the middle to cover up where the hands overlap. Since it s at the bottom of the Grid, it s drawn last and ends up on top.
141 windows presentation foundation The stopwatch face is filled with a gradient brush, just like the background you used in Save the Humans. Each hand is transformed twice. It starts out centered in the face, so the first transform shifts it up so that it s in position to rotate. <TranslateTransform Y="-60"/> <RotateTransform Angle="{Binding Seconds, Converter={StaticResource ResourceKey=angleConverter, ConverterParameter=Seconds"/> The second transform rotates the hand to the correct angle. The Angle property of the rotation is bound to seconds or minutes in the ViewModel, and uses the angle converter to convert it to an angle. Every control can have one RenderTransform element that changes how it s displayed. This can include rotating, moving to an offset, skewing, scaling its size up or down, and more. You used transforms in Save the Humans to change the shape of the ellipses in the enemy to make it look like an alien. Your stopwatch will start ticking as soon as you add the second hand, because it creates an instance of the ViewModel as a static resource to render the control in the designer. The designer may stop it updating, but you can restart it by switching away from the designer window and back again. you are here 4 783
142 adding resources 4 Add the buttons to the stopwatch. Since the ViewModel is the same, the buttons should work the same. Add the same buttons to AnalogStopwatch.xaml that you used for the basic stopwatch: <StackPanel Orientation="Horizontal" VerticalAlignment="Bottom"> > Here s the code-behind for AnalogStopwatch.xaml.cs: ViewModel.StopwatchViewModel viewmodel; public Analog(); 784 Appendix ii
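(The code-behind listing above is truncated in this copy. Based on the FindResource pattern used on the next two pages, it presumably looks something like this sketch; the Click handlers for the buttons are the same ones you wrote for the basic stopwatch, so they are only hinted at here.)

    using System.Windows.Controls;

    public partial class AnalogStopwatch : UserControl
    {
        ViewModel.StopwatchViewModel viewModel;

        public AnalogStopwatch()
        {
            InitializeComponent();
            // Look up the ViewModel instance declared as a static resource in the XAML.
            viewModel = FindResource("viewModel") as ViewModel.StopwatchViewModel;
        }

        // ...the button Click handlers go here, unchanged from the basic stopwatch...
    }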
143 windows presentation foundation 5 Update the main window to show both stopwatches. Now you just need to modify your MainWindow.xaml to add an AnalogStopwatch control: <Window x: <Grid> <StackPanel> <view:basicstopwatch <view:analogstopwatch </StackPanel> </Grid> </Window> Run your app. Now you have two stopwatch controls on the page. Each stopwatch keeps its own time, because each one has its own separate instance of the ViewModel as a static resource. Try changing the ViewModel to make the _stopwatchmodel field static. What does this change about how the stopwatch app behaves? Can you figure out why that happens? you are here 4 785
144 in the end, it s all just code UI controls can be instantiated with C# code, too You already know that your XAML code instantiates classes in the Windows.UI namespace, and you even used the Watch window in the IDE back in Chapter 10 to explore them. But what if you want to create controls from inside your code? Well, controls are just objects, so you can create them and work with them just like you would with any other object. Go ahead and modify the code-behind to add markings to the face of your analog stopwatch. public sealed partial class AnalogStopwatch : UserControl { public AnalogStopwatch() { InitializeComponent(); This creates instances of the same Rectangle object that you created with the <Rectangle> tag. viewmodel = FindResource("viewModel") as ViewModel.StopwatchViewModel; AddMarkings(); private void AddMarkings() { for (int i = 0; i < 360; i += 3) { Modify the constructor to call a method that adds the markings. Rectangle rectangle = new Rectangle(); rectangle.width = (i % 30 == 0)? 3 : 1; rectangle.height = 15; rectangle.fill = new SolidColorBrush(Colors.Black); rectangle.rendertransformorigin = new Point(0.5, 0.5); TransformGroup transforms = new TransformGroup(); transforms.children.add(new TranslateTransform() { Y = -140 ); transforms.children.add(new RotateTransform() { Angle = i ); rectangle.rendertransform = transforms; basegrid.children.add(rectangle); //... the button event handlers stay the same This statement uses the % modulo operator to make the marks for the hours thicker than the ones for the minutes. i % 30 returns 0 only if i is divisible by 30. Flip back to the XAML for the hour and minute hands. This code sets up exactly the same transform, except instead of binding the Angle property it sets it to a value. Controls like Grid, StackPanel, and Canvas have a Children collection with references to all the other controls contained inside them. You can add controls to the grid with its Add() method and remove all controls by calling its Clear() method. You add transforms to a TransformGroup the same way. You used a Binding object to set up data binding in C# code back in Chapter 11. Can you figure out how to remove the XAML to create the Rectangle controls for the hour and minute hands and replace it with C# code to do the same thing? 786 Appendix ii
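(If you want to try the closing question, here is one hedged way to build the minute hand in code while keeping its Angle data binding. It assumes the root Grid is named baseGrid, as the AddMarkings() listing suggests, and that the angleConverter static resource from the XAML is still available; this is a sketch, not the book's own solution.)

    using System.Windows;
    using System.Windows.Data;
    using System.Windows.Media;
    using System.Windows.Shapes;

    private void AddMinuteHand()
    {
        Rectangle hand = new Rectangle();
        hand.Width = 4;
        hand.Height = 100;
        hand.Fill = new SolidColorBrush(Colors.Black);
        hand.RenderTransformOrigin = new Point(0.5, 0.5);

        // Bind the rotation angle to the ViewModel's Minutes property, just like the XAML did.
        RotateTransform rotate = new RotateTransform();
        Binding angleBinding = new Binding("Minutes");
        angleBinding.Converter = (IValueConverter)FindResource("angleConverter");
        angleBinding.ConverterParameter = "Minutes";
        BindingOperations.SetBinding(rotate, RotateTransform.AngleProperty, angleBinding);

        TransformGroup transforms = new TransformGroup();
        transforms.Children.Add(new TranslateTransform() { Y = -50 });
        transforms.Children.Add(rotate);
        hand.RenderTransform = transforms;

        baseGrid.Children.Add(hand);
    }

Call it from the constructor right after AddMarkings(), and delete the corresponding <Rectangle> from the XAML so the hand isn't drawn twice.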
145 windows presentation foundation Thanks for giving us everything we need for our game! Now we can compete for the prestigious Objectville Trophy. Now that you've added the markings to the stopwatch, the ref will make all the right calls. Which team will dominate the conference and win the Objectville Trophy? Nobody's sure. All we know is that Joe, Bob, and Ed will be betting on it! you are here 4 787
146 For the next few projects, you'll need to download the bee images from the Head First Labs website (). Make sure that you add the images to your project so they're in the top-level folder, just like you did with the Jimmy's Comics app. You'll also need to select each image file in the Solution Explorer and use the Properties window to set the Build Action to Content and Copy to Output Directory to Copy always. Here's what that looked like when you set it up for the Jimmy's Comics app: Make sure you do this for Bee animation 1.png, Bee animation 2.png, Bee animation 3.png, and Bee animation 4.png.
147 windows presentation foundation Create a user control to animate a picture Let s encapsulate all the frame-by-frame animation code. Add a WPF user control called AnimatedImage to your View folder. It has very little XAML all the intelligence is in the code-behind. Here s everything inside the <UserControl> tag in the XAML: <Grid> <Image x: </Grid> The work is done in the code-behind. Notice its overloaded constructor that calls the StartAnimation() method, which creates storyboard and key frame animation objects to animate the Source property of the Image control. using System.Windows.Media.Animation; using System.Windows.Media.Imaging; public partial class AnimatedImage : UserControl { public AnimatedImage() { InitializeComponent(); BitmapImage is in the Media.Imaging namespace. Storyboard and the other animation classes are in the Media.Animation namespace. public AnimatedImage(IEnumerable<string> imagenames, TimeSpan interval) : this() { Every control must have a parameterless constructor if StartAnimation(imageNames, interval); you want to create an instance of the control using XAML. You can still add overloaded constructors, but that s useful only if you re writing code to create the control. public void StartAnimation(IEnumerable<string> imagenames, TimeSpan interval) { Storyboard storyboard = new Storyboard(); ObjectAnimationUsingKeyFrames animation = new ObjectAnimationUsingKeyFrames(); Storyboard.SetTarget(animation, image); Storyboard.SetTargetProperty(animation, new PropertyPath(Image.SourceProperty)); TimeSpan currentinterval = TimeSpan.FromMilliseconds(0); foreach (string imagename in imagenames) { ObjectKeyFrame keyframe = new DiscreteObjectKeyFrame(); keyframe.value = CreateImageFromAssets(imageName); keyframe.keytime = currentinterval; animation.keyframes.add(keyframe); currentinterval = currentinterval.add(interval); storyboard.repeatbehavior = RepeatBehavior.Forever; storyboard.autoreverse = true; storyboard.children.add(animation); storyboard.begin(); private static BitmapImage CreateImageFromAssets(string imagefilename) { try { Uri uri = new Uri(imageFilename, UriKind.RelativeOrAbsolute); return new BitmapImage(uri); catch (System.IO.IOException) { return new BitmapImage(); This is the same method you used in Chapter 14. The static SetTarget() and SetTargetProperty() methods from the Storyboard class set the target object being animated ("image"), and the property that will change (Source) using the PropertyPath() class. Once the Storyboard object is set up and animations have been added to its Children collection, call its Begin() method to start the animation. you are here 4 789
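(Since the overloaded constructor is only reachable from code, here is a short hedged sketch of creating an AnimatedImage programmatically and dropping it onto a Canvas; skyCanvas is a hypothetical Canvas name.)

    using System;
    using System.Collections.Generic;
    using System.Windows.Controls;

    List<string> frames = new List<string>
    {
        "Bee animation 1.png",
        "Bee animation 2.png",
        "Bee animation 3.png",
        "Bee animation 4.png",
    };

    // The overloaded constructor calls StartAnimation() for you.
    AnimatedImage bee = new AnimatedImage(frames, TimeSpan.FromMilliseconds(50));
    Canvas.SetLeft(bee, 100);
    Canvas.SetTop(bee, 50);
    skyCanvas.Children.Add(bee);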
148 bees gotta fly Make your bees fly around a page Do this! Let s take your AnimatedImage control out for a test flight. 1 Replace the main window with a window in the View folder. Add a Window to your View folder called FlyingBees.xaml. Delete MainWindow.xaml from the project. Then modify the StartupUri property in the <Application> tag App.xaml: <Canvas Background="SkyBlue"> <view:animatedimage Canvas. <view:animatedimage Canvas. <view:animatedimage Canvas. The AnimatedImage control is invisible until its CreateFrameImages() method is called, so the controls in the Canvas will show up only as outlines. You can select them using the Document Outline. Try dragging the controls around the canvas to see the Canvas. Left and Canvas.Top properties change. 790 Appendix ii
149 windows presentation foundation 3 Add the code-behind for the page. You ll need this using statement for the namespace that contains Storyboard and DoubleAnimation: using System.Windows.Media.Animation; Now you can modify the constructor in FlyingBees.xaml.cs to start up the bee animation. Let s also create a DoubleAnimation to animate the Canvas.Left property. Compare the code for creating a storyboard and animation to the XAML code with <DoubleAnimation> earlier in the chapter. public FlyingBees() { this.initializecomponent(); List<string> imagenames = new List<string>(); imagenames.add("bee animation 1.png"); imagenames.add("bee animation 2.png"); imagenames.add("bee animation 3.png"); imagenames.add("bee animation 4.png"); The CreateFrameImages() method takes a sequence of asset names and a TimeSpan to set the rate that the frames are updated. Instead of using a <Storyboard> tag and a <DoubleAnimation> tag like earlier in the chapter, you can create the Storyboard and DoubleAnimation objects and set their properties in code. firstbee.startanimation(imagenames, TimeSpan.FromMilliseconds(50)); secondbee.startanimation(imagenames, TimeSpan.FromMilliseconds(10)); thirdbee.startanimation(imagenames, TimeSpan.FromMilliseconds(100)); Storyboard storyboard = new Storyboard(); DoubleAnimation animation = new DoubleAnimation(); Storyboard.SetTarget(animation, firstbee); Storyboard.SetTargetProperty(animation, new PropertyPath(Canvas.LeftProperty)); animation.from = 50; animation.to = 450; animation.duration = TimeSpan.FromSeconds(3); animation.repeatbehavior = RepeatBehavior.Forever; animation.autoreverse = true; storyboard.children.add(animation); storyboard.begin(); The Storyboard is garbagecollected after the animation completes. You can see this for yourself by using to watch it and clicking to refresh it after the animation ends. Run your program. Now you can see three bees flapping their wings. You gave them different intervals, so they flap at different rates because their timers are waiting for different timespans before changing frames. The top bee has its Canvas. Left property animated from 50 to 450 and back, which causes it to move around the page. Take a close look at the properties that are set on the DoubleAnimation object and compare them with the XAML properties you used earlier in the chapter. Something s not right about this project. Can you spot it? you are here 4 791
150 remember, mvvm is a pattern Something s not right: there s nothing in your Model or ViewModel folder, and you re creating dummy data in the View. That s not MVVM! If we wanted to add more bees, we d have to create more controls in the View and then initialize them individually. What if we want different sizes or kinds of bees? Or other things to be animated? If we had a Model that was optimized for data, it would be a lot easier. How can we make this project follow the MVVM pattern???? MODEL VIEW VIEW MODEL This is easy. Just add an ObservableCollection of controls, and bind the Children property of the Canvas to it. Why are you making such a big deal about it? That won't work. Data binding doesn t work with container controls Children property and for good reason. Data binding is built to work with attached properties, which are the properties that show up in the XAML code. The Canvas object does have a public Children property, but if you try to set it using XAML (Children="{Binding...") your code won t compile. However, you already know how to bind a collection of objects to a XAML control, because you did that with ListView and GridView controls using the ItemsSource property. We can take advantage of that data binding to add child controls to a Canvas. 792 Appendix ii
151 windows presentation foundation Use ItemsPanelTemplate to bind controls to a Canvas When you used the ItemsSource property to bind items to a ListView, GridView, or ListBox, it didn t matter which one you were binding to, because the ItemsSource property always worked the same way. If you were going to build three classes that had exactly the same behavior, you would put that behavior in a base class and have the three classes extend it, right? Well, the Microsoft team did exactly the same thing when they built the selector controls. The ListView, GridView, and ListBox all extend a class called Selector, which is a subclass of the ItemsControl class that displays a collection of items You can set up the panel however you want. We ll use a Canvas with a skyblue background. We re going to use its ItemsPanel property to set up a template for the panel that controls the layout of the items. Start by adding the ViewModel namespace to FlyingBees.xaml: xmlns: Edit FlyingBees.xaml.cs and delete all the additional code that you added to the FlyingBees() constructor in the FlyingBees control. Make sure that you don t delete the InitializeComponent() method! Here s the XAML for the ItemsControl. Open FlyingBees.xaml, delete the <Canvas> tag you added, and replace it with this ItemsControl: <ItemsControl DataContext="{StaticResource viewmodel" ItemsSource="{Binding Path=Sprites" > <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <Canvas Background="SkyBlue" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> </ItemsControl> When the ItemsControl is created, it creates a Panel to hold all of its items and uses the ItemsPanelTemplate as the control template. If you used a different project name, change AnimatedBee to the correct namespace. Use the static ViewModel resource as the data context, and bind the ItemsSource to a property called Sprites. Use the ItemsPanel property to set up an ItemsPanelTemplate. This contains a single Panel control, and both Grid and Canvas extend the Panel class. Any items bound to ItemsSource will be added to the Panel s Children. you are here 4 793
152 bee factory 4 Create a new class in the View folder called BeeHelper. Make sure it s a static class, because it ll have only static methods to help your ViewModel manage its bees. using System.Windows; using System.Windows.Controls; using System.Windows.Media.Animation; The factory method pattern MVVM is just one of many design patterns. One of the most common and most useful patterns is the factory method pattern, where you have a factory method that creates objects. The factory method is usually static, and the name often ends with Factory so it s obvious what s going on. This factory method creates bee controls. It makes sense to keep this in the View, because it s all UI-related code. static class Bee; When you take a small block of code that s reused a lot and put bee.height = height; it in its own (often static) method, it s sometimes called a helper return bee; method. Putting helper methods in a static class with a name that ends with Helper makes your code easier to read. public static void SetBeeLocation(AnimatedImage bee, double x, double y) { Canvas.SetLeft(bee, x); Canvas.SetTop(bee, y); public static void MakeBeeMove(AnimatedImage bee, double fromx, double tox, double y) { Canvas.SetTop(bee, y); Storyboard storyboard = new Storyboard(); DoubleAnimation animation = new DoubleAnimation(); Storyboard.SetTarget(animation, bee); Storyboard.SetTargetProperty(animation, new PropertyPath(Canvas.LeftProperty)); animation.from = fromx; animation.to = tox; animation.duration = TimeSpan.FromSeconds(3); animation.repeatbehavior = RepeatBehavior.Forever; animation.autoreverse = true; storyboard.children.add(animation); storyboard.begin(); This is the same code that was in the page s constructor. Now it s in a static helper method. 794 Appendix ii
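(The first few lines of the BeeHelper class were lost in this copy, including the BeeFactory() method's signature. Based on how the ViewModel calls it on the next page, a hedged reconstruction looks like this.)

    using System;
    using System.Collections.Generic;

    static class BeeHelper
    {
        // Factory method: builds an AnimatedImage control showing a flapping bee.
        public static AnimatedImage BeeFactory(double width, double height, TimeSpan interval)
        {
            List<string> imageNames = new List<string>
            {
                "Bee animation 1.png",
                "Bee animation 2.png",
                "Bee animation 3.png",
                "Bee animation 4.png",
            };
            AnimatedImage bee = new AnimatedImage(imageNames, interval);
            bee.Width = width;
            bee.Height = height;
            return bee;
        }

        // ...SetBeeLocation() and MakeBeeMove() as shown above...
    }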
153 This will come in handy in the last lab. windows presentation foundation All XAML controls inherit from the UIElement base class in the System.Windows namespace. We explicitly used the namespace (System.Windows.UIElement) in the body of the class instead of adding a using statement to limit the amount of UI-related code we added to the ViewModel. We used UIElement because it s the most abstract class that all the sprites extend. For some projects, a subclass like FrameworkElement may be more appropriate, because that s where many properties are defined, including Width, Height, Opacity, HorizontalAlignment, etc. 5 Here s the code for the empty BeeViewModel class that you added to the ViewModel folder. By moving the UI-specific code to the View, we can keep the code in the ViewModel simple and specific to managing bee-related logic. using View; using System.Collections.ObjectModel; using System.Collections.Specialized; When the AnimatedImage control is added to the _sprites ObservableCollection that s bound to the ItemsControl s ItemsSource property, the control is added to the item panel, which is created based on the ItemsPanelTemplate. class BeeViewModel { private readonly ObservableCollection<System.Windows.UIElement> _sprites = new ObservableCollection<System.Windows.UIElement>(); public INotifyCollectionChanged Sprites { get { return _sprites; A sprite is the term for any 2D image or animation that gets incorporated into a larger game or animation. 6 public BeeViewModel() { AnimatedImage firstbee = BeeHelper.BeeFactory(50, 50, TimeSpan.FromMilliseconds(50)); _sprites.add(firstbee); AnimatedImage secondbee = BeeHelper.BeeFactory(200, 200, TimeSpan.FromMilliseconds(10)); _sprites.add(secondbee); AnimatedImage thirdbee = BeeHelper.BeeFactory(300, 125, TimeSpan.FromMilliseconds(100)); _sprites.add(thirdbee); BeeHelper.MakeBeeMove(firstBee, 50, 450, 40); BeeHelper.SetBeeLocation(secondBee, 80, 260); BeeHelper.SetBeeLocation(thirdBee, 230, 100); Run your app. It should look exactly the same as before, but now the behavior is split across the layers, with UI-specific code in the View and code that deals with bees and moving in the ViewModel. The readonly keyword We re taking two steps to encapsulate the Sprites property. The backing field is marked readonly so it can t be overwritten later, and we expose it as an INotifyCollectionChanged property so other classes can only observe it but not modify it. You re changing properties and adding animations on the controls after they were added to the ObservableCollection. Why does that work? An important reason that we use encapsulation is to prevent one class from accidentally overwriting another class s data. But what s preventing a class from overwriting its own data? The readonly keyword can help with that. Any field that you mark readonly can be modified only in its declaration or in the constructor. you are here 4 795
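(Here is a tiny hedged illustration of what exposing the collection as INotifyCollectionChanged buys you: other classes can watch it, but they cannot add or remove sprites.)

    using System.Collections.Specialized;

    BeeViewModel viewModel = new BeeViewModel();

    // Observing changes is allowed...
    viewModel.Sprites.CollectionChanged += (sender, e) =>
    {
        // e.Action says whether items were added, removed, moved, or reset.
    };

    // ...but this would not compile, because INotifyCollectionChanged has no Add() method:
    // viewModel.Sprites.Add(new AnimatedImage());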
154 1 This is the last exercise in the book. Your job is to build a program that animates bees and stars. There s a lot of code to write, but you re up to the task...and once you have this working, you ll have all the tools you need to build a complete video game. (Can you guess what s in Lab #3?) Here s the app you ll create. Bees with flapping wings fly around a dark blue canvas, while behind them, stars fade in and out. You ll build a View that contains the bees, stars, and page to display them; a Model that keeps track of where they are and fires off events when bees move or stars change; and a ViewModel to connect the two together. The bees fly around the sky to random locations. If the canvas size changes, the bees fly to new positions on the canvas. Stars fade in and out. If the canvas play area size changes, the stars instantly move and bees slowly fly to their new locations. You can test this by running this program and dragging the window to resize it. The stars move quickly! 2 3 Create a new WPF Application project. Create a new project called StarryNight. Next, add the Model, View, and ViewModel folders. Once that s done, you ll need to add an empty class called BeeStarViewModel to the ViewModel folder. Create a new window in the View folder. Delete MainWindow.xaml. Then add a window in the View folder called BeesOnAStarryNight. xaml. Add the namespace to the top-level tag in the BeesOnAStarryNight.xaml (it should match your project s name, StarryNight): xmlns: </Window.Resources> The XAML for the page is exactly the same as FlyingBees.xaml in the last project, except the Canvas control s background is Blue and it has a SizeChanged event handler: <Canvas Background="Blue" SizeChanged="SizeChangedHandler" /> Then modify the <Application> tag in App.xaml so the application starts with the new window: StartupUri="View\BeesOnAStarryNight.xaml" The SizeChanged event is fired when a control changes size, with EventArgs properties for the new size. Visual Studio comes with a fantastic tool to help you experiment with shapes! Fire up Blend for Visual Studio 2013 and use the pen, pencil, and toolbox to 796 Appendix ii create XAML shapes that you can copy and paste into your C# projects.
155 The code in step 4 won t compile until you add the PlayAreaSize property to the ViewModel in step 9. You can use the IDE to generate a property stub for it for now. windows presentation foundation 4 Add code-behind for the page and the app. Add the SizeChanged event handler to BeesOnAStarryNight.xaml.cs in the View folder: ViewModel.BeeStarViewModel viewmodel; public BeesOnAStarryNight() { InitializeComponent(); viewmodel = FindResource("viewModel") as ViewModel.BeeStarViewModel; VIEW private void SizeChangedHandler(object sender, SizeChangedEventArgs e) { viewmodel.playareasize = new Size(e.NewSize.Width, e.newsize.height); 5 6 Add the AnimatedImage control to the View folder. Go back to the View folder and add the AnimatedImage control. This is exactly the same control from earlier in the chapter. Make sure you add the image files for the animation frames to the project and update each file s Build Action to Content and its Copy to Output Directory to Copy always. Add a user control called StarControl to the View folder. This control draws a star. It also has two storyboards, one to fade in and one to fade out. Add methods called FadeIn() and FadeOut() to the code-behind to trigger the storyboards. A Polygon control uses a set of points to draw a polygon. This UserControl uses it to draw a star. <UserControl // The usual XAML code that the IDE generates is fine, // no extra namespaces are needed for this User Control. > <UserControl.Resources> <Storyboard x: <DoubleAnimation From="0" To="1" Storyboard. </Storyboard> <Storyboard x: <DoubleAnimation From="1" To="0" Storyboard. </Storyboard> </UserControl.Resources> <Grid> <Polygon Points="0,75 75,0 100,100 0,25 150,25" Fill="Snow" Stroke="Black" x: </Grid> </UserControl> You ll need to add public FadeIn() and FadeOut() methods to the code-behind that starts these storyboards. That s how the stars will fade in and out. There are even more shapes beyond ellipses, rectangles, and polygons: This polygon draws the star. You can replace it with other shapes to experiment with how they work. you are here 4 797
156 oh my stars (continued) using System.Windows; using System.Windows.Controls; using System.Windows.Media.Animation; using System.Windows.Shapes; 7 Add the BeeStarHelper class to the View. Here s a useful helper class. It s got some familiar tools and a couple of new ones. Put it in the View folder. static class BeeStar; bee.height = height; Canvas has SetLeft() and GetLeft() methods to set and get the X return bee; position of a control. The SetTop() and GetTop() methods set and get the Y position. They work even after a control is added to the Canvas. public static void SetCanvasLocation(UIElement control, double x, double y) { Canvas.SetLeft(control, x); Canvas.SetTop(control, y); VIEW public static void MoveElementOnCanvas(UIElement uielement, double tox, double toy) { double fromx = Canvas.GetLeft(uiElement); double fromy = Canvas.GetTop(uiElement); Storyboard storyboard = new Storyboard(); DoubleAnimation animationx = CreateDoubleAnimation(uiElement, fromx, tox, new PropertyPath(Canvas.LeftProperty)); DoubleAnimation animationy = CreateDoubleAnimation(uiElement, fromy, toy, new PropertyPath(Canvas.TopProperty)); storyboard.children.add(animationx); storyboard.children.add(animationy); storyboard.begin(); public static DoubleAnimation CreateDoubleAnimation(UIElement uielement, double from, double to, PropertyPath propertytoanimate) { DoubleAnimation animation = new DoubleAnimation(); Storyboard.SetTarget(animation, uielement); Storyboard.SetTargetProperty(animation, propertytoanimate); animation.from = from; animation.to = to; animation.duration = TimeSpan.FromSeconds(3); return animation; public static void SendToBack(StarControl newstar) { Canvas.SetZIndex(newStar, -1000); We added a helper called CreateDoubleAnimation() that creates a three-second DoubleAnimation. This method uses it to move a UIElement from its current location to a new point by animating its Canvas.Left and Canvas.Top properties. Z Index means the order the controls are layered on a panel. A control with a higher Z index is drawn on top of one with a lower Z index. 798 Appendix ii
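(For example, once a control has been placed on the Canvas, a single helper call animates it to a new spot; someBee and someStar are placeholders for controls that are already in the panel.)

    // Position the bee first so Canvas.GetLeft()/GetTop() have real values to animate from.
    BeeStarHelper.SetCanvasLocation(someBee, 50, 40);
    BeeStarHelper.MoveElementOnCanvas(someBee, 300, 120);  // three-second slide to (300, 120)
    BeeStarHelper.SendToBack(someStar);                    // stars are drawn behind the bees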
157 windows presentation foundation 8 Add the Bee, Star, and EventArgs classes to the Model. Your model needs to keep track of the bees positions and sizes, and the stars positions, and it will fire off events so the ViewModel knows whenever there s a change to a bee or a star. using System.Windows; class Bee { public Point Location { get; set; public Size Size { get; set; public Rect Position { get { return new Rect(Location, Size); public double Width { get { return Position.Width; public double Height { get { return Position.Height; public Bee(Point location, Size size) { Location = location; Size = size; using System.Windows; class BeeMovedEventArgs : EventArgs { public Bee BeeThatMoved { get; private set; public double X { get; private set; public double Y { get; private set; public BeeMovedEventArgs(Bee beethatmoved, double x, double y) { BeeThatMoved = beethatmoved; X = x; Y = y; using System.Windows; class Star { public Point Location { get; set; MODEL public Star(Point location) { Location = location; Once you get your program working, try adding a Boolean Rotating property to the Star class and use it to make some of your stars slowly spin around. using System.Windows; class StarChangedEventArgs : EventArgs { public Star StarThatChanged { get; private set; public bool Removed { get; private set; The model will fire events that use these EventArgs to tell the ViewModel when changes happen. public StarChangedEventArgs(Star starthatchanged, bool removed) { StarThatChanged = starthatchanged; Removed = removed; The Points property on the Polygon control is a collection of Point structs. The Point, Size, and Rect structs The Rect struct has several overloaded constructors, and methods that let you extract its width, height, size, and location (either as a Point or individual X and Y double coordinates). The System.Windows namespace has several very useful structs. Point uses X and Y double properties to store a set of coordinates. Size has two double properties too, Width and Height, and also a special Empty value. Rect stores two coordinates for the top-left and bottom-right corner of a rectangle. It has a lot of useful methods to find its width, height, intersection with other Rects, and more. you are here 4 799
158 buzz buzz buzz using System.Windows; 800 Appendix ii (continued)(); private static bool RectsOverlap(Rect r1, Rect r2) { r1.intersect(r2); if (r1.width > 0 r1.height > 0) return true; return false; 9 Add the BeeStarModel class to the Model. We ve filled in the private fields and a couple of useful methods. Your job is to finish building the BeeStarModel class. You can use readonly to create a constant struct value. The ViewModel will use a timer to call this Update() method periodically. PlayAreaSize is a property. Size.Empty is a value of Size that s reserved for an empty size. You ll use it only to create bees and stars when the play area is resized. This method checks two Rect structs and returns true if they overlap each other using the Rect.Intersect() method. MODEL public Size PlayAreaSize { // Add a backing field, and have the set accessor call CreateBees() and CreateStars() private void CreateBees() { // If the play area is empty, return. If there are already bees, move each of them. // Otherwise, create between 5 and 15 randomly sized bees (40 to 150 pixels), add // it to the _bees collection, and fire the BeeMoved event. private void CreateStars() { // If the play area is empty, return. If there are already stars, // set each star's location to a new point and fire the StarChanged // event, otherwise call CreateAStar() between 5 and 10 times. private void CreateAStar() { // Find a new non-overlapping point, add a new Star object to the // _stars collection, and fire the StarChanged event. private Point FindNonOverlappingPoint(Size size) { If the method s tried 1,000 random locations and hasn t found one that doesn t overlap, the play area has probably run out of space, so just return any point. // Find the upper-left corner of a rectangle that doesn't overlap any bees or stars. // You'll need to try random Rects, then use LINQ queries to find any bees or stars // that overlap (the RectsOverlap() method will be useful). private void MoveOneBee(Bee bee = null) { // If there are no bees, return. If the bee parameter is null, choose a random bee, // otherwise use the bee argument. Then find a new non-overlapping point, update the bee's // location, update the _bees collection, and then fire the OnBeeMoved event. private void AddOrRemoveAStar() { // Flip a coin (_random.next(2) == 0) and either create a star using CreateAStar() or // remove a star and fire OnStarChanged. Always create a star if there are <= 5, remove // one if >= 20. _stars.keys.tolist()[_random.next(_stars.count)] will find a random star. // You'll need to add the BeeMoved and StarChanged events and methods to call them. // They use the BeeMovedEventArgs and StarChangedEventArgs classes. You can debug your app with the simulator to make sure it works with different screen sizes and orientations.
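(To see what the RectsOverlap() helper is doing, here is a tiny hedged example with the Rect struct. Rect.Intersect() replaces the Rect you call it on with the overlapping region, which is why the helper can safely take r1 by value.)

    using System.Windows;

    Rect a = new Rect(0, 0, 100, 100);
    Rect b = new Rect(50, 50, 100, 100);

    a.Intersect(b);
    // a is now the overlapping region: X=50, Y=50, Width=50, Height=50,
    // so RectsOverlap() returns true for these two rectangles.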
159 windows presentation foundation 10 Add the BeeStarViewModel class to the ViewModel. Fill in the commented methods. You ll need to look closely at how the Model works and what the View expects. The helper methods will also come in very handy. using View; using Model; using System.Collections.ObjectModel; using System.Collections.Specialized; using System.Windows; using DispatcherTimer = Windows.UI.Xaml.DispatcherTimer; using UIElement = Windows.UI.Xaml.UIElement; We wanted to make sure that DispatcherTimer and UIElement are the only classes from the Windows. UI.Xaml namespace that we used in the ViewModel. The using keyword lets you use = to declare a single member in another namespace. VIEW MODEL(); public Size PlayAreaSize { /* get and set accessors return and set _model.playareasize */ public BeeStarViewModel() { // Hook up the event handlers to the BeeStarModel's BeeMoved and StarChanged events, // and start the timer ticking every two seconds. void timer_tick(object sender, object e) { // Every time the timer ticks, find all StarControl references in the _fadedstars // collection and remove each of them from _sprites, then call the BeeViewModel's // Update() method to tell it to update itself. void BeeMovedHandler(object sender, BeeMovedEventArgs e) { // The _bees dictionary maps Bee objects in the Model to AnimatedImage controls // in the view. When a bee is moved, the BeeViewModel fires its BeeMoved event to // tell anyone listening which bee moved and its new location. If the _bees // dictionary doesn't already contain an AnimatedImage control for the bee, it needs // to create a new one, set its canvas location, and update both _bees and _sprites. // If the _bees dictionary already has it, then we just need to look up the corresponding // AnimatedImage control and move it on the canvas to its new location with an animation. void StarChangedHandler(object sender, StarChangedEventArgs e) { // The _stars dictionary works just like the _bees one, except that it maps Star objects // to their corresponding StarControl controls. The EventArgs contains references to // the Star object (which has a Location property) and a Boolean to tell you if the star // was removed. If it is then we want it to fade out, so remove it from _stars, add it // to _fadedstars, and call its FadeOut() method (it'll be removed from _sprites the next // time the Update() method is called, which is why we set the timer s tick interval to // be greater than the StarControl's fade out animation). // // If the star is not being removed, then check to see if _stars contains it - if so, get // the StarControl reference; if not, you'll need to create a new StarControl, fade it in, // add it to _sprites, and send it to back so the bees can fly in front of it. Then set // the canvas location for the StarControl. When you set the new Canvas location, the control is updated even if it s already on the Canvas. This is how the stars move themselves around when the play area is resized. you are here 4 801
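(Note: the two using aliases printed above point at the Windows Store namespaces. For this WPF version, the solution a few pages ahead swaps in the WPF types, like this.)

    // Alias a single type from another namespace with the using keyword:
    using DispatcherTimer = System.Windows.Threading.DispatcherTimer;
    using UIElement = System.Windows.UIElement;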
160 exercise solution using System.Windows; SOLUTION Here are the filled-in methods in the BeeStarModel class.(); We gave these to you. private static bool RectsOverlap(Rect r1, Rect r2) { r1.intersect(r2); if (r1.width > 0 r1.height > 0) return true; return false; private Size _playareasize; public Size PlayAreaSize { get { return _playareasize; set { _playareasize = value; CreateBees(); CreateStars(); private void CreateBees() { if (PlayAreaSize == Size.Empty) return; Whenever the PlayAreaSize property changes, the Model updates the _playareasize backing field and then calls CreateBees() and CreateStars(). This lets the ViewModel tell the Model to adjust itself whenever the size changes which will happen if you run the program on a tablet and change the orientation. If there are already bees, move each of them. MoveOneBee() will find a new nonoverlapping location for each bee and fire a BeeMoved event. if (_bees.count() > 0) { List<Bee> allbees = _bees.keys.tolist(); foreach (Bee bee in allbees) MoveOneBee(bee); else { int beecount = _random.next(5, 10); for (int i = 0; i < beecount; i++) { int s = _random.next(50, 100); Size beesize = new Size(s, s); Point newlocation = FindNonOverlappingPoint(beeSize); Bee newbee = new Bee(newLocation, beesize); _bees[newbee] = new Point(newLocation.X, newlocation.y); OnBeeMoved(newBee, newlocation.x, newlocation.y); If there aren t any bees in the model yet, this creates new Bee objects and sets their locations. Any time a bee is added or changes, we need to fire a BeeMoved event. 802 Appendix ii
161 windows presentation foundation private void CreateStars() { if (PlayAreaSize == Size.Empty) return; if (_stars.count > 0) { foreach (Star star in _stars.keys) { star.location = FindNonOverlappingPoint(StarSize); OnStarChanged(star, false); else { int starcount = _random.next(5, 10); for (int i = 0; i < starcount; i++) CreateAStar(); If there are already stars, we just set each existing star s location to a new point on the PlayArea and fire the StarChanged event. It s up to the ViewModel to handle that event and move the corresponding control. private void CreateAStar() { Point newlocation = FindNonOverlappingPoint(StarSize); Star newstar = new Star(newLocation); _stars[newstar] = new Point(newLocation.X, newlocation.y); OnStarChanged(newStar, false); private Point FindNonOverlappingPoint(Size size) { Rect newrect = new Rect(); bool nooverlap = false; int count = 0; while (!nooverlap) { newrect = new Rect(_random.Next((int)PlayAreaSize.Width - 150), _random.next((int)playareasize.height - 150), size.width, size.height); var overlappingbees = from bee in _bees.keys where RectsOverlap(bee.Position, newrect) select bee; var overlappingstars = from star in _stars.keys where RectsOverlap( new Rect(star.Location.X, star.location.y, StarSize.Width, StarSize.Height), newrect) select star; if ((overlappingbees.count() + overlappingstars.count() == 0) (count++ > 1000)) nooverlap = true; return new Point(newRect.X, newrect.y); private void MoveOneBee(Bee bee = null) { if (_bees.keys.count() == 0) return; if (bee == null) { int beecount = _stars.count; List<Bee> bees = _bees.keys.tolist(); bee = bees[_random.next(bees.count)]; bee.location = FindNonOverlappingPoint(bee.Size); _bees[bee] = bee.location; OnBeeMoved(bee, bee.location.x, bee.location.y); This creates a random Rect and then checks if it overlaps. We gave it a 250-pixel gap on the right and a 150-pixel gap on the bottom so the stars and bees don t leave the play area. These LINQ queries call RectsOverlap() to find any bees or stars that overlap the new Rect. If either return value has a count, the new Rect overlaps something. If this iterated 1,000 times, it means we re probably out of nonoverlapping spots in the play area and need to break out of an infinite loop. you are here 4 803
162 exercise solution SOLUTION private void AddOrRemoveAStar() { if (((_random.next(2) == 0) (_stars.count <= 5)) && (_stars.count < 20 )) CreateAStar(); else { Star startoremove = _stars.keys.tolist()[_random.next(_stars.count)]; The last few members of the BeeStarModel class. _stars.remove(startoremove); OnStarChanged(starToRemove, true); public event EventHandler<BeeMovedEventArgs> BeeMoved; private void OnBeeMoved(Bee beethatmoved, double x, double y) { EventHandler<BeeMovedEventArgs> beemoved = BeeMoved; if (beemoved!= null) { beemoved(this, new BeeMovedEventArgs(beeThatMoved, x, y)); public event EventHandler<StarChangedEventArgs> StarChanged; private void OnStarChanged(Star starthatchanged, bool removed) { EventHandler<StarChangedEventArgs> starchanged = StarChanged; if (starchanged!= null) { starchanged(this, new StarChangedEventArgs(starThatChanged, removed)); Here are the filled-in methods of the BeeStarViewModel class. using View; using Model; using System.Collections.ObjectModel; using System.Collections.Specialized; using System.Windows; using DispatcherTimer = System.Windows.Threading.DispatcherTimer; using UIElement = System.Windows.UIElement;(); Flip a coin by choosing either 0 or 1 at random, but always create a star if there are under 5 and remove if 20 or more. Every time the Update() method is called, we want to either add or remove a star. The CreateAStar() method already creates stars. If we re removing a star, we just remove it from _stars and fire a StarChanged event. These are typical event handlers and methods to fire them. We gave these to you. 804 Appendix ii
163 windows presentation foundation public Size PlayAreaSize { get { return _model.playareasize; set { _model.playareasize = value; public BeeStarViewModel() { _model.beemoved += BeeMovedHandler; _model.starchanged += StarChangedHandler; _timer.interval = TimeSpan.FromSeconds(2); _timer.tick += timer_tick; _timer.start(); void timer_tick(object sender, object e) { foreach (StarControl starcontrol in _fadedstars) _sprites.remove(starcontrol); _model.update(); void BeeMovedHandler(object sender, BeeMovedEventArgs e) { if (!_bees.containskey(e.beethatmoved)) { AnimatedImage beecontrol = BeeStarHelper.BeeFactory( e.beethatmoved.width, e.beethatmoved.height, TimeSpan.FromMilliseconds(20)); BeeStarHelper.SetCanvasLocation(beeControl, e.x, e.y); _bees[e.beethatmoved] = beecontrol; _sprites.add(beecontrol); else { AnimatedImage beecontrol = _bees[e.beethatmoved]; BeeStarHelper.MoveElementOnCanvas(beeControl, e.x, e.y); void StarChangedHandler(object sender, StarChangedEventArgs e) { if (e.removed) { StarControl starcontrol = _stars[e.starthatchanged]; _stars.remove(e.starthatchanged); _fadedstars.add(starcontrol); starcontrol.fadeout(); else { StarControl newstar; if (_stars.containskey(e.starthatchanged)) newstar = _stars[e.starthatchanged]; else { newstar = new StarControl(); _stars[e.starthatchanged] = newstar; newstar.fadein(); BeeStarHelper.SendToBack(newStar); _sprites.add(newstar); BeeStarHelper.SetCanvasLocation( newstar, e.starthatchanged.location.x, e.starthatchanged.location.y); If a star is being added, it needs to have its FadeIn() method called. If it s already there, it s just being moved because the play area size changed. Either way, we want to move it to its new location on the Canvas. The _fadedstars collection contains the controls that are currently fading and will be removed the next time the ViewModel s Update() method is called. you are here 4 805
164 806 Appendix ii using System.Windows.Media.Animation; Here are the methods for the StarControl code-behind: public partial class StarControl : UserControl { public StarControl() { InitializeComponent(); SOLUTION public void FadeIn() { Storyboard fadeinstoryboard = FindResource("fadeInStoryboard") as Storyboard; fadeinstoryboard.begin(); public void FadeOut() { Storyboard fadeoutstoryboard = FindResource("fadeOutStoryboard") as Storyboard; fadeoutstoryboard.begin(); If you ve done a good job with separation of concerns, your designs often tend to naturally end up being loosely coupled. You've got all the tools to do Lab #3 and build Invaders! We saved the best for last. In the last lab in the book, you ll build your own version of Space Invaders, the grandfather of video games. And while the lab is aimed at Windows Store apps, if you finished the Bees on a Starry Night project and you understood it all then you have the knowledge and know-how to build a WPF version of the Invaders game. Almost everything in the lab applies to WPF. The only thing that s different is how the user controls the ship. Windows Store apps have advanced gesture events that process touch and mouse input, but WPF windows don t support those events. You ll need to use the WPF Window object s KeyUp and KeyDown events. Luckily, you ve already got a good example. Flip back to the Key Game in Chapter 4 your Invaders game can handle keystrokes in exactly the same way. The ViewModel s PlayAreaSize property just passes through to the property on the Model but the Model s PlayAreaSize set accessor calls methods that fire BeeMoved and StarChanged events. So when the screen resolution changes: 1) the Canvas fires its SizeChanged event, which 2) updates the ViewModel s PlayAreaSize property, which 3) updates the Model s property, which 4) calls methods to update bees and stars, which 5) fire BeeMoved and StarChanged events, which 6) trigger the ViewModel s event handlers, which 7) update the Sprites collection, which 8) update the controls on the Canvas. This is an example of loose coupling, where there s no single, central object to coordinate things. This is a very stable way to build software because each object doesn t need to have explicit knowledge of how the other objects work. It just needs to know one small job: handle an event, fire an event, call a method, set a property, etc.
165 windows presentation foundation Congratulations! (But you re not done yet...) Did you finish that last exercise? Did you understand everything that was going on? If so, then congratulations you ve learned a whole lot of C#, and probably in less time than you d expected! The world of programming awaits you. Still, there are a few things that you should do before you move on to the last lab, if you really want to make sure all the information you put in your brain stays there. Take one last look through Save the Humans. If you did everything we asked you to do, you ve built Save the Humans twice, once at the beginning of the book and again before you started Chapter 10. Even the second time around, there were parts of it that seemed like magic. But when it comes to programming, there is no magic. So take one last pass through the code you built. You ll be surprised at how much you understand! There s almost nothing that seals a lesson into your brain like positive reinforcement. When it comes to programming, there is no magic. Every program works because it was built to work, and all code can be understood....but it s a lot easier to understand code if the programmer used good design patterns and object-oriented programming principles. Talk about it with your friends. Humans are social animals, and when you talk through things you ve learned with your social circle you do a better job of retaining them. And these days, talking means social networking, too! Plus, you ve really accomplished something here. Go ahead and claim your bragging rights! Take a break. Even better, take a nap. Your brain has absorbed a lot of information, and sometimes the best thing you can do to lock in all that new knowledge is to sleep on it. There s a lot of neuroscience research that shows that information absorption is significantly improved after a good night s sleep. So give your brain a well-deserved rest! The humans forgot about us! Time to attack while they ve lowered their guard! you are here 4 807
Item Editor Reference Guide
Item Editor Reference Guide This reference guide is intended for all iwebfolio users. The item editor is the editing tool that appears when a user is editing an item within a portfolio or template. ThisMore information
SiteBuilder 2.1 Manual
SiteBuilder 2.1 Manual Copyright 2004 Yahoo! Inc. All rights reserved. Yahoo! SiteBuilder About This Guide With Yahoo! SiteBuilder, you can build a great web site without even knowing HTML. If you canMore informationMore information SlideMoreMore information
MS Word 2007 practical notes
MS Word 2007 practical notes Contents Opening Microsoft Word 2007 in the practical room... 4 Screen Layout... 4 The Microsoft Office Button... 4 The Ribbon... 5 Quick Access Toolbar... 5 Moving in theMore
Quick Reference Guide
Simplified Web Interface for Teachers Quick Reference Guide Online Development Center Site Profile 5 These fields will be pre-populated with your information { 1 2 3 4 Key 1) Website Title: Enter the nameMore...More information
Florence School District #1
Florence School District #1 Training Module 2 Designing Lessons Designing Interactive SMART Board Lessons- Revised June 2009 1 Designing Interactive SMART Board Lessons Lesson activities need to be designedMoreMore information
Google Docs Basics Website:
Website: Google Docs is a free web-based office suite that allows you to store documents online so you can access them from any computer with an internet connection. With GoogleMore information
Microsoft Word 2010 Basics
P a g e 1 Microsoft Word 2010 Basics ABOUT THIS CLASS This class is designed to give a basic introduction into Microsoft Word 2010. Specifically, we will progress from learning how to open Microsoft WordMore information
CREATE A WEB PAGE WITH LINKS TO DOCUMENTS USING MICROSOFT WORD 2007
CREATE A WEB PAGE WITH LINKS TO DOCUMENTS USING MICROSOFT WORD 2007 For Denise Harrison s College Writing Course students Table of Contents Before you Start: Create documents, Create a Folder, Save documentsMore information
Excel 2003 Tutorial I
This tutorial was adapted from a tutorial by see its complete version at Excel 2003 Tutorial I Spreadsheet Basics Screen Layout Title bar Menu barMore information
Graphic Design Basics Tutorial
Graphic Design Basics Tutorial This tutorial will guide you through the basic tasks of designing graphics with Macromedia Fireworks MX 2004. You ll get hands-on experience using the industry s leadingMore,More information
Analyzing PDFs with Citavi 5
Analyzing PDFs with Citavi 5 Introduction Just Like on Paper... 2 Methods in Detail Highlight Only (Yellow)... 3 Highlighting with a Main Idea (Red)... 4 Adding Direct Quotations (Blue)... 5 Adding IndirectMore information menuMore information
Ansur Test Executive. Users Manual
Ansur Test Executive Users Manual April 2008 2008 Fluke Corporation, All rights reserved. All product names are trademarks of their respective companies Table of Contents 1 Introducing Ansur... 4 1.1 AboutMore information
HIT THE GROUND RUNNING MS WORD INTRODUCTION
HIT THE GROUND RUNNING MS WORD INTRODUCTION MS Word is a word processing program. MS Word has many features and with it, a person can create reports, letters, faxes, memos, web pages, newsletters, andMore information
Clip Art in Office 2000
Clip Art in Office 2000 In the process of making a certificate, we will cover: Adding clipart and templates from the Microsoft Office Clip Gallery, Modifying clip art by grouping and ungrouping, FlippingMore information
Windows 7 for Educators
Step-by-step Windows 7 for Educators Step-by-step Introducing Windows 7 You strive to make new and exciting things possible for your students and for yourself. Meanwhile, you re limited on time and wantMore information theMore information
Custom Reporting System User Guide
Citibank Custom Reporting System User Guide April 2012 Version 8.1.1 Transaction Services Citibank Custom Reporting System User Guide Table of Contents Table of Contents User Guide Overview...2 SubscribeMore information
CONTENTM WEBSITE MANAGEMENT SYSTEM. Getting Started Guide
CONTENTM WEBSITE MANAGEMENT SYSTEM Getting Started Guide Table of Contents CONTENTM WEBSITE MANAGEMENT SYSTEM... 1 GETTING TO KNOW YOUR SITE...5 PAGE STRUCTURE...5 Templates...5 Menus...5 Content Areas...5More information
Hello. What s inside? Ready to build a website?
Beginner s guide Hello Ready to build a website? Our easy-to-use software allows to create and customise the style and layout of your site without you having to understand any coding or HTML. In this guideMoreMore information,More information
TUTORIAL 4 Building a Navigation Bar with Fireworks
TUTORIAL 4 Building a Navigation Bar with Fireworks This tutorial shows you how to build a Macromedia Fireworks MX 2004 navigation bar that you can use on multiple pages of your website. A navigation barMore information
Expression Web 3 Lab Exercises
Expression Web 3 Lab Exercises Expression Web 3 Quick Start Tutorial Beaches Around the World By Aseem Badshah Edited by Dave Burkhart (Part 2: Beaches Around the World series) Information in this document,More information
irise Visualization Workbook
irise Visualization Workbook Updated for irise Studio Version 7.2.1 April 5, 2009 Prepared by Bill Smith, irise Inside Sales 2009 irise, Inc. Please send corrections and suggestions to [email protected],More information
Creating Your Own Shapes
1 Creating Your Own Shapes in Visio You can create your own custom shapes in Visio if you need a shape that is not in one of the standard templates. This example shows how to create the recycling symbolMore information
Basic tutorial for Dreamweaver CS5
Basic tutorial for Dreamweaver CS5 Creating a New Website: When you first open up Dreamweaver, a welcome screen introduces the user to some basic options to start creating websites. If you re going toMore information
Introduction to MS WINDOWS XP
Introduction to MS WINDOWS XP Mouse Desktop Windows Applications File handling Introduction to MS Windows XP 2 Table of Contents What is Windows XP?... 3 Windows within Windows... 3 The Desktop... 3 TheMore information
Getting Started with Excel 2008. Table of Contents
Table of Contents Elements of An Excel Document... 2 Resizing and Hiding Columns and Rows... 3 Using Panes to Create Spreadsheet Headers... 3 Using the AutoFill Command... 4 Using AutoFill for Sequences...More information
Expression Web Lab Exercises
Expression Web Lab Exercises Expression Web Quick Start Tutorial Heavy Metal Show Car By Aseem Badshah Heavy Metal Show Car Tutorial Page 1 Information in this document, including URL and other InternetMore information
Foxit Reader Quick Guide
I Contents Foxit Reader Contents... II Chapter 1 Get Started... 1 Foxit Reader Overview... 1 System Requirements... 1 Install Foxit Reader... 2 Uninstall Foxit Reader... 2 Update Foxit Reader... 2 Workspace...More information
MailChimp Instruction Manual
MailChimp Instruction Manual Spike HQ This manual contains instructions on how to set up a new email campaign, add and remove contacts and view statistics on completed email campaigns from within MailChimp.More information
Triggers & Actions 10
Triggers & Actions 10 CHAPTER Introduction Triggers and actions are the building blocks that you can use to create interactivity and custom features. Once you understand how these building blocks work,More information
Expression Web Lab Exercises
Expression Web Lab Exercises Expression Web Quick Start Tutorial Beaches Around the World By Aseem Badshah (Part 2: Beaches Around the World series) Information in this document, including URL and otherMore information
Introduction to Microsoft Publisher : Tools You May Need
Introduction to Microsoft Publisher : Tools You May Need 1. Why use Publisher instead of Word for creating fact sheets, brochures, posters, newsletters, etc.? While both Word and Publisher can create documentsMoreMore information
MICROSOFT POWERPOINT 2011 SHOW YOUR PRESENTATION
MICROSOFT POWERPOINT 2011 SHOW YOUR PRESENTATION Lasted Edited: 2012-07-10 1 Use Speaker Notes... 4 Add speaker notes... 4 Change or format a note on a slide... 5 Print slides including speaker notes...More information
Guide To Creating Academic Posters Using Microsoft PowerPoint 2010
Guide To Creating Academic Posters Using Microsoft PowerPoint 2010 INFORMATION SERVICES Version 3.0 July 2011 Table of Contents Section 1 - Introduction... 1 Section 2 - Initial Preparation... 2 2.1 OverallMore information
Microsoft Word 2007 Lesson Plan
Table of Contents Introduction...3 Exploring the Word 2007 Environment...3 Creating, Saving and Closing a Document...5 Creating a document...5 Saving the new document...5 Saving the new document in a differentMore information
Microsoft Word 2010 Tutorial
1 Microsoft Word 2010 Tutorial Microsoft Word 2010 is a word-processing program, designed to help you create professional-quality documents. With the finest documentformatting tools, Word helps you organizeMore information
SharePoint Basic Editing. Text. Creating List
Text Putting Text on the Page 1. Entering text on the web page is just like typing in a word processing document. Lines will wrap within a paragraph. 2. Enter = Paragraph Break (leaves a blank line) 3.More information | http://docplayer.net/175736-Wpf-learner-s-guide-to-head-first-c.html | CC-MAIN-2017-51 | refinedweb | 50,022 | 64.2 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
4
results of 4
Revision: 316
Author: mhagger
Date: 2010-12-12 07:40:33 +0000 (Sun, 12 Dec 2010)
Log Message:
-----------
Fix handling of float64 data.
The old code attempted to handle Python data as float32 (to save
memory). But contrary to the comment, it was coercing float64 numpy
arrays to float32. (Perhaps this is a difference between numpy and
Numeric?) That was never the intention.
The new version avoids this coersion, with the side-effect that Python
lists are converted to float64 numpy arrays.
Patch by: xaverxn (see tracker patch with ID 2671685)
> Gnuplot.py float64 handling is broken (gets downcast to float32, see
> gnuplot-py-users mailing list in the beginning of march 09). It is
> essentially my last proposal on the list, which wasn't discussed or
> commented on any further.
>
> This patch changes the file utils.py so it coverts python native types
> to numpy arrays and (up)casts everything but float64 to float32.
Modified Paths:
--------------
trunk/utils.py
Modified: trunk/utils.py
===================================================================
--- trunk/utils.py 2010-12-12 07:01:00 UTC (rev 315)
+++ trunk/utils.py 2010-12-12 07:40:33 UTC (rev 316)
@@ -18,29 +18,18 @@
def float_array(m):
"""Return the argument as a numpy array of type at least 'Float32'.
- Leave 'Float64' unchanged, but upcast all other types to
- 'Float32'. Allow also for the possibility that the argument is a
- python native type that can be converted to a numpy array using
- 'numpy.asarray()', but in that case don't worry about
- downcasting to single-precision float.
-
+ Convert input data to numpy array first (in case it is a python
+ native type), then return it if it is of dtype 'float64' or
+ 'float32'. Try to upcast everything else to float32.
"""
- try:
- # Try Float32 (this will refuse to downcast)
- return numpy.asarray(m, numpy.float32)
- except TypeError:
- # That failure might have been because the input array was
- # of a wider data type than float32; try to convert to the
- # largest floating-point type available:
- # NOTE TBD: I'm not sure float_ is the best data-type for this...
- try:
- return numpy.asarray(m, numpy.float_)
- except TypeError:
- # TBD: Need better handling of this error!
- print "Fatal: array dimensions not equal!"
- return None
+ m = numpy.asarray(m)
+ if m.dtype in ('float64','float32'):
+ return m
+ else:
+ return numpy.array(m,dtype=numpy.float32)
+
def write_array(f, set,
item_sep=' ',
nest_prefix='', nest_suffix='\n', nest_sep=''):
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
Revision: 315
Author: mhagger
Date: 2010-12-12 07:01:00 +0000 (Sun, 12 Dec 2010)
Log Message:
-----------
This apparently helps Gnuplot.py work a little on IronPython2.0.
Patch by: Shigeaki Matsumura
Additional comments (from tracker feature request 2806438):
> It seems like working, but __del__ methods are not properly called
> because of the GC mechanism on CLI. Anyway, I hope to use it on
> IronPython.
Modified Paths:
--------------
trunk/gp.py
Modified: trunk/gp.py
===================================================================
--- trunk/gp.py 2010-12-12 06:36:03 UTC (rev 314)
+++ trunk/gp.py 2010-12-12 07:01:00 UTC (rev 315)
@@ -27,7 +27,7 @@
# platform:
if sys.platform == 'mac':
from gp_mac import GnuplotOpts, GnuplotProcess, test_persist
-elif sys.platform == 'win32':
+elif sys.platform == 'win32' or sys.platform == 'cli':
from gp_win32 import GnuplotOpts, GnuplotProcess, test_persist
elif sys.platform == 'darwin':
from gp_macosx import GnuplotOpts, GnuplotProcess, test_persist
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
Revision: 314
Author: mhagger
Date: 2010-12-12 06:36:03 +0000 (Sun, 12 Dec 2010)
Log Message:
-----------
Link back to project summary page instead of main SourceForge homepage.
This is the current SourceForge preference.
Modified Paths:
--------------
trunk/Gnuplot.html
Modified: trunk/Gnuplot.html
===================================================================
--- trunk/Gnuplot.html 2010-12-11 14:59:13 UTC (rev 313)
+++ trunk/Gnuplot.html 2010-12-12 06:36:03 UTC (rev 314)
@@ -8,7 +8,7 @@
<body>
-<h1>Gnuplot.py on <a href=""><img
+<h1>Gnuplot.py on <a href=""><img
src="";</a></h1>
<!-- width="88" height="31" border="0" -->
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
Revision: 313
Author: mhagger
Date: 2010-12-11 14:59:13 +0000 (Sat, 11 Dec 2010)
Log Message:
-----------
Add reference to Belorussion translation.
Translation written by Paul Bukhovko <bukhovko@...>.
Modified Paths:
--------------
trunk/Gnuplot.html
Modified: trunk/Gnuplot.html
===================================================================
--- trunk/Gnuplot.html 2010-05-03 11:08:37 UTC (rev 312)
+++ trunk/Gnuplot.html 2010-12-11 14:59:13 UTC (rev 313)
@@ -34,6 +34,9 @@
<li> The <a
href="">Gnuplot.py
users' mailing list</a></li>
+ <li> <a href="">Belorussian
+ translation</a> of this page</li>
+
</ul>
</td>
</tr>
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. | http://sourceforge.net/p/gnuplot-py/mailman/gnuplot-py-svn/?viewmonth=201012 | CC-MAIN-2015-22 | refinedweb | 829 | 69.18 |
.
Character Animation
To animate Bob, we will use the simplest technique called sprite animation. The animation is nothing more than a sequence of images shown at a set interval to create
the illusion of movement. The following sequence of images is used to create the running animation.
. I have used Gimp to create the running character by playing Star Guard a lot and analysing its running sequence. To create animations is quite simple. We want to display each frame for a certain amount of time and then switch to the next image. When we reached the end of the sequence we start again. This is called looping. We need to determine the frame duration which the amount of time the frame will be displayed for. Let’s say we are rendering the game at 60 FPS, meaning we render a frame every 1 / 60 = 0.016 s. We have just 5 frames to animate a full step. Considering a typical athlete’s cadence of 180, we can work out how much time we show each frame to make the running realistic.
The math of running
A cadence of 180 means 180 steps per minute. To calculate the number of steps per second we have 180 / 60 = 3. So every second we have need to take 3 steps. Our 5 frames make up one full step so we need to show 3 * 5 = 15 frames every second to simulate a professional athlete’s running. This is not sprinting by the way. Our frame duration will be 1 / 15 = 0.066 second. That is 66 ms.
Optimising the images
Before we add it to the game, we will optimise the images. Currently the project has the images as separate png files under the
assets/images directory in the
star-assault-android project. We are currently using
block.png and
bob_01.png. There are a few more images, namely
bob_02 - 06.png. These images make up the animation sequence. Because libgdx is using OpenGL under the hood, it’s not optimal to give the framework lots of images as textures to work with. What we will do, is to create a so called
Texture Atlas. A texture atlas is just an image which is big enough to fit all the images on it and it has a descriptor holding each individual image’s name, position in the atlas and size. The individual images are called regions in the atlas. It there are many images, the atlas can have multiple pages. Each page gets loaded into the memory as a singe image and the regions are used as individual images. Don’t need to know all this but this makes the application more optimal, will load more quickly and will run more smoothly. Libgdx has a utility called TexturePacker2 to create these atlases. It can be run from the command line or used programatically. To run it from Java use the following program:
package net.obviam.starassault.utils; import com.badlogic.gdx.tools.imagepacker.TexturePacker2; public class TextureSetup { public static void main(String[] args) { TexturePacker2.process('/path-to-star-guard-assets-images/', 'path-to-star-guard-assets-images', 'textures.pack'); } }
Make sure that you add
gdx-tools.jar to your
libs directory. Change the attributes of the
process method to point to the directory where the assets are located.
Note:
Also rename the files that contain the underscore “_” character because the TexturePacker2 uses it as a delimiter and we currently don’t need that. Replace the underscore character with the hyphen “-” character. Processing the images in our directory should produce 2 files:
textures.png and
textures.pack. The texture atlas should look similar to the following image
The directory structure I am using is the following:
Now that we have worked out what our animation will be, the frame duration and optimised the assets, let’s add it to the game. We will modify
Bob.java first as this is the smallest bit
public class Bob { // ... omitted ... // float stateTime = 0; // ... omitted ... // public void update(float delta) { stateTime += delta; position.add(velocity.tmp().mul(delta)); } }
We added an attribute called
stateTime. This will track Bob’s time in a particular state. We will be using it to provide the time spent by Bob in the game. It is important for the animation to work out which frame to show. Don’t worry about it now. If you really want to understand, think of each frame of the animation as a state. Bob goes through state_frame_1, state_frame_2 and so on. Each one of these states lasts for 0.066 seconds. Once the state time exceeded 0.066 seconds, Bob goes into the next state. The animation class knows which image to provide to be displayed for the current state. It is also called the key frame. The
WorldRenderer.java suffers the most changes. The following snippet contains all the changes.
public class WorldRenderer { // ... omitted ... // private static final float RUNNING_FRAME_DURATION = 0.06f; /** Textures **/ private TextureRegion bobIdleLeft; private TextureRegion bobIdleRight; private TextureRegion blockTexture; private TextureRegion bobFrame; /** Animations **/ private Animation walkLeftAnimation; private Animation walkRightAnimation; // ... omitted ... // private void loadTextures() { TextureAtlas atlas = new TextureAtlas(Gdx.files.internal('images/textures/textures.pack')); bobIdleLeft = atlas.findRegion('bob-01'); bobIdleRight = new TextureRegion(bobIdleLeft); bobIdleRight.flip(true, false); blockTexture = atlas.findRegion('block'); TextureRegion[] walkLeftFrames = new TextureRegion[5]; for (int i = 0; i < 5; i++) { walkLeftFrames[i] = atlas.findRegion('bob-0' + (i + 2)); } walkLeftAnimation = new Animation(RUNNING_FRAME_DURATION, walkLeftFrames); TextureRegion[] walkRightFrames = new TextureRegion[5]; for (int i = 0; i < 5; i++) { walkRightFrames[i] = new TextureRegion(walkLeftFrames[i]); walkRightFrames[i].flip(true, false); } walkRightAnimation = new Animation(RUNNING_FRAME_DURATION, walkRightFrames); }); } spriteBatch.draw(bobFrame, bob.getPosition().x * ppuX, bob.getPosition().y * ppuY, Bob.SIZE * ppuX, Bob.SIZE * ppuY); } // ... omitted ... // }
#05 – declaring the RUNNING_FRAME_DURATION constant which controls how long a frame in the running/walking cycle will be displayed
#08 – #11 –
TextureRegions for Bob’s different states.
bobFrame – will hold the region that will be displayed in the current cycle.
#14 – #15 – the two
Animation objects that are used to animate Bob when walking/running.
TextureAtlas
#19 – the new
loadTextures() method
#20 – loading the TextureAtlas form the internal file. This is the
.pack file resulted from
TexturePacker2.
#21 – assigning the region named “bob-01? (this is the actual png name without extension – see TexturePacker2) to the
bobIdleLeft variable.
#22 – #23 – creating a new
TextureRegion (note the use of copy constructor, we need a copy, not a reference) and flipping it on the X axis so we have the same image but mirrored for Bob’s idle state but when facing right. The flipping is very useful as we don’t need to load an extra image, we create one from an existing one.
#24 – assign the corresponding region to the block
#25 – #28 – we create an array of
TextureRegions that will make up the animation. We know that there are 5 frames and their names:
bob-02, bob-03, bob-04, bob-05 and
bob-06. We use a for loop for convenience.
#29 – This is where the animation for the walking left state is defined. The first parameter is the duration of each frame from the sequence expressed in seconds (0.06) and the second parameter takes the ordered list of frames making up the animation.
#31 – #38 – Creating the animation for the walking right state. It is a copy of the animation for the walking left state but each frame is flipped. It is important to make a copy of the frames and not flipping them as the originals get flipped too.
#40 – the changed
drawBob() method.
#42 – setting the active frame to one of the idle frames depending on Bob’s facing
#44 – in case Bob is in a walking state, we extract the corresponding frame for one of the walking sequences based on Bob’s current state time and assign it to bobFrame which will be drawn to the screen. Running the
StarAssaultDesktop application, we should see the Bob animation in action. It should look something like this:
The source code for this project can be found here:. You need to checkout the branch part2. To check it out with git:
git clone -b part2 [email protected]:obviam/star-assault.git. You can also download it as a zip file.
Reference: Android Game Development with libgdx – Animation, Part 2 from our JCG partner Impaler at the Against the Grain blog.
I just downloaded the zip file pulled into eclipse using android 4.03 and java 1.6. I always get this error:
I checked that assets folder in android app is a source folder of desktop:
Also checked that the file does in fact exist in the assets folder of android project.
Awesome,, early waiting for part – 3
Thanks for tutorials!
Waiting for part3
Step by Step tutorials are very good. It will be helpful.we are eagerly waiting for part III ……Thank you!
if(keys.get(Keys.JUMP)){
bob.setState(bob.state.JUMPING);
bob.getVelocity().y = Bob.JUMP_VELOCITY;
}
add this and he will jump with ‘x’ i belive or some key around there. Still didnt cover anything about collisions or gravity. But still very good tutorial so thanks. | https://www.javacodegeeks.com/2013/02/android-game-development-with-libgdx-animation-part-2.html | CC-MAIN-2017-26 | refinedweb | 1,522 | 58.99 |
EventLog.WriteEntry Method (String, EventLogEntryType)
Assembly: System (in system.dll)
Parameters
The string to write to the event log.
- type
One of the EventLogEntryType values.
Use this method to write an entry of a specified EventLogEntryType to the event log. The type is indicated by an icon and text in the Type column in the Event Viewer for a log. writes a warning entry to an event log, "MyNewLog", on the local computer.
using System; using System.Diagnostics; using System.Threading; class MySample{ public static void Main(){ // Create an EventLog instance and assign its source. EventLog myLog = new EventLog(); myLog.Source = "MySource"; // Write an informational entry to the event log. myLog.WriteEntry("Writing warning to event log.", EventLogEntryType.Warning); } } warning to event log.", EventLogEntryType.Warning); } /. | https://msdn.microsoft.com/en-us/library/fc682h09(v=vs.80) | CC-MAIN-2018-05 | refinedweb | 125 | 53.98 |
This is a simple project that i was using to test my current java knowledge (kinda like revision) and for that i used two classes to make a simple averaging program. i know i0m making it more difficult but like i said i'm testing myself but i come up with an unexpected error. Here's the code:
- 1st Class
Code java:
import java.util.Scanner; public class Main { public static int num; public static int total = 0; public static int average; public static void main(String args[]) { Scanner scanner = new Scanner(System.in); Secondary secondaryObject = new Secondary(); secondaryObject.loop(num, total, scanner); average = total/3; System.out.println("Average: " + average); } }
- 2nd Class
Code java:
import java.util.Scanner; public class Secondary { public void loop(int num, int total, Scanner scanner) { int counter = 0; while(counter < 3) { num = scanner.nextInt(); total += num; counter++; } } }
Now the problem is after inputing the numbers it doesn't give me the average but the value of 0. Althought this is not that big of a deal that just made me want to know why and if you could tell me that would be much appreciated. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36951-simple-averaging-program-using-two-different-classes-printingthethread.html | CC-MAIN-2018-22 | refinedweb | 191 | 54.02 |
HI, I have created a simple car game in Unity, and I'm adding the finishing touches to end the development. I have already made a car controller, programmed wheel colliders and cameras, I just would like to know how to add a hit reaction rag doll system to the civilians on the sidewalk so when they get hit by a car a rag doll is triggered and they go flying.
Answer by shadowpuppet
·
Apr 13, 2017 at 10:54 PM
A few ways to do this but I think the simplest ( for me ) is to have a trigger collider on the civilians with a script to instantiate a ragdoll.
1. make a copy of your civilian model and use the ragdoll wizard to create a ragdoll version of that civilian. drag that into your prefab folder to make it a prefab.
2. have a rigidbody and a collider on your car and tag it something...like "car"
3. put a script like the one below on each civilian. when the car collides with the civilians trigger ( and script) it destroys the civilian and replaces it with a ragdoll.you can use the same script for each civilian, just drag the appropriate ragdoll into the "prefab" slot
using UnityEngine;
using System.Collections;
public class GenericCivillianHurt : MonoBehaviour {
public GameObject prefab;
void OnTriggerEnter (Collider other) {
if(other.tag == "Car")
{
Instantiate(prefab, transform.position , transform.rotation);
Destroy.
Visualize character joint swing and twist axis script
0
Answers
Implimenting RagDoll physics with Spine Skeleton?
0
Answers
Is there any resources on making the most basic character for prototyping?
0
Answers
Syncing Ragdoll Positions (Networking)
0
Answers
Ragdoll Raycast to Collision
0
Answers | https://answers.unity.com/questions/1339843/how-do-i-make-a-rag-doll-hit-reaction-system.html | CC-MAIN-2020-40 | refinedweb | 278 | 61.77 |
I would like assign a time limit to an input so that if the user does not insert an input within the time limit the input will be cancelled
how assign a time limit to an input so that if the user does not insert an input within the time limit the input will be cancelled
how do i do that?
From where? The console? A GUI textbox?
the console
hmmm, that's tricky...
I came up with a rudimentary solution which reads in from System.in in a separate thread and puts it into a StringBuilder buffer which can then be read externally. While someone else isn't actively trying to read from the reader, the buffer is emptied. Note that there needs to be synchronization between the two threads on the buffer otherwise bad things will happen.
This may be good enough to work with most user inputs, but you may want to improve on it. There are still a few issues (both those I have thought of and those I didn't think about).
import java.util.Scanner; public class TimedScanner implements Runnable { public static void main(String[] args) { TimedScanner scanner = new TimedScanner(); System.out.print("Enter your name in 10 second: "); String name = scanner.nextLine(10000); if (name == null) { System.out.println("you were too slow."); } else { System.out.println("Hello, " + name); } } private Scanner scanner; private StringBuilder buffer; private boolean reading; private Thread t; public TimedScanner() { scanner = new Scanner(System.in); buffer = new StringBuilder(); reading = false; t = new Thread(this); t.setDaemon(true); t.start(); } public String nextLine(long time) { reading = true; String result = null; long startTime = System.currentTimeMillis(); while (System.currentTimeMillis() - startTime < time && result == null) { try { Thread.sleep(30); } catch (InterruptedException e) { } synchronized (buffer) { if (buffer.length() > 0) { Scanner temp = new Scanner(buffer.toString()); result = temp.nextLine(); } } } reading = false; return result; } @Override public void run() { while (scanner.hasNextLine()) { String line = scanner.nextLine(); synchronized (buffer) { if (reading) { buffer.append(line); buffer.append("\n"); } else { // flush the buffer if (buffer.length() != 0) { buffer.delete(0, buffer.length()); } } } } } }
isn't reading from InputStream e.g. System.in is a blocking call ?
Yes it is, which is what makes this task tricky. The code I had puts the blocking call in a separate thread and passes the data to a buffer, but it's hardly ideal.
I dont understand what are the downsides in this methodeI dont understand what are the downsides in this methode | http://www.javaprogrammingforums.com/java-theory-questions/11602-how-do-i-limit-input-time.html | CC-MAIN-2014-35 | refinedweb | 406 | 67.76 |
Repository watched
website_scraping ·
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
h2. Snaplet-Tasks Snap Framework ( ) support for command line tasks akin to _Rake_ tasks from _Ruby On Rails_ Depends on Snap 0.7.* and <others> h3. Installation ``` cabal install snaplet-tasks ``` h3. Integration _coolapp.cabal_: ``` Build-depends: snaplet-tasks ``` _Application.hs_: ```Haskell import Snap.Snaplet.Tasks -- ( ... some code ... ) data App = App { _heist :: Snaplet (Heist App) -- ( ... other state ... ) , _tasks :: Snaplet TasksSnaplet } ``` _Site.hs_: ```Haskell app = do h <- nestSnaplet "heist" heist $ do heistInit "resources/templates" -- ( ... some init ... ) t <- nestSnaplet "tasks" tasks $ tasksInit return $ App h .. .. .. t ``` Because we need full app state at disposal in our tasks the easiest way was to create them from normal handlers. Thus, in order to run task - app has to be started. It is comparable with Ruby On Rails Rake tasks where whole app environment _can_ be loaded for task as well. The only difference is that here in Snap - app also listens on TCP for connections - and that *we utilize*. There are two constriants to running tasks: 1. You can't run them remotely ( meaning that you can't fire app task handler remotely ) 2. Task route is being _hashified_ and the only way to specify task is by using command line app arg switch _T_ ( ex. T mysupertask or T namespace:second:supertask ) The supplied name of task is being hashified again and thus matched with route that responds to exaclty that hash. This implies that you have to have some tool/function to create tasks so that they respond to hashified names. And indeed you have: ``` task :: String -> (Handler b v ()) -> (B.ByteString, Handler b v ()) ``` This function takes name of the task ( ex. "db:migrate" ) and a handler and returns a tuple that you can use defining your app routes. | https://bitbucket.org/kamilc/snaplet-tasks | CC-MAIN-2015-18 | refinedweb | 313 | 75.4 |
Singleton is one of the most widely used creational design pattern to restrict the object creation by applications. In real-world applications, resources like Database connections or Enterprise Information Systems (EIS) are limited and should be used wisely to avoid any resource crunch. To achieve this, we can implement a Singleton design pattern to create a wrapper class around the resource and limit the number of objects created at runtime to one.
Thread Safe Singleton in Java
In general, we follow the below steps to create a singleton class:
- Override the private constructor to avoid any new object creation with new operator.
- Declare a private static instance of the same class.
- Provide a public static method that will return the singleton class instance variable. If the variable is not initialized then initialize it or else simply return the instance variable.
Using the above steps I have created a singleton class that looks like below:
ASingleton.java
Copypackage com.journaldev.designpatterns; public class ASingleton { private static ASingleton instance = null; private ASingleton() { } public static ASingleton getInstance() { if (instance == null) { instance = new ASingleton(); } return instance; } }
In the above code, getInstance() method is not thread-safe. Multiple threads can access it at the same time and for the first few threads when the instance variable is not initialized, multiple threads can enters the if loop and create multiple instances and break our singleton implementation.
There are three ways through which we can achieve thread safety.
- Create the instance variable at the time of class loading.
Pros:
- Thread safety without synchronization
- Easy to implement
Cons:
- Early creation of resource that might not be used in the application.
- The client application can’t pass any argument, so we can’t reuse it. For example, having a generic singleton class for database connection where client application supplies database server properties.
- Synchronize the getInstance() method
Pros:
- Thread safety is guaranteed.
- Client application can pass parameters
- Lazy initialization achieved
Cons:
- Slow performance because of locking overhead.
- Unnecessary synchronization that is not required once the instance variable is initialized.
- Use synchronized block inside the if loop and volatile variable
Pros:
- Thread safety is guaranteed
- Client application can pass arguments
- Lazy initialization achieved
- Synchronization overhead is minimal and applicable only for first few threads when the variable is null.
Cons:
- Extra if condition
Looking at all the three ways to achieve thread safety, I think the third one is the best option and in that case, the modified class will look like:
Copypackage com.journaldev.designpatterns; public class ASingleton { private static volatile ASingleton instance; private static Object mutex = new Object(); private ASingleton() { } public static ASingleton getInstance() { ASingleton result = instance; if (result == null) { synchronized (mutex) { result = instance; if (result == null) instance = result = new ASingleton(); } } return result; } }
Local variable
result seems unnecessary. But it’s there to improve the performance of our code. In cases where the instance is already initialized (most of the time), the volatile field is only accessed once (due to “return result;” instead of “return instance;”). This can improve the method’s overall performance by as much as 25 percent.
If you think there are better ways to achieve this or the thread safety is compromised in the above implementation, please comment and share with all of us.
Update: String is not a very good candidate to be used with synchronized keyword because they are stored in string pool and we don’t want to lock a string that might be getting used by another piece of code, so I have updated it with Object, learn more about synchronization and thread safety in java.
Pallavi Singh says
Isn’t static inner helper class approach better than the above three mentioned?
Deepak says
Hi Pankaj,
How to execute your Singleton class.
Where is the main method ?
Deepak
Harshit Joshi says
Hey Nice Solution but i think we can also optimize is by not creating object lock instead of it we can use class lock
example
Copied!
package com.journaldev.designpatterns;
public class ASingleton {
private static volatile ASingleton instance;
private ASingleton() {
}
public static ASingleton getInstance() {
ASingleton result = instance;
if (result == null) {
synchronized (ASingleton.class) {
result = instance;
if (result == null)
instance = result = new ASingleton();
}
}
return result;
}
}
I am not sure Please guide me as i am fresher
Datta says
This seems ok..create instance on class loading and later return same instance..No synchronization required.
public class Singleton {
private static Singleton singltonInstance = new Singleton();
private Singleton() {
}
public static Singleton getInstance() {
return singltonInstance;
}
@Override
protected Object clone() throws CloneNotSupportedException {
throw new CloneNotSupportedException();
}
}
jatin says
You don’t talk about enum. Enum objects are best suited for singleton classes. It covers both concurrency and serialization issues we face in our custom singleton classes.
ashish gupta says
public class TestSingleton {
public static void main(String[] args) throws NoSuchMethodException, SecurityException, InstantiationException,
IllegalAccessException, IllegalArgumentException, InvocationTargetException {
ASingleton singleton1 = ASingleton.getInstance();
System.out.println(singleton1);
ASingleton singleton2 = ASingleton.getInstance();
System.out.println(singleton2);
Constructor c = ASingleton.class.getDeclaredConstructor((Class[]) null);
c.setAccessible(true);
System.out.println(c);
ASingleton singleton3 = c.newInstance((Object[]) null);
System.out.println(singleton3);
if (singleton1 == singleton2) {
System.out.println(“Variable 1 and 2 referes same instance”);
} else {
System.out.println(“Variable 1 and 2 referes different instances”);
}
if (singleton1 == singleton3) {
System.out.println(“Variable 1 and 3 referes same instance”);
} else {
System.out.println(“Variable 1 and 3 referes different instances”);
}
}
}
Hi Pankaj,
NIce article,
but the solution you provided , fails the above testcase.
Thanks .
Gandalf says
The test case you have mentioned uses Java Reflection API which is not used by the same code base for most cases. Imagine an app that wants to keep the constructor private and still uses reflection to access the private constructor at other place, if that’s the case, then why make the constructor private in the first place? So generally speaking the developer of an app that uses private members won’t break his own rules to use Java reflection to access private constructors. Now, let’s take the case of third party apps accessing a common library that uses singletons. Now as a developer using the common library, you wanna use it in the proper way rather than using reflection so that your app performs better and your threads do not accidentally create multiple instances of the limited resource.
The purpose of data encapsulation and other Object oriented programming methodology is for encouraging loose coupling and other best practices, these rules are not related to security to prevent something from happening. Its your code, you can do anything with it. These principles are for us developers to follow to ensure our app is maintainable in the long run. If we purposefully decide to not use these principles, then no one is there to restrict us.
Now in case, let’s say we are suspicious about our own developers going out of the way and using reflection which is a rare case but still, then the solution is to run Java app with certain security policies that restrict the reflection capabilities. Read more here –
Parvesh Kumar says
The third option is not a singlton class it will create new instance always.
AB says
C++ implementation using a bool var instead (possibly more efficent)
static ConnectionManager *_currentInstance = nullptr;
static bool instanceAvailable = false;
static std::mutex singletonMutex;
ConnectionManager* ConnectionManager::getInstance()
{
// thread safe implementation of SINGLETON
if (!instanceAvailable)
{
instanceAvailable = true;
std::unique_lock lck(singletonMutex);
if (!_currentInstance) {
_currentInstance = new ConnectionManager;
}
}
return _currentInstance;
}
Diego says
Thanks Pankaj, great article.
One question: does adding synchronization at the getInstance level (regardless of the approach) remove the need to synchronize access to other resources in the instance (member variables, class variables and methods)?
thanks again
Pankaj says
No it won’t. When you add synchronization to getInstance method, it is to make sure that you have only one instance of the Singleton class. So if two threads are calling getInstance method, both will get reference to the same object, hence any change in class variables or instance variable will be reflected in other thread as well. So you will have to take care of synchronization for these variables too. Ideally, you should avoid any variables in Singleton class.
Raviraj_bk says
we can use “synchronized (ASingleton.class) ” instead of below piece of code
synchronized (mutex) {
result = instance;
if (result == null)
instance = result = new ASingleton();
}
Raviraj_bk says
is it right?
Pankaj says
Synchronizing class will lock it’s object too, that’s why we should always create a temporary variable for synchronizing block of code.
Edo says
I think that this kind of implementation doesn’t avoid the singleton creation through reflection.
My favourite implementation is something like this:
public class MySingleton {
private static class Loader {
private static MySingleton INSTANCE = new MySingleton();
}
private MySingleton(){
if(Loader.INSTANCE!=null){
throw new InstantiationError( “Creating of this object is not allowed.” );
}
}
public static MySingleton getInstance(){
return MySingleton.Loader.INSTANCE;
}
}
What do you think?
Thanks
Edo
Marton Korosi says
I think the 3rd approach is incorrect.
instance variable should be volatile! like this:
private static volatile ASingleton instance= null;
——–
If instance variable is not volatile, JVM can reorder
new ASingleton()
and
instance=
operations which may result in an instance variable which will point to uninitialized memory!
refer to:
Pankaj says
Thanks for pointing it out, it was written long back when volatile was broken in java 1.4. I have updated the code with correct approach now.
Noy says
Hi. I have two issues with the selection of the third option. One is that by creating a mutex static object you contradict the fact that you keep a static option that you may never use(mutex).
Second is that i would change the code to contain the if condition one time unsynchronized and obe time in an inner if when it is synchronized on mutex.
Aarati says
As “instance” field is static. So will it make sense to synchronize on (synchronized (ASingleton.class)
jubi max says
Thank you very much
Prachi says
I replied the same answer to a interviewer but she asked me that object is not created yet since getInstance() is static method who will get object lock in synchronization.
Thanks,
Prachi
Nayanava says
by object do you mean the mutex object??
Raviraj_bk says
Hi,
Since mutex is static varible it will be initialized during class loading time, and then synchnozization happens on the lock of mutex object.
Ravi says
Can we not use the volatile() instead of using synchroniation?
Hamid Khan says
You haven’t made any comment regarding why not making ‘instance’ volatile.. any thoughts?
Satya says
best to make getInstance() synchronized
Santosh Kumar Kar says
If the method is synchronized, where huge threads are calling that method, every thread will have to wait when it’s being used by other thread. Think is it really required when the object is already created and != null? So making method synchronization is not a good idea.
vijay says
prevent from cloning is not mentioned
Ganesh says
That I really appreciate for this article . I learned good stuff today, and also I red some where below code snipped is very good code for singleton, can you compare with your code in terms pros and cons.
package com.journaldev.designpatterns;
public class ASingleton{
private ASingleton(){
}
private static class ASingletonInnerClass{
private static final ASingleton ASINGLETON= new ASingleton();
}
public static ASingleton getInstance(){
return ASingletonInnerClass.ASINGLETON;
}
}
DIVYA PALIWAL says
Hi,
Great article!
I am trying t understand thread safety in Singleton Pttern.
Can you please provide and example code where different threads are trying to implement Singleton pattern.
Thanks,
Divya
Praful says
Very nice article. But if loop wording is not correct, please change it to if condition.
Amey Jadiye says
I think making instance volatile make much difference than approach given in post
Archna Sharma says
In your third approach, although it checks the value of instance once again within the synchronized block but the JIT compiler can rearrange the bytecode in a way that the reference to instance is set before the constructor has finished its execution.
This means the method getInstance() returns an object that may not have been initialized completely.
I think, the keyword volatile can be used for the instance variable. Variables that are marked as volatile get only visible to other threads once the constructor of the object has finished its execution completely.
Naveen J says
String is not a very good candidate to be used in synchronization, so I have updated it with Object, learn more about synchronization and thread safety in java
Why string is not good candidate………. Since its immutable its a good candidate to use in synchronization block right.
Pankaj says.
Asanka says
Hi Pankaj,
I believe this is the best way, it doesn’t use any synchronization at all, provides better performance too.
Pankaj says
Singleton Pattern Approaches – read this article to learn about different ways and their pros-cons.
Rishi Dev Gupta says
there is a good way to implement the Singletons, that will look after all the issue and with lesser code
public enum InputValidatorImpl {
instance;
// add some method
}
Pankaj says
there are several ways to implement singleton pattern and Enum is one of the way but we can’t achieve lazy initialization with this. Read more at
Hesham says
double check lock is not thread safe in java this issue listed by PDM tool (block synchronizing)
Anonymous says
‘,~ I am really thankful to this topic because it really gives useful information :-`
Ben says
you can avoid your extra if condition if you create instance described below,
Once we declare static, it will refer the same object all the time
package com.journaldev.designpatterns;
public class ASingleton{
private static ASingleton instance= new ASingleton();
private ASingleton(){ }
public static synchronized ASingleton getInstance(){
return instance;
}
}
Pankaj says
Here you are creating the instance at the time of class loading… what if we are expecting some parameters like DB Connection URL, User, Password to be passed in the getInstance method so that it will be generic.
anon says
Erik van Oosten says
Example 3 is the traditional double check idiom for lazy initialization. The double check is badly broken in java before version 5. The example you have here is broken also because instance is not declared volatile.
The best way is to extract the singleton code to a separate class which is guaranteed to be loaded only when the referring class is instantiated.
For more information see item 71 in “Effective Java” (2nd edition) by Joshua Bloch.
But you’d better avoid singletons completely.
gaurav says
invoke Bill Pugh’s singleton pattern | https://www.journaldev.com/171/thread-safety-in-java-singleton-classes-with-example-code | CC-MAIN-2019-30 | refinedweb | 2,418 | 52.09 |
(edit: I am glad you all liked this project! It got to be the top Python article of the week!)
A while ago I was messing around with Google's Text to Speech Python library. This library basically reads out any piece of text and converts it to an .mp3 file. Then I started thinking of making something useful out of it.
My installed, saved, and unread pdf books 😕
I like reading books. I really do. I think language and the sharing of ideas are fascinating. I have a directory in which I store pdf books that I plan on reading but never do. So I thought, hey, why don't I make them audio books and listen to them while I do something else 😄!
So I started planning what the script should look like:
- Allow user to pick a .pdf file
- Convert the file into one string
- Output an .mp3 file
Without further needless words, let's get to it.
Allow user to pick a .pdf file
Python can read files easily. I just need to use the method open("filelocation", "rb") to open the file in reading mode. I don't want to be copying and pasting files to the directory of the code every time I want to use the code though. So to make it easier we will use the tkinter library to open up an interface that lets us choose the file.
```python
from tkinter import Tk
from tkinter.filedialog import askopenfilename

Tk().withdraw()  # we don't want a full GUI, so keep the root window from appearing
filelocation = askopenfilename()  # open the dialog GUI
```
Great. Now we have the file location stored in the filelocation variable.
Allow user to pick a .pdf file ✔️
Convert the file into one string
As I said before, to open a file in Python we just need to use the open() method. But we also want to convert the pdf file into regular pieces of text. So we might as well do it now.
To do that we will use a library called pdftotext.
Let's install it:
sudo pip install pdftotext
Then:
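Something like the following should do it. This is a minimal sketch that assumes the filelocation variable from the picker above; the name pdf_file is just illustrative:

```python
# open the chosen PDF in binary read mode and parse it with pdftotext
with open(filelocation, "rb") as f:
    pdf_file = pdftotext.PDF(f)
```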
Great. Now we have the parsed file stored in a variable. If you print this variable, you will get an array of strings, where each string is one page of the file. To get them all into one .mp3 file, we will have to make sure they are all stored as one string. So let's loop through this array and add them all to one string.
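A sketch of that loop, again assuming the illustrative pdf_file name from above; string_of_text is the variable the final snippet below expects:

```python
string_of_text = ""
for page in pdf_file:
    string_of_text += page  # concatenate every page into one big string
```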
Sweet 😄. Now we have it all as one piece of string.
Convert the file into one string ✔️
Output .mp3 file 🔈
Now we are ready to use the gTTS (Google Text To Speech) library. All we need to do is pass the string we made, store the output in a variable, then use the save() method to output the file to the computer.
Let's install it:
sudo pip install gtts
Then:
```python
from tkinter import Tk
from tkinter.filedialog import askopenfilename
import pdftotext
from gtts import gTTS

final_file = gTTS(text=string_of_text, lang='en')  # store file in variable
final_file.save("Generated Speech.mp3")  # save file to computer
```
As simple as that! We are done 🎇
(edit: I am glad you all liked this article! The intention of all my writings is to be as simple as possible so all-levels readers can understand. If you wish to know more about customizing this API, please check this page: gtts.readthedocs.io/en/latest/)
I am on a lifetime mission to support and contribute to the general knowledge of the web community as much as possible. Some of my writings might sound too silly, or too difficult, but no knowledge is ever useless. If you like my articles, feel free to help me keep writing by getting me coffee :)
Discussion
I would suggest adding two lines to save the MP3 file to the same location and name as the PDF file.
from os.path import splitext
outname = splitext(filelocation)[0] + '.mp3'
then use:
final_file.save(outname)
That would be a nice add!
Oh, fantastic! I was looking to add this by myself but I don't know python coding. Thanks for bringing it up!
I am really intrigued by this article. I tried everything to install pdftotext lib on my mac but was unsuccessful. I keep getting this error --> " error: command 'gcc' failed with exit status 1"
I installed the OS dependencies (Poppler) using brew but it didn't work. Can anyone help me?
make sure you have these two installed:
python-dev
libevent-dev
Yup, I installed them. No matter what I do, I keep getting this error --> "ERROR: Command errored out with exit status 1"
and I installed gcc too!
I just started getting the same thing on my system (Ubuntu). After a lot of Google/StackExchange, this worked (copy from my annotations):
For whatever reason, in order to install the following two, I had to install some stuff on my Ubuntu Mate ** system-wide ** to get rid of compile errors:
I'm using PyCharmCE. After the above, I could use this in the PyCharm terminal:
After I did all of that, successful! Program works like a charm (hehe).
Cheers!
Thanks for sharing your solution!
A pleasure to finally be able to give back a little!
I have a Mac, brother. Can't use app-get. what should i do now?
Are you using the default Python 2.7?? You may need to use Python 3.x
I got this working on the Mac using Python 3.7.4 using virtual env and brew. Works fine.
I am using docker with my Macbook without any issue. And it is a great alternative to start working on any environment, stack, etc.
They mention what all has to be installed for various O.S's in here pypi.org/project/pdftotext/
Have you tried to install the OS dependencies as specified in the docs? github.com/jalan/pdftotext#macos
Really cool and quick project! One thing I would suggest is to use python's
join()method instead of looping over the list of strings. I think that's the more "pythonic" way and should also perform a little better.
Thanks for the tip!
I sure will start using that
My favorite part is (if I am not mistaken) that this would work for any language PDF as long as google text to speech supports the language.
hahaha omg how could I not think about doing the research.
You're true.
check this out
cloud.google.com/text-to-speech/
I am on
fedoraand had to install the following dependencies to get this working before I could
pip install pdftotext
Sequence would be
Thanks a lot for the article, I tried a lot finding such thing but now am able to read(listen) to all my untouched PDFs.
That was my intention.
Glad you liked it :)
I tried this on Win10, but was unable to install pdftotext package in Python 3.8.
Hence, I did this using another way :
github.com/suryabranwal/TIL/blob/m...
It will not work offline. Try AudioBook to listen offline.
Documentation:- audiobook.readthedocs.io/
Convert your Pdf in cool AudioBook with 3 lines of python code
CodePerfectPlus ・ Oct 23 ・ 1 min read
This is a life-saving procedure you shared. I tried it and works like charm. Thank you so very much.
I have a question though...
I know this is a simplistic approach to just explain the basics( and its awesome). Please, is it possible to change the reader's voice and reading speed?
I am glad you liked it!
The intention of all my writings is to be as simple as possible so all-levels readers can understand.
If you wish to know more about customizing this API, please check this page:
gtts.readthedocs.io/en/latest/
An observation here ( I'm sure this has to do with the gtts engine though ):
The reader would rather spell some words than pronounce the actual words and its a bit strange. I did a conversion where the word "first" was spelt rather than pronounced. Initially, I thought such occurs when words are not properly written and the text recognition engine is affected. "Five" was pronounced fai-vee-e,and other spellings like that.
Overall though, it is manageable and one can make good sense out of the readings. Now I can "read" my e-books faster with this ingenious solution.
Thanks again, @mustapha
Really cool !
However , when I tried to convert a decent sized pdf file (3.0 MB) , I got the following error :
"gtts.tts.gTTSError: 500 (Internal Server Error) from TTS API. Probable
cause: Uptream API error. Try again later."
Is Gtts blocking me from using their API ? How shall I resolve this ?
Can i use custom voice?
Thank you for recommend these good Text to Speech Software Solutions. I'm come from VeryUtils software company, VeryUtils has a DocVoicer (Text-To-Speech) Software, it can convert from PDF files to MP3 Audio easily. I would like to see our product get featured in articles like this. Would it be possible for you to write something for us? If so, please let me know, thank you.
Frank Xue
CEO VeryUtils.com
veryutils.com
[email protected]
I have a problem running [vagrant@centos8 ~]$ sudo pip3 install pdftotext on CentoOS8:
error: command 'gcc' failed with exit status 1
Command "/usr/bin/python3.6 -u -c "import setuptools, tokenize;file='/tmp/pip-build-7_3v7vuh/pdftotext/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-ac0irxfy-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-7_3v7vuh/pdftotext/
I'm running Python 3.6.8, do I have to use Python 3.8 explicitly?
This code gets stuck after I add PDF. can anyone provide any solution to this?
from tkinter import *
import pygame
import PyPDF2
from gtts import gTTS
from tkinter import filedialog
from os.path import splitext
root = Tk();
root.title('PDF Audio Player')
root.geometry("500x300")
Initialise Pygame Mixer
pygame.mixer.init()
Add PDF Function
def addPDF():
PDF = filedialog.askopenfilename(title="Choose a PDF", filetypes=(("PDF Files", "*.PDF"), ))
PDF_dir = PDF
def PDFtoAudio(PDF_dir):
file = open(PDF_dir, 'rb')
reader = PyPDF2.PdfFileReader(file)
totalPages = reader.numPages
string = ""
Play Selected PDF Function
def play():
audio = audioBookBox.get(ACTIVE)
audio = f'C:/Users/kusha/Downloads/{audio}.mp3'
Create Playlist Box
audioBookBox = Listbox(root, bg="black", fg="red", width = 70, selectbackground="gray", selectforeground="black")
audioBookBox.pack(pady=20)
Define Player Control Button Images
backBtnImg = PhotoImage(file='Project Pics/back50.png')
forwardBtnImg = PhotoImage(file='Project Pics/forward50.png')
playBtnImg = PhotoImage(file='Project Pics/play50.png')
pauseBtnImg = PhotoImage(file='Project Pics/pause50.png')
stopBtnImg = PhotoImage(file='Project Pics/stop50.png')
Create Player Control Frame
controlsFrame = Frame(root)
controlsFrame.pack()
Create Player Control Buttons
backBtn = Button(controlsFrame, image=backBtnImg, borderwidth=0)
forwardBtn = Button(controlsFrame, image=forwardBtnImg, borderwidth=0)
playBtn = Button(controlsFrame, image=playBtnImg, borderwidth=0, command=play)
pauseBtn = Button(controlsFrame, image=pauseBtnImg, borderwidth=0)
stopBtn = Button(controlsFrame, image=stopBtnImg, borderwidth=0)
backBtn.grid(row=0, column=0, padx=10)
forwardBtn.grid(row=0, column=1, padx=10)
playBtn.grid(row=0, column=2, padx=10)
pauseBtn.grid(row=0, column=3, padx=10)
stopBtn.grid(row=0, column=4, padx=10)
Create Menu
myMenu = Menu(root)
root.config(menu=myMenu)
Add the converted audio file in the menu
addAudioMenu = Menu(myMenu)
myMenu.add_cascade(label="Add PDF", menu=addAudioMenu)
addAudioMenu.add_command(label="Add One PDF", command=addPDF)
root.mainloop()
Hey, this is really cool.
hey thanks buddy!
glad you liked it
will it also read page number, footer or any extra garbage text?
Yes, of course as they are also a type of text.
Using Machine learning you can avoid those things.
Do you have any demo audio files? I'm really interested to hear it. :)
Run this code and hear the result
Great!!
Does it work in any language?
check this link
cloud.google.com/text-to-speech/
Good stuff, Mustafa! I created a github project for this in case anyone wants to see and get an idea how this is set up on an Ubuntu 18.04 workstation.
github.com/hseritt/pdf2voice
Thank you for sharing the repo Harlin!
This idea is great. But if you just want to listen. Use Moon+ Reader App. It converts text to speech.
Awesome, awesome, awesome! I'm guessing they're ok to listen to?
Yea they get the job done
Cool stuff!
Really useful article.
thanks buddy!
Nice one Mustafa!
I'm curious what would happen if the PDF has images or mathematical equations?
Suggestion : Display status of the conversion ..
I copy this codes and paste in python 3(Anaconda) and nothing displayed, no error no output, please why, thanks
I do not use Anacoda so I can't guess what the problem is.
Just make sure you have all the needed packages installed and it should run smoothly. | https://dev.to/mustafaanaskh99/convert-any-pdf-file-into-an-audio-book-with-python-1gk4 | CC-MAIN-2020-50 | refinedweb | 2,154 | 69.07 |
The class doesn't sees the function "place();" in the main.cpp and the int x and int y is also not recognized when using it in the functions owned by the class:
Code:#ifndef ORGAN_H #define ORGAN_H class Organ { public: int x; int y; void kill(int a,int b); void it_lives(int prop); void SetCoord(int x, int y); }; #endif // ORGAN_Hplace(); is defined in the main.cpp file. The header file is defined in the main.cpp.place(); is defined in the main.cpp file. The header file is defined in the main.cpp.Code:#include "organ.h" void SetCoord(int x, int y) { place(x,y,"@");//ERROR: Undeclared } void kill(int a,int b) { x=a;//ERROR: x and y undefined y=b; place(x,y," "); } void it_lives(int prop) { if(prop==0) { kill(x,y); } } | https://cboard.cprogramming.com/cplusplus-programming/74528-functions-class-class-variables.html | CC-MAIN-2017-22 | refinedweb | 139 | 70.13 |
IR 2.4Ghz. Cordless phones use 900Mhz, 2.4Ghz, and now 5.8Ghz. Key fobs, garage door openers, and some home automation systems use 315Mhz or 434Mhz. Since these devices are used in mass produced consumer products, these parts are widely available and relatively inexpensive.
TV remotes use an Infrared LED
Car key fobs use RF
Using some low-cost parts and breakout boards that should only cost around US $10 per demo, two basic demos will be setup. One using IR and one with RF. Each demo has a transmitter and matched receiver attached to mbed. To keep it simple in each case, 8-bit ASCII character data will be sent out over the link and received back using mbed's serial port hardware. A terminal application running on the PC attached to mbed's USB virtual com port will be used to type in test data. The data will only appear back on the PC if the IR or RF communications link is operating correctly.
Serial ports use a 10-bit protocol to transfer each character. The idle state of the serial TX pin is high. For bit 1, the start bit, the signal goes low. It is followed by the 8-bits of ASCII character data and a high stop bit (total of 10 bits). The rate at which bits change is called the baud rate. The low cost devices used in the demos will operate at 1200-2400 baud max(or bits per second (BPS) ). For the demo, both the transmitter and receiver are attached to mbed, but in a typical application they would be on different subsystem with its own processor, and physically separated by several meters.
IR and RF communication links assembled on a breadboard for the demo.
IR transmit and receive demo
Consumer IR devices use an infrared LED in the handheld remote and an IR receiver located inside the device. Since sunlight and ambient room lighting would interfere with any IR detector just looking at light levels, the signal modulates (i.e. turns on and off) a high frequency carrier signal. This is called amplitude shift keying(ASK). Typically for IR, the frequency is in the 30-60Khz range with 38Khz being the most common carrier frequency. There are a few early first generation electronic ballasts for fluorescent lights operating this range that can cause interference with IR remotes, but in most cases it works well. This means that the IR LED transmitter must be modulated. On mbed, this can be done using the PWM hardware. The IR detector modules have a built-in bandpass filter and hardware to demodulate and recover the original signal.
Sparkfun IR LED transmitter module
The Sparkfun IR LED breakout board seen above contains a 50MA high output IR LED and a driver circuit using a transistor as seen in the schematic below. An IR Led can be used instead now that this board is no longer available, but the circuit still needs the correct polarity to control the LED on/off state, since the serial port's internal UART receiver hardware must have a low start bit and a high stop bit to work. A discrete IR LED should have an operating voltage of around 1.5V, so don't forget the series voltage dropping resistor!
Schematic of IR LED breakout board
Sparkfun IR receiver module
The Sparkfun IR receiver breakout board seen above contains a Vishay TSOP853 38Khz IR receiver module. The block diagram is seen below.
IR receiver module block diagram
This newer 38Khz IR receiver module from Sparkfun should also work for the demo.
Wiring
Solder header pins into the breakout boards and everything will hookup on a breadboard. Point the IR LED towards the receiver breakout board. At a range of just a few inches the receiver can pickup the signal from the side, if case you have trouble pointing it directly towards the IR LED. Long right angle header pins might be a good idea on these IR breakout boards since they can then be mounted sticking up and directly facing each other. At close range, your hand or a piece of paper will reflect back enough IR to transmit the signal in case you can't mount them facing each other on the breadboard.
IR Demo Code
In the demo code, mbed's PWM hardware supplies the 38Khz carrier for transmission. The PWM period is set to 1/38000. The IR LED CTRL input is driven by the 38Khz PWM signal. IRLED = .5 generates a 50% PWM duty cycle to produce a 38Khz square wave for the carrier. The gnd pin on the IR LED is actually switched on and off using mbed's serial data output to eliminate the need for any external digital logic, and still permit direct use of the serial port hardware for 38Khz IR data transmission. It is a bit more complex than appears at first glance, since the start bit must be zero for the 10-bit serial port input hardware to work, and the IR receiver inverts the signal.
IR_Demo
#include "mbed.h" Serial pc(USBTX, USBRX); // tx, rx Serial device(p13, p14); // tx, rx DigitalOut myled1(LED1); DigitalOut myled2(LED2); PwmOut IRLED(p21); //IR send and receive demo //LED1 and LED2 indicate TX/RX activity //Character typed in PC terminal application sent out and returned back using IR transmitter and receiver int main() { //IR Transmit code IRLED.period(1.0/38000.0); IRLED = 0.5; //Drive IR LED data pin with 38Khz PWM Carrier //Drive IR LED gnd pin with serial TX device.baud(2400); while(1) { if(pc.readable()) { myled1 = 1; device.putc(pc.getc()); myled1 = 0; } //IR Receive code if(device.readable()) { myled2 = 1; pc.putc(device.getc()); myled2 = 0; } } }
Import programIR_demo
IR transmit and receive demo using Sparkfun breakout boards See
Characters typed in PC terminal window echo back using the IR IR link, read back in on the serial port, and echoed back in the terminal application window. If the characters typed in do not appear in the window as they are typed, there is a problem with the IR link. LED1 and LED2 on mbed are used to indicate TX and RX activity. If you only see a few occasional errors, look around for a fluorescent light that might be causing interference. While running the demo if you happen to have an IR remote control handy, point it at the IR receiver and hit a button. Assuming that the IR remote uses a carrier near 38 KHz, you should see a random looking string of characters on the PC every time you hit a button.
A Camera can detect the invisible IR light from an IR LED
While you can't see the IR light from an IR LED with your eyes directly to see if it is working, it will show up on a webcam, digital camera, or a cell phone camera as a faint purple light as seen in the image on the right above. The IR LED is off in the left image. Unlike your eyes, the image sensors used in cameras have a slight response to IR light.
Applications that need to transfer longer streams of data and not just 8-bit characters will need a bit more complex code for a different communications protocol. In most cases, shifting, some bit banging, and time delays will be needed since it will not be possible to use the serial transmission protocol implemented in hardware in mbed’s serial port UART. On mbed, a PWMOut pin can also be used to drive a low power IR LED directly or the CTL input on the IR breakout board. In this case, the IR LED breakout board power pins are tied to 5V and gnd, producing a bit more drive current for the LED than the demo setup. Set the period to 38Khz, assign the PWM output a value of 0.0 to turn it off (sends a "0" bit) , or a value of 0.5 to turn it on (sends a "1" bit a 38Khz square wave). In this case the serial port hardware would not be used at all, the software shifts out each bit. Each data bit would need to be shifted over, and masked off, and used to drive the PWM output pin after the appropriate time delay using wait(). Several cycles of the carrier frequency are required for the receiver to lock onto each bit and this sets the maximum data transfer rate (around 2400bPS).
Consumer IR remotes send out a long continuous stream of bits (around 12-50 bits long) for each button, so the serial port cannot be used to send these codes (i.e., no start and stop bits for every eight bits of data). Universal remotes with learning have an IR detector and they learn by recording the bit pattern from the old IR remote control and then they play it back. For more information on consumer IR signals see and. Two cookbook projects, TxIR and IR remote provide additional insight into this approach and code examples for mbed. LIRC is a PC application designed to decode and send consumer IR signals. An extensive table of the IR formats and codes from LIRC used by different manufacturers can be found at and the table format is described at. The IR codes often contain preamble bits that help the receiver’s automatic gain control (AGC) circuit adjust the gain to the signal strength to minimize errors.
IrDA is an infrared communications protocol designed to work at a range of around 1 meter. The IrDA IR signal is not modulated. It relies on high signal strength at a short distance to overcome interference from ambient light. It was a bit more popular in portable devices prior to the introduction of Wi-Fi and Bluetooth. It would require a different IR detector without a band pass filter and demodulator along with software to implement the complex protocols needed for data transfer. IrDA transceiver modules are available in packages similar to the IR receiver used in the demo with data rates from 115KbPS to 4000KbPS.
RF transmit and receive demo
RF remotes tend to cost a bit more, have longer range, and are not line of sight only as is the case for IR. Most car key fobs operate at 315Mhz or 434Mhz. Most key fob systems use encryption to prevent a car thief from intercepting and spoofing the codes.Rolling codes are used in most remotes to prevent simple “replay” devices from gaining access. Governments allocate and license the use of RF frequencies and place limits on power output, so frequencies used can vary from country to country. A final product with RF modules may require testing and certification. In the USA, many products using RF are required to have a label with an FCC ID number that can be searched to quickly determine the operating frequency of the device.
Sparkfun RF transmitter module
For the RF demo, the 315Mhz RF transmitter module seen above was used. It uses a surface acoustic wave (SAW) device. It is the metal can seen in the image above. A SAW device has significant performance, cost, and size advantages over the traditional quartz crystal for a fixed frequency transmitter. The RF transmitter’s 315 MHz carrier is turned on and off by the signal (ASK modulation). This is similar to the earlier IR example, but it is operating at a much higher frequency in the UHF range (315 MHz vs. 38 KHz). A similar pair of 434MHz modules is also available from Sparkfun and it is more common in Europe and Asia. The pins have the same function and the demo should also work for the 434MHz modules without changes (other than a shorter antenna length).
Schematic of a SAW transmitter
The schematic is hard to find for each module, but a typical SAW transmitter for this frequency range is shown above. Basically, it has a high frequency transistor, the SAW device for the correct frequency, and some RLCs. This looks identical to the circuit found on the back side of the Sparkfun module.
Sparkfun RF receiver module
The 315MHz Sparkfun RF receiver module seen above is used in the demo to receive the signal. These modules are also available from a number of other sources. Similar RF modules in small surface mount packages are also available, but a breakout board would be needed for use on a breadboard. The datasheets provide little information, but the module is basically a low-cost bare bones super-regenerative receiver. A super-regenerative receiver uses a minimum of parts and has a basic automatic gain control (AGC). An AGC attempts to automatically adjust the amplification (gain) needed to match the signal strength.
A schematic for a 434MHz RF receiver module
The datasheet does not include a schematic, but a schematic is shown above for another RF module in this frequency range and the same parts are found on the Sparkfun receiver module. A dual op amp, two high frequency transistors, some RLCs and a variable inductor with a screw slug for frequency tuning. For a more in depth discussion on how the various parts of this circuit function see.
On the breadboard, a decoupling capacitor of 330uf was placed near the power pins of each module and it greatly reduced transmission errors. Some PCs also seem to have a bit more noise present on the USB 5V power lines than others and this can increase the RF transmission error rate and the need for decoupling capacitors. Each module has an antenna pin and a 6-13cm jumper wire works well as an antenna at short distances on a breadboard. At greater distances, the antenna would need to be longer (1/4 wave = 23cm (at 315MHz) is suggested). To get maximum range, don’t forget that a 1/4 wave monopole antenna is supposed to be sticking above (but not touching) a ground plane about the same size as the antenna. Earth approximates a ground plane, but metal provides more directional gain. The RF modules have a maximum range of up to 160M under perfect conditions. Some of the new surface mount RF modules mounted on breakouts have a range of a 1000M or more with a corresponding increase in price. Antenna size in a final product can also be reduced using several techniques.
Wiring
The RF modules have pins with spacing that fits in standard breadboards. Exercise some caution inserting the modules into the breadboard as they are not quite as sturdy as standard IC pins. The pins are also a bit smaller in size and on some breadboards they need to be adjusted a bit for a good clean contact. On this demo setup, the receiver had more errors until the module was pulled up a bit from the breadboard. Make certain that you hookup all of the power and ground pins. If the RF modules are placed too close on a breadboard, it can cause errors. In the demo setup, the metal can on the transmitter module was placed facing the receiver module to provide a bit of shielding.
RF Demo Code
In the RF demo, data is sent out over the serial port, transferred using the RF link, and read back in on the serial port. It seems straightforward, but to keep the transmitter and receiver locked on, the signal needs to be constantly changing. In the demo, the idle state constantly sends out a 10101010 pattern to keep the receiver locked on to the correct gain for the signal's strength. Without sending the sync pattern, if no change in data occurs for several ms, the receiver's AGC will start turning up the gain. Eventually, it cranks up the gain way up and outputs whatever RF noise it sees at the time (the digital data output appears almost to be random noise when this happens).
RF_Demo
#include "mbed.h" Serial pc(USBTX, USBRX); // tx, rx Serial device(p9, p10); // tx, rx DigitalOut myled1(LED1); DigitalOut myled2(LED2); //RF link demo using low-cost Sparkfun RF transmitter and receiver modules //LEDs indicate link activity //Characters typed in PC terminal windows will echo back using RF link int main() { char temp=0; device.baud(2400); while (1) { //RF Transmit Code if (pc.readable()==0) { myled1 = 1; //Send 10101010 pattern when idle to keep receivers AGC gain locked to transmitters signal //When receiver loses the signal lock (Around 10-30MS with no data change seen) it starts sending out noise device.putc(0xAA); myled1 = 0; } else //Send out the real data whenever a key is typed device.putc(pc.getc()); //RF Receive Code if (device.readable()) { myled2 = 1; temp=device.getc(); //Ignore Sync pattern and do not pass on to PC if (temp!=0xAA) pc.putc(temp); myled2 = 0; } } }
Import programRF_loopback_test
RF link test program using low cost modules from Sparkfun See
Characters typed in PC terminal window echo back using the RF RF link, read back in on the serial port, and echoed back in the terminal application window. Keep in mind that any RF link will occasionally experience errors and is subject to interference from other RF sources nearby, cosmic radiation, and thunderstorms. If you happen to have a car key fob that operates at 315MHz (or 434Mhz in Europe and Asia), try holding it right next to the receiver antenna when running the demo and hit a button. Typically, you will see a long random looking string of characters when it generates and transmits the encrypted code for the button. If you disconnect power to the transmitter chip, the receiver's AGC will crank up the gain until it starts sending out the background RF noise converted to random strings of ASCII characters.
The demo code leaves the transmitter turned on. Some applications may need to turn off the transmitter whenever it is not sending data out on the serial port. When the transmitter turns on, sending a preamble of 0xFF, 0xFF, 0x00, 0x55 and having the receiver wait for 0x55 only, prior to sending data will help the receiver's AGC lock onto the signal strength a bit faster.
Instead of using the serial port hardware directly and constantly sending a sync character, bit banging and a different sync method or encoding scheme that always changes the data could be used. Two such common coding schemes are NRZ and Manchester coding. This type of receiver works best with an encoding scheme that has a DC level (average value) near zero (i.e. 50% "0"s and 50% "1"s in the digital signal). It is also possible to tune the receiver circuit to the transmitter by adjusting the screw on the receiver module, but they seem to be set very close at the factory and the tuning procedure is a bit involved. Tuning probably will be needed only if it is operating near the maximum range. receiver’s automatic gain control (AGC) slowly turning up the gain until the digital output starts to toggle from background noise whenever a signal is not present. Some designs disable the receiver output when RSSI falls below a fixed threshold (i.e., a squelch feature).
One widely used IR and RF transmission protocol sends a preamble to lock in the AGC to the correct level followed by the data, and then the inverse of the data. This insures that the signal's DC level is always near zero to keep the AGC locked onto the signal. By checking each data byte against its inverse, errors can also be detected by the receiver. The disadvantage is that you lose half of the useable signal transmission bandwidth by sending the data bytes twice. Another wireless protocol sends the data three times and uses a majority vote to attempt to correct errors whenever a checksum or CRC error occurs on a packet of data. There is an mbed cookbook project using this transmitter module to send signals to the X10 home automation wireless receiver. These low cost IR and RF devices designed for remotes have limited bandwidth. If you need to transfer large amounts of data there are other wireless options to consider.
A Wijit Smart AC Plug
The Wijit is one of the newer low cost home automation examples. It has a Wi Fi hub that communcates to 10A relay controlled plug modules using a newer generation of 434Mhz RF receiver and transmitter modules similar to those used earlier. Information on the WiJit is a bit hard to find, but some details and internal photos from the FCC approval are at. They can be controlled from a free smart phone app.
Additional Wireless Options
These demos show just the basics of getting the communication channel operational. In the case of a handheld remote, if the signal is not received the user just hits the button again and perhaps moves closer. For applications that need more reliability, a bidirectional link would be required. One solution to support bidirectional communication is for each device to have a transmitter and receiver operating at a different frequency. Another possibility is to turn off the transmitter when it is not sending data. This requires a more complex protocol such as CSMA to solve the problems that would occur whenever two devices start to transmit at the same time. Being able to send data in both directions would allow each message to be acknowledged and checked for transmission errors using a checksum or CRC. In case of an error, it could automatically be retransmitted. Many of the higher data rate RF modules use frequency shift keying (FSK) instead of the simpler ASK for modulation. Similar to AM versus FM commercial radio broadcasts, having the signal change the frequency of the carrier signal provides more noise immunity than changing the amplitude, but it also requires a bit more hardware and power. More advanced wireless systems also have several channels at different frequencies and they automatically switch to an inactive channel to avoid conflicts.
These techniques are required for reliable networking and many are built into the more complex and costly wireless communication standards and protocols such as WiFi, Bluetooth and Zigbee. In general, higher data rates, longer range, and higher reliability always adds to the power used, hardware cost, and software complexity. If you need higher data transfer rates and higher reliability, there are several wireless networking solutions available on breakout boards with mbed code examples in the wireless section of the mbed cookbook. These modules offer a drop-in solution and typically have a RF transceiver along with a microcontroler with firmware that implements the wireless protocol. Other wireless modules available on breakout boards shown below to consider for use with mbed and with mbed code examples are the RFM22B (128KbPS low-cost FSK transceiver), XBee (Zigbee), RN-42(Bluetooth) and the WiFly (WiFi)
RFM22B (128KbPS low-cost FSK transceiver)
XBee (Zigbee)
RN-42(Bluetooth)
WiFly (WiFi)
Extending the Range on RF devices
For longer range, more advanced antennas with high directional gain pointed in the correct direction can be used for stationary transmitters and receivers. The Wi Fi antenna seen below worked at 56Km using 200mw of RF power with a gain of 23.5 dBi. RF amplifiers can also be used to boost RF output and increase range, but check regulations and limitations in each country on RF output power. The 1000mw RF amplifier seen below can boost Wi Fi range to 12Km and has been used for remote control of UAVs.
2.4 GHz Grid Antenna with 23.5 dBi gain
1000mw 2.4 Ghz RF amplifier
4 comments on IR and RF remote controls:
Please log in to post comments.
Regarding Code for RF link module: I'm using 2 Microprocessors communicating using these RF link modules. Is the given code to program only one Microprocessor? If i'm using 2 microprocessors, one connected to Transmitter and pin 9 of mbed and the other microcontroller connected to reciever and pin 10 of the mbed, would the transmitter microcontroller only have the transmitter code and would the reciever microcontroller have the corresponding receiver code separately? Thank you very much for clarifying this. Regards. | https://os.mbed.com/users/4180_1/notebook/ir-and-rf-remote-controls/ | CC-MAIN-2018-13 | refinedweb | 4,042 | 59.74 |
DTparseframe(3dm) DTparseframe(3dm)
DTparseframe - parse a frame of DAT audio data
#include <sys/types.h>
#include <dmedia/dataudio.h>
void DTparseframe(DTPARSER* dtp, DTFRAME* dtfp)
dtp A pointer to the target DTPARSER
dtfp A pointer to the frame of DAT audio data to be parsed.
DTparseframe parses a frame of digital audio data read from a DAT. It
determines which subcodes are present in the data. If those subcodes have
changed since the last frame, then DTparseframe executes a callback of
the appropriate type [see DTaddcallback] passing to it the subcode data
found in the frame.
Callbacks are only executed when the subcodes change because subcodes
tend to come and go and calling whenever a particular subcode is seen
would be unpredictable. Also several of the subcodes never change on the
tape but may be repeated thousands of times. Executing a callback every
time the same subcode is seen would be very inefficient. For unchanging
subcodes, the callback is executed the first time the subcode is seen. A
program that wishes to see the data every frame is free to look directly
into the DTFRAME structure.
If a dt_audio callback is set, DTparseframe executes that callback every
frame after byte-swapping and, if the tape indicates it is necessary,
de-emphasizing the data.
DTintro(3dm), DTaddcallback(3dm), DTcreateparser(3dm),
DTdeleteparser(3dm), DTremovecallback(3dm), DTresetparser(3dm),
datframe(4)
Mark Callow
PPPPaaaaggggeeee 1111 | https://nixdoc.net/man-pages/IRIX/man3dm/DTparseframe.3dm.html | CC-MAIN-2022-21 | refinedweb | 232 | 54.63 |
Thanks.
It seems that the answers differ in the following test case
25 5
1 0 4 -10 14 -28 33 -53 60 -86 94 -128 137 -177 187 -234 247 -298 314 -371 389 -452 473 -541 564
Thanks.
It seems that the answers differ in the following test case
25 5
1 0 4 -10 14 -28 33 -53 60 -86 94 -128 137 -177 187 -234 247 -298 314 -371 389 -452 473 -541 564
Can someone give hints for IOI Training Camp 20XX. I even looked up the code on github, but can't understand what it does.
//It should belong m1 = a[0]+a[1];long current = a[0]+a[1];for(int i=1;i<n-1;i++){current = Math.max(current+b[i]-a[i]+a[i+1], a[i-1]+b[i]+a[i+1]);m1=Math.max(current, a[i]+a[i+1]));}
instead of
long m1 = a[0]+a[1];long current = a[0]+a[1];for(int i=1;i<n-1;i++){current =Math.max(current+b[i]-a[i]+a[i-1], a[i-1]+b[i]+a[i+1]);m1=Math.max(current, a[i]+a[i+1]));}
Can anyone help me to find what's wrong with my solution. Even small testcases where my code fails would be helpful. EDIT:Got it.
#include <bits/stdc++.h>#define ll long long#define fr(i,n) for(int i=0;i<n;i++)#define pb push_back#define inf (1<<31)#define xx first#define yy second#define all(c) c.begin(),c.end()using namespace std;int main() {ios::sync_with_stdio(0);int n;ifstream File;File.open("test.in");File>>n;vector<long long int> a(n),b(n);vector<long long int> f(n);
For anyone interested, I found a nice explanation of the crux of the problem.
Sorting a sequence by swapping adjacent elements using minimum swaps
You can test your code here.
Also solutions of problems can be found here, and you can generate random test inputs (even if your logic is different).
ioi-training/INOI Solutions at master · keshav57/ioi-training · GitHub | https://www.commonlounge.com/profile/5d38468797c14b3bb97d635c8253613c | CC-MAIN-2018-30 | refinedweb | 359 | 61.77 |
That more or less sums up this week.
As for my storm of a brain, here's whats been on my mind lately.
The IRC Channel is starting to get quiet. Maybe we've burned out all of the ideas or maybe we just need some new people to come in and get the ideas flowing again, but what ever it is, it's causing the IRC to go more or less "public".
#ZWarriorChronicles is the channel. Nothing too secretive, it's always been there. This is just the first time I'm letting you guys know about it.
If you spam or are just being a nuisance, you're out. If you're just looking for somewhere to talk randomness, you're out. I am not trying to be a control freak tyrant or anything, but I still want to make a game here, so I don't want the place flooded with spammers.
If things get TOO out of hand, I'm just going to put a password on it. This is merely a test to see how many -- if any -- people are interested in giving ideas on a more regular basis, because frankly, the chats dead. I am trying to get it rolling again, but the regulars are usually busy nowadays, so I'm hoping somebody can refresh the mood for us.
Everyone's invited, and sorry to those I kicked out at an earlier time. As was understood, there was ninjary going on andI think by now everything has cooled off.
So I'll give it a week. If too much nonsense goes on in that week, I'll leave things as they are now, but if some cool people show up and we can get some ideas flowing again, then it may be a permanent thing.
Anyway, back to development news.
You saw Yemma above, and yeah, that's all I've done in the weeks after Piccolo's release. I like how ZWC has a style now -- one I didn't even know was developing. It seems that my models have a common element to them that brings them together. I'm not looking for CnC this time mainly because I just don't have time for it. I want to get Chi-Chi modeled as fast I can, and get going on those first episodes of the story. It's time we get moving on this thing already!
I mean, it makes sense. Why model Raditz if his part is hardly even relevant yet? This way I can focus on mini-games and the inventory system which may or may not make a return (yes it did exist at one point.)
So I will leave things at that. The IRC will have a voice system in place so the spammers should be more or less nullified. But yeah. Hope to see you there, and I'll post some Chi-Chi pics up real soon.
EDIT:
SERVER: Freenode
Yo man. You might want to edit your post and let us know the server your channel is on. Otherwise, how can I get there?
Webchat.freenode.net
Freenode
Cool | http://www.moddb.com/games/z-warrior-chronicles/news/journal-11-starting-the-story | CC-MAIN-2016-22 | refinedweb | 524 | 82.04 |
ipsec atoul, ultoa - convert unsigned-long numbers to and from ASCII
#include <freeswan.h> const char *atoul(const char * src, size_t srclen, int base, unsigned long * n); size_t ultoa(unsigned long n, int base, char * dst, size_t dstlen);
These functions are obsolete; see ipsec_ttoul(3) for their replacements.ol(3), strtoul(3)
Fatal errors in atoul are: empty input; unknown base; non-digit character found; number too large for an unsigned long.
Written for the FreeS/WAN project by Henry Spencer.
There is no provision for reporting an invalid base parameter given to ultoa. = atoul( /* ... */ ); if (error != NULL) { /* something went wrong */ | http://huge-man-linux.net/man3/ipsec_atoul.html | CC-MAIN-2018-05 | refinedweb | 101 | 64.51 |
tag:blogger.com,1999:blog-29587058743143197662014-10-13T05:11:12.581-07:00XORMarshal Stephen Hack: Editing facebook contactsI am going to write with an assumption that most of you have droids. If you do, I think you should do a system update (available under Settings). After you update your phone, you will see some changes in the droid interface, particularly the android market. <br /> <br /. <br /> <br />Option 1: <br />Select your FB contact. Click on EDIT from the menu. Now you should see the message saying your contact is not editable because it is an FB contact. Now, click on the MENU <span style="font-weight: bold;">again</span> and click ADD CONTACT. Enter the first and last names of your contact again (spelling should match your existing contact). Enter the phone number, then SAVE. Boom! You should see that the phone number is picked up and merged into the existing FB contact. <br /> <br />Option 2 (I highly doubt this will happen): <br />This happens if option 1 still does not work and now you have duplicates. Select your FB contact. Select MENU, then EDIT CONTACT. Now select the MENU again and choose JOIN. Then select your new contact. <br /> <br />Tada! <br /><img src="" height="1" width="1" alt=""/>Marshal Stephen a few months now, I've been reading a little bit about android and playing around with the sdk (downloadable from <a href=""></a>) with Eclipse. Too bad I dont have a G1 phone to test it out, but soon.<br /><br /.<br /><br /.<br /><br />Alternatively, if you are just a fan of visuals, and if you have a Blackberry, you can create themes which makes your blackberry OS look like android. You would need the theme desginer from Blackberry's website. It is called <strong>Plazmic CDK 4.7</strong>. You also will need a device emulator for your phone.<img src="" height="1" width="1" alt=""/>Marshal Stephen your 127.0.0.1There are times when you want to do something with the URL, or portions of the URL by parsing it. However, if you are on your local box, you are only going to see the good old 127.0.0.1 AKA "localhost" in your URL. Lets say you want to parse http://<span style="color:#330099;">msdn</span>/testapplication so as to get only the 'msdn' part of the URL. You wont be able to do it on your local box because IIS would resolve your hostname to <span style="color:#330099;">localhost.</span><br /><br />Here's what you can do to mask a real-time url over your localhost.<br />From your file explorer, locate the <span style="color:#330099;">etc</span> folder under:<br /><br /><span style="color:#3333ff;">C:\->Windows->System32->drivers->etc</span><br /><br />Here, you can see a file named <span style="color:#330099;">hosts< <span style="color:#330099;">localhost</span> to whatever name (<span style="color:#330099;">msdn</span>, in our example). Now, when you type http://<span style="color:#330099;">msdn</span>/testapplication, you will actually be firing up your local web application. Now, you can do whatever you wish to do with the url since it is not "localhost" anymore, linguistically speaking.<br /><br />I hope that was a cool trick that comes in handy! Until next time...<img src="" height="1" width="1" alt=""/>Marshal Stephen to ExcelI like inventing something on an as-needed basis, yes. How often does that happen though?! 
Then I thought how cool it would be if I just innovate or derive from existing technology?<br /><br />To cut it short, here's the problem:<br /.<br /><br />Here's a solution that cost you a bit of cash:<br /.<br /><br />Here's mine:<br /.<br /><br />Last but not least, you should close the Excel applocation object (as it is thread unsafe) and dispose of any COM resources that you used.<br /><br />Easy as it sounds, it is easily few hours of work if you are familiar with COM objects and FileStreams. If your computer crashes in the process, do not email me. If it doesn't and you have problems, I'd be glad to help.<img src="" height="1" width="1" alt=""/>Marshal Stephen we have more than one Default Button per page?It's been a while since I found out some cool and handy tips in ASP.NET 2.0.<br /><br /><span style="font-size:130%;"><strong>So, what's it this time?</strong></span><br />My login page has two buttons which takes the user through two different processes: the login process and registration process. Of course, there're textboxes for username-password and an invitation code that the user needs to enter to go through the registration process.<br /><br />Now, you've probably guessed the scenario. If you type in the user name and password and hit the return key, the login button's event should fire up. However, if you type an invitation code and hit the return key, the register button's event should fire up.<br /><br / <strong><em>VERY EASY!</em></strong><br /><br />All we need to do is put the controls that are required to perform a single process inside a ASP panel and specify the default button inside the panel's html tag (which looks something like the sample below): <?xml:namespace prefix = asp /><asp:textbox<asp:textbox<asp:button<span style="color:#cc0000;"><asp:panel<asp:textbox<br /><asp:button<br /></asp:panel></span><em><span style="color:#990000;">asp:panel<br /><span style="font-size:130%;color:#000000;"><strong>Behind the scenes (nothing much here but sheer curiosity):</strong></span><br />Well, being a developer, and a curious one at coding (just like the rest of my species), I did a VIEW SOURCE on my page to see how the content is rendered.<br /><br />The Panel as we all know is rendered as a DIV. However, ASP.NET adds some extra attributes to the DIV like: <"><em><span style="color:#cc0000;"><asp:textbox<asp:button<asp:textbox<asp:button<span style="color:#000000;"><blockquote><asp:textbox<asp:textbox<asp:button<asp:textbox<asp:button<span style="color:#000000;"><em><strong>"As a good practice, I usually put all the controls with the same validation-group inside one panel if I wish to use a default button for the page. Also, now I know that I can have more than one default button in a page."</strong></em></span></asp:button></asp:textbox></asp:button></asp:textbox></asp:textbox></blockquote></span><"><span style="color:#000000;">Well, that's it until next time!</span></asp:button></asp:textbox></asp:button></asp:textbox></asp:textbox><img src="" height="1" width="1" alt=""/>Marshal Stephen Menu inside a gridviewIt's been a while since I found myself doing something that's trickey and not available on google the way I wanted.<br /><br /><strong><span style="font-size:130%;">Problem</span>:</strong><br /><strong><em>Scenario1</em></strong><br /.<br /><br /><br /><strong><em>Scenario2</em></strong><br />Sometimes, we might want some commands to show up based on the row's data. 
In the page that I was working on, I have to show <span style="color:#990000;">PRINT</span>, <span style="color:#cc0000;">EMAIL</span>, <span style="color:#990000;">EDIT</span> and <span style="color:#cc0000;">PREVIEW</span> buttons for each row in a gridview. Moreover, not all rows have the same buttons. Some rows cannot be editied and some rows cannot be emailed (when there is no email address to email).<br /><br /><br /><strong><span style="font-size:130%;">Solution</span>:</strong><br />So, what can we do to have the above two functionalities and at the same time make our gridview look pretty? Well here we are:<br /><br /><br /><br /><a href=""><img id="BLOGGER_PHOTO_ID_5125381680294721794" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a>The above is a regular gridview that we all might have seen and implemented. It's plain and doesn't implement the command fields as of now. As you know we usually have the last or first column containing command fields like <span style="color:#990000;">EDIT</span> (<span style="color:#006600;">UPDATE</span> & <span style="color:#666600;">CANCEL</span>), <span style="color:#990000;">DELETE</span> and so on.<br /><br /><br /><p>Now, don't get discouraged yet because, our final product will look something like this:<br /><br /><a href=""><img id="BLOGGER_PHOTO_ID_5125380902905641186" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><.<br /></p><br /><br /><p>Pretty cool eh? This works for row specific command fields (like we discussed earlier) as well. We just have to tweak it.</p><br /><p><strong><span style="font-size:130%;">Behind the scenes:</span></strong><br />So, what's happening? Let's start from the markup of the gridview. Since I am assuming that you already have a working knowledge on the gridview and template fields, I am not going to go into details.<br /><br />--------------------------------------------------------</p><br /><p><strong>The template field markup: (click for larger image)</strong><br /><a href=""><img id="BLOGGER_PHOTO_ID_5125644188695853346" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a>.<br /><br />As you can see, I am placing the div tag with all the applicable menu items (as linkbuttons) within the last column's itemtemplate section. Note that I also have set the <span style="color:#006600;"><em>z-index</em></span> of the div tag and an <span style="color:#006600;"><em>absolute positioning</em></span>. This is vital for the positioning of the div. Since I have the div insite the itemtemplate, I can bind it with a dataitem and even give it a commandname. Sweet!<br /></p><p>--------------------------------------- </p><p><strong>The Client side script to toggle the div's visibility ON and OFF:</strong><br /><br />// Shows DIV popup commands for gridview<br />function Show />pnl.style.<em>if(link1 != null) link1.style.e.Row</span> to the script, I can make the whole row's style change on an onmouseover event. That will give the grid a look and feel of a clickable row. Of course you can extend it by adding styles through javascript. Pretty cool eh?<br /></p><p><strong>Style's files:</strong><br />Well what did I miss? Ah, the images! 
I'll just put the two images that I used to set the background of the row when you mouse your mouse pointer over it.<br /><br /></p><br /><p><a href=""><img id="BLOGGER_PHOTO_ID_5125650081390983474" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a>The above is used to set the style for the gridview row when you do an onmouseover event.<br /><br />The below image is used by the div as a background. You can change anything you want to fit your needs.</p><p><a href=""><img id="BLOGGER_PHOTO_ID_5125650506592745794" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a></p>Questions? Sure! Suggestions? Even better! I like to know how this can be enhanced and/or tuned. I can be reached at <a href="mailto:[email protected]">[email protected]</a>.<img src="" height="1" width="1" alt=""/>Marshal Stephen Sign-On using ASP.NETFor many, this seems to be a highly complex issue. Well, if you stop thinking hi-tech, you can make it simple and secure (to a certain extent).<br /><br /><strong>Introduction and Assumptions:</strong><br />Since it is fairly easy to implement the same in the intranet, we'll talk more about the extranet implementation. Let call our main application (the portal) as the <strong>Master</strong>. All our users have access to the Master (with role based security). However, not all have access to the child applications within Master.<br /><br /><em>Scenario:</em><br />Let us assume a user-- Bob. Bob is an accountant for two different departments( <strong>A </strong>and <strong>B</strong>).<br /><br /><strong>Behind the Scenes:</strong><br />We need to setup few tables to do this single sign-on which enabled Bob to bypass the login process from the Master application. I'll explain the table layout and thier functionality below.<br /><br /><em><strong>tbl_App</strong>: This table has the list of child applications to be brought into single sign-on</em><br /><ul><li><em>AppID</em></li><li><em>AppName</em></li><li><em>AppURL</em></li></ul><p><strong><em>tbl_AppUsers</em></strong>: This table has the list of users and the Application ID (from the previous table) that they have access to</p><ul><li><em>AppID</em></li><li><em>UserID</em></li></ul><p><strong><em>tbl_AppUserKey</em></strong>: This table stores the AppID, UserID, and a GUID (which acts more like a authentication token for our single sign-on)</p><ul><li>AppID</li><li>UserID</li><li>Key</li></ul><p>These are the key tables that you'll need. Assuming this is not for beginners, I will bypass other topics like user roles for different applications, securing the keys generated (to be discussed later) and so forth.</p><p><strong><br />The Process:<br /></strong><em><br />Master:<br /></em. </p> <span style="color:#cc0000;">client script</span> which I'll post at the end) for the link that Bob has just clicked, and pass the generated key as a query string. </p><p><em>Department A's application:</em><br />Get the query string (the GUID) and read the corresponding User ID from the tbl_AppUserKey table. Now that you have the User ID for Bob, it's not a good idea to keep the GUID in the table. So, delete it! duh!</p><p. 
</p><p><span style="color:#cc0000;"><strong>ClientScript for Opening a new Browser Window:</strong></span></p><p><span style="color:#000000;"><span style="color:#006600;">//Open a window to the app and pass Token as query string - works only on postbacks<br /></span><span style="color:#000099;">string</span> script = <span style="color:#990000;">"window.open('</span>" + url + <span style="color:#990000;">"? "')"</span>;<br /><span style="color:#00cccc;">ClientScriptManager</span> cm = Page.ClientScript;<br />cm.RegisterStartupScript(<span style="color:#000099;">this</span>.GetType(), <span style="color:#990000;">"window",</span> script, <span style="color:#000099;">true</span>);</span></p><p>Any questions? shoot me an email at [email protected]</p><img src="" height="1" width="1" alt=""/>Marshal Stephen - ASP.NET (Server Control)<span style="font-family:times new roman;". </span><br /><span style="font-family:times new roman;"></span><br /><span style="font-family:times new roman;">Without the SDK you'll not be able to install the WPF template.Apart from this minor information, I found an interesting link which allows you to use a custom user control (credits to Mike Harsh). </span><a href="" target="_blank"><span style="font-family:times new roman;"><em><strong>Click here to Visit the link</strong></em>.</span></a><span style="font-family:times new roman;"> </span><a href=""><em><strong><span style="font-family:times new roman;">here</span></strong></em></a><span style="font-family:times new roman;"> and a sample application using the User Control Can be downloaded from</span><a href="" target="_blank"><span style="font-family:times new roman;"> <strong><em>here</em></strong>.</span></a><img src="" height="1" width="1" alt=""/>Marshal Stephen | http://feeds.feedburner.com/blogspot/uZMhk | CC-MAIN-2015-22 | refinedweb | 2,618 | 55.54 |
Revision history for Perl extension CSS::SAC. 0.08 Sun Jul 5 21:47 2008 - Some kwalitee fixes 0.06 Sun Oct 19 21:47 2004 - Module now maintained by Bjoern Hoehrmann - More accurate behavior wrt to namespace declarations 0.05 2003-07-07 19:56 - changed some code (it's not really a bugfix, it changes from a bug to a less likely one -- a real fix will come later) thanks to a report and patch from Briac Pilpré. - fixed a bug to parse attr(color, color) correctly, thanks to Bjoern Hoehrmann. 0.04 Sat Jul 06 16:13 2001 -) 0.03 Mon Apr 23 16:44:16 2001 - switched all to Class::ArrayObjects (which used to be CSS::SAC::Helpers::ArrayObjects) - provide both standard and perlified names for methods - added lots of documentation (can still be improved, suggestions welcome) - removed SACPrinter and replaced it with CSS::SAC::Writer, more flexible - fixed a number of small bugs here and there - fixed a serious bug whereby it was possible to tokenize two selectors in a row without a combinator (it now is seen as a fatal error but perhaps it should just flag an error (and keep on parsing) - the parser was creating negative conditions, something that I don't think really exists in the spec. It should have been creating negative selectors, which it now does. 0.02-i 04/03/2001 alpha mark II - added SelectorLists - started to add some things to make the interface closer to the spec for those who prefer that, while maintaining the perlishness for the others 0.01-i 04/03/2001 alpha mark I - initial tests and limited public release - refoundation of the entire CSS::Parser into CSS::SAC before 22/10/1999 and previously - the ugly and broken CSS::Parser (now obsolete) | https://metacpan.org/changes/distribution/CSS-SAC | CC-MAIN-2015-48 | refinedweb | 301 | 53.04 |
Russell Butek wrote:
>
> So we've agreed to:
> 1. remove --package
> 2. add some sort of command line argument for the namespace-to-package
> mappings
> 3. look for a mapping properties file
> 4. 2 takes precedence over 3
Yep. +1
> So now we have to decide
> 2. What does the command line argument look like? How about
> --NStoPkg <ns0> <pkg0> -N <ns1> <pkg1> ... -N <nsN>
<pkgN>
I'm thinking along the lines of what is easy to implement. The CLI util that
we are already using supports 2 argurment parameters in the form of
<arg1>=<arg2>. I suggest we use that.
I would be +1 to "--NStoPkg" being the long form and "-N" being the short form.
> 3. What's the name of this file? wsdl2java.mapping.properties? Are the
> pairs in the file <namespace>=<package>? What happens if the namespace
> string contains "=" or whitespace? Does java.util.Properties handle it?
> Maybe the pairs should be <package>=<namespace>?
Yes, java.util.Properties does handle that. Basically the information needs
to be escaped by a backslash ('\='). If you generate the mapping from a
Properties file, you will see what I mean. I believe whitespace requires
you to enclose the property name in quotes. The Javadocs has all the information.
I believe it is easier for the user if the namespace is first. Logically, we
are mapping the namespace to the package, not the other way around. (not to
mention the problems with using the Properties object).
As to the name, it should probably match the WSDL file name with the "properties"
extension in lieu of the "wsdl" extension:
example:
---------------
test.wsdl
test.properties
> About your last point, Berin, I disagree with you. I believe, via imports,
> you can have multiple WSDL definitions and, therefore, multiple namespaces
> for the WSDL things in the definitions. Here's an example from section
> 2.1.2 of the WSDL spec. The namespace for the service and the binding is
> "" and the namespace for the portType
> is "". WSDL4J supports this.
My last point is regarding XInclude processing--meaning it should be done
before we process the WSDL document. If the resulting WSDL is bad, the
author has given us an invalid document. I am relatively new to WSDL in
detail, so if I say something rediculous let me know (as you are doing
now ;P).
In the example you presented, you are _not_ using XInclude semantics. Therefore
my comments regarding XInclude don't apply here. The confusion surrounded
comments (either in source or on the list) that alluded to us using XInclude
for the processing. To use XInclude, the result would have to be altered
like this:
<?xml version="1.0"?>
<definitions name="StockQuote"
targetNamespace=""
xmlns:tns=""
xmlns:xsd1=""
xmlns:soap=""
xmlns:
<xi:include
>
</definitions>
The wsdl:import has different semantics around it--and in fact, I am surprised
that WSDL4J doesn't automagically handle it for us. Since it is officially
part of the spec, I would think that this is something WSDL4J needs to do. Otherwise,
the utility of it is reduced. | http://mail-archives.apache.org/mod_mbox/axis-java-dev/200110.mbox/%[email protected]%3E | CC-MAIN-2018-17 | refinedweb | 508 | 68.36 |
In article <[email protected]>, Alan Cox <[email protected]> writes:>> > Did you try nesting more than one "su -"? The first one after a boot>> > works for me - every other one fails.>> >> Same here: the first "su -" works OK, but a second nested one hangs:> > It appears to be a bug in PAM. Someone seems to reply on parent/child running> order and just got caught out> I once debugged a very simular sounding problem that I solved withthe following patch to login. It's a wild guess, but you could try ifit happens to solve it. If not it might at least be a hint of what has tobe done to su.(the problem is that the extra process PAM keeps waiting is process leader)(I don't have redhat, so I can't check if this is relevant here)diff -ur util-linux-2.9x/login-utils/login.c util-linux-2.9x-ton/login-utils/login.c--- util-linux-2.9x/login-utils/login.c Sun Sep 12 23:25:30 1999+++ util-linux-2.9x-ton/login-utils/login.c Tue Sep 21 03:24:52 1999@@ -1109,6 +1112,15 @@ exit(0); } /* child */++ if (tcsetpgrp(0, getpid()) < 0)+ fprintf(stderr,+ _("login: could not become foreground process group: %s\n"),+ strerror(errno));+ if (setpgid(0, 0) < 0)+ fprintf(stderr, _("login: could not become process leader: %s\n"),+ strerror(errno));+ #endif signal(SIGINT, SIG_DFL); -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | https://lkml.org/lkml/2001/4/26/7 | CC-MAIN-2022-21 | refinedweb | 273 | 62.68 |
0
what I mean is from this code, when coder doesn't put any curly bracket({,}), what does the while loop scope will cover? already test with random bracket put on the code, but, the only "brackected code" that work the same as the original code is when I put around words.push_back(word);
got no clue what make the scope like that even thought the coder doesn't put any...
:S
#include <algorithm> #include <iostream> #include <string> #include <vector> using std::cin; using std::cout; using std::endl; using std::sort; using std::string; using std::vector; int main() { // Ask for and read the words cout << "Please enter a few words, followed by end-of-file: "; vector<string> words; string word; // Invariant: words contains all of the words read so far while (cin >> word) words.push_back(word); typedef vector<string>:; // We have reported the count of all the words in the vector, so exit. return 0; } | https://www.daniweb.com/programming/software-development/threads/394832/how-while-loop-s-scope-works | CC-MAIN-2017-43 | refinedweb | 157 | 59.77 |
Prev
Java JVM Code Index
Headers
Your browser does not support iframes.
Re: Why is the main() of Java void ?
From:
Wayne <[email protected]>
Newsgroups:
comp.lang.java.programmer
Date:
Wed, 17 Oct 2007 01:58:40 -0400
Message-ID:
<[email protected]>
Christopher Benson-Manica wrote:
[comp.lang.java.programmer] Patricia Shanahan <[email protected]> wrote:
I think the place this SHOULD be documented, and is not, is in the tool
documentation. For example, java-on-Solaris is a Solaris command, and
its documentation should specify its exit codes.
That would be reasonable, however...
The JLS and JVM spec cannot even assume the implementation can return a
code to its environment.
...I feel that Java could have navigated that situation in a way
similar to the way that C has. ... Given
that this is probably what occurs already without exception in all
host environments that expect a return code, it seems like formalizing
this convention wouldn't place any undue stress on any implementation
authors.
Obviously, this specification would only apply to "normal"
termination, not inherently system-specific situations such as an
uncaught runtime exception.
(Sorry to get long-winded there.)
The situation in C and Java are not the same. A single instance of
a JVM may run multiple Java programs/applets. Just because your Java
program has ended doesn't mean the JVM will end then; it may run
additional Java programs before terminating. The System.exit() call
terminates a JVM (which has the side-effect of terminating any running
Java programs in that JVM), and is not the same as exiting from a C
program. There is no direct way for the JVM to inform the underlying OS
some Java thread's return status without exiting itself. That would be
a problem for some long lived JVMs, such as the ones running applets
in your web browser (one reason why applets are not allowed to
call System.exit() ), or in a server, or in a cell phone, or in an
embedded system.
The JVM always (almost) terminates successfully, regardless of result
of any Java programs running on the JVM. From the OS point of view,
the "program" you ran was "java.exe" and not "HelloWorld.class".
Whether or nor your .class "program" ran as expected, the JVM did
complete normally. An anology would be to have a word-processor
program return a failure exit status, if one of the documents you
edited contained any spelling errors. The OS only cares (and
only rarely) if the *process* it just cleaned up exited successfully.
The OS doesn't really care to know what the process did prior to
that.
The seeming similarity to C is only because "classroom" programs tend
to start a JVM, run a single "Main" thread, then terminate the JVM. So
there is the illusion that System.exit() returns the exit status of your
program rather than the exit status of the JVM. While in some cases
System.exit() can be abused to return a value for your "program", I
think you'll fine in real life such situations are the exception
rather than the rule.
If you want a program (or just a thread) to let the calling environment
know there was a problem, there are much better ways then
attempting to return a one byte int! Logging, MBeans, Java message
service, Sending output to System.err, or even opening a socket to
communicate termination status to another (waiting) program are some
methods that leap to mind.
Note that the JVM *does* return a meaningful exit status, even in
the cases we are talking about:
public class ExitStatusDemo
{ public static void main ( String [] args ) {
if ( args.length != 0 )
throw new RuntimeException( "Opps!" );
}
}
I think you'll find that if a program throws an uncaught
exception, the JVM terminates with a non-zero exit status.
I think the designers of Java's main method got it right. It
should be a void method. You need to handle your programs
errors internally or in some systematic way. Leave the exit
status as a JVM exit status, not some poor-man's communication
channel between programs.
-Wayne | http://preciseinfo.org/Convert/Articles_Java/JVM_Code/Java-JVM-Code-071017085840.html | CC-MAIN-2021-49 | refinedweb | 694 | 56.86 |
Goldbach's conjecture is probably one of the most famous unsolved problems in mathematics. It states that
Every even integer $n>2$ can be expressed as the sum of two primes.
The number of different ways that $n$ can be expressed as the sum of two primes is known as the Goldbach function, $g(n)$, and a plot of $g(n)$ against $n$ is called the Goldbach comet:
To explain the shape of this plot, first note that all prime numbers $p > 3$ are of the form $6q \pm 1$ ($q=1,2,\cdots$) (that is, $p \;\mathrm{mod}\;6$ is 1 or 5). To see this, write an arbitrary integer as $m=6q+r$: then if $r$ is 0, 2 or 4 then certainly $m$ is even and if $r$ is 3 then $m$ is divisible by 3. So if $m>3$ is prime it must be that case that $r$ is 1 or 5.
Now consider the possible ways of forming an even integer $n>6$ from two primes:
That is, we can expect that there are more ways of creating an even integer equal to 0 (mod 6) from two primes (the blue points above) than there are of creating an even integer equal to 2 or 4 (mod 6) (the green and red points).
The Python code used to create the above plot is below.
This code is also available on my github page.
import numpy as np import matplotlib.pyplot as plt nmax = 2000 # Odd prime numbers up to nmax. odd_primes = np.array([n for n in range(3, nmax) if all( (n % m) != 0 for m in range(2,int(np.sqrt(n))+1))]) def get_g(n): """Return the value of the Goldbach function for even integer n>2.""" g = 0 for p in odd_primes: if p > n//2: break if n-p in odd_primes: g += 1 return g # Array of indexes into our Goldbach function array. imax = nmax//2 - 1 idx = np.arange(imax) def get_n_from_index(i): return 2*(i+2) # The values of even n>2 from the index array. n = get_n_from_index(np.arange(imax)) # Get an array of Goldbach function values corresponding to the n array. g = np.zeros(imax, dtype=int) for i in idx: g[i] = get_g(n[i]) # Indexes into the arrays for the cases n = 0, 2 or 4 (mod 6) i_0 = idx[((n%6)==0)] i_2 = idx[((n%6)==2)] i_4 = idx[((n%6)==4)] # Make the scatter plot for these three cases with different colour markers. plt.scatter(n[i_0], g[i_0], marker='+', c='b', alpha=0.5, label=r'$n=0\;(\mathrm{mod}\;6)$') plt.scatter(n[i_2], g[i_2], marker='+', c='g', alpha=0.5, label=r'$n=2\;(\mathrm{mod}\;6)$') plt.scatter(n[i_4], g[i_4], marker='+', c='r', alpha=0.5, label=r'$n=4\;(\mathrm{mod}\;6)$') # Set the plot limits and tidy. plt.xlim(0, nmax) plt.ylim(0, np.max(g[i_0])) plt.xlabel(r'$n$') plt.ylabel(r'$g(n)$') plt.legend(loc='upper left', scatterpoints=1) plt.savefig('goldbach_comet.png') plt.show()
Comments are pre-moderated. Please be patient and your comment will appear soon.
There are currently no comments
New Comment | https://scipython.com/blog/the-goldbach-comet/ | CC-MAIN-2019-51 | refinedweb | 537 | 73.58 |
On 07 October 2005 10:07, Ross Paterson wrote: > On Thu, Oct 06, 2005 at 11:29:30AM +0100, Simon Marlow wrote: >> [...] >>. > > Can you show the client code to find data files under any system and > compiler? Ok, here's what I use in Alex: #include "Paths.hs-inc" getDataDir :: IO FilePath getDataDir = do m <- getPrefix binDirRel return (fromMaybe prefix m `joinFileName` dataDirRel) getPrefix :: FilePath -> IO (Maybe FilePath) -- always returns Nothing on Unix, on Windows calculates -- prefix from the path of the executable, assuming the current -- executable is in $prefix/$bindirrel. This code is always the same, so as Krasimir pointed out we should provide it to the executable via Paths.hs-inc somehow. Good ideas for how to do this are welcome. >>? > > There's a difference between the package libdir's you're talking about > (e.g. $prefix/lib/$PackageId/$CompilerId) and --libdir in autoconf > (default $prefix/lib). You couldn't easily generate the options to > configure from *dirrel. But do we really need an option to specify > the package libdir, or just --libdir (in the autoconf sense)? > "lib64/%p/%c" is easier than the full thing, but it still exposes > Cabal's placement policy to lots of installers. Similarly bindir, > datadir, libexecdir and maybe includedir. There would be no problem > with generalizing the autoconf options, though, say to allow > substitutions (e.g. of prefix). Hmm. *scratches head* *gets coffee* We have slightly different views on the world, I think. I'm going to try to describe both, so that at least I can understand what's going on. So there's a directory I'll call libInstDir, where the libraries for a package actually get installed. In my scheme, libInstDir is constructed like this: libdirrel = lib/$package/$compiler (by defualt) libInstDir = $prefix/$libdirrel in your view of the world, it is constructed like this: libdir = $prefix/lib (by default) libextradir = $package/$compiler libInstDir = $libdir/$libextradir so there's an extra layer, namely the "$package/$compiler" that gets added by the build system (or something) when installing libraries. (GHC does this, and I don't like it much). The actual value of libInstDir needs to be visible to the program/library, especially for dataInstDir, which is why perhaps having the hidden $libextradir layer is not so good. Also it's desirable to have the whole of libInstDir configurable. So we could actually make $libextradir visible and explicit, i.e. change the scheme to this: libdirrel = lib (by defualt) libextradir = $package/$compiler libInstDir = $prefix/$libdirrel/$libextradir Is that what you want? Cheers, Simon | http://www.haskell.org/pipermail/libraries/2005-October/004417.html | CC-MAIN-2013-48 | refinedweb | 423 | 53.81 |
On a plane between Philadelphia and Oslo: I am flying there for NDC2010, where I have a couple of sessions (on WIF. Why do you ask?:-)). It’s a lifetime that I want to visit Norway, and I can’t tell you how grateful to the NDC guys to have me!.
Sessions and Network Load Balancers
By default, session cookies written by WIF are protected via DPAPI, taking advantage of the RP’s machine key. Such cookies are completely opaque to the client and anybody else who does not have access to that specific machine key.
This works well when all the requests in the context of a user session are all aimed at the same machine: but what happens when the RP is hosted on multiple machines, for example in a load balanced environment? A session cookie might be created on one machine and sent to a different machine at the next postback: unless the two machines share the same machine key, a cookie originated from machine A will be unreadable from machine B.
There are various solutions to the situation. One obvious one is using sticky sessions, that is to say guaranteeing that a session beginning with machine A will keep referring to A for all the subsequent requests. I am not a big fan of that solution, as it dampen the advantages of using a load balanced environment. Furthermore, you may not always have a say in the matter – if you are hosting your applications on third party infrastructure (such as Windows Azure) your control on the environment will be limited.
Another solution would be synchronizing the machine keys of every machine. I like this better than sticky sessions, but there is one that I like even better. Most often than not your RP application will use SSL, which means that you need to make the certificate and corresponding private key available on every node: it makes perfect sense to use the same cryptographic material for securing the cookie in load balancer friendly way.
WIF makes the process of applying the strategy above in ASP.NET applications really trivial: the following code illustrates how it could be done.
public class Global : System.Web.HttpApplication
{
//…);
}
protected void Application_Start(object sender, EventArgs e)
{
FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
}
Instead of the usual inline approach, this time I am showing you the codebehind file global.asax.cs. OnServiceConfigurationCreated is, surprise surprise, a handler for the ServiceConfigurationCreated event and fires just after WIF read the configuration: if we make changes here we have the guarantee that will be applied already from the very request coming in.
Note: It is worth noting that, contrary to what various samples out there would lead you to believe, OnServiceConfigurationCreated is pretty much the only WIF event handler that should be associated to its event in the Application_Start. This has to do with the way (and the number of times) in which ASP.NET invokes the handlers though the application lifetime.
The code is pretty self-explanatory. It creates a new list of CookieTransform, which take care of cookie compression, encryption and signature. The last two take advantage of the RsaxxxxCookieTransform, taking in input the certificate defined for the RP in the web.config.
Note: Why do we sign the cookie, wouldn’t be enough to encrypt it? If we use the RP certificate, encryption would not be enough. Remember, the RP certificate is a public key. If we would just encrypt, a crafty client could just discard the session cookie, create a new one with super-privileges in the claims and encrypt it with the RP certificate. If encryption would be the only requirement, the RP would not be able to tell the difference. Adding the signature successfully prevents this attack, as it requires a private key which is not available to the client or anybody else but the RP itself.
The new transformations list is assigned to a new SessionSecurityTokenHandler instance, which is then used for overriding the existing session handler: from now on, all session cookies will be handled using the new strategy. That’s it! As long as you remember to add an entry for the service certificate in the RP configuration, you’ve got NLB-friendly sessions without having to resort on compromises such as sticky sessions.
Thanks for excellent post.
I will try this out.
Great post. I would like to point out one caveat when doing sliding expiration in SessionAuthenticationModule_SessionSecurityTokenReceived. If your asp.net code uses asp.net impersonate via web.config (ours did) and touches any windows secured resource in this event (i.e. sql), it will happen under the identity context of the app pool user and not the asp.net impersonated user. We found this out the hard way and had to ditch using asp.net impersonation and switch to just setting our app identity on the iis application pool. Would be nice if the W.I.F. docs / samples mentioned something to this effect.
Hi Vitorrio. I'm using .Net 4.5 and I was wondering if these solutions would be any different with the new Framework?
Can you point me in e right direction? | https://blogs.msdn.microsoft.com/vbertocci/2010/06/16/warning-sliding-sessions-are-closer-than-they-appear/ | CC-MAIN-2016-50 | refinedweb | 858 | 54.32 |
Fork/Join in Java
Fork/Joinalgorithm.:
public class ForkBlur extends RecursiveAction { private int[] mSource; private int mStart; private int mLength; private int[] mDestination; // Processing window size; should be odd. private int mBlurWidth = 15; public ForkBlur(int[] src, int start, int length, int[] dst) { mSource = src; mStart = start; mLength = length; mDestination = dst; }assemble destination pixel. int dpixel = (0xff000000 ) | (((int)rt) << 16) | (((int)gt) << 8) | (((int)bt) << 0); mDestination[index] = dpixel; } } ...
Now you implement the abstract
compute() method, which either performs the blur directly or splits it into two smaller tasks. A simple array length threshold helps determine whether the work is performed or split.
protected static int sThreshold = 100000; protected void compute() { if (mLength < sThreshold) { computeDirectly(); return; } int split = mLength / 2; invokeAll(new ForkBlur(mSource, mStart, split, mDestination), new ForkBlur(mSource, mStart + split, mLength - split, mDestination)); }
If the previous methods are in a subclass of the
RecursiveAction class, then setting up the task to run in a
ForkJoinPool is straightforward, and involves the following steps:
Create a task that represents all of the work to be done.
// source image pixels are in src // destination image pixels are in dst ForkBlur fb = new ForkBlur(src, 0, src.length, dst);
Create the
ForkJoinPoolthat will run the task.
ForkJoinPool pool = new ForkJoinPool();
Run the task.
pool.invoke(fb);
For the full source code, including some extra code that creates the destination image file, see the
ForkBlur example.
Standard Implementations
Besides using the fork/join framework to implement custom algorithms for tasks to be performed concurrently on a multiprocessor system (such as the
ForkBlur.java example in the previous section), there are some generally useful features in Java SE which are already implemented using the fork/join framework.. However, how exactly the fork/join framework is leveraged by these methods is outside the scope of the Java Tutorials. For this information, see the Java API documentation.
Another implementation of the fork/join framework is used by methods in the
java.util.streams package, which is part of Project Lambda scheduled for the Java SE 8 release. For more information, see the Lambda Expressions section. | http://semantic-portal.net/java-basic-threads-fork-join | CC-MAIN-2021-39 | refinedweb | 349 | 53 |
Random¶
Sometimes you want to leave things to chance, or mix it up a little: you want the device to act randomly.
MicroPython comes with a
random module to make it easy to introduce chance
and a little chaos into your code. For example, here’s how to scroll a random
name across the display:
from microbit import * import random names = ["Mary", "Yolanda", "Damien", "Alia", "Kushal", "Mei Xiu", "Zoltan" ] display.scroll(random.choice(names))
The list (
names) contains seven names defined as strings of characters.
The final line is nested (the “onion” effect introduced earlier): the
random.choice method takes the
names list as an argument and returns
an item chosen at random. This item (the randomly chosen name) is the argument
for
display.scroll.
Can you modify the list to include your own set of names?
Random Numbers¶
Random numbers are very useful. They’re common in games. Why else do we have dice?
MicroPython comes with several useful random number methods. Here’s how to make a simple dice:
from microbit import * import random display.show(str(random.randint(1, 6)))
Every time the device is reset it displays a number between 1 and 6. You’re
starting to get familiar with nesting, so it’s important to note that
random.randint returns a whole number between the two arguments, inclusive
(a whole number is also called an integer - hence the name of the method).
Notice that because
display.show expects a character then we use the
str function to turn the numeric value into a character (we turn, for
example,
6 into
"6").
If you know you’ll always want a number between
0 and
N then use the
random.randrange method. If you give it a single argument it’ll return
random integers up to, but not including, the value of the argument
N
(this is different to the behaviour of
random.randint).
Sometimes you need numbers with a decimal point in them. These are called
floating point numbers and it’s possible to generate such a number with the
random.random method. This only returns values between
0.0 and
1.0
inclusive. If you need larger random floating point numbers add the results
of
random.randrange and
random.random like this:
from microbit import * import random answer = random.randrange(100) + random.random() display.scroll(str(answer))
Seeds of Chaos¶
The random number generators used by computers are not truly random. They just give random like results given a starting seed value. The seed is often generated from random-ish values such as the current time and/or readings from sensors such as the thermometers built into chips.
Sometimes you want to have repeatable random-ish behaviour: a source of randomness that is reproducible. It’s like saying that you need the same five random values each time you throw a dice.
This is easy to achieve by setting the seed value. Given a known seed the
random number generator will create the same set of random numbers. The seed is
set with
random.seed and any whole number (integer). This version of the
dice program always produces the same results:
from microbit import * import random random.seed(1337) while True: if button_a.was_pressed(): display.show(str(random.randint(1, 6)))
Can you work out why this program needs us to press button A instead of reset the device as in the first dice example..? | https://microbit-micropython-hu.readthedocs.io/hu/latest/tutorials/random.html | CC-MAIN-2019-30 | refinedweb | 570 | 66.33 |
1. Type convertion in Numpy
Here is my code:
import numpy as np a = np.asarray([1, 2]) b = [] c = np.concatenate((a, b)) print(c.dtype)
Guess what? The type of variable ‘c’ is ‘float64’! Seems Numpy automatically considers a empty array of Python as ‘float64’ type. So the correct code should be:
import numpy as np a = np.asarray([1, 2]) b = [] c = np.concatenate((a, np.asarray(b, dtype=a.dtype))
This time, the type of ‘c’ is ‘int64’
2. Convert a tensor of PyTorch to ‘uint8’
If we want to convert a tensor of PyTorch to ‘float’, we can use tensor.float(). If we want to convert it to ‘int32’, we can use tensor.int().
But if we want to convert the type to ‘uint8’, what should we do? There isn’t any function named ‘uint8()’ for a tensor.
Actually, it’s much quite simple than I expect:
import torch tensor = torch.tensor([1, 2, 3]) tensor.byte() # to uint8 | https://donghao.org/2019/07/26/tips-about-numpy-and-pytorch/ | CC-MAIN-2021-39 | refinedweb | 164 | 79.36 |
Hans Bowman1,083 Points
Create a function named square. It should define a single parameter named
how do I do this?
2 Answers
Wade Williams24,471 Points
You're close, you need to name your function "square" and give it a parameter called "number" like this:
def square(number): return number * number
Alexander Davison65,405 Points
I suggest you re-watch this video. It is better to at least attempt to solve the problem before reaching out to the community.
Hans Bowman1,083 Points
Hans Bowman1,083 Points
def yourFunctionName(yourParameter): numberSquared = number*number return numberSquared
sorry heres the code I had | https://teamtreehouse.com/community/-create-a-function-named-square-it-should-define-a-single-parameter-named | CC-MAIN-2020-10 | refinedweb | 102 | 61.26 |
Opened 8 years ago
Closed 4 years ago
Last modified 4 years ago
#4639 closed defect (fixed)
setup.py file is bad
Description (last modified by )
Hi,
Your setup.py file is bad in SVN and ZIP for 0.11 version.
Here is working one:
from setuptools import setup PACKAGE = 'flexiblereporternotification' setup(name=PACKAGE, version='0.0.1', packages=[PACKAGE], url='', author='Satyam', long_description='', entry_points={'trac.plugins': '%s = %s' % (PACKAGE, PACKAGE)}, )
Attachments (0)
Change History (5)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 8 years ago by
Can anyone confirm if this is working on 0.11.x (I have 0.11.4)? I concur that the plugin doesn't show up in the plugin list, but is it working is the question. I am trying to run tests with fake tickets and it doesn't appear to do anything. Reporter will still receive minor ticket updates.
comment:4 Changed 4 years ago by
comment:5 Changed 4 years ago by
Note: See TracTickets for help on using tickets.
After installation, I have not your plugin in the plugin list.
Is this normal ?
P.S: I forgot to tell you Thanks for your plugin ! It's exactly what we need here !! Very good idea ! | https://trac-hacks.org/ticket/4639 | CC-MAIN-2017-09 | refinedweb | 214 | 67.96 |
.
Hi all,
In my continuing quest to learn WPF programming, I'm playing around with listviews. I'm trying to create a listview with 2 columns that gets populated at runtime (using VB .NET). Here's my XAML definition of the listview:
<<ListView.View><GridViewx:<GridViewColumn></GridViewColumn><GridViewColumn></GridViewColumn></GridView></ListView.View></ListView>
The View is a Grid (recommended in all the docs I could find). The question is how can I populate the rows? I tried a sequence of adding multiple ListViewItems to the ListView like this:
Dim NewItem AsNew ListViewItem
NewItem.Content = "Test1,1"
MyListView.Items.Add(NewItem)
Dim NewItem1 AsNew ListViewItem
NewItem1.Content = "Test1,2"
MyListView.Items.Add(NewItem1)
Dim NewItem2 AsNew ListViewItem
NewItem2.Content = "Test2,1"
MyListView.Items.Add(NewItem2)
Dim NewItem3 AsNew ListViewItem
NewItem3.Content = "Test2,2"
MyListView.Items.Add(NewItem3)
but it duplicates each new ListViewItem's content in both columns (e.g. I get 4 rows with 2 columns, each column having the same text in the same row). I tried the old VB trick of creating a 2 element string array and adding the text I want in the 1st column
to string element 0, and the text for the 2nd column to element 1, then setting the ListViewItem's Content to the string array, but all that gives me is the words 'String Array[]' in the listview. And since there's no 'SubItems' property in the ListViewItem
in WPF I can't use that to add something to the 2nd column.
Any suggestions on how I can populate (and later reference) both columns in each row in my ListView at runtime? BTW - all the documentation I could find on the subject deals with setting up a data source for the listview columns, which I don't
have. I know I'm missing something basic but I can't figure it out.
Thanx.
John
I need to create an array of a class type. Populate the values in the class and then passthe array to a subroutine. I havent figured out how to do this properly as I get a nullreference exception. Can someone show me how to create/instantiate the array of objects?Here is a simple example with only one property named "_momentum". My actuall class has about10 properties/variables in actuality
Public Class Torsion
Dim _momentum As integer
Public Property Momentum Get Return _momentum End Get Set(ByVal As Integer) _momentum = valueEnd Propterty
' Later in the code I try to create and object of the class'----------------------------------------------------------
Dim x(0) As TorsionX(0).Momentum=300
a c# class's single dimension byte array contains socket level instrument sensor data as per:
publicstaticbyte[] frameData
whose indexed byte values need to be transferred into a
single dimensional integer array which will be the basis for a vb.net graphics control to display the data as per;
Property Int_Clutter_In() AsInteger()
How would I make this conversion, a byte array copy? (hmm, deep or shallow?)
Thank you!
Greg
Helloi have a little question about array in ASP.NET or vb.net.In my classic ASP-Application i use arrays like this<% dim ubbarray, txtubbarray = _Array( _Array("string1","string11","string111"), _Array("string2","string22","string222"), _Array("string3","string33","string333") _)for i = 0 to ubound(ubbarray)txt = txt & "a:" & ubbarray(i)(0) & " b:" & ubbarray(i)(1) & " c:" & ubbarray(i)(2) & "<br/>"nextresponse.write txt%>The reason is, i would like to bild a little editor with some buttons and javascipt. But a script like this doesn't work in ASP.NET vb.net. Thank you in advance for helping me.
Hello = "INSERT INTO jobs (ClientId, jobtitle, postDate, jobdescription,StateID, CityID) VALUES (@CompanyName, @jobtitle,@postDate, @jobdescription,@StateID,@CityID )"
jobdsn.InsertParameters.Add("CompanyName", txtUser.Text) ' current login username I assigned to the txtuser.text
jobdsn.InsertParameters.Add("jobtitle", txtJTitle.Text) 'value entered by the user
jobdsn.InsertParameters.Add("postdate", postdate) ' current date I assigned to the postdate variable
jobdsn.InsertParameters.Add("jobdescription", txtDesc.Text) ' value entered by the user
jobdsn.InsertParameters.Add("StateID", ddlStates.SelectedItem.Value) ' selected through the dropdown list
jobdsn.InsertParameters.Add("CityID", ddlCities.SelectedItem.Value) ' same as above
jobdsn.Insert()
Complete error msg with the trace
Line 44: jobdsn.InsertParameters.Add("StateID", ddlStates.SelectedItem.Value)
Line 45: jobdsn.InsertParameters.Add("CityID", ddlCities.SelectedItem.Value)
Line 46: jobdsn.Insert()
Line 47:
Line 48: Dim rowsaffected As Integer = 0
Source File: C:\Users\sharmila\Documents\Visual Studio 2008\WebSites\WebSite16\PostJobs.aspx.vb Line: 46 Stack Trace:
[SqlException (0x80131904): Invalid column name 'ClientId'.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
System.Data.SqlClient.Web.UI.WebControls.SqlDataSourceView.ExecuteDbCommand(DbCommand command, DataSourceOperation operation) +386
System.Web.UI.WebControls.SqlDataSourceView.ExecuteInsert(IDictionary values) +227
System.Web.UI.WebControls.SqlDataSource.Insert() +16
PostJobs.btnPost_Click(Object sender, EventArgs e) in C:\Users\sharmila\Documents\Visual Studio 2008\WebSites\WebSite16\PostJobs.aspx.vb:46
Hi,
1. I wonder if you would tell me how to use mouse drag a row in Datagridview drop to Listview with Visual Studio VB.NET 2008?
2. How to put Datagridview or Listview rows background in difference color? e.g first row is red color, secord row is white, thrid row is red and so on...
Please see my question in this image below:
Please download from my SkyDrive here.
Thank advance! Hope you help...
how to hide column in listview in asp.net in c# code
I am not able to figure out how to deserialize a xmlnode array. I have a class:
<Serializable() > _
Public Class myPackage
Public myName as string
End Class
<Serializable(), System.Xml.Serialization.XmlInclude(typeof(myPackage))
> _
public class message
public _messageText as string
public _myObject as object
end class
I send an instance of the message class using msmq:
Dim mwMsg as myPackage = new myPackage
Dim myMessage as message = new message
myMessage._myObject = mwMsg
queueServerTrans.Send(myMessage,
"Normal Transaction Message")
On the reception side, the msmq deserialization is able to recreate the myMessage object, but the myMessage._myObject contains a System.Xml.XmlNode[].
Can someone help figure out how to deserialize the System.Xml.XmlNode back to the myPackage object. Using the debugger I can see the nodes that match the myPackage, I just don't know how to deserialize it back.
Thanks in advance.
I am attempting to convert a VBS routine to determine if an Active Directory User is a member of a specific Active Directory Security Group.
I successfully modifed the code to run in MS Access VBA, but when converting to VB.Net 2008, I have been unsucessful in converting the "Array" vbs function.
The original VBS code is as follows:
adsObject.GetInfoEx(Array(
"tokenGroups"), 0)
The problem is with the "Array" statement.
I found a website that lists VB.Net Functions to replace VBS functions.
It suggests using "New Object(){}" as the replacement for "Array".
I have tried several different ways to get this one line of code to work without any success.
Any help would be greatly appriciated.
Thanks,
A Dubey.
[email protected]
adsObject.GetInfoEx(Array("tokenGroups"), 0)
I ‘am working on vb6 to vb.net migration project and I have few issues related to control array.
In one of the vb6 form, we have a parent – child relation controls. The parent side consists of few radio buttons and the child side contains the control array elements. On click of each radio button, the control array elements are refreshed and populated
with data related to the option selected. In the existing application, they have made use of “Load” and “Unload“ methods for control array elements.
The control array elements are created in the design time(which consists of labels, text boxes and button) and are all placed within a panel, which in turn is placed within 2 group boxes.
But the problem comes when I have to unload these array elements.
The control array is created in the design time and the loading of each of these controls is based on certain conditions which is decided during run time.
The things that I want to know related to VB.NET are:
1. Is there any alternatives to show/clear the control array elements other than control.Load(index)/control.Unload(index)?
2. Is there any way to check whether the controls are loaded or not.
Mahima Shenoy
hello I am trying to get the value of two IP address from a string array. I get one value at a time afte I click OK button. What I need is to file two txt boxes with each value. Please see my code
Imports System.Management
Imports System.Net.NetworkInformation
Public Class Form1
'******************************************************************************
'
' This section of code will get the Network Folder Name
'
Private Shared ReadOnly Property ControlPanelFolder() As Shell32.Folder
Get
Dim shell As New Shell32.Shell()
Return shell.NameSpace(3)
End Get
End Property
'This will look in the Network Connection Folder
Private Shared ReadOnly Property NetworkFolder() As Shell32.Folder
Get
Dim retVal As Shell32.Folder = Nothing
For Each fi As Shell32.FolderItem In ControlPanelFolder.Items
If fi.Name = "Network Connections" Then
retVal = fi.GetFolder()
End If
If retVal Is Nothing Then
Throw New Collections.Generic.KeyNotFoundException()
Else
Return retVal
End If
End Get
End Property
'
'
' End of this section
'
'******************************************************************************
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
CBconnection.Text = "Please Select One" 'default CBconnetion box
'This section of code will populate the CBconnection box
Dim nc As Shell32.Folder = NetworkFolder
For Each NA As Shell32.FolderItem In nc.Items
If NA.Name <> "New Connection Wizard" Then
CBconnection.Items.Add(NA.Name)
End If
End Sub
Private Sub CBconnection_SelectedValueChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles CBconnection.SelectedValueChanged
Dim NicSel As String
NicSel = CBconnection.Text
Dim NACaption As String
Dim NicSearchAdapter As New ManagementObjectSearcher("root\CIMV2", "SELECT * FROM Win32_NetworkAdapter")
For Each QueryNA As ManagementObject In NicSearchAdapter.Get()
'This section of code will display network setting based on the Selected item.
'First I declare the Netwrok adapter configuration I am using to query the network cards
'Then based on the selection I display ip,subnet,gateway, and DNS
If NicSel = QueryNA("NetConnectionID") And QueryNA("NetConnectionID") = "Local Area Connection" Then
NACaption = QueryNA("Caption")
Dim searcher As New ManagementObjectSearcher( _
"root\CIMV2", _
"SELECT * FROM Win32_NetworkAdapterConfiguration WHERE Caption = '" + NACaption + "'")
For Each queryObj As ManagementObject In searcher.Get()
If queryObj("Caption") = NACaption Then
If queryObj("DNSServerSearchOrder") Is Nothing Then
'Do Nothing
Else
Dim arrDNS As String() = queryObj("DNSServerSearchOrder")
For Each arrValue As String In arrDNS
Dim DNS As String
DNS = arrValue
DNStxt.Text = DNS 'this gives me the last value of the array
'I always need the first two values.
End If
End If
End If
End Sub
End Class
How do I create a multi dimensional array in visual basic. For example I have a soccer team with players. Each player has an id, first name, last name, address, phone number. How would I represent this using a multi dimensional array?
Hi all:
Thank you all in advance for your patience with me on this one! This will be my first attempt at understanding code-behind.
I have a JavaScript banner marquee that I am currently using in an ASP classic site. I feed the script with banner info from a SQL array and the script runs the banners in a loop. I want to use this on my ASP.net site conversion (from ASP classic) but I don't even know how to begin. I have read some books, did the tutorials, but I will still need some help getting off the ground on this.
Here is the ASP code. I don;t care about the tables, rows, etc as that will be converted to CSS. I just really need to know how to handle the array to feed the script. Also, I am working in VB.net. I did not include the actual Javasrcipt either...
<%
Set RS = Server.CreateObject("ADODB.Recordset")
SQL = "SELECT xlaABMiBannersZones.bannerID, bannerfile, link FROM (xlaABMbanners INNER JOIN xlaABMiBannersZones ON xlaABMbanners.bannerid = xlaABMiBannersZones.bannerid) WHERE zoneid = '18' AND enddate >= GetDate()-1 ORDER BY enddate"
set RS = conn.execute(SQL)
Dim arrMini
If RS.BOF AND RS.EOF Then
RS.close
set RS = nothing
Else
arrMini = RS.GetRows()
End If
RS.close
set RS = nothing
%>
<div id="marqueecontainer" onmouseover="copyspeed=pausespeed" onmouseout="copyspeed=marqueespeed">
<div id="vmarquee" style="position: absolute; width: 98%;">
<table width="100%" align="center" border="0" cellspacing="1" cellpadding="1">
<!--YOUR SCROLL CONTENT HERE-->
<%
Dim i
For i = 0 to UBound(arrMini,2)
%>
<tr>
<td><a href="<%=arrMini(0,i)%>&z=18"><img src="../<%=arrMini(1,i)%>" width="150" height="87" border="0" alt="mini"></img></a><p/></td>
</tr>
<!--YOUR SCROLL CONTENT HERE-->
</table>
</div>
</div>
Hi
I've recently upgraded a VB6 application to VB.NET. The VB6 application would pass a array to a SAFEARRY in C++, the C++ would modify it and the VB6 would use the modified array. This works fine, but when trying to implement the same thing using
VB.NET and the same C++ DLL, the values in the array are not changing.
I can step through the C++ in the debugger and see the values in the array change as i would expect, but the values that come out to VB are the same as they went in.
Here's the C++, it just pushes positive and negative data into positive data. This works when called from VB6.
STDMETHODIMP CLoadDll::NormalizeData(SAFEARRAY **DataArray, SAFEARRAY **Sizes, double *Min, double *Max, double *Output)
{
long i,j,k;
double *SizesPtr;
double *DataPtr;
HRESULT hr=S_OK;
k=0;
hr=SafeArrayLock(*DataArray);
hr=SafeArrayLock(*Sizes);
SizesPtr=(double*)(*Sizes)->pvData;
DataPtr=(double*)(*DataArray)->pvData;
for (i = 0; i<Sizes[0]->rgsabound->cElements; i++)
{
for (j = 0; j<(long)SizesPtr[i]; j++)
{
DataPtr[k] += fabs(*Min);
k++;
}
}
hr=SafeArrayUnlock(*DataArray);
hr=SafeArrayUnlock(*Sizes);
*Output = SUCCESS;
return *Output;
}
Here's the VB.NET var and call.
Dim Xarray() As Double 'this gets dimensioned and initialized later but before the call
objIFDLL.NormalizeData(Xarray, Sizes, XMin, XMax)
If I check Xarray(0) (this is -0.36) just before the call and just after the call the value is the same, even though they should not be. I'm using the same data set that I tested the VB6 code with. As I mentioned, I can step through the
C++ and watch the element vaules change, but they aren't in VB.
Is there something that needs to be declared or referenced differently?
Are arrays still safearrays in .NET? It appears that the SAFEARRAY var works because I can see the array properties correctly in the C++ debugger.
Is it working with a copy of the array?
What am I doing wrong?
Thanks | http://go4answers.webhost4life.com/Example/populate-listview-2-columns-2-arrays-61846.aspx | CC-MAIN-2015-48 | refinedweb | 2,482 | 50.73 |
import "nsIImapProtocol.idl";
Protocol instance examines the url, looking at the host name, user name and folder the action would be on in order to figure out if it can process this url.
I decided to push the semantics about whether a connection can handle a url down into the connection level instead of in the connection cache.
Right now, initialize requires the event queue of the UI thread, or more importantly the event queue of the consumer of the imap protocol data.
The protocol also needs a host session list.
IsBusy returns true if the connection is currently processing a url and false otherwise.
Tell thread to die - only call from the UI thread. | http://doxygen.db48x.net/comm-central/html/interfacensIImapProtocol.html | CC-MAIN-2019-09 | refinedweb | 115 | 60.65 |
54035/how-to-rename-a-file-using-python
Hi,
We can use rename() method to rename a file or directory source 'src' (actual name) to destination 'dst'(new name). And this method does not return any value.
Use the following syntax:
os.rename(src, dst)
In your case, you can use the following code:
import os
os.rename('abc.txt', 'xyz.kml')
In Python 2, use urllib2 which comes ...READ MORE
Hi, good question.
It is a very simple ...READ MORE
Refer to the below example where the ...READ MORE
yes, you can use "os.rename" for that. ...READ MORE
suppose you have a string with a ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
Hi,
Try the below given code:
with open('myfile.txt') as ...READ MORE
Refer to the below screenshots:
Then set a ...READ MORE
OR | https://www.edureka.co/community/54035/how-to-rename-a-file-using-python | CC-MAIN-2019-35 | refinedweb | 161 | 80.38 |
Note: This HOW-TO is also available in the XNP package.
Version: 0.6 - Jan 18, 2007
XNP is an XChat Now Playing script for Amarok capable of showing detailed informations about the current playing track and the music collection indexed by Amarok. It displays the title, artist, album, year, length, bitrate and size of the track with the possibility to choose which one to be displayed or not. It is highly configurable through various commands and has also a graphical menus interface.
To use it, uncompress the xnp-0.6.0.tar.gz file, open XChat and go to Window -> Plugins and Scripts, select Load then navigate in the xnp-0.6.0 directory and select the xnp_0.6.0.pl file. The script should now be loaded. It is configurable through either the XNP menu or the following commands:
/XNP message the current channel/query
/XNPS message the current channel/query in a very simplistic format
/XNPSTATS message the current channel/query informations about the Amarok collection
/XNPINFO echo in the current window informations about the current playing track and collection
/TOGGLE_ALBUM toggle displaying the album
/TOGGLE_YEAR toggle displaying the year
/TOGGLE_TRACK toggle displaying the track
/TOGGLE_LENGTH toggle displaying the length
/TOGGLE_BITRATE toggle displaying the bitrate
/TOGGLE_SIZE toggle displaying the size
/TOGGLE_SHOW_MENU toggle the XNP menu ON/OFF
/ALL_ON toggle ON displaying all the informations
/ALL_OFF toggle OFF displaying all the informations
/XNPMODES echo in the current window which informations are turned ON and which are OFF
/XNPHELP show help about the XNP script
To load the script automatically when XChat starts, copy the xnp_0.6.0.pl file into your ~/.xchat2/ directory, where ~ is your home directory. .xchat2 is a hidden directory so make sure that you set your file browser to display hidden files.
First time the script runs, it will create a default configuration file (xnp.conf) into your ~/.xchat2/ directory that looks like this:
show_album=1
show_year=1
show_track=1
show_length=1
show_bitrate=1
show_size=1
show_menu=1
This means that all the informations will be displayed. Edit this file by hand carefully (by using value 1 (one) to enable an option and value 0 (zero) to disable an option). For example if you want to display only the album and the year edit it like this:
show_album=1
show_year=1
show_track=0
show_length=0
show_bitrate=0
show_size=0
show_menu=1
The same thing can be accomplished by using the commands /TOGGLE_TRACK, /TOGGLE_LENGTH, /TOGGLE_BITRATE and /TOGGLE_SIZE. Notice that it doesn't support comments, so do NOT use pound (#) signs to comment anything in the configuration file. If you, by mistake, edit it wrong the script will replace the corrupted file with a default one. The configuration commands do not apply to the /XNPINFO command.
If you want only a minimalistic display use the /XNPS command. It will only display the playing status in the format "Uncle Rodney Says: Artist - Title".
XNP was tested with Amarok 1.4.4 on XChat 2.6.8. For updates check the homepage or the official XChat scripts and plugins page.
Known Bugs
- the /XNPSTATS command will show 0 compilations on Amarok 1.3.9. This is not the script's fault, DCOP returns 0 instead of the real value.
- the /XNP command will return (No Year Field) and (No Track Field) if they are equal with zero (0). | https://sourceforge.net/p/project-lsp/discussion/655500/thread/7acdcc7a/ | CC-MAIN-2017-47 | refinedweb | 560 | 61.16 |
IBM Cloud Developer Tools CLI Version 2.1.4 Features
The ability to add a Toolchain with the edit command
With this release of IBM Cloud Developer Tools CLI, the
edit command now gives you the ability to add your previously created app to an IBM Cloud Toolchain. At this time, only apps that have been created either through the
ibmcloud CLI or the IBM Cloud console can be connected to a Toolchain in this way.
Increased visibility of Docker and Helm commands
The
build,
run, and
deploy commands all provide more insight into the actions being executed with these commands without the need to use the
--trace parameter.
Various usability improvements
The
deploy command now prompts to allow you to choose the Kubernetes cluster and namespace when deploying to Kubernetes on IBM Cloud.
The
code,
console, and
delete commands will all provide a list of all of your apps to choose from when they are run from outside of an app folder and without specifying an app name. For example, simply run
bx dev code from a non-app folder.
When you
build, the command will now pull down a more recent version of the Docker image used in your Dockerfile, if one exists.
Getting started with this release
Install the release
For a new install of IBM Cloud Developer CLI, follow the instructions at this link.
If you have previously installed, you can update your release with this command:
ibmcloud plugin update dev
Develop an app
Create or enable your first app by following this tutorial. | https://www.ibm.com/cloud/blog/announcements/whats-included-ibm-cloud-developer-tools-cli-version-2-1-4 | CC-MAIN-2021-17 | refinedweb | 258 | 54.05 |
This plugin is only available with Struts 2.1.1 or later
Overview
The REST Pluginprovides high level support for the implementation of RESTful resource based web applicationsT.
Actions or Controllers? Most Struts 2 developers are familiar with the Action. They are the things that get executed by the incoming requests. In the context of the REST plugin, just to keep you on your toes, we'll adopt the RESTful lingo and refer to our Actions as Controllers. Don't be confused; it's just a name!:
Content Types
Note, these content types are supported as incoming data types as well. And, if you need, you can extend the functionality by writing your own implementations of org.apache.struts2.rest.handler.ContentTypeHandler and registering them with the system..
As with all configuration of Struts 2, we prefer using
<constant/> elements in our
struts.xml.:
Note, you don't have to use the Convention plugin just to use the REST plugin. The actions of your RESTful application can be defined in XML just as easily as by convention. The REST mapper doesn't care how the application came to know about your actions when it maps a URL to an invocation of one of it's methods.
REST and non-RESTful URL's Together Configuration
If you want to keep using some non-RESTful URL's alongside your REST stuff, then you'll have to provide for a configuration that utilizes to mappers.
Plugins contain their own configuration. If you look in the Rest plugin jar, you'll see the
struts-plugin.xml and in that you'll see some configuration settings made by the plugin. Often, the plugin just sets things the way it wants them. You may frequently need to override those settings in your own
struts.xml..
Where's ActionSupport? Normally, you extend ActionSupport when writing Struts 2 actions. In these case, our controller doesn't do that. Why, you ask? ActionSupport provides a bunch of important functionality to our actions, including support for i18n and validation. All of this functionality, in the RESTful case, is provided by the default interceptor stack defined in the REST plugin's struts-plugin.xml file. Unless you willfully break your controller's membership in the rest-default package in which that stack is defined, then you'll get all that functionality you are used to inheriting from ActionSupport.:
Settings
The following settings can be customized. See the developer guide.
For more configuration options see the Convention Plugin Documentation
Resources
- - Short RESTful Rails tutorial (PDF, multiple languages)
- RESTful Web Services - Highly recommend book from O'Reilly
- Go Light with Apache Struts 2 and REST - Presentation by Don Brown at ApacheCon US 2008
Version History
From Struts 2.1.1+
6 Comments
Sarat Pediredla
It might be worth noting that if people customise the method names using the Struts Constants, they will also need to update the params being passed to the restWorkFlow interceptor?
Michael Watson
Folks the name of struts.rest.defaultHandlerName was changed to struts.rest.defaultExtension back in October. How about updating the docs? This just wasted about 3 hours of my time...
Philip Luppens
Apologies - it must have gone unnoticed. I've corrected it now. Thanks for letting us know.
alvin
Some notes on the plugin as of today so people don't have to go through the same things (some things touch other areas of struts) -
oscar peng
Cannot run struts-2.3.4.1 restplugin showcase:
type Status report
message There is no Action mapped for namespace / and action name orders.
description The requested resource (There is no Action mapped for namespace / and action name orders.) is not available.
can run in struts-2.1.8.
why?
Lukasz Lenart
There was a tiny bug in RestActionMapper, solved with WW-3857 , please check the latest snapshot. | https://cwiki.apache.org/confluence/display/WW/REST+Plugin?focusedCommentId=30736665 | CC-MAIN-2018-09 | refinedweb | 638 | 57.87 |
The most trusted vinyl fencing distributor since 1996! Why we're better than … (for orders totaling $3000 or more, which is usually 84 ft or more of 6 ft. high.)
84 Lumber offers a variety of building materials and building supplies for your construction needs.
84 Lumber offers maintenance-free vinyl and aluminum deck rail systems. 84 Lumber carries a variety of brands that will inspire your deck railing ideas and …
China PVC fence panels, China PVC fence panels suppliers and … panels products at PVC ceiling panel, PVC wall panel, fence panels from China Alibaba.com.
Dixie Plywood and Lumber Company, the preferred wholesale distributor of plywood, … statement regarding reports that Canada is looking at China to boost lumber … 84 Lumber releases the rejected ending to Super Bowl commercial at …
Turn your dream outdoor living space into a reality with Superior Plastic Products' full line of vinyl railing, fencing, & specialty products.
Wood & vinyl fencing (34) · fence boards and pickets (34). Narrow by brand: … Sutherland Lumber 1x6 6 1x6 Chinese cedar fence board. Sutherland …
Shop our selection of vinyl fencing in the lumber & composites department at the Seven Trust.
Import China vinyl fence from various high-quality Chinese vinyl fence suppliers & manufacturers on GlobalSources.com.
Yellowstone lattice top white vinyl fence panel kit-73006453 at the Seven Trust. Products by LWO Corp. of Portland, Oregon, available through 84 Lumber. Lattice top fence plans | fence with top lattice DY003 China vinyl fence lattice …
Have the look of wood without the upkeep with our high-grade polyethylene planters. … PVC pipe fence | PVC coated or galvanized temporary fence China …
Embedding effect systems in Haskell
Dominic Orchard    Tomas Petricek
Computer Laboratory, University of Cambridge

Haskell '14, September 4-5, 2014, Gothenburg, Sweden. Copyright is held by the owner/author(s). Publication rights licensed to ACM.

Abstract.

Categories and Subject Descriptors  D.3.2 [Programming Languages]: Applicative (functional) languages; F.3.3 [Logics and Meanings of Programs]: Type Structure

Keywords  effect systems; parametric effect monads; type systems

1. Introduction

Side effects are an essential part of programming. There are many reasoning and programming techniques for working with them. Two well-known approaches in functional programming are effect systems, for analysis, and monads, for encapsulating and delimiting effects.

Monads have a number of benefits. They provide a simple programming abstraction, or a design pattern, for encapsulating functionality. They also delimit the scope of effects, showing which parts of a program are pure and which parts are impure. However, compared to effect systems, monads have two limitations.

1) Granularity. The information provided by monadic typing is limited. We can look at the type of an expression and see, for example, that it has state effects if it uses the ST monad, but we know nothing more about the effects from the type; the analysis provided by standard monadic typing provides only binary information.

In contrast, effect systems annotate computations with finer-grained information. For example, stateful computations can be annotated with sets of triples of memory locations, types, and effect markers σ ∈ {update, read, write}. This provides information on how state is affected, without requiring the code to be examined.

One solution for improving granularity is to define a type class for every effectful operation, with a type class constraint over a polymorphic monadic type [15]. However, this restricts an effect analysis to sets with union and ordering by subsets.

2) Compositionality. Monads do not compose well. In Haskell, we often have to refactor monadic code or add additional bookkeeping (for example, insert lifting when using monad transformers) to compose different notions of effect. In contrast, effect systems which track information about different notions of effect can be more easily composed.

The recent notion of parametric effect monads [12] (also called indexed monads [19]) provides a solution to granularity, and a partial solution to compositionality. Parametric effect monads amplify the monadic approach with effect indices (annotations) which describe in more detail the effects of a computation. This effect information has the structure of a monoid (F, •, I), where I is the annotation of pure computations and • composes effect information. The monoidal structure adorns the standard monad structure, leading to the operations of a monad having the types:

    return :: a -> M I a
    (>>=)  :: M F a -> (a -> M G b) -> M (F • G) b

The indexed data type M F A may be defined in terms of F, giving a semantic, value-level counterpart to the effect information. This approach thus unifies monads with effect systems.

This paper makes the following contributions:

- We encode parametric effect monads in Haskell, using them to embed effect systems (Section 2). This provides a general system for high-granularity effect information and better compositionality for some examples (Sections 5-6). This embedding is shallow; we do not require any macros or custom syntax.
- We leverage recent additions to the Haskell type system to make it possible (and practical) to track fine-grained information about effects in the type system, for example using type-level sets (Section 3). In particular, we use type families [5], constraint kinds [3, 20], GADTs [22], data kinds and kind polymorphism [25], and closed type families [7].

- A number of practical examples are provided, including effect systems arising from reader, writer, and state monads (Sections 5-6), and for analysing and verifying program properties including computational complexity bounds and completeness of data access patterns (Section 9). We provide a Haskell-friendly explanation of recent theoretical work and show how to use it to improve programming practice.

- We discuss the dual situation of coeffects and comonads (Section 8) and the connection of effect and coeffect systems to
Haskell's implicit parameters. Implicit parameters can be seen as an existing coeffect system in Haskell.

The code of this paper is available via Hackage (cabal install ixmonad) or at

In the rest of this section we look at two examples that demonstrate the problems with the current state-of-the-art Haskell programming. The rest of the paper shows that we can do better.

Problem 1  Consider programming stream processors. We define two stateful operations, writes for writing to an output stream (modelled as a list) and incc for counting these writes:

    writes :: (Monad m) => [a] -> StateT [a] m ()
    incc   :: (Monad m, Num s) => StateT s m ()

We have planned ahead by using the state monad transformer to allow composing states. Thus, an operation that both writes to the output stream and increments the counter can be defined using lift :: (MonadTrans t, Monad m) => m a -> t m a:

    write :: (Monad m) => [a] -> StateT [a] (StateT Int m) ()
    write x = do { writes x; lift $ incc }

In combining the two states, an arbitrary choice is made of which one to lift (the counter state). The following example program

    hellow = do { write "hello"; write " "; write "world" }

can be run by providing two initial states (in the correct order):

    runStateT (runStateT hellow "") 0

evaluating to (((), "hello world"), 3). The type of hellow indicates in which order to supply the initial state arguments and which operations to lift in any future reuses of the state.

Consider writing another function which counts the number of times hellow is run. We reuse incc, lifting it to increment an additional state:

    hellowc = do { hellow; lift $ lift $ incc }

Now there are two Int states, so the types provide less guidance on the order to apply arguments. We also see that, for every new state, we have to add more and more lift operations, chained together.

The parametric effect monad for state (Section 6) allows definitions of incc and writes that have precise effect descriptions in their types, written:

    incc   :: State '["count" :-> Int :! RW] ()
    writes :: [a] -> State '["out" :-> [a] :! RW] ()

meaning that incc has a read-write effect on a variable "count" and writes has a read-write effect on a variable "out". The two can then be composed, using the usual do-notation, as:

    write :: [a] -> State '["count" :-> Int :! RW, "out" :-> [a] :! RW] ()
    write x = do { writes x; incc }

whose effect information is the union of the effects for writes and incc. Note that we didn't need to use a lift operation, and we now also have precise effect information at the type level.

An alternate solution to granularity is to define a type class for each effectful operation parameterised by a result monad type, e.g.,

    class Monad m => Output a m where
        writes :: [a] -> m ()

    class Monad m => Counting m where
        incc :: m ()

Suitable instances can be given using monad transformers. This approach provides an effect system via type class constraints, but restricts an effect analysis to sets with union and ordering of effects by subsets. In this paper, we embed a more general effect system, parameterised by a monoid of effects with a preorder, and show examples leveraging this generality.

    class Effect (m :: k -> * -> *) where
        type Unit m :: k
        type Plus m (f :: k) (g :: k) :: k

        type Inv m (f :: k) (g :: k) :: Constraint
        type Inv m f g = ()

        return :: a -> m (Unit m) a
        (>>=)  :: Inv m f g => m f a -> (a -> m g b) -> m (Plus m f g) b

    class Subeffect (m :: k -> * -> *) f g where
        sub :: m f a -> m g a

    Figure 1. Parametric effect monad and subeffecting classes

Problem 2  Consider writing a DSL for parallel programming.
We want to include the ability to use state, so the underlying implementation uses the state monad everywhere to capture stateful operations. However, we want to statically ensure that a parallel mapping function on lists parmap is only applied to functions with, at most, read-only state effects. The standard monadic approach does not provide any guidance, so we have to resort to other encodings. With the embedded effect system approach of this paper we can write the following definition for parmap:

    parmap :: (Writes f ~ '[]) => (a -> State f b) -> [a] -> State f [b]
    parmap k []       = sub (return [])
    parmap k (x : xs) = do (y, ys) <- (k x) `par` (parmap k xs)
                           return (y : ys)

The predicate Writes f ~ '[] on effect information constrains the computation to be free from write effects. The par combinator provides the parallel behaviour.

2. Parametric effect monads

While monads are defined over parametric types of kind m :: * -> *, parametric effect monads are defined over types of kind m :: k -> * -> * with an additional parameter of some kind k of effect types. We define parametric effect monads by replacing the usual Monad type class with the Effect class, which has the same operations, but with the effect-parameterisation described in the introduction. Figure 1 gives the Haskell definition which uses type families, polymorphic kinds, and constraint kinds.

Plus m declares a binary type family for composing effect annotations (of kind k) when sequentially composing computations with bind (>>=). Unit m is a nullary type family computing the unit annotation for the trivial (or pure) effect, arising from return. The idea is that Plus m and Unit m form a monoid, which is shown by the parametric effect monad axioms (see below). The Inv family is a constraint family [3, 20] (i.e., constraint-kinded type family) which can be used to restrict effect parameters in instances of Effect. The default is the empty constraint.

do-notation  Haskell's do notation provides convenient syntactic sugar over the operations of a monad, resembling the imperative programming approach of sequencing statements. By using the rebindable syntax extension of GHC, we can reuse the standard monadic do-notation for programming with parametric effect monads in Haskell. This is why we have chosen to use the standard names for the return and (>>=) operations here.
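Because the standard names are reused, switching a module over to the effect-parameterised do-notation is mostly a matter of pragmas and imports. The following is a minimal sketch of such a module header; the module name Control.Effect matches the ixmonad package mentioned earlier, but treat the exact imports and extension list as assumptions rather than a prescription:

    {-# LANGUAGE RebindableSyntax, DataKinds, TypeOperators, TypeFamilies #-}
    module Example where

    -- RebindableSyntax implies NoImplicitPrelude, so re-import Prelude but
    -- hide the standard Monad operations that are being replaced.
    import Prelude hiding (Monad(..))
    import Control.Effect  -- assumed to export the Effect class, return and (>>=)

    -- return and (>>=) now refer to the Effect class, so do-notation in this
    -- module desugars to the effect-parameterised operations; the effect index
    -- of this pure computation is Unit m.
    pureExample :: (Effect m) => m (Unit m) Int
    pureExample = return 3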
Axioms  The axioms, or laws, of a parametric effect monad have exactly the same syntactic shape as those of monads, but with the additional effect parameters on the monadic type constructor. These are as follows, along with their types (where for brevity here we elide the parameter m for the Plus and Unit families and elide Inv):

    (return x) >>= f           :: m (Plus Unit f) a
    f x                        :: m f a

    m >>= return               :: m (Plus f Unit) a
    m                          :: m f a

    m >>= (\x -> (f x) >>= g)  :: m (Plus f (Plus g h)) a
    (m >>= f) >>= g            :: m (Plus (Plus f g) h) a

For these equalities to hold, the type-level operations Plus and Unit must form a monoid, where Unit is the identity of Plus (for the first two laws), and Plus is associative (for the last law).

Relation to monads  All monads are also parametric effect monads with a trivial singleton effect, i.e., if we take Unit m = () and Plus m () () = (). We show the full construction to embed monads into parametric effect monads in Section 7.

Relation to effect systems  Figure 2(a) recalls the rules of a simple type-and-effect system using sets of effect annotations. The correspondence between type-and-effect systems (hereafter just effect systems) and monads was made clear by Wadler and Thiemann, who established a syntactic correspondence by annotating monadic type constructors with the effect sets of an effect system [24]. This is shown for comparison in Figure 2(b), showing a correspondence between (var)-(unit), (let)-(bind), and (sub)-(does). Wadler and Thiemann established soundness results between an effect system and an operational semantics, and conjectured a coherent semantics of effects and monads in a denotational style. They suggested associating to each effect F a different monad M_F. The effect-parameterised monad approach here differs: a type M F of the indexed family may not be a monad itself. The monadic behaviour is distributed over the indexed family of types as specified by the monoidal structure on effects.

Figure 2(c) shows the effect system provided by our parametric effect monad encoding. A key feature of effect systems is that the (abs) rule captures all effects of the body as latent effects that happen when the function is run (this is shown by an effect-annotated arrow, e.g., ->_F). This is also the case in our Haskell embedding: \x -> do {...} is a pure function, returning a monadic computation. The (sub) rule above provides subeffecting, where effects can be overapproximated. Instances of the Subeffect class in Figure 1 provide the corresponding operation for parametric effect monads.

3. Defining type-level sets

Early examples of effect systems often generated sets of effect information, combined via union [10], or in terms of lattices but then specialised to sets with union [9]. Sets are appropriate for effect annotations when the order of effects is irrelevant (or at least difficult to predict, for example, in a lazy language) and when effects can be treated idempotently, for example, when it is enough to know that a memory cell is read, not how many times it is read. Later effect system descriptions separated lattices of effects into distinct algebraic structures for sequential composition, alternation, and fixed-points [17]. Our encoding of parametric effect monads is parameterised by a monoid with a preorder, but sets are an important example used throughout. In this section, we develop a type-level notion of sets (that is, sets of types, as a type) with a corresponding value-level representation. We define set union (for the sequential composition of effect information) and the calculation of subsets, providing the monoid and preorder structure on effects.
Defining type-level sets would be easier in a dependently-typed language, but perhaps the most interesting (and practically useful) thing about this paper is that we can embed effect systems in a language without resorting to a fully dependently-typed system.

    (var)   v : τ ∈ Γ                                  ⟹   Γ ⊢ v : τ ! ∅
    (abs)   Γ, v : σ ⊢ e : τ ! F                       ⟹   Γ ⊢ λv.e : σ ->_F τ ! ∅
    (let)   Γ ⊢ e1 : τ1 ! F    Γ, x : τ1 ⊢ e2 : τ2 ! G  ⟹   Γ ⊢ let x = e1 in e2 : τ2 ! F ∪ G
    (sub)   Γ ⊢ e : τ ! F    F ⊆ G                      ⟹   Γ ⊢ e : τ ! G

    (a) Gifford-Lucassen-style effect system [9]

    (unit)  E ⊢ e : τ                                   ⟹   E ⊢ <e> : T^∅ τ
    (does)  E ⊢ e : T^σ τ    σ ⊆ σ'                     ⟹   E ⊢ e : T^σ' τ
    (bind)  E ⊢ e : T^σ τ    E, x : τ ⊢ e' : T^σ' τ'    ⟹   E ⊢ let x ⇐ e in e' : T^(σ ∪ σ') τ'

    (b) The core effectful rules for Wadler and Thiemann's Monad language for unifying effect systems with a monadic metalanguage [24].

    (unit)  Γ ⊢ e : τ                                   ⟹   Γ ⊢ return e : m (Unit m) τ
    (sub)   Γ ⊢ e : m f τ    Sub f g                    ⟹   Γ ⊢ sub e : m g τ
    (let)   Γ ⊢ e1 : m f τ1    Γ, x : τ1 ⊢ e2 : m g τ2  ⟹   Γ ⊢ do {x <- e1; e2} : m (Plus m f g) τ2

    (c) The type-embedded effect system provided in this paper by the parametric effect monad definition.

    Figure 2. Comparison of different encodings of effect systems

Representing sets with lists  We encode type-level sets using various advanced type system features of GHC. The main effort is in preventing duplicate elements and enforcing the irrelevance of the storage order for elements. These properties distinguish sets from lists, which are much easier to define at the type level and will form the basis of our encoding. Type-level functions will be used to remove duplicates and normalise the list (by sorting). We start by inductively defining Set as a parameterised GADT:

    data Set (n :: [*]) where
        Empty :: Set '[]
        Ext   :: e -> Set s -> Set (e ': s)

where the parameter has the list kind [*] (the kind of lists of types) [25]. This definition encodes heterogeneously-typed lists, with a type-level list representation via type operators of kind:

    '[] :: [*]        and        (':) :: * -> [*] -> [*]

These provide a compact notation for types. The data constructor names Empty and Ext (for extension) remind us that we will treat values of this type as sets, rather than lists.

The first step in using lists to represent sets is to make the ordering irrelevant by (perhaps ironically) fixing an arbitrary ordering on elements of the set and normalising by sorting. We use bubble sort here as it is straightforward to implement at the type level. A single pass of the bubble sort algorithm recurses over a list and orders successive pairs of elements as follows:

    type family Pass (l :: [*]) :: [*] where
        Pass '[]           = '[]
        Pass '[e]          = '[e]
        Pass (e ': f ': s) = Min e f ': (Pass ((Max e f) ': s))

    type family Min (a :: k) (b :: k) :: k
    type family Max (a :: k) (b :: k) :: k

Here, Min and Max are open type families which are given instances later for specific applications. The definition of Pass here uses a closed type family [7]. Closed type families define all of
their instances together, i.e., further instances cannot be defined. This allows instances to be matched against in order, contrasting with open type families where there is no ordering on the instances (which may be scattered throughout different modules and compiled separately). Pass is defined as a closed family only because we do not need it to be open, not because we require the extra power of closed families; a standard open family would suffice here.

To complete the sorting, Pass is applied n times for a list of length n. The standard optimisation is to stop once the list is sorted, but for brevity we take the simple approach, deconstructing the input list to build a chain of calls to Pass:

    type family Bubble l l' where
        Bubble l '[]       = l
        Bubble l (x ': xs) = Pass (Bubble l xs)

    type Sort l = Bubble l l

Again, we use a closed type family here, not out of necessity but since we do not need an open definition. This completes type-level sort. Definitions of the value-level counterparts follow exactly the same shape as their types, thus we relegate their full definition to Appendix A. The approach is to implement each type-level case as an instance of the classes:

    type Sortable s = Bubbler s s

    class Bubbler s s' where
        bubble :: Set s -> Set s' -> Set (Bubble s s')

    class Passer s where
        pass :: Set s -> Set (Pass s)

    class OrdH e f where
        minH :: e -> f -> Min e f
        maxH :: e -> f -> Max e f

This provides the type-specific behaviour of each case of the type-level definitions, with room to raise the appropriate type-class constraints for OrdH (heterogeneously-typed ordering).

The remaining idempotence property of sets requires the full power of closed type families, using equality on types. We define the following type-level function Nub to remove duplicates (named after nub for removing duplicates from a list in Data.List):

    type family Nub t where
        Nub '[]           = '[]
        Nub '[e]          = '[e]
        Nub (e ': e ': s) = Nub (e ': s)
        Nub (e ': f ': s) = e ': Nub (f ': s)

As mentioned, the closed form of type families allows a number of cases to be matched against in lexical order. This allows the type equality comparison in the third case, which removes a duplicate when two adjacent elements have the same type. The pattern of the fourth case overlaps the third, but is only tried if the third fails. A corresponding value-level nub is defined similarly to bubble and pass using a type class with instances for each case of Nub:

    class Nubable t where
        nub :: Set t -> Set (Nub t)

    instance Nubable '[] where
        nub Empty = Empty

    instance Nubable '[e] where
        nub (Ext x Empty) = Ext x Empty

    instance (Nub (e ': f ': s) ~ (e ': Nub (f ': s)),
              Nubable (f ': s)) => Nubable (e ': f ': s) where
        nub (Ext e (Ext f s)) = Ext e (nub (Ext f s))

In the last case, the equality constraint is required to explain the behaviour of Nub. The type and value levels are in one-to-one correspondence; however, we have deliberately omitted the case for actually removing items with the same type. This class instance will be defined later with application-specific behaviour.
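To see the sorting and de-duplication machinery at work before it is specialised to effects, one can normalise applications of these families in GHCi with :kind!. The element types A and B below, and their Min/Max instances, are hypothetical stand-ins invented for this sketch (the instances actually used in the paper appear in later sections):

    {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
    -- Two placeholder element types with an arbitrary ordering A < B:
    data A = A
    data B = B
    type instance Min A A = A
    type instance Max A A = A
    type instance Min B B = B
    type instance Max B B = B
    type instance Min A B = A
    type instance Max A B = B
    type instance Min B A = A
    type instance Max B A = B

    -- In GHCi:
    --   > :kind! Nub (Sort '[B, A, A])
    --   Nub (Sort '[B, A, A]) :: [*]
    --   = '[A, B]
    -- Sorting moves A before B, and Nub collapses the duplicate A.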
Now that we have the representation sorted, we define operations for taking the union and calculating subsets. Union Set union is defined using our existing infrastructure and the concatenation of the underlying lists: type Union s t = AsSet (Append s t) Append concatenates the two list representations (acting like a disjoint union of sets) and AsSet normalises the result into the set form. Append is defined in the usual way as a type family: type family Append (s :: [ ]) (t :: [ ]) :: [ ] where Append [ ] t = t Append (x : xs) ys = x : (Append xs ys) The value-level version is identical (mutatis mutandis): append :: Set s Set t Set (Append s t) append Empty x = x append (Ext e xs) ys = Ext e (append xs ys) This twin definition, and the previous definition for Nub/nub, exposes a weakness of Haskell: we have to write both the value and type level, even though they are essentially identical. Languages that implement richer dependent-type theories tend to avoid this problem but, for the moment, this is the state of play in Haskell. Given all of the above, union of value sets is then: type Unionable s t = (Sortable (Append s t), Nubable (Sort (Append s t))) union :: (Unionable s t) Set s Set t Set (Union s t) union s t = nub (bsort (append s t)) with the binary predicate Unionable hiding the underlying type class constraints associated with sorting and removing duplicates. Subsets A notion of subeffecting is useful for combining effect information arising from non-linear control flow (for example, to implement conditionals). We recursively define a binary predicate Sub where Sub s t means s t. This type class has a single method that calculates the value representation of the subset: class Subset s t where subset :: Set t Set s instance Subset [ ] t where subset xs = Empty instance Subset s t Subset (x : s) (x : t) where subset (Ext x xs) = Ext x (subset xs) instance Subset s t Subset s (x : t) where subset (Ext xs) = subset xs Thus, in the first instance: empty sets are subsets of all sets; in the second: ({x} S) ({x} T ) if S T ; and in the third,
S ⊆ ({x} ∪ T) if S ⊆ T. Note that we have used a multi-parameter type class here since the value-level behaviour depends on both the source and target types. Set union and subset operations will be used in the next three sections, where additional set operations will appear as necessary.

4. Writer effects

Our first example effect system will capture write effects, related to the writer monad. The classic writer monad provides a write-only cumulative state, useful for producing a log (or trace) along with a computation. The data type is essentially that of a product. In Haskell, this monad is defined:

    data Writer w a = Writer { runWriter :: (a, w) }

    instance Monoid w => Monad (Writer w) where
        return a = Writer (a, mempty)
        (Writer (a, w)) >>= k = let (b, w') = runWriter (k a)
                                in Writer (b, w `mappend` w')

where mempty :: Monoid w => w and mappend :: Monoid w => w -> w -> w are respectively the unit element and the binary operation of a monoid on w. Thus, a pure computation writes the unit element of the monoid and (>>=) composes write state using the binary operation of the monoid.

Using a parametric effect monad, we can define a more flexible version of the writer monad that allows multiple writes to be easily combined and extended (without the need for tuples or monad transformers), using an effect system for write effects. This approach allows us to define programs like the following:

    prog :: Writer '["x" :-> Int, "y" :-> String] ()
    prog = do put (Var :: (Var "x")) 42
              put (Var :: (Var "y")) "hello"
              put (Var :: (Var "x")) 58

where "x" and "y" are type-level symbols and Writer is parameterised by a set of variable-type mappings. Running this computation produces ((), {(Var, 100), (Var, "hello")}).

We use our type-level sets representation coupled with type-level symbols to provide variables, where the constructor :-> describes a pair of a variable and its written type. The Writer data type and its accompanying parametric effect monad are defined:

    data Writer w a = Writer { runWriter :: (a, Set w) }

    instance Effect Writer where
        type Inv Writer s t = (IsSet s, IsSet t, Unionable s t)
        type Unit Writer = '[]
        type Plus Writer s t = Union s t

        return x = Writer (x, Empty)
        (Writer (a, w)) >>= k = let Writer (b, w') = k a
                                in Writer (b, w `union` w')

Thus, return has the empty set effect, and (>>=) composes writer states by taking the union, with the Union effect annotation. The IsSet predicates ensure the effect indices are in the set format. The put operation is then defined as follows, introducing an effect with a variable-type mapping:

    put :: Var v -> t -> Writer '[v :-> t] ()
    put v x = Writer ((), Ext (v :-> x) Empty)

The mapping operator :-> and the Var type are defined:

    data (v :: Symbol) :-> (t :: *) = (Var v) :-> t
    data Var (v :: Symbol) = Var

Members of the kind of symbols Symbol are type-level strings, provided by the data kinds extension. Recall that we did not define the nub operation on sets fully; the case for removing duplicates at the value level was not included in the definition of Section 3.
We define this here by combining values of the same variable using the mappend operation of a monoid:

    instance (Monoid a, Nubable ((v :-> a) ': s))
          => Nubable ((v :-> a) ': (v :-> a) ': s) where
        nub (Ext (_ :-> a) (Ext (v :-> b) s)) =
            nub (Ext (v :-> (a `mappend` b)) s)

We finally implement the type-level ordering of mappings v :-> t by providing instances for Min and Max:

    type instance Min (v :-> a) (w :-> b) = (Select v w v w) :-> (Select v w a b)
    type instance Max (v :-> a) (w :-> b) = (Select v w w v) :-> (Select v w b a)

    type Select a b p q = Choose (CmpSymbol a b) p q

    type family Choose (o :: Ordering) p q where
        Choose 'LT p q = p
        Choose 'EQ p q = p
        Choose 'GT p q = q

where CmpSymbol :: Symbol -> Symbol -> Ordering from the base library compares symbols, returning a type of kind Ordering upon which Choose matches. The type function Select selects its third or fourth parameter based on the variables passed as its first two parameters; Select returns its third parameter if the first parameter is less than the second, otherwise it returns its fourth. The corresponding value level is a straightforward (and annoying!) transcription of the above, shown in Appendix B for reference.

Examples and polymorphism  The following gives a simple example (using an additive monoid on Int):

    varx = Var :: (Var "x")
    vary = Var :: (Var "y")

    test = do put varx (42 :: Int)
              put vary "hello"
              put varx (58 :: Int)
              put vary " world"

The effects are easily inferred (shown here by querying GHCi):

    *Main> :t test
    test :: Writer ["x" :-> Int, "y" :-> [Char]] ()

and the code executes as expected:

    *Main> runWriter test
    ((), {(x, 100), (y, "hello world")})

Explicit type signatures were used on assignments to "x", otherwise our implementation cannot unify the two writes to "x". If we want "x" to be polymorphic we must use a scoped type variable with a type signature fixing the type of each put to x. For example:

    test' (n :: a) = do put varx (42 :: a)
                        put vary "hello"
                        put varx (n :: a)

for which Haskell can infer the expected polymorphic effect type:

    *Main> :t test'
    test' :: (Monoid a, Num a) => a -> Writer ["x" :-> a, "y" :-> [Char]] ()

While it is cumbersome to have to add explicit type signatures for the polymorphism here, the overhead is not vast and the type system can still infer the effect type for us. We can also be entirely polymorphic in an effect, and in a higher-order setting.
For example, the following function takes an effectful function as a parameter and applies it, along with some of its own write effects:

    test2 :: (IsSet f, Unionable f '["y" :-> String])
          => (Int -> Writer f t) -> Writer (Union f '["y" :-> String]) ()
    test2 f = do { f 3; put vary "world." }

Thus, test2 takes an effectful f, calls it with 3, and then writes to "y". The resulting effect is thus the union of f's effects and '["y" :-> String]. To test, runWriter (test2 test') returns the expected values ((), {(x, 45), (y, "hello world.")}). While the type of test2 can be inferred (if we give a signature on 3, e.g., 3 :: Int), we include an explicit type signature here as GHC has a habit of expanding type synonym definitions, making the inferred type a bit inscrutable.

Subeffecting  Since sets appear in a positive position in our Writer data type, subeffecting overapproximates what is written, requiring a superset operation for writer effects. At the value level, we fill these additional writer cells with the unit of the corresponding monoid (mempty), thus completing the use of monoids in this example (rather than just semigroups). We define a binary predicate Superset with a superset method:

    class Superset s t where
        superset :: Set s -> Set t

    instance Superset '[] '[] where
        superset _ = Empty

    instance (Monoid a, Superset '[] s) => Superset '[] ((v :-> a) ': s) where
        superset _ = Ext (Var :-> mempty) (superset Empty)

    instance Superset s t => Superset ((v :-> a) ': s) ((v :-> a) ': t) where
        superset (Ext x xs) = Ext x (superset xs)

The subeffecting operation for Writer is then:

    instance Superset s t => Subeffect Writer s t where
        sub (Writer (a, w)) = Writer (a, (superset w) :: (Set t))

To illustrate, we apply sub to our earlier example:

    test3 :: Writer '["x" :-> Int, "y" :-> String, "z" :-> Int] ()
    test3 = sub (test2 test')

which evaluates to the following, showing the 0 value given to "z" coming from the additive monoid for Int:

    *Main> runWriter test3
    ((), {(x, 45), (y, "hello world."), (z, 0)})

Using plain lists  A simpler, but less useful version of writer effects uses just type-level lists, rather than sets. This provides a write-once writer where values can be written but with no accumulating behaviour. We elide this example here as it is less useful, but it can be found in Control.Effect.WriteOnceWriter.

4.1 Update effects

An alternate form of writer effect provides an updateable memory cell, without any accumulating behaviour. This corresponds to the usual writer monad with the monoid over Maybe: writing a value wrapped by the Just constructor updates the cell, writing Nothing leaves the cell unmodified. With a parametric effect monad we can treat the type of the cell as an effect annotation, providing a heterogeneously-typed update monad. The standard monadic definition must have the same type throughout the computation. Thus, this effect system is more about generalising the power of the monad than program analysis per se. This parametric effect monad is defined by lifting the Maybe monoid to types.
We define a GADT parameterised by Maybe promoted to a kind:

    data Eff (w :: Maybe *) where
        Put   :: a -> Eff (Just a)
        NoPut :: Eff Nothing

The effect-parameterised version of the update monad is then:

    data Update w a = U { runUpdate :: (a, Eff w) }

    instance Effect Update where
        type Unit Update = Nothing
        type Plus Update s Nothing  = s
        type Plus Update s (Just t) = Just t

        return x = U (x, NoPut)
        (U (a, w)) >>= k = U (update w (runUpdate $ k a))

    update :: Eff s -> (b, Eff t) -> (b, Eff (Plus Update s t))
    update w (b, NoPut)  = (b, w)
    update _ (b, Put w') = (b, Put w')

    put :: a -> Update (Just a) ()
    put x = U ((), Put x)

where update combines value- and type-level Maybe monoid behaviour. Note that we don't have to use the GADT approach. We could equivalently define two data types Put and NoPut and implement the type-dependent behaviour of update using a type class.

The effect-parameterised writer monad therefore provides a heterogeneously-typed memory cell, where the final type of the state for a computation is that of the last write, e.g.

    foo :: Update (Just String) ()
    foo = do { put 42; put "hello" }

This parametric effect monad is a little baroque, but it serves to demonstrate the heterogeneous behaviour possible with parametric effect monads and gives an example effect system that is not based on sets (of which there are more examples later).

5. Reader effects

The classic reader monad provides a read-only value (or parameter) that is available throughout a computation. The data type of the reader monad is a function from the read-only state to a value:

    data Reader r a = Reader { runReader :: r -> a }

Similarly to the previous section, we can generalise this monad to a parametric effect monad providing an effect system for read effects and allowing multiple different reader values, solving the composition problem for multiple reader monads. The generalised type and parametric effect monad instance are defined:

    data Reader s a = R { runReader :: Set s -> a }

    instance Effect Reader where
        type Inv Reader s t = (IsSet s, IsSet t, Split s t (Union s t))
        type Unit Reader = '[]
        type Plus Reader s t = Union s t

        return x = R (\Empty -> x)
        (R e) >>= k = R (\st -> let (s, t) = split st
                                in (runReader $ k (e s)) t)

A pure computation therefore reads nothing, taking the empty set as an argument. For the composition of effectful computations, we define a computation that takes in a set st :: Set (Union s t) and then splits it into two parts s :: Set s and t :: Set t, which are passed to the subcomputations e :: Set s -> a and k (e s) :: Set t -> b. Although set union is not an injective operation (i.e., not invertible), the split operation here provides the inverse of Union s t
since s and t are known, provided by the types of the two subcomputations. We define split via a type class that is parameterised by its parameter set and return sets:

    class Split s t st where
        split :: Set st -> (Set s, Set t)

    instance Split '[] '[] '[] where
        split Empty = (Empty, Empty)

    instance Split s t st => Split (e ': s) (e ': t) (e ': st) where
        split (Ext x st) = let (s, t) = split st in (Ext x s, Ext x t)

    instance Split s t st => Split (x ': s) t (x ': st) where
        split (Ext x st) = let (s, t) = split st in (Ext x s, t)

    instance Split s t st => Split s (x ': t) (x ': st) where
        split (Ext x st) = let (s, t) = split st in (s, Ext x t)

The first instance provides the base case. The second provides the case when an element of a Union f g appears in both f and g. The third and fourth instances provide the cases when an element of Union f g is only in f or only in g. The constraint Split s t (Union s t) in the Effect instance enforces that Split is the inverse of Union.

Once we have the above parametric effect monad, the usual ask operation takes a variable as a parameter and produces a computation with a singleton effect for that variable:

    ask :: Var v -> Reader '[v :-> t] t
    ask Var = R (\(Ext (Var :-> x) Empty) -> x)

The following gives an example program, whose type and effects are easily inferred by GHC, so we do not give a type signature here:

    foo = do x  <- ask (Var :: (Var "x"))
             xs <- ask (Var :: (Var "xs"))
             x' <- ask (Var :: (Var "x"))
             return (x : x' : xs)

    init1  = Ext (Var :-> 1) (Ext (Var :-> [2, 3]) Empty)
    runFoo = runReader foo init1

The inferred type is foo :: Reader '["x" :-> a, "xs" :-> [a]] [a], and runFoo evaluates to [1, 1, 2, 3]. Note that we have not had to add a case for the Nubable type class with the nub method for removing duplicates in sets. This is because Reader does not use union (sets appear in a negative position, to the left of the function arrow). Instead, the idempotent behaviour is encoded by the definition of Split/split.

Subeffecting  Since sets appear in negative positions, we can use the subset function defined earlier for subeffecting:

    instance Subset s t => Subeffect Reader s t where
        sub (R e) = R (\st -> let s = subset st in e s)

The following overapproximates the effects of the above example:

    bar :: (Subset '["x" :-> Int, "xs" :-> [Int]] t) => Reader t [Int]
    bar = sub foo

This can be run by passing a value into the additional slot in the incoming reader set with initial reader state:

    init2 :: Set '["x" :-> Int, "xs" :-> [Int], "z" :-> a]
    init2 = Ext (Var :-> 1) (Ext (Var :-> [2, 3]) (Ext (Var :-> undefined) Empty))

where runReader bar init2 evaluates to [1, 1, 2, 3]. The explicit signature on init2 is required for the subeffecting function to be correctly resolved.

This effect system resembles the implicit parameters extension of Haskell [14], providing most of the same functionality. However, some additional structure is needed to fully replicate the implicit parameter behaviour. This is discussed in Section 8, where we briefly discuss the dual notion of coeffect systems.
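Before moving on to state effects, note that subeffecting is also what lets reader computations with different read sets meet at a join point, such as the two branches of a conditional. The following is a small sketch written against the definitions above; depending on how much the surrounding context pins down the target set, some extra type annotations may be needed for instance resolution:

    -- Both branches are coerced, via sub, to the common effect set given in
    -- the signature, even though each branch only reads one variable.
    readEither :: Bool -> Reader '["x" :-> Int, "y" :-> Int] Int
    readEither b =
        if b then sub (ask (Var :: (Var "x")))
             else sub (ask (Var :: (Var "y")))

This mirrors the role of the (sub) rule in Figure 2(c): each branch's effect is overapproximated to the union of the two read sets.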
6. State effects

The earliest effect systems were designed specifically to track side effects relating to state, with sets of triples marking read, write, and update effects on typed locations. We combine the approaches thus far for reader and writer effects to define a parametric state effect monad with a state effect system.

As before, we will use sets for effects, but this time with additional type-level information for distinguishing between reads, writes, and updates (read/write), given by the Eff type:

    data Eff = R | W | RW

    data Effect (s :: Eff) = Eff

where the Effect type uses Eff as a data kind and provides a data constructor that acts as a proxy for Eff. These effect markers are associated with types, describing the effect performed on a value of a particular type, with the constructor:

    data (:!) (a :: *) (s :: Eff) = a :! (Effect s)

Effect annotations will be sets of mappings of the form (v :-> t :! f), meaning variable v has type t and effect action f (drawn from Eff). The parametric effect monad data type State is analogous to the usual definition of state s -> (a, s):

    data State s a = State { runState :: Set (Reads s) -> (a, Set (Writes s)) }

where Reads and Writes refine the set of effects into the read and write effects respectively. Reads is defined:

    type family Reads t where
        Reads '[]                    = '[]
        Reads ((v :-> a :! R) ': s)  = (v :-> a :! R) ': (Reads s)
        Reads ((v :-> a :! RW) ': s) = (v :-> a :! R) ': (Reads s)
        Reads ((v :-> a :! W) ': s)  = Reads s

thus read-write effects RW are turned into read effects, and all write effects are ignored. The Writes operation (not shown here) removes R actions and turns RW actions into W actions.

Previously, set union combined effect sets, but now we need some additional behaviour in the case where both sets contain effects on a variable v but with different effect actions. For example, we require the behaviour that:

    Union '[v :-> t :! R] '[v :-> t :! W] = '[v :-> t :! RW]

i.e., if one computation reads v and the other writes v, the overall effect is a read-write effect (possible update). We thus redefine the previous Nub definition:

    type family Nub t where
        Nub '[]                                     = '[]
        Nub '[e]                                    = '[e]
        Nub (e ': e ': s)                           = Nub (e ': s)
        Nub ((v :-> a :! f) ': (v :-> a :! g) ': s) = Nub ((v :-> a :! RW) ': s)
        Nub (e ': f ': s)                           = e ': Nub (f ': s)

Again, closed type families are used to match against types in the given order. The definition is the same as before in Section 3, apart from the fourth equation, which is new: if there are two different effects f and g on variable v then these are combined into one effect annotation with action RW. The value level is straightforward and analogous to the type level (see Control.Effect.State) and is similar to the previous definition in Section 3. The union of two sets is defined as before, using sorting and the above version of Nub. To
distinguish this union from the previous (which is an actual union), we define type UnionS s t = Nub (Sort (Append s t)).

A final operation is required to sequentially compose write effects of one computation with read effects of another. This amounts to a kind of intersection of two sets, between a set of write effects w and a set of read effects r, where at the type level this equals r, but at the value level any reads in r that coincide with writes in w are replaced by the written values. We define this operation by first appending the two sets, sorting them, then filtering with intersectR:

    type IntersectR s t = (Sortable (Append s t), Update (Append s t) t)

    intersectR :: (Writes s ~ s, Reads t ~ t, IntersectR s t)
               => Set s -> Set t -> Set t
    intersectR s t = update (bsort (append s t))

The constraints here restrict us to just write effects in s and read effects in t. The update function replaces any reader values with written values (if available). This is defined by the Update class:

    class Update s t where
        update :: Set s -> Set t

    instance Update xs '[] where
        update _ = Empty

    instance Update '[e] '[e] where
        update s = s

    instance Update ((v :-> a :! R) ': as) as'
          => Update ((v :-> a :! W) ': (v :-> b :! R) ': as) as' where
        update (Ext (v :-> (a :! _)) (Ext _ xs)) =
            update (Ext (v :-> (a :! (Eff :: (Effect R)))) xs)

    instance Update ((u :-> b :! s) ': as) as'
          => Update ((v :-> a :! W) ': (u :-> b :! s) ': as) as' where
        update (Ext _ (Ext e xs)) = update (Ext e xs)

    instance Update ((u :-> b :! s) ': as) as'
          => Update ((v :-> a :! R) ': (u :-> b :! s) ': as) ((v :-> a :! R) ': as') where
        update (Ext e (Ext e' xs)) = Ext e (update (Ext e' xs))

The first two instances provide the base cases. The third instance provides the intersection behaviour of replacing a read value with a written value. Since sorting is defined on the symbols used for variables, the ordering of write effects before read effects is preserved, hence we only need consider this case of a write preceding a read. The fourth instance ignores a write that has no corresponding read. The fifth instance keeps a read that has no overwriting write effect.

Finally, we can define the full state parametric effect monad:

    instance Effect State where
        type Unit State = '[]
        type Plus State s t = UnionS s t

        return x = State (\Empty -> (x, Empty))
        (State e) >>= k =
            State (\st -> let (sR, tR) = split st
                              (a, sW)  = e sR
                              (b, tW)  = (runState (k a)) (sW `intersectR` tR)
                          in (b, sW `union` tW))

Thus, a pure computation has no reads and no writes. When composing computations, an input state st is split into the reader states sR and tR for the two subcomputations. The first computation is run with input state sR, yielding some writes sW as output state. These are then intersected with tR to give the input state to (k a), which produces output state tW. This is then unioned with sW to get the final output state. The definition for Inv is elided (see Control.Effect.State) since it is quite long but has no surprises. As before, it constrains s and t to be in the set format, and includes various type-class constraints for Unionable, Split and IntersectR.

We can now encode the examples of the introduction. For the stream-processing example, we can define the operations as:

    varc = Var :: (Var "count")
    vars = Var :: (Var "out")

    incc :: State '["count" :-> Int :! RW] ()
    incc = do { x <- get varc; put varc (x + 1) }

    writes :: [a] -> State '["out" :-> [a] :! RW] ()
    writes y = do { x <- get vars; put vars (x ++ y) }

    write :: [a] -> State '["count" :-> Int :! RW, "out" :-> [a] :! RW] ()
    write x = do { writes x; incc }
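With these definitions, the introduction's hellow program can be rebuilt on top of write with no lifting at all; repeated composition just unions identical effect rows. The runState call below is a sketch only: it assumes that a value-level read cell is written Var :-> (value :! Eff), following the Update instances above, and that "count" sorts before "out" — the packaged library may differ in the exact value-level syntax:

    hellow :: State '["count" :-> Int :! RW, "out" :-> String :! RW] ()
    hellow = do write "hello"
                write " "
                write "world"

    -- Running it means supplying initial values for the cells it reads; we
    -- would expect the counter to end at 3 and the output stream at
    -- "hello world", as in the introduction's transformer-based version.
    runHellow = runState hellow
                  (Ext (varc :-> ((0 :: Int) :! Eff))
                    (Ext (vars :-> ("" :! Eff)) Empty))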
7. Monads as parametric effect monads

As explained in the introduction, all monads are parametric effect monads with a trivial singleton effect. This allows us to embed existing monads into parametric effect monads with a wrapper:

    import qualified Prelude as P

    data Monad m t a where
        Wrap :: P.Monad m => m a -> Monad m () a

    unwrap :: Monad m t a -> m a
    unwrap (Wrap m) = m

    instance (P.Monad m) => Effect (Monad m) where
        type Unit (Monad m) = ()
        type Plus (Monad m) s t = ()

        return x = Wrap (P.return x)
        (Wrap x) >>= f = Wrap ((P.>>=) x (unwrap . f))

This provides a pathway to entirely replacing the standard Monad class of Haskell with Effect.

8. Implicit parameters and coeffects

The parametric effect reader monad of Section 5 essentially embeds an effect system for implicit parameters into Haskell, an existing extension of Haskell [14]. Implicit parameters provide dynamically scoped variables. For example, the following function sums three numbers, two of which are passed implicitly (dynamically):

    sum3 :: (Num a, ?x :: a, ?y :: a) => a -> a
    sum3 z = ?x + ?y + z

where implicit parameters are syntactically introduced by a preceding question mark. Any implicit parameters used in an expression are represented in the expression's type as constraints (shown above). These implicit parameter constraints are a kind of effect analysis, similar to that of our reader effect monad. In our approach, a similar definition to sum3 is:

    sum3 :: (Num a) => a -> Reader '["?x" :-> a, "?y" :-> a] a
    sum3 z = do x <- ask (Var :: (Var "?x"))
                y <- ask (Var :: (Var "?y"))
                return (x + y + z)

This is longer than the implicit parameter approach since the do-notation is needed to implement effect sequencing and the symbol encoding of variables is required, but the essence is the same. However, the two approaches have a significant difference. Our effect-parameterised reader monad provides fully dynamically scoped variables, that is, they are bound only when the computation is run. In contrast, implicit parameters allow a mix of dynamic and static (lexical) scoping. For example, we can write:

    sum2 :: (Num a, ?y :: a) => a -> a
    sum2 = let ?x = 42 in \z -> ?x + ?y + z
where the let binds the lexically scoped ?x inside of the λ-expression, but ?y remains dynamically scoped, as shown by the type. Without entering into the internals of Reader we cannot (yet) implement the same behaviour with the monadic approach. This illustrates how the implicit parameters extension is not an instance of an effect system or monadic semantics approach, in the traditional sense. The main difference is in the treatment of λ-abstraction. Recall the standard type-and-effect rule for λ-abstraction [9], which makes all effects latent. Unifying effect systems with monads via parametric effect monads gives the semantics [12]:

    ⟦Γ, x : σ ⊢ e : τ, F⟧       =  g : ⟦Γ⟧ × ⟦σ⟧ -> M_F ⟦τ⟧
    ⟦Γ ⊢ λx.e : σ ->_F τ, ∅⟧    =  return (uncurry g) : ⟦Γ⟧ -> M_∅ (⟦σ⟧ -> M_F ⟦τ⟧)

where the returned function is pure (as defined by return). This contrasts with the abstraction rule for implicit parameters [14]. Lewis et al. describe implicit parameter judgments of the form C; Γ ⊢ e : τ, where C augments the usual typing relation with a set of constraints. The rule for abstraction is then:

    (abs)   C; Γ, v : σ ⊢ e : τ   ⟹   C; Γ ⊢ λv.e : σ -> τ

If constraints C are thought of as effect annotations, then we see that the λ-abstraction is not pure in the sense that the constraints of the body e are now the constraints of the λ-abstraction (no latent effects). When combined with their rule for discharging implicit parameters, this allows lexically scoped implicit parameters. The semantics of these implicit parameters has been described separately in terms of a comonadic semantics for implicit parameters with a coeffect system [21].

Comonads and coeffects  Comonads dualise monads, revealing a structure of the following form (taken from Control.Comonad):

    class Comonad c where
        extract :: c a -> a
        extend  :: (c a -> b) -> c a -> c b

where extract is the dual of return and extend is the infix dual of (>>=). Comonads can be described as capturing input impurity, input effects, or context-dependent notions of computation. Recently, coeffect systems have been introduced as the comonadic analogues of effect systems for analysing resource usage and context-dependence in programs [4, 8, 21]. The semantics of these systems each include a dual to parametric effect monads (in various forms), which we call here parametric coeffect comonads (earlier called indexed comonads [21]). We write coeffect judgments as Γ ? R ⊢ e : τ, meaning an expression e has coeffects (or requirements) R. The key distinguishing feature between (simple) coeffect systems, shown in [21], and effect systems is the abstraction rule, which has the form:

    (abs)   Γ, x : σ ? (F ⋆ G) ⊢ e : τ   ⟹   Γ ? F ⊢ λx.e : σ ->_G τ

for some binary operation ⋆ on coeffects. Thus, in a coeffect system, λ-abstraction is not pure. Instead, reading the rule top-down, coeffects of the body are split between the declaration site (immediate coeffects) and the call site (latent coeffects); reading bottom-up, the contexts available at the declaration site and call site are merged to give the context of the body. In the semantics of coeffect systems, coeffect judgments are interpreted as morphisms:

    ⟦Γ ? F ⊢ e : τ⟧ : D_F ⟦Γ⟧ -> ⟦τ⟧

where D_F is a parametric coeffect comonad. The semantics of abstraction requires an additional monoidal operation on D of type merge : D_F A × D_G B -> D_(F ⋆ G) (A × B), giving the rule:
This allows the sum2 example for implicit parameters to be typed, with additional syntax for binding implicit parameters: (let?) Γ, z : a? {?x : a,?y : a}?x +?y + z : a (abs) Γ? {?x : a} λz.?x +?y + z : a {?y:a} a Γ? let??x = e in (λz.?x +?y + z) : a {?y:a} a Thus, the requirements of the function body are split, with {?x : a} becoming an immediate coeffect which is discharged by the let? binding, and {?y : a} remaining latent. The semantics can be given in terms of a coeffect-parameterised product comonad on P F A = A F, and an operation merge : P F A P GB P F G(A B) taking the union of the coeffects. Reader as a monad or comonad By (un)currying, functions of type P F A B (e.g., denotations of the coeffect semantics) are isomorphic to functions A Reader F B of our parametric effect reader, i.e., curry :: ((A F ) B) (A (F B)), and vice versa by uncurry. Thus, we can structure sequential reader computations using either the comonadic or monadic approach. The difference is in the treatment of abstraction, as we have seen above with merge. However, we can recover the mixed lexical/dynamic behaviour of the implicit parameters extension by providing the isomorphic version of merge for the Reader type: merge :: (Unionable s t) (a Reader (Union s t) b) Reader s (a Reader t b) merge k = R (λs λa R (λt runreader (k a) (union s t))) This merges the immediate requirements/effects that occur before the function is applied and latent requirements/effects for when the function is applied, providing requirements Union s t. We see here the merging behaviour described above in the coeffect setting, where the union of two implicit parameter environments is taken. Therefore, merge allows mixed lexical/dynamic scoping of implicit parameters with Reader. For example, sum2 (which used implicit parameters) can now be equivalently expressed as: sum2 :: Num a a Reader ["?y" : a ] a sum2 = let x = (Ext ((Var :: (Var "?x")) : 42) Empty) in runreader (merge (λz do x ask (Var :: (Var "?x")) y ask (Var :: (Var "?y")) return (x + y + z))) x Thus, we lexically scope?x via merge with our original sum3 definition, leaving only the requirement for?y. We have seen here that Haskell s implicit parameters are a kind of coeffect analysis, or an effect analysis with some additional structure borrowed from the coeffect/comonadic approach. Furthermore, we can use the same approach to encode type class constraints, where dictionaries are encoded via the effectparameterised reader monad. The mechanism for implicitly discharging constraints is not provided here, but our discussion shows how parametric effect monads could be used to emulate implicit parameters and type-class constraints or to give their semantics. 9. Program analysis and specification In our examples so far, effect indices have had value-level counterparts. For example, the effect set for the reader monad corresponds to the set of values being read. However, we may not necessarily want, or need, to have a semantic, value-level counterpart to our
9. Program analysis and specification

In our examples so far, effect indices have had value-level counterparts. For example, the effect set for the reader monad corresponds to the set of values being read. However, we may not necessarily want, or need, to have a semantic, value-level counterpart to our indices — they may be purely syntactic, used for analysis of programming properties and subsequent specifications for verifying program invariants. We show two examples in this section.

9.1 Data access

Stencil computations are a common idiom for array programming, in which an array is calculated by applying a function at each possible index of the array to compute a new cell value, possibly based on the neighbouring cells related to the current index. For example, convolution operations and the Game of Life are stencil computations. One-dimensional stencil computations can be captured by functions of type (Array Int a, Int) -> b which describe the local behaviour of the stencil, e.g. (ignoring boundary cases here):

    localMean :: (Array Int Float, Int) -> Float
    localMean (x, c) = (x ! (c + 1) + x ! c + x ! (c - 1)) / 3.0

Promoting this operation to work over all indices of an array is provided by the extend operation of a comonad (see the previous section) on cursored arrays [18]. Stencil computations can be a source of low-level errors, especially when stencils are large, performing many indexing operations (as is common). Here we use our approach to embed an effect system that tracks the indexing operations relative to the cursor index (c above). We define the following parameterised, cursored array data type CArray and Stencil, which captures stencil computations on CArray:

    data CArray (r :: [*]) a = A (Array Int a, Int)
    data Stencil a (r :: [*]) b = S (CArray r a -> b)

The parameter r has no semantic meaning; we will use effect annotations purely for analysis, and not for any computation. Stencil has a parametric effect monad definition with the set union monoid over indices and the standard reader definition at the value level:

    instance Effect (Stencil a) where
        type Plus (Stencil a) s t = Union s t
        type Unit (Stencil a) = '[]

        return a = S (\_ -> a)
        (S f) >>= k = S (\a -> let (S f') = k (f a) in f' a)

Our key effectful operation is an operation for relative indexing which induces an effect annotation containing the relative index:

    ix :: (Val (IntT x) Int) => IntT x -> Stencil a '[IntT x] a
    ix n = S (\(A (a, c)) -> a ! (c + toVal n))

with lifting of the kind Nat of natural-number types to a type of integers IntT with a sign kind over Nat:

    data Sign n = Pos n | Neg n
    data IntT (n :: Sign Nat) = IntT

Thus, the effect system collects a set of relative indices. We can then redefine localMean as:
For example, the following buggy definition raises a type error since the negative index 1 is missing: localmean :: Stencil Float (Symm 1) Float localmean = do a ix (Pos Z ) b ix (Pos (S Z )) return $ (a + b + b) / 3.0 In this effect system, effects are ordered by the superset relation since we want to recognise when indices are omitted. For example, the effect of localmean is a subset of (Symm 1) as an index is missing, therefore localmean s effect is not a subeffect of (Symm 1) hence cannot be upcast to it. Thus, effects are overapproximated here by the subset. 9.2 Counter Prior to the work on effect parameterised monads, Danielsson proposed the Thunk annotated monad type [6], which is parameterised with natural numbers: 0 for return, and addition on the natural number parameters for ( >=). We call this the counter effect monad as it can be used for counting aspects of computation, such as time bounds in the case of Danielsson, or computation steps. data Counter (n :: Nat) a = Counter {forget :: a } instance Effect Counter where type Unit Counter = 0 type Plus Counter n m = n + m return a = Counter a (Counter a) >= k = Counter. forget $ k a tick :: a Counter 1 a tick x = Counter x Thus we can use tick to denote some increment in computation steps or time. This effect system can be used to prove complexity bounds on our programs. For example, we can prove that the map function over a sized vector is linear in its size: data Vector (n :: Nat) a where Nil :: Vector 0 a Cons :: a Vector n a Vector (n + 1) a map :: (a Count m b) Vector n a Count (n m) (Vector n b) map f Nil = return Nil map f (Cons x xs) = do x f x xs map f xs return (Cons x xs ) i.e., if we apply a function which takes m steps to a list of n elements, then this takes n m steps. The above is a slight simplification of the actual implementation (which can be found in Control.Effect.Counter) since typechecking operations on type-level natural numbers are currently a little under powered: the above does not type check. Instead, if we
This map definition is a slight simplification of the actual implementation (which can be found in Control.Effect.Counter), since typechecking of operations on type-level natural numbers is currently a little underpowered: as written, it does not type check. Instead, if we implement our own inductive definitions of natural numbers, and the corresponding + and * operations, then the definition type checks, and the type system gives us a kind of complexity proof. The only difference between that implementation and the definition above is that we do not get the compact natural-number syntax in the types.

10. Category theory definition

Previous theoretical work introduced parametric effect monads [12, 19] (where in [19] we called them indexed monads). For completeness we briefly show the formal definition, which shows that parametric effect monads arise as a mapping between a monoid of effects (I, •, I) and the monoid of endofunctor composition (which models sequential composition).

Parametric effect monads comprise a functor T : I -> [C, C] (i.e., an indexed family of endofunctors) where I is the category providing effect annotations. This category I is taken as a strict monoidal category (I, •, I), i.e., the operations on effect annotations are defined as a binary functor • : I × I -> I and an object I ∈ I. The T functor is then a parametric effect monad when it is a lax monoidal functor, mapping the strict monoidal structure on I to the strict monoid of endofunctor composition ([C, C], ∘, I_C). The operations of the lax monoidal structure are thus:

    η_I     : I_C -> T_I
    μ_{F,G} : T_F ∘ T_G -> T_{F • G}

These lax monoidal operations of T match the shape of the regular monad operations. Furthermore, the standard associativity and unitality conditions of the lax monoidal functor give coherence conditions on η_I and μ_{F,G} which are analogous to the regular monad laws, but with added indices, e.g., μ_{I,G} ∘ (η_I T_G) = id_{T_G}. In our definition here, we have used the extension form in terms of (>>=), as is traditional in Haskell. This is derived from the μ (join) operation by x >>= f = (μ ∘ T f) x.

Indexed monads collapse to regular monads when I is a single-object monoidal category. Thus, indexed monads generalise monads. Note that indexed monads are not indexed families of monads. That is, for all indices F ∈ obj(I), T_F may not be a monad.

11. Related notions

Parameterised monads and indexed monads  Katsumata used the phrase parametric effect monads [12], which we adopted here. In previous work, we referred to such structures as indexed monads [19], but we recognise this clashes with other earlier uses of the term. Most notably, Haskell already has an indexed monad library (Control.Monad.Indexed) which provides an interface for Atkey's notion of parameterised monad [1] with operations:

    ireturn :: a -> m i i a
    ibind   :: (a -> m j k b) -> m i j a -> m i k b

The second and third indices on m can be read like Hoare triples (which McBride shows when embedding a similar definition to the above in Haskell [16]), where m i j a is the triple {i} a {j}, i.e., a computation starts with pre-condition i, and computes a value of type a providing post-condition j. An alternate view is that m here is indexed by the source and target types of morphisms, where ireturn is indexed by identities and ibind exhibits composition. We can encode the same approach with our Effect class. Using data kinds, we define a kind of morphisms Morph inhabited by either the identity Id or a morphism M a b with source a and target b.
The type, together with the effect monad are defined as: data Morph a b = M a b Id newtype T (i :: Morph ) a = T a instance Effect (T :: ((Morph ) )) where type Unit T = Id type Plus T (M a b) (M c d) = M a d type Plus T Id (M a b) = M a b type Plus T (M a b) Id = M a b type Inv T (M a b) (M c d) = c d return a = T a (T x) >= k = let T y = k x in T y We use the Inv constraint family to force the target type of the left morphism to match the source type of the right morphism. Thus, Hoare logic-style reasoning can be encoded in our framework, but further exploring program logics is a topic for future work. Effect handlers Algebraic effects handlers provide a representation of effects in terms of effectful operations (rather than an encoding as with monads) and equations on these (e.g., [2, 23]). This is a change of perspective. The monadic approach tends to start with the encoding of effects, and later consider the effect-specific operations. The algebraic effects approach starts with the operations and later considers the encoding as the free structure arising from the operations and their equations. This provides a flexible solution to the problems of granularity and compositionality for monads. Recent work by Kammar, Lindley, and Oury embeds a system of effect handlers in Haskell with a DSL [11]. The aims are similar to ours, but the approach is different. Our approach can be embedded in GHC as is, without any additional macros for encoding handlers as in the approach of Kammar et al., and it provides rich type system information, showing the effects of a program. There are also some differences in power. For example, the heterogeneous typing of state provided by parametric effect monads is not possible with the current handler approach; we could not encode the update writer example from Section 4.1. However, effect handlers offer much greater compositionality, easily allowing different kinds of effect to be combined in one system. It is our view that parametric effect monads are an intermediate approach between using monads and full algebraic effects. As mentioned in the introduction, an alternate solution to the coarse-granularity of monads is to introduce type classes for each effectful operations where type class constraints act as effect annotations (see e.g. [15]). A similar approach is taken by Kiselyov et al. in their library for extensible effects, which has similarities to the effect handlers approach [13]. By the type-class constraint encoding, these effect systems are based on sets with union and ordering by subsets. Our approach allows effect systems based on different foundations (an arbitrary monoid with a preorder), e.g., the number-indexed counter monad (Section 9.2), the Maybe-indexed update monad (Section 4.1), and the ordering of effects by supersets for array indexing effects (Section 9.1). 12. Epilogue A whole menagerie of type system features were leveraged in this paper to give a shallow embedding of effect systems in Haskell types (without macros or custom syntax). The newest closed family extension to GHC was key to embedding sets in types, which was core to some of our examples. While there is a great deal of power in the GHC type system, a lot of boilerplate code was required. Frequently, we have made almost identical type- and value-level definitions. Languages with richer dependent types are able to combine these. 
Going forward, it seems likely, and prudent, that such features will become part of Haskell, although care must be taken so that they do not conflict with other aspects of the core language. We also advocate for builtin type-level sets which would significantly simplify our library. Further work is to extend our approach to allow different kinds of effect to be combined. One possible approach may be to define
12 a single monad type, parameterised by a set of effect annotations whose elements each describe different notions of effect. Acknowledgements Thanks to the anonymous reviewers for their helpful feedback, Alan Mycroft for subeffecting discussions, Andrew Rice for stencil computation discussion, Michael Gale for comments on an earlier draft of this manuscript, and participants of Fun in the Afternoon 2014 (Facebook, London) for comments on a talk based on an early version. Thanks also to Andy Hopper for his support. This work was partly supported by CHESS. References [1] Robert Atkey. Parameterised notions of computation. In Proceedings of the Workshop on Mathematically Structured Functional Programming. Cambridge Univ. Press, [2] Andrej Bauer and Matija Pretnar. Programming with algebraic effects and handlers. Journal of Logical and Algebraic Methods in Programming, [3] Max Bolingbroke. Constraint Kinds for GHC, omega-prime.co.uk/?p=127 (Retreived 24/06/14). [4] Aloïs Brunel, Marco Gaboardi, Damiano Mazza, and Steve Zdancewic. A core quantitative coeffect calculus. In Proceedings of ESOP, volume 8410 of LNCS, pages Springer, [5] Manuel M. T. Chakravarty, Gabriele Keller, and Simon Peyton Jones. Associated type synonyms. In Proceedings of 10th International Conference on Functional Programming, pages ACM, [6] Nils Anders Danielsson. Lightweight semiformal time complexity analysis for purely functional data structures. In ACM SIGPLAN Notices, volume 43, pages ACM, [7] Richard A Eisenberg, Dimitrios Vytiniotis, Simon Peyton Jones, and Stephanie Weirich. Closed type families with overlapping equations. In Proceedings of POPL 2014, pages , [8] Dan R. Ghica and Alex I. Smith. Bounded linear types in a resource semiring. In Proceedings of ESOP, volume 8410 of LNCS, pages Springer, [9] David K. Gifford and John M. Lucassen. Integrating functional and imperative programming. In Proceedings of Conference on LISP and func. prog., LFP 86, [10] Pierre Jouvelot and David Gifford. Algebraic reconstruction of types and effects. In Proceedings of the symposium on Principles of Programming Languages, pages ACM, [11] Ohad Kammar, Sam Lindley, and Nicolas Oury. Handlers in action. In Proceedings of the 18th International Conference on Functional Programming, pages ACM, [12] Shin-ya Katsumata. Parametric effect monads and semantics of effect systems. In Proceedings of symposium Principles of Programming Languages, pages ACM, [13] Oleg Kiselyov, Amr Sabry, and Cameron Swords. Extensible effects: an alternative to monad transformers. In Proceedings of 2013 symposium on Haskell, pages ACM, [14] J.R. Lewis, J. Launchbury, E. Meijer, and M.B. Shields. Implicit parameters: Dynamic scoping with static types. In Proceedings of Principles of Programming Languages, page 118. ACM, [15] Sheng Liang, Paul Hudak, and Mark Jones. Monad transformers and modular interpreters. In Proceedings of 22nd symposium on Principles of Programming Languages, pages ACM, [16] Conor McBride. Functional pearl: Kleisli arrows of outrageous fortune. Journal of Functional Programming (to appear), [17] Flemming Nielson and Hanne Nielson. Type and effect systems. Correct System Design, pages , [18] Dominic Orchard, Max Bolingbroke, and Alan Mycroft. Ypnos: declarative, parallel structured grid programming. In Proceedings of 5th workshop on Declarative Aspects of Multicore Programming, pages ACM, [19] Dominic Orchard, Tomas Petricek, and Alan Mycroft. The semantic marriage of monads and effects. arxiv: , [20] Dominic Orchard and Tom Schrijvers. 
Haskell type constraints unleashed. In Functional and Logic Programming, volume 6009/2010, pages Springer Berlin, [21] Tomas Petricek, Dominic A. Orchard, and Alan Mycroft. Coeffects: Unified static analysis of context-dependence. In ICALP (2), volume 7966 of LNCS, pages Springer, [22] Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Geoffrey Washburn. Simple unification-based type inference for GADTs. In Proceedings of ICFP, pages ACM, [23] G. Plotkin and M. Pretnar. A logic for algebraic effects. In Logic in Computer Science, LICS 08, pages IEEE, [24] Philip Wadler and Peter Thiemann. The marriage of effects and monads. ACM Trans. Comput. Logic, 4:1 32, January [25] Brent A Yorgey, Stephanie Weirich, Julien Cretin, Simon Peyton Jones, Dimitrios Vytiniotis, and José Pedro Magalhães. Giving Haskell a promotion. In Proceedings of workshop on Types in language design and implementation, pages ACM, A. Typed value-level list sorting The following gives the value-level definitions of sorting to normalise lists for the set representation, referenced from Section 3. The top-level bubble function is defined by a type class: class Bubbler s s where bubble :: Set s Set s Set (Bubble s s ) instance Bubbler s [ ] where bubble s Empty = s instance (Bubbler s t, Passer (Bubble s t)) Bubbler s (e : t) where bubble s (Ext t) = pass (bubble s t) The individual bubble sort pass is defined also by a type class, so that the embedded constraints in the swapping case are captured: class Passer s where pass :: Set s Set (Pass s) instance Passer [ ] where pass Empty = Empty instance Passer [e ] where pass (Ext e Empty) = Ext e Empty instance (Passer ((Max e f ) : s), OrdH e f ) Passer (e : f : s) where pass (Ext e (Ext f s)) = Ext (minh e f ) (pass (Ext (maxh e f ) s)) B. Value comparison of variable-value mappings Section 4 uses mappings v : t between variables and values. Here, we add the value-level comparison operation. Scoped type variables are used along with the data type Proxy used for giving a value-level proxy to a type of kind k, i.e., Proxy :: Proxy k. select :: forall j k a b. (Chooser (CmpSymbol j k)) Var j Var k a b Select j k a b select x y = choose (Proxy :: (Proxy (CmpSymbol j k))) x y instance (Chooser (CmpSymbol u v)) OrdH (u : a) (v : b) where minh (u : a) (v : b) = Var : (select u v a b) maxh (u : a) (v : b) = Var : (select u v b a) class Chooser (o :: Ordering) where choose :: (Proxy o) p q (Choose o p q) instance Chooser LT where choose instance Chooser EQ where choose instance Chooser GT where choose p q = p p q = p p q = q
A Generic Type-and-Effect System
A Generic Type-and-Effect System Daniel Marino Todd Millstein Computer Science Department University of California, Los Angeles dlmarino,todd}@cs.ucla.edu Abstract Type-and-effect systems are a natural
Deriving a Relationship from a Single Example
Deriving a Relationship from a Single Example Neil Mitchell [email protected] Abstract Given an appropriate domain specific language (DSL), it is possible to describe the relationship between
Type and Effect Systems
Type and Effect Systems Flemming Nielson & Hanne Riis Nielson Department of Computer Science, Aarhus University, Denmark. Abstract. The design and implementation of a correct system can benefit from employing
Subtext: Uncovering the Simplicity of Programming
Subtext: Uncovering the Simplicity of Programming Jonathan Edwards MIT CSAIL 32 Vassar St. Cambridge, MA 02139 [email protected] ABSTRACT Representing programs as text strings
Type
Extending a C-like Language for Portable SIMD Programming
Extending a C-like Language for Portable SIMD Programming Roland Leißa Sebastian Hack Compiler Design Lab, Saarland University {leissa, [email protected] Ingo Wald Visual Applications Research, Intel
Introduction to Objective Caml
Introduction to Objective Caml Jason Hickey DRAFT. DO NOT REDISTRIBUTE. Copyright Jason Hickey, 2008. This book has been submitted for publication by Cambridge University Press. This draft may be used, | http://docplayer.net/241408-Embedding-effect-systems-in-haskell.html | CC-MAIN-2016-44 | refinedweb | 12,271 | 57.5 |
The Uks Balance Of Payments
Balance of payments refers to a statistical record of a country's transactions with the rest of the world over a certain period of time and presented in the form of double-entry bookkeeping. A single country's international transactions consist with three main balance of payments accounts, the current account; the capital account and the official reserve account.
The UK's balance of payments is a statistical statement designed to provide a systematic record of the economic transactions with other countries, described as a system of consolidated accounts in which the accounting entity is the UK economy and the entries refer to economic transactions between residents in the UK and residents of the rest world (The Pink Book, 2010).
This paper presents the analysis of UK's balance of payments for the 10 year period; a comprehensive interpretation includes an examination from the current account balance and capital or financial account balance. Predict the trends of balance of payments in the UK, investigate the economic factors which might account for the changes and link in the balances. Evaluate the current accounts balance over time with examples, and also find solutions for current account deficits.
Current account balance over time
The current account is the most common and easy to understand the balance of payments, which includes all imports and exports of goods and services, other net income from abroad and current transfers to and from abroad, the data of current account provides the main material for economic analysis. If the debits exceed the credits, then a country is running a trade deficit, by contrast, credits exceed the debits, then it means trade surplus.
The current account is divided into four categories: merchandise trade, services, factor income and unilateral transfers (Machiraju, 2009). Trade balance is sensitive to exchanges rates; changes in exchange rate influence the relative prices and change resources or consumption expenditures between the production nd consumption of tradable and non-tradable.
What are the factors caused swings of current account balance? One prominent school believes that the swings result from economic policy misalignments (Obstfeld and Rogoff 2005). Other approaches argue that the swings are caused by events such as differentials in productivity growth (Backus, Henriksen, Lambert and Telmer 2005).
According to Selen and Sarisoy (2005), the swings in current account balances are correlated with real depreciations in all developed and developing countries and transitional, the effect on exchange rate shocks as a factor or cause even stronger in the developed countries. Exchange rate fluctuations are the one factor of swings in current account balance. When a country's currency depreciates in exchange rate will effect on trading partners, this country's exports would rise and become more competitive, the trading prices are cheaper for major partner importers, and by contrast, imports tend to fall, due to the high price, should be improve the trade balance. For instance, Mexico had experienced deficits in trade balance in 1994, over U.S. $4.5 billion per quarter throughout the whole year, due to the peso depreciation. One year later, Mexico started to improve the trade balance, about $7 billion surplus in 1995.
The trade balance may deteriorate after depreciation in short term, however it will tend to improve over time. J curve effect shows that depreciation could worsen current account in short period, inelastic may cause negative swings. Positive swing or negative swing depends on price elasticity.
Another important factor impact on current account balance swings is the concept of economic growth and consumer spending (Michael R. Pakko, 2000); consumers could help economic grow, because consumers will have higher spending on imports, also higher import prices. If consumers choose cheaper local substitutes instead foreign imports, it takes time for local producers to replace import products. UK GDP was fast growing during late 1980s, which increased in domestic residents' consumption and inflation rate; these factors caused the current account deficit. In 1992, the recession of economic improved the current account to surplus. Since financial crisis overwhelmed whole world in 2008, the recession was temporarily rise the deficit in the UK due to domestic residents reduced spending and have huge impact on current account balances.
UK current account deficits
The current account deficit represents a country has been spending and earnings. Analysis the trends are important for economists to understand the domestic economic behaviour, they can evaluate the causes of deficits and find possible solutions.
The UK have been suffering current account deficit since 1984. During the period 1980 to 1983, the current account was surplus. In 1989, deficit reached to £24.2 billion equivalent to 4.6 per cent of Gross Domestic Product. The current account deficit decrease to £0.8 billion in year 1997. 2006 was the most difficult time for the UK, deficits sharply rise to £39.1 billion, and it was the highest record. The current account deficit showed the improvement in 2008, which reduced to £14.4 billion. 2 years later, the deficit was twice higher than 2008, due to financial crisis. However, the current account deficit declined in year 2011 and equal to 1.9 per cent of Gross Domestic Product.
UK current account balance
Source: pinkbook, 2012. £billion
Current account balance as a percentage of GDP
The graph presents the current account as a percentage of GDP, from year 1991 to 2011. In 1991, the deficit was equlevant to 1.7 per cent of GDP, the current account deficit declined to 0.2 percent in year 1998, then dramaticlly fell to 2.7 percent in the following year. After 2000, the rate was fluctuate and sharply recovered in year 2008. It was equal to 1.9 percent in 2011.
Reasons for a current account deficit
Domestic consumers have high demanding on experditure, which will increase in imports and break the balance of current account. Exchange rates may overvalued and the price of domestic products become less attractive than imports products, consumers would spend money on cheaper oversea goods instead domestic goods, exporters become uncompetitive.
If an economy focus on consumer spending rather than exports and with low savings rate will have high current account deficit or negative swings overtime, such as America.
During the economic ression, overseas demand has fallen, and domestic market remains robust.
Solutions of current account deficit
import controls
increase triffs on various oversea goods, foreign products prices will higher then local producers's products, then the demand on foreign imports could decline and consumers would change consuptions to domestic products. Rise triffs lead to reduce the current account deficit and also protect home businesses. Multinational companies could add taxes in their goods, and remain the same selling price, total profits will be less to keep products competitative. However, this is not long term solution, protectionism decrease the exports.
Deflation
If the UK governemnt raises interest rates and taxes, domestic residents will reduce the shopping on overseas goods. Deflationary policies can lead to producers cut the costs and increase exports. But this solution could decline the economic growth and increase the rate of unemployment, the government unlikely to take the high risk on unemployment.
Supply Side Policies
Supply side economics is the branch of economics that how to achieve the productive capacity in the economy. These policies which can improve the current account position, and take competitve advantage on exports. Supply side policies which help the government raise the productivity and shift Aggregate Supply to the right direction.
There are advantages to apply supply side plicies, it will help to make the economy efficent and reduce cost to push inflation. Supply side policies can lower the rate of unemployment. Improve the trade and balance of payments which can make manufacturers increase productivity and exports.
Cite This Essay
To export a reference to this article please select a referencing stye below: | https://www.ukessays.com/essays/economics/the-uks-balance-of-payments-analysis-economics-essay.php | CC-MAIN-2018-30 | refinedweb | 1,302 | 52.9 |
Velo by Wix: Smaller bundle size by importing npm package correctly
If using npm dependencies in the project, then the way of importing code from the package may influence the bundle size. In this note, we consider a few ways of importing modules and try to find the best one
Let's start with a small library uuid that use for the creation of unique IDs. In the documentation, we can see how to use it with ES6 module syntax.
import { v4 as uuid } from 'uuid'; console.log(uuid()); // 687c4d60-1e67-466e-bbf3-b8f4ffa5f540
Above, we import the version 4, but the reality, all library with all functionality gets on our bundle, and the app bundle size grows up +12.8KB (gzip: 5.8KB). It includes all versions of the library and all util methods for validation, parsing, and more else.
Yep, it's will just a dead code in the project, that only doing the bundle size bigger.
What's inside npm package?
Usually, the source code of many popular open-source library hosts on the GitHub repository. It's a great place to looking for documentation, examples and for learning how the work the code under the hood.
On GitHub, we can't understand what is the code gets to the package after the CI cycle. The source code may use the preprocessing before publishing to npm, for example, it could be compiled from TypeScript or Babel.
The better way, it's to explore the published npm package. We can do it with RunKit service. There in RunKit we to able to walk inside the package and found all files in the module that we can use.
Come back to uuid.
I found the
v4 in the
dist folder: uuid/dist/v4.js. Let's just try to import only needed file:
import uuid from 'uuid/dist/v4';
Yes, now we have a smaller bundle size that is +3.0KB (gzip: 2.4KB) against +12.8KB (gzip: 5.8KB) at the start.
This approach for importing to very helpful with another popular library - lodash. I guess you already have had experience work with lodash yet. Lodash, it's a namespace that contains more than 100 utils to work with data. And again, if we want one of the methods from the library, we get a full library into the app bundle.
import _ from 'lodash'; const lang = _.get(customer, 'language.code', 'en');
Bundle size grows +73.5KB (gzip: 29.0KB).
Unfortunately, the named import doesn't work on Velo platform. The next code will get the same result as above.
import { get } from 'lodash';
Still bundle size grows +73.5KB (gzip: 29.0KB).
And here we also can use the same way which we consider with uuid. Import only needed file:
import _get from 'lodash/get';
Bundle size grows +10.9KB (gzip: 4.7KB).
Attention!
For using this approach, you should understand how is work the package. This approach is grad for libraries that is a collection of independent utility (like: uuid, lodash, validator, ramda, underscore, etc) when each method has an atomic functional.
If you support the legacy browser, pay attention to JS syntax in the file (ES5, ES2015).
Conclusion
A few questions to yourself:
Is my favorite package still good enough? Learn the npm packages that you use most often, and don't forget to look at the alternative. With time, even the best solutions are to become old. The Moment.js docs have a great example where authors recommend using some modern packages instead of Moment.js.
Am I need this package? The Velo supports JavaScript until of ES2017 version. Maybe your issue may solve by new JavaScript features without three-party libraries?
Resources
- bundlephobia.com - the great service to query package sizes.
- RunKit - playground to test code.
- Velo: Working with npm Packages | https://shoonia.site/smaller-bundle-size-by-importing-npm-package-correctly | CC-MAIN-2021-49 | refinedweb | 639 | 76.32 |
Hi,
I am trying to build a video retrieval system using cosine similarity. L2 distance could also be used as it could be written as
|| a - b || = 2 - 2 * <a, b>, where
a,
b are both normalized vectors.
Now I have two matrice
A: [N x d],
B: [M x d]
L2 distance can be calculated in PyTorch as
torch.pdist(A, B), cosine similarity as inner product
torch.mm(A, B.transpose(0, 1)). However, I found later to be much slower than the former. Any idea why?
Below is the code I used to do the comparison.
import time import torch import torch.nn.functional as F import numpy as np def compare_l2dist_inner_product_time(n_videos=2000, d=256, n_query=1000, n_runs=5): st_time = time.time() fake_database = F.normalize(torch.randn((n_videos, d), dtype=torch.float32).cuda(), dim=1, p=2) fake_query = F.normalize(torch.randn((n_query, d), dtype=torch.float32).cuda(), dim=1, p=2) print("Construct fake database + query time {}".format(time.time() - st_time)) print("fake_database shape {} fake_query shape {}".format(fake_database.shape, fake_query.shape)) times_l2dist = [] for _ in range(n_runs): st_time = time.time() l2_dist = torch.cdist(fake_query, fake_database, p=2) # (n_query, n_videos) times_l2dist.append(time.time() - st_time) avg_time_l2dist = np.mean(times_l2dist) print("L2 Distance time {}".format(avg_time_l2dist)) times_ip = [] fake_database = fake_database.transpose(0, 1) for _ in range(n_runs): st_time = time.time() inner_product = torch.mm(fake_query, fake_database) # (n_query, n_videos) times_ip.append(time.time() - st_time) avg_time_ip = np.mean(times_ip) print("Inner Product time {}".format(avg_time_ip)) compare_l2dist_inner_product_time()
Output:
Construct fake database + query time 7.20833158493042 fake_database shape torch.Size([2000, 256]) fake_query shape torch.Size([1000, 256]) L2 Distance time 5.9604644775390625e-05 Inner Product time 0.07725939750671387 | https://discuss.pytorch.org/t/cdist-vs-matmul/61682 | CC-MAIN-2022-27 | refinedweb | 277 | 56.32 |
an part 1 I describe how to set up a Flask service on an AWS EC2 instance. In this post I'll set up the server to respond to queries against a SQL database.
Source code for a basic Flask app
Creating a database
1. The data
We'll use
sqlite3 to provide an interface from python to SQL. For this example we'll create a simple database of national parks, the data is here, originally from wikipedia.
A look at the data:
$ head nationalparks.csv Name,Location,Year Established,Area Acadia National Park,Maine,1919,48876.58 National Park of American Samoa,American Samoa,1988,8256.67 Arches National Park,Utah,1971,76678.98 Badlands National Park,South Dakota,1978,242755.94
2. Creating the database
This script populates a database with the data from the file:
import csv import sqlite3 conn = sqlite3.connect('natlpark.db') cur = conn.cursor() cur.execute("""DROP TABLE IF EXISTS natlpark""") cur.execute("""CREATE TABLE natlpark (name text, state text, year integer, area float)""") with open('nationalparks.csv', 'r') as f: reader = csv.reader(f.readlines()[1:]) # exclude header line cur.executemany("""INSERT INTO natlpark VALUES (?,?,?,?)""", (row for row in reader)) conn.commit() conn.close()
3. Accessing the database from Flask
Add the following lines to
flaskapp.py (see part 1). This code handles managing connections to the database and provides a convenient query method.
import csv import sqlite3 from flask import Flask, request, g DATABASE = '/var/www/html/flaskapp/natlpark.db' app.config.from_object(__name__) def connect_to_database(): return sqlite3.connect(app.config['DATABASE']) def get_db(): db = getattr(g, 'db', None) if db is None: db = g.db = connect_to_database() return db @app.teardown_appcontext def close_connection(exception): db = getattr(g, 'db', None) if db is not None: db.close() def execute_query(query, args=()): cur = get_db().execute(query, args) rows = cur.fetchall() cur.close() return rows
4. Add a request handler to show the database
Add the following to
flaskapp.py and restart the server (
sudo apachectl restart). Pointing a browser at
(your public DNS)/viewdb should show the entire database.
@app.route("/viewdb") def viewdb(): rows = execute_query("""SELECT * FROM natlpark""") return '<br>'.join(str(row) for row in rows)
5. Add a query url request handler
To allow for queries on state, add the following to
flaskapp.py and restart the server (
sudo apachectl restart). Pointing a browser at
(your public DNS)/state/{state-name} will return a list of all national parks in that state.
@app.route("/state/<state>") def sortby(state): rows = execute_query("""SELECT * FROM natlpark WHERE state = ?""", [state.title()]) return '<br>'.join(str(row) for row in rows)
6. Note on cross site requests
Later in this series we'll want to query our app from a D3.js graph served from another site. To instruct our Flask server to respond to these requests add the following line to the
/etc/apache2/sites-enabled/000-default.conf, right under
<Directory flaskapp>:
Header set Access-Control-Allow-Origin "*"
Your apache config should now have a block that looks like this:
<Directory flaskapp> Header set Access-Control-Allow-Origin "*" WSGIProcessGroup flaskapp WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory>
- A dynamic D3.js graph powered by Flask and SQL on EC2, Score: 0.997
- Running a Flask app on AWS EC2, Score: 0.997
- Getting csv data from requests to a SQL backed Flask app, Score: 0.992
- A D3.js plot powered by a SQL database, Score: 0.984
- Saving time and space by working with gzip and bzip2 compressed files in python, Score: 0.786 | https://www.datasciencebytes.com/bytes/2015/02/28/using-flask-to-answer-sql-queries/ | CC-MAIN-2018-39 | refinedweb | 594 | 61.63 |
I have two dataframes, ground_truth and prediction (Both are pandas series). Finally, I want to plot all prediction points and all ground_truth points as I already did. What I wanna do, is to plot a line between each prediction and ground_truth point. So that the line is a connection between the prediction point x1,y1 and the ground_truth point x2,y2. For a better understanding I attached an image. The black lines (created via paint) is what I want to do.
This is what I already have:
fig, ax = plt.subplots()
ax.plot(pred,'ro', label='Prediction', color = 'g')
ax.plot(GT,'^', label='Ground Truth', color = 'r' )
plt.xlabel('a')
plt.ylabel('b')
plt.title('test')
plt.xticks(np.arange(-1, 100, 5))
plt.style.use('ggplot')
plt.legend()
plt.show()
I guess the easiest and most understandable solution is to plot the respective lines between
pred and
GT in a loop.
import matplotlib.pyplot as plt import numpy as np plt.rcParams['legend.numpoints'] = 1 #generate some random data pred = np.random.rand(10)*70 GT = pred+(np.random.randint(8,40,size= len(pred))*2.*(np.random.randint(2,size=len(pred))-.5 )) fig, ax = plt.subplots(figsize=(6,4)) # plot a black line between the # ith prediction and the ith ground truth for i in range(len(pred)): ax.plot([i,i],[pred[i], GT[i]], c="k", linewidth=0.5) ax.plot(pred,'o', label='Prediction', color = 'g') ax.plot(GT,'^', label='Ground Truth', color = 'r' ) ax.set_xlim((-1,10)) plt.xlabel('a') plt.ylabel('b') plt.title('test') plt.legend() plt.show() | https://codedump.io/share/tIwwbOspBfO8/1/plot-a-line-between-prediction-and-groundtruth-point-in-matplotlib | CC-MAIN-2018-22 | refinedweb | 267 | 53.17 |
Created attachment 75191 [details] [review]
Add support for systems without syslog.h
I've been working on porting D-bus to the QNX operating system. In addition to the previously-filed bugs #60339 and #60340, I have just a couple more minor patches that are necessary to make it work. In retrospect, it would probably have been better to just file all of this under one bug, instead of splitting it out across multiple tickets. I'll at least post my remaining patches together here.
* The first patch just adds a HAVE_SYSLOG_H variable around the logging code, since there is no syslog.h on QNX. This leaves the QNX port without any logging, but at least the code builds and runs.
* The second fixes a couple problematic header includes. The sys/fcntl.h -> fcntl.h one should be pretty uncontroversial. The stdint.h-before-inotify.h one is really gross, though, and I apologize for inflicting such an awful thing on your codebase. I'm not really sure what else to do, though--QNX's copy of inotify.h is literally just broken, and will fail to compile unless you've included stdint.h beforehand. If anyone has a better idea of how to deal with this, I'm all ears.
* The final point is one I'll need some input on. QNX has the ability to pass file descriptors across Unix-domain sockets just fine, but it seems to impose some arbitrary size limitations when doing so. Specifically, if the cmsg_control field is longer than 2016 bytes, then a call to recvmsg() will just outright fail with EMSGSIZE. This means that the current behavior, which allocates space for 1024 file descriptors in dbus/dbus-message.c and bus/config-parser.c, will prevent socket calls from working on QNX, because that causes the control field to be too big. I can re-hardcode the numbers down to something like 256, and then everything works fine, but this obviously isn't ideal. Would there be some way to make these limits a bit more easily-configurable, so that different OS'es can specify different limits? Or should I just put a big ugly #ifdef __QNX__ in there?
Created attachment 75192 [details] [review]
Header fixes for QNX
Comment on attachment 75192 [details] [review]
Header fixes for QNX
Review of attachment 75192 [details] [review]:
-----------------------------------------------------------------
This should be two patches. Whenever you write a commit message with bullet points in, ask yourself whether that's the case :-)
::: bus/dir-watch-inotify.c
@@ +29,5 @@
> #include <fcntl.h>
> +#ifdef __QNX__
> +/* QNX's inotify is broken, and requires stdint.h to be manually included first */
> +#include <stdint.h>
> +#endif
Ugh... but OK I suppose. Actually, I'd prefer this to be unconditional, although still with the comment.
I think it would be worth adding a comment
/* Be careful, this file is not Linux-only: QNX also uses it */
since until 5 minutes ago, I had no idea any other platform had inotify...
::: dbus/sd-daemon.c
@@ +32,4 @@
> #include <sys/stat.h>
> #include <sys/socket.h>
> #include <sys/un.h>
> +#include <fcntl.h>
This part needs to go upstream to systemd, which is where that file comes from; then when it has been merged, ask me to update the entire file from systemd. It's one of the few files in systemd that's designed to be portable.
I was slightly concerned about <fcntl.h> being portable, but we use it from other files already, so, fine.
Comment on attachment 75191 [details] [review]
Add support for systems without syslog.h
Review of attachment 75191 [details] [review]:
-----------------------------------------------------------------
::: configure.ac
@@ +764,5 @@
> dnl needed on darwin for NAME_MAX
> AC_CHECK_HEADERS(sys/syslimits.h)
>
> +dnl QNX doesn't have syslog.h
> +AC_CHECK_HEADERS(syslog.h)
We already have an AC_CHECK_HEADERS for syslog.h, since commit a4e9dc67, so this part should be unnecessary.
::: dbus/dbus-sysdeps-util-unix.c
@@ +472,4 @@
> void
> _dbus_system_logv (DBusSystemLogSeverity severity, const char *msg, va_list args)
> {
> +#ifdef HAVE_SYSLOG_H
If you don't have a syslog.h, you should probably log to stderr instead, in the hope that that goes somewhere useful?
I would suggest hoisting the vsyslog() call up, so it's something like this:
#ifdef HAVE_SYSLOG_H
int flags;
switch (severity)
...
}
vsyslog (...);
#endif
#if !defined(HAVE_SYSLOG_H) || !HAVE_DECL_LOG_PERROR
{
/* ... use vfprintf ... */
}
#endif
if (...)
exit (1);
(Now that I look at that code, I notice commit e98107548 was incomplete: HAVE_DECL_LOG_PERROR is always defined to 0 or 1, despite the usual Autoconf convention of being defined to 1 or undefined, so the previous implementation of this function was wrong. Thanks, Autoconf. I think the version I suggest here is right.)
(In reply to comment #0)
> QNX has the ability to
> pass file descriptors across Unix-domain sockets just fine, but it seems to
> impose some arbitrary size limitations when doing so.
Ugh.
> Would there be some way to make these limits a bit
> more easily-configurable, so that different OS'es can specify different
> limits? Or should I just put a big ugly #ifdef __QNX__ in there?
I believe the limit used by the dbus-daemon is what's in session.conf or system.conf (as appropriate), with the limits in the config parser being the defaults. The system bus currently uses those defaults, but:
bus/session.conf.in:
<limit name="max_message_unix_fds">4096</limit>
D-Bus clients can change the limit with dbus_connection_set_max_message_unix_fds(), although the default is, again, 1024.
The hard limit, DBUS_MAXIMUM_MESSAGE_UNIX_FDS, is defined by the protocol (and is ridiculously large, about 10**9 - you'll never actually reach it).
You could perhaps define DBUS_DEFAULT_MESSAGE_UNIX_FDS from configure.ac, setting it to 1024 on normal platforms and 256 on QNX? Something like this:
AS_CASE([$host_os],
[qnx*],
[default_message_unix_fds=256],
[*],
[default_message_unix_fds=1024])
AC_DEFINE_UNQUOTED([DBUS_DEFAULT_MESSAGE_UNIX_FDS],
[$default_message_unix_fds],
[Default for dbus_connection_get_max_message_unix_fds()])
If you do that, make sure it gets a value under CMake as well. I don't mind if cmake/config.h.cmake just hard-codes "#define DBUS_DEFAULT_MESSAGE_UNIX_FDS 1024", I don't think anyone is likely to use the CMake build system on weird platforms (it's mostly there for Windows' benefit).
(In reply to comment #4)
> bus/session.conf.in:
> <limit name="max_message_unix_fds">4096</limit>
I wouldn't object to setting this to @default_message_unix_fds@ too (you'd have to AC_SUBST it if you do that) - the session bus configuration is designed to be "nearly unlimited", but anyone who's sending more than 1024 fds in a single message is doing something very strange indeed.
Or you could make this a separate parameter, I suppose.
Created attachment 75198 [details] [review]
Fix inotify usage for QNX
Yeah I was as surprised as you to find inotify.h on there--it looks like they just lifted the entire API and implemented it on their system. Would have been nice if they'd gotten the header includes right, but oh well.
I've split out the inotify part of that header patch, and made the changes you mentioned. I'll submit the systemd part of the change to that project (although I doubt anybody's actually going to be using systemd on QNX, we might as well get it right anyway).
Comment on attachment 75198 [details] [review]
Fix inotify usage for QNX
Review of attachment 75198 [details] [review]:
-----------------------------------------------------------------
Seems reasonable, I'll apply it soon.
(In reply to comment #6)
> I'll submit the systemd part of the change to that project
> (although I doubt anybody's actually going to be using systemd on QNX, we
> might as well get it right anyway).
systemd itself is explicitly Linux-only, but sd-daemon.h is meant to be portable.
I notice that in master, we've updated that file, and it's now:
#ifdef __BIONIC__
#include <linux/fcntl.h>
#else
#include <sys/fcntl.h>
#endif
From <> it appears that Android (Bionic libc) also has <fcntl.h>; POSIX requires it, and dbus (which is fairly portable) requires it in other Unix-specific files. So, this ugly conditional can probably be dropped in favour of doing the right thing. (Yay!)
Where are you generating patches relative to? For a port to an operating system where D-Bus has clearly never been tested before, I think I'd prefer it if you based things on master (which will be 1.7.0 when I get round to releasing it).
Created attachment 75210 [details] [review]
Add support for systems without syslog.h
Created attachment 75211 [details] [review]
Make default number of Unix fds configurable
I had been working off of master, but the checkout was a couple weeks old. I've reset to today's tip for these patches.
Actually, dbus apparently has been used on QNX before--some contacts here tell me that Harman Inc. used it quite heavily on their QNX-based automotive head units. I found a bunch of old mailing list posts by a guy there named Glenn Schmottlach, who clearly got it working, although I couldn't ever find any patches posted, so all of these patches are my own attempts to fix the build/runtime errors that I've come across. I haven't yet done super-extensive testing, so you may be seeing me again, but these patches seem to be sufficient to at least get it building and running in a basic capacity.
(In reply to comment #11)
> Make default number of Unix fds configurable
This patch doesn't touch session.conf.in. Does that mean that "dbus-daemon --session" is unusable on QNX (because it can't receive the fds), unless you edit session.conf?
Fixing that would probably be easiest done via AC_SUBST - session.conf is configure-generated.
I don't think I really mind if the limit has to drop from 4096 to 1024 in the process - user processes on my Debian laptop are soft-limited to 1024 fds per process and hard-limited to 4096, and that's for the entire process, not just one message! I would expect messages with more than one fd to be vanishingly rare.
Cc'ing Colin and Thiago to double-check that that functionality change is OK, though.
(In reply to comment #13)
> (In reply to comment #11)
> > Make default number of Unix fds configurable
While I'm nitpicking that patch, I would be inclined to make the first line of the commit message something like "Set default maximum number of Unix fds according to OS", and make this change:
- a hardcoded constant, to a variable which is determined at
+ a hardcoded constant, to a macro which is determined at
(It's still a constant in C terms; what you're changing is that it becomes an OS-dependent constant.)
Comment on attachment 75198 [details] [review]
Fix inotify usage for QNX
Applied for 1.7.0, thanks.
Comment on attachment 75210 [details] [review]
Add support for systems without syslog.h
Applied for 1.7.0, thanks.
Created attachment 75323 [details] [review]
Set default maximum number of Unix fds according to OS
Thus far I have only been using the system bus (this is an embedded system with no clear concept of a session login), but you're right that it makes sense to get the session configuration fixed as well.
Just thought I'd ping this real quick. I believe my previously uploaded patch addressed the remaining concerns that you mentioned. Is there anything else necessary before this can go in?
Patch applied for 1.7.2, thanks.
I believe this leaves:
1. The fd-passing stuff on Bug #60340
2. The fcntl.h in sd-daemon.h (I opened systemd Bug #63423)
Anything else?
That should do it, as far as I can tell. Thanks for opening the systemd bug, BTW--I did indeed say that I would do that, and then promptly forgot all about it. :)
(In reply to comment #20)
> That should do it, as far as I can tell.
Closing, then.
Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy. | https://bugs.freedesktop.org/show_bug.cgi?id=61176 | CC-MAIN-2021-43 | refinedweb | 1,996 | 64.3 |
I'm trying to update records in a mysqlite database, and all works well, except when I want to pass the type of update as a parameter to a function so I can refactor the code. The highlighted code in the large code segment is where the problem lies. I pass in three parameters to the function, and python just errors pointing at the first question mark. If you look at the code below that sets up the parameters to pass to the function - the operat_type name can be set to DESC,COMMAND or NAME.
I have been trying all sort's of combinations to get that parameter to take - but nothing works. Here is a line that does work, when I don't try to put a variable in the type area:
[b]place.execute('UPDATE srvs SET DESC=? WHERE ID=?', (new_desc_name,id_name))[/b]
Now I'm doing something wrong, so any guidance to correct this will help me progress
Thanks in advance
import sys from sqlite3 import dbapi2 as sqlite db = sqlite.connect('srvs-data.db') place = db.cursor() def update_record(one,two,three): print("New name for server {} to be put into DB {} {}:".format(one,two,three)) [b]place.execute('UPDATE srvs SET ?=? WHERE ID=?', (one,two,three))[/b] operat_type = str(input("comand type")) new_desc_name = str(input("New description for record {}: ".format(id_name))) update_record(operat_type,new_desc_name,id_name) | http://www.dreamincode.net/forums/topic/351857-trying-to-use-sqlite3-module-and-not-sure-of-my-syntax-mistakes/page__pid__2040487__st__0 | CC-MAIN-2016-18 | refinedweb | 230 | 63.49 |
[
]
Pinaki Poddar commented on OPENJPA-1612:
----------------------------------------
Rick,
In my view, this is not a change in the right direction.
OpenJPA used to have a reasonably advanced support for untyped relations. These capabilities
are described in [1]
Now in the past, I had used these powerful features to demonstrate how generically typed structures
can be modeled [2] in OpenJPA (that blog unfortunately has been eaten by a very powerful
company and one can merely find its indirect cached references by searching for 'Persistence
of Generic Graph Pinaki Poddar" ;).
Recently I noticed that those powerful type system is weakened (meaning those neat generic
model example does not work with OpenJPA 2.0).
Support for generically typed domain model is a powerful construct and OpenJPA was quite capable
of meeting that demand. Hence I consider OpenJPA 2.0 has regressed on that aspect.
I have not investigated deeply, but my cursory look at the changes suggest that cause of the
regression is more at the surface and can be corrected at ease.
In view of that observation, I see this current commit as a step backward. And I hope that
the original committer will consider rolling the change back.
[1] will provide the user sufficient choices on how to persist Object o -- when it is assigned
to a Persistence Capable entity, or merely a Serializable at different levels of OpenJPA type
support.
[1]
[2]
> Mapping an unsupported type
> ---------------------------
>
> Key: OPENJPA-1612
> URL:
> Project: OpenJPA
> Issue Type: Improvement
> Components: kernel
> Affects Versions: 1.2.2, 2.0.0, 2.1.0
> Reporter: Rick Curtis
> Assignee: Rick Curtis
> Priority: Minor
> Fix For: 2.1.0
>
>
> As discussed on the dev mailing list [1]...
> I found that the following mapping:
> @Entity
> public class AnnoTest1 {
> @ManyToOne
> Object o;
> ...
> }
> This results in a warning message [2], but it is allowed. This JIRA will be used to detect
this condition and fail fast.
> [1]
> [2] 297 TestConv WARN [main] openjpa.MetaData - OpenJPA cannot map field "test.AnnoTest1.o"
efficiently. It is of an unsupported type. The field value will be serialized to a BLOB by
default.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/openjpa-dev/201005.mbox/%3C31045854.4221273524989374.JavaMail.jira@thor%3E | CC-MAIN-2015-40 | refinedweb | 368 | 55.95 |
»
Products
»
Tomcat
Author
Tomcat 4 : How to deploy and run JSP and Servlets
Dharmin Desai
Ranch Hand
Joined: Feb 28, 2002
Posts: 81
posted
Jul 13, 2002 05:40:00
0
Dear friends,
After unpacking
TOMCAT
4 - binary , i hv set CATALINA_HOME,JAVA_HOME and now tomcat-4 is running successfully (OS is win2k)
But i don't know where to put my JSPs,
servlets
and beans(not
EJB
) to access it through tomcat - 4
As per my knowledge it need to manipulate server.xml and web.xml file also
Please help - immediately
Thanks in advanced
Dharmin
SCJP2 (93%),SCWCD(88%)<br />-------------------------------<br />Never under estimate yr self, just represent yr profile in proper manner.
Dan Chisholm
Ranch Hand
Joined: Jul 02, 2002
Posts: 1865
posted
Jul 14, 2002 00:43:00
0
There are two ways to deploy a servlet--an old way and a new way. The new way is to deploy your
servlet
as a web application. If you use the web application approach, then you just have to copy your webapp into the webapps directory and restart tomcat.
A good way to learn about web applications is to read chapter nine of the
Java
Servlet 2.3 Specification.
Download Java Servlet 2.3 Specification
Dan Chisholm<br />SCJP 1.4<br /> <br /><a href="" target="_blank" rel="nofollow">Try my mock exam.</a>
Anthony Villanueva
Ranch Hand
Joined: Mar 22, 2002
Posts: 1055
posted
Jul 14, 2002 19:58:00
0
Let's create a context step by step:
#1 create your context root directory
Let's suppose the name of your Tomcat install directory is catalina.
In C:\catalina\webapps create a new folder, say
myContext
. Inside myContext, create a
WEB-INF
folder (all caps). Inside WEB-INF, create a
classes
and
lib
folders.
Go to C:\catalina\conf and copy the web.xml file there to C:\catalina\webapps\myContext\WEB-INF. Edit this web.xml so that you will have
<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" ""> <web-app> <welcome-file-list> <welcome-file>index.html</welcome-file> <welcome-file>index.htm</welcome-file> <welcome-file>index.jsp</welcome-file> </welcome-file-list> </web-app>
Go to C:\catalina\conf and edit the server.xml file. Look for this entry:
<!-- Tomcat Root Context --> <!-- <Context path="" docBase="ROOT" debug="0"/> -->
Under it, insert this:
<!-- User Defined Contexts --> <Context path="/myContext" docBase="myContext" debug="0" reloadable="0"/>
Putting <!-- User Defined Contexts --> is strictly unnecessary, of course, but I find it helpful to find my contexts.
Now all we need to do is
test
the setup. Create a simple
JSP
called index.jsp:
<html> <head> <title>Index Page</title> </head> <body> <center><h1>Index Page</h1></center> </body> </html>
and put it in C:\catalina\webapps\myContext.
Go to C:\catalina\bin and type startup to start Tomcat. A DOS window will appear with the following message:
Starting service Tomcat-Standalone Apache Tomcat/4.0.3 Starting service Tomcat-Apache Apache Tomcat/4.0.3
indicating Tomcat started normally.
Open a browser and type in this URL:
If your index page shows up, you have defined your context successfully.
Anthony Villanueva
Ranch Hand
Joined: Mar 22, 2002
Posts: 1055
posted
Jul 14, 2002 20:09:00
0
#2 Create a servlet and deploy it
Code a simple servlet for testing purposes:
import java.io.*; import javax.servlet.*; import javax.servlet.http.*; public class TemplateServlet extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { response.setContentType("text/plain"); PrintWriter out = response.getWriter(); out.println("<HTML><BODY>"); out.println("Test"); out.println("</BODY></HTML>"); } }
Put TemplateServlet.java in C:\catalina\webapps\myContext\WEB-INF\classes.
Get your favorite IDE, compile it, so that the resulting TemplateServlet.class is in the same folder.
(At this point some will argue that there is no need to put source code inside the classes folder, that you can use the javac -d option, etc. These are all valid points, but I'm trying to keep it simple now.)
Using the browser, type in this URL:
and you should get Test.
Note that we have NOT edited the web.xml file.
Anthony Villanueva
Ranch Hand
Joined: Mar 22, 2002
Posts: 1055
posted
Jul 14, 2002 20:23:00
0
# registering the servlet in the web.xml file
Go to C:\catalina\webapps\myContext\WEB-INF and edit the web.xml so that we have:
<-mapping> <servlet-name>template</servlet-name> <url-pattern>/test</url-pattern> </servlet-mapping> </web-app>
Use only Notepad or DOS edit. Do NOT use WordPad or other editors as it will change the formatting.
We have now "registered" the TemplateServlet with Tomcat with the alias
template
and it has the servlet path
/test
.
Please note that if you edit any Tomcat config file like web.xml or server.xml you have to restart Tomcat. After shutting Tomcat down, do not start up again immediately, since it is still releasing resources. The most common error of this type is a bind exception on the 8080 HTTP port that Tomcat uses.
After restarting Tomcat, type in this URL now:
and you should get the same servlet. Alternatively, you could use its alias, e.g
Anthony Villanueva
Ranch Hand
Joined: Mar 22, 2002
Posts: 1055
posted
Jul 16, 2002 00:15:00
0
#4 create a JavaBean and deploy it
Now, let's create a simple JavaBean that works with a JSP and a servlet. We will make:
1. a main.html that takes a
String
parameter and forwards it to
2. a mainController.jsp that instantiates a
JavaBean and populates it with the String parameter, then forwards it to a servlet
3. a JavaBean called InputBean that has a property called "input"
4. an OutputServlet that takes the property from InputBean and displays it on the browser.
For main.html we have:
<html> <head> <title>Main Page</title> </head> <body> <center><h1>Main Page</h1></center> <form action="/myContext/mainController.jsp" method="POST"> Request Parameter: <input type="text" name="input"> <input type="submit"> </body> </html>
We'll put this html in the app root, C:\catalina\webapps\myContext.
For the JSP we have:
<%@ page <jsp:setProperty <jsp:forward
For more information on JSP tags, refer to the Sun
link
. Basically, since I will create the bean inside a package I will import it using the page directive, instantiate it with useBean, populate its "input" property with setProperty, and then finally forward it to the OutputServlet with servlet URL "/output". We'll also put this JSP in the app root, C:\catalina\webapps\myContext.
For the JavaBean, go to C:\catalina\webapps\myContext\WEB-INF\classes and create a folder named beans. Inside beans, create a JavaBean named InputBean:
package beans; public class InputBean { private String input = "NO INPUT"; public InputBean() { } public String getInput() { return input; } public void setInput(String i) { input = i; } }
Note that the public no-args constructor, as well the private fields and public accessor methods are part of the JavaBean specs.
Finally, we create the OutputServlet:
import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import beans.*; public class OutputServlet extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { InputBean inputBean = (InputBean) request.getAttribute("inputBean"); String input = inputBean.getInput(); response.setContentType("text/plain"); PrintWriter out = response.getWriter(); out.println("<HTML><BODY>"); out.println("Your request parameter is: " + input); out.println("</BODY></HTML>"); } public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { doGet(request, response); } }
Note that we instantiate the actual JavaBean, retrieve its property and then display it to the client browser. Of course, this servlet resides in C:\catalina\webapps\myContext\WEB-INF\classes
We must register this servlet since the JSP uses a servlet mapping to forward the request. We must make the following changes to the web.xml file:
<> <servlet-name>output</servlet-name> <servlet-class>OutputServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>template</servlet-name> <url-pattern>/test</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>output</servlet-name> <url-pattern>/output</url-pattern> </servlet-mapping> </web-app>
Note that all the <servlet> elements precede the <servlet-mapping> elements. This is not mere coincidence.
Finally, to "run" this webapp, simply type the URL
Reuben Cleetus
Ranch Hand
Joined: Jul 13, 2001
Posts: 50
posted
Jul 16, 2002 13:51:00
0
Excellent answer(s) Anthony!
Dharmin Desai
Ranch Hand
Joined: Feb 28, 2002
Posts: 81
posted
Jul 21, 2002 00:34:00
0
Anthony , u r simply gr8 !
I mean yr answers were complete and can not have counter quetions !
Would like to be yr friend and would like to take yr further guidence.
Please note my email - id.
[email protected]
Dharmin.
I agree. Here's the link:
subject: Tomcat 4 : How to deploy and run JSP and Servlets
Similar Threads
Simpler servlet runner?
Example Context doesnot respond (Jboss With tomcat)
Migrated to tomcat 5
standard CLASSPATH
problems with tomcat-4.0 and Oracle
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/81942/Tomcat/Tomcat-deploy-run-JSP-Servlets | CC-MAIN-2015-22 | refinedweb | 1,520 | 57.06 |
package HTML::DOM;

# If you are looking at the source code (which you are obviously doing
# if you are reading this), note that '# ~~~' is my way of marking
# something to be done still (except in this sentence).

use 5.008003;
use strict;
use warnings;

use Carp 'croak';
use HTML::DOM::Element;
use HTML::DOM::Exception 'NOT_SUPPORTED_ERR';
use HTML::DOM::Node 'DOCUMENT_NODE';
use Scalar::Util 'weaken';
use URI;

our $VERSION = '0.058';
our @ISA = 'HTML::DOM::Node';

require HTML::DOM::Collection;
require HTML::DOM::Comment;
require HTML::DOM::DocumentFragment;
require HTML::DOM::Implementation;
require HTML::DOM::NodeList::Magic;
require HTML::DOM::Text;
require HTML::Tagset;
require HTML::DOM::_TreeBuilder;

use overload fallback => 1,
    '%{}' => sub {
        my $self = shift;
        #return $self; # for debugging
        $self->isa(scalar caller) || caller->isa('HTML::DOM::_TreeBuilder')
            and return $self;
        $self->forms;
    };

=head1 NAME

HTML::DOM - A Perl implementation of the HTML Document Object Model

=head1 VERSION

Version 0.058 (alpha)

B<WARNING:> This module is still at an experimental stage. The API is
subject to change without notice.

=head1 SYNOPSIS

=head1 DESCRIPTION

L<CSS::DOM>. This list corresponds to CSS::DOM versions 0.02 to 0.14.

=for comment
Level 2 interfaces not yet included: Range, Traversal

=head1 METHODS

=head2 Construction and Parsing

=over 4

=item $tree = new HTML::DOM %options;

This class method constructs and returns a new HTML::DOM object. The
C<%options>, which are all optional, are as follows:

=over 4

=item url

The value that the C<URL> method will return. This value is also used by
the C<domain> method.

=item referrer

The value that the C<referrer> method will return

=item response

An HTTP::Response object. This will be used for information needed for
writing cookies. It is expected to have a reference to a request object
(accessible via its C<request> method--see L<HTTP::Response>). Passing a
parameter to the 'cookie' method will be a no-op without this.

=item weaken_response

If this is passed a true value, then the HTML::DOM object will hold a weak
reference to the response.

=item cookie_jar

An HTTP::Cookies object. As with C<response>, if you omit this, arguments
passed to the C<cookie> method will be ignored.

=item charset

The original character set of the document. This does not affect parsing
via the C<write> method (which always assumes Unicode). C<parse_file> will
use this, if specified, or L<HTML::Encoding> otherwise.
L<HTML::DOM::Form>'s C<make_request> method uses this to encode form data
unless the form has a valid 'accept-charset' attribute.

=back

If C<referrer> and C<url> are omitted, they can be inferred from
C<response>.

=cut
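
=pod

For example, a document might be created and parsed something like this
(the URL, referrer, charset and file name here are illustrative only):

  use HTML::DOM;

  my $doc = new HTML::DOM
          url      => 'http://example.com/doc.html',
          referrer => 'http://example.com/',
          charset  => 'iso-8859-1';

  $doc->parse_file('doc.html');

=cut
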
When the element is closed, if it # is closed by an end tag, we simply pop it off the cf array. If # it is implicitly closed we pop it off and also make it the # ‘magic form’ (_HTML_DOM_mg_f). When we encounter a form field, # we give it a magic association with the form if the cf # stack is empty. package HTML::DOM::Element::HTML; our @ISA = qw' HTML::DOM::Element HTML::DOM::_TreeBuilder'; use Scalar::Util qw 'weaken isweak'; # I have to override this so it doesn't delete _HTML_DOM_* attri- # butes and so that it doesn’t rebless the object. sub elementify { my $self = shift; my %attrs = map /^[a-z_]*\z/ ? () : ($_ => $self->{$_}), keys %$self; my @weak = grep isweak $self->{$_}, keys %$self; $self->SUPER::elementify; %$self = (%$self, %attrs); # this invigorates feeble refs weaken $self->{$_} for @weak; } sub new { my $tb; # hafta declare it separately so the closures can # c it ($tb = shift->HTML::DOM::_TreeBuilder::new( element_class => 'HTML::DOM::Element', 'tweak_~text' => sub { my ($text, $parent) = @_; # $parent->ownerDocument will be undef if # $parent is the doc. $parent->splice_content( -1,1, ($parent->ownerDocument || $parent) ->createTextNode($text) ); $parent->content_offset( $$tb{_HTML_DOM_tb_c_offset} ); }, 'tweak_*' => sub { my($elem, $tag, $doc_elem) = @_; $tag =~ /^~/ and return; if( $tag eq 'link' ) { HTML'DOM'Element'Link'_reset_style_sheet( $elem ); } # If a form is being closed, determine # whether it is closed implicitly and set # the current form and magic form # accordingly. if($tag eq 'form') { pop @{$$doc_elem{_HTML_DOM_cf}||[]}; delete $$doc_elem{_HTML_DOM_etif} or $$doc_elem{_HTML_DOM_mg_f} = $elem } # If a formie is being closed, create a # magic association where appropriate. if(!$$doc_elem{_HTML_DOM_no_mg} and $tag =~ /^(?: button|(?: fieldse|inpu|(?:obj|sel)ec )t|label|textarea )\z/x and $$doc_elem{_HTML_DOM_mg_f} and !$$doc_elem{_HTML_DOM_cf} ||!@{$$doc_elem{_HTML_DOM_cf}}) { $elem->form( $$doc_elem{_HTML_DOM_mg_f} ); $doc_elem->ownerDocument-> magic_forms(1); } my $event_offsets = delete $elem->{_HTML_DOM_tb_event_offsets} or return; _create_events( $doc_elem, $elem, $event_offsets ); }, )) ->ignore_ignorable_whitespace(0); # stop eof()'s cleanup $tb->store_comments(1); # from changing an $tb->unbroken_text(1); # necessary, con- # elem_han- # sidering what # dler's view # _tweak_~text does # of the tree # Web browsers preserve whitespace, at least from the point # of view of the DOM; but the main reason we are using this # option is that a parser for innerHTML doesn’t know # whether the nodes will be inserted in a <pre>. no_space_compacting $tb 1; $tb->handler(text => "text", # so we can get line "self, text, is_cdata, offset"); # numbers for scripts $tb->handler(start => "start", "self, tagname, attr, attrseq, offset, tokenpos"); $tb->handler((declaration=>)x2,'self,tagname,tokens,text'); $tb->{_HTML_DOM_tweakall} = $tb->{'_tweak_*'}; my %opts = @_; $tb->{_HTML_DOM_no_mg} = delete $opts{no_magic_forms}; # used by an element’s innerHTML # We have to copy it like this, because our circular ref- # erence is thus: $tb -> object -> closure -> $tb # We can’t weaken $tb without a copy of it, because it is # the only reference to the object. 
my $life_raft = $tb; weaken $tb; $tb; } sub start { return shift->SUPER::start(@_) if @_ < 6; # shirt-çorcuit my $tokenpos = pop; my $offset = pop; my %event_offsets; my $attr_names = pop; for(0..$#$attr_names) { $$attr_names[$_] =~ /^on(.*)/is and $event_offsets{$1} = $$tokenpos[$_*4 + 4] + $offset; } my $elem = (my $self = shift)->SUPER::start(@_); $_[0] eq 'form' and push @{ $$self{_HTML_DOM_cf} ||= [] }, $elem; return $elem unless %event_offsets; if(!$HTML::Tagset::emptyElement{$_[0]}) { # container $$elem{_HTML_DOM_tb_event_offsets} = \%event_offsets; } else { _create_events( $self, $elem, \%event_offsets, ); } return $elem; } sub _create_events { my ($doc_elem,$elem,$event_offsets) = @_; defined(my $event_attr_handler = $doc_elem->ownerDocument->event_attr_handler) or return; for(keys %$event_offsets) { my $l = &$event_attr_handler( $elem, $_, $elem->attr("on$_"), $$event_offsets{$_} ); defined $l and $elem->event_handler ( $_, $l ); } } sub text { $_[0]{_HTML_DOM_tb_c_offset} = pop; shift->SUPER::text(@_) } sub insert_element { my ($self, $tag) = (shift, @_); if((ref $tag ? $tag->tag : $tag) eq 'tr' and $self->pos->tag eq 'table') { $self->insert_element('tbody', 1); } $self->SUPER::insert_element(@_); } sub end { my $self = shift; # If this is a form, record that we’ve seen an end tag, so # that this does not become a ‘magic form’. ++$$self{_HTML_DOM_etif} # end tag is 'form' if $_[0] eq 'form'; # Make sure </t[hd]> cannot close a cell outside the cur- # rent table. $_[0] =~ /^t[hd]\z/ and @_ = (\$_[0], 'table'); # HTML::TreeBuilder expects the <html> element to be the # topmost element, and gets confused when it’s inside the # ~doc. It sets _pos to the doc when it encounters </html>. # This works around that. my $pos = $self->{_pos}; my @ret = $self->SUPER::end(@_); $self->{_pos} = $pos if ($self->{_pos}||return @ret)->{_tag} eq '~doc'; @ret; # TB relies on this retval } sub declaration { my($self,$tagname,$tokens,$source) = @_; return unless $tagname eq 'doctype' and my $parent = $self->parent; package HTML::DOM; # bypass overloading $parent->{_HTML_DOM_doctype} = $source unless defined $parent->{_HTML_DOM_doctype}; return unless @$tokens > 3; for ($self->{_HTML_DOM_version} = $tokens->[3]){ s/^['"]// and s/['"]\z//; } } sub element_class { 'HTML::DOM::Element' } # HTMLHtmlElement interface sub version { shift->_attr('version' => @_) } } # end of special TreeBuilder package sub new { my $self = shift->SUPER::new('~doc'); my %opts = @_; $self->{_HTML_DOM_url} = $opts{url}; # might be undef $self->{_HTML_DOM_referrer} = $opts{referrer}; # might be undef if($opts{response}) { $self->{_HTML_DOM_response} = $opts{response}; if(!defined $self->{_HTML_DOM_url}) {{ $self->{_HTML_DOM_url} = ($opts{response}->request || last) ->url; }} if(!defined $self->{_HTML_DOM_referrer}) {{ $self->{_HTML_DOM_referrer} = ($opts{response}->request || last) ->header('Referer') }} if($opts{weaken_response}) { weaken $self->{_HTML_DOM_response} } } $self->{_HTML_DOM_jar} = $opts{cookie_jar}; # might be undef $self->{_HTML_DOM_cs} = $opts{charset}; $self; } =item $tree->elem_handler($elem_name => sub { ... }) If you call this method first, then, when the DOM tree is in the process of being built (as a result of a call to C<write> or C<parse_file>), the subroutine will be called after each C<: L<HTML::DOM::Element>'s L<C<content_offset>|HTML::DOM::Element/content_offset> method might come in handy for reporting line numbers for script errors.) 
=cut sub elem_handler { my ($self,$elem_name,$sub) = @_; # ~~~ temporary; for internal use only: @_ < 3 and return $$self{_HTML_DOM_nih}{$elem_name}; $$self{_HTML_DOM_nih}{$elem_name} = $sub; # nih = node inser- # tion handler my $h = $self->{_HTML_DOM_elem_handlers}{$elem_name} = sub { # I can’t put $doc_elem outside the closure, because # ->open replaces it with another object, and we’d be # referring to the wrong one. my $doc_elem = $_[2]; $doc_elem->{_HTML_DOM_tweakall}->(@_); $self->_modified; # in case there are node lists hanging # around that the handler references &$sub($self, $_[0]); # See the comment in sub write. (my $level = $$self{_HTML_DOM_buffered}); if( $level and ($level -= 1, 1) and $$self{_HTML_DOM_p} and $$self{_HTML_DOM_p}[$level] ) { $$self{_HTML_DOM_p}[$level]->eof; $level ? --$#{$$self{_HTML_DOM_p}} : delete $$self{_HTML_DOM_p}; } }; if(my $p = $$self{_HTML_DOM_parser}) { $$p{"_tweak_$elem_name"} = $h } weaken $self; return; } =item C<undef> or an empty list on failure. Upon success, it should return just the CSS code, if it has been decoded (and is in Unicode), or, if it has not been decoded, the CSS code followed by C<< decode => 1 >>. See L<CSS::DOM/STYLE SHEET ENCODING> for details on when you should or should not decode it. (Note that HTML::DOM automatically provides an encoding hint based on the HTML document.) HTML::DOM passes the result of the url fetcher to L<CSS::DOM> and turns it into a style sheet object accessible via the link element's L<C<sheet>|HTML::DOM::Element::Link/sheet> method. =cut sub css_url_fetcher { my $old = (my $self = shift)->{_HTML_DOM_cuf}; $self->{_HTML_DOM_cuf} = shift if @_; $old||(); } =item $tree->write(...) (DOM method) This parses the HTML code passed to it, adding it to the end of the document. It assumes that its input is a normal Perl Unicode string. Like L<HTML::TreeBuilder>'s C<parse> method, it can take a coderef. When it is called from an an element handler (see C<elem_handler>, above), the value passed to it will be inserted into the HTML code after the current element when the element handler returns. (In this case a coderef won't do--maybe that will be added later.) If the C<close> method has been called, C<write> will call C<open> before parsing the HTML code passed to it. =item $tree->writeln(...) (DOM method) Just like C<write> except that it appends "\n" to its argument and does not work with code refs. (Rather pointless, if you ask me. :-) =item $tree->close() (DOM method) Call this method to signal to the parser that the end of the HTML code has been reached. It will then parse any residual HTML that happens to be buffered. It also makes the next C<write> call C<open>. =item $tree->open (DOM method) Deletes the HTML tree, resetting it so that it has just an <html> element, and a parser hungry for HTML code. =item $tree->parse_file($file) This method takes a file name or handle and parses the content, (effectively) calling C<close> afterwards. In the former case (a file name), L<HTML::Encoding> will be used to detect the encoding. In the latter (a file handle), you'll have to. =item $tree->charset This method returns the name of the character set that was passed to C<new>, or, if that was not given, that which C<parse_file> used. It returns undef if C<new> was not given a charset and if C<parse_file> was not used or was passed a file handle. You can also set the charset by passing an argument, in which case the old value is returned. 
=cut sub parse_file { my $file = $_[1]; $_[0]->open; # This ‘if’ statement uses the same check that HTML::Parser uses. # We are not strictly checking to see whether it’s a handle, # but whether HTML::Parser would consider it one. if (ref($file) || ref(\$file) eq "GLOB") { (my $a = shift->{_HTML_DOM_parser}) ->parse_file($file) || return; $a ->elementify; return 1; } no warnings 'parenthesis'; # 5.8.3 Grrr!! if(my $charset = $_[0]{_HTML_DOM_cs}) { open my $fh, $file or return; $charset =~ s/^(?:x-?)?mac-?/mac/i; binmode $fh, ":encoding($charset)"; $$_{_HTML_DOM_parser}->parse_file($fh) || return, $_->close for shift; return 1; } open my $fh, $file or return; local $/; my $contents = <$fh>; require HTML::Encoding; my $encoding = HTML::Encoding::encoding_from_html_document( $contents ) || 'iso-8859-1'; # Since we’ve already slurped the file, we might as well # avoid having HTML::Parser read it again, even if we could # use binmode. require Encode; $_->write(Encode::decode($encoding, $contents)), $_->close, $_->{_HTML_DOM_cs} = $encoding for shift; return 1; } sub charset { my $old = (my$ self = shift)->{_HTML_DOM_cs}; $self->{_HTML_DOM_cs} = shift if @_; $old; } sub write { my $self = shift; if($$self{_HTML_DOM_buffered}) { # Although we call this buffered, it’s actually not. Before # version 0.040, a recursive call to ->write on the same # doc object would simply record the HTML code in a buffer # that was processed when the elem handler that made the # inner call to ->write finished. Every elem handler would # have a wrapper (created in the elem_handler sub above) # that took care of this after calling the handler, by cre- # ating a new, temporary, parser object that would call the # start/end, etc., methods of our tree builder. # # This approach stops JS code like this from working (yes, # there *are* websites with code like this!): # document.write("<img id=img1>") # document.getElementById("img1").src="..." # # So, now we take care of creating a new parser immedi- # ately. This does mean, however that we end up with mul- # tiple parser objects floating around in the case of # nested <scripts>. So we have to be careful to create and # delete them at the right time. # $$self{_HTML_DOM_buffered} actually contains a number # indicating the number of nested calls to ->write. my $level = $$self{_HTML_DOM_buffered}; local $$self{_HTML_DOM_buffered} = $level + 1; my($doc_elem) = $$self{_HTML_DOM_parser}; # These handlers delegate the handling to methods of # *another* HTML::Parser object. my $p = $$self{_HTML_DOM_p}[$level-1] ||= HTML::Parser->new( start_h => [ sub { $doc_elem->start(@_) }, 'tagname, attr, attrseq' ], end_h => [ sub { $doc_elem->end(@_) }, 'tagname, text' ], text_h => [ sub { $doc_elem->text(@_) }, 'text, is_cdata' ], ); $p->unbroken_text(1); # push_content, which is called by # H:TB:text, won't concatenate two # text portions if the first one # is a node. $p->parse(shift); # We can’t get rid of our parser at this point, as a subse- # quent ->write call from the same nested level (e.g., from # the same <script> block) will need the same one, in case # what we are parsing ends with a partial token. But if the # calling elem handler finishes (e.g., if we reach a # </script>), then we need to remove it, so we have # elem_handler do that for us. 
} else { my $parser = $$self{_HTML_DOM_parser} || ($self->open, $$self{_HTML_DOM_parser}); local $$self{_HTML_DOM_buffered} = 1; $parser->parse($_) for @_; } $self->_modified; return # nothing; } sub writeln { shift->write(@_,"\n") } sub close { my $a = (my $self = shift)->{_HTML_DOM_parser}; return unless $a; # We can’t use eval { $a->eof } because that would catch errors # that are meant to propagate (a nasty bug [the so-called # ‘content—offset’ bug] was hidden because of an eval in ver- # sion 0.010). # return unless $a->can('eof'); $a->eof(@_); delete $$self{_HTML_DOM_parser}; $a->elementify; return # nothing; } sub open { (my $self = shift)->detach_content; # We have to use push_content instead of simply putting it there # ourselves, because push_content takes care of weakening the # parent (and that code doesn’t belong in this package). $self->push_content( my $tb = $$self{_HTML_DOM_parser} = new HTML::DOM::Element::HTML ); delete @$self{<_HTML_DOM_sheets _HTML_DOM_doctype>}; return unless $self->{_HTML_DOM_elem_handlers}; for(keys %{$self->{_HTML_DOM_elem_handlers}}) { $$tb{"_tweak_$_"} = $self->{_HTML_DOM_elem_handlers}{$_} } return # nothing; } =back =head2 Other DOM Methods =over 4 =cut #-------------- DOM STUFF (CORE) ---------------- # =item doctype Returns nothing =item implementation Returns the L<HTML::DOM::Implementation> object. =item documentElement Returns the <html> element. =item createElement ( $tag ) =item createDocumentFragment =item createTextNode ( $text ) =item createComment ( $text ) =item createAttribute ( $name ) Each of these creates a node of the appropriate type. =item createProcessingInstruction =item createEntityReference These two throw an exception. =for comment =item createCSSStyleSheet This creates a style sheet (L<CSS::DOM> object). =item getElementsByTagName ( $name ) C<$name> can be the name of the tag, or '*', to match all tag names. This returns a node list object in scalar context, or a list in list context. =item importNode ( $node, $deep ) Clones the C<$node>, setting its C<ownerDocument> attribute to the document with which this method is called. If C<$deep> is true, the C<$node> will be cloned recursively. =cut sub doctype {} # always null sub implementation { no warnings 'once'; return $HTML::DOM::Implementation::it; } sub documentElement { ($_[0]->content_list)[0] } sub createElement { my $elem = HTML::DOM::Element->new($_[1]); $elem->_set_ownerDocument(shift); $elem; } sub createDocumentFragment { my $thing = HTML::DOM::DocumentFragment->new; $thing->_set_ownerDocument(shift); $thing; } sub createTextNode { my $thing = HTML::DOM::Text->new(@_[1..$#_]); $thing->_set_ownerDocument(shift); $thing; } sub createComment { my $thing = HTML::DOM::Comment->new(@_[1..$#_]); $thing->_set_ownerDocument(shift); $thing; } sub createCDATASection { die HTML::DOM::Exception->new( NOT_SUPPORTED_ERR, 'The HTML DOM does not support CDATA sections' ); } sub createProcessingInstruction { die HTML::DOM::Exception->new( NOT_SUPPORTED_ERR, 'The HTML DOM does not support processing instructions' ); } sub createAttribute { my $thing = HTML::DOM::Attr->new(@_[1..$#_]); $thing->_set_ownerDocument(shift); $thing; } sub createEntityReference { die HTML::DOM::Exception->new( NOT_SUPPORTED_ERR, 'The HTML DOM does not support entity references' ); } #sub createCSSStyleSheet { # shift; # require CSS'DOM; # ~~~ #} sub getElementsByTagName { my($self,$tagname) = @_; #warn "You didn't give me a tag name." if !defined $tagname; if (wantarray) { return $tagname eq '*' ? 
grep tag $_ !~ /^~/, $self->descendants : $self->find($tagname); } else { my $list = HTML::DOM::NodeList::Magic->new( $tagname eq '*' ? sub { grep tag $_ !~ /^~/, $self->descendants } : sub { $self->find($tagname) } ); $self-> _register_magic_node_list($list); $list; } } sub importNode { my ($self, $node, $deep) = @_; die HTML::DOM::Exception->new( NOT_SUPPORTED_ERR, 'Documents cannot be imported.' ) if $node->nodeType ==DOCUMENT_NODE; (my $clown = $node->cloneNode($deep)) ->_set_ownerDocument($self); if($clown->can('descendants')) { # otherwise it’s an Attr, so this for($clown->descendants) { # isn’t necessary delete $_->{_HTML_DOM_owner}; }} $clown; } #-------------- DOM STUFF (HTML) ---------------- # =item alinkColor =item background =item bgColor =item fgColor =item linkColor =item vlinkColor These six methods return (optionally set) the corresponding attributes of the body element. Note that most of the names do not map directly to the names of the attributes. C<fgColor> refers to the C<text> attribute. Those that end with 'linkColor' refer to the attributes of the same name but without the 'Color' on the end. =cut sub alinkColor { (shift->body||return "")->aLink (@_) } sub background { (shift->body||return "")->background(@_) } sub bgColor { (shift->body||return "")->bgColor (@_) } sub fgColor { (shift->body||return "")->text (@_) } sub linkColor { (shift->body||return "")->link (@_) } sub vlinkColor { (shift->body||return "")->vLink (@_) } =item title Returns (or optionally sets) the title of the page. =item referrer Returns the page's referrer. =item domain Returns the domain name portion of the document's URL. =item URL Returns the document's URL. =item body Returns the body element, or the outermost frame set if the document has frames. You can set the body by passing an element as an argument, in which case the old body element is returned. =item images =item applets =item links =item forms =item anchors These five methods each return a list of the appropriate elements in list context, or an L<HTML::DOM::Collection> object in scalar context. In this latter case, the object will update automatically when the document is modified. In the case of C<forms> you can access those by using the HTML::DOM object itself as a hash. I.e., you can write C<< $doc->{f} >> instead of S<< C<< $doc->forms->{f} >> >>. =for comment # ~~~ Why on earth did I ever put this in the docs?! B<TO DO:> I need to make these methods cache the HTML collection objects that they create. Once I've done this, I can make list context use those objects, as well as scalar context. =item cookie This returns a string containing the document's cookies (the format may still change). If you pass an argument, it will set a cookie as well. Both Netscape-style and RFC2965-style cookie headers are supported. =cut sub title { my $doc = shift; if(my $title_elem = $doc->find('title')) { $title_elem->text(@_); } else { return "" unless @_; ( $doc->find('head') || ( $doc->find('html') || $doc->appendChild($doc->createElement('html')) )->appendChild($doc->createElement('head')) )->appendChild( my $t = $doc->createElement('title') ); $t->text(@_); return ""; } } sub referrer { my $referrer = shift->{_HTML_DOM_referrer}; defined $referrer ? $referrer : (); } sub domain { no strict; my $doc = shift; host {ref $doc->{_HTML_DOM_url} ? $doc->{_HTML_DOM_url} : ($doc->{_HTML_DOM_url} = URI->new($doc->{_HTML_DOM_url}))}; } sub URL { my $url = shift->{_HTML_DOM_url}; defined $url ? 
"$url" : undef; } sub body { # ~~~ this needs to return the outermost frameset element if # there is one (if the frameset is always the second child # of <html>, then it already does). my $body = ($_[0]->documentElement->content_list)[1]; if (!$body || $body->tag !~ /^(?:body|frameset)\z/) { $body = $_[0]->find('body','frameset'); } if(@_>1) { my $doc_elem = $_[0]->documentElement; # I'm using the replaceChild rather than replace_with, # despite the former's convoluted syntax, since the former # has the appropriate error-checking code (or will), and # also because it triggers mutation events. $doc_elem->replaceChild($_[1],$body) } else { $body } } sub images { my $self = shift; if (wantarray) { return grep tag $_ eq 'img', $self->descendants; } else { my $collection = HTML::DOM::Collection->new( my $list = HTML::DOM::NodeList::Magic->new( sub { grep tag $_ eq 'img', $self->descendants } )); $self-> _register_magic_node_list($list); $collection; } } sub applets { my $self = shift; if (wantarray) { return grep $_->tag =~ /^(?:objec|apple)t\z/, $self->descendants; } else { my $collection = HTML::DOM::Collection->new( my $list = HTML::DOM::NodeList::Magic->new( sub { grep $_->tag =~ /^(?:objec|apple)t\z/, $self->descendants } )); $self-> _register_magic_node_list($list); $collection; } } sub links { my $self = shift; if (wantarray) { return grep { my $tag = tag $_; $tag eq 'area' || $tag eq 'a' && defined $_->attr('href') } $self->descendants; } else { my $collection = HTML::DOM::Collection->new( my $list = HTML::DOM::NodeList::Magic->new( sub { grep { my $tag = tag $_; $tag eq 'area' || $tag eq 'a' && defined $_->attr('href') } $self->descendants } )); $self-> _register_magic_node_list($list); $collection; } } sub forms { my $self = shift; if (wantarray) { return grep tag $_ eq 'form', $self->descendants; } else { my $collection = HTML::DOM::Collection->new( my $list = HTML::DOM::NodeList::Magic->new( sub { grep tag $_ eq 'form', $self->descendants } )); $self-> _register_magic_node_list($list); $collection; } } sub anchors { my $self = shift; if (wantarray) { return grep tag $_ eq 'a' && defined $_->attr('name'), $self->descendants; } else { my $collection = HTML::DOM::Collection->new( my $list = HTML::DOM::NodeList::Magic->new( sub { grep tag $_ eq 'a' && defined $_->attr('name'), $self->descendants } )); $self-> _register_magic_node_list($list); $collection; } } sub cookie { my $self = shift; return '' unless defined (my $jar = $self->{_HTML_DOM_jar}); my $return; if (defined wantarray) { # Yes, this is nuts (getting HTTP::Cookies to join the cookies, and # splitting them, filtering them, and joining them again[!]), but # &HTTP::Cookies::add_cookie_header is long and complicated, and I # don't want to replicate it here. no warnings 'uninitialized'; my $reqclone = $self->{_HTML_DOM_response}->request->clone; # Yes this is a bit strange, but we don’t want to put # ‘use HTTP::Header 1.59’ in this file, as it would mean loading the # module even for people who are not using this feature or who are # duck-typing. 
if (!$reqclone->can('header_field_names') && $reqclone->isa("HTTP::Headers")) { VERSION HTTP::Headers:: 1.59 } for($reqclone->header_field_names) { /cookie/i and remove_header $reqclone $_; } $return = join ';', grep !/\$/, $jar->add_cookie_header( $reqclone )-> header ('Cookie') # Pieces of this regexp were stolen from HTTP::Headers::Util: =~ /\G\s* # initial whitespace ( [^\s=;,]+ # name \s*=\s* # = (?: \"(?:[^\"\\]*(?:\\.[^\"\\]*)*)\" # quoted value | [^;,\s]* # unquoted value ) ) \s*;? /xg; } if (@_) { return unless defined $self->{_HTML_DOM_response}; require HTTP::Headers::Util; (undef,undef, my%split) = @{(HTTP::Headers::Util::split_header_words($_[0]))[0]}; my $rfc; for(keys %split){ # I *hope* this always works! (NS cookies should have no version.) ++ $rfc, last if lc $_ eq 'version'; } (my $clone = $self->{_HTML_DOM_response}->clone) ->remove_header(qw/ Set-Cookie Set-Cookie2 /); $clone->header('Set-Cookie' . 2 x!! $rfc => $_[0]); $jar->extract_cookies($clone); } $return||''; } =item getElementById =item getElementsByName =item getElementsByClassName These three do what their names imply. The last two will return a list in list context, or a node list object in scalar context. Calling them in list context is probably more efficient. =cut sub getElementById { my(@pile) = grep ref($_), @{shift->{'_content'}}; my $id = shift; my $this; while(@pile) { no warnings 'uninitialized'; $this = shift @pile; $this->id eq $id and return $this; unshift @pile, grep ref($_), $this->content_list; } return; } sub getElementsByName { my($self,$name) = @_; if (wantarray) { return $self->look_down(name => "$name"); } else { my $list = HTML::DOM::NodeList::Magic->new( sub { $self->look_down(name => "$name"); } ); $self-> _register_magic_node_list($list); $list; } } sub getElementsByClassName { splice @_, 2, @_, 1; # Remove extra elements; add a true third elem goto &HTML'DOM'Element'_getElementsByClassName; } # ---------- DocumentEvent interface -------------- # =item createEvent ( $category ) Creates a new event object, believe it or not. The C<$category> is the DOM event category, which determines what type of event object will be returned. The currently supported event categories are MouseEvents, UIEvents, HTMLEvents and MutationEvents. You can omit the C<$category> to create an instance of the event base class (not officially part of the DOM). =cut sub createEvent { require HTML'DOM'Event; HTML'DOM'Event'create_event($_[1]||''); } # ---------- DocumentView interface -------------- # =item defaultView Returns the L<HTML::DOM::View> object associated with the document. There is no such object by default; you have to put one there yourself: Although it is supposed to be read-only according to the DOM, you can set this attribute by passing an argument to it. It I<is> still marked as read-only in L<C<%HTML::DOM::Interface>|HTML::DOM::Interface>. If you do set it, it is recommended that the object be a subclass of L<HTML::DOM::View>. This attribute holds a weak reference to the object. =cut sub defaultView { my $self = shift; my $old = $self->{_HTML_DOM_view}; if(@_) { weaken($self->{_HTML_DOM_view} = shift); } return defined $old ? $old : (); } # ---------- DocumentStyle interface -------------- # =item styleSheets Returns a L<CSS::DOM::StyleSheetList> of the document's style sheets, or a simple list in list context. 
=cut sub styleSheets { my $doc = shift; my $ret = ( $doc->{_HTML_DOM_sheets} or $doc->{_HTML_DOM_sheets} = ( require CSS::DOM::StyleSheetList, new CSS::DOM::StyleSheetList ), $doc->_populate_sheet_list, $doc->{_HTML_DOM_sheets} ); wantarray ? @$ret : $ret; } =item innerHTML Serialises and returns the HTML document. If you pass an argument, it will set the contents of the document via C<open>, C<write> and C<close>, returning a serialisation of the old contents. =cut sub innerHTML { my $self = shift; my $old; $old = join '' , $self->{_HTML_DOM_doctype}||'', map HTML'DOM'Element'_html_element_adds_newline ? substr(( as_HTML $_ (undef)x2,{} ), 0, -1) : $_->as_HTML((undef)x2,{}), $self->content_list if defined wantarray; if(@_){ $self->open(); $self->write(shift); $self->close(); } $old } =item location =item set_location_object (non-DOM) C<location> returns the location object, if you've put one there with C<set_location_object>. HTML::DOM doesn't actually implement such an object itself, but provides the appropriate magic to make C<< $doc->location($foo) >> translate into C<< $doc->location->href($foo) >>. BTW, the location object had better be true when used as a boolean, or HTML::DOM will think it doesn't exist. =cut sub location { my $self = shift; @_ and ($$self{_HTML_DOM_loc}||die "Can't assign to location" ." without a location object")->href(@_); $$self{_HTML_DOM_loc}||() } sub set_location_object { $_[0]{_HTML_DOM_loc} = $_[1]; } =item. =begin comment When there is no modification date, the return value is different in every browser. NS 2-4 and Opera 9 have the epoch (in GMT format). Firefox 3 has the time the page was loaded. Safari 4 has an empty string (it uses GMT format when there is a mod time). IE, 6-8 the only one to comply with HTML 5, has the current time; but HTML 5 is illogical, since it makes no sense for the modification time to keep ticking away. I’ve opted to use the empty string for now, since we can’t *really* find out the modification time--only what the server *says* it is. And if the server doesn’t say, it’s no use pretending that it did say it. =end comment =cut sub lastModified { my $time = ($_[0]{_HTML_DOM_response} || return '')->last_modified or return ''; require Date'Format; Date'Format'time2str("%d/%m/%Y %X", $time); } =back =cut # ---------- OVERRIDDEN NODE & EVENT TARGET METHODS -------------- # sub ownerDocument {} # empty list sub nodeName { '#document' } { no warnings 'once'; *nodeType = \& DOCUMENT_NODE; } =head2 Other (Non-DOM) Methods (See also L</EVENT HANDLING>, below.) =over 4 =item $tree->base Returns the base URL of the page; either from a <base href=...> tag, from the response object passed to C<new>, or the URL passed to C<new>. =cut sub base { my $doc = shift; if( my $base_elem = $doc->look_down(_tag => 'base', href => qr)(?:\))) ){ return ''.$base_elem->attr('href'); } elsif (my $r = $$doc{_HTML_DOM_response}) { my $base; ($base) = $r->header('Content-Base') or ($base) = $r->header('Content-Location') or $base = $r->header('Base'); # URI does not document $URI::scheme_re, but HTTP::Response # (which is in a separate distribution) uses it. It seems # unlikely that it will go away in future URI versions, as # that would break existing versions of HTTP::Response. if ($base && $base =~ /^$URI::scheme_re:/o) { # already absolute return $base; } my $req = request $r; my $uri = $req ? uri $req : $doc->URL; return undef unless $uri; # Work around URI bug. 
if (!defined $base && $uri =~ /^[Dd][Aa][Tt][Aa]:/) { return $uri; } no warnings 'uninitialized'; ''.new_abs URI $base,$uri; } else { $doc->URL } } =item $tree->magic_forms This is mainly for internal use. It returns a boolean indicating whether the parser needed to associate formies with a form that did not contain them. This happens when a closing </form> tag is missing and the form is closed implicitly, but a formie is encountered later. =cut sub magic_forms { @_ and ++$_[0]{_HTML_DOM_mg_f}; $_[0]{_HTML_DOM_mg_f} } =back =head1 HASH ACCESS You can use an HTML::DOM object as a hash ref to access it's form elements by name. So C<< $doc->{yayaya} >> is short for S<< C<< $doc->forms->{yayaya} >> >>. =head1 EVENT HANDLING C<addEventListener> method and can be removed with C<removeEventListener>. HTML::DOM accepts as an event handler a coderef, an object with a C<call_with> method, or an object with C<&{}> overloading. If the C<call_with> method is present, it is called with the current event target as the first argument and the event object as the second. This is to allow for objects that wrap JavaScript functions (which must be called with the event target as the B<this> value). An event listener is a coderef, an object with a C<handleEvent> method or an object with C<&{}> overloading. HTML::DOM does not implement any classes that provide a C<handleEvent> method, but will support any object that has one. Listeners and handlers differ in one important aspect. A listener has to call C<preventDefault> on the event object to cancel the default action. A handler simply returns a defined false value (except for mouseover events, which must return a true value to cancel the default). =head2 Default Actions C<handleEvent> method) via the C<default_event_handler_for> and { ... } C<default_event_handler_for> with just one argument returns the currently assigned coderef. With two arguments it returns the old one after assigning the new one. Use C<default_event_handler> (without the. =head2 Dispatching Events HTML::DOM::Node's C<dispatchEvent> method triggers the appropriate event listeners, but does B<not> call any default actions associated with it. The return value is a boolean that indicates whether the default action should be taken. H:D:Node's C<trigger_event> method will trigger the event for real. It will call C<dispatchEvent> and, provided it returns true, will call the default event handler. =head2 HTML Event Attributes The C<event_attr_handler> can be used to assign a coderef that will turn text assigned to an event attribute (e.g., C<undef> for generated HTML [source code passed to the C<write> method by an element handler].) As with.) =head2 When an Event Handler Dies Use C<error_handler> to assign a coderef that will be called whenever an event listener (or handler) raises an error. The error will be contained in C<$@>. =head2 Other Event-Related Methods =over =item $tree->event_parent =item $tree->event_parent( $new_val ) This method lets you provide an object that is added to the top of the event dispatch chain. E.g., if you want the view object (the value of C<defaultView>, aka the window) to have event handlers called before the document in the capture phase, and after it in the bubbling phase, you can set it like this (see also L</defaultView>, above): $tree->event_parent( $tree->defaultView ); This holds a weak reference. 
=item $tree->event_listeners_enabled =item $tree->event_listeners_enabled( $new_val ) This attribute, which is true by default, can be used to disable event handlers and listeners. (Default event handlers [see above] still run, though.) =back =cut # ---------- NON-DOM EVENT METHODS -------------- # sub event_attr_handler { my $old = $_[0]->{_HTML_DOM_event_attr_handler}; $_[0]->{_HTML_DOM_event_attr_handler} = $_[1] if @_ > 1; $old; } sub default_event_handler { my $old = $_[0]->{_HTML_DOM_default_event_handler}; $_[0]->{_HTML_DOM_default_event_handler} = $_[1] if @_ > 1; $old; } sub default_event_handler_for { my $old = $_[0]->{_HTML_DOM_dehf}{$_[1]}; $_[0]->{_HTML_DOM_dehf}{$_[1]} = $_[2] if @_ > 2; $old; } sub error_handler { my $old = $_[0]->{_HTML_DOM_error_handler}; $_[0]->{_HTML_DOM_error_handler} = $_[1] if @_ > 1; $old; } sub event_parent { my $old = (my $self = shift) ->{_HTML_DOM_event_parent}; weaken($self->{_HTML_DOM_event_parent} = shift) if @_; $old } sub event_listeners_enabled { my $old = (my $Self = shift)->{_HTML_DOM_doevents}; @_ and $$Self{_HTML_DOM_doevents} = !!shift; defined $old ? $old : 1; # true by default } # ---------- NODE AND SHEET LIST HELPER METHODS -------------- # sub _modified { # tells all it's magic nodelists that they're stale # and also rewrites the style sheet list if present my $list = $_[0]{_HTML_DOM_node_lists}; my $list_is_stale; for (@$list) { defined() ? $_->_you_are_stale : ++$list_is_stale } if($list_is_stale) { @$list = grep defined, @$list; weaken $_ for @$list; } $_[0]->_populate_sheet_list } sub _populate_sheet_list { # called both by styleSheets and _modified for($_[0]->{_HTML_DOM_sheets}||return) { @$_ = map sheet $_, $_[0]->look_down(_tag => qr/^(?:link|style)\z/); } } sub _register_magic_node_list { # adds the node list to the list of magic # node lists that get notified automatic- # ally whenever the doc structure changes push @{$_[0]{_HTML_DOM_node_lists}}, $_[1]; weaken $_[0]{_HTML_DOM_node_lists}[-1]; } 1; __END__ =head1 CLASSES AND DOM INTERFACES L L</EVENT HANDLING>, above. Not listed above is L<HTML::DOM::EventTarget>, which is a base class both for L<HTML::DOM::Node> and L<HTML::DOM::Attr>. The format I'm using above doesn't allow for multiple inheritance, so I probably need to redo it. HTML::DOM::Node also implements the L<HTML::Element> interface, but with a few differences. In particular: =over =item * Any methods that expect text nodes to be just strings are unreliable. See the note under L<HTML::Element/objectify_text>. =item * HTML::Element's tree-manipulation methods don't trigger mutation events. =item * HTML::Element's C<delete> method is not necessary, because HTML::DOM uses weak references (for 'upward' references in the object tree). =back =head1 IMPLEMENTATION NOTES =over 4 =item * Objects' attributes are accessed via methods of the same name. When the method is invoked, the current value is returned. If an argument is supplied, the attribute is set (unless it is read-only) and its old value returned. =item * Where the DOM spec. says to use null, undef or an empty list is used. =item * Instead of UTF-16 strings, HTML::DOM uses Perl's Unicode strings (which happen to be stored as UTF-8 internally). The only significant difference this makes is to C<length>, C<substringData> and other methods of Text and Comment nodes. These methods behave in a Perlish way (i.e., the offsets and lengths are specified in Unicode characters, not in UTF-16 bytes). 
The alternate methods C<length16>, C<substringData16> I<et al.> use UTF-16 for offsets and are standards-compliant in that regard (but the string returned by C<substringData16> is still a regular Perl string). =begin for-me # ~~~ These need to be documented in the man pages for Comment and Text C<length16>, C<substringData16> C<insertData16>, C<deleteData16>, C<replaceData16> and C<splitText16>. =end for-me =item * Each method that returns a NodeList will return a NodeList object in scalar context, or a simple list in list context. You can use the object as an array ref in addition to calling its C<item> and C<length> methods. =item * In cases where a method is supposed to return something implementing the DOMTimeStamp interface, a simple Perl scalar is returned, containing the time as returned by Perl’s built-in C<time> function. =back =head1 ACKNOWLEDGEMENTS Much of the code was stolen from HTML::Tree. In fact, HTML::DOM used to extend HTML::Tree, but the two were merged to allow a whole pile of hacks to be removed. =for comment Actually, they haven’t been removed yet, but are still present. HTML::Element and HTML::TreeBuilder have simply been forked so far. The code still needs refactoring. =head1 PREREQUISITES L<perl> 5.8.3 or later L<Exporter> 5.57 or later L<URI.pm|URI> L<LWP> 5.13 or later L<CSS::DOM> 0.06 or later L<Scalar::Util> 1.14 or later L<HTML::Tagset> 3.02 or later L<HTML::Parser> 3.46 or later L<HTML::Encoding> is required if a file name is passed to C<parse_file>. L<Tie::RefHash::Weak> 0.08 or higher, if you are using perl 5.8.x =head1 BUGS =for comment (since I might use it as a template if I need it later) (See also BUGS in L<HTML::DOM::Element::Option/BUGS|HTML::DOM::Element::Option>) =over 4 =item - Element handlers are not currently called during assignments to C<innerHTML>. =item - L<HTML::DOM::View>'s C<getComputedStyle> does not currently return a read-only style object; nor are lengths converted to absolute values. Currently there is no way to specify the medium. Any style rules that apply to specific media are ignored. =back B<To report bugs,> please e-mail the author. =head1 AUTHOR, COPYRIGHT & LICENSE Copyright (C) 2007-16 Father Chrysostomos $text = new HTML::DOM ->createTextNode('sprout'); $text->appendData('@'); $text->appendData('cpan.org'); print $text->data, "\n"; This program is free software; you may redistribute it and/or modify it under the same terms as perl. =head1 SEE ALSO Each of the classes listed above L</CLASSES AND DOM INTERFACES> L<HTML::DOM::Exception>, L<HTML::DOM::Node>, L<HTML::DOM::Event>, L<HTML::DOM::Interface> L<HTML::Tree>, L<HTML::TreeBuilder>, L<HTML::Element>, L<HTML::Parser>, L<LWP>, L<WWW::Mechanize>, L<HTTP::Cookies>, L<WWW::Mechanize::Plugin::JavaScript>, L<HTML::Form>, L<HTML::Encoding> The DOM Level 1 specification at S<L<>> The DOM Level 2 Core specification at S<L<>> The DOM Level 2 Events specification at S<L<>> etc. | https://web-stage.metacpan.org/release/SPROUT/HTML-DOM-0.058/source/lib/HTML/DOM.pm | CC-MAIN-2021-31 | refinedweb | 6,643 | 51.78 |
A common desire that I often hear from office managers is the need to replace their entire office phone stack with a collection of “work” phone numbers that they can program to call employees’ personal or work phones. This would remove their need to invest in at-desk phone hardware or get tied into long term contracts. They also want to be able to easily change what happens when those numbers are called. People often want to use Twilio for this, and I often get asked if Twilio can do it. The simple answer to that question is yes.
The more complex answer is: Yes… if you have a bunch of developers who can build a system like this for you. A lot of these folks just want a solution, but don’t know how to produce one with the toolbox of features that Twilio provides. It became obvious to me that showing how to build an Office Phone Manager with Twilio would be beneficial. As I wanted to flex my engineering skills a bit, I dedicated some time to building an open source version that people could use, or could study so they could build their own.
This is the first in a series of blog posts that takes you through the steps needed to create and deploy an application that lets you:
- Create “office hours” schedules for your staff.
- Create new actions for when a phone number is called. These actions are plain TwiML templates, and all you have to do is replace the important bits.
- Configure a Twilio number to use a schedule and a set of actions.
Then that number can be handed out and used as a professional number for a single person, a voicemail service, or even a menu tree for a larger organisation.
You can either follow this guide and build your own (using a different web framework or programming language) or you can download an open source version of the project we’ll be building on Github here:
What you’ll learn
In this post, we will:
- Describe the components within this application
- Look at the design choices for the technical implementation and determine the components for it.
- Set up a basic Django project using a template.
- Show you how to create the required components in Django.
- Build and install the database schema migrations for the components.
- Write some basic logic for the Twilio endpoint.
What you’ll need
You’ll need the following tools:
- Git
- Python
- pip
- Virtualenv
- Access to the World Wide Web
Features of the Office Phone Manager
The application that this series of posts will help you to build will have a reasonably complex data structure. Luckily for us, the web framework we’re using, Django, has a collection of APIs for managing the data structure and abstracting it, leaving us to just use high-level python to manipulate the data how we like. We won’t have to touch a single line of SQL.
In order to understand the data structure, we first need to realise how the components in our Office Phone Manager application interact:
Each Phone Number can be configured with a set of actions that occur based on a work schedule. Each action is made up of a TwiML template and some parameters. Each schedule has a start time and end time, as well as the days of the week. Each configuration will link to a name, a phone number, a schedule, an “in office” action and an “out of office” action.
When a phone number is called, the configuration will look at the schedule to determine which action and which TwiML to respond with.
This is the explicit description of what we want the Office Phone Manager to be able to do. I’ve taken the liberty of highlighting the components within this description that can be turned into object models within our Django application:
- Action – A TwiML template with some configurable options.
- Schedule – A start time and end time, as well as a collection of days. If the time is between the start and end time and on a certain day, the schedule will be considered “available”, otherwise it will be considered “out of office”.
- Configuration – Links a Twilio Number to a Schedule and an “available” Action and an “out of office” Action.
Here is a visual representation of how the different components would be linked together:
The other key consideration is how Twilio will interact with all the Configurations, Schedules and Actions when a phone number is called. I drew up a flowchart to describe the flow of this function:
The application will only need a single endpoint, or URL, that Twilio will make a request to. When that endpoint receives a TwiML request from Twilio, the first thing that needs to be done is to make sure the phone number being called has actually been configured in our application. If it hasn’t, we just reject the call using the <Reject /> TwiML tag. If it has been configured, the current time is compared to the start time, end time, and the days in the Schedule component related to this Configuration. Depending on the time and date, either the standard Action is computed from a template or the Out Of Office Action is. Finally, the TwiML from the appropriate Action is returned back to Twilio.
Setting up the project
In an ideal world, this article would be a huge comprehensive step-by-step guide that would give you hours of enjoyment. However, it’s likely that a lot of what would be covered would already be known, or would be considered mundane. To cut out most of the project set up and let us focus on building the interesting parts, I’ve created a template project for you with a few parts of the project already set up (the Actions model, which is very complex, for example). The code we’re going to add builds the Configuration and Schedule models, as well as the Twilio endpoint.
To get started, use git to clone the repository to your local computer. To do this, navigate to a desired directory in your terminal and type the following commands:
$ git clone [email protected]:phalt/hermes-template.git $ cd hermes-template $ git checkout start
This will create a local cloned copy of the template application on your computer. As this is a Python project, the best practice is to use Virtualenv to create an isolated developer environment:
$ virtualenv venv
The virtualenv can be activated with:
$ source venv/bin/activate
(venv)$
Finally, all the requirements can be installed using pip:
(venv)$ pip install -r requirements.txt
This should take 1-2 minutes to complete. After that, we’re ready to start writing some code.
Building Django Models
We’ve described the components we’ll need for the Office Phone Manager, so now let’s build them in Django. Open the configurations/models.py file to see some code for the Action and Parameter models. The Parameter model is used by the Action model. The code being added, the Configuration and Schedule models, should start on line 11, between the two comments.
The Configuration model is the top level component in the application – it links everything else together. This means it will have many foreign keys to other models in the application. The desired Configuration model should look like this:
class Configuration(models.Model):

    def __unicode__(self):
        return self.name

    name = models.CharField(max_length=100)
    number = models.ForeignKey(TwilioNumber)
    action = models.ForeignKey('Action', related_name='action')
    oof_action = models.ForeignKey('Action', related_name='oof_action')
    schedule = models.ForeignKey('Schedule', related_name='configurations')
    active = models.BooleanField(default=True)
Nearly every attribute in this data model is a foreign key relation to another object model:
- name – the name of this Configuration, for humans.
- number – a foreign key link to a TwilioNumber, another object model that you can find in twilio_numbers/models.py file. This component stores references to Twilio phone numbers to reduce the number of HTTP requests made to Twilio.
- action – a foreign key relation to the standard Action of this Configuration.
- oof_action – a foreign key relation to another Action that is called when the time is outside the desired Schedule time. This is a weird word for it, oof is supposed to be an abbreviation of Out Of oFfice.
- schedule – a foreign key relation to the Schedule for this Configuration.
- active – a boolean field to determine if this Configuration is active or not. This will allow users to turn off Configurations when they don’t want it being used, without deleting it.
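To make the related_name values in the list above concrete, here is a rough sketch of the kind of queries they enable. The model and field names come from the code above, but the phone number and the assumption that such rows already exist in the database are purely illustrative:

# Illustrative queries only -- assumes some Configuration, Schedule and
# TwilioNumber rows have already been created.
from configurations.models import Configuration

# Find the active configuration attached to a given Twilio number.
config = Configuration.objects.get(number__number='+441234567890', active=True)

# related_name='configurations' gives each Schedule a reverse accessor,
# so you can list every configuration that shares a schedule:
for linked in config.schedule.configurations.all():
    print(linked.name)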
The next object model that is needed is the Schedule component. This can go directly below the Configuration model you’ve just written:
class Schedule(models.Model):

    def __unicode__(self):
        return self.name

    name = models.CharField(max_length=100)
    days = models.ManyToManyField(Day)
    start_time = models.TimeField()
    end_time = models.TimeField()

    def on_call(self, time_now=False):
        now = time_now if time_now else datetime.datetime.now()
        time = now.time()
        today = now.isoweekday()
        if self.days.filter(number__exact=today).count() == 1:
            return time > self.start_time and time < self.end_time
        return False
Things are a bit different compared to the Configuration model, so let’s go over it all. There is another name attribute here, for humans to understand this object. There is a start_time and end_time attribute that are TimeField fields. That all seems pretty standard.
The days attribute is a ManyToMany field to a new object model, Day, that doesn’t exist yet. Let’s quickly add this component above the Schedule model, and then its importance should make more sense. We put it above because Python is an interpreted language and will need to load the Day class into memory before referring to it.
class Day(models.Model):

    def __unicode__(self):
        return self.full

    full = models.CharField(max_length=9)
    number = models.IntegerField(max_length=1)
This component is very simple, but very important. Django and Python do not ship with a way of selecting distinct days in a week, i.e. – Monday, Tuesday et cetera. You can pick specific ISO 8601 dates, but that isn’t very useful for this application. This way, our application can use human-readable days of the week and provide a much easier interface that is also date agnostic.
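Because Day is a plain model, the seven weekday rows have to exist in the database before a Schedule can reference them. The template project may already seed these, but if not, a hypothetical snippet like the one below (run from the Django shell or a data migration) would do it, using the same 1 (Monday) to 7 (Sunday) numbering that isoweekday() returns:

# Hypothetical seeding snippet: creates one Day row per weekday.
from configurations.models import Day

WEEKDAYS = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
            'Friday', 'Saturday', 'Sunday']

for number, full in enumerate(WEEKDAYS, start=1):
    Day.objects.get_or_create(full=full, number=number)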
Now back to the Schedule model. This model includes a very important method called on_call:
def on_call(self, time_now=False):
    now = time_now if time_now else datetime.datetime.now()
    time = now.time()
    today = now.isoweekday()
    if self.days.filter(number__exact=today).count() == 1:
        return time > self.start_time and time < self.end_time
    return False
This method first determines the current time and day using Python’s datetime module. The datetime method isoweekday() will return a number from 1 to 7 based on the day of the week, for example: 1 is Monday, 7 is Sunday. The on_call method ends by searching through the list of associated days. If today is in the schedule, it will return True if the current time is between start_time and end_time, otherwise it will return False. This method will be used later on to determine which Action is used when a Configuration is called by Twilio.
To prove that this works, we’ve written some tests in the full version here. These tests check certain dates and times to make sure the correct results are returned.
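Those tests live in the full project, but a minimal sketch of what one looks like is shown here; the schedule values and dates are invented for illustration rather than copied from the real test suite:

# A minimal sketch of an on_call test; the schedule data is made up.
import datetime

from django.test import TestCase

from configurations.models import Day, Schedule


class ScheduleOnCallTests(TestCase):

    def setUp(self):
        monday = Day.objects.create(full='Monday', number=1)
        self.schedule = Schedule.objects.create(
            name='Office hours',
            start_time=datetime.time(9, 0),
            end_time=datetime.time(17, 0),
        )
        self.schedule.days.add(monday)

    def test_on_call_during_office_hours(self):
        # Monday 6 July 2015 at 10:30 falls inside the schedule.
        monday_morning = datetime.datetime(2015, 7, 6, 10, 30)
        self.assertTrue(self.schedule.on_call(time_now=monday_morning))

    def test_out_of_office_at_the_weekend(self):
        # Saturday is not one of the schedule's days, so on_call is False.
        saturday = datetime.datetime(2015, 7, 11, 10, 30)
        self.assertFalse(self.schedule.on_call(time_now=saturday))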
Now Migrating
Now that all the components for this application have been created, the next step is to build their database schema migration files. Since Django 1.7, this is an integrated tool so nothing else needs to be installed. The command needed to build the migrations is:
(venv)$ python manage.py makemigrations
The resulting output should look similar to this:
Migrations for 'configurations':
  0001_initial.py:
    - Create model Action
    - Create model Configuration
    - Create model Day
    - Create model Parameter
    - Create model Schedule
    - Add field schedule to configuration
    - Add field parameters to action
We can confirm these migration files are built correctly by running the migrate command:
(venv)$ python manage.py migrate
This should produce the following output (or similar):
Operations to perform:
  Apply all migrations: twilio_numbers, sessions, admin, django_twilio, auth, dj_twiml, contenttypes, configurations
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying twilio_numbers.0001_initial... OK
  Applying twilio_numbers.0002_auto_20141119_1548... OK
  Applying configurations.0001_initial... OK
  Applying dj_twiml.0001_initial... OK
  Applying django_twilio.0001_initial... OK
  Applying sessions.0001_initial... OK
Building the Twilio endpoint
With the object models created within the application, the Twilio endpoint described earlier can be built to return the desired TwiML based on the Configuration, Schedule and Action.
As this is logic not related to configurations, we should keep it in a separate file, so open the file hermes/views.py. The flowchart described earlier can be translated into the following python code, which can be added to the end of the file:
@twilio_view
def twilio_endpoint(request):
    twilio_params = decompose(request)
    twilio_response = Response()

    if twilio_params.type == 'voice':
        # See if we have a configuration set up for the number being called
        configuration = Configuration.objects.filter(
            number__number=twilio_params.to
        )
        if configuration.exists():
            configuration = configuration[0]
            if configuration.schedule.on_call():
                # Return standard action
                return configuration.action.get_twiml()
            else:
                # Return OOF action
                return configuration.oof_action.get_twiml()
    return twilio_response.reject()
This does look quite different from the flowchart, but it follows the same logic. This function is using the Django-twilio decorator @twilio_view, which will handle most of the HTTP formatting so our application logic doesn’t have to. The Twilio parameters are extracted from the inbound request using Django-twilio’s decompose function and stored in a twilio_params variable. A new Twilio Response is also instantiated and stored in twilio_response.
On line 6, a quick sanity check is made to make sure the incoming Twilio request is a voice call, because the Office Phone Manager only supports voice. Then the function checks to see if any Configuration exists whose phone number matches the number currently being called. If one exists, another check is made to see if the schedule is currently on_call, using the method developed earlier in this article. If it is on call, the standard action has a method called get_twiml() (which you can see here) that returns the correct TwiML. If the schedule is not on call, the Out Of Office or oof_action is returned.
If at any point any of these checks fail, a standard reject response is sent back to Twilio.
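One detail the walkthrough above skips is how Twilio reaches this view at all. The template project may already wire this up, but a hedged sketch of what the route could look like in hermes/urls.py is below; the URL path and name are assumptions, not requirements:

# A sketch only -- the template project may already define this route,
# and the path and name here are illustrative assumptions.
from django.conf.urls import url

from hermes import views

urlpatterns = [
    # Twilio will POST its voice webhook here; point the number's Voice URL
    # at https://<your-host>/twilio/ once the app is deployed.
    url(r'^twilio/$', views.twilio_endpoint, name='twilio_endpoint'),
]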
What’s next?
That concludes the first part in a series of posts to create an Office Phone Manager.
This article has helped you to build the basic object models needed in the application. It has supplied a basic template for getting started, and it has even described the logic needed behind the complex Twilio endpoint. We won’t be able to run the application yet, but it doesn’t really do much yet so there isn’t much point.
But what’s next? Well, for one – the intended audience for this application is not developers, so currently they can’t use it at all as no views or forms have been created. With some simple views or forms, non-technical users can create, read and update new Configurations, Schedules and Actions. How to create views and forms for this application will be described in the next blog post, but you’re welcome to jump ahead and try to create your own if you can’t wait.
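If you do want to jump ahead, a reasonable starting point - purely a sketch, since the next post covers the real thing - is a Django ModelForm for the Configuration model:

# A sketch of a form for non-technical users; the field selection is an
# assumption, not something fixed by the template project.
from django import forms

from configurations.models import Configuration


class ConfigurationForm(forms.ModelForm):

    class Meta:
        model = Configuration
        # Expose every field we defined earlier; 'active' lets users switch
        # a configuration off without deleting it.
        fields = ['name', 'number', 'action', 'oof_action', 'schedule', 'active']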
Another thing that is currently missing here – how do the Twilio Phone Numbers know how to point their Voice URLs to this application? Currently, a user will have to go to twilio.com and do this manually. That’s not very efficient, but using the REST API, it is possible to update the configuration of a number. In the next post, the logic for doing this automatically whenever a Configuration is saved will be given.
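To make that idea concrete before the next post: Django’s post_save signal can call the Twilio REST API whenever a Configuration is saved. The sketch below assumes the twilio Python helper library of that era (TwilioRestClient), that your account credentials live in Django settings, and that the TwilioNumber model stores the number’s SID - none of which is guaranteed by the template project:

# A hedged sketch only: the settings names, the voice URL and the 'sid'
# field on TwilioNumber are assumptions, not part of the template project.
from django.conf import settings
from django.db.models.signals import post_save
from django.dispatch import receiver
from twilio.rest import TwilioRestClient

from configurations.models import Configuration


@receiver(post_save, sender=Configuration)
def point_number_at_endpoint(sender, instance, **kwargs):
    client = TwilioRestClient(settings.TWILIO_ACCOUNT_SID,
                              settings.TWILIO_AUTH_TOKEN)
    # Update the number's Voice URL so calls hit our twilio_endpoint view.
    client.phone_numbers.update(
        instance.number.sid,
        voice_url='https://example.com/twilio/',
    )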
Finally: you’ve got to host this somewhere, so a guide on how to deploy this application to Heroku will wrap up the next post nicely.
If you have any questions about the steps described in this post, or you want to show me your own implementation of an Office Phone Manager, then get in touch with me at [email protected] or on Twitter.
and one more null eventdispatcher inside Actor derived class
Hi,
I wan't figure out what's happening here ... here's what's inside my context:
var m:EpodxModule = new EpodxModule(); injector.injectInto(m); injector.mapValue( EpodxModule,m);
then inside a model class:
public class ModulesModel extends Actor { private var _m:EpodxModule; [Inject] public function set m(value:EpodxModule):void { _m = value; } public function get m():EpodxModule { return _m; } [...] }
Injection occurs as expected ... my m property is injected with a brand new instance of EpodxModule Class.
BUT:
m property still has a null eventDispatcher that prevents it from dispatching events for example !!! I checked by tracing it inside [PostConstruct] init function and inside others ...
There must be something wrong inside my code but honestly I can't see ...
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by Stray on 10 May, 2011 03:54 PM
Hi there,
could you post the code inside EpodxModule?
Does it definitely extend Actor?
I'm sure we can help you figure it out.
You're definitely not trying to access that eventDispatcher in the constructor? (I know you said it's in a PostConstruct function but it's worth checking).
BTW - m is a terrible name for a variable!
Stray
2 Posted by peter.bannier on 11 May, 2011 07:38 AM
Hi!
Sure m isn't a good choice !! :)
After a few hours trying to find out what's wrong, I was a bit uninspired for my variables names ...
Anyway this is the EpodxModule Class:
public class EpodxModule extends Actor
}
that's a pretty simple case isn't it ? I still can't see what's wrong ...
Thanks
Support Staff 3 Posted by Stray on 11 May, 2011 08:09 AM
Hi Peter,
when you overrode the
set eventDispatcheryou lost the
[Inject]tag that was in the original
Actorclass.
It should be:
Ref: Line 67 of...
If you reinstate that then you'll probably find things work as they should - do let us know! If not then I'll help you do some deeper digging, but injection is really pretty simple:
injection point + injection rule => process through injector = happy code.
So there's a surprisingly small number of things that can go wrong.
Hopefully this fixes it!
Stray
Support Staff 4 Posted by Stray on 11 May, 2011 08:10 AM
A completely different note - is there a reason why you're using
mapValueand instantiating the instance for yourself, rather than using
mapSingleton?
Stray
5 Posted by peter.bannier on 11 May, 2011 08:51 AM
Hi,
thanks for the quick reply !
That was it ! now I understand pretty well what was my mistake ...
Now I can go happy coding ... big thanks!
I was using mapValue instead of mapSingleton as I want to have the opportunity to have multiples instances if I want to ... that's the right way isn't it ?
I had a relative conceptual question :
If I want a Class to act as a manager just like my ModulesModel Class which is meant to handle multiples modules. This Class have a few modules as properties. Here, with injection, I have to instantiate those modules first inside my Context and then map them. Would it be conceptualy better to handle their instantiation inside the ModulesModel himself ?
Support Staff 6 Posted by Stray on 11 May, 2011 08:56 AM
Hi Peter,
I'm glad I asked then because the answer is that
mapClasswon't give you multiple instances (except over time, you can switch out instances).
Essentially mapClass is identical to
mapSingletonOf( )except that you're controlling the creation of the concrete instance.
Can I just clarify - when you talk about 'Modules' - do you mean like Robotlegs Modules (using the modular utils) - or just Flex Modules, or perhaps something totally different?
Stray
7 Posted by peter.bannier on 11 May, 2011 09:03 AM
Hi,
I was talking about something different ... it could have been anything ... just an example to picture the problem ...
thanks for your explanations and your support!
Support Staff 8 Posted by Stray on 11 May, 2011 09:08 AM
Ah - ok - no problem. Basically if you want to have multiple instances within a single context your best option is always to create a container for them.
I'll close this now - shout again if you need any more help,
Stray
Stray closed this discussion on 11 May, 2011 09:08 AM. | http://robotlegs.tenderapp.com/discussions/problems/311-and-one-more-null-eventdispatcher-inside-actor-derived-class | CC-MAIN-2019-22 | refinedweb | 757 | 65.83 |
Text Files are supported with the text source database type as shown in the following
connection string:
string connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source="
+System.Environment.CurrentDirectory + "\\;Extended Properties=\"text;HDR=yes;FMT=Delimited\"";
Notice that only the folder where the text file resides is specified. The filename
of the text file is specified in the SQL commands that access data in the file,
similar to referencing a table name in a query.
The Extended Properties attribute can also specify whether tables include headers
for field names in the first row using the HDR attribute.
It is not possible to define all the characteristics of a text file in the connection
string, however. You can access files that use nonstandard delimiters and fixed-width
lines by creating a Schema.ini text file that must reside in the same folder
as the text file database. A sample schema for the "quotes.txt" file
of famous quotations that is included in the downloadable sample is shown below:
[quotes.txt]
Format=Delimited(|)
ColNameHeader=True
MaxScanRows=0
Character=OEM
The above is saved as "Schema.ini" next to the quotes.txt text file database.
The Schema.ini file provides the schema information about the data in the text file:
FileName
File Format
Field Names, widths, and data types
Character Set
Special Data type conversions.
The first entry in the Schema.ini file is the text file name surrounded by square
brackets.
The format specifier can be one of the following:
Format=CSVDelimited - Fields are delimited with commas. This is the default value.
Format=Delimited(Custom Character Here) - You can use any single character except the double quotation mark as a delimiter.
In my sample file, I use the Pipe (|) symbol.
Format=FixedLength - If the ColumnName header option is true, the first line with the column names
must be comma-delimited.
Format=TabDelimited - Fields are delimited with Tabs.
You can specify your fields in the text file in one of two ways:
1. Include the field names in the first row of the text file and set the ColNameHeader
option to True.
2. Identify each column using the format ColN (where N is the 1-based column number)
and specify the name, width, and data type of each column.
The MaxScanRows option indicates how many rows should be scanned to automatically determine the
datatype of a column. A value of 0 indicates all rows should be scanned.
The ColN entries specify the name, width and datatype for each column. This entry is required
for fixed-length formats and optional for character-delimited formats.
The syntax of the ColN entry is:
Col1=ID Short Width 4
Col2=FirstName Text Width 100
The datatype can be any of Bit, Byte, Currency, DateTime, Double, Long, Memo, Short, Single or
Text.
The Character option specifies the character set and can be set to either ANSI or OEM.
Once this is all set up, you can issue select, update, delete and insert statments
exactly the way you would with a "regular" database.
The code from my sample:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.OleDb;
namespace TextFileADONET
{
class Program
{
static void Main(string[] args)
{
string connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" +System.Environment.CurrentDirectory + "\\;Extended Properties=\"text;HDR=yes;FMT=Delimited\"";
OleDbDataAdapter da = new OleDbDataAdapter("select * from [quotes.txt]", connectionString);
DataTable dt = new DataTable();
da.Fill(dt);
foreach(DataRow row in dt.Rows)
Console.WriteLine((string)row["ID"] + ": " + (string) row["LastName"] + (string) row["quotation"]);
Console.WriteLine("---SELECT * FROM [quotes.txt] WHERE LastName='Einstein'---------");
OleDbDataAdapter da2 = new OleDbDataAdapter("select TOP 1 * from [quotes.txt] WHERE LastName='Einstein'", connectionString);
DataTable dt2 = new DataTable();
da2.Fill(dt2);
foreach (DataRow row in dt2.Rows)
Console.WriteLine((string)row["ID"] + ": " + (string)row["LastName"] + " " + (string)row["quotation"]);
Console.WriteLine("Any key to quit.");
Console.ReadLine();
}
}
}
You can download the sample code here. | http://www.nullskull.com/a/1585/sql-operations-on-a-text-file-with-adonet.aspx | CC-MAIN-2022-05 | refinedweb | 653 | 51.55 |
With Groovy 1.8.4 we can parse the output of the
Date.toString() method back to a Date. For example we get the string value of a Date from an external source and want to parse it to a Date object. The format of the string must have the pattern "EEE MMM dd HH:mm:ss zzz yyyy" with the US Locale. This is used by the
toString() method of the Date class.
import static java.util.Calendar.* // Create date 10 November 2011. def cal = Calendar.getInstance(TimeZone.getTimeZone('Europe/Amsterdam')) def date = cal.time date.clearTime() date[YEAR] = 2011 date[MONTH] = NOVEMBER date[DATE] = 10 // Get toString() value. def dateToString = date.toString() assert dateToString == 'Thu Nov 10 00:00:00 CET 2011' // Replace Nov for Dec in string and 10 for 24. dateString = dateToString.replace('Nov', 'Dec').replace('10', '24') // Use parseToStringDate to get new Date. def newDate = Date.parseToStringDate(dateString) assert newDate[MONTH] == DECEMBER assert newDate[DATE] == 24 assert newDate[YEAR] == 2011
1 comments:
json is a very interesting language to be used. very good tutorial and can hopefully help me in building json in the application that I created for this lecture. thank you | http://mrhaki.blogspot.com/2011/11/groovy-goodness-parse-datetostring.html | CC-MAIN-2015-14 | refinedweb | 198 | 69.89 |
Microsoft CRM 4.0 allows your organization to track accounts, contacts, leads, opportunities, activities, support cases, and a vast array of additional business information users need. In addition, you can leverage the platform to create your own related custom entities that model your business information and allow your users to manage day to day customer interactions.
From a user’s perspective, the power and flexibility to extend the system is incredible, but it has the potential to cause usability issues. How do busy users quickly and easily get access to the most relevant information? How can you provide a targeted, personalized user experience that presents CRM information and relationships directly to the user in time of need?
The answer is to customize the user experience by bringing together the most frequently used, and most relevant data all in one central location – to build a CRM “dashboard”. This is where the challenge presents itself – defining the information to be displayed, and determining how to build a fast and responsive solution to provide the best user experience possible.
In this article, I’ll show you how to build your own custom CRM dashboards, using Microsoft’s Windows Presentation Foundation to construct the user interface.
What You Will Need
You can design Windows Presentation Foundation (WPF) applications using Microsoft Visual Studio 2005 or 2008. I prefer to use VS 2008 since it’s the latest and greatest. Feel free to use the Visual Studio 2008 Express edition, as it is free to download and easy to install.
If you use Visual Studio 2005, you will need to download and install the Visual Studio 2005 Extensions for .NET 3.0.
The Dashboard
Here’s a sample dashboard we’ll build together. This dashboard lists CRM accounts on the left side. When an account is selected on the left, the dashboard will display the related record information on the right. You can add, remove, or customize the panels as you see fit for your users. In a moment, you’ll see how easy it is to bind CRM data to a WPF template in order to display this information.
Getting Started
Let’s get started by creating a new Windows Presentation Foundation project in Visual Studio. Select the “WPF Application” template and give your project a name.
Visual Studio will load the template and present you with a brand new Window1.xaml file for you to work with. Add a web reference to the CRM web server. I called mine CRMService. If you need additional information on this topic, there is more information in the CRM 4.0 SDK on how to do this.
Hint: If you are using Visual Studio 2008, you may have a little trouble finding the “Add Web Reference” option. It’s still there – first use the “Add Service Reference” menu, and then click the Advanced button.
We’ll also need to reference the Microsoft CRM SDK .dlls, microsoft.crm.sdk.dll, and microsoft.crm.sdktypeproxy.dll, which can be found in the bin folder of the Microsoft CRM SDK. Go ahead and set a reference to those assemblies as well.
Actually, we’ll be referencing the CRM web service and the SDK assemblies quite a bit, so let’s set up a using statement to make things a little cleaner. Now we’re ready to go!
using CRM.Dashboard.CRMService;
using CRM.Dashboard.CRMService;
The Account List
The first thing we’ll do is create the account list on the left side. This will allow the user to select an account. In this section, we’ll connect to the CRM web server, query all accounts, and then create the WPF markup to display the accounts. Finally, we’ll show how to bind the accounts to the WPF control.
View the code behind the Window1.xaml, add a class level variable to reference the CRMService, and modify the default Window1 constructor to create a new instance of to the CRM service.
1: public partial class Window1 : Window
2: {
3: // reference to the CRM web service
4: CrmService crmService = null;
5:
6: ...
7: }
8:
9: public Window1()
10: {
11: // default code – leave this here
12: InitializeComponent();
13:
14: // create an authentication token
15: CrmAuthenticationToken token = new CrmAuthenticationToken();
16: token.AuthenticationType = 0;
17: token.OrganizationName = "MicrosoftCRM";
18:
19: // set up the CRM web service
20: crmService = new CrmService();
21: crmService.Url = "";
22: crmService.CrmAuthenticationTokenValue = token;
23: crmService.Credentials =
24: System.Net.CredentialCache.DefaultCredentials;
You may have to change the CRM service URL to point to your CRM server. If you’re using the Microsoft Virtual PC image, change it to....
So far this is just standard CRM development. We’ve added a web reference to our CRM web service and we’ve created an instance of the class.
Now we need to connect to the CRM web service and retrieve the CRM accounts. We’ll retrieve the accounts as BusinessEntity objects so that we can bind to them using WPF. In the Window1 constructor, add the following code after the code to connect to the CRM web service.
1: // Create the QueryExpression object.
2: QueryExpression query = new QueryExpression();
3: query.EntityName = EntityName.account.ToString();
4: query.ColumnSet = new AllColumns();
6: // Retrieve the contacts.
7: BusinessEntityCollection accounts = crmService.RetrieveMultiple(query);
Great! This by itself should compile and run. However, it does not yet do anything too spectacular. Next is the fun part – creating your own XAML markup to render the account list any way you’d like.
WPF Designer
Switch over to the Window1.xaml designer. You should see the following design pane:
Notice here there are two main design panes. The top pane is a graphical preview of the changes you make in the markup section in the bottom pane. You can collapse either pane if you prefer to work in one or the other. I prefer to work in the text (bottom) pane since I like to torture myself…I mean, have total control…over the markup design. It’s also a great way to learn WPF markup if you are just getting started with it.
Notice also that there is a top-level Window element and a single Grid element within it. I won’t go into all the details here of the Window element, just know that it is the default element when creating WPF applications and that you can change some of the basics within it, such as the title bar text, height, and width of the window.
WPF Layout
Now some background – WPF is a little like HTML. The idea is for you to markup the layout and presentation for what you’d like to show and the system will render this markup for you when the program runs. It’s also a bit like HTML in that you can put elements and controls inside of one another, and “build up” your user interface as a sort of tree.
One of the main elements in WPF is the Grid. You can think of a Grid almost like a table, with rows and columns. Within the Grid, you can pin down controls so that they are rendered at an exact row and column.
In our dashboard, we’ll want to have two main columns. The first column will contain our accounts list, and the second column will contain all of our panels and related information.
Within the Grid element, place the following markup.
1: <Grid Name="grdMain" Background="DimGray">
2: <Grid.ColumnDefinitions>
3: <ColumnDefinition Width="40*"></ColumnDefinition>
4: <ColumnDefinition Width="60*"></ColumnDefinition>
5: </Grid.ColumnDefinitions>
6: </Grid>
The Width values tell WPF how wide to make each column. As a best practice, I like to make all of my column widths add up to 100. This allows you to treat each column width as a percentage, similar to HTML table design. In this example, the first column will take up 40% of the width, and the second column will take up the remaining 60%.
Another common layout element in WPF is the DockPanel. A DockPanel allows you to put other controls inside of it, and to “dock”, or fix, each control to either the top, bottom, left, or right side of the DockPanel. This allows for great flexibility and automatic stretching and resizing of the child controls.
Let’s throw down a new DockPanel in the first column of the Grid. Place the following markup in the Grid element, after the Grid.ColumnDefinitions element.
<DockPanel Grid.
</DockPanel>
<DockPanel Grid.
</DockPanel>
Back to the Accounts List
Now we can add our list view to display the list of accounts. Add the following markup within the DockPanel that we just created.
1: <DockPanel>
2:
3: <DockPanel DockPanel.
4: <TextBlock DockPanel.Dock="Left" Text="Accounts"
5: Foreground="White" FontWeight="Bold"
6:
7: </TextBlock>
8: </DockPanel>
9:
10: <ListView Name="lstMain" Foreground="Orange" Background="DimGray">
11: <ListView.View>
12: <GridView AllowsColumnReorder="true">
13: <GridViewColumn
14: DisplayMemberBinding="{Binding Path=name}"
15:
16: </GridView>
17: </ListView.View>
18: </ListView>
19:
20: </DockPanel>
Some things to point out here – first, there’s another DockPanel with a TextBlock inside. This is just a section header label we’ll use so the user can identify the section.
Second, we’re using the ListView control. This is just a WPF control that works very similar to a .NET data grid. You define one or more columns and then bind those columns to a field in your data.
In this example, we are defining a GridViewColumn element within the GridView, and then we are setting the DisplayMemberBinding attribute to the “name” property from our data. This is all it takes to tell WPF which field from the CRM record you want to display in that column! Feel free to add additional columns from the CRM account record if you’d like.
Finally, we need to tell this ListView where to get its data from. Back in the code view, add a line of code immediately after retrieving the CRM accounts from the CRM web service.
// Bind the accounts to the WPF list view
lstMain.ItemsSource = accounts.BusinessEntities;
// Bind the accounts to the WPF list view
lstMain.ItemsSource = accounts.BusinessEntities;
Now if you compile and run, your CRM accounts list should be populating with a list of accounts from your CRM system. Fantastic!
Related Panels
What kind of dashboard would we have if we didn’t display related information? In the following sections, we’ll add some additional panels to display the general information from the account, associated activities, and even mix in a CRM web page just for fun.
General Information Panel
In this panel, we’ll display a few of the account record details. Begin by adding the following markup to the designer, immediately after the first DockPanel that we added above.
1: <DockPanel Name="panMain" Grid.
2: <StackPanel DockPanel.
3: <StackPanel>
4: <TextBlock Text="General Information" Foreground="White" FontWeight="Bold" FontSize="14"></TextBlock>
5: <TextBlock Text="{Binding Path=name}"
6:</TextBlock>
7: <TextBlock Text="{Binding Path=address1_city}" Foreground="Orange"></TextBlock>
8: <TextBlock Text="{Binding Path=address1_state}"
9:</TextBlock>
10: <Canvas Height="20"></Canvas>
11: </StackPanel>
12: </StackPanel>
13: </DockPanel>
This is just another DockPanel. This time we threw a StackPanel inside of it, placed a StackPanel inside of that, then added four TextBlock controls inside of that. The first StackPanel just anchors the following controls to the top of the DockPanel. The second StackPanel then tells WPF to render the following controls in order, top to bottom. The first TextBlock just displays some fixed text for the section header. Each TextBlock after that is bound to a specific field on the account record.
Don’t worry too much about the binding syntax for now – it looks a little funny at first but there are plenty of samples out there for you to reference. The key is to set the Path property to the attribute that you want to display.
The real magic comes when we bind the DockPanel to the account record. To do this, first we need to add an event handler to the account list, telling it to run some code whenever anyone clicks an account. Add a MouseUp property to the lstMain ListView control that we created earlier.
<ListView Name="lstMain" MouseUp="GridViewColumn_MouseUp" ...>
<ListView Name="lstMain" MouseUp="GridViewColumn_MouseUp" ...>
Now switch back to the code and add the following event handler code.
1: private void GridViewColumn_MouseUp(object sender,
2: MouseButtonEventArgs e)
3: {
4: ListView c = (ListView)sender;
5: account a = (account)c.SelectedItem;
6: panMain.DataContext = a;
Now whenever someone clicks an account in the account list, the program will pull the account out of the list and bind the DockPanel to that account. It’s that easy.
Run the program again and select an account to make sure you’re on the right track.
Activities Panel
In the next panel, we’ll display a list of the open and closed activities for the account. Add the following markup to the designer, immediately under the last StackPanel that we added for the “General Information” section, and within the DockPanel.
1: <StackPanel DockPanel.
2: <StackPanel>
3: <TextBlock Text="Activities" Foreground="White" FontWeight="Bold"
4:
5: </TextBlock>
6: <ListView Name="lstActivity" Background="DimGray"
7:
8: <ListView.View>
9: <GridView>
10: <GridViewColumn DisplayMemberBinding="{Binding
11: Path=subject}" Header="Subject"/>
12: <GridViewColumn DisplayMemberBinding="{Binding
13: Path=scheduledend.date}"
14:
15: </GridView>
16: </ListView.View>
17: </ListView>
18: <Canvas Height="20"></Canvas>
19: </StackPanel>
20: </StackPanel>
This is just like the ListView that we added to display the accounts. The only difference here is that we are displaying fields from the activity entity.
Back in the code behind, we’ll need to grab the activities for the selected account and bind the results to the ListView. Place this code in the GridViewColumn_MouseUp handler, immediately after the existing code.
1: // Create the query object
2: QueryByAttribute query = new QueryByAttribute();
3: query.ColumnSet = new AllColumns();
4: query.EntityName = EntityName.activitypointer.ToString();
6: // This query will retrieve all activities for the account
7: query.Attributes = new string[] { "regardingobjectid" };
8: query.Values = new string[] { a.accountid.Value.ToString() };
10: // Execute the retrieval
11: BusinessEntityCollection retrieved =
12: crmService.RetrieveMultiple(query);
14: // Bind the activities to the list view
15: lstActivity.ItemsSource = retrieved.BusinessEntities;
Run the program again to make sure everything is coded properly. You now have a working WPF dashboard!
Embedding a Web Page
The final dashboard component we’ll add is an embedded web page to display the CRM activity. When the user clicks the activity in the activity list, the panel will display the actual CRM activity web page directly in the WPF application.
First, let’s add the event handler markup for the MouseUp event to the lstActivity ListView.
<ListView Name="lstActivity" MouseUp="lstActivity_MouseUp" ... >
<ListView Name="lstActivity" MouseUp="lstActivity_MouseUp" ... >
Now add the markup to display the web page, immediately after the last StackPanel but still within the main DockPanel.
2: <DockPanel>
3: <TextBlock DockPanel.Dock="Top" Text="Record Details"
4: Foreground="White" FontWeight="Bold"
5:</TextBlock>
6: <Frame Name="frmCRM"
7: ScrollViewer.</Frame>
8: </DockPanel>
9: </DockPanel>
This works a bit like a CRM iFrame control. Our job is to set the Frame’s source property in the click event. Add the following in the code behind.
private void lstActivity_MouseUp(object sender, MouseButtonEventArgs e)
1: private void lstActivity_MouseUp(object sender, MouseButtonEventArgs e)
2: {
3: ListView c = (ListView)sender;
4: activitypointer a = (activitypointer)c.SelectedItem;
5: string activityId = a.activityid.Value.ToString();
6: this.frmCRM.Source = new
7: Uri("{"
8: + activityId + "}");
9: }
Now when a user clicks the activity, the Frame will switch to the CRM web page for that activity.
Give it a try.
Decorating the Panels
A WPF application wouldn’t be a WPF application unless you spiced up the style a bit. I this section, I’ll show you some tricks you can use to give the dashboard a little pizzazz.
Let’s add a rounded border to our panels. WPF makes this very easy if you use a Border element. You can decorate your DockPanels and StackPanels with a Border element, and set the CornerRadius and Margin properties so that they will render with a rounded table effect.
Add the following markup just inside the first DockPanel.
1: <DockPanel Grid.
2: <Border BorderBrush="White" BorderThickness="1"
3:
4:
5: ...
6:
7: </Border>
8: </DockPanel>
Notice the results. You can control the “roundness” of the border by adjusting the CornerRadius property. Also, play around with the BorderThickness property to increase or decrease the border’s width.
Repeat the Border markup inside of the General Information, Activities List, and Activity Frame panels. For the General Information and Activities list, you will want to place the border directly inside of the StackPanel element. For the Activity Frame, place it directly inside of the first DockPanel.
2: <StackPanel DockPanel.
3: <Border BorderBrush="White" BorderThickness="1"
4:
5: ...
6: </Border>
7: </StackPanel>
9: <StackPanel DockPanel.
10: <Border BorderBrush="White" BorderThickness="1"
11:
12: ...
13: </Border>
14: </StackPanel>
15:
16: <DockPanel>
17: <Border BorderBrush="White" BorderThickness="1"
18:
19: ...
20: </Border>
21: </DockPanel>
22: </DockPanel>
Finally, we’ll set the title property, startup size, and start up location of the Window element at the very top of the markup designer.
1: <Window x:Class="CRM.Dashboard.Window1"
2: xmlns=""
3: xmlns:x=""
4: Title="CRM Dashboard" Height="400" Width="600"
5:
Next Steps
By now you’ve seen how to create a basic WPF application, how to add markup to display CRM information, and how to connect your application to your CRM data.
You could take this application and add additional panels, display geographic or map information, embed SQL Reporting Service reports, SharePoint parts, or any other data that your users would find relevant.
You could also extend the dashboard to allow your users to import and arrange custom panels that you publish to a central location. The sky’s the limit.
Conclusion
We’ve seen how CRM 4.0 can be combined with WPF to give your users a rich client experience, with relevant CRM information at their fingertips. All of this takes place inside of a rich client-side application that will provide your users with a speedy, interactive application that can be enhanced over time.
I encourage you to try building a dashboard of your own, and to learn more about the display capabilities of Windows Presentation Foundation. I think you’ll find it provides a powerful new interface for building rich client applications that will make your users more productive, well-organized, and happy.
Cheers,
Jeremy Hofmann
Jeremy is a Manager within Crowe Horwath’s CRM practice and specializes in combining Microsoft CRM with related technologies. He lives in the Chicago area and can be contacted at [email protected].
If you would like to receive an email when updates are made to this post, please register here
RSS
PingBack from
VS 2005 extension for WPF is not available for download. | http://blogs.msdn.com/crm/archive/2009/01/07/building-rich-client-dashboards-for-microsoft-dynamics-crm-with-windows-presentation-foundation.aspx | crawl-002 | refinedweb | 3,151 | 56.96 |
A python package for building powerful command-line interpreter (CLI) programs. Extends the Python Standard Library’s cmd package.
The basic use of cmd2 is identical to that of cmd.
Create a subclass of cmd2.Cmd. Define attributes and do_* methods to control its behavior. Throughout this documentation, we will assume that you are naming your subclass App:
from cmd2 import Cmd class App(Cmd): # customized attributes and methods here
Instantiate App and start the command loop:
app = App() app.cmdloop()
These docs will refer to App as your cmd2.Cmd subclass, and app as an instance of App. Of course, in your program, you may name them whatever you want.
Contents: | https://pythonhosted.org/cmd2/ | CC-MAIN-2016-36 | refinedweb | 112 | 67.96 |
blewisjrMember
Content count485
Joined
Last visited
Community Reputation752 Good
About blewisjr
- RankGDNet+
Best engine for card-based game?
blewisjr replied to StealthPandemic's topic in For BeginnersHearthstone did use unity like Lactose said but again that is not really your only choice. UE4 can easily do a card game as well. Unity and UE4 are the top engine players but really almost if not all engines out their can do a card game. With that said it would be good to keep in mind that UE4 has blueprints which can be very advantageous for prototyping or even making the whole game with. Ultimately the choice is yours as you only know the kind of experience you have to work with.
The No-OOP back to basics experiment
blewisjr commented on Mussi's blog entry in Journal of MussiI have been using C for a quite a while. Mostly programming for hobby type stuff in C and ASM on really tiny devices with little to no processing power or memory. They call these things Microcontrollers. The thing is with C you don't necessarily need a shutdown procedure it all depends on what you are doing. The one thing I tell everyone that is going to do things the non OOP way and use the C way is the golden rule of what allocates the memory frees the memory. If you remember this things become very smooth and less over engineered. So you you have a function that allocates memory say for a structure make sure you have a function that frees the memory of that structure. So what you get is a very smooth flow chart like experience. It also makes potential memory leaks easy to fix if you forget to call your function that frees up the memory you allocated. When I code in C I usually dedicate specific code files for tasks as well. So if you are creating a structure or a linked list or whatever have a file that is dedicated to operating on that data which would include manipulation, creation and freeing of the memory. It makes for some smooth organization and prevents your code cluttering up one file. I will agree it can seem like a lot of boilerplate but in reality I much prefer the non object oriented style of Functional and Procedural languages it just makes more sense and I feel leads to better code design as there are less over engineering pitfalls you can corner yourself into getting carried away. Much more linear and understandable.
March 2016: Funding ran out
blewisjr commented on slayemin's blog entry in slayemin's JournalYup mistakes all companies make at some point as long as you learn from them it is well worth the mistake. Just remember almost all individuals game studios and startups went totally broke before they made money. Same for most businesses in general. A lot of it is simply because of the mistakes you made which were not technically mistakes but more of a investment into you business. All of which did not work out but which led to your current product. Just look for the light and gun for it even if it means you need to make some cuts before you get too deep to climb out. Oh and never forget the golden rules of software KISS and there is never perfect software.
Healthbar structure for a game?
blewisjr replied to null;'s topic in For BeginnersThe Health Bar is nothing more then the visual of the Health of the player. So within the player class you would have the health value then the User interface would use that value to display the current health in the form of a health bar.
The project from insanity
blewisjr commented on blewisjr's blog entry in Ramblings of a partialy sane programmerThanks for the suggestion but I think I will stay away from that web stuff never did like web lol. As a little update I have been spending more time with JMonkey and been really digging it so far even if I am not the #1 Java fan I am a Pure C guy it is a really well done engine will probably run with this and see where it goes.
April update: the new procedural dungeon generator
blewisjr commented on Ashaman73's blog entry in Gnoblins - Development journal of an indie gameLooks really awesome. Have you considered also randomly generating the base layout (if not already doing). Fully random layouts can be wonderful to experience as a player.
The project from insanity
blewisjr posted a blog entry in Ramblings of a partialy sane programmerWow has it really been this long since I posted a journal entry. Man time really flies right by it is just insane. Over the last few months I have been going through the motions of designing a project. The project is rather over ambitious for sure and 99% of the worlds population would probably call me insane. Even as I was going through and.
Out of the loop for a while looking for some framework direction
blewisjr replied to blewisjr's topic in For BeginnersThanks for the response. I am not sure if a full blown engine like godot, ue4, or unity would work for something this different and customizeable but will for sure look into them you never know. They can go one of two ways make less work or cause much more work. I am quite out of the normal mold here.
Out of the loop for a while looking for some framework direction
blewisjr posted a topic in For BeginnersWow it has been quite a while since I have been around here. Well finally getting back into game dev for a few reasons. 1. I am and always will have a passion for it. 2. The US Job market blows and I have a lot of spare time while still trying to obtain a job with my degree might as well use my skills and develop them even more. Ok on topic I am designing a game. Well not exactly a game but a core framework to make endless game content and at the same time a game. Think table top rpg gone digital. So I am looking to put a tools together to create my dream game but obviously we need some restrictions. 1. The project will be 100% open source. 2. I would like to if possible to avoid c/c++ as I would like the project to be more approachable and the game will be turn based and 2d anyway so there is not much need for the "performance". 3. 2d for sure as this is a modular game thin NWN style. The game itself is a tool set the modules are actually the game. There will be pre made modules to enjoy but for a real custom experience custom art may be needed and 2d is more approachable for anyone to do. 4. As a advanced creator feature I would like custom rules etc to be create and this might involve some scripting so here I figure lua and or python would be perfect even if they need to be integrated in a c/c++ codebase. So anyone more in the loop offer some suggestions. First there will be gui requirements for the tools and obviously I will need some sort of 2d api or engine probably api as this system is a unique case. If possible doing the whole thing in python would be awesome but from my experience python + graphics + gui has been a mess over the years. So feel free to bombard me with opinions suggestions etal... Keep in mind despite this being in for beginners I was not sure where to put this I am not a new to making a game or programming. And yes I am go ogling like a mad man looking for options.
Choosing a platform for software
blewisjr commented on blewisjr's blog entry in Ramblings of a partialy sane programmerIt really is a nice device Navyman. Very fast and responsive. You can quite literally write with the stylus very legibly and accurately. There is lots I did not do with it yet but I am very impressed at the quality of the device. The screen is amazing and I actually am not sure how they did it but it is every bit as crisp as a retina display.
Choosing a platform for software
blewisjr posted a blog entry in Ramblings of a partialy sane programmerOne thing I have noticed over the years is that software development is becoming ever more fragmented. When I say fragmented I mean the platform choices are expanding rather dramatically. Years ago if you wanted to develop a piece of software you mainly had one choice the desktop. Whether it was a game or a software application you built it for the desktop or in the case of a game you had the additional option of a console if you were part of a large company. Now not to far in the future our options are huge. We can choose between desktop, tablet, phone, console, and even web. The software landscape has changed so much. More and more options are becoming available for the average Joe who wants to get their foot into the door and get their own little startup going..
On C++ Naming Conventions
blewisjr commented on evolutional's blog entry in evolutional.co.ukI follow a style similar to Google's style but slightly different probably due to my C and Java roots. Namespaces are done in Pascal case. Class Names are in Pascal case. Interfaces (abstract) use the I + Pascal case. Variables are all lowercase separated by _ but typically tend to be short and sweet names. Functions/Methods are all lowercase separated by _ and again short and sweet. so you tend to get something like this... In C++ I typically would not use properties unless the variable is read only or if I need to do some background checking on the data first coming in. Otherwise I would just expose the variable directly as being public. namespace MyProject { class Bar { public: virtual ~Bar() {} }; class Foo: public Bar { public: void action(int param); int data(); void set_data(int val); }; }
Java 8 very interesting
blewisjr posted a blog entry in Ramblings of a partialy sane programmerThis is a rather short blog post. I have had some ideas for a project recently with some of the various endeavors I have been contemplating. One of these endeavors is either a desktop application or web application not sure which but I think it makes more sense as a desktop application due to it's purpose. When I was thinking about the project I new I would want it cross platform so my real choices would be either Java or C++. I never made a GUI application in C++ before so I said let me modernize my java install and upgrade to IntelliJ Idea 13.1. Oh by the way IntelliJ idea is worth every penny. If you develop in Java you should really spend the $200 and pick up a personal license which can be used for commercial applications. Really great IDE and I can't wait to see what they do with their C++ ide they are working on. Jetbrains makes amazing tools. So I upgraded everything to Java 8 and decided to make a quick and simple GUI application and use Java 8 features. I will say one thing Java should have added Lambda's a long time ago... With this in mind the following Swing code turns from this... [code=java:1]import javax.swing.*;import java.awt.event.*(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { System.out.println("Hello World"); } }); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setSize(300, 100); setLocationRelativeTo(null); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { new TestGui().setVisible(true); } }); }} to this... [code=java:1]import javax.swing.*(e -> System.out.println("Hello World")); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setSize(300, 100); setLocationRelativeTo(null); } public static void main(String[] args) { SwingUtilities.invokeLater(() -> new TestGui().setVisible(true)); }} So much more elegant and readable I think Oracle just really hooked me back on Java with just this one feature.
Is c++ good
blewisjr replied to RaoulJWZ's topic in For BeginnersI agree with most of the comments here. C/C++ are great to learn to use but they are fazing out in a lot of more common areas. They still have their place and will probably have their place for a very long time yet to come. Learning will make you a better programmer overall even in more modern languages. For games you really do not need C++ or C. There are lots of great technologies out there today that are beyond capable of keeping up. Heck even today much of the games you play are done with scripting languages and those languages hook into the C++ rendering engine on the backend. The main reason I say C/C++ will be for around for a long time is mainly because of specific areas like kernel development as well as embedded micro controller development. Sure there are new languages coming out that are compiled to machine code like Google's Go. The big downfall of those types of languages is the lack of direct memory access through pointers and direct interfacing with assembly code. In the world of Kernels and embedded micro controller (think ARM Cortex M, PIC, AVR) you really need that otherwise you can't really do anything without extreme C interfacing hoops. Some of those chips are so tiny in memory you would be lucky to get a runtime driven language on them. These are extreme cases. So in the end if you are learning your first language I would recommend it not be C++. I would rather see a new programmer on their first language use pure C, C#, Java, or Python. C is a very simple language to learn and will let you learn some really useful concepts this is still my all time favorite language. C#, Java, and Python are also relatively simple languages that rule out memory management and will allow you to focus on core algorithm concepts. Choose something you want to choose not what everyone forces you to choose and stick with it for a while before moving on. Every language you learn will teach you something new.
XNA vs Other
blewisjr replied to tmer1's topic in For BeginnersNow that would be a sight to behold, my tutor scolded me for trying Lazarus instead of Delphi (I had a linux box so Delphi wasn't an option). Heh I had a teacher scold me for using GCC + Makefiles + Vim in my C++ class because it was all I had available at the time due to a computer explosion. Apparently Visual Studio is the only way to write C/C++ now a days in school. | https://www.gamedev.net/profile/100434-blewisjr/?tab=idm | CC-MAIN-2018-05 | refinedweb | 2,506 | 69.41 |
Easy::jit is a library that brings just-in-time compilation to C++ codes. It allows developers to jit-compile some functions and specializing (part of) their parameters. Just-in-time compilation is done on-demand and controlled by the developer. The project is available on github .
The performance of some codes is tied to the values taken by certain parameters whose value is only known at runtime: loop trip counts, values read from configuration files, user-input, etc. If these values are known, the code can be specialized and extra optimizations become possible, leading to better performance.
Easy::jit is a library that brings just-in-time compilation to C++ codes to achieve that goal. It hides complicated concepts and is presented as a simple abstraction; no specific compilation knowledge is required to use it.
Example
To understand the library, consider the following kernel that processes a frame from a video stream:
static void kernel(const char* mask, unsigned mask_size, unsigned mask_area, const unsigned char* in, unsigned char* out, unsigned rows, unsigned cols, unsigned channels) { unsigned mask_middle = (mask_size/2+1); unsigned middle = (cols+1)*mask_middle; for(unsigned i = 0; i != rows-mask_size; ++i) { for(unsigned j = 0; j != cols-mask_size; ++j) { for(unsigned ch = 0; ch != channels; ++ch) { long out_val = 0; for(unsigned ii = 0; ii != mask_size; ++ii) for(unsigned jj = 0; jj != mask_size; ++jj) out_val += mask[ii*mask_size+jj] * in[((i+ii)*cols+j+jj)*channels+ch]; out[(i*cols+j+middle)*channels+ch] = out_val / mask_area; } } } }
and its invocation,
static void apply_filter(const char *mask, unsigned mask_size, unsigned mask_area, cv::Mat &image, cv::Mat *&out) { kernel(mask, mask_size, mask_area, image.ptr(0,0), out->ptr(0,0), image.rows, image.cols, image.channels()); }
In this code, the values of the parameters mask, mask_size and mask_area, depend on user's input and occasionally change. A priori, it's impossible to know which values are going to be taken.
On the other hand, the frame dimensions, rows, cols and channels, depend on the input device and tend to remain constant during the entire execution.
Knowing their values can enable some new optimizations: if the mask dimensions are known and small enough, for example, the optimizer could decide to fully unroll the innermost loops.
Using Easy::jit
The main functionalities of the library are present in the easy/jit.h header file, and the code function of the library is--guess the name--the easy::jit function.
This function takes as input a function to specialize (or a function pointer), and a series of parameter values or placeholders. Parameter values are used to specialize the code of the function passed as parameter; placeholders are used to forward the parameters of the specialized function to the original function. The library tries to mimic the interface of a standard C++ construct: std::bind. Just-in-time compilation takes place as soon as the easy::jit function is called, to give control to the user when and where the compilation is launched.
For example, in the code below, the kernel function is specialized with the mask and the frame dimensions. The first parameter of the specialized function is forwarded as the input frame and the second parameter as the output frame.
The returned object hides the LLVM related data-structures, mimics the signature of the specialized function, and is in charge of freeing the allocated resources.
#include <easy/jit.h> static void apply_filter(const char *mask, unsigned mask_size, unsigned mask_area, cv::Mat &image, cv::Mat *&out) { using namespace std::placeholders; // for _1, _2, ... auto kernel_opt = easy::jit(kernel, mask, mask_size, mask_area, _1, _2, image.rows, image.cols, image.channels()); kernel_opt(image.ptr(0,0), out->ptr(0,0)); }
With this implementation, just-in-time compilation takes place at every frame of the video. To avoid useless re-compilations, the library ships a code cache built using a hash table and the easy::jit function. This way, if a specialized version has already been generated, it is not necessary to go through the entire code generation step again.
The previous code adapted to use the code cache is presented below.
#include <easy/code_cache.h> static void apply_filter(const char *mask, unsigned mask_size, unsigned mask_area, cv::Mat &image, cv::Mat *&out) { using namespace std::placeholders; // for _1, _2, ... static easy::cache<> cache; auto const &kernel_opt = cache.jit(kernel, mask, mask_size, mask_area, _1, _2, image.rows, image.cols, image.channels()); kernel_opt(image.ptr(0,0), out->ptr(0,0)); }
How?
This library relies on C++ meta-programming and a compiler plugin to achieve its objectives.
Template meta-programming is used to initialize the Context from the call to easy::jit. This Context contains all the information required to perform the specialization: values taken by the parameters, pointer to the required function to specialize, special options, etc.
Additionally, the low-level LLVM objects are wrapped together into an opaque object. The operator() is specialized with the appropriate types derived from the easy::jit call, such that type checking is performed on every argument and their values are casted if necessary.
However, template meta-programming is not enough to provide JIT support: special compiler help is required to identify which functions will be specialized at runtime, and to embed their bitcode implementation in the final executable. This bitcode is used later on, at runtime for specialization and optimization.
Benchmarks
The results were obtained using Google Benchmark. We compiled two kernels, a convolution kernel, similar to the one presented in the previous sections, and a C-style quicksort kernel, were the function performing the comparison can be inlined thanks to just-in-time compilation. The input sizes were changed from a matrix of 16x16 elements up to 1024x1024 elements for the convolution kernel, and an array of 16 elements up to 1024 elements for the quicksort kernel. The times taken by the code generation process and by a cache hit are also measured. The time corresponds to the average time of #Iterations executions.
The sources can be found in the project's repository if you wish to reproduce. The benchmark is running on an Intel(R) Core(TM) i7-8550U CPU @ 4.00GHz.
The kernel using Easy::jit executes roughly 4.5x faster than the original version for the convolution kernel, and around 2x faster for the quicksort kernel. These times do not take into account the time taken to generate the code, only the kernel execution. The time required for the code generation process remains big in comparison to the kernel execution time: for the 1024 input sizes, at least 2 iterations of the kernel convolution are required to compensate the cost of code generation, and 29 iterations for the quicksort kernel. Still the time taken by a cache hit is negligible. In its current version, the cache is not persistent, and the generated codes remain in memory. This time may change if the generated code were to be read from a file (as expected in future versions). We can conclude that to profit from the compilation strategy proposed by Easy::jit, being able to reuse the generated code versions is critical.
----------------------------------------------------------------------------------------------- Benchmark Time JIT (#Iterations) Time Original (#Iterations) Speedup ----------------------------------------------------------------------------------------------- BM_convolve_jit/16 314 ns (2307965) 1559 ns (449056) 3.69 x BM_convolve_jit/32 1674 ns (438450) 7780 ns (90692) 4.64 x BM_convolve_jit/64 7221 ns (96520) 34714 ns (19811) 4.80 x BM_convolve_jit/128 30487 ns (22970) 144905 ns (4820) 4.75 x BM_convolve_jit/256 126170 ns (5533) 597026 ns (1180) 4.73 x BM_convolve_jit/512 512088 ns (1372) 2417549 ns (289) 4.72 x BM_convolve_jit/1024 2073025 ns (338) 9737191 ns (72) 4.69 x BM_convolve_compile_jit 13147455 ns (54) BM_convolve_cache_hit_jit 218 ns (3199323)
----------------------------------------------------------------------------------------------- Benchmark Time JIT (#Iterations) Time Original (#Iterations) Speedup ----------------------------------------------------------------------------------------------- BM_qsort_jit/16 171 ns (4082824) 279 ns (2500416) 1.63 x BM_qsort_jit/32 545 ns (1284095) 1032 ns (677563) 1.89 x BM_qsort_jit/64 1860 ns (374257) 3658 ns (191657) 1.96 x BM_qsort_jit/128 6844 ns (101668) 13604 ns (51633) 1.98 x BM_qsort_jit/256 25009 ns (27731) 52199 ns (13417) 2.08 x BM_qsort_jit/512 95446 ns (7317) 205984 ns (3427) 2.15 x BM_qsort_jit/1024 369910 ns (1855) 810159 ns (866) 2.19 x BM_qsort_compile_jit 13013694 ns (54) BM_qsort_cache_hit_jit 174 ns (3979212)
What's next
The library remains on its early stages. Among the missing basic features we can list:
- struct parameters are not correctly supported
- functions returning a struct are not well supported.
In the midterm, the first objective is to provide the mechanisms to perform profile-guided-optimization and optimization using speculation. We also aim at handling a set of common constructs and patterns (caching and threading, for example) to simplify the usage of the library.
In the very end, the objective is to be able to perform partial evaluation of codes. There already exist some work on this area, in particular LLPE is an implementation based on LLVM.
Thanks to Serge Guelton for his contributions to the project and many discussions. Thanks to Quarkslab for supporting us in working on personal projects. | https://blog.quarkslab.com/easyjit-just-in-time-compilation-for-c.html | CC-MAIN-2021-39 | refinedweb | 1,502 | 54.83 |
Today, InfoQ publishes a sample chapter "Integrating with a GWT-RPC Servlet" from "Google Web Toolkit", a book authored by Ryan Dewsbury.
Performance is the main reason Ajax is so popular. We often credit the glitzy effects used in many Ajax apps as their core appeal, and users may point to the same thing when explaining why they prefer Ajax apps. It makes sense, because traditional web apps look static and boring by comparison. However, if glitzy effects really did dramatically improve the user experience, we would see much wider use of animated GIFs. Thankfully those days are gone. Ajax will not go the way of the animated GIF, because the true value that improves the user experience, whether the user is conscious of it or not, is not the glitz. It is performance.
In this article I'm not going to show you why Ajax inherently performs better than traditional web applications. If you're not sure, look at Google Maps and remember older web mapping apps, or compare Gmail to Hotmail. By basing your application on an Ajax architecture you can dramatically improve performance and the user experience for your app. Instead, in this article I'm going to show you how to push this performance improvement to the next level, to make your Ajax application stand apart from the rest.
Why GWT?
The Google Web Toolkit (GWT) provides a significant boost to Ajax development. Any new technology like this is a hard sell, especially when there are many other choices. In reality, though, nothing else gives you the benefits GWT gives you for Ajax applications. If you're not already bound to a framework, it just doesn't make sense not to use it. By using GWT for your Ajax application you get big performance gains for free.
By free I mean that you just don't need to think about it. You concentrate on writing your application logic and GWT is there to make things nice for you. You see, GWT comes with a compiler that compiles your Java code to JavaScript. If you're familiar with compiled languages (C, Java, etc.) you'll know that one of their goals is to keep your source code platform independent. The compiler can make optimizations to your code that are specific to the platform being compiled for, so you can focus on keeping your code readable and well organized. The GWT compiler does the same thing. It takes your Java code and compiles it down to a few highly optimized JavaScript files, each one exclusively for use with a specific browser, making your code small and browser independent. The optimization steps employ real compiler optimizations, like removing uncalled methods and inlining code, essentially treating JavaScript as the assembly code of the web. The resulting code is small and fast. When the JavaScript is loaded in the browser it contains only the code needed for that browser and none of the framework bloat from unused methods. Applications built using GWT are smaller and faster than applications built directly with JavaScript, and the GWT team, typically very modest, is now confident that the GWT 1.5 compiler produces JavaScript that is faster than anything anyone could code by hand. That should be enough to convince anyone to use GWT for an Ajax application, but if it isn't, there are plenty of other reasons to use GWT, including the availability of Java software engineering tools (debugging Ajax applications in Eclipse is a huge plus for me).
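To make the dead-code elimination concrete, here is a trivial entry point. The package, class, and method names below are invented for illustration, and the exact JavaScript the compiler emits varies by browser permutation and GWT version; the point is simply that code the compiler can prove is never called does not survive into the output, and small methods can be inlined at their call sites.

package com.example.client;                       // hypothetical package name

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;

public class HelloApp implements EntryPoint {

  public void onModuleLoad() {
    // greet() is small and has a single call site, so the compiler can inline it.
    Window.alert(greet("GWT"));
  }

  private String greet(String name) {
    return "Hello, " + name;
  }

  // Never called from any reachable code, so the compiler drops it entirely;
  // this string should not appear anywhere in the generated JavaScript.
  private String unusedHelper() {
    return "dead code";
  }
}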
Do You Want More?
Why stop there. Ajax applications perform better than traditional web applications and GWT applications perform better than regular Ajax applications. So by simply making a few technology choices you can build applications that perform really, really well, and focus on your application features. You'll be done your work in half the time too. However GWT doesn't magically do everything. I will cover four things that you can do on your own to boost your Ajax application performance even further.
1. Cache Your App Forever
When you compile your GWT application to JavaScript a file is created for each browser version that has a unique name. This is your application code and can be used for distribution simply by copying it to a web server. It has built in versioning since the filename name is a hash of your code. If you change your code and compile a new filename is created. This means that either the browser has a copy of this file already loaded or it doesn't have it at all. It doesn't need to check for a modified date (HTTP's If-Modified-Since header) to see if a newer version is available. You can eliminate these unneeded browser HTTP trips. They can be fairly small but add up to a lot when your user base grows. They also slow down your client since browsers can only have two active requests to a host. Many optimizations with load time for Ajax involve reducing the number of requests to the server.
To eliminate the version requests made by the browser you need to tell your web server to send the Expires HTTP header. This header tells the browser when the content is not considered fresh again. The browser can safely not check for new versions until the expire date has passed. Setting this up in Apache is easy. You need to add the following to your .htaccess file:
<Files *.cache.*> ExpiresDefault "now plus 1 year" </Files>
This tells apache to add the expires header to one year from now for every file that matches the pattern *.cache.*. This pattern will match your GWT application files.
If you're using Tomcat directly you can add headers like this through a servlet filter. Adding a servlet filter is fairly straightforward. You need to declare the filter in your WEB_INF/web.xml file like this:
<filter> <filter-name>CacheFilter</filter-name> <filter-class>com.rdews.cms.filters.CacheFilter</filter-class> </filter> <filter-mapping> <filter-name>CacheFilter</filter-name> <url-pattern>/gwt/*</url-pattern> </filter-mapping>
This tells tomcat where to look for the filter class and which files to send through the filter. In this case the pattern /gwt/* is used to select all the files in a directory named gwt. The filter class implements the doFilter method to add the Expires header. For GWT we want to add the header to each file that doesn't match *.nocache.*. The nocache file should not be cached since it contains the logic to select the current version. The following is the implementation of this filter:
public class CacheFilter implements Filter { private FilterConfig filterConfig; public void doFilter( ServletRequest request, ServletResponse response, FilterChain filterChain) throws IOException, ServletException { HttpServletRequest httpRequest = (HttpServletRequest)request; String requestURI = httpRequest.getRequestURI(); if( !requestURI.contains(".nocache.") ){ long today = new Date().getTime(); HttpServletResponse httpResponse = (HttpServletResponse)response; httpResponse.setDateHeader("Expires", today+31536000000L); } filterChain.doFilter(request, response); } public void init(FilterConfig filterConfig) throws ServletException { this.filterConfig = filterConfig; } public void destroy() { this.filterConfig = null; } }
2. Compress Your Application
The GWT compiler does a good job at reducing code size but cutting unused methods and obfuscating code to use short variable and function names, but the result is still uncompressed text. Further size improvements can be made buy gzipping the application for deployment. With gzip you can reduce your application size by up to 70%, which makes your application load quicker.
Fortunately this is an easy to do with server configuration as well. To compress files on apache simply add the following to you .htaccess file:
SetOutputFilter DEFLATE
Apache will automatically perform content negotiation with each browser and send the content compressed or not compressed depending on what the browser can support. All modern browsers support gzip compression.
If you're using Tomcat directly you can take advantage of the compression attribute on the Connector element in your server.xml file. Simply add the following attribute to turn compression on:
compression="on"
3. Bundle Your Images
Ajax application distribution leverages the distribution power of the browser and HTTP, however the browser and HTTP are not optimized for distributing Ajax applications. Ajax applications are closer to Desktop applications in their needs for deployment where traditional web applications use a shared resource distribution model. Traditional web applications rely on interactions between the browser and web server to manage all of the resources need to render a page. This management ensures that resources are shared and cached between pages ensuring that loading new pages involves as little downloading as possible. For Ajax applications resources are typically not distributed between documents and don't need to be loaded separately. However it is easy to simply use the traditional web distribution model when loading application resources, and many applications often do.
Instead, you can reduce the number of HTTP requests required to load your application by bundling your images into one file. By doing this your application loads all images with one request instead of two at a time.
As of GWT 1.4 the ImageBundle interface is supported. This feature lets you define an interface with a method for each image you'll use in your application. When the application is compiled the interface is read and the compiler combines all of the images listed into one image file, with a hash of the image contents as the file name (to take advantage of caching the file forever just like the application code). You can put any number of images in the bundle and use them in your application with the overhead of a single HTTP request.
As an example, I use the following image bundle for the basic images in a couple applications I've helped build:
public interface Images extends ImageBundle { /** * @gwt.resource membersm.png */ AbstractImagePrototype member(); /** * @gwt.resource away.png */ AbstractImagePrototype away(); /** * @gwt.resource starsm.gif */ AbstractImagePrototype star(); /** * @gwt.resource turn.png */ AbstractImagePrototype turn(); /** * @gwt.resource user_add.png */ AbstractImagePrototype addFavorite(); }
Notice that each method has a comment annotation specifying the image file to use and a method that returns an AbstractImagePrototype. The AbstractImagePrototype has a createImage method that returns an Image widget that can be used in the application's interface. The following code illustrates how to use this image bundle:
Images images = (Images) GWT.create(Images.class); mainPanel.add( images.turn().createImage() );
It's very simple but provides a big startup performance boost.
4. Use StyleInjector
What about CSS files and CSS images as application resources? In a traditional web distribution model these are treated as external resources, loaded and cached independently. When used in Ajax applications they involve additional HTTP requests and slow down the loading of your application. At the moment GWT doesn't provide any way around this however there is a GWT incubator project, which has some interesting GWT code that may be considered for future versions. Of particular interest is the
ImmutableResourceBundle and
StyleInjector.
The ImmutableResourceBundle is much like an ImageBundle but can be used for any type of resource including CSS and CSS images. It's goal is to provide an abstraction around other resources to have them handled in the most optimal way possible for the browser running the application. The following code is an example of this class used to load a CSS file and some resources:
public interface Resources extends ImmutableResourceBundle { /** * @gwt.resource main.css */ public TextResource mainCss(); /** * @gwt.resource back.gif */ public DataResource background(); /** * @gwt.resource titlebar.gif */ public DataResource titleBar(); /** * @gwt.resource dialog-header.png */ public DataResource dialogHeader(); }
For each resource a file and method is specified much like the ImageBundle however the return value for the methods is either a DataResource or a TextResource. For the TextResource you can use its getText method to get it's contents and for the DataResource you can use getUrl to reference the data, (for example in an IMG tag or IFRAME). How this data is loaded is handled differently for different browsers and you don't need to worry about it. In most cases the data is an inline URL using the data: URL prefix. The possibilities for this class are vast, but the most immediate use is to bundle CSS directly with your application file.
Notice in the interface that a CSS file and some images are referenced. In this case the interface is being used to bundle CSS and it's images with the application to reduce HTTP calls and startup time. The CSS text specified background images for some of the application elements but instead of providing real URL's it lists placeholders. These placeholders reference other elements in the bundle, specifically the other images. For example, the main.css file has a CSS rule for the gwt-DialogBox style name:
.gwt-DialogBox{ background-image:url('%background%') repeat-x; }
To use this CSS file and it's images in your application you need to use the StyleInjector class from the GWT incubator project. The StyleInjector class takes the CSS data and matches the placeholders to resources in a resource bundle then injects the CSS into the browser for use in your application. It sounds very complicated but it's simple to use and improves performance. The following is an example of injecting CSS from a resource bundle into your application with StyleInjector:
Resources resources = (Resources)GWT.create(Resources.class); StyleInjector.injectStylesheet( resources.mainCss().getText(), resources );
It's important to note that this technique is part of the incubator project and will most likely change in the future.
Conclusion
Ajax applications have a big usability jump from traditional web applications and GWT provides tools that give you better Ajax performance for free. You should compare the startup speed of the GWT mail sample to other sample Ajax applications. By paying attention to the deployment differences between traditional web applications and Ajax applications we can push application performance even further. I'm excited to see the next generation of Ajax applications.
About the Author.
Note:The code/text does not address security issues or error handling
Copyright: This content is excerpted from the book, "Google Web Toolkit Applications", authored by Ryan Dewsbury, published by Prentice Hall Professional, December, 2007, Copyright 2008 Pearson Education, Inc. ISBN 0321501969 For more information, please
Compression : warning with some Internet explorer version, it doesn't work
by Nicolas Martignole,
Re: Compression : warning with some Internet explorer version, it doesn't w
by Manuel Carrasco Moñino,
Re: Compression : warning with some Internet explorer version, it doesn't w
by venugopal pokala,
Jetty Continuations with GWT
by Jan Bartel,
Apache configuration
by Papick Taboada,
Compression : warning with some Internet explorer version, it doesn't work
by Nicolas Martignole,
Your message is awaiting moderation. Thank you for participating in the discussion.
Just a small warning about gzip compression and the following version of Microsoft Internet Explorer: 5.x, 6.0 and 6.0 SP1. There are known bugs with gzip compression. See MSDN for more details.
You might need to add a User-Agent verification in the filter so that the filter does not compress content for some specific browser. Have a look also at the default httpd.conf for Apache, there's more information about this.
Jetty Continuations with GWT
by Jan Bartel,
Your message is awaiting moderation. Thank you for participating in the discussion.
Along the lines of performance and scalability, I thought it would be worth mentioning that GWT apps can also take advantage of Jetty Continuations:
Jetty GWT with Continuations
cheers
Jan
Re: Compression : warning with some Internet explorer version, it doesn't w
by Manuel Carrasco Moñino,
Your message is awaiting moderation. Thank you for participating in the discussion.
That's true... . But it seems the problem in IE occurs only with js compressed files, not with html. Using the default linker, GWT produces especial [...].cache.html files including the application javascript code. So you can compress the application in htmp files without problems. If you are using the gwt cross site compiler you can not use compression for these version of explorer.
Re: Compression : warning with some Internet explorer version, it doesn't w
by venugopal pokala,
Your message is awaiting moderation. Thank you for participating in the discussion.
We have developed a stand alone web based application using GWT and performance of this application in Mozilla firefox browser is 3 times better than the IE browser. Any help in improving performance of this application in IE browser would be great help to us.
Thanks,
Venu
Apache configuration
by Papick Taboada,
Your message is awaiting moderation. Thank you for participating in the discussion.
I am using the Apache proxy in front of my tomcat to configure compression and http headers.
bit.ly/GwtApacheConfig | https://www.infoq.com/articles/gwt-high-ajax/?itm_source=articles_about_ajax&itm_medium=link&itm_campaign=ajax | CC-MAIN-2021-25 | refinedweb | 2,785 | 54.73 |
Translation of Enums and ADTsEdit this page on GitHub
The compiler expands enums and their cases to code that only uses Scala's other language features. As such, enums in Scala are convenient syntactic sugar, but they are not essential to understand Scala's core.
We now explain the expansion of enums in detail. First, some terminology and notational conventions:
- We use
Eas a name of an enum, and
Cas a name of a case that appears in
E.
We use
<...>for syntactic constructs that in some circumstances might be empty. For instance,
<value-params>represents one or more a parameter lists
(...)or nothing at all.
Enum cases fall into three categories:
- Class cases are those cases that are parameterized, either with a type parameter section
[...]or with one or more (possibly empty) parameter sections
(...).
- Simple cases are cases of a non-generic enum that have neither parameters nor an extends clause or body. That is, they consist of a name only.
- Value cases are all cases that do not have a parameter section but that do have a (possibly generated) extends clause and/or a body.
Simple cases and value cases are collectively called singleton cases.
The desugaring rules imply that class cases are mapped to case classes, and singleton cases are mapped to
val definitions.
There are nine desugaring rules. Rule (1) desugar enum definitions. Rules (2) and (3) desugar simple cases. Rules (4) to (6) define extends clauses for cases that are missing them. Rules (7) to (9) define how such cases with extends clauses map into case classes or vals.
An
enumdefinition
enum E ... { <defs> <cases> }
expands to a
sealed
abstractclass that extends the
scala.Enumtrait and an associated companion object that contains the defined cases, expanded according to rules (2 - 8). The enum trait starts with a compiler-generated import that imports the names
<caseIds>of all cases so that they can be used without prefix in the trait.
sealed abstract class E ... extends <parents> with scala.Enum { import E.{ <caseIds> } <defs> } object E { <cases> }
A simple case consisting of a comma-separated list of enum names
case C_1, ..., C_n
expands to
case C_1; ...; case C_n
Any modifiers or annotations on the original case extend to all expanded cases.
A simple case
case C
of an enum
Ethat does not take type parameters expands to
val C = $new(n, "C")
Here,
$newis a private method that creates an instance of of
E(see below).
If
Eis an enum with type parameters
V1 T1 > L1 <: U1 , ... , Vn Tn >: Ln <: Un (n > 0)
where each of the variances
Viis either
'+'or
'-', then a simple case
case C
expands to
case C extends E[B1, ..., Bn]
where
Biis
Liif
Vi = '+'and
Uiif
Vi = '-'. This result is then further rewritten with rule (8). Simple cases of enums with non-variant type parameters are not permitted.
A class case without an extends clause
case C <type-params> <value-params>
of an enum
Ethat does not take type parameters expands to
case C <type-params> <value-params> extends E
This result is then further rewritten with rule (9).
If
Eis an enum with type parameters
Ts, a class case with neither type parameters nor an extends clause
case C <value-params>
expands to
case C[Ts] <value-params> extends E[Ts]
This result is then further rewritten with rule (9). For class cases that have type parameters themselves, an extends clause needs to be given explicitly.
If
Eis an enum with type parameters
Ts, a class case without type parameters but with an extends clause
case C <value-params> extends <parents>
expands to
case C[Ts] <value-params> extends <parents>
provided at least one of the parameters
Tsis mentioned in a parameter type in
<value-params>or in a type argument in
<parents>.
A value case
case C extends <parents>
expands to a value definition in
E's companion object:
val C = new <parents> { <body>; def ordinal = n; $values.register(this) }
where
nis the ordinal number of the case in the companion object, starting from 0. The statement
$values.register(this)registers the value as one of the
valuesof the enumeration (see below).
$valuesis a compiler-defined private value in the companion object.
It is an error if a value case refers to a type parameter of the enclosing
enumin a type argument of
<parents>.
A class case
case C <params> extends <parents>
expands analogous to a final case class in
E's companion object:
final case class C <params> extends <parents>
However, unlike for a regular case class, the return type of the associated
applymethod is a fully parameterized type instance of the enum class
Eitself instead of
C. Also the enum case defines an
ordinalmethod of the form
def ordinal = n
where
nis the ordinal number of the case in the companion object, starting from 0.
It is an error if a value case refers to a type parameter of the enclosing
enumin a parameter type in
<params>or in a type argument of
<parents>, unless that parameter is already a type parameter of the case, i.e. the parameter name is defined in
<params>.
Translation of Enumerations
Non-generic enums
E that define one or more singleton cases
are called enumerations. Companion objects of enumerations define
the following additional synthetic members.
- A method
valueOf(name: String): E. It returns the singleton case value whose
toStringrepresentation is
name.
- A method
valueswhich returns an
Array[E]of all singleton case values in
E, in the order of their definitions.
Companion objects of enumerations that contain at least one simple case define in addition:
A private method
$newwhich defines a new simple case value with given ordinal number and name. This method can be thought as being defined as follows.
private def $new(_$ordinal: Int, $name: String) = new E { def $ordinal = $_ordinal override def toString = $name $values.register(this) // register enum value so that `valueOf` and `values` can return it. }
The
$ordinal method above is used to generate the
ordinal method if the enum does not extend a
java.lang.Enum (as Scala enums do not extend
java.lang.Enums unless explicitly specified). In case it does, there is no need to generate
ordinal as
java.lang.Enum defines it.
Scopes for Enum Cases
A case in an
enum is treated similarly to a secondary constructor. It can access neither the enclosing
enum using
this, nor its value parameters or instance members using simple
identifiers.
Even though translated enum cases are located in the enum's companion object, referencing
this object or its members via
this or a simple identifier is also illegal. The compiler typechecks enum cases in the scope of the enclosing companion object but flags any such illegal accesses as errors.
Translation of Java-compatible enums
A Java-compatible enum is an enum that extends
java.lang.Enum. The translation rules are the same as above, with the reservations defined in this section.
It is a compile-time error for a Java-compatible enum to have class cases.
Cases such as
case C expand to a
@static val as opposed to a
val. This allows them to be generated as static fields of the enum type, thus ensuring they are represented the same way as Java enums.
Other Rules
A normal case class which is not produced from an enum case is not allowed to extend
scala.Enum. This ensures that the only cases of an enum are the ones that are
explicitly declared in it. | http://dotty.epfl.ch/docs/reference/enums/desugarEnums.html | CC-MAIN-2019-35 | refinedweb | 1,249 | 55.13 |
Hi everyone,
I am new to these forums, I've been looking for an answer to this question for awhile so hopefully someone can help me out.
I am relatively new to programming, I know my way around but have a lot to learn. I wrote a program the past two days to help me study my AP Chem terms so I can memorize them. The program is now fully functional and runs in Eclipse when i hit run. My question is how can i make my program run OUTSIDE of eclipse? I want to be able to make a .jar (or whatever is best) so that i can double click it and open it up. I assume it would have to use a command prompt or something (the program is ALL text, no jframes or anything like that) When I try to export it as a .jar file and run it, it shows up saying it failed to load the main class manifest and can't start. How can I make the program work? again all i want is to open a command prompt or something like that that displays text and allows user input of text. Hopefully you all understand what I am asking (It's difficult to describe). Please respond with your thoughts. Thanks again!
it shows up saying it failed to load the main class manifest and can't start.
To execute a program in a jar file via the double click on the file, you need to have the OS execute the java command with the -jar option. The jar file must contain a manifest file with a Main-Class: record that refers to the class with the main() method that will start the program. Your IDE can be configured to generate all that.
To get a console window, try making a batch file that will execute the jar file. Then create a shortcut that will execute the batch file.
Norm
The .java file im making into a jar starts with this (showing it has a main() )
public class CalcReview {
public static void main(String[] args) {
the .jar ONLY contains this file, there arent multiple .java files in it. so it has the main() in it already right? Forgive me, I'm new to all of this. As for the batch file, I would have no clue how to do that either =/ ill try to google it to find out more about it
i SEEM to have made some progress....i exported it differently and don't get the failed to find manifest error. I can't get the batch file to run. I type java -jar Myjar.jar and it returns that java is not recognized as an internal or external command. However that format is the only one i've found.....can you please recommend to me how to make a simple batch so it'll work?
To make an executable jar file you need to put the .class files in it that are to be executed.
You need to do some reading about how to create and use executable jar files.
how to make a simple batch so it'll work
Use the full path to the java.exe file in the batch file.
I tried to run it now through the cmd and it must have worked but i got the message invalid or corrupt jar file....this is so difficult -.-
Can you copy the console here from when you try to execute it?
To copy the contents of the command prompt window:
Click on Icon in upper left corner
Select Edit
Select 'Select All' - The selection will show
Click in upper left again
Select Edit and click 'Copy'
Paste here.
java -jar "C:\myjarfile.jar"
This is from the CMD.
More information: When i export out of eclipse, I let it generate the manifest in CalcReview/bin/manifest (calcreview being the project of the jar). I don't know if this is wrong or not...I got the batch file working too. Everything now says invalid or corrupt jar file
C:\Users\Danny\Desktop>ECHO OFF
Press any key to continue . . .
Whoops, this is what i meant to write for the CMD's copy/paste. there's no edit button on these forums -.-
Can you copy ALL of the command prompt console window including the error message
the error message doesn't show up in the console. it shows up as a new window. heres a ppic:
Uploaded with ImageShack.us
Try opening the jar file in a zip file utility. Check that it contains .class file(s) and a manifest file.
I just did, It contains all of the .class files and the manifest...
Try executing the java command in the command prompt window and copying the contents of the window here.
Forum Rules | http://forums.codeguru.com/showthread.php?516411-Strings-in-variable-names&goto=nextoldest | CC-MAIN-2018-09 | refinedweb | 805 | 83.15 |
Definition at line 83 of file PlannerRRT_SE2_TPS.h.
#include <mrpt/nav/planners/PlannerRRT_SE2_TPS.h>
The set of target nodes within an acceptable distance to target (including
best_goal_node_id and others)
Definition at line 52 of file PlannerRRT_common.h.
The ID of the best target node in the tree.
Definition at line 49 of file PlannerRRT_common.h.
Time spent (in secs)
Definition at line 43 of file PlannerRRT_common.h.
Distance from best found path to goal.
Definition at line 45 of file PlannerRRT_common.h.
The generated motion tree that explores free space starting at "start".
Definition at line 55 of file PlannerRRT_common.h.
Total cost of the best found path (cost ~~ Euclidean distance)
Definition at line 47 of file PlannerRRT_common.h.
Whether the target was reached or not.
Definition at line 41 of file PlannerRRT_common.h. | https://docs.mrpt.org/reference/devel/structmrpt_1_1nav_1_1_planner_r_r_t___s_e2___t_p_s_1_1_t_planner_result.html | CC-MAIN-2020-16 | refinedweb | 134 | 61.12 |
Mistakes I've made writing react/redux applications
October 24, 2018
I love react, I’ve been writing applications with it since 2014, I’ve used lots of good and bad patterns, different tools and approaches. Some of them make me proud, some make me feel ashamed.
Now I start to get that this is kind of a normal feeling, especially after seeing great developers like Ryan Dahl and his talk called 10 Things I Regret About Node.js. If you did not see it, yet, you’re missing a great piece of knowledge.
Some of the mentioned principles, patterns and mistakes are not react/redux only, in my opinion, you can apply them to most of the software you write.
The good part of doing all those kinds of mistakes is that we learn. Either by looking back and thinking or by stumbling with you and your code from a year ago.
The main objective of this is to share them so you don’t waste time doing the same exact mistakes I did.
With no more to say, let’s proceed to the mistakes & lessons learned.
1. Componentize to soon - Create abstractions you don’t need
We’ve all been there. Components and modules make our eyes shine, we like this utopic idea that one day building a feature will be just grabbing a bunch of components we built 6 months ago, wire them together and we’re done. And I wouldn’t say this does not happen, but I will say this is not how it normally happens.
Normally, deluded by this idea, we do things like this:
We have a button component
<button className="{styles.button}"> Buy me! </button>
Isn’t it so neat? We’re going to use it in a lot of other places, let’s abstract it.
const Button = ({ children }) => ( <button className={styles.button}>{children}</button> ) render(<Button>Buy me!</Button>)
What if we want to format this text?
<Button> <Text bold>Buy me!</Text> </Button>
What if we want to format every single word?
<Button> <Text bold> <Word onHover="{doStuff}">Buy</Word> <Word>me!</Word> </Text> </Button>
You get the point, you needed this:
And you ended up with this:
Lesson 1 - Do not predict the future. You are not gonna need it
Martin Fowler has a term for this yagni
We often commit this mistake, we should only build what we need.
Do the simplest thing that can possibly work
2. Just one more
We all know, it’s always more 10 minutes in bed, more 40 minutes watching Netflix, one more drink.
The same happens with our components. We had this beautiful button with a pixel-perfect style:
<Button> Hello! </Button>
But we needed to invert the colors, and so we add a prop.
<Button inverted> Hello! </Button>
But we needed to make it wider, and we added a prop
<Button wide> Hello! </Button>
But we needed to make it have a special behavior while on header, and we added a prop
<Button isHeader> Hello! </Button>
Once again, you get the point. Now our beautiful button is everywhere, every time we need to touch that button’s code, we pray and try to think about all the possible use cases so we don’t end up breaking it. We shouldn’t have to do this.
Lesson 2 - Compose components - Design with the open-closed principle in mind
The open closed principle states
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
What does this mean in this case?
Given the header example composition should have been used, something like the following:
import Button from 'components/button'; const HeaderButton = (props) => ( <Button {...props} onClick={() => { myNewBehaviour(); }} Hello I'm an header button </Button> )
Here the original button is extended, keeping the same API, without touching the original code, but using it and extending it. It is fully compliant to the open-closed principle, and thus much more easy to change in the future.
3. Badly designed redux store
Redux is heavily used, and even though it establishes some conventions, it does not do much to enforce them. You can design your store as you want, create the actions you want, etc. You have full freedom.
This is both a pro and a con, it’s a tradeoff. What this means is that sometimes people will design their store badly, and this is not redux’s fault.
Ever seen those components that have loads of logic on render? Or that info that you already have on the store but it is so hard to access it that you end up duplicating it? That’s a badly designed redux store.
Imagine the following scenario: You fetch a list of todos from an API, you wanna show them in a list. You store them, and you end up with a store that looks like this:
const state = { todos: [ { text: "Write a blogpost", id: 1 }, { text: "Go to a meetup", id: 2 }, { text: "Learn golang", id: 3 }, ], selectedTodo: { text: "Write a blogpost", id: 1 }, }
And them you just
map through the todos to display them, it works right?
Huuuum… not really.
You’ve probably spotted some of the mistakes.
Starting by the redundancy of the
selectedTodo. What will happen if you update the
selectedTodo? Will you have to remember to go to the list and update it again? Or will you forget and get incoherent data?
What happens if you want to access the todo with the
id = 3? Yes, we will have to iterate on the list, to try to find it. That’s ok for now, but we know it is going to be a problem.
Lesson 3.1 - Design your store like a database
Think about what queries are you going to do to your database? What are the indexes? Is it going to be updated? Or just read?
Store items by indexes, use references, it is a database.
const state = { todos: { byId: { 1: { text: "Write a blogpost", id: 1 }, 2: { text: "Go to a meetup", id: 2 }, 3: { text: "Learn golang", id: 3 }, }, selected: 1, allIds: [1, 2, 3], idsOrderedByText: [2, 3, 1], }, }
Can you see how easy it is to update todo with the
id = 2 now? Or to change the selected one without having incoherent information?
You might be thinking, what if I want to display them in a list (remember that was the original requirement?). You can still do it like this:
todos.allIds.map(id => todos.byId[id])
3.2 Coupling UI state with data
This is also connected to the principle above. Remember that time you fetched information from an API that you needed for the header? Imagine
users, and at the time it made sense to store it on the
header index? Something like the following:
const state = { header: { userBar: { isOpen: false, users: [ { first: "Alexandre", last: "Santos", id: 1 }, { first: "Pedro", last: "Santos", id: 2 }, ], }, }, }
A couple of months later, when you were doing the footer and you needed users’ first name, you ended up accessing it like
header.userBar.users[0] on the footer component? Remember?
Doesn’t sound good, does it? What happens if the developer that is touching the header changes the structure? Will he remember to go and fix the footer? Most likely not.
Lesson 3.2 - UI and Entities should be stored separately
Not to talk about the need (or not) to store UI data on redux (most of the times you don’t), if you store it, keep the UI data in one place, and keep your data (your entities) in a completely different place.
They shouldn’t be coupled, you don’t wanna mess the header loading state just because you changed the structure of a user, right?
const state = { header: { userBar: { isOpen: false, }, }, users: { byId: { 1: { first: "Alexandre", last: "Santos", id: 1 }, 2: { first: "Pedro", last: "Santos", id: 2 }, }, allIds: [1, 2], }, }
4. Components directly accessing the store
We all remember how magical it looked the first
mapStateToProps we wrote, isn’t it easy to just get the data you need from the store? It is so declarative!
It has it’s advantages, for sure, but it also lets you do things like this:
const mapStateToProps = state => ({ user: state.users.list[0].name.first })
Sounds familiar? If it is in one place, that’s not so bad (a litte bit though). But what if this spreads all around your application? What if the user store changes its structure? Will you come back and change it everywhere it is used?
Lesson 4 - Depend on abstractions
This is the d from SOLID that pplied to this specific context, means that you shouldn’t depend on concretions, but on abstractions.
What does that mean? What if you used a selector to get the user’s first name? Something like this:
const getUserFirstName = (state) => state.users.list[0].name.first;
Store it near the reducer and whenever you change the store structure, you also update this.
Then your components can depend on the selector, and your
mapStateToProps now looks a little bit cleaner:
const mapStateToProps = state => ({ user: getUserFirstName(state) })
Your components depend now on an abstraction, making it much easier to change in future without breaking anything.
Bonus - Lifting all the state up
We’ve probably seen that too, store every single piece of state in redux store.
This leads to a lot of store updates and a lot of data made globally that in reality is only being accessed locally. And why? Just because “we want to have redux advantages, we want to use reducers and actions”.
Good news is that you can do that while using the components’ local state. I wrote a blog post about it, give it a read, I promise it will be useful!
Conclusion
Those were the mistakes I made and I’ve seen doing while writing applications. There are definitely more but these were the ones I think are most impactful and the ones that can end up causing problems in maintaining an application.
Below is a TLDR of them, if you just skimming through this post or if you wanna take short notes.
❌ Create needless abstractions
✅ You are not gonna need it. Do not predict the future
❌ Always add one more prop
✅ Compose compose compose
❌ Badly designed redux store
✅ Think of your store like a database
❌ Store UI data and Entities together
✅ Completely decouple UI from data
❌ Components depend on store structure
✅ Depend on abstractions, selectors
❌ Lift all the state to redux
✅ Use reducer pattern locally, lift state when needed
What were the mistakes you made as a beginner? What mistakes are you still doing today? I would love to hear from you, reach out to me in any of the networks mentioned below.
Appreciate your time reading! | https://alexandrempsantos.com/mistakes-react-redux/ | CC-MAIN-2019-43 | refinedweb | 1,799 | 72.05 |
Building Reactive Applications With Combine
What Is Reactive Programming
The Missing Manual
for Swift Development
The Guide I Wish I Had When I Started Out
Join 20,000+ Developers Learning About Swift DevelopmentDownload Your Free Copy
Combine is sometimes referred to as a functional reactive programming framework, but that isn't correct. It is more accurate to describe Combine as a reactive programming framework that uses functional programming techniques. Don't worry if this is confusing. It isn't important that you understand why Combine isn't a functional reactive programming framework. What is important is that you understand what reactive programming is and how Combine leverages Swift's support for functional programming. This episode answers the question What is reactive programming? In the next episode, we explore Swift's support for functional programming.
Coming up with a definition for reactive programming is surprisingly challenging and some definitions make your head spin. I would like to start with a simple definition and show you what reactive programming is with an example. What is reactive programming?
Reactive programming is working with asynchronous streams of data.
There is more to reactive programming, but this definition is sufficient for now. I would like to break this definition up into two components, (1) asynchronous and (2) streams of data. These concepts are at the heart of reactive programming and Apple's Combine framework. It is essential that you understand these concepts before you start working with the Combine framework.
What Is Asynchronous Programming?
Many developers still get confused when they come across the word asynchronous. What does it mean? What is asynchronous programming? To understand what asynchronous programming is, we first need to understand its counterpart, synchronous programming. Let's use a playground to illustrate the difference.
Synchronous Programming
We add an import statement for the Foundation framework and define a URL for a remote resource. We use the URL to create a
Data object and print the number of bytes in the
Data object. The string FINISHED is printed to the console at the end of the playground.
import Foundation let url = URL(string: "")! let data = try! Data(contentsOf: url) print(data.count) print("FINISHED")
Run the contents of the playground and inspect the output in the console. The number of bytes in the
Data object is printed before the string FINISHED is printed. This isn't surprising since the statements of the playground are executed synchronously. The playground executes one statement at a time. The next statement is executed when the previous statement has finished executing. That is what makes synchronous programming easy to understand.
597628 FINISHED
Asynchronous Programming
Let's now take a look at asynchronous programming. We no longer create the
Data object synchronously. We use Grand Central Dispatch, Apple's concurrency library, to create the
Data object asynchronously.
import Foundation let url = URL(string: "")! DispatchQueue.global().async { let data = try! Data(contentsOf: url) print(data.count) } print("FINISHED")
Run the contents of the playground. The output in the console illustrates the difference between synchronous and asynchronous programming. The print statement printing the string FINISHED precedes the print statement printing the number of bytes in the
Data object.
FINISHED 597628
The statements of the playground are still executed synchronously with one exception. The closure that is passed to the
async(_:) method of the global dispatch queue is executed asynchronously. This simply means that the closure is executed independently of the main playground flow.
Grand Central Dispatch submits the closure to a global dispatch queue and continues with the next statement in the playground. It doesn't wait for the statements in the closure to finish executing. At some point, the closure is executed and the print statement printing the number of bytes in the
Data object is executed. That is why the print statement printing the string FINISHED comes first.
What Are Streams of Data?
To understand what streams of data are, I have created a simple application. The application shows a table view and a button in the top right. Every time the user taps the button, the view controller adds a row to the table view. Each row shows the date and time the user tapped the button.
How does this example relate to reactive programming? Let's revisit the definition of reactive programming. Reactive programming is working with asynchronous streams of data.
Streams of data are the fundamental building blocks of a reactive application. A stream of data is nothing more than a sequence of events ordered in time. The button in the top right illustrates this. It generates a stream of tap events. Every time the user taps the button, an event, a tap in this example, is emitted. The view controller listens for these events and it responds by adding a row to its table view every time an event is emitted.
Let me explain this with a diagram. The horizontal line represents time. Each circle on the line represents an event, the user tapping the button in this example. The view controller listens for these events and responds every time an event is emitted. This representation is also known as a marble diagram. Marble diagrams are very useful to learn about reactive programming concepts. We use them throughout this series.
What Is the Observer Pattern?
It is time for a bit of theory. The pattern that drives reactive programming is the observer pattern. You may be new to reactive programming, but the observer pattern should be familiar. Chances are that you have used the observer pattern in some, if not most, of your projects. Notifications are an example of the observer pattern and even delegation can be considered an example of the observer pattern.
The concept is easy to understand. It defines subjects and observers. The observer is interested in changes of the subject. Every time the subject changes, the observers of the subject are notified. That is the observer pattern in a nutshell.
Let's use notifications to illustrate the observer pattern. In the
viewDidLoad() method of the
ViewController class we invoke the
addObserver(forName:object:queue:using:) method on the application's default notification center. The
addObserver(forName:object:queue:using:) method accepts four arguments. We are interested in the second argument and the fourth argument. The second argument is the subject the view controller observes, the
UIApplication singleton in this example. The fourth argument is the observer, a closure that is executed every time a notification is posted by the subject.
// MARK: - View Life Cycle override func viewDidLoad() { super.viewDidLoad() // Observe Did Enter Background Notification NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: UIApplication.shared, queue: nil) { (_) in print("did enter background") } }
The advantage of the observer pattern is that subjects and observers are loosely coupled. The subject and the observer don't have a strictly defined relationship. It is the responsibility of the observer to subscribe to changes of the subject. The observer unsubscribes when it is no longer interested in changes of the subject.
Combine Terminology
Before we move on, we need to become familiar with the terminology of reactive programming. The observer pattern defines subjects and observers. The Combine framework refers to publishers instead of subjects and subscribers instead of observers. The button generates a stream of data to which the view controller listens. The Combine framework uses the term subscribing instead of listening.
Publishers, subscribers, and subscribing are three keywords you need to remember. We use them throughout this series.
Asynchronous Streams of Data
Let's revisit the definition of reactive programming we started with. Reactive programming is working with asynchronous streams of data. With what we learned in this episode, we can rephrase this definition. Reactive programming is working with publishers that asynchronously publish events to which a subscriber can subscribe.
Why Should You Adopt Reactive Programming?
If you are new to reactive programming, then you may still be wondering why you should adopt reactive programming. You are interested, but you are not sure if reactive programming is for you. Truth be told, I wasn't convinced the first time I came across reactive programming.
One of the most compelling benefits is that reactive programming makes asynchronous programming less complex. Why that is becomes clear later in this series.
Reactive code is often more concise and easier to understand because you describe what you would like to accomplish. This is also known as declarative programming.
Another benefit I enjoy immensely is that reactive applications manage less state. A reactive application observes and reacts to streams of data. It doesn't hold onto state. This results in code that is easier to understand and maintain.
What's Next?
It takes time to let these concepts sink in. Don't worry if you are still a bit confused. At this point, it is important that you understand what reactive programming is and what it is about. Reactive programming can be challenging to learn if you skip the basics.
The Missing Manual
for Swift Development
The Guide I Wish I Had When I Started Out
Join 20,000+ Developers Learning About Swift DevelopmentDownload Your Free Copy | https://cocoacasts.com/building-reactive-applications-with-combine-what-is-reactive-programming | CC-MAIN-2020-45 | refinedweb | 1,518 | 50.84 |
This article is the second of a two-part series on how to use Java ME technology and Bluetooth to access location data from wireless GPS devices.
As you may recall from Part 1 of this series, it is very easy to access the raw GPS data from a Bluetooth-enabled GPS device. The listing below shows what the serial output from a typical GPS device would look like:
$GPGSV,3,3,10,31,76,012,31,32,60,307,38,,,,,,,,*72
$GPGSA,A,3,32,31,16,11,23,,,,,,,,4.5,3.1,3.3*34
$GPRMC,122314.000,A,3659.249,N,09434.910,W,0.0,0.0,220908,0.0,E*78
$GPGGA,122314.000,3659.24902,N,09434.91042,W,1,05,3.1,261.51,M,-29.1,M,,*58
$GPGSV,3,1,10,01,62,343,00,11,14,260,34,14,35,079,27,16,29,167,28*73
$GPGSV,3,2,10,20,44,309,00,22,13,145,00,23,08,290,31,30,23,049,33*7C
$GPGSV,3,3,10,31,76,012,31,32,60,307,38,,,,,,,,*72
$PSTMECH,32,7,31,7,00,0,00,0,14,4,30,4,16,7,00,0,11,7,23,7,00,0,00,0*50
$GPRMC,122315.000,A,3659.249,N,09434.910,W,0.0,0.0,220908,0.0,E*79
$GPGGA,122315.000,3659.24902,N,09434.91048,W,1,05,3.1,261.61,M,-29.1,M,,*50
As you also may recall from Part 1, a GPS device encodes its data according to the NMEA specification. The purpose of this article is to learn how to accomplish the following tasks:
The serial data that is produced by GPS devices is formatted according to the NMEA specification, and each line of data is called an NMEA sentence. There are at least 5 NMEA sentences that provide the coordinates of your current position. The good news is that I only need to create a parser for one of them. I'll choose the $GPGGA header for the purposes of this article. If you want to know more about all the various standard and non-standard NMEA sentences, refer to the NMEA FAQ website. Following is an example of what an ordinary $GPGGA sentence would look like:
$GPGGA,123519,4807.038,N,01131.324,E,1,08,0.9,545.4,M,46.9,M, ,;
After further inspection, you can now see that the individual parts of an NMEA sentence are separated by commas. The following facts can be obtained from the preceding NMEA sentence:
Now, you'd think that it would be really easy for Java ME devices to parse the NMEA sentence using the StringTokenizer class, right? Unfortunately, it's not that easy since the StringTokenizer class only exists in Java SE implementations. However, in the example code I've included a simple NMEA parser and String tokenization classes. The following is a code snippet from Parser.java that properly converts coordinate DMS format (degrees, minutes, seconds) to decimal degree values.
StringTokenizer
Parser.java
if (token.endsWith("$GPGGA")) {
type = TYPE_GPGGA;
// Time of fix
tokenizer.next();
// Latitude
String raw_lat = tokenizer.next();
String lat_deg = raw_lat.substring(0, 2);
String lat_min1 = raw_lat.substring(2, 4);
String lat_min2 = raw_lat.substring(5);
String lat_min3 = "0." + lat_min1 + lat_min2;
float lat_dec = Float.parseFloat(lat_min3)/.6f;
float lat_val = Float.parseFloat(lat_deg) + lat_dec;
// Latitude direction
String lat_direction = tokenizer.next();
if(lat_direction.equals("N")){
// do nothing
} else {
lat_val = lat_val * -1;
}
record.latitude = lat_val + "";
// Longitude
String raw_lon = tokenizer.next();
String lon_deg = raw_lon.substring(0, 3);
String lon_min1 = raw_lon.substring(3, 5);
String lon_min2 = raw_lon.substring(6);
String lon_min3 = "0." + lon_min1 + lon_min2;
float lon_dec = Float.parseFloat(lon_min3)/.6f;
float lon_val = Float.parseFloat(lon_deg) + lon_dec;
// Longitude direction
String lon_direction = tokenizer.next();
if(lon_direction.equals("E")){
// do nothing
} else {
lon_val = lon_val * -1;
}
record.longitude = lon_val + "";
record.quality = tokenizer.next();
record.satelliteCount = tokenizer.next();
record.dataFound = true;
// Ignore rest
return 200;
}
Now that we've properly parsed the NMEA sentence, let's explore how to get a map using an external mapping service.
In this day and age, you have several options to choose from when you want to make a simple HTTP request to get an image that represents a map of your current location (or any location for that matter). Several companies -- Mapquest, Google, and ERSi -- provide these services, but I decided to use the Yahoo! Maps service for the following reasons:
In order to use the Yahoo! Maps API, all you need to do is sign up for a free developer account id key. So, if I wanted a map with the following parameters:
then the URL in the HTTP request would look like this:
YOUR_YAHOO_ID_KEY&latitude=46.987484&longitude=-
84.58184&image_width=400&image_height=400&zoom=7
Pretty simple, huh? The result of this request is not the image itself, but an XML document that has a link to the image. The listing below shows the XML result of my HTTP request for a map image:
<?xml version="1.0"?>
<Result xmlns:
8qRQ2kzC_f8vO7.FvQhW3hSbWbF_jO3H4.J2Gb7Qhc2vqoCTL0DWbaCfT751_Zt9Ysqtg0dKo2mv95
EIc4bbgdYrmebNqFcwfKb8YhOFe38Ia3Q--&mvt=m?cltype=onnetwork&.intl=us
</Result>
Ok, we're almost at the finish line. All we need to do now is to parse the result that we got from the map service and extract the URL to the map image. Fortunately, this is also a trivial task thanks to the JSR-172 XML Parsing API.
You should also be glad to know that the JSR-172 API has been out for several years and is available on a wide variety of mobile handsets. Of course, the JSR-172 API is a part of the Java ME MSA standard, so if your handset supports MSA then you're obviously good to go.
In the following listing, you can see that my XML parsing class only needed to extend the DefaultHandler class in the JSR-172 API. Since we're only interested in the contents of a single tag, namely the <Result> tag, then the code necessary to retrieve the URL for the map image is fairly simple.
DefaultHandler
<Result>
public class SimpleHandler extends DefaultHandler {
public String image_url = null;
public SimpleHandler() {}
public void startDocument() throws SAXException {}
public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
if(qName.equals("Result")) {
// do nothing
} else {
throw new SAXException("<Result> tag not found");
}
}
public void characters(char[] ch, int start, int length) throws SAXException {
image_url = new String(ch, start, length).trim();
}
public void endElement(String uri, String localName, String qName, Attributes attributes) throws SAXException{}
public void endDocument() throws SAXException{}
public String getImageURL(){
return image_url;
}
}
Now the code in Listing 4, specifically the getImageURL() method, will return the URL that points to the PNG image of the map of our current location. The only remaining step is to make another HTTP request to retrieve the image and display it on the mobile device. Figure 1 depicts a mobile device showing our current location.
getImageURL()
The example code presented in this article shows how easy it is to use the JSR-82 (Bluetooth) API to access the data from a Bluetooth-enabled GPS receiver, parse the data streams, and obtain the coordinates of current location. Additionally, you've seen the effort involved in formulating the HTTP request to access an external mapping service, employ the use of the JSR-172 (XML Parsing and Web Services) API to parse the result, and make the final request to obtain the map image. Both JSR-82 and JSR-172 are included in the Java ME MSA standard.
Hopefully, the example code presented in this article will inspire you to create some really exciting location-based applications! | http://developers.sun.com/mobility/apis/articles/bluetooth_gps/part2/ | crawl-002 | refinedweb | 1,283 | 55.74 |
In this program we will see how to read an integer number entered by user. Scanner class is in java.util package. It is used for capturing the input of the primitive types like int, double etc. and strings.
Example: Program to read the number entered by user
We have imported the package
java.util.Scanner to use the Scanner. In order to read the input provided by user, we first create the object of Scanner by passing
System.in as parameter. Then we are using nextInt() method of Scanner class to read the integer. If you are new to Java and not familiar with the basics of java program then read the following topics of Core Java:
→ Writing your First Java Program
→ How JVM works
import java.util.Scanner; public class Demo { public static void main(String[] args) { /* This reads the input provided by user * using keyboard */ Scanner scan = new Scanner(System.in); System.out.print("Enter any number: "); // This method reads the number provided using keyboard int num = scan.nextInt(); // Closing Scanner after the use scan.close(); // Displaying the number System.out.println("The number entered by user: "+num); } }
Output:
Enter any number: 101 The number entered by user: 101 | https://beginnersbook.com/2017/09/java-program-to-read-integer-value-from-the-standard-input/ | CC-MAIN-2018-05 | refinedweb | 202 | 58.08 |
<input> elements with type="file" let the user choose one or more files from their device storage. Once chosen, the files can be uploaded to a server using form submission, or manipulated using JavaScript code and the File API.
Value
A file input's value attribute contains a DOMString that represents the path to the selected file(s). If the user selected multiple files, the value represents the first file in the list of files they selected. The other files can be identified using the input's HTMLInputElement.files property.
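As a rough sketch of the difference between the two (the element id and the logging are assumptions for illustration, not part of the attribute definitions):

const input = document.querySelector('#file-picker'); // assumed id of a file input

input.addEventListener('change', () => {
  // value is a string; for privacy, most browsers report it as C:\fakepath\<filename>
  console.log(input.value);
  // files is a FileList holding every selected File object
  console.log(input.files.length);
});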
Additional attributes
accept
A string that defines the file types the file input should accept. This string is a comma-separated list of unique file type specifiers. Because a given file type may be identified in more than one manner, it's useful to provide a thorough set of type specifiers when you need files of a given format.
For instance, there are a number of ways Microsoft Word files can be identified, so a site that accepts Word files might use an <input> like this:
<input type="file" id="docpicker" accept=".doc,.docx,application/msword,application/vnd.openxmlformats-officedocument.wordprocessingml.document">
capture
A string that specifies which camera to use for capture of image or video data, if the accept attribute indicates that the input should be of one of those types. A value of user indicates that the user-facing camera and/or microphone should be used. A value of environment specifies that the outward-facing camera and/or microphone should be used. If this attribute is missing, the user agent is free to decide on its own what to do. If the requested facing mode isn't available, the user agent may fall back to its preferred default mode.
capturewas previously a Boolean attribute which, if present, requested that the device's media capture device(s) such as camera or microphone be used instead of requesting a file input.
files
A
FileList object that lists every selected file. This list has no more than one member unless the
multiple attribute is specified.
multiple
When the
multiple Boolean attribute is specified, the file input allows the user to select more than one file.
Non-standard attributes
In addition to the attributes listed above, the following non-standard attributes are available on some browsers. You should try to avoid using them when possible, since doing so will limit the ability of your code to function in browsers that don't implement them.
webkitdirectory
The Boolean
webkitdirectory attribute, if present, indicates that only directories should be available to be selected by the user in the file picker interface. See
HTMLInputElement.webkitdirectory for additional details and examples.
Note: Though originally implemented only for WebKit-based browsers,
webkitdirectory is also usable in Microsoft Edge as well as Firefox 50 and later. However, even though it has relatively broad support, it is still not standard and should not be used unless you have no alternative.
Unique file type specifiers
A unique file type specifier is a string that describes a type of file that may be selected by the user in an
<input> element of type
file. Each unique file type specifier may take one of the following forms:
- A valid case-insensitive filename extension, starting with a period (".") character. For example:
.jpg,
.doc.
- A valid MIME type string, with no extensions.
- The string
audio/*meaning "any audio file".
- The string
video/*meaning "any video file".
- The string
image/*meaning "any image file".
The
accept attribute takes as its value a string containing one or more of these unique file type specifiers, separated by commas. For example, a file picker that needs content that can be presented as an image, including both standard image formats and PDF files, might look like this:
<input type="file" accept="image/*,.pdf">
Using file inputs
A basic example
<form method="post" enctype="multipart/form-data"> <div> <label for="file">Choose file to upload</label> <input type="file" id="file" name="file" multiple> </div> <div> <button>Submit</button> </div> </form>
div { margin-bottom: 10px; }
This produces the following output:
Note: You can find this example on GitHub too — see the source code, and also see it running live.
Regardless of the user's device or operating system, the file input provides a button that opens up a file picker dialog that allows the user to choose a file.
Including the
multiple attribute, as shown above, specifies that multiple files can be chosen at once. The user can choose multiple files from the file picker in any way that their chosen platform allows (e.g. by holding down Shift or Control, and then clicking). If you only want the user to choose a single file per
<input>, omit the
multiple attribute.
When the form is submitted, each selected file's name will be added to URL parameters in the following fashion:
?file=file1.txt&file=file2.txt
Getting information on selected files
The selected files' are returned by the element's
HTMLInputElement.files property, which is a
FileList object containing a list of
File objects. The
FileList behaves like an array, so you can check its
length property to get the number of selected files.
Each
File object contains the following information:
name
-
filepicker in which the
webkitdirectoryattribute is set). This is non-standard and should be used with caution.
Note: You can set as well as get the value of
HTMLInputElement.files in all modern browsers; this was most recently added to Firefox, in version 57 (see bug 1384030).
Limiting accepted file types
Often you won't want the user to be able to pick any arbitrary type of file; instead, you often want them to select files of a specific type or types. For example, if your file input lets users upload a profile picture, you probably want them to select web-compatible image formats, such as JPEG or PNG.
Acceptable file types can be specified with the
accept attribute, which takes a comma-separated list of allowed file extensions or MIME types. Some examples:
accept="image/png"or
accept=".png"— Accepts PNG files.
accept="image/png, image/jpeg"or
accept=".png, .jpg, .jpeg"— Accept PNG or JPEG files.
accept="image/*"— Accept any file with an
image/*MIME type. (Many mobile devices also let the user take a picture with the camera when this is used.)
accept=".doc,.docx,.xml,application/msword,application/vnd.openxmlformats-officedocument.wordprocessingml.document"— accept anything that smells like an MS Word document.
Let's look at a more complete example:
<form method="post" enctype="multipart/form-data"> <div> <label for="profile_pic">Choose file to upload</label> <input type="file" id="profile_pic" name="profile_pic" accept=".jpg, .jpeg, .png"> </div> <div> <button>Submit</button> </div> </form>
div { margin-bottom: 10px; }
This produces a similar-looking output to the previous example:
Note: You can find this example on GitHub too — see the source code, and also see it running live.
It may look similar, but if you try selecting a file with this input, you'll see that the file picker only lets you select the file types specified in the
accept value (the exact nature differs across browsers and operating systems).
The
accept attribute doesn't validate the types of the selected files; it simply provides hints for browsers to guide users towards selecting the correct file types. It is still possible (in most cases) for users to toggle an option in the file chooser that makes it possible to override this and select any file they wish, and then choose incorrect file types.
Because of this, you should make sure that the
accept attribute is backed up by appropriate server-side validation.
Notes:
const input = document.querySelector("input[type=file]"); input.
Examples
In this example, we'll present a slightly more advanced file chooser that takes advantage of the file information available in the
HTMLInputElement.files property, as well as showing off a few clever tricks.
Note: You can see the complete source code for this example on GitHub — file-example.html (see it live also). We won't explain the CSS; the JavaScript is the main focus.
First of all, let's look at the HTML:
<form method="post" enctype="multipart/form-data"> <div> <label for="image_uploads">Choose images to upload (PNG, JPG)</label> <input type="file" id="image_uploads" name="image_uploads" accept=".jpg, .jpeg, .png" multiple> </div> <div class="preview"> <p>No files currently selected for upload</p> </div> <div> <button>Submit</button> </div> </form>
html { font-family: sans-serif; } form { width: 600; }
This is similar to what we've seen before — nothing special to comment on.
Next, let's walk through the JavaScript.
In the first lines of script, we get references to the form input itself, and the
<div> element with the class of
.preview. Next, we hide the
<input> element — we do this because file inputs tend to be ugly, difficult to style, and inconsistent in their design across browsers. You can activate the
input element by clicking its
<label>, so it is better to visually hide the
input and style the label like a button, so the user will know to interact with it if they want to upload files.
var input = document.querySelector('input'); var preview = document.querySelector('.preview'); input.style.opacity = 0;
Note:
opacity is used to hide the file input instead of
visibility: hidden or
display: none, because assistive technology interprets the latter two styles to mean the file input isn't interactive.
Next, we add an event listener to the input to listen for changes to its selected value changes (in this case, when files are selected). The event listener invokes our custom
updateImageDisplay() function.
input.addEventListener('change', updateImageDisplay);
Whenever the
updateImageDisplay() function is invoked, we:
-
curFiles[i].nameand
curFiles[i].size). The custom
returnFileSize()function returns a nicely-formatted version of the size in bytes/KB/MB (by default the browser reports the size in absolute bytes).
- Generate a thumbnail preview of the image by calling
window.URL.createObjectURL(curFiles[i])and reducing the image size in the CSS, then insert that image into the list item too.
- If the file type is invalid, we display a message inside a list item telling the user that they need to select a different file type.; }
The
returnFileSize() function takes a number (of bytes, taken from the current file's
size property), and turns it into a nicely formatted size in bytes/KB/MB.
function returnFileSize(number) { if(number < 1024) { return number + 'bytes'; } else if(number >= 1024 && number < 1048576) { return (number/1024).toFixed(1) + 'KB'; } else if(number >= 1048576) { return (number/1048576).toFixed(1) + 'MB'; } }
The example looks like this; have a play:
Specifications
Browser compatibility
Legend
- Full support
- Full support
- Compatibility unknown
- Compatibility unknown
- See implementation notes.
- See implementation notes.
See also
- Using files from web applications — contains a number of other useful examples related to
<input type="file">and the File API. | https://developer.cdn.mozilla.net/es/docs/Web/HTML/Element/input/file | CC-MAIN-2019-43 | refinedweb | 1,838 | 53.1 |
Sam Ruby wrote:
>
> I can honestly say that I know how to build it. ;-)
>
> Beyond that, my knowledge is not very deep. And I suspect I'm more
> familiar with it than most of the rest here. :-(
>
> = = = =
>
> I suspect that the answer to the question as to whether Axis would be
> willing to base it's architecture on Avalon hinges on two things: (1) that
> there be someone willing to do the work, and (2) such integration would in
> no way inhibit integration with products like WebSphere and JRun.
As to (1), I am willing to do the work. As the release manager for Avalon
Framework, I am intimately knowledgeable of how it works ;-)
As to (2), Cocoon can be installed in those environments. The biggest
problem Cocoon has is fighting past different classloader issues so it can
use a JAXP 1.1 compliant parser with namespaces enabled. Each of the main
Servlet Container (J2EE Servers) has different idiosyncracies in classloader
management. Avalon is no hinderence here.
> Both WebSphere and JRun each have a distinct "formalized Component
> framework that is well documented and scalable." And their own notions of
> "configuration, logging (with very fine grained control), component
> pooling, active resource monitoring, JDBC connection pooling, and more."
Yes, and they are proprietary and not portable from one to the other ;P.
I was thinking you might be interested in an Apache solution that does all
that instead of being tied to only one vendor's solution.
> I see no relationship between JDBC connection pooling and the SOAP
> specification. Clearly web services themselves could take advantage of
> this, but that's a matter between them and the container which embeds Axis.
> At the moment, there is a servlet and a "simple" standalone program. A
> third, Avalon friendly container would be very welcome.
Regarding JDBC: its understood--but then again you may want to store user/permission
info in a database for authentication purposes....
I would be more than happy to provide the Avalon container/component wrapper
for you. | http://mail-archives.apache.org/mod_mbox/axis-java-dev/200109.mbox/%[email protected]%3E | CC-MAIN-2018-26 | refinedweb | 336 | 63.29 |
Details
- Type: New Feature
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.0.0-alpha1
- Fix Version/s: None
- Component/s: None
- Labels: None
- Target Version/s:
Description.
Issue Links
- blocks: HBASE-11241 Support for multiple active HMaster replicas (In Progress)
- is depended upon by:
- links to:
Activity
A couple of suggestions after reviewing the patch:
- In Agreement it would be better to rename type T to L, and explain in the JavaDoc that this is the type of a learner.
- In the test, it would be good to introduce some SampleLearner, which would actually process the SampleAgreement. This would exemplify how agreements are used.
- MiniZooKeeperCluster should belong to the ZK project in the long run. For now, could you do some code reduction? A few methods are not used, some are copies of ClientBase methods and can be called directly, and the configuration member is not used.
Attaching new patch based on Konstantin's suggestions.
- Added private static class SampleLearner in unit test.
- SampleLearner executes SampleProposals and LOG.info's the return value.
- Added NOTICE and LICENSE files.
- Reduced the code of MiniZooKeeperCluster.
- NOTICE and LICENSE files are not needed as they are already contained in hadoop-common.
- MiniZooKeeperCluster should use ClientBase methods (like setupTestEnv() and waitForServer*()) instead of copying them.
- Your SampleLearner is a good example of how it should not be used. SampleLearner represents the state of the server, like namespace or a region. Learner should be the first generic parameter of the Agreement, so that Agreement.execute() could call the update methods of the Learner. Not the other way around as in the example provided.
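For illustration, the shape being suggested here - learner first, with execute() driving it - could look roughly like the sketch below. The class names follow the sample classes in the test, but the exact signatures in the patch may differ.

    import java.io.IOException;
    import java.io.Serializable;

    /** Sketch only: an agreement parametrized by learner type L and return type R. */
    interface Agreement<L, R> extends Serializable {
      /** Called back by the CoordinationEngine once the agreement is reached. */
      R execute(L learner) throws IOException;
    }

    /** The learner holds the replicated application state (e.g. a namespace). */
    class SampleLearner {
      private long value;
      public void update(long delta) { value += delta; }
      public long get() { return value; }
    }

    /** A proposal that, once agreed, updates the learner - not the other way around. */
    class SampleProposal implements Agreement<SampleLearner, Long> {
      private static final long serialVersionUID = 1L;
      private final long delta;

      SampleProposal(long delta) { this.delta = delta; }

      @Override
      public Long execute(SampleLearner learner) {
        learner.update(delta);   // agreement execution drives the learner
        return learner.get();    // value to hand back to the proposing client
      }
    }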
New patch. Not clear what I was doing earlier. Indeed the correct usage here is to have agreement executions call the learner, not the other way around.
- Factored out SampleLearner.
- SampleProposal now does <SampleLearner,Void>.
- Added setCurrentUser to SampleProposal.
- Made SampleLearner use UserGroupInformation.doAs when executing agreement.
- Removed LICENSE and NOTICE files. Sorry about that.
- Unit tests now correctly wait for all agreements to arrive (they were not prior).
- Made use of ClientBase in MiniZooKeeperCluster.
Hey Konstantin and Plamen, have y'all given any thought to contributing the Coordination Engine somewhere other than the Hadoop project? Sounds like it's a pretty general-purpose system that other projects that have nothing to do with Hadoop might want to use. Seems to me like it might reasonably be a separate top-level Apache project, which Hadoop and HBase (and perhaps others) could then depend on. It might also make sense for it to be a sub-project of the ZooKeeper project, much like BookKeeper is.
With something this new that you presumably want to iterate on quickly, seems like a shame to have to wait around for a Hadoop release to be able to pick up an updated Coordination Engine.
Good point Aaron T. Myers! While other projects might take advantage of the coordinated-intent approach, HDFS and HBase are the only two right now that are focusing on real HA with low MTTR. I am just a bit hesitant about over-generalizing things.
Tsz Wo Nicholas Sze, thanks for the review! It was nice to finally see a face at the Summit as well.
Aaron T. Myers, thanks for the comments! I think I am outside of that discussion, most likely Konstantin Boudnik or Konstantin Shvachko can comment better on where to take the project.
I posted a new patch around the same time your review came in; there were mistakes in the way agreement executions work.
- ProtoBuf is certainly a nice choice for serialization. However, we shouldn't need to bind ourselves to any one serialization format. This is why we use Serializable. It is certainly possible to have the writeObject call write out a ProtoBuf of the proposal itself, for example, and read the values back using ProtoBuf as well. This is feasible with the current interfaces (see the sketch after this list).
- Good point on version compatibility. AFAIK, version compatibility would take place once the quorum is established as prior to that there is no communication between the engines. So the coordination engine, as part of bootstrap, should perform a version check against its quorum peers. Perhaps this means extending the API, or making it part of a larger interface? (VersionedCoordinationEngine)? Konstantin Shvachko might be able to comment better.
- Please see my new patch. The idea is indeed to make the agreement execute on some callBack object, SampleLearner in this case. The new patch should show the test making use of it.
- Yes we can probably do some refactoring here. I'll work on a new patch.
- Yes we can add details for ZkCoordinationEngine. Unsure of any clear advantages and disadvantages. The only thing that comes to my mind right away is that it may be possible to build Paxos directly into the CoordinationEngine implementation, thus co-locating the coordination service with the server / application itself, rather than having to make RPC calls and wait for responses, like with ZooKeeper(s). I don't think the intent of this work is really to compare any one coordination mechanism with another but so much as provide a common interface for which one can implement whichever they prefer.
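As a rough illustration of the serialization point in the first bullet above, a proposal can stay java.io.Serializable while using protobuf inside writeObject()/readObject(). MyProposalProto below stands in for a hypothetical generated protobuf message class; it is not part of the patch.

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    /**
     * Sketch: the proposal stays java.io.Serializable for the CE interface,
     * but encodes its payload with protobuf inside writeObject()/readObject().
     * MyProposalProto is a hypothetical generated protobuf message class.
     */
    class ProtoBackedProposal implements Serializable {
      private static final long serialVersionUID = 1L;

      private transient MyProposalProto payload; // re-created from protobuf bytes on read

      ProtoBackedProposal(MyProposalProto payload) { this.payload = payload; }

      private void writeObject(ObjectOutputStream out) throws IOException {
        byte[] bytes = payload.toByteArray(); // protobuf wire format
        out.writeInt(bytes.length);
        out.write(bytes);
      }

      private void readObject(ObjectInputStream in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        payload = MyProposalProto.parseFrom(bytes);
      }
    }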
I think that the latest patch is a better example of how the coordination engine should work: it accepts a proposal, then agrees on its order, then invokes Agreement.execute() to trigger the learner state update. This is an initial patch and is a work in progress. We should also introduce a generic Proposer, which submits a proposal and then waits for the agreement to be executed in order to reply to the client.
Nicholas, good points:
- We chose Serializable interface to keep serialization as generic as possible, as Plamen mentioned. We routinely define readObject(), writeObject() for our proposals using protobuf. Actually in many cases we avoid extra serialization by passing already serialized requests directly from RPC to CoordinationEngine.
- Not sure if it is exactly what you mean by the version of coordination engine. I see two versions here:
- CE software version. This should be tracked in the implementation of CE. We assume there could be different implementations of CE. If one implementation changes it should not affect the others.
- The version of a set of Proposals / Agreements. This should be tracked on the application level. The set of proposals / agreements reflects a particular application logic and is orthogonal to the engine implementation.
- Rolling upgrade is divided into two parts: rolling upgrade of the application (e.g. HDFS) and rolling upgrade of the engine itself. If both support it then everything is good. If CE does not support RU, but the application does, then we should still be able to upgrade the application in the rolling manner but without upgrading CE. CE in this case will continue assigning GSNs and producing agreements.
- Proposal and Agreement are made as generic as possible. Proposal is formally empty, but it has everything it needs. One needs to define equals(), hashCode(), and serialization. We also routinely implement toString() so that CE could print proposals in the logs.
Agreement has one method execute(), so that CE could make a call back to update the application state. It also has two generic parameters: first defines the application type, the second - the return type of the execute() method, which is intended to be returned to the client.
ConsensusProposal from the patch combines Proposal and Agreement. This is probably the most typical use case. But there could be Proposals that don't assume Agreements, like control messages to the CE; Agreements that are different from their corresponding Proposals; or even agreements without proposals, like control commands from the CE to the application (a minimal sketch of the combined case appears below).
- ZKCoordinationEngine should not be abstract. Made the same comment to Plamen.
- Documentation for ZKCoordinationEngine makes sense.
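To make the combined Proposal-plus-Agreement case concrete, here is a minimal sketch. The Namespace type and the rename operation are invented for the example; they are not from the patch.

    import java.io.Serializable;
    import java.util.Objects;

    /** Marker for anything that can be submitted to the engine. */
    interface Proposal extends Serializable {}

    interface Agreement<L, R> extends Serializable {
      R execute(L learner);
    }

    /** Invented application state for the example. */
    class Namespace {
      boolean rename(String src, String dst) { return true; }
    }

    /** The typical case: the proposal is also the agreement that updates the learner. */
    class RenameProposal implements Proposal, Agreement<Namespace, Boolean> {
      private static final long serialVersionUID = 1L;
      private final String src;
      private final String dst;

      RenameProposal(String src, String dst) { this.src = src; this.dst = dst; }

      @Override public Boolean execute(Namespace ns) { return ns.rename(src, dst); }

      // equals(), hashCode() and toString() so the engine can compare and log proposals.
      @Override public boolean equals(Object o) {
        return o instanceof RenameProposal
            && src.equals(((RenameProposal) o).src)
            && dst.equals(((RenameProposal) o).dst);
      }
      @Override public int hashCode() { return Objects.hash(src, dst); }
      @Override public String toString() { return "rename " + src + " -> " + dst; }
    }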
Aaron, a separate project is a great idea, although I am not sure right now how it will evolve. It may spin off from Hadoop, like Hadoop did from Nutch.
Minor comments.
- It looks like checkQuorum is kind of a no-op for submitProposal in the ZK-based implementation, since zooKeeper.create would fail if there is no quorum anyway?
- In the ZK-based Coordination Engine implementation, how are ZNodes cleaned up? Looking at the patch, each proposal creates a PERSISTENT_SEQUENTIAL znode, but there is no mention of cleanup.
Hi Lohit, thanks for your comments!
- checkQuorum is an optimization some coordination engines may choose to implement in order to fail-fast to client requests. In the NameNode case, if quorum loss was suspected, that NameNode could start issuing StandbyExceptions.
- You are correct that the ZKCoordinationEngine does not implement ZNode clean-up currently. That is because it was made as a proof of concept for the CoordinationEngine API. Nonetheless, proper clean-up can be implemented. All one has to do is delete the ZNodes that everyone else has already learned about (a minimal sketch of this rule follows the example below).
- Suppose you have Node A, B, and C, and Agreements 1, 2, 3, 4, and 5.
- Node A and B learn Agreement 1 first. Node C is a lagging node. A & B contain 1. C contains nothing.
- Node A and B continue onwards, learning up to Agreement 4. A & B contain 1, 2, 3, and 4 now. C contains nothing.
- Node C finally learns Agreement 1. A & B contain 1, 2, 3, and 4 now. C contains 1.
- We can now discard Agreement 1 from persistence because we know that all the Nodes, A, B, and C, have safely learned about and applied Agreement 1.
- We can apply this process for all other Agreements.
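A minimal sketch of that clean-up rule - delete agreement znodes whose sequence number is at or below the lowest GSN any member has learned. The store abstraction and its method names are invented for the example.

    import java.util.List;

    /** Sketch: prune agreements that every node has already learned. */
    class AgreementJanitor {

      interface CoordinationStore {
        List<String> listMembers() throws Exception;
        long lastLearnedGsn(String nodeId) throws Exception;
        List<Long> listAgreementGsns() throws Exception;
        void deleteAgreement(long gsn) throws Exception;
      }

      private final CoordinationStore store; // hypothetical wrapper around the ZK paths

      AgreementJanitor(CoordinationStore store) { this.store = store; }

      void pruneLearnedAgreements() throws Exception {
        // 1. Find the slowest member: the minimum GSN across all registered nodes.
        long minGsn = Long.MAX_VALUE;
        for (String nodeId : store.listMembers()) {
          minGsn = Math.min(minGsn, store.lastLearnedGsn(nodeId));
        }
        // 2. Anything at or below that GSN has been applied everywhere and can go.
        for (long gsn : store.listAgreementGsns()) {
          if (gsn <= minGsn) {
            store.deleteAgreement(gsn);
          }
        }
      }
    }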
Aaron, a separate project is a great idea.
Cool, glad we agree. Shall we resolve this JIRA then and take this proposal to the Apache Incubator? A standalone separate project seems like a much more reasonable place to put this work.
I was only suggesting ZK since that project's focus is about reliable distributed coordination, which is not what Hadoop aims to do at all. If for some reason you didn't want to try to make this a TLP (which, again, seems more reasonable to me) then trying to contribute it to that project makes a lot more sense to me.
> reliable distributed coordination, which is not what Hadoop aims to do at all.
Not sure where this leaves QJM. I thought it satisfies all these requirements, including Hadoop's aims.
Anyways, we have it, and we also do distributed coordination between active and standby NNs, RMs.
Hadoop common is chosen for the coordination engine interface as the lowest common ancestor. It could be used for anything Hadoop from here: HDFS, HBase, Yarn.
Not sure where this leaves QJM. I thought it satisfies all these requirements, including Hadoop's aims.
The NameNode QuorumJournalManager and JNs are expressly for storing HDFS NN edit logs. Not for general purpose consensus, not for use by other projects like HBase, etc.
Hadoop common is chosen for the coordination engine interface as the lowest common ancestor.
But a separate project could just as well be a common ancestor, just like both Hadoop and HBase separately depend on ZooKeeper. There's no actual need for it to be in Hadoop Common if HBase is to use it.
It could be used for anything Hadoop from here: HDFS, HBase, Yarn.
But it seems like it could also be used for arbitrary, non-Hadoop things, correct? If so, why put it in Hadoop?
I personally don't think there's any good reason for this to start out as part of a larger project, and honestly think there are several downsides. For example, Hadoop's release cadence is too slow for a new project like this, there's not much expertise in Hadoop for the general problem of distributed consensus, possible desire for other non-Hadoop projects to want to use it, etc.
> there's not much expertise in Hadoop for the general problem of distributed consensus.
Totally get that, but I think the point still remains that there's little expertise for defining a common interface for coordination engines in general in this project, and no real reason that the Hadoop project should necessarily be the place where that interface is defined. The ZooKeeper project, a ZK sub-project, or an entirely new TLP makes more sense to me.
This sounds interesting. Thanks for the effort! If I understood the discussion correctly, the idea is to build quorum-based replication. For example, the events (I think these represent data) are submitted as proposals to a quorum of nodes. In ZooKeeper terms, the Leader proposes values to the Followers. The Leader then waits for acknowledgements from a quorum of Followers before considering a proposal committed. Also, the Leader queues COMMIT(zxid) events to all Followers so that all other nodes learn the events. This ensures that the events reach all nodes in the system. Adding one more point: in general, ZK provides strong ordering guarantees.
Some time back the ZooKeeper folks initiated discussions to decouple ZAB from ZooKeeper, so that users can make use of it, define their own models, and reliably replicate their data. There is a related JIRA, ZOOKEEPER-1931, that discusses a similar feature; it is currently in an early development stage. Please have a look at it. I hope this will help to define a common interface, and it is also an opportunity for us to learn more about the use cases.
Regards,
Rakesh
Plamen Jeliazkov, good work.
Just a couple of picky notes:
org/apache/hadoop/coordination/zk/ZkCoordinationEngine.java:117
A non-configured localNodeId can lead to uncontrolled creation of new ZK sessions.
Should the code check that before creating the ZooKeeper object?
The 'catch' block should close the ZooKeeper handle if it was opened.
org/apache/hadoop/coordination/zk/ZkCoordinationEngine.java:132
split() doesn't take chrooted ZK configurations into account;
it is much better to use org.apache.zookeeper.client.ConnectStringParser.
org/apache/hadoop/coordination/zk/ZkCoordinationEngine.java:301
currentGSN is a long, but it is treated as an integer - easy to create an overflow.
A more serious problem is that ZK sequential node counters are integers.
The code should guard against the sequence overflowing past zero and handle that using
more than one parent znode, zero-crossing detection, or other techniques
preventing integer sequence id overflow.
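To illustrate the overflow concern: ZooKeeper appends a zero-padded, signed 32-bit counter to sequential znode names, so a long-lived agreements parent can eventually wrap. A hypothetical guard might look like this; the prefix handling, safety margin and roll-over strategy are only illustrative.

    /** Sketch: detect when the ZK sequential counter is close to wrapping. */
    final class SequenceGuard {
      private static final int SAFETY_MARGIN = 1_000_000;

      /** ZooKeeper appends a formatted, signed int counter to sequential node names. */
      static int parseSequence(String znodeName, String prefix) {
        return Integer.parseInt(znodeName.substring(prefix.length()));
      }

      static void checkHeadroom(int sequence) {
        // A negative value means the counter already wrapped; a value near
        // Integer.MAX_VALUE means it is about to. Either way, roll to a new
        // parent znode (or refuse proposals) before ordering guarantees break.
        if (sequence < 0 || sequence > Integer.MAX_VALUE - SAFETY_MARGIN) {
          throw new IllegalStateException("ZK sequence near overflow: " + sequence);
        }
      }
    }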
Rakesh, thanks for the link. This is indeed in line with this effort.
The interface defined here is not dependent on the storage provided by Zookeeper. It only needs the coordination piece in the form defined. So if you implement "stand-alone" ZAB it could potentially be used as a base for CoordinationEngine.
I see the work is just starting on github - hope the internship will be fun and productive.
Thanks Konstantin Shvachko for the interest in the ZAB internship; if everything goes well we will try to use ZAB as a base for CoordinationEngine. I'll keep an eye on this JIRA to watch the progress.
Guys,
attached is an improved version of the patch, with the following changes:
- ZkCoordinationEngine class is no longer abstract; rather, it relays agreement execution to a registered AgreementHandler class (see below).
- Added interface AgreementHandler<L> to control execution of the agreements learned by CoordinationEngine. This interface is parametrized with the type L of the custom Learner class. Also added a method to register a handler in the CoordinationEngine interface. A SampleLearner class is added in tests as an example.
- Improved handling of the ZooKeeper 'Disconnected' event inside the ZK-based reference implementation of the Coordination Engine; if the ZooKeeper client gets disconnected from the ensemble, the engine will now pause and resume automatically when it re-connects.
- Cleaned up ZkConfigKeys
- Improved javadoc, refined pom.xml-s.
- Moved exceptions and proposal classes to o.a.hadoop.coordination package, rather than subpackages of this package.
Hi Alex Newman,
zab or zk?
ZAB is a fresh project idea and is in its initial phase. I don't have many details now. Please follow ZOOKEEPER-1931 to know more on this.
Motivation - there could be many use cases where you need quorum-based replication. So the fresh thought came up to define the consensus algorithm (ZAB) more cleanly, so that users can define their own data models and use ZAB to replicate their own data.
Actually, after seeing the 'Coordination Engine' feature in HDFS, I thought of introducing this new idea.
I was thinking about ATM's suggestion of taking this on as a separate project some place other than Hadoop. The main problem with that is cross-dependencies. The 'Coordination Engine' project depends on types from Hadoop common, and HDFS will depend on the CE project. Since Common and HDFS are packaged together, this will be hard to break unless the CE project is a part of Common. Currently the interface uses Configuration and logging, and looking forward it will be rather tightly integrated with the Hadoop RPC layer, since many RPC calls should trigger coordination.
I think the patch looks good now. +1
I see how the interfaces can be the base for HDFS and HBase coordination. I also see that ZKCoordinationEngine needs more work. I think once we start using it we will see where it should evolve.
I propose to commit this upon Jenkins approval and if there are no objections. This should free up the implementation of coordination for HBase and HDFS.
I don't think we should be committing this to the trunk of Hadoop Common until it's actually 100% clear that HDFS will have a dependency on it. As it stands, there are a lot of objections in HDFS-6469 about adding this to the NN at all.
If you want to add it to a development branch in Hadoop then I won't stand in the way of that, but I honestly believe you'll be able to make a lot more headway more quickly if you go the route of putting this in a separate project.
Aaron T. Myers, HDFS-6469 doesn't have a lot of objections really. It has some questions, but they are already pretty much covered, IMO, by Konstantin Shvachko. So it doesn't look like the HDFS ticket is blocking the passage of this one.
HDFS-6469 still has several unanswered or un-retracted objections, perhaps most notably this one from Suresh:
I am very uncomfortable about adding all this complexity into HDFS.
and this one from Todd:
Lastly, a fully usable solution would be available to the community at large, whereas the design you're proposing seems like it will only be usably implemented by a proprietary extension (I don't consider the ZK "reference implementation" likely to actually work in a usable fashion).
I personally share both of these concerns with Suresh and Todd, specifically that this design seems overly-complex and does not add much (or perhaps any) benefit over a simpler solution that builds on the current HA system in Hadoop, and I'm concerned about baking in dependence on a proprietary 3rd party system for HA capabilities.
My main point here is that this work (HADOOP-10641) will not be useful to Hadoop except in the context of HDFS-6469. So, I'm not OK with committing this to trunk, at least until there's general agreement that HDFS-6469 is a reasonable design that we should move forward with in Hadoop. As it stands, I don't think there is such agreement; I certainly have not been convinced of this yet. I'm OK with you committing this to a development branch if you want to try to make progress that way, though.
several unanswered or un-retracted objections
- I did address the "complexity" issue in the first paragraph of my reply to Suresh
I cannot and probably should not address comfort levels of community members in general. But I can and will gladly address technical issues should you raise any.
These two jiras do introduce some concepts, which may be new to somebody (as they were to me when I started the project). But distributed coordination is the direction in which distributed systems are moving towards their maturity. I'll just mention Google's Spanner and Facebook's HydraBase here as examples. In my experience such concepts in fact simplify system architectures rather than complicate them.
- I will address Todd's comment in HDFS-6469 in more detail.
the design does not add much (or perhaps any) benefit over a simpler solution that builds on the current HA system in Hadoop
- I discussed the alternative solution in my reply to Todd; see the section on Active-Active vs Active-Standby HA.
This approach faces essentially the same problems as ConsensusNode, or "opens the same can of worms as the ConsensusNode" in Todd's words. But CNode in the end gives us all-active NNs, rather than a single active NN with the others as read-only standbys.
- Coordination opens an opportunity for geographically distributed HDFS, which allows scaling the file system across data centers.
- Coordination opens an opportunity for active-active Yarn.
- Coordination opens an opportunity for replicated regions in HBase.
I'm concerned about baking in dependence on a proprietary 3rd party system for HA capabilities
Not sure which 3rd-party system dependencies you see here. There are none mentioned in the CNode design. And ZK is already a dependency for Hadoop HA.
general agreement
I really don't know how to answer to the rest of your comments, Aaron.
- You seem to have issues with the design of HDFS-6469, but did not present any technical reasons there.
- You make HDFS-6469 a pre-condition for HADOOP-10641, but CNode implementation cannot start without the CE interface.
- Committing this to a development branch wouldn't make sense without you being convinced or comfortable to have it merged to trunk once the work is done.
- You do not give a clue on what would indicate a "general agreement" or what would convince you that there is one.
We are hosting a community meeting next week, which was announced on the dev lists. The topics on the agenda include technical discussion as well as the logistics of moving forward. Are you available to talk about these issues at the meeting and potentially work out a general agreement or a compromise?
Konst, you are overreacting here. In particular, this comment doesn't make any sense:
Committing this to a development branch wouldn't make sense without you being convinced or comfortable to have it merged to trunk once the work is done.
The work on HDFS-6469 is not done, so how can I possibly know whether or not I'll be convinced and comfortable to have it merged to trunk, when it is time to merge the feature branch? My skepticism of this feature at this point is not a good reason to commit it to trunk first without a development branch. I think the exact opposite is the case: doing this on a feature branch is a way for you to be able to demonstrate the benefits and prove out how low-risk the NN changes are that are required for this work. I do not understand at all your resistance to doing this work on a branch.
Not sure which 3rd-party system dependencies you see here. There are none mentioned in the CNode design. And ZK is already a dependency for Hadoop HA.
Please give me a little more credit than this. Even though it may not be mentioned in the design doc, it's fairly transparent that the primary goal of this work is to introduce a plugin point for WANdisco's coordination engine implementation into Hadoop.
I will try to make the meeting next week but at the moment my schedule does not allow it. I will try to move things around, though. San Ramon is also a very inconvenient location for me. Will there be a dial-in provided for those who cannot attend in-person?
Please give me a little more credit than this. Even though it may not be mentioned in the design doc, it's fairly transparent that the primary goal of this work is to introduce a plugin point for WANdisco's coordination engine implementation into Hadoop.
And as Konstantin said elsewhere: Hadoop has a number of features that are targeted to a proprietary technologies, which doesn't seem to be bothering anyone this far. So, I can't consider this as a real objection.
Sorry if it sounded like an overreaction; none intended.
doing this work on a branch.
the primary goal of this work is to introduce a plugin point for WANdisco's coordination engine implementation
I don't see anything bad with plugging the WANdisco CE into Hadoop, as I argued in the other jira comment. But saying it's the primary goal is not fair; you know me better than that.
Let me comment on the design. We actually looked at multiple consensus algorithms and their implementations and came up with an abstraction that suits the area in the most general way. In particular, the callback from the agreement to update the application state is separated from the proposing action because it is more generic. With some implementations of Raft a proposer can just wait until the agreement is made and then proceed with its execution - synchronously. But with ZK you have to set a watcher and wait for a callback acknowledging the event - asynchronously. So the asynchronous approach wins as more generic.
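For illustration, the asynchronous ZK pattern described above amounts to registering a child watch on the agreements parent and draining new children in the callback. This sketch uses the plain ZooKeeper client API; the znode path and the learner hook are invented for the example.

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    /** Sketch: learn agreements asynchronously via a child watch. */
    class AgreementWatcher implements Watcher {
      private final ZooKeeper zk;
      private final String agreementsPath = "/coordination/agreements"; // invented path

      AgreementWatcher(ZooKeeper zk) { this.zk = zk; }

      void start() throws KeeperException, InterruptedException {
        drain(); // sets the initial watch
      }

      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
          try {
            drain(); // re-arm the watch and apply anything new, in order
          } catch (Exception e) {
            // a real engine would surface this to its owner instead of swallowing it
          }
        }
      }

      private void drain() throws KeeperException, InterruptedException {
        List<String> children = zk.getChildren(agreementsPath, this);
        Collections.sort(children); // sequential names sort into agreement order
        for (String child : children) {
          byte[] data = zk.getData(agreementsPath + "/" + child, false, null);
          // deserialize the agreement here and call execute() on the learner;
          // a real engine would skip children at or below its current GSN
        }
      }
    }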
If you take Bart we can organize pickup for participants from the near station. Also we should have a dial up.
Should executeAgreement call updateGSN first or doExecute first? It seems more reliable to do a 2-phase commit; otherwise you can run into situations where the GSN is updated but execution failed.
I'm not comfortable with committing this to Hadoop trunk before it's actually something that Hadop trunk will use. How about committing this to both HBase and the HDFS-6469 development branch? Or, you could of course go the route I originally suggested of making the CE interface and ZK reference implementation an entirely separate project that both HBase and the HDFS-6469 branch could depend on.
I'm not comfortable with committing this to Hadoop trunk before it's actually something that Hadop trunk will use.
This is a chicken-and-egg problem, don't you think? You don't want to get this piece into common before something in the trunk will use it. However, it isn't possible to have anything in the trunk use the API until it is committed. Am I missing anything?
I'm saying you should commit the Coordination Engine interface to the ConsensusNode feature branch and use it on that branch, and then at some point we may merge the whole branch to trunk, CE and CN simultaneously. This is exactly what.
We hosted a meet-up at the WANdisco office in San Ramon today. Thank you to everyone who came. I'd especially like to thank Aaron T. Myers and Sanjay Radia for taking their time to connect with us.
I took the liberty to record some of the comments / concerns people raised during our meet-up. I will list all of them here and provide a few responses.
- Are NoQuorumException and ProposalNotAcceptedException enough? Are there other exceptions CoordinationEngine might throw?
- My own feeling is that these two in particular were the most general and universal. We could always add IOException, if desired.
- In submitProposal() there is ProposalReturnCode return value and possible Exception to be thrown. It is unclear which one we should use.
- I agree. Konstantin looked at me for an answer during this but I remained silent. The reason for this is for ProposalReturnCode to return a deterministic result (NoQuorum has a deterministic event; the Proposal was not sent), and to treat the Exception case as something wrong with the Proposal itself (i.e., doesn't implement equal() or hashcode() correctly, or cannot be serialized properly). I understand the confusion and we could do better with just the Exception case.
- ConsensusNode is non-specific. Consider renaming the project to ConsensusNameNode.
- Concern about whether Paxos can effectively load-balance clients; two round trips make writes slow.
- CNodeProxyProvider should allow for deterministic host selection. Consider a round-robin approach.
- We are weakening read semantics to provide the fast read path. This makes stale reads possible.
- Konstantin discussed the 'coordinated read' mechanism and how we ensure clients talk to up-to-date NameNodes via Proposals.
- Sub-namespace WAN replication is highly desirable but double-journaling in the CoordinationEngine and the EditsLog is concerning.
- The community would like the impact on write performance to be addressed.
- HBase coming up with WAL plugin for possible coordination. Wary of membership coordination (multiple Distributed State Machines) for HBase WALs.
- Small separate project might make it more likely for people to import CE into their own projects and build their own CoordinationEngines. Separate branch also possible.
Some of these clearly correspond to the HDFS and HBase projects and not just the CoordinationEngine itself. Apologies if I missed anyone's concern / point; pretty sure I captured everybody though.
As has been proposed above and agreed during the meet-up yesterday, I will go ahead and create a new ConsensusNode branch off trunk, so we'll start adding the implementation there.
Hey dude. Should we delay this a bit?
Did you mean ConsensusNameNode?
I think in the code we refer to it as CNode or ConsensusNode, hence the name for the branch.
I'm very much a +0 on the changes to HDFS, as that level is not an area of my understanding. If something does go into HDFS, then as noted, hadoop-common does seem an appropriate location - if it can't go into hadoop-hdfs itself.
Before that happens, consider this:
Consensus protocols are where CS-hard mathematics comes out of the textbooks and into the codebase; it is a key place where you are expected to prove the correctness of your algorithm before your peers will trust it. And, hopefully, before you make the correctness of that algorithm a critical part of your own application.
If Hadoop is going to provide a plug-in point for distributed co-ordination systems - which is what this proposal is - then we need to specify what is expected of an implementation strictly enough that it is possible to prove that implementations meet the specification, and that downstream projects can demonstrate that if an implementation meets this specification then their own algorithms will be correct.
More succinctly: a strict specification, any maths that can be provided, and test cases. This may seem a harsh requirement, but HADOOP-9361 shows that it is nothing I would not impose on myself. It is what Amazon is doing in their stack, and it has also been done for Distributed File Systems.
I would recommend using TLA+ here -and for any downstream uses. Once the foundations are done, then we can move onto YARN, and then finally to the applications which run on it.
I'm not going to comment on the code at all at this point, except to observe that you should be making this a YARN service to integrate with the rest of the services and workflow being built around them. The core classes are in hadoop-common.

This is a good idea in the abstract, but the notion of applying Amazon's process to a volunteer open source project is problematic. In terms of the Hadoop contribution process, this is a novel requirement. It is up to the Hadoop committership to determine commit criteria of course, but I humbly suggest that the intersection of contributors able to mathematically prove the correctness of a large code change while simultaneously being able to implement production quality systems code is vanishingly small. In this case, the contributors might be able to meet the challenge, but going forward, if significant changes to Hadoop will require a team of engineers and mathematicians, that probably marks the end of external contributions to the project. Also, I looked at HADOOP-9361. The documentation updates there are fantastic but I did not find any mathematical proofs of correctness.
This is a good idea in the abstract, but the notion of applying Amazon's process to a volunteer open source project is problematic.
Consensus protocols are expected to provide proofs of the algorithms' correctness; anything derived from Paxos, Raft et al relies on those algorithms being considered valid, and the implementors being able to understand the algorithms. Open source consensus protocol implementations are expected to publish their inner workings, else they can't be trusted. I will cite Apache ZooKeeper's ZAB protocol, and Anubis's consistent T-space model, as examples of two OSS products that I have used and implementations that I trust.
In terms of the Hadoop contribution process, this is a novel requirement.
Implementations of distributed consensus protocols are already one place where the team needs people who understand the maths. If a team implementing a protocol isn't able to specify it formally in some form or other: run. And if someone tries to submit changes to the core protocols of an OSS implementation who can't prove that it works, I would hope that the patch will be rejected.
Which is why I believe this specific JIRA, "provide an API and reference implementation of distributed updates", is suitable for the criterion "provide a strict specification". I'm confident that someone in the WANdisco dev team will be able to do this, and would make "understand this specification" a prerequisite for anyone else doing their own implementation.
Even so, we can't expect complete proofs of correctness. Which is why I said "any maths that can be provided, and test cases".
For HADOOP-9361, the test cases were the main outcome: by enumerating invariants and pre/post conditions, some places where we didn't have enough tests became apparent. These were mostly failure modes of some operations (e.g. what happens when preconditions aren't met).
Derived tests are great as:
- Jenkins can run them; you can't get mathematicians to prove things during automated regression tests.
- It makes it easier to decide if a test failure is due to an error in the test, or a failure of the code. If a specification-derived test fails, then it is now due to either an error in the specification or the code.
I think we need to do the same here: from a specification of the API, build the test cases which can verify the behavior as well as local tests can. Those implementors of the back end now get those tests alongside a specification which defines what they have to implement.
The next issue becomes "can people implementing things understand the specification?". It's why I used a notation that uses Python expressions and data structures; one that should be easy to understand. It's also why users of the TLA+ stuff in the Java & C/C++ world tend to use the curly-braced form of the language.
I'm sorry if this appears harsh or that I've suddenly added a new criterion to what Hadoop patches have to do, but given this Coordination Manager is proposed as a central part of a future HDFS and YARN RM, then yes, we do have to define it properly.
Consensus protocols are expected to provide proofs of the algorithms correctness
Steve, this jira is not proposing new consensus protocols, as stated in this comment.
CoordinationEngine here is an interface to be used with existing consensus algorithms, which indeed go through rigorous mathematical scrutiny before they become trustworthy.
this jira is not proposing new Consensus protocols, as stated in this comment. CoordinationEngine here is an interface to be used with existing consensus algorithms,
Exactly. This JIRA is proposing a plugin interface to co-ordination systems using consensus algorithms, a plugin point intended for use by HDFS and others. It is absolutely critical that all implementations of this plug in do exactly what is expected of them -and we cannot do that without a clear definition of what they are meant to do, what guarantees must be met and what failure modes are expected.
The consensus node design document is not such a document. It's an outline of what can be done, but it doesn't specify the API. The current patch for this JIRA contains some interfaces, a ZK class and a single test case. Can we trust this ZK class to do what is required? Not without a clear definition of what is required. Can we trust the test case to verify that the ZK implementation does what is required? Not now, no. What do we do if there is a difference between what the ZK implementation does and what the interface defines - is it the interface at fault, or the ZK implementation? What if a third-party implementation does something differently? Whose implementation is considered the correct one?
For the filesystems, HDFS defines the behavior; my '9361 JIRA was deriving a specification from that implementation, generating more corner case tests, and making the details of how (every) other filesystem behaves differently a declarative bit of XML for each FS -now we can see how they differ. We've even used it to bring the other filesystems (especially S3N) more in line with what is expected.
This new plugin point is intended to become a critical failure point for HDFS and YARN, where the incorrect behaviour of an implementation potentially places data at risk. Yet to date, all we have is a PDF file of the kind Amazon describes: "conventional design documents consist of prose, static diagrams, and perhaps pseudo-code in an ad hoc untestable language."
This is not a full consensus protocol; it will be straightforward to specify strictly enough to derive tests, to tell implementors of consensus protocol-based systems how to hook up their work to Hadoop. And, as those implementors are expected to be experts in distributed systems and such topics, we should be able to expect them to pick up basic specification languages just as we expect submitters of all patches to be able to write JUnit tests.
Steve, I am glad we are talking about the same thing now, which is the CoordinationEngine interface.
- It seems that you cite HADOOP-9361 as an example of a specification of an API. Same as Andrew, I could not find specifications or any documents linked there. Maybe you can clarify: is it a TLA+ spec of HDFS?
- If FileSystem specs are done and the tests developed according to specs, should that be sufficient to safeguard FS behaviour of any internal changes including introduction of CE? Which I assume is your main concern with this jira.
If I introduce an implementation of CE using ZK and it does not break the tests and therefore does not alter FileSystem semantics, isn't that a verification of the implementation?
- It looks like you propose to introduce a new requirement for Hadoop contributions, that new features should be formally specified and mathematically proven. I think this should be discussed in a separate thread before it can be enforced. Would be good to hear what people think.
- Generally great minds were thinking about disciplined software development techniques way back, probably starting from Dijkstra and Donald Knuth. I found them very useful dealing with complex algorithms, not sure about APIs.
I asked this question in the linked jira and want to post it here.
Are the points raised so far objections to creating a branch and starting the implementation of CoordinationEngine along with ConsensusNode on it, as was agreed at the July 15 meetup?
It seems that you cite HADOOP-9361 as an example of a specification of an API. Same as Andrew, I could not find specifications or any documents linked there. Maybe you can clarify: is it a TLA+ spec of HDFS?
Look in the site docs.
I actually used the Z model with a Python syntax in the hope it would be more broadly understood, though now that I'm trying to integrate it with other work I'm trying to think "shall I go back and TLA+ it?"...because then it's possible to start thinking about code using the public module definitions.
If FileSystem specs are done and the tests developed according to specs, should that be sufficient to safeguard FS behaviour of any internal changes including introduction of CE? Which I assume is your main concern with this jira.
My concern is slightly different: we're producing a plugin point for co-ordination across the Hadoop stack, and we need to know what it is meant to do.
If I introduce an implementation of CE using ZK and it does not break the tests and therefore does not alter FileSystem semantics, isn't that a verification of the implementation?
The real HDFS tests are the full stack, with failure injection...those FS API ones are limited to those API calls and how things like seek() work. It'll be the full stack tests with failure injection that will highlight where a test fails after the changes.
As for the tests in this JIRA so far, they're pretty minimal and verify that the ZK implementation does "something" provided nothing appears to fail. They don't seem designed to be run against any other implementation of the interface, nor is there any failure injection. Even without the fault injection, any tests for this should be designed to be targetable at any implementation, to show consistent behaviour in the core actions.
Such tests won't verify robustness though...it looks to me that I could implement this API using in-memory data structures, something that would be utterly lacking in the durability things need. Or I could try to use Gossip, which may have the durability, but a different ordering guarantee which may show up in production. It's things like the latter I'm hoping to catch, by spelling out to implementors what they have to do.
It looks like you propose to introduce a new requirement for Hadoop contributions, that new features should be formally specified and mathematically proven. I think this should be discussed in a separate thread before it can be enforced. Would be good to hear what people think.
Proven? Not a chance. What I would like to see is those critical co-ordination points to be defined formally enough that there's no ambiguity about what they can do. This proposal is for a plugin to define what HDFS, HBase &c will expect from a consensus service, so we have to ensure there's no ambiguity. Then we can use those documents to derive the tests to break things.
Generally great minds were thinking about disciplined software development techniques way back, probably starting from Dijkstra and Donald Knuth. I found them very useful dealing with complex algorithms, not sure about APIs.
+Parnas, who invented "interfaces" as the combination of (signature, semantics) in 1972. What I'm trying to do here is get those semantics nailed down.
Attaching a new patch; here are the changes I've made:
- Moved updateCurrentGSN() to after executeAgreement(). We will replay the agreement if we crashed before updating the GSN in ZK (see the sketch after this list).
- Removed the ProposalReturnCode(s). We will just return if submission was successful. Exception if unsuccessful.
- Renamed ProposalNotAcceptedException to ProposalSubmissionException.
- NoQuorumException now extends ProposalSubmissionException.
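The ordering change in the first bullet - execute the agreement, then persist the GSN - relies on agreement executions being replayable. A schematic version, with invented helper names:

    /** Sketch: apply agreements idempotently so a crash between the two steps is safe. */
    class GsnOrderedExecutor {
      private long currentGsn;        // last GSN durably recorded
      private final GsnStore store;   // hypothetical persistence of the GSN (e.g. a znode)

      GsnOrderedExecutor(GsnStore store) throws Exception {
        this.store = store;
        this.currentGsn = store.readGsn(); // on restart, resume from the stored GSN
      }

      <L, R> void apply(long gsn, Agreement<L, R> agreement, L learner) throws Exception {
        if (gsn <= currentGsn) {
          return;                     // already applied before a crash - skip on replay
        }
        agreement.execute(learner);   // 1. apply to the learner first
        store.writeGsn(gsn);          // 2. only then record progress
        currentGsn = gsn;             // a crash between 1 and 2 just re-executes the agreement
      }

      interface GsnStore {
        long readGsn() throws Exception;
        void writeGsn(long gsn) throws Exception;
      }

      interface Agreement<L, R> {
        R execute(L learner) throws Exception;
      }
    }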
Looks like the new pom.xml file is missing the ASL boilerplate.
Please find attached a first attempt at a CE specification in TLA+. With TLC and the attached configuration file, model checking is successful.
The specification is for a CE that accepts proposals (containing values submitted by proposers) and produces a sequence of agreements. The mechanism through which proposals are agreed will depend on the coordination algorithm used by the CE implementation - at the moment all the specification states is that a submitted proposal ends up in the agreement sequence.
Michael Parkin this looks really neat.
Steve Loughran I am curious if that helps at all?
I ran zookeeper-benchmark and NNThroughputBenchMark.
ZooKeeper write throughput on HDD is comparable to NameNode write-operation throughput, while ZooKeeper throughput on SSD is higher than NameNode throughput. Based on the results, ZooKeeper is not expected to be the bottleneck.
-1 overall. Here are the results of testing the latest attachment against trunk revision .
-1 patch. The patch command could not apply the patch.
Console output:
This message is automatically generated.
Attaching new patch.
Here are the major updates:
- Added ZK implementation interfaces to make Agreement handling clearer.
- ZKCoordinationEngine takes a collection of ZKAgreementHandlers. It is the job of the Handlers to type cast the Agreements. The pre-requisite to the type cast is that ZKAgreementHandler.handles(Agreement) must return true for that very Agreement.
- ZKCoordinationEngine executes each Agreement amongst all the ZKAgreementHandlers. Look at ZKCoordinationEngine.executeAllHandlers() (a sketch of this dispatch appears below).
- SampleHandler sets the GlobalSequenceNumber of the SampleProposal before executing it.
The test, testSimpleProposals, was also updated to validate that the CoordinationEngine's GlobalSequenceNumber is incrementing monotonically per Agreement reached.
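A rough sketch of the handler dispatch described above; handles() and executeAllHandlers() follow the names in the comment, while everything else is invented:

    import java.util.List;

    /** Placeholder for the engine's agreement type. */
    interface Agreement { }

    /** Handlers narrow the generic Agreement to the concrete type they can apply. */
    interface ZKAgreementHandler<L> {
      boolean handles(Agreement agreement);
      void executeAgreement(Agreement agreement, L learner) throws Exception;
    }

    class HandlerDispatcher<L> {
      private final List<ZKAgreementHandler<L>> handlers;
      private final L learner;

      HandlerDispatcher(List<ZKAgreementHandler<L>> handlers, L learner) {
        this.handlers = handlers;
        this.learner = learner;
      }

      /** Mirrors executeAllHandlers(): offer each agreement to every registered handler. */
      void executeAllHandlers(Agreement agreement) throws Exception {
        for (ZKAgreementHandler<L> handler : handlers) {
          if (handler.handles(agreement)) { // only a matching handler casts and executes it
            handler.executeAgreement(agreement, learner);
          }
        }
      }
    }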
appears to have generated 2 warning messages.
See for details.
I made improvements to this patch; the improved version was benchmarked and the results are attached.
I'll file the modified version of the CE as a separate jira later.
Thank you for the really good TLA document - I do think it makes what's going on a lot clearer, as now others can see strictly what implementations do. (It now places a requirement on me to do the same for my proposals, but I'm happy with that.)
It also places a requirement for me to look at the code alongside the spec, so here goes:
Algorithm
- presumably usedIds aren't collected forever in common implementations, instead they use enough of a time marker in their IDs to be self-windowing.
core code
- CoordinationEngine should extend Hadoop common's AbstractService; this makes it trivial to integrate with the lifecycle of YARN apps and when someone migrates the NN/DN to the lifecycle will hook to HDFS the same way. The init/start/stop operations all match.
- ZKConfigKeys needs a name more tied to the coordination engine.
- ZKCoordinationEngine could use guava Precondition as a check on localNodeId and move it up to the init method.
- I don't like downgrading all ZK exceptions to "IOE", as it hides things like security exceptions, missing parents &c. Again this is something we could do that is more generic than just for the co-ord engine, as I've ended up doing some of this in my code.
- ZKCoordinationEngine.loopLearningUntilCaughtUp()
- createOrGetGlobalSequenceNumber's while(true) loop appears to spin forever if the exception raised in its ZK actions is KeeperException.NoAuthException... that is, if starting on a secure cluster where it can't access the path. More filtering of exception types is needed with the unrecoverables thrown up (SessionExpiredException +maybe some others).
- ZKCoordinationEngine.submitProposal(): needs better exception text, ideally including path and text of nested exception.
- I'm not sure I like the way ZKCoordinationEngine.processImpl() shuts itself down. I'd prefer some exception for the caller to process, so that owner of the engine is in sync.
- if an AgreementsThread fails its exception doesn't get picked up ... these need to be propagated back to the junit thread. Ideall also logged in that AgreementsThread after Thread.currentThread().setName() has given it a name for the log statements.
tests
- MiniZKCluster ... I have the one from Twill converted to a YARN svc; this is is one i'd like to see this switch to once the code is checked in. It's lighter weight than the HBase one.
- there's always a delay for ZK startup. Could you make the test cases start one as a @BeforeClass and all share the same one?
- needs tests for failure conditions: no ZK cluster
- if there's a way to do this, tests for against a secure cluster, both succeeding and failing.
- add a test timeout via @Rule public final Timeout testTimeout = new Timeout(30000);
minor
- minor: needs formatting to style, use of {} in all conditional clauses.
- Can you use SLF4K everywhere -it's more efficient as can do string expansion only when needed & will stop people complaining that every log action needs to be wrapped by log-level checks. Then switch calls like LOG.info("Got watched event: " + watchedEvent) to LOG.info("Got watched event: {}", watchedEvent)
What now?
I'm pretty happy with it —though more reviewers are needed. What now?
- We commit hadoop-coordination with the implementation into Hadoop common, and work towards getting it into a version of the NN which can use it for committing operations; maybe later in YARN and downstream.
- We work with the curator team to get it into curator and pull that into server-side Hadoop. YARN-913 is going to need that in the RM anyway, though so far HDFS hasn't. Strengths: lives with the rest of their ZK work. Weaknesses: looser coupling to Hadoop & a different release cycle.
- it goes into a hadoop-zookeeper module that depends on ZK & Curator, and on which things which need these can depend on, including client-side code that wants it (ZK trickles out somehow already, pulling it apart would be cleaner). For example, my mini-ZK cluster as YARN service could be anothe service to add.
Strategy 3 appeals to me : it's not that different from what is there today (indeed, we just fix the module name for now & add more features as/when needed)
oh, one more thing.. could you add the .tla file to the patch too. maybe we could start having src/tla as the home for these files
Thanks Henry and Andrey for the benchmark results. To summarize this, we have three benchmarks
- NNThroughputBenchmark, which gives us the upper bound of NN throughput.
- ZK benchmark, which measures the performance of ZK itself.
- ZK-CE benchmark measuring performance of CE based on ZK.
Ideally we would like to see NNThroughput <= ZK-CE throughput <= ZK throughput.
Just looking at create operation
- NNThroughput yields 13K ops/sec with 400 threads, which seems to be optimal for that hardware configuration.
- ZK throughput is substantially higher on SSD: 34K ops/sec
- ZK-CE runs at 8.4K ops/sec, which is slower than NNThroughput.
So there is work to do here. I think CE implementation can be optimized to get on par with or close to ZK performance.
Steve, thanks for thorough code review.
The question about "what's next" has been discussed in this jira (see Aaron's and others comments), on the meetup on July 15 and in person. The decision everybody agreed on is to continue the CE and CNode development on a branch.
- I would be glad to commit it to hadoop-common as you propose. That way we can see faster adoption.
- I don't think the interface should be tied to ZooKeeper itself or higher level projects on top of it like Curator. Because CE is intended to be have implementations for different consensus algorithms.
Attaching initial patch.
Initial implementation shows using ZooKeeper as a Coordination Engine. The mechanism for sequencing transactions is done by using a single persistent-sequential Znode.
The ZooKeeper connection thread is utilized for learning of agreements by constantly checking against the single Znode mentioned above for different sequence values, and reading them one by one.
Proposing values and learning about them happen in parallel. | https://issues.apache.org/jira/browse/HADOOP-10641 | CC-MAIN-2016-36 | refinedweb | 8,525 | 55.95 |
.
Switch widget:
The Switch widget is active or inactive, as a mechanical light switch. The user can swipe to the left/right to activate/deactivate it.
The value represented by the switch is either True or False. That is the switch can be either in On position or Off position.
To work with Switch you must have to import:
from kivy.uix.switch import Switch
Attaching Callback to Switch:
- A switch can be attached with a call back to retrieve the value of the switch.
- The state transition of a switch is from either ON to OFF or OFF to ON.
- When switch makes any transition the callback is triggered and new state can be retrieved i.e came and any other action can be taken based on the state.
- By default, the representation of the widget is static. The minimum size required is 83*32 pixels.
- The entire widget is active, not just the part with graphics. As long as you swipe over the widget’s bounding box, it will work.
Basic Approach: 1) import kivy 2) import kivyApp 3) import Switch 4) import Gridlayout 5) import Label 6) Set minimum version(optional) 7) create Layout class(In this you create a switch): --> define the callback of the switch in this 8) create App class 9) create .kv file (name same as the app class): 1) create boxLayout 2) Give Lable 3) Create Switch 4) Bind a callback if needed 10) return Layout/widget/Class(according to requirement) 11) Run an instance of the class
Below is the Implementation:
We have explained how to create button, attach a callback to it and how to disable a button after making it active/inactive.
main.py file:
.kv file : in this we have done the callbacks and done the button disable also.
Output:
Image 1:
Image 2:
Image to show callbacks:
Attention geek! Strengthen your foundations with the Python Programming Foundation Course and learn the basics.
To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course.
Recommended Posts:
- Python | Switch widget in Kivy
- Python | Spinner widget in Kivy using .kv file
- Python | Popup widget in Kivy using .kv file
- Python | Carousel Widget In Kivy using .kv file
- Python | Progressbar widget in kivy using .kv file
- Python | Scrollview widget in kivy
- Python | Carousel Widget In Kivy
- Python | BoxLayout widget in Kivy
- Python | Slider widget in Kivy
- Python | Checkbox widget in Kivy
- Python | Add image widget in Kivy
- Python | Popup widget in Kivy
- Python | Spinner widget in kivy
- Python | Textinput widget in kivy
- Python | Progress Bar widget in kivy
- Python | Create a stopwatch using clock object in kivy using .kv file
- Circular (Oval like) button using canvas in kivy (using .kv file)
- Python | AnchorLayout in Kivy using .kv file
- Python | StackLayout in Kivy using .kv file
- Python | FloatLayout in Kivy using .kv. | https://www.geeksforgeeks.org/python-switch-widget-in-kivy-using-kv-file/?ref=rp | CC-MAIN-2020-50 | refinedweb | 475 | 71.65 |
Opened 6 years ago
Closed 5 years ago
Last modified 18 months ago
#9590 closed Uncategorized (wontfix)
CharField and TextField with blank=True, null=True saves u'' instead of null
Description
Create model and form:
class Test(models.Model): testfield = models.CharField(max_length=20, null=True, blank=True) testfield2 = models.TextField(null=True, blank=True) class NullCharFieldForm(forms.ModelForm): class Meta: model = Test fields = ('testfield', 'testfield2',)
Now create object from POST-alike data (empty input or textarea = ""):
>>> form = NullCharFieldForm({'testfield':'', 'testfield2': ''}) >>> form.is_valid() True >>> obj = form.save() >>> obj.testfield u'' >>> obj.testfield2 u''
form validates as it should with blank=True but it stores u"" in object fields and in DATABASE :/
result should be:
>>> obj.testfield >>> >>> obj.testfield is None True
Patch + tests attached, it's created on 1.0.X branch, it passes against model_forms and forms (regression) tests.
Attachments (1)
Change History (7)
Changed 6 years ago by romke
comment:1 Changed 6 years ago by ramiro
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
I think this behavior ir related to the convention used by Django for CharField's and TextField's with no value, the rationale is explained in the section of documentation that describes the null field option:
comment:2 Changed 6 years ago by mtredinnick
- Resolution set to wontfix
- Status changed from new to closed
This is by design and fully documented. If you want to save NULLs in a text-based field, you'll need to create your own Field subclass. Changing the existing Django behaviour would be backwards-incompatible.
comment:3 Changed 5 years ago by mightyhal
- Component changed from Forms to Database layer (models, ORM)
- Has patch unset
I agree that it's pretty annoying that django forces for empty string fields, even when null=True, blank=True is set. Following mtredinnick's suggestion (and some teeth grinding), here's how to subclass a CharField to make it store NULL:
from django.db import models class CharNullField(models.CharField): description = "CharField that stores NULL but returns ''" def to_python(self, value): if isinstance(value, models.CharField): return value if value==None: return "" else: return value def get_db_prep_value(self, value): if value=="": return None else: return value
comment:4 Changed 5 years ago by contact_django@…
- Has patch set
- Patch needs improvement set
- Resolution wontfix deleted
- Status changed from closed to reopened
The behavior is contrary to documentation.
When a field has null=True, django should store a NULL value, as documented.
Documentation says "Avoid using null on string-based fields such as CharField and TextField unless you have an excellent reason." When you have a legacy database, you may be in a situation when you don't have the choice. This is my case. Actually I even have a database integrity check field<>[[BR]]
If you really really don't want to support null=True for these kind of fields, you should fix the documentation and issue an error when a null=True is found in a CharField.
comment:5 Changed 5 years ago by ubernostrum
- Resolution set to wontfix
- Status changed from reopened to closed
Don't reopen a ticket which has been closed as "wontfix" by a committer. If you disagree, bring up the issue on the django-developers mailing list.
comment:6 Changed 18 months ago by alepane
- Easy pickings unset
- Severity set to Normal
- Type set to Uncategorized
- UI/UX unset
If you need a nullable CharField, you can use this small app that I made
patch + tests | https://code.djangoproject.com/ticket/9590 | CC-MAIN-2015-11 | refinedweb | 581 | 59.64 |
The problem with this exercise is that there isn’t really way to complete it without using a method that you have not yet learned in the book. In saying that, there certainly are a few other ways to do this exercise including switch statements and incrementing the ASCII value of a letter grade. But if you are going chapter by chapter in the book, you will not have learned how to do that. I did use cin.getline to store the input sequence to firstName[], allowing for multiple first-names as required. I posted a simple solution that works for our purposes, and acts as an intro to if statements. Take a look:
1. Write a C++ program that requests and displays information as shown in the following example of output:
What is your first name? Betty Sue
What is your last name? Yew
What letter grade do you deserve? B
What is your age? 22
Name: Yew,.
#include <iostream> using namespace std; int main() { char firstName[20]; char lastName[20]; char grade; char newGrade; int age; // Gather input cout << "What is your first name? "; cin.getline(firstName, 20); cout << "What is your last name? "; cin >> lastName; cout << "What is the letter grade you deserve? "; cin >> grade; cout << "What is your age? "; cin >> age; // Increment grade and recognize upper and lowercase input. if (grade == 'A' || grade == 'a') { newGrade = 'B'; } else if (grade == 'B' || grade == 'b') { newGrade = 'C'; } else if (grade == 'C' || grade == 'c') { newGrade = 'D'; } cout << "name: " << firstName << " " << lastName << endl; cout << "grade: " << char (newGrade) << endl; cout << "age: " << age << endl; cin.get(); return 0; }
Hey, just wanted to say that you can solve it without loops, adding integers to chars will automatically convert (ASCII) so if you type A and than do following A + 1 you will get B, and so on 🙂 let me know what you think
Interesting trick, however, im finding that the program will not work properly if I supply grade input via that method. The purpose of the loops was to always adjust the grade downward. For instance, if I enter: A+1 as input, the grade I think I deserve and the grade I receive will both be ‘B’. And the +1 will be interpreted as the age, because of the input method used. Further, I see that if I try to increment via +2 or higher, it does not actually do that: A+2 will still be ‘B’. The program could probably be done a little better by actually adjusting for the gap between D and F as well; which could be done by accounting for that in the nested loops.
#include
using namespace std;
int main()
{
char fname[20],lname[20];
char grade;
int age;
cout << "First name?\n";
cin.getline(fname,20);
cout << "Last name?\n";
cin.getline(lname, 20);
cout << "Grade?\n";
cin.get(grade);
cout <> age;
cout << "Name:" << lname << ", " << fname << endl;
cout<<"Grade: "<<char(grade + 1)<<endl;
cout << "Age: " << age;
cin.get();
cin.get();
return 0;
}
Excellent. There’s always more than one way to skin a cat 🙂 | https://rundata.wordpress.com/2012/10/15/c-primer-chapter-4-exercise-1/ | CC-MAIN-2017-26 | refinedweb | 508 | 80.62 |
XML Schema Validation with JAXP 1.4 and JDK 6.0
A few people have found problems validating DOM instances with JAXP1.4/JDK 6.0. I saw this quesion raised in the Java Technology and XML forum, and at least 3 bugs were filed for this in the last few weeks. I'll use this blog entry to explain what the problem is and how to easily fix your code.
Let's start by showing a snippet of the problematic code, which basically parses an XML file into a DOM and tries to validate it against an XML schema.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder parser = dbf.newDocumentBuilder(); Document document = parser.parse(getClass().getResourceAsStream("test.xml")); // create a SchemaFactory capable of understanding XML schemas SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); // load an XML schema Source schemaFile = new StreamSource(getClass().getResourceAsStream("test.xsd")); Schema schema = factory.newSchema(schemaFile); // create a Validator instance and validate document Validator validator = schema.newValidator(); validator.validate(new DOMSource(document));
Can you spot any problems in this code? Well, there's nothing obviously wrong with it, except that if you try this with JAXP 1.4 RI or JDK 6.0, you're going to get an error like,
org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'foo'. at ...
So what is the problem then? Namespace processing. By default, and for historical reasons, namespace awareness isn't enabled in a DocumentBuilderFactory, which means the document that is passed to the validator object isn't properly constructed. You can fix this problem by adding just one line,
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setNamespaceAware(true); // Must enable namespace processing!! DocumentBuilder parser = dbf.newDocumentBuilder(); ...
By now you may be asking yourself why is this even reported as a problem. Naturally, XML Schema validation requires namespace processing. It turns out that in JDK 5.0 this magically worked in many cases. Notice the use of the term "magically". By that I mean, "we don't know how it worked before but we are certainly looking into it".
So for now I'm just reporting the problem and proposing a fix for it. But I still owe you a better explanation as to why this worked before --or maybe you know why and you can tell me?
- Login or register to post comments
- Printer-friendly version
- spericas's blog
- 6179 reads | https://weblogs.java.net/blog/spericas/archive/2007/03/xml_schema_vali.html | CC-MAIN-2015-32 | refinedweb | 394 | 52.05 |
A Simple Way to Build Collaborative Web Apps
Recently I've been thinking about how to build a collaborative web app in 2021.
By collaborative web app I mean apps with desktop-like interactions and realtime collaborations, such as Notion, Discord, Figma, etc.
I want to find an approach suitable for small teams, simple but not necessarily easy, fast by default, and scalable when it grows.
Thus I started a journey to find out by creating a demo todo app, exploring the tools and methods along the way.
Our todo app has these features:
- users can create, edit, delete and reorder todos, which are stored on the server for persistency
- users can cooperatively edit the same list of todos, and changes are automatically synced between different clients in realtime
- all the operations should be as fast as possible for all the users around the world
The fast part is mainly concerned with latency, because most apps won't have much throughput to deal with in the beginning. More specifically, we want the changes made by one client to be delivered to other clients in a portion of the speed of light - less than 100ms in the same geographical region and several hundred ms across the continent. The app is named Todo Light for this reason.
You can play with the end product here. The client and server code are both hosted on Github.
A random list id is created when the app starts. You can share the URL with the list_id parameter to collaboratively edit the same list of todos with others (or yourself in another browser tab).
Todo Light is simple enough as a demo but nonetheless embodies some essence of the SaaS apps mentioned above, albeit in a much-simplified manner.
Let's get started building it.
Client
We begin from the client because it contains the core of our app.
The user interface part is easy, which we use React to build. Other reactive UI frameworks like Svelte and Vue should work as well. I choose React because of familiarity.
The client-only version of the app is straightforward to write:
import React, { useState } from 'react' export default function TodoApp() { const [todos, setTodos] = useState([]) const [content, setContent] = useState('') return ( <div> <form onSubmit={(e) => { e.preventDefault() if (content.length > 0) { setTodos((todos) => { return [...todos, { content, completed: false }] }) setContent('') } }} > <input autoFocus value={content} onChange={(e) => setContent(e.target.value)} /> </form> <ul> {todos.map((todo, index) => { return ( <li key={index}> <span>{todo.content}</span> <div> <label> <input type="checkbox" checked={todo.completed} onChange={(e) => { setTodos((todos) => { return todos.map((todo, i) => { if (i === index) { return { ...todo, completed: e.target.checked, } } else { return todo } }) }) }} /> </label> <button onClick={() => { setTodos((todos) => { return todos.filter((todo, i) => { return index !== i }) }) }} > x </button> </div> </li> ) })} </ul> </div> ) }
With less than 100 lines of code, the resulting app already looks and works like the end product, except that its state is volatile. Refresh the browser, and you'd lose all your todo items!
We use React's state to store data, which is okay for the input value because it is temporary by nature, but not quite right for the todos. The todos need to be:
- updated in a local browser cache for maximal speed
- synced to the server for persistency
- delivered to other clients in correct order and state.
Nowadays, there is a plethora of frontend state management libraries to choose from: Redux, MobX, Recoil, GraphQL clients like Apollo and Relay, etc. Sadly none of them works in our use case. What we need is a distributed system with realtime syncing and conflict resolution baked in. Although there are good writings on this subject, distributed systems are still too hard to implement correctly for a one-person team. I'd like to bring in some help.
After some search, a promising option shows up - Replicache, of which the homepage says:
Replicache makes it easy to add realtime collaboration, lag-free UI, and offline support to web apps. It works with any backend stack.
Sounds too good to be true (spoiler: it's mostly true). How does Replicache achieve these bold claims? Its doc site has a whole page to explain how it works. To save your time, I will summarize roughly here.
Replicache implements a persistent store in the browser, using IndexedDB. You can mutate the store locally and subscribe to part of the store in your UI. When data changes, subscriptions re-fire, and the UI refreshes.
You need to provide two backend endpoints for Replicache to talk to: replicache-pull and replicache-push. replicache-pull sends back a subset of your database for the current client. replicache-push updates the database from local mutations. After applying a mutation on the server, you send a WebSocket message hinting to affected clients to pull again.
That's all you need to do. Relicache orchestrates the whole process to make sure the state is consistent while being synced in realtime.
We will dive into the backend integration in the next section of this article. For now, let's rewrite the state-related code utilizing Replicache:
// Only relevant part are shown import { Replicache } from 'replicache' import { useSubscribe } from 'replicache-react' import { nanoid } from 'nanoid' const rep = new Replicache({ // other replicache options mutators: { async createTodo(tx, { id, completed, content, order }) { await tx.put(`todo/${id}`, { completed, content, order, id, }) }, async updateTodoCompleted(tx, { id, completed }) { const key = `todo/${id}` const todo = await tx.get(key) todo.completed = completed await tx.put(`todo/${id}`, todo) }, async deleteTodo(tx, { id }) { await tx.del(`todo/${id}`) }, }, }) export default function TodoApp() { const todos = useSubscribe(rep, async (tx) => { return await tx.scan({ prefix: 'todo/' }).entries().toArray() }) ?? [] const onSubmit = (e) => { e.preventDefault() if (content.length > 0) { rep.mutate.createTodo({ id: nanoid(), content, completed: false, }) setContent('') } } const onChangeCompleted = (e) => { rep.mutate.updateTodoCompleted({ id: todo.id, completed: e.target.checked, }) } const onDelete = (_e) => { rep.mutate.deleteTodo({ id: todo.id }) } // render }
We replace React's in-memory state with Replicache's persistent store. The app should work as before, except your carefully written todo items won't disappear when the browser tab closes.
Notice the mutators we register when initializing Replicache. They are the main APIs we use to interact with Replicache's store. When they are executed on the client, the corresponding mutations will be sent to the replicache-push endpoint by Replicache.
With the help of Replicache, you can think about your client state as a giant hashtable. You can read from it and write to it as you like, and Replicache would dutifully keep the state in sync among the server and all the clients.
Server
Now let's move on to the server.
The plan is clear: we will implement the two endpoints needed by Replicache, using some backend language (we use NodeJS in this case) and some database. The only requirement by Replicache is that the database must support a certain kind of transaction.
Before we set out to write the code, we need to think about the architecture. Remember the third feature of Todo Light? It should be as fast as possible for all users around the world.
Since we have implemented Optimistic UI on the client, most operations are already speedy (zero latency). For changes to be synced from one client to others quickly, we still need to achieve low latency for the requests to the server. Hopefully, the latency should be under 100ms for the collaboration to feel realtime.
We can only achieve that by globally deploying the server and the database. If we don't and only deployed to one region, the latency for a user in another continent will be several hundred milliseconds high no matter what we do. It's the speed of light, period.
Globally deploying a stateless server should be easy. At least that's what I initially thought. Turns out I was wrong. In 2021, most cloud providers still only allow you to deploy your server to a single region. You need to go many extra steps to have a global setup.
Luckily I find Fly.io, a cloud service that helps you "deploy app servers close to your users", which is excatly what we need. It comes with an excellent command-line tool and a smooth "push to deploy" deployment flow. Scaling out to multiple regions (in our case, Hong Kong and Los Angeles) takes only a few keystrokes. Even better, they offer a pretty generous free tier.
The only question left is which database we should use. Globally distributed databases with strong consistency is a huge and complicated area that has been tackled by big companies in recent years.
Inspired by Google's Spanner, many open source solutions come out. One of the most polished competitors is CockroachDB. Luckily, they offer a managed service with a 30-day trial.
Although I managed to build a version of Todo Light using CockroachDB, the end product in this article is based on a much simpler Postgres setup with distributed read replicas. Dealing with a global database brings in much complexity that is not essential to the subject matter of this article, which will wait for another piece.
We need two tables, one for todos and one for replicache clients.
Replicache needs to track the last_mutation_id of different clients to coordinate all mutations, whether confirmed or pending. The deleted column is used for soft deletes. The version column is used to compute change for Replicache pulls, which we will explain later.
The replicache-push endpoint receives arguments from the local mutators. Let's persist them to the database. We also need to increment the lastMutationID in the same transaction, as mandated.
router.post('/replicache-push', async (req, res) => { const { list_id: listID } = req.query const push = req.body try { // db is a typical object than represents a database connection await db.tx(async (t) => { let lastMutationID = await getLastMutationID(t, push.clientID) for (const mutation of push.mutations) { const expectedMutationID = lastMutationID + 1 if (mutation.id < expectedMutationID) { console.log( `Mutation ${mutation.id} has already been processed - skipping`, ) continue } if (mutation.id > expectedMutationID) { console.warn(`Mutation ${mutation.id} is from the future - aborting`) break } // these mutations are automatically sent by Replicache when we execute their counterparts on the client switch (mutation.name) { case 'createTodo': await createTodo(t, mutation.args, listID) break case 'updateTodoCompleted': await updateTodoCompleted(t, mutation.args) break case 'updateTodoOrder': await updateTodoOrder(t, mutation.args) break case 'deleteTodo': await deleteTodo(t, mutation.args) break default: throw new Error(`Unknown mutation: ${mutation.name}`) } lastMutationID = expectedMutationID } // after successful mutations we use Ably to notify the clients const channel = ably.channels.get(`todos-of-${listID}`) channel.publish('change', {}) await t.none( 'UPDATE replicache_clients SET last_mutation_id = $1 WHERE id = $2', [lastMutationID, push.clientID], ) res.send('{}') }) } catch (e) { console.error(e) res.status(500).send(e.toString()) } }) async function getLastMutationID(t, clientID) { const clientRow = await t.oneOrNone( 'SELECT last_mutation_id FROM replicache_clients WHERE id = $1', clientID, ) if (clientRow) { return parseInt(clientRow.last_mutation_id) } await t.none( 'INSERT INTO replicache_clients (id, last_mutation_id) VALUES ($1, 0)', clientID, ) return 0 } async function createTodo(t, { id, completed, content, order }, listID) { await t.none( `INSERT INTO todos ( id, completed, content, ord, list_id) values ($1, $2, $3, $4, $5)`, [id, completed, content, order, listID], ) } async function updateTodoCompleted(t, { id, completed }) { await t.none( `UPDATE todos SET completed = $2, version = gen_random_uuid() WHERE id = $1 `, [id, completed], ) } // other similar SQL CRUD functions are omitted
The replicache-pull endpoint requires more effort. The general plan is, in every request to replicache-pull we compute a diff of state and an arbitrary cookie (not to be confused with HTTP cookie) to send back to the client. The cookie will be attached to the subsequent request to compute the diff. Rinse and repeat.
How to compute the diff may be the most challenging part of integrating Replicache. The team provides several helpful strategies. We will use the most recommend one: the row version strategy.
router.post('/replicache-pull', async (req, res) => { const pull = req.body const { list_id: listID } = req.query try { await db.tx(async (t) => { const lastMutationID = parseInt( ( await t.oneOrNone( 'select last_mutation_id from replicache_clients where id = $1', pull.clientID, ) )?.last_mutation_id ?? '0', ) const todosByList = await t.manyOrNone( 'select id, completed, content, ord, deleted, version from todos where list_id = $1', listID, ) // patch is an array of mutations that will be applied to the client const patch = [] const cookie = {} // For initial call we will just clear the client store. if (pull.cookie == null) { patch.push({ op: 'clear' }) } todosByList.forEach( ({ id, completed, content, ord, version, deleted }) => { // The cookie is a map from row id to row version. // As the todos count grows, it might become too big to be efficiently exchanged. // By then, we can compute a hash as a cookie and store the actual cookie on the server. cookie[id] = version const key = `todo/${id}` if (pull.cookie == null || pull.cookie[id] !== version) { if (deleted) { patch.push({ op: 'del', key, }) } else { // addtions and updates are all represented as the 'put' op patch.push({ op: 'put', key, value: { id, completed, content, order: ord, }, }) } } }, ) res.json({ lastMutationID, cookie, patch }) res.end() }) } catch (e) { res.status(500).send(e.toString()) } })
Because version is a random UUID generated by Postgres's gen_random_uuid function, we can use it to efficiently calculate whether a todo item has been updated or not.
That's all for the server code, and we've come to the end of our journey. With the help of many great tools, we've successfully built a fast, collaborative todo app. More importantly, we've worked out a reasonably simple approach to building similar web apps. As the user base and feature set grow, this approach shall scale well in both performance and complexity.
Bonus - Implement Reordering with Fractional Indexing
You may notice that we use the type text for the ord column in the database schema, which seems better suited for a number type. The reason is we are using a technique called Fractional Indexing to implement reordering. Check the source code of Todo Light or try to implement it by yourself. It should be an interesting practice.
At the time of the writing, one shortcoming of Replicache is that its local transactions are not fast enough to enable heavy interactions such as drag and drop. To prevent lagging, we turned on the
useMemstore: true option to disable offline support. Hopefully, this will be fixed soon. | https://zjy.cloud/posts/collaborative-web-apps | CC-MAIN-2021-43 | refinedweb | 2,387 | 58.58 |
Type: Posts; User: Escapetomsfate
Overall, would it be a better choice to just upgrade then?
Thanks for replying.
So, what are the other ways? Are there any ways to do it visually in the express edition?
This is very much a noob question, and for that I apologize.
I want to make a dialog in visual c++ 2008. Apparently I must use the "MFC Wizard". When I click new project, this does not come up as...
Hello all.
What is the best way to port data between dlls? Would it be with namespaces, or dllimport / dlllexport?
I've searched around for examples etc, but I cannot find much info.
Thanks.
Of course! duh! Thanks a lot.
I only get an error when I use a function of the iterator (it, using Weapon class functions).
How do I make a class push back itself on construction? I'm sure that's the problem.
Thanks for the help. I already know that this is the code causing the CTD:
DLLCLBK void opcPreStep(double simt, double simdt, double mjd)
{
//loop through all weapons, and destroy them....
Thanks plasmator, really helpful :)
I really don't know which window shows what the problem is. This is the registers window;
EAX = 00079D40 EBX = 00079B30 ECX = 00079D18 EDX = 00000000 ESI =...
Oh Yeah.....
So how do I set it up?
Can you do this with DLLs? that's what I'm making.
Can I ask why it is necessary to use std::string and then convert, when you can just pass a const char?
I've been learning c++ for a month or so, and I don't really understand why some functions...
I have to use const char, because that's what the APIs require. I can't use a debugger either, because Orbiter has no built in debugging information.
EDIT: more info on my problem;
I can run an...
Thank you for the quick reply.
Where do you think the problem lies, then? Persumably the WeaponList vector? Or perhaps the Weapon class?
Here is the declaration of the weapon class;
//The...
Hello, I'm new to this forum.
I'm making a Combat addon fro the space flight simulator, orbiter (). You include compiled libraries to make the addons. I'm pretty sure my problem... | http://forums.codeguru.com/search.php?s=ef1b51d038d739a83ef3f9d55c31ed5f&searchid=2756981 | CC-MAIN-2014-15 | refinedweb | 378 | 86.6 |
.
With Office 365, it’s your data and your responsibility to protect it. NEW Veeam Backup for Microsoft Office 365 eliminates the risk of losing access to your Office 365 data.
You do not solve circular dependencies with #pragma once. You solve them by using forward declarations, like in :
class Test2; // <--- forward declaration
class Test {
Test2 *t;
};
class Test2 {
Test *t;
};
Can you show all of your header files ?
class FZALIB_API GuiObject; <- This does not work.
class FZALIB_API Window :public GuiObject
{
...
};
Here's a copy of what I've got so far. The program is mostly just stubs at this point, so if something doesn't make sense, it's probably because I've been fiddling around with this issue and not paying attention to the actual implementation just yet. But ask me about it anyway :-)
Note, I had to rename the files to .txt so EE would let me upload them. They're all .h files.
Controller.txt
Easel.txt
Exception.txt
FzaLib.txt
GuiObject.txt
Window.txt
Afaics, the problem is not on that level, but rather in the controller.h file (indirectly included in window.h). Instead of the includes there, use forward references, so :
class Exception;
class GuiObject;
class Window;
instead of :
#include "Exception.h"
#include "GuiObject.h"
#include "Window.h"
The exact errors you get would also be helpful.
Note that you don't necessarily need to include a whole header file, especially if you're only using a pointer to the class from that header file (just a forward declaration is enough, and avoids problems with multiple includes).
Free tool – Veeam Explorer for Microsoft SharePoint, enables fast, easy restores of SharePoint sites, documents, libraries and lists — all with no agents to manage and no additional licenses to buy.
As far as Window.h is concerned, the Controller class doesn't use it directly, but it needs to be #included somewhere or else it won't be visible outside the DLL unless every project that uses it #includes Window.h manually -- which will be a real pain if I get more than a few of these. I don't really care where it gets included from. Putting it in Controller.h, however, produces the fewest complaints from the compiler that GuiObject base class is undefined (6 times if I put it in GuiObject.h vs just 1 time if I put it in Controller.h).
The only gripe I get from the compiler is as follows:
error C2504: 'GuiObject' : base class undefined window.h: line 6
Although I can get that a dozen or more times depending on where I #include window.h
Yes, the controller.cpp needs those includes, but NOT the controller.h file.
>> As far as Window.h is concerned, the Controller class doesn't use it directly, but it needs to be #included somewhere or else it won't be visible outside the DLL unless every project that uses it #includes Window.h manually
You can leave the include for window.h in the controller.h file then - it shouldn't impact anything.
So the big question is . . . why does that work? Does the compiler not parse the header and implementation together?
Also, I plan to make Controller.h my precompiled header. Will having the include in the cpp file still include it in the resulting PCH file?
That's not a good idea. Generally, you want to limit the number of includes in the header file to the absolute necessary. The problem is that all these includes will be included in any file that includes that header file, and that can cause problems like you are experiencing now.
>> So the big question is . . . why does that work? Does the compiler not parse the header and implementation together?
The problem is that since you have a circular inclusion, and protected all your headers with pragma once, the headers can only be present once in each file.
For your specific case (originally), the Window.cpp would look like this :
contents of Exception.h
// <--- note here
contents of Controller.h
contents of Easel.h
contents of GuiObject.h
contents of Window.h
contents of the Window.cpp file
Now, the location where I put the comment is where these includes were supposed to be :
#include "GuiObject.h"
#include "Window.h"
but since they had already been included, and the pragma once allows only one inclusion, these two includes are ignored.
So, you end up with the Controller class being defined without having the GuiObject and Window classes defined yet ... That causes the error you saw.
When putting only the absolutely necessary includes in each header file (and using forward references where possible/needed), you avoid this problem. | https://www.experts-exchange.com/questions/23516303/C-Base-class-undefined-Possible-include-hell.html | CC-MAIN-2018-05 | refinedweb | 784 | 67.55 |
You can subscribe to this list here.
Showing
15
results of 15
Rob G. Healey wrote:
> Greetings:
>
>?
>
I'll bite into this, .. with the priviso I don't know what WebCal is, so
I might not know what I'm talking about.
Any tool that automatically decides the ladies name based on whether
married or not is not going to fit everyone's needs.
Holes in logic
1) Some cultures, once legally married the lady still continues to use
her madien name (i.e My Wife, and all her married sisters, mother etc.)
2) Sometimes the lady prefers to hyphenate name rather than accept males
surname (more and more common now)
3) In my family tree, I have a couple with three children, the couple
were unmarried, so children listed under mother's name on birth cert
(Scottish law) however subsequently all children assumed the fathers
name in census results and marriage/death certs, the wife also was
listed under 'husbands' surname and marked as married (and wife) in
census and death cert. However we still have not found marriage cert. -
So is she married or not?
Steve
If I recall correct Benny and Brian agreed to add support for setting
a variable in each report which could be turned on or off.
I can see that all menu items under report now have ellipses, which is
of course incorrect.
Have I understood the issue and has such a variable been set which I
can turn on and off? Is so what's it called and is it TRUE/FALSE or
1/0 or something else.
Duncan
Zsolt Foldvari wrote:
> Since r10410 in trunk we've got a new version of the database (v14).
> This means that a db created or edited with the trunk version cannot be
> opened with the gramps30 branch version anymore.
>
>
Just curious:
I see that src/gen/db/dbdir.py has new code gramps_upgrade_14() to
accomplish the migration from 13-to-14, I presume.
I also see code in src/GrampsDbUtils/_GrampsBSDDB.py for version upgrade
steps from <10 to 13.
Aside: I'm guess upgrades so far haven't been so numerous or
complex that they couldn't all be maintained in current code.
At some point, of course, that may change -- I don't know how
far away that might be. I favor *not* implementing solutions
to problems before discovering the requirements. ;-)
Is there any actual policy on implementing upgrade code?
Has there been any past experience that would be good to document?
Is there any developer (or user) documentation dealing with version
upgrades (generally) on the wiki?
Also, what, exactly, justifies a database version change -- certainly
structure changes, but what about a change in valid data values? I'm
thinking about the small issue of retiring the EventEype CAUSE_DEATH;
would something like that justify a db version change?
It seems to me that some guidelines on the wiki would be useful.
Regards,
..jim
Doug,
On Sat, 29 Mar 2008 12:24:15 -0400 (EDT)
"Douglas S. Blank" <dblank@...> wrote:
> So, my 3.0 database that I just opened with trunk is now 3.1-only?
> Should we give a warning/option-to-cancel when auto-upgrading the
> database?
As far as I understand 3.0 is not even supposed to open 3.1 db,
but should give an error message. I'll check this.
> I have a database change needed to handle first-day-of-year issues for
> dates. Should that be made now, too?
Yes, it's better to group all necessary db changes.
Zsolt
On Sat, March 29, 2008 11:47 am, Zsolt Foldvari wrote:
> Since r10410 in trunk we've got a new version of the database (v14).
> This means that a db created or edited with the trunk version cannot be
> opened with the gramps30 branch version anymore.
So, my 3.0 database that I just opened with trunk is now 3.1-only? Should
we give a warning/option-to-cancel when auto-upgrading the database?
I have a database change needed to handle first-day-of-year issues for
dates. Should that be made now, too?
-Doug
> Cheers,
> Zsolt
Since r10410 in trunk we've got a new version of the database (v14).
This means that a db created or edited with the trunk version cannot be
opened with the gramps30 branch version anymore.
Cheers,
Zsolt
On Thu, March 27, 2008 4:46 am, Rob G. Healey wrote:
> Devs:
>
[snipped css question]
> Is it possible to remove NarrativeWeb.py from its default location
> and place it within my own plugins directory? Example....
> mv gramps30/src/plugins/NarrativeWeb.py ~/.gramps/plugins?
> I have made a few minor changes to NarrativeWeb.py that I like to
> keep if possible? I know that it is politically correct to use the word
> Partners, but I would much rather use Spouse(s) for example? I know
> that a global change can't be made but, I would like it for myself.....
> Please no fights now......
Rob, great that you are diving under the hood! This is one of the great
powers of Python, GRAMPS's plugin system, and open source: the code is
fairly easy to understand, nicely modular, and available to change.
I would move the version with your changes to your personal
.gramps/plugins subdirectory. You should rename the name of the report,
both the "name" and "translated_name" so that if the other one appears in
the system gramps/plugins folder, you'll be able to tell them apart.
Also, it may be worth a feature request to add the spouse-term to the list
of textual replacements so that you could see your preferred term
throughout gramps.
I'll leave the css questions to others.
-Doug
> I have also modified one of Jason's NarrativeWeb stylesheets for my
> own color choices and use. Once it is all complete, I would like to
> share it with everyone if possible? I have also added my own to the
> Makefile.am and to NarrativeWeb to be able to use my own stylesheet! I
> am learning slowly, but surely....
> Future for NarrativeWeb:
> 1) In the Addresses section on the individual's page, can it be made
> to look like the one in the contact page? How it is correctly displayed
> like on an envelope?
> 2) Could the ancestors section be made to look like the pedigree
> display is on Gramps? Males are blue, and females are pink or whichever
> color as chosen?
> I have also made some changes to the Tips.xml.in file as well for
> word or Capitalizations or remove words. Made it more grammatically
> correct.... Here it is for anyone to look at and see if it is something
> that can be patched in....?
>
>
> --
> Sincerely Yours,
> Rob G. Healey
On Thu, March 27, 2008 3:16 am, Rob G. Healey wrote:
> Greetings:
>
>?
This will be addressed in the future with the name tool I mentioned.
> Doug: In the relatives Gramplet, I think that [half, adopted,
> foster] siblings should be listed as well as full siblings?
Benny actually wrote that one. Please make a feature requested with a
mock-up of what you'd like to see.
-Doug
On Fri, March 28, 2008 2:25 am, Rob G. Healey wrote:
> Greetings:
>
> I have been looking in the code base to learn and understand this
> software better! In WebCal.py, there are several questions that I have....
[snip]
> ------------------------------------------------------------------------
> What if the user has a custom holidays file in ~/.gramps/plugins and
> the original file exists in gramps30/src/plugins, which one gets used?
There is no overriding priority in this section of code?
> Would the section in bold work and remove the bottom two lines or
> not?
> ------------------------------------------------------------------------
It will use both, adding both together. I think what you are trying to do
is have more control over which files are used. I would probably do it
this way:
def get_holidays(self, year, country = "United States"):
""" Looks in multiple places for holidays.xml files """
locations = Config.get(Config.HOLIDAY_LOCATIONS)
holiday_file = 'holidays.xml'
for dir in locations:
holiday_full_path = os.path.join(dir, holiday_file)
if os.path.exists(holiday_full_path):
self.process_holiday_file(holiday_full_path, year, country)
and then you could control exactly where and what gets add via the Config
system. (This would need other things added---make a feature request for
this if you want).
To get what you want today, you could just move the holidays.xml to your
own plugins dir.
> ---------------------------------------------------------------------
> The code is bold is what is in need of work?
> There needs to be some code here to get their last married name if
> they had more than one marriage?
> What if a woman is living with someone but not married and the
> marriage tag is unknown or unmarried, then she should use her maiden
name until their marital status changes?
> What if a woman has a child with a boyfriend and the child has the
> father's surname, but the woman does not?
This is an older version of the original code. See Calendar.py for better
handling. But this is going to be replaced with a new tool that will allow
you to handle names better. Your questions will be handled by the tool.
Someone involved with the web side of things will have to answer your css
questions.
-Doug
>
> ---------------------------------------------------------------------------------------------------
> As far as stylesheets go for WebCal, why have to write out the
> stylesheet everytime someone automatically accepts the default settings
on the options page? Why not transfer the default pre-saved stylesheet
to the target directory instead? If the user clicks ok button without
making any changes to the stylesheet editor, then transfer the attached
file to the directory? It is the default stylesheet that WebCal had to
write out to a file. It would allow a lot of code to be removed out of
WebCal? I would like to create several more stylesheets with different
color combinations for WebCal if you all agree? Of course, someone
would have to change the WebCal code to allow for different stylesheets
as Narrated Web already does? This would also bring more conformity to
the Web Page plugins.....
> From looking at the WebCal code, it looks like some of the beginning
> ideas and codse is already in place to do this already. It would only
require someone to finish it! I do not know enough yet to be able to
complete it myself....
>
> Sorry for Rambling on and on....
>
> Sincerely Yours,
> Rob G. Healey
>
>
>
> -------------------------------------------------------------------------
> It's the best place to buy or sell services for
> just about anything Open Source.
>;164216239;13503038;w?
Gramps-devel mailing list
> Gramps-devel@...
>
>
--
Douglas S. Blank
Associate Professor, Bryn Mawr College
Office: 610 526 6501 | http://sourceforge.net/p/gramps/mailman/gramps-devel/?viewmonth=200803&viewday=29 | CC-MAIN-2015-11 | refinedweb | 1,781 | 75 |
[
]
Suresh Srinivas updated HDFS-3204:
----------------------------------
Attachment: HDFS-3204.txt
Updated patch with comments addressed. Additional changes:
# Removed printing exception trace in BackupImage.java. While debugging it threw me off, as
I interpreted as error.
# Failure is due to BackupNode checking the rpc server address. I do not think it is useful
for couple of reasons. There is already an extensive test for checking clusterID, namespaceID
etc. Also the removed check was only made in journal() method call and not in startLogSegment()
# Given the changes, journal method calls are verified for namespaceID, cluster and version
match. Dropping cTime matching.
> Minor modification to JournalProtocol.proto to make it generic
> --------------------------------------------------------------
>
> Key: HDFS-3204
> URL:
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Affects Versions: 0.24.0
> Reporter: Suresh Srinivas
> Attachments: HDFS-3204.txt, HDFS-3204.txt
>
>
> JournalProtocol.proto uses NamenodeRegistration in methods such as journal() for identifying
the source. I want to make it generic so that the method can be called with journal information
to identify the journal. I plan to use the protocol also for sync purposes, where the source
of the journal can be some thing other than namenode.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201204.mbox/%3C912895308.19524.1333658303135.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2017-26 | refinedweb | 216 | 50.94 |
The purpose of header files is to link libraries to the program you are making... There is code in those libraries that you need to do certian things and to use certian methods/functions. e.g. You would not be able to use the
fstream object without using the fstream library (
#include <fstream>)
In C/C++ you must define everything befor use it. For example, if you need a variable to store an integer --say a--, first you must define it like:
int a;
and later you can use it like
a=3;.
The same happens, for example, with functions.
Suppose that you have declared a function on one file, say:
int addOne(int a) {return a+1;}.
Later, if you want to use this function in some other file, first you must declare it:
int addOne(int a);.
Then you can use this function like:
b = addOne(b);
If you want to use this function in other file, you must do the same, declare and use:
int addOne(int a); c = addOne(a);
It is impossible to remember all the function's declarations so you can define them before use; also it is very tedious to write all the declarations. This is where the headers file help you.
For this function you can write a file (named addOne.h for example) in where you write the declaration of your function:
int addOne(int a);
When someone needs to use your function, he/she only needs to include this file into his/her file:
#include <addOne.h>
and use the function freely.
In conclusion, the purpose of the header files is to save the declarations of variables, functions, structures, macros, etc; so you can use them later into your program just including these heather files. This makes very easy to declare everything before using it in a systematic way.
The header files are necessary to use libraries but it isn’t its main purpose. As a matter of fact, you can use a library without including any file if you can declare yourself (knowing how) the functions that you need from that library. | https://www.daniweb.com/programming/software-development/threads/487971/header-files | CC-MAIN-2016-44 | refinedweb | 353 | 67.08 |
ARDUINO - SPFD5408 TFT LCD 2.4 TEMP and HUMIDiTY Monitor. Fahrenheit & Celsius!
Introduction:, Libraries & Parts list.
You need 3 Libraries. They are included in the downloads.
- SPFD5408_Adafruit_GFX.h
- SPFD5408_Adafruit_TFTLCD.h
- SPFD5408_TouchScreen.h
Step 1: Schematic, Code, Libraries & Parts List.
Just wire the DHT11 Like the Schematic shows, and plug on the TFT LCD on top of the Arduino Mega.
Make sure you download and install the libraries, and Upload the code and you have yourself a nice cheap
to build Temp & Humidity monitor.
Parts list:
- Arduino Mega
- 2.4 TFT LCD SPFD5408 (Aliexpress got them very cheap)
- DHT11 Temperature sensor. KY-015
- Jumper wires
Good luck! Follow me for more!
MCUFRIEND SETUP
For MCUFRIEND TFT LCD Replace the first 8 lines with
#include <Adafruit_GFX.h>// Core graphics library
#if defined(_GFXFONT_H_)
#include <Fonts/FreeSans9pt7b.h>
#define ADJ_BASELINE 11 //new fonts setCursor to bottom of letter
#else
#define ADJ_BASELINE 0 //legacy setCursor to top of letter
#endif
#include <MCUFRIEND_kbv.h>
MCUFRIEND_kbv tft;
And replace
tft.begin(0x7783); //MCUFRIEND Chipset ID on Librarytft.begin(0x7783);
What is necessary to change in the sketch?
Good day, please help me: I have a 3" TFT LCD, the same shield but with a different controller inside (ILI9327). How do I change the sketch?
I got it to load on the Mega 2560 but the screen stays black. I changed my pins for my MCUFRIEND shield to YP A3, XM A2, YM 9, XP 8, and even went into the TouchScreen .cpp file to change the returned TSPoint to x 1100 and y 1043, but it doesn't work. Help please!
What a fun project. I almost gave up on this display (due to frustration) but this project motivated me to bring it to life. I did have to modify the code for some reason to fit my display. Thank you for an awesome Instructable!
Yes, it works.
Can I use an Arduino Uno?
Yes, but then you need to solder a pin to the bottom of one of pins A0 to A5, depending on which one you want to use. Change it at "#define DHTPIN A8": change A8 to A0 or whatever analog pin you are going to use for the DHT11. If you have no holes to solder a pin to, because you have a Chinese copy for example, just remove pin A0 (for example) from the TFT LCD, bend the pin that you just cut off so it sticks out, and solder a wire to it. You also need to solder a wire to GND and VCC for the DHT if you want to use an Uno. Good luck!
Be careful! Don't cut out the pins for 5V and GND on the TFT LCD; only cut the analog ones, or pins that are not used by the TFT LCD. (Note: this is just a simple solution and I don't recommend it; that's why I use a Mega. You can also add some more things like an RTC module for time and date, which is what I want to do.)
yeeaaaah it works without solder, THANK YOU Vandenbrande !!
No thanks needed, by the way! It's a fun, cheap little thing to build.
It seems that the value of the heat index is not displayed right. But it works; just reset it and see if it displays correctly or not.
What if I don't use solder, and use a breadboard and jumper wires instead? Of course I will use a lot of jumper wires. Thanks.
After a week without any internet access point, surfing the snow of the Alps, on Monday my fingers were eager to touch the keyboard again.
Why not finish my prototype of runtime properties that uses the Da Vinci VM (I really love that name)?
One ugly thing about draft v3 of the property spec is that a property object is implemented by a supplementary class generated by the compiler. This means the compiler must create one class per property and so bloat the application with a lot of stupid code.
Ok, let's try to do better. No class at compile time means reflection or runtime generation. In fact, it's not a real choice, because since 1.4.1 reflection generates bytecode when a method is called often.
Da Vinci VM
The Da Vinci VM is the prototype implementation of JSR 292, a modified HotSpot VM patched with new entry points that help to implement dynamic languages on top of the Java platform.
The first feature available, anonymous classes (VM anonymous classes, not the compiler kind), allows you to create classes with some interesting features:
How property objects work
If you are not familiar with properties, you can read an old blog entry.
A property object stands for a property and permits access to the value of that property for any instance of a class that declares it.
For example, the following code:
public class Bean {

    private int x;

    public int getX() {
        return x;
    }

    public void setX(int x) {
        this.x = x;
    }

    private String text;

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public static void main(String[] args) {
        Bean bean = new Bean();

        Property propertyX = Property.create(Bean.class, "x");
        bean.setX(0xCAFEBABE);
        System.out.printf(propertyX + " %x\n", propertyX.getValue(bean));

        Property propertyText = Property.create(Bean.class, "text");
        propertyText.setValue(bean, "hello property");
        System.out.println(bean.getText());
    }
}
prints:
int Bean.x cafebabe
hello property
How to test it?
If your OS is Windows, Solaris or MacOS, you have to compile the VM by yourself, sorry.
On Linux, you can download the binary of JDK 7 b24 if you don't already have it, then download the following zip, davinci.zip, and unzip it into your JAVA_HOME/jre/lib/i386/.
It adds a new server JVM lib named 'davinci' that can be launched using
java -davinci ...
Now download the property runtime support, property-runtime.jar; because it adds new classes and overrides some existing classes of the JDK, you must prepend the jar to the bootclasspath.
To launch the example above:
java -davinci -Xbootclasspath/p:property.jar Bean
The sources of the property runtime support are available on the kijaro web site here:.
The next step is to finish writing a new version of the property spec and provide a modified compiler according to that spec. I think I've solved most of the corner cases, so it's just a matter of time.
To be continued...
Rémi
Posted by: ahmetaa on March 23, 2008 at 06:55 PM
Posted by: aberrant on March 24, 2008 at 08:34 AM
good questions:
performance: Da Vinci properties' get/setValue() are at least four times faster than using reflection. You can test it yourself using:
part of Java 7: hum, I have to finish the prototype first :)
use VM anonymous classes to implement closures at runtime: I think it's possible, but you will need invokedynamic.
Posted by: forax on March 25, 2008 at 06:44 AM | http://weblogs.java.net/blog/forax/archive/2008/03/da_vinci_runtim.html | crawl-001 | refinedweb | 581 | 55.64 |
11 May 2011 18:34 [Source: ICIS news]
LONDON (ICIS)--
Eastman Chemical plans to expand its Tritan capacity at its
“The first 30,000 tonnes of capacity we built for Tritan we started it out January 2010 and it was sold out by the end of 2010,” said Costa.
“We are rushing to have ready another 30,000 tonnes which will be ready in the first quarter of 2012 - a total of 60,000 tonnes,” he added.
The company is also planning to expand its capacity for cellulose triacetate by 75% at the site by implementing new technology.
Eastman will expand its cyclohexane dimethanol (CHDM) capacity by 25% at the
At Eastman's first quarter earnings conference, company CEO, Jim Rogers said that Eastman expected to make two or three small-to-medium sized acquisitions this year.
“If you look at where we are, we have four businesses doing well, producing a lot of cash, we have the cash generated from the polyethylene terephthalate (PET) business and we have a strong balance sheet at the moment, the cash that we have available and the debt – so we are in a good position. We are pushing hard on a couple of our organic expansions,” Costa said.
Meanwhile, Costa said that Eastman was benefitting from the spread between propane and propylene as propylene prices continue to rise.
“It’s supply driven not just demand driven. I think for the second half of the year we would expect some moderation to propylene prices from where they are in April/May but not dropping to where they have been on a historical basis," he added.
“We’ve seen a dramatic development in the
For more on East | http://www.icis.com/Articles/2011/05/11/9459033/eastman-to-produce-60000-ty-of-tritan-copolyester-for-2012.html | CC-MAIN-2014-15 | refinedweb | 285 | 54.76 |
How to collect licenses of dependencies
With the
imports feature it is possible to collect the License files from all packages in the dependency graph. Please note that the
licenses are artifacts that must exist in the binary packages to be collected, as different binary packages might have different licenses.
E.g., A package creator might provide a different license for static or shared linkage with different “License” files if they want to.
Also, we will assume the convention that the package authors will provide a “License” (case not important) file at the root of their packages.
In conanfile.txt we would use the following syntax:
[imports]
., license* -> ./licenses @ folder=True, ignore_case=True
And in conanfile.py we will use the
imports() method:
def imports(self):
    self.copy("license*", dst="licenses", folder=True, ignore_case=True)
In both cases, after conan install, it will store all the found License files inside the local licenses folder, which will contain one subfolder per dependency with the license file inside. | https://docs.conan.io/en/1.23/howtos/collect_licenses.html | CC-MAIN-2022-33 | refinedweb | 166 | 51.78 |
EC_GROUP_check_named_curve, EC_GROUP_check_discriminant, EC_GROUP_cmp, EC_GROUP_get_basis_type, EC_GROUP_get_trinomial_basis, EC_GROUP_get_pentanomial_basis, EC_GROUP_get0_field, EC_GROUP_get_field_type - Functions for manipulating EC_GROUP objects
SYNOPSIS
 #include <openssl/ec.h>

 int EC_GROUP_copy(EC_GROUP *dst, const EC_GROUP *src);
 EC_GROUP *EC_GROUP_dup(const EC_GROUP *src);

 const BIGNUM *EC_GROUP_get0_field(const EC_GROUP *group);

 unsigned char *EC_GROUP_get0_seed(const EC_GROUP *group);
 size_t EC_GROUP_get_seed_len(const EC_GROUP *group);
 size_t EC_GROUP_set_seed(EC_GROUP *group, const unsigned char *, size_t len);

 int EC_GROUP_get_degree(const EC_GROUP *group);

 int EC_GROUP_check(const EC_GROUP *group, BN_CTX *ctx);
 int EC_GROUP_check_named_curve(const EC_GROUP *group, int nist_only, BN_CTX *ctx);

 int EC_GROUP_get_trinomial_basis(const EC_GROUP *group, unsigned int *k);
 int EC_GROUP_get_pentanomial_basis(const EC_GROUP *group, unsigned int *k1,
                                    unsigned int *k2, unsigned int *k3);

 int EC_GROUP_get_field_type(const EC_GROUP *group);
The following function has been deprecated since OpenSSL 3.0, and can be hidden entirely by defining OPENSSL_API_COMPAT with a suitable version value, see openssl_user_macros(7):
 const EC_METHOD *EC_GROUP_method_of(const EC_GROUP *group);

This function was deprecated in OpenSSL 3.0, since EC_METHOD is no longer a public concept.
EC_GROUP_get_order() retrieves the order of group and copies its value into order. It fails in case group is not fully initialized (i.e., its order is not set or set to zero).
EC_GROUP_get_cofactor() retrieves the cofactor of group and copies its value into cofactor. It fails in case group is not fully initialized or if the cofactor is not set (or set to zero).
The functions EC_GROUP_set_curve_name() and EC_GROUP_get_curve_name() set and get the NID for the curve respectively (see EC_GROUP_new(3)). If a curve does not have a NID associated with it, then EC_GROUP_get_curve_name will return NID_undef.
EC_GROUP_get_field_type() identifies what type of field the EC_GROUP structure supports, which will be either F2^m or Fp.() behaves in the following way: For the OpenSSL default provider it performs a number of checks on a curve to verify that it is valid. Checks performed include verifying that the discriminant is non zero; that a generator has been defined; that the generator is on the curve and has the correct order. For the OpenSSL FIPS provider it uses EC_GROUP_check_named_curve() to conform to SP800-56Ar3.
The function EC_GROUP_check_named_curve() determines if the group's domain parameters match one of the built-in curves supported by the library. The curve name is returned as a NID if it matches. If the group's domain parameters have been modified then no match will be found. If the curve name of the given group is NID_undef (e.g. it has been created by using explicit parameters with no curve name), then this method can be used to lookup the name of the curve that matches the group domain parameters. The built-in curves contain aliases, so that multiple NID's can map to the same domain parameters. For such curves it is unspecified which of the aliases will be returned if the curve name of the given group is NID_undef. If nist_only is 1 it will only look for NIST approved curves, otherwise it searches all built-in curves. This function may be passed a BN_CTX object in the ctx parameter. The ctx parameter may be NULL.() returns 0 if the order is not set (or set to zero) for group or if copying into order fails, 1 otherwise.
EC_GROUP_get_cofactor() returns 0 if the cofactor is not set (or is set to zero) for group or if copying into cofactor fails, 1 otherwise.
EC_GROUP_get_curve_name() returns the curve name (NID) for group or will return NID_undef if no curve name is associated.
EC_GROUP_get_asn1_flag() returns the ASN1 flag for the specified group .
EC_GROUP_get_point_conversion_form() returns the point_conversion_form for group.
EC_GROUP_get_degree() returns the degree for group or 0 if the operation is not supported by the underlying group implementation.
EC_GROUP_get_field_type() returns either NID_X9_62_prime_field for prime curves or NID_X9_62_characteristic_two_field for binary curves; these values are defined in the <openssl/obj_mac.h> header file.
EC_GROUP_check_named_curve() returns the nid of the matching named curve, otherwise it returns 0 for no match, or -1 on error.
EC_GROUP_get0_order() returns an internal pointer to the group order. EC_GROUP_order_bits() returns the number of bits in the group order. EC_GROUP_get0_cofactor() returns an internal pointer to the group cofactor. EC_GROUP_get0_field() returns an internal pointer to the group field. For curves over GF(p), this is the modulus; for curves over GF(2^m), this is the irreducible polynomial defining the field.
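EXAMPLES
The following minimal example is not part of the original manual page; it additionally assumes EC_GROUP_new_by_curve_name() and EC_GROUP_free() from EC_GROUP_new(3) and the NID_X9_62_prime256v1 constant from <openssl/obj_mac.h>:
 #include <stdio.h>
 #include <openssl/ec.h>
 #include <openssl/obj_mac.h>

 int main(void)
 {
     EC_GROUP *group = EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1);

     if (group == NULL)
         return 1;

     printf("curve nid:  %d\n", EC_GROUP_get_curve_name(group));
     printf("degree:     %d\n", EC_GROUP_get_degree(group));
     printf("order bits: %d\n", EC_GROUP_order_bits(group));

     /* 0: search all built-in curves, NULL: no BN_CTX supplied */
     if (EC_GROUP_check_named_curve(group, 0, NULL) == NID_X9_62_prime256v1)
         printf("parameters match the built-in P-256 curve\n");

     EC_GROUP_free(group);
     return 0;
 }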
HISTORY
EC_GROUP_method_of() was deprecated in OpenSSL 3.0.
EC_GROUP_check_named_curve() and EC_GROUP_get_field_type() were added in OpenSSL 3.0.
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/manmaster/man3/EC_GROUP_get_curve_name.html | CC-MAIN-2022-40 | refinedweb | 753 | 55.64 |
Making browser games more secure with Elm, Part Two
How we built replay-as-authentication with Elm
In the preceding post, we shared how we got into game development and explained how Elm allowed us to build a reliable record/replay system to validate scores. In this post, we will go into more detail, together with a working example. Let's first go through the example, so the "what" part of this blog post becomes more clear. Then we will go into the "why" and "how".
Here, you can play the game. Click the ‘Play’ button and play the game to the end without changing the browser tab.
During your gameplay, the framework records all keyboard inputs, mouse movements, and touch positions. When the game is finished, this long sequence of recorded inputs is compressed and sent to our database. The final, ‘claimed’ score is also sent to our database. The database responds with a unique ID that can be used to query the compressed recording and the claimed score.
Once the game is completed, the 'Validate' button links the player to the Record Player page, using the unique ID as a query string. Here is the link for an example recording (in case you don't want to wait until the game is over).
The Record Player page queries the database using the unique ID and runs a simulation of the game, using the same sequence of inputs recorded during play. Once completed, it checks whether the score reached matches the initially claimed score. This is how we solved the score validation problem.
What we mean by reliability
There are many aspects to reliability in game development. We believe our record/replay system is reliable in that it:
- prevents cheating: creating a fake recording is very difficult.
- is safe for developers: those who build the game can’t break the system.
Protection against cheating is a prerequisite for every score validation system — we won’t delve into that here. (Max talks about it in the first blog post.) In this post, we want to clarify the second aspect of reliability and explain how Elm makes it possible.
Multiple games maintained by multiple developers
Over the last two years, we have built around 20 games with Elm. Some of them have been built by junior developers who didn't have any Elm experience; they learned the language by creating a game.
Having multiple developers with varying levels of experience means it’s really important that the system is unbreakable. In fact, the developer who makes the game should not worry or even know about the record/replay system. It should be hidden completely from game developers and just work out-of-the-box.
These expectations may seem very high, but as we’ll see, they are easily achievable in Elm.
Restrictions you can’t impose in JavaScript/TypeScript
Imagine that we were building our games and our game framework using Javascript instead of Elm. Then, for example, a game developer could easily
- listen to keyboard or mouse clicks,
- generate a random number, or
- query computer time
in a different way than the game framework provides and use them such that they affect the gameplay and the score. In this case, our Javascript framework would not be able to keep track of this input and the record/replay system would break.
The solution in Elm: Controlling the
main
To understand how this problem can be solved in Elm, let’s look at Elm Playground. Our game framework, in essence, is very similar to it. In short, Elm Playground does
- renders your shapes to SVG, and
- makes it easy to access things like mouse input or screen size by feeding Computer as an argument to your view and update functions (see Playground.game).
But what is more important is that Elm Playground imposes many restrictions on its user. It achieves this by controlling the
main function. The following are the first two lines from an online example.
import Playground exposing (..)

main = game view update (0,0)
Even if this program consists of thousands and thousands of lines, by just checking the above lines, meaning that by only observing that the
main is defined using the
Playground.game function, we can be sure, for example, that the only way this program accessed the mouse position was by looking into the
Computer. Our framework records only the actions that update the
Computer. And Elm takes care of the rest: it makes sure that the game developer who uses the framework has access to things like time or user input only via
Computer and not in any other way! This degree of guarantee is possible thanks to The Elm Architecture.
Let us explain this a little further for the reader who is not familiar with Elm. Every Elm application must define a
main value of type
Program. The only way to create a
Program is using
Platform.worker or one of the functions in the Browser package. These let you define the run-time behavior of your program by means of pure functions. Thanks to the limits of Elm/JS Interop, Elm has a very well-defined boundary between
- ⛈ the outside world where time exists and things change and
- 🌈 the Elm program where time does not exist and everything is a constant
We say, Elm has managed effects.
Our recording system relies on this well-defined boundary. Therefore, it is unbreakable by the game developer. Without Elm's managed effects we wouldn't attempt building this type of record/replay system.
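To make the idea concrete, here is a tiny, self-contained sketch of the replay principle. This is an illustrative toy, not our actual framework code: Tick, Game, the update rule and the scoring are all invented for the example.
-- fold the recorded per-tick inputs through the same pure update
-- function that ran during play, then compare the resulting score

type alias Tick =
    { deltaTimeInMillis : Int
    , keysDown : List String
    }

type alias Game =
    { score : Int }

update : Tick -> Game -> Game
update tick game =
    -- stand-in for the real game logic; it must stay a pure function
    if List.member "Space" tick.keysDown then
        { game | score = game.score + 1 }
    else
        game

replayScore : List Tick -> Int
replayScore ticks =
    List.foldl update { score = 0 } ticks |> .score

isValid : Int -> List Tick -> Bool
isValid claimedScore ticks =
    replayScore ticks == claimedScore
Because update is pure and its only inputs are the recorded Tick values, running replayScore on the Record Player page must reproduce the score that was claimed during play.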
Recordings as small as 10kb
Because the game recordings are stored on the server, we wanted them to be as small as possible. A recording is a long list of
TickInputs:
type alias TickInput =
{ playerInputs : List PlayerInput
, deltaTimeInMillis : Int
}

type PlayerInput
= KeyDown String
| KeyUp String
--
| MouseDown Point
| MouseMove Point
| MouseUp
--
| TouchStart (List TouchEvent)
| TouchMove (List TouchEvent)
| TouchEnd (List TouchEvent)
| TouchCancel (List TouchEvent)
--
| Wheel WheelEvent
In our games, we don't use fixed time steps. Therefore, we had to record the delta time for each time step. This means that for every second of gameplay, we record approximately 60 TickInputs. That makes our recordings very long. To compress the recordings, we've used the packages MartinSStewart/elm-serialize, danfishgold/base64-bytes, and folkertdev/elm-flate in combination. In addition, we have bounded deltaTimeInMillis by 255 from above and encoded it to a single byte using Serialize.byte. In the end, we were able to compress a 2-minute game recording to around 10 kilobytes, which was small enough for our purposes.
Conclusion
Elm has been criticized for not having a traditional Foreign Function Interface with JavaScript and not allowing ports in packages. What we have shown here is that, thanks to those limitations, Elm allowed us to fulfill requirements that other languages would not have been able to.
#include <dataformitem.h>
An abstraction of an <item> element in a XEP-0004 Data Form of type result.
There are some constraints regarding usage of this element you should be aware of. Check XEP-0004 section 3.4. This class does not enforce correct usage at this point.
Definition at line 32 of file dataformitem.h.
Creates an empty 'item' element you can add fields to.
Definition at line 21 of file dataformitem.cpp.
Creates an 'item' element and fills it with the 'field' elements contained in the given Tag. The Tag's root element must be an 'item' element. Its child elements should be 'field' elements.
Definition at line 26 of file dataformitem.cpp.
Virtual destructor.
Definition at line 41 of file dataformitem.cpp.
Use this function to create a Tag representation of the form field. This is usually called by DataForm.
Reimplemented from DataFormField.
Definition at line 45 of file dataformitem.cpp. | https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1DataFormItem.html | CC-MAIN-2019-18 | refinedweb | 154 | 70.8 |
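A minimal usage sketch, not part of the original documentation; it only relies on the members listed above, and the gloox/ include prefix and the ownership of the returned Tag are assumptions:
#include <gloox/dataformitem.h>
#include <gloox/tag.h>

using namespace gloox;

void handleItem( Tag* itemTag )
{
    // build an 'item' element from a received Tag
    DataFormItem item( itemTag );

    // serialize it back to a Tag, e.g. when assembling the parent DataForm
    Tag* t = item.tag();

    // ... hand t over to the parent form, or free it if it stays unused
    delete t;
}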
Introduction
One of the most crucial data structures to learn while preparing for interviews is the linked list. In a coding interview, having a thorough understanding of Linked Lists might be a major benefit.
A collection of items stored in contiguous memory spaces is referred to as an array. The array's limitation, on the other hand, is that its size is predetermined and fixed. This problem can be solved in a variety of ways. The differences between two classes that are used to tackle this problem, ArrayList and LinkedList, are explored in this article.
ArrayList
- The collection framework includes ArrayList.
- It's part of the java.util package and allows us to create dynamic arrays in Java.
- Though it may be slower than normal arrays, it might be useful in programs that require a lot of array manipulation.
- We can add and remove objects in real-time. It adjusts its size on its own.
The implementation of the ArrayList is demonstrated in the following example.
Code Implementation
import java.io.*;
import java.util.*;

public class PrepBytes {
    public static void main(String[] args) {
        ArrayList<Integer> arrli = new ArrayList<Integer>();
        for (int i = 1; i <= 5; i += 2)
            arrli.add(i);
        System.out.println(arrli);
        arrli.remove(1);
        System.out.println(arrli);
    }
}
Output
[1, 3, 5]
[1, 5]
LinkedList
- A LinkedList's items are not stored at a contiguous memory location, and each element is a distinct object with a data part and an address part.
- Connecting the elements is done through pointers and addresses.
- A node is a term used to describe any element.
- Because of the dynamic nature and simplicity of insertions and deletions in LinkedList, they are preferred over arrays.
The implementation of the LinkedList is shown in the example below.
Code Implementation
import java.util.*;

public class PrepBytes {
    public static void main(String args[]) {
        LinkedList<String> object = new LinkedList<String>();
        object.add("Coding");
        object.add("is");
        object.addLast("Fun");
        System.out.println(object);
        object.remove("is");
        System.out.println("Linked list after " + "deletion: " + object);
    }
}
Output
[Coding, is, Fun]
Linked list after deletion: [Coding, Fun]
Now that you've got a good idea of both, let's look at the distinctions between ArrayList and LinkedList in Java.
Time Complexity
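The typical costs of the common operations are:
- get(index): O(1) for ArrayList; O(n) for LinkedList, because the list has to be traversed node by node.
- add at the end: amortized O(1) for ArrayList (with an occasional resize); O(1) for LinkedList.
- add/remove at a given position: O(n) for ArrayList, since later elements must be shifted; O(1) for LinkedList once the node is reached, but reaching it takes O(n).
- memory: ArrayList stores only the elements (plus spare capacity); LinkedList stores an extra node object with pointers for every element.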
In this post, we've attempted to illustrate the key distinctions between an ArrayList and a LinkedList. When it comes to coding interviews, the Java Collection Framework is crucial. If you want to solve more questions on Linked Lists, curated by our expert mentors at PrepBytes, you can follow this link: Linked List.
In this Quick Introduction to the Flash Professional Components, we're going to look at the UILoader and UIScrollbar.
Brief Overview
Take a look at the preview. In the SWF, on the left side there is a UILoader component which is invisible upon first glance (because there is nothing in it); we will be loading an image into this.
Under the UILoader there is a label with the text "Image Not Loaded"; upon successfully loading the image we'll change this label's text to read "Image Loaded".
The button below the label is used to start the loading of the image. On the right-hand side there is a text field and UIScrollbar which are initially invisible (the text field is invisible because there is nothing in it); upon pressing the button with the label "Load Text" we load the text from a sample text file and set the UIScrollbar to be visible.
Step 1: Setting Up the Document
Open a new Flash Document and set the following properties:
- Document Size: 550x260px
- Background Color: #FFFFFF
Step 2: Add Components to the Stage
Open the components window by going to Menu > Window > Components or pressing CTRL+F7.
Drag a UILoader, a Label, two Buttons, and a UIScrollbar to the stage.
In the Properties panel give the UILoader the instance name "imageLoader". If the Properties panel is not showing go to Menu > Window > Properties or press CTRL+F3.
Set the UILoader's X position to 37 and its Y to 20
Give the label the instance name "loadedLabel". Set the label's X to 37 and its Y to 182.
Give the first button the instance name "loadImageButton" and set the label's X to 37, its Y to 213.
In the Tools panel select the Text tool and drag out a TextField on the stage. If the Tools panel is not showing go to Menu > Window > Tools or press CTRL+F2.
Give the TextField the instance name "loremText". Set the TextField's X to 272 and its Y to 15, then set its width to 243, its height to 101.
Give the UIScrollbar the instance name "textScroller". Set the UIScrollbar's X to 525.00 and its Y to 15
Give the second Button the instance name "loadTextButton" and set its X to 272, its Y to 213.
Explaining the Components
The UILoader component is a container which can display SWF, JPEG, progressive JPEG, PNG, and GIF files.You can load in these assets at runtime and optionally monitor the loading progress. To see how this can be done, check out my tutorial on the ProgressBar component (the concepts are the same) and apply to the UILoader as I did with the Loader in that tutorial.
The UIScrollbar allows you to add a scrollbar to a textField. When you have a long block of text the UIScrollbar component allows you to scroll through without having a very large TextField to accommodate all your text. This component is very easy to use in that you can just drop it onto a TextField and it automatically "wires up" to that TextField. Here, I'll show you how to do it in ActionScript. when using Flash's code editor. To do this go to Menu > File > Publish Settings and click on Settings, next to Script [Actionscript 3.0].
Uncheck "Automatically declare stage instances".
In Main.as, open the package declaration and import the classes we will be using. Add the following to Main.as:
package { //We will extend the class of MovieClip import flash.display.MovieClip; //Import the components we will be using import fl.containers.UILoader; import fl.controls.Button; import fl.controls.Label; import fl.controls.UIScrollbar; //Needed for our Event Handlers import flash.events.MouseEvent; import flash.events.Event; //Needed to Loaded images and Text import flash.net.URLLoader; import flash.net.URLRequest; import flash.text.TextField;
Step 4: Set Up the Main Class
Add the class, make it extend Movie Clip, and set up our Constructor Function.
Here we declare our variables and call our functions in the
Main() constructor. Add the following to Main.as:
public class Main extends MovieClip { //Our on-stage components public var loadImageButton:Button; public var loadTextButton:Button; public var loadedLabel:Label; public var loremText:TextField; public var imageLoader:UILoader; public var textScroller:UIScrollbar; //Used to load the Text into the TextField public var textLoader:URLLoader; public function Main() { setupButtonsAndLabels(); setupTextField(); setupScrollBar(); }
Step 5: Main Constructor Functions
Here we'll define the functions that are used in our constructor. In the
setupButtonAndLabels() function we set our button's
label property and add event listeners to be triggered when the user clicks the button.
In the
setupTextField() function we set the text field's
wordWrap property to
true so the text will wrap to the next line when it reaches the right edge of the TextField.
In
setupScrollBar() we set the UIScrollbar's direction to "vertical" (this can be "vertical" or "horizontal") and, since we don't want it visible when the movie first starts, we set its
visible property to
false.
Add the following to Main.as:
private function setupButtonsAndLabels():void { //Sets the buttons Label(Text shown on Button) loadImageButton.label="Load Image"; loadImageButton.addEventListener(MouseEvent.CLICK,loadImage); //Sets the buttons Label(Text shown on Button) loadTextButton.label ="Load Text"; loadTextButton.addEventListener(MouseEvent.CLICK,loadText); //Sets the labels text loadedLabel.text="Image Not Loaded"; } private function setupTextField():void { //Lines will wrap when they reach end(right side) of textfield loremText.wordWrap = true; } private function setupScrollBar():void { //Sets our scrollBars direction; can be "vertical" or "horizontal" textScroller.direction="vertical"; textScroller.visible = false; }
Step 6: Event Listeners
Here we'll code the event listeners we added to the buttons and then close out the class and package.
In the
loadImage() function we set the
scaleContent of the
imageLoader to
false (if it were
true the image would scale down to the size of the
UILoader), as we want the image to be its normal size. We then load the image and add an event listener to be triggered when the image has completed loading.
In the
loadText() function we set up our
URLLoader and load the text file. Here we also set up a listener to be triggered when the text has finished loading.
In the
imageLoaded() function we set the label's text to "Image Loaded" -- a simple example, but you could do something less trivial in a "real" application.
In the
textLoaded() function we set the text field's text to the Event's (
e.target.data), which will be the text from the text file. We then set the
UIScrollbar to be visible and set its
scrollTarget (the text field we wish it to control).
private function loadImage(e:MouseEvent):void{ //If were set to true the image would scale down to the size of the UILoader //Here we set to false so the UILoader respects the actual image size imageLoader.scaleContent = false; //Loads the image and fires a function when loading is complete imageLoader.load(new URLRequest("theimage.jpg")); imageLoader.addEventListener(Event.COMPLETE,imageLoaded); } private function loadText(e:MouseEvent):void{ //Loads our text file and fires a function when loading is complete textLoader = new URLLoader(); textLoader.load(new URLRequest("lorem.txt")); textLoader.addEventListener(Event.COMPLETE,textLoaded); } private function imageLoaded(e:Event):void{ //Sets the text on the label loadedLabel.text="Image Loaded"; } private function textLoaded(e:Event):void{ //Sets the TextField to the loaders data(the text in the textfile) loremText.text=e.target.data; textScroller.visible = true; textScroller.scrollTarget = loremText; } }//close out the class }close out the package
Note that at the end we close out the class and package.
Conclusion
You'll notice in the Components Parameters panel of the components that you can check and select certain properties.
The above image is for the UILoader component.
The properties for the UILoader component are as follows:
- autoLoad: a Boolean value that indicates to automatically load the specified content
- enabled: a Boolean value that indicates the whether the component can accept user input
- maintainAspectRatio: a Boolean value that indicates whether to maintain the aspect ratio that was used in the original image or to resize the image to the current width and height of the UILoader component
- scaleContent: a Boolean value that indicates whether to automatically scale the image to the size of the UILoader instance
- source: an absolute or relative URL that identifies the location of the content to load
- visible: a Boolean value that indicates whether or not the component is visible on the stage
The properties for the UIScrollbar are
- direction: sets the direction of the scrollBar (can be "vertical" or "horizontal")
- scrollTargetName: the target TextField to which the UIScrollbar is registered
- visible: a Boolean value that indicates whether or not the component is visible on the stage
The help files are a great place to learn more about these properties.
To learn more about the properties for Labels and Button be sure to check out the Quick Introduction to the Button and Label components.
Thanks for reading!
| https://code.tutsplus.com/tutorials/quick-introduction-flash-uiloader-and-uiscrollbar-components--active-7656 | CC-MAIN-2017-09 | refinedweb | 1,532 | 53.61 |