#24752 closed Bug (fixed): Reusing a Case/Where object in a query causes a crash

Description: Reusing a conditional expression (Case / When) that has already been used causes a crash. Here is a simple example:

    import django
    django.setup()

    from django.contrib.auth.models import User
    from django.db.models import When, Case, CharField, Value

    SOME_CASE = Case(
        When(pk=0, then=Value('0')),
        default=Value('1'),
        output_field=CharField(),
    )

    print(User.objects.annotate(somecase=SOME_CASE))
    print(User.objects.annotate(somecase=SOME_CASE))

You can safely execute this program in your environment. The second queryset crashes because it reuses the SOME_CASE object. This is probably related to #24420. The problem exists in both 1.8 and 1.8.1.
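Until the underlying bug is fixed, the simplest way to avoid the crash is not to share a single Case instance across querysets. The sketch below is a possible workaround rather than anything from the ticket or from Django's eventual fix; the make_somecase helper is purely illustrative.

    from django.contrib.auth.models import User
    from django.db.models import When, Case, CharField, Value

    def make_somecase():
        # Build a fresh conditional expression for every queryset so that no
        # state left over from a previous query compilation is reused.
        return Case(
            When(pk=0, then=Value('0')),
            default=Value('1'),
            output_field=CharField(),
        )

    print(User.objects.annotate(somecase=make_somecase()))
    print(User.objects.annotate(somecase=make_somecase()))  # fresh object, no reuse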
https://code.djangoproject.com/ticket/24752
Evgeny Shvarov · 24 January 2018 · Code Snippet, Coding Guidelines, ObjectScript, Caché, InterSystems IRIS

For a one-line If I prefer postconditionals. As for For - the option with braces. It's more readable, and one line inside a For loop quickly and often becomes several lines, so braces become mandatory anyway.

But... can one condition in "postconditionals" "quickly and often" turn into several lines? Maybe postconditionals should be If with braces too?

Postconditionals are to be used only for the most simple checks. But several conditions (or several lines inside) should be written as an If, I agree.

For several conditions there's always $S too.

I tend to use "if" rather than postconditionals because it makes the code read more like spoken language, and I think it's easier to understand for those who are not seasoned Caché veterans. However, I make exceptions for "quit" and "continue":

    quit:$$$ISERR(tSC)
    continue:x'=y

I like to have these out in front, rather than buried in a line somewhere, so that when I'm scanning code I can easily see "this is where the execution will branch if a certain condition is met."

I also like this style, as it helps me spot the lines where execution at the current level might terminate.

Yep, brace always, because you never know when you will need to update your condition or chain another one. The brace shows the scope of the condition, and the same goes for loops (for, for each, etc.). It is good practice and it is more readable. As Eduard Lebedyuk has commented, for a single IF used to set a variable a postconditional is a good idea, because you know the condition before the set.

Agreed, and there is a way to have your cake and eat it, too.

Brace always for me too, unless I am writing some quick debugging code or something along those lines.

Agree with brace always. This really helps with new developers, as the syntax is something they've seen before in other languages.

Postconditionals are preferred, especially quit:error or continue:ignore.

    for i=1:1:5 write i," "

is as fine as

    for i=1:1:5 { write i," " }
    write "end"

but this is really bad:

    f i=1:1:5 d  w "? "
    .w i," "
    w "end"

OMG!!! What is that!!! It looks like KGB spy code!!! I think they named it M(umps).

Besides all that is mentioned, the If without braces has a side-effect that the If with braces does not have: it affects the system variable $Test. Not important - only if you use the (legacy) ELSE command...

Indeed... the brace delimits the scope of the operation.

Thanks, Danny! Is there any "non-legacy" case where $test can be useful? I mean, should we consider using $test in code a bad practice?

If you want to check on the result of a LOCK command, for example:

    LOCK +<SOMERESOURCE>:<timeout>
    IF $T {do something}

So I wouldn't call it bad practice just yet.

Yes, same here, I also use the following code a lot:

    Open <some device>:"R":0
    Else  Write "could not open device" Quit

I always use Else on the line just after the command that changes $test; having too many instructions between them creates problems.

Oh right, forgot about that. I would do it this way to avoid Else:

    Open <some device>:"R":0
    If '$test { Write "could not open device" Quit }

C'mon, let's stop mentioning old features like argumentless Do and legacy ELSE. There's just no point in doing so.

In the real world there are lots of programs still using this style; every developer using Caché ObjectScript should be able to read it and understand the side-effects and pitfalls, even if we recommend not to use it anymore...

Totally right!! There is more old code out there than you could / would like to imagine. And Caché is backward compatible for 40 years (since DSM V1). New developers should not write it, but they need to understand it, or at least recognize it. It's pretty similar to reading the Bible in the original language with no translation.

Danny and Robert, of course I understand what you are saying (we are the same generation!). Also, I am not trying to be the Developer Community police. In posts that ask "what's the recommended way to do this?" we should answer that question. There's no point in mentioning old syntax in that context. Saying "don't do this, but you might see it someday" helps the subset of people that might work with old code, but doesn't help and may confuse the people who are working with new code. It doesn't belong. On the other hand, if there's a post asking "what are some older/confusing syntaxes I might encounter when working with M or ObjectScript?" we should answer that question. We all have examples of that.

I wonder if it would be possible to have a more rigid syntax check (now in Atelier, and switchable) to enforce all kinds of brackets (e.g. around conditions, like in JS). I tried to recommend this while teaching COS myself, with very limited success, as some dinos ignored it.

Every day I read legacy code like this, but if we're talking about the ideal (like coding guidelines), we want to avoid it. I agree that every ObjectScript developer should be able to read this code, but they should also know that there are better, modern ways to write it.

I think that part of growing the community of ObjectScript developers is lowering barriers to entry. Having worked with a lot of new ObjectScript developers (and been one myself), one of the largest barriers to entry is the prevalence of cryptic legacy syntax in existing code. Remember, just because we can read it doesn't mean that everyone can.

Argumentless DO is still the simplest way to introduce a scope to NEW a variable, including $namespace or $roles. Yes, you can extract that out to another procedure or method, but I often prefer to do it inline.

Good point, Jon. Here's a concrete example, taken from the CheckLinkAccess classmethod of %CSP.Portal.Utils (2017.2 version). I have done the macro expansion myself:

    Do
    . New $roles
    . Set $roles=""
    . If ($extract($roles,1,$length("%All"))="%All") Set tHaveAllRole = 1
    If tHaveAllRole {
    ...

Congratulations! I knew I had seen something similar.

We want our code to be as accessible and readable as possible to as many people as possible. We always prefer whitespace formatting and curly brackets to arcane one-line syntax. Someone who's not an ObjectScript expert will know what a braced If does, but there's no guarantee that a novice will necessarily know what a terse one-liner does. Good code is readable, and clever one-liners will get in the way of that goal.

"We want our code to be as accessible and readable as possible to as many people as possible." Do we? ;) "We always prefer whitespace formatting and curly brackets to arcane one line syntax." - I don't ;) Using arcane one-liners shows that you are a superior wizard. One should always use the short forms of commands to make code more terse and save space. If people want to read the code, they can use 'expand code' in Studio (which expands the short-form commands to the full version). Note: there might be some sarcasm in here. People who know me might also know that I'm only maybe 80% joking ;)

You remind me of the joke about hieroglyphs: - What's wrong with them? - Nothing! Priests in old Egypt were reading the "book of death" like you would read a newspaper. - You just have to learn the 'encryption'.

The legacy versions of the If and For commands don't use curly braces. But there are no legacy versions of While and Do/While, which require curly braces. So rather than trying to remember which commands use or don't use curly braces, just use curly braces all the time for these, as well as Try/Catch. Regarding post-conditionals, it's a good habit to put parentheses around the condition, which allows you to put spaces in the condition. This is valid syntax:

    set:a=4 y=10

This is valid syntax and more readable:

    set:(a = 4) y = 10

This is invalid syntax:

    set:a = 4 y = 10

I just went through the whole conversation on this and found it very informative, as I am a beginner with Caché. Now I understand how to use these constructs according to the standards and where I need to be careful. Thanks, all.

Hi, colleagues! The point of the discussion I raised is to arrive at Coding Guidelines for ObjectScript that let us write code which best suits the goals and the team's guidelines. Thanks for a lot of bright thoughts and real experience!

If I am modifying a routine I didn't originally write, then I tend to stick to the coding style the original author adopted - even if it uses legacy syntax. It is good practice not to mix old and new constructs within the same code block, i.e. don't mix curly braces with line-oriented syntax. If I am completely re-writing something or writing new code, then I prefer curly braces.

I would have liked to refactor some of my code better, but I found the 'use for' construct breaks when you try that. In this pseudocode I was using the 'write' statement to write to a file on the file system. There is a condition that is checked to determine whether a record gets written to file1 or file2:

    for arrayIndex=1:1:arrayOfIds.Count()
      continue:errorStatus'=1
      use file1
      for testIndex=1:1:testSetObj.TestCollection.Count()
      use file2
      for testIndex=1:1:testSetObj.TestCollection.Count()

I miss 3 things in this example. "New file: If the specified file does not exist, the system creates the file. If the specified file already exists, the system creates a new one with the same name. The status of the old file depends on which operating system you are using. On UNIX® and Windows, Caché deletes the old file. (Note that file locking should be used to prevent two concurrent processes using this parameter from overwriting the same file.)" So to get all output you may append arrayIndex to the filename to have unique file names.

Thanks for the comments, Robert. It's safe to assume there is an IF statement in each inner for to determine whether to do anything or not. Each object in arrayOfIds needs to be opened and a Boolean check is done against a property. I haven't had a problem with the 'N' parameter. The code appears to store 'write' records in process memory before writing everything to file after the outer for; $ZSTORAGE has been increased to accommodate this. Unique file names have been achieved using the following assumptions: 1) the process will run no more than once a day; 2) today's date in ODBC format, $zd(+$h,3), is appended to the end of the filename. The 'close' statement appears after the outer for, but I'm not sure whether the curly braces implicitly do it anyway.

Thanks for the clarification. It's sometimes hard to guess the picture if you are missing some pixels. I'd still recommend moving the CLOSE after each inner loop to keep OPEN / CLOSE strictly paired. The implicit CLOSE happens only if you exit the partition; there's no CLOSE acting undercover. Curly braces just structure the code, with no operational side effect.

It's already 256 MB, since 2012.2.

Strange. Our production servers are Caché 2018 on AIX but still show only 16,384 KB. The upgrades must have preserved the existing setting rather than used the new default. A fresh local Caché install on Windows shows 256 MB, though.

    Cache for UNIX (IBM AIX for System Power System-64) 2018.1.2 (Build 309_5) Wed Jun 12 2019 20:08:03 EDT
    %SYS>w $zs
    16384

Which part of it? There are like 5 different suggestions contradicting each other.

None of them. The best practice is "do what you want"... hehe. Now seriously: I think the best practice is to use each form as explained in the comments - use braces to delimit scope, skip them when you have one line and a postconditional, etc. The conversation is helpful.

OK, this is fair - we can remove the Best Practice tag for this. The idea was that this is an important conversation if you are considering internal guidelines for your organization and want to adopt one of the approaches that experienced developers consider best practice.

I think it is a "best practice", but the information is scattered throughout the conversation. Why not create a final comment with a summary of the conversation, with pros and cons? It would be clearer. Regards
https://community.intersystems.com/post/and-if-one-line-brace-or-not-brace
The following are the release notes for the 8.12.9 version of sendmail(1M), plus the 8.12.10 security features included on UnixWare. These notes were provided by sendmail.org, and are part of the sendmail distribution. We list below only the changes since the previous version of sendmail included with UnixWare (version 8.10.1). Note that the version numbers at the beginning of each section below indicate the versions of sendmail and its configuration file sendmail.cf, respectively, to which the notes apply. 8.12.10/8.12.10 2003/09/24 SECURITY: Fix a buffer overflow in address parsing. Problem detected by Michal Zalewski, patch from Todd C. Miller of Courtesan Consulting. Settings for IRIX 6: remove confSBINDIR, i.e., use default /usr/sbin, change owner/group of man pages and user-executable to root/sys, set optimization limit to 0 (unlimited). Based on patch from Ayamura Kikuchi, M.D., and proposal from Kari Hurtta of the Finnish Meteorological Institute. Do not assume LDAP support is installed by default under Solaris 8 and later. Add support for OpenUNIX. CONFIG: Increment version number of config file to 10. CONFIG: Add an install target and a README file in cf/cf. CONFIG: Don't accept addresses of the form a@b@, a@b@c, a@[b]c, etc. CONFIG: Reject empty recipient addresses (in check_rcpt). CONFIG: The access map uses an option of -T<TMPF> to deal with temporary lookup failures. CONFIG: New value for access map: SKIP, which causes the default action to be taken by aborting the search for domain names or IP nets. CONFIG: check_rcpt can deal with TEMPFAIL for either recipient or relay address as long as the other part allows the email to get through. CONFIG: Entries for virtusertable can make use of a third parameter "%3" which contains "+detail" of a wildcard match, i.e., an entry like user+*@domain. This allows handling of details by using %1%3 as the RHS. Additionally, a "+" wildcard has been introduced to match only non-empty details of addresses. CONFIG: Numbers for rulesets used by MAILERs have been removed and hence there is no required order within the MAILER section anymore except for MAILER(`uucp') which must come after MAILER(`smtp') if uucp-dom and uucp-uudom are used. CONFIG: Hosts listed in the generics domain class {G} (GENERICS_DOMAIN() and GENERICS_DOMAIN_FILE()) are treated as canonical. Suggested by Per Hedeland of Ericsson. CONFIG: If FEATURE(`delay_checks') is used, make sure that a lookup in the access map which returns OK or RELAY actually terminates check_* ruleset checking. CONFIG: New tag TLS_Rcpt: for access map to be used by ruleset tls_rcpt, see cf/README for details. CONFIG: Change format of Received: header line which reveals whether STARTTLS has been used to "(version=${tls_version} cipher=${cipher} bits=${cipher_bits} verify=${verify})". CONFIG: Use "Spam:" as tag for lookups for FEATURE(`delay_checks') options friends/haters instead of "To:" and enable specification of whole domains instead of just users. Notice: this change is not backward compatible. Suggested by Chris Adams from HiWAAY Informations Services. CONFIG: Allow for local extensions for most new rulesets, see cf/README for details. CONFIG: New FEATURE(`lookupdotdomain') to lookup also .domain in the access map. Proposed by Randall Winchester of the University of Maryland. CONFIG: New FEATURE(`local_no_masquerade') to avoid masquerading for the local mailer. Proposed by Ingo Brueckl of Wupper Online.
CONFIG: confRELAY_MSG/confREJECT_MSG can override the default messages for an unauthorized relaying attempt/for access map entries with RHS REJECT, respectively. CONFIG: FEATURE(`always_add_domain') takes an optional argument to specify another domain to be added instead of the local one. Suggested by Richard H. Gumpertz of Computer Problem Solving. CONFIG: confAUTH_OPTIONS allows setting of Cyrus-SASL specific options, see doc/op/op.me for details. CONFIG: confAUTH_MAX_BITS sets the maximum encryption strength for the security layer in SMTP AUTH (SASL). CONFIG: If Local_localaddr resolves to $#ok, localaddr is terminated immediately. CONFIG: FEATURE(`enhdnsbl') is an enhanced version of dnsbl which allows checking of the return values of the DNS lookups. See cf/README for details. CONFIG: FEATURE(`dnsbl') allows now to specify the behavior for temporary lookup failures. CONFIG: New option confDELIVER_BY_MIN to specify minimum time for Deliver By (RFC 2852) or to turn off the extension. CONFIG: New option confSHARED_MEMORY_KEY to set the key for shared memory use. CONFIG: New FEATURE(`compat_check') to look up a key consisting of the sender and the recipient address delimited by the string "<@>", e.g., sender@sdomain<@>recipient@rdomain, in the access map. Based on code contributed by Mathias Koerber of Singapore Telecommunications Ltd. CONFIG: Add EXPOSED_USER_FILE() command to allow an exposed user file. Suggested by John Beck of Sun Microsystems. CONFIG: Don't use MAILER-DAEMON for error messages delivered via LMTP. Problem reported by Larry Greenfield of CMU. CONFIG: New FEATURE(`preserve_luser_host') to preserve the name of the recipient host if LUSER_RELAY is used. CONFIG: New FEATURE(`preserve_local_plus_detail') to preserve the +detail portion of the address when passing address to local delivery agent. Disables alias and .forward +detail stripping. Only use if LDA supports this. CONFIG: Removed deprecated FEATURE(`rbl'). CONFIG: Add LDAPROUTE_EQUIVALENT() and LDAPROUTE_EQUIVALENT_FILE() which allow you to specify 'equivalent' hosts for LDAP Routing lookups. Equivalent hostnames are replaced by the masquerade domain name for lookups. See cf/README for additional details. CONFIG: Add a fourth argument to FEATURE(`ldap_routing') which instructs the rulesets on what to do if the address being looked up has +detail information. See cf/README for more information. CONFIG: When chosing a new destination via LDAP Routing, also look up the new routing address/host in the mailertable. Based on patch from Don Badrak of the United States Census Bureau. CONFIG: Do not reject the SMTP Mail from: command if LDAP Routing is in use and the bounce option is enabled. Only reject recipients as user unknown. CONFIG: Provide LDAP support for the remaining database map features. See the ``USING LDAP FOR ALIASES AND MAPS'' section of cf/README for more information. CONFIG: Add confLDAP_CLUSTER which defines the ${sendmailMTACluster} macro used for LDAP searches as described above in ``USING LDAP FOR ALIASES, MAPS, AND CLASSES''. CONFIG: confCLIENT_OPTIONS has been replaced by CLIENT_OPTIONS(), which takes the options as argument and can be used multiple times; see cf/README for details. 
CONFIG: Add configuration macros for new options: confBAD_RCPT_THROTTLE BadRcptThrottle confDIRECT_SUBMISSION_MODIFIERS DirectSubmissionModifiers confMAILBOX_DATABASE MailboxDatabase confMAX_QUEUE_CHILDREN MaxQueueChildren confMAX_RUNNERS_PER_QUEUE MaxRunnersPerQueue confNICE_QUEUE_RUN NiceQueueRun confQUEUE_FILE_MODE QueueFileMode confFAST_SPLIT FastSplit confTLS_SRV_OPTIONS TLSSrvOptions See above (and related documentation) for further information. CONFIG: Add configuration variables for new timeout options: confTO_ACONNECT Timeout.aconnect confTO_AUTH Timeout.auth confTO_LHLO Timeout.lhlo confTO_STARTTLS Timeout.starttls CONFIG: Add configuration macros for mail filter API: confINPUT_MAIL_FILTERS InputMailFilters confMILTER_LOG_LEVEL Milter.LogLevel confMILTER_MACROS_CONNECT Milter.macros.connect confMILTER_MACROS_HELO Milter.macros.helo confMILTER_MACROS_ENVFROM Milter.macros.envfrom confMILTER_MACROS_ENVRCPT Milter.macros.envrcpt Mail filters can be defined via INPUT_MAIL_FILTER() and MAIL_FILTER(). See libmilter/README, cf/README, and doc/op/op.me for details. CONFIG: Add support for accepting temporarily unresolvable domains. See cf/README for details. Based on patch by Motonori Nakamura of Kyoto University. CONFIG: confDEQUOTE_OPTS can be used to specify options for the dequote map. CONFIG: New macro QUEUE_GROUP() to define queue groups. CONFIG: New FEATURE(`queuegroup') to select a queue group based on the full e-mail address or the domain of the recipient. CONFIG: Any IPv6 addresses used in configuration should be prefixed by the "IPv6:" tag to identify the address properly. For example, if you want to use the IPv6 address 2002:c0a8:51d2::23f4 in the access database, you would need to use IPv6:2002:c0a8:51d2::23f4 on the left hand side. This affects the access database as well as the relay-domains and local-host-names files. CONFIG: OSTYPE(aux) has been renamed to OSTYPE(a-ux). CONFIG: Avoid expansion of m4 keywords in SMART_HOST. CONFIG: Add MASQUERADE_EXCEPTION_FILE() for reading masquerading exceptions from a file. Suggested by Trey Breckenridge of Mississippi State University. CONFIG: Add LOCAL_USER_FILE() for reading local users (LOCAL_USER() -- $={L}) entries from a file. CONTRIB: dnsblaccess.m4 is a further enhanced version of enhdnsbl.m4 which allows to lookup error codes in the access map. Contributed by Neil Rickert of Northern Illinois University. DEVTOOLS: Add new options for installation of include and library files: confINCGRP, confINCMODE, confINCOWN, confLIBGRP, confLIBMODE, confLIBOWN. DEVTOOLS: Add new option confDONT_INSTALL_CATMAN to turn off installation of the the formatted man pages on operating systems which don't include cat directories. EDITMAP: New program for editing maps as supplement to makemap. MAIL.LOCAL: Mail.local now uses the libsm mbdb package to look up local mail recipients. New option -D mbdb specifies the mailbox database type. MAIL.LOCAL: New option "-h filename" which instructs mail.local to deliver the mail to the named file in the user's home directory instead of the system mail spool area. Based on patch from Doug Hardie of the Los Angeles Free-Net. MAILSTATS: New command line option -P which acts the same as -p but doesn't truncate the statistics file. MAKEMAP: Add new option -t to specify a different delimiter instead of white space. RMAIL: Invoke sendmail with '-G' to indicate this is a gateway submission. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. 
SMRSH: Use the vendor supplied directory on FreeBSD 3.3 and later. VACATION: Change Auto-Submitted: header value from auto-generated to auto-replied. From Kenneth Murchison of Oceana Matrix Ltd. VACATION: New option -d to send error/debug messages to stdout instead of syslog. VACATION: New option -U which prevents the attempt to lookup login in the password file. The -f and -m options must be used to specify the database and message file since there is no home directory for the default settings for these options. VACATION: Vacation now uses the libsm mbdb package to look up local mail recipients; it reads the MailboxDatabase option from the sendmail.cf file. New option -C cffile which specifies the path of the sendmail.cf file. New Directories: libmilter/docs New Files: cf/cf/README cf/cf/submit.cf cf/cf/submit.mc cf/feature/authinfo.m4 cf/feature/compat_check.m4 cf/feature/enhdnsbl.m4 cf/feature/msp.m4 cf/feature/local_no_masquerade.m4 cf/feature/lookupdotdomain.m4 cf/feature/preserve_luser_host.m4 cf/feature/preserve_local_plus_detail.m4 cf/feature/queuegroup.m4 cf/sendmail.schema contrib/dnsblaccess.m4 devtools/M4/UNIX/sm-test.m4 devtools/OS/OpenUNIX.5.i386 editmap/* include/sm/* libsm/* libsmutil/cf.c libsmutil/err.c sendmail/SECURITY sendmail/TUNING sendmail/bf.c sendmail/bf.h sendmail/sasl.c sendmail/sm_resolve.c sendmail/sm_resolve.h sendmail/tls.c Deleted Files: cf/feature/rbl.m4 cf/ostype/aix2.m4 devtools/OS/AIX.2 include/sendmail/cdefs.h include/sendmail/errstring.h include/sendmail/useful.h libsmutil/errstring.c sendmail/bf_portable.c sendmail/bf_portable.h sendmail/bf_torek.c sendmail/bf_torek.h sendmail/clock.c Renamed Files: cf/cf/generic-solaris2.mc => cf/cf/generic-solaris.mc cf/cf/generic-solaris2.cf => cf/cf/generic-solaris.cf cf/ostype/aux.m4 => cf/ostype/a-ux.m4 8.11.7/8.11.7 2003/03/29 SECURITY: Fix a remote buffer overflow in header parsing by dropping sender and recipient header comments if the comments are too long. Problem noted by Mark Dowd of ISS X.11.7.11.7 defaults, set MaxMimeHeaderLength to 0/0. Properly clean up macros to avoid persistence of session data across various connections. This could cause session oriented restrictions, e.g., STARTTLS requirements, to erroneously allow a connection. Problem noted by Tim Maletic of Priority Health. Ignore comments in NIS host records when trying to find the canonical name for a host. Fix a memory leak when closing Hesiod maps. Set ${msg_size} macro when reading a message from the command line or the queue. Prevent a segmentation fault when clearing the event list by turning off alarms before checking if event list is empty. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute. Fix a potential core dump problem if the environment variable NAME is set. Problem noted by Beth A. Chaney of Purdue University. Prevent a race condition on child cleanup for delivery to files. Problem noted by Fletcher Mattox of the University of Texas. CONFIG: Do not bounce mail if FEATURE(`ldap_routing')'s bounce parameter is set and the LDAP lookup returns a temporary error. CONFIG: Fix a syntax error in the try_tls ruleset if FEATURE(`access_db') is not enabled. LIBSMDB: Fix a lock race condition that affects makemap, praliases, and vacation. LIBSMDB: Avoid a file creation race condition for Berkeley DB 1.X and NDBM on systems with the O_EXLOCK open(2) flag. MAKEMAP: Avoid going beyond the end of an input line if it does not contain a value for a key. Based on patch from Mark Bixby from Hewlett-Packard. 
MAIL.LOCAL: Fix a truncation race condition if the close() on the mailbox fails. Problem noted by Tomoko Fukuzawa of Sun Microsystems. SMRSH: SECURITY: Only allow regular files or symbolic links to be used for a command. Problem noted by David Endler of iDEFENSE, Inc. 8.11.6/8.11.6 2001/08/20 SECURITY: Fix a possible memory access violation when specifying out-of-bounds debug parameters. Problem detected by Cade Cairns of SecurityFocus. Avoid leaking recipient information in unrelated DSNs. This could happen if a connection is aborted, several mails had been scheduled for delivery via that connection, and the timeout is reached such that several DSNs are sent next. Problem noted by Dileepan Moorkanat of Hewlett-Packard. Fix a possible segmentation violation when specifying too many wildcard operators in a rule. Problem detected by Werner Wiethege. Avoid a segmentation fault on non-matching Hesiod lookups. Problem noted by Russell McOrmond of flora.ca 8.11.5/8.11.5 2001/07/31 Fix a possible race condition when sending a HUP signal to restart the daemon. This could terminate the current process without starting a new daemon. Problem reported by Wolfgang Breyha of SE Netway Communications. Only apply MaxHeadersLength when receiving a message via SMTP or the command line. Problem noted by Andrey J. Melnikoff. When finding the system's local hostname on an IPv6-enabled system which doesn't have any IPv6 interface addresses, fall back to looking up only IPv4 addresses. Problem noted by Tim Bosserman of EarthLink. When commands were being rejected due to check_relay or TCP Wrappers, the ETRN command was not giving a response. Incoming IPv4 connections on a Family=inet6 daemon (using IPv4-mapped addresses) were incorrectly labeled as "may be forged". Problem noted by Per Steinar Iversen of Oslo University College. Shutdown address test mode cleanly on SIGTERM. Problem noted by Greg King of the OAO Corporation. Restore the original real uid (changed in main() to prevent out of band signals) before invoking a delivery agent. Some delivery agents use this for the "From " envelope "header". Problem noted by Leslie Carroll of the University at Albany. Mark closed file descriptors properly to avoid reuse. Problem noted by Jeff Bronson of J.D. Bronson, Inc. Setting Timeout options on the command line will also override their sub-suboptions in the .cf file, e.g., -O Timeout.queuereturn=2d will set all queuereturn timeouts to 2 days. Problem noted by Roger B.A. Klorese. Portability: BSD/OS has a broken setreuid() implementation. Problem noted by Vernon Schryver of Rhyolite Software. BSD/OS has /dev/urandom(4) (as of version 4.1/199910 ?). Noted by Vernon Schryver of Rhyolite Software. BSD/OS has fchown(2). Noted by Dave Yadallee of Netline 2000 Internet Solutions Inc. Solaris 2.X and later have strerror(3). From Sebastian Hagedorn of Cologne University. CONFIG: Fix parsing for IPv6 domain literals in addresses (user@[IPv6:address]). Problem noted by Liyuan Zhou. 8.11.4/8.11.4 2001/05/28 Clean up signal handling routines to reduce the chances of heap corruption and other potential race conditions. Terminating and restarting the daemon may not be instantaneous due to this change. Also, non-root users can no longer send out-of-band signals. Problem reported by Michal Zalewski of BindView. If LogLevel is greater than 9 and SASL fails to negotiate an encryption layer, avoid core dump logging the encryption strength. Problem noted by Miroslav Zubcic of Crol. 
If a server offers "AUTH=" and "AUTH " and the list of mechanisms is different in those two lines, sendmail might not have recognized (and used) all of the offered mechanisms. Fix an IP address lookup problem on Solaris 2.0 - 2.3. Patch from Kenji Miyake. This time, really don't use the .. directory when expanding QueueDirectory wildcards. If a process is interrupted while closing a map, don't try to close the same map again while exiting. Allow local mailers (F=l) to contact remote hosts (e.g., via LMTP). Problem noted by Norbert Klasen of the University of Tuebingen. If Timeout.QueueReturn was set to a value less the time it took to write a new queue file (e.g., 0 seconds), the bounce message would be lost. Problem noted by Lorraine L Goff of Oklahoma State University. Pass map argument vector into map rewriting engine for the regex and prog map types. Problem noted by Stephen Gildea of InTouch Systems, Inc. When closing an LDAP map due to a temporary error, close all of the other LDAP maps which share the original map's connection to the LDAP server. Patch from Victor Duchovni of Morgan Stanley. To detect changes of NDBM aliases files check the timestamp of the .pag file instead of the .dir file. Problem noted by Neil Rickert of Northern Illinois University. Don't treat temporary hesiod lookup failures as permanent. Patch from Werner Wiethege. If ClientPortOptions is set, make sure to create the outgoing socket with the family set in that option. Patch from Sean Farley. Avoid a segmentation fault trying to dereference a NULL pointer when logging a MaxHopCount exceeded error with an empty recipient list. Problem noted by Chris Adams of HiWAAY Internet Services. Fix DSN for "Too many hops" bounces. Problem noticed by Ulrich Windl of the Universitaet Regensburg. Fix DSN for "mail loops back to me" bounces. Problem noticed by Kari Hurtta of the Finnish Meteorological Institute. Portability: OpenBSD has a broken setreuid() implementation. CONFIG: Undo change from 8.11.1: change 501 SMTP reply code back to 553 since it is allowed by DRUMS. CONFIG: Add OSTYPE(freebsd4) for FreeBSD 4.X. DEVTOOLS: install.sh did not properly handle paths in the source file name argument. Noted by Kari Hurtta of the Finnish Meteorological Institute. DEVTOOLS: Add FAST_PID_RECYCLE to compile time options for OpenBSD since it generates random process ids. PRALIASES: Add back adaptive algorithm to deal with different endings of entries in the database (with/without trailing ' '). Patch from John Beck of Sun Microsystems. New Files: cf/ostype/freebsd4.m4 8.11.3/8.11.3 2001/02/27 Prevent a segmentation fault when a bogus value was used in the LDAPDefaultSpec option's -r, -s, or -M flags and if a bogus option was used. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute. Prevent "token too long" message by shortening {currHeader} which could be too long if the last copied character was a quote. Problem detected by Jan Krueger of digitalanswers communications consulting gmbh. Additional IPv6 check for unspecified addresses. Patch from Jun-ichiro itojun Hagino of the KAME Project. Do not ignore the ClientPortOptions setting if DaemonPortOptions Modifier=b (bind to same interface) is set and the connection came in from the command line. Do not bind to the loopback address if DaemonPortOptions Modifier=b (bind to same interface) is set. Patch from John Beck of Sun Microsystems. Properly deal with open failures on non-optional maps used in check_* rulesets by returning a temporary failure. 
Buffered file I/O files were not being properly fsync'ed to disk when they were committed. Properly encode '=' for the AUTH= parameter of the MAIL command. Problem noted by Hadmut Danisch. Under certain circumstances the macro {server_name} could be set to the wrong hostname (of a previous connection), which may cause some rulesets to return wrong results. This would usually cause mail to be queued up and delivered later on. Ignore F=z (LMTP) mailer flag if $u is given in the mailer A= equate. Problem noted by Motonori Nakamura of Kyoto University. Work around broken accept() implementations which only partially fill in the peer address if the socket is closed before accept() completes. Return an SMTP "421" temporary failure if the data file can't be opened where the "354" reply would normally be given. Prevent a CPU loop in trying to expand a macro which doesn't exist in a queue run. Problem noted by Gordon Lack of Glaxo Wellcome. If delivering via a program and that program exits with EX_TEMPFAIL, note that fact for the mailq display instead of just showing "Deferred". Problem noted by Motonori Nakamura of Kyoto University. If doing canonification via /etc/hosts, try both the fully qualified hostname as well as the first portion of the hostname. Problem noted by David Bremner of the University of New Brunswick. Portability: Fix a compilation problem for mail.local and rmail if SFIO is in use. Problem noted by Auteria Wally Winzer Jr. of Champion Nutrition. IPv6 changes for platforms using KAME. Patch from Jun-ichiro itojun Hagino of the KAME Project. OpenBSD 2.7 and higher has srandomdev(3). OpenBSD 2.8 and higher has BSDI-style login classes. Patch from Todd C. Miller of Courtesan Consulting. Unixware 7.1.1 doesn't allow h_errno to be set directly if sendmail is being compiled with -kthread. Problem noted by Orion Poplawski of CQG, Inc. CONTRIB: buildvirtuser: Substitute current domain for $DOMAIN and current left hand side for $LHS in virtuser files. DEVTOOLS: Do not pass make targets to recursive Build invocations. Problem noted by Jeff Bronson of J.D. Bronson, Inc. MAIL.LOCAL: In LMTP mode, do not return errors regarding problems storing the temporary message file until after the remote side has sent the final DATA termination dot. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute. MAIL.LOCAL: If LMTP mode is set, give a temporary error if users are also specified on the command line. Patch from Motonori Nakamura of Kyoto University. PRALIASES: Skip over AliasFile specifications which aren't based on database files (i.e., only show dbm, hash, and btree). Renamed Files: devtools/OS/OSF1.V5.0 => devtools/OS/OSF1.V5.x 8.11.2/8.11.2 2000/12/29 Prevent a segmentation fault when trying to set a class in address test mode due to a negative array index. Audit other array indexing. This bug is not believed to be exploitable. Noted by Michal Zalewski of the "Internet for Schools" project (IdS). Add an FFR (for future release) to drop privileges when using address test mode. This will be turned on in 8.12. It can be enabled by compiling with: APPENDDEF(`conf_sendmail_ENVDEF', `-D_FFR_TESTMODE_DROP_PRIVS') in your devtools/Site/site.config.m4 file. Suggested by Michal Zalewski of the "Internet for Schools" project (IdS). Fix potential problem with Cyrus-SASL security layer which may have caused I/O errors, especially for mechanism DIGEST-MD5. 
When QueueSortOrder was set to host, sendmail might not read enough of the queue file to determine the host, making the sort sub-optimal. Problem noted by Jeff Earickson of Colby College. Don't issue DSNs for addresses which use the NOTIFY parameter (per RFC 1891) but don't have FAILURE as value. Initialize Cyrus-SASL library before the SMTP daemon is started. This implies that every change to SASL related files requires a restart of the daemon, e.g., Sendmail.conf, new SASL mechanisms (in form of shared libraries). Properly set the STARTTLS related macros during a queue run for a cached connection. Bug reported by Michael Kellen of NxNetworks, Inc. Log the server name in relay= for ruleset tls_server instead of the client name. Include original length of bad field/header when reporting MaxMimeHeaderLength problems. Requested by Ulrich Windl of the Universitat Regensburg. Fix delivery to set-user-ID files that are expanded from aliases in DeliveryMode queue. Problem noted by Ric Anderson of the University of Arizona. Fix LDAP map -m (match only) flag. Problem noted by Jeff Giuliano of Collective Technologies. Avoid using a negative argument for sleep() calls when delaying answers to EXPN/VRFY commands on systems which respond very slowly. Problem noted by Mikolaj J. Habryn of Optus Internet Engineering. Make sure the F=u flag is set in the default prog mailer definition. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. Fix IPv6 check for unspecified addresses. Patch from Jun-ichiro itojun Hagino of the KAME Project. Fix return values for IRIX nsd map. From Kari Hurtta of the Finnish Meteorological Institute. Fix parsing of DaemonPortOptions and ClientPortOptions. Read all of the parameters to find Family= setting before trying to interpret Addr= and Port=. Problem noted by Valdis Kletnieks of Virginia Tech. When delivering to a file directly from an alias, do not call initgroups(); instead use the DefaultUser group information. Problem noted by Marc Schaefer of ALPHANET NF. RunAsUser now overrides the ownership of the control socket, if created. Otherwise, sendmail can not remove it upon close. Problem noted by Werner Wiethege. Fix ConnectionRateThrottle counting as the option is the number of overall connections, not the number of connections per socket. A future version may change this to per socket counting. Portability: Clean up libsmdb so it functions properly on platforms where sizeof(u_int32_t) != sizeof(size_t). Problem noted by Rein Tollevik of Basefarm AS. Fix man page formatting for compatibility with Solaris' whatis. From Stephen Gildea of InTouch Systems, Inc. UnixWare 7 includes snprintf() support. From Larry Rosenman. IPv6 changes for platforms using KAME. Patch from Jun-ichiro itojun Hagino of the KAME Project. Avoid a typedef compile conflict with Berkeley DB 3.X and Solaris 2.5 or earlier. Problem noted by Bob Hughes of Pacific Access. Add preliminary support for AIX 5. Contributed by Valdis Kletnieks of Virginia Tech. Solaris 9 load average support from Andrew Tucker of Sun Microsystems. CONFIG: Reject addresses of the form a!b if FEATURE(`nouucp', `r') is used. Problem noted by Phil Homewood of Asia Online, patch from Neil Rickert of Northern Illinois University. CONFIG: Change the default DNS based blacklist server for FEATURE(`dnsbl') to blackholes.mail-abuse.org. CONFIG: Deal correctly with the 'C' flag in {daemon_flags}, i.e., implicitly assume canonical host names. CONFIG: Deal with "::" in IPv6 addresses for access_db. 
Based on patch by Motonori Nakamura of Kyoto University. CONFIG: New OSTYPE(`aix5') contributed by Valdis Kletnieks of Virginia Tech. CONFIG: Pass the illegal header form <list:;> through untouched instead of making it worse. Problem noted by Motonori Nakamura of Kyoto University. CONTRIB: Added buildvirtuser (see `perldoc contrib/buildvirtuser`). CONTRIB: qtool.pl: An empty queue is not an error. Problem noted by Jan Krueger of digitalanswers communications consulting gmbh. CONTRIB: domainmap.m4: Handle domains with '-' in them. From Mark Roth of the University of Illinois at Urbana-Champaign. DEVTOOLS: Change the internal devtools OS, REL, and ARCH m4 variables into bldOS, bldREL, and bldARCH to prevent namespace collisions. Problem noted by Motonori Nakamura of Kyoto University. RMAIL: Undo the 8.11.1 change to use -G when calling sendmail. It causes some changes in behavior and may break rmail for installations where sendmail is actually a wrapper to another MTA. The change will re-appear in a future version. SMRSH: Use the vendor supplied directory on HPUX 10.X, HPUX 11.X, and SunOS 5.8. Requested by Jeff A. Earickson of Colby College and John Beck of Sun Microsystems. VACATION: Fix pattern matching for addresses to ignore. VACATION: Don't reply to addresses of the form owner-* or *-owner. New Files: cf/ostype/aix5.m4 contrib/buildvirtuser devtools/OS/AIX.5.0. Problem noted by Tim "Darth Dice" Bosserman of EarthLink. 8.11.0/8.11.0 2000/07/19 SECURITY: If sendmail is installed as a non-root set-user-ID binary (not the normal case), some operating systems will still keep a saved-uid of the effective-uid when sendmail tries to drop all of its privileges. If sendmail needs to drop these privileges and the operating system doesn't set the saved-uid as well, exit with an error. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. SECURITY: sendmail depends on snprintf() NUL terminating the string it populates. It is possible that some broken implementations of snprintf() exist that do not do this. Systems in this category should compile with -DSNPRINTF_IS_BROKEN=1. Use test/t_snprintf.c to test your system and report broken implementations to [email protected] and your OS vendor. Problem noted by Slawomir Piotrowski of TELSAT GP. Support SMTP Service Extension for Secure SMTP (RFC 2487) (STARTTLS). Implementation influenced by the example programs of OpenSSL and the work of Lutz Jaenicke of TU Cottbus. Add new STARTTLS related options CACERTPath, CACERTFile, ClientCertFile, ClientKeyFile, DHParameters, RandFile, ServerCertFile, and ServerKeyFile. These are documented in cf/README and doc/op/op.*. New STARTTLS related macros: ${cert_issuer}, ${cert_subject}, ${tls_version}, ${cipher}, ${cipher_bits}, ${verify}, ${server_name}, and ${server_addr}. These are documented in cf/README and doc/op/op.*. Add support for the Entropy Gathering Daemon (EGD) for better random data. New DontBlameSendmail option InsufficientEntropy for systems which don't properly seed the PRNG for OpenSSL but want to try to use STARTTLS despite the security problems. Support the security layer in SMTP AUTH for mechanisms which support encryption. Based on code contributed by Tim Martin of CMU. Add new macro ${auth_ssf} to reflect the SMTP AUTH security strength factor. LDAP's -1 (single match only) flag was not honored if the -z (delimiter) flag was not given. Problem noted by ST Wong of the Chinese University of Hong Kong. Fix from Mark Adamson of CMU. 
Add more protection from accidentally tripping OpenLDAP 1.X's ld_errno == LDAP_DECODING_ERROR hack on ldap_next_attribute(). Suggested by Kurt Zeilenga of OpenLDAP. Fix the default family selection for DaemonPortOptions. As documented, unless a family is specified in a DaemonPortOptions option, "inet" is the default. It is also the default if no DaemonPortOptions value is set. Therefore, IPv6 users should configure additional sockets by adding DaemonPortOptions settings with Family=inet6 if they wish to also listen on IPv6 interfaces. Problem noted by Jun-ichiro itojun Hagino of the KAME Project. Set ${if_family} when setting ${if_addr} and ${if_name} to reflect the interface information for an outgoing connection. Not doing so was creating a mismatch between the socket family and address used in subsequent connections if the M=b modifier was set in DaemonPortOptions. Problem noted by John Beck of Sun Microsystems. If DaemonPortOptions modifier M=b is used, determine the socket family based on the IP address. ${if_family} is no longer persistent (i.e., saved in qf files). Patch from John Beck of Sun Microsystems. sendmail 8.10 and 8.11 reused the ${if_addr} and ${if_family} macros for both the incoming interface address/family and the outgoing interface address/family. In order for M=b modifier in DaemonPortOptions to work properly, preserve the incoming information in the queue file for later delivery attempts. Use SMTP error code and enhanced status code from check_relay in responses to commands. Problem noted by Jeff Wasilko of smoe.org. Add more vigilance in checking for putc() errors on output streams to protect from a bug in Solaris 2.6's putc(). Problem noted by Graeme Hewson of Oracle. The LDAP map -n option (return attribute names only) wasn't working. Problem noted by Ajay Matia. Under certain circumstances, an address could be listed as deferred but would be bounced back to the sender as failed to be delivered when it really should have been queued. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute. Prevent a segmentation fault in a child SMTP process from getting the SMTP transaction out of sync. Problem noted by Per Hedeland of Ericsson. Turn off RES_DEBUG if SFIO is defined unless SFIO_STDIO_COMPAT is defined to avoid a core dump due to incompatibilities between sfio and stdio. Problem noted by Neil Rickert of Northern Illinois University. Don't log useless envelope ID on initial connection log. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. Convert the free disk space shown in a control socket status query to kilobyte units. If TryNullMXList is True and there is a temporary DNS failure looking up the hostname, requeue the message for a later attempt. Problem noted by Ari Heikkinen of Pohjois-Savo Polytechnic. Under the proper circumstances, failed connections would be recorded as "Bad file number" instead of "Connection failed" in the queue file and persistent host status. Problem noted by Graeme Hewson of Oracle. Avoid getting into an endless loop if a non-hoststat directory exists within the hoststatus directory (e.g., lost+found). Patch from Valdis Kletnieks of Virginia Tech. Make sure Timeout.queuereturn=now returns a bounce message to the sender. Problem noted by Per Hedeland of Ericsson. If a message data file can't be opened at delivery time, panic and abort the attempt instead of delivering a message that states "<<< No Message Collected >>>". Fixup the GID checking code from 8.10.2 as it was overly restrictive. 
Problem noted by Mark G. Thomas of Mark G. Thomas Consulting. Preserve source port number instead of replacing it with the ident port number (113). Document the queue status characters in the mailq man page. Suggested by Ulrich Windl of the Universitat Regensburg. Process queued items in which none of the recipient addresses have host portions (or there are no recipients). Problem noted by Valdis Kletnieks of Virginia Tech. If a cached LDAP connection is used for multiple maps, make sure only the first to open the connection is allowed to close it so a later map close doesn't break the connection for other maps. Problem noted by Wolfgang Hottgenroth of UUNET. Netscape's LDAP libraries do not support Kerberos V4 authentication. Patch from Rainer Schoepf of the University of Mainz. Provide workaround for inconsistent handling of data passed via callbacks to Cyrus SASL prior to version 1.5.23. Mention ENHANCEDSTATUSCODES in the SMTP HELP helpfile. Omission noted by Ulrich Windl of the Universitat Regensburg. Portability: Add the ability to read IPv6 interface addresses into class 'w' under FreeBSD (and possibly others). From Jun Kuriyama of IMG SRC, Inc. and the FreeBSD Project. Replace code for finding the number of CPUs on HPUX. NCRUNIX MP-RAS 3.02 SO_REUSEADDR socket option does not work properly causing problems if the accept() fails and the socket needs to be reopened. Patch from Tom Moore of NCR. NetBSD uses a .0 extension of formatted man pages. From Andrew Brown of Crossbar Security. Return to using the IPv6 AI_DEFAULT flag instead of AI_V4MAPPED for calls to getipnodebyname(). The Linux implementation is broken so AI_ADDRCONFIG is stripped under Linux. From John Beck of Sun Microsystems and John Kennedy of Cal State University, Chico. CONFIG: Catch invalid addresses containing a ',' at the wrong place. Patch from Neil Rickert of Northern Illinois University. CONFIG: New variables for the new sendmail options: confCACERT_PATH CACERTPath confCACERT CACERTFile confCLIENT_CERT ClientCertFile confCLIENT_KEY ClientKeyFile confDH_PARAMETERS DHParameters confRAND_FILE RandFile confSERVER_CERT ServerCertFile confSERVER_KEY ServerKeyFile CONFIG: Provide basic rulesets for TLS policy control and add new tags to the access database to support these policies. See cf/README for more information. CONFIG: Add TLS information to the Received: header. CONFIG: Call tls_client ruleset from check_mail in case it wasn't called due to a STARTTLS command. CONFIG: If TLS_PERM_ERR is defined, TLS related errors are permanent instead of temporary. CONFIG: FEATURE(`relay_hosts_only') didn't work in combination with the access map and relaying to a domain without using a To: tag. Problem noted by Mark G. Thomas of Mark G. Thomas Consulting. CONFIG: Set confEBINDIR to /usr/sbin to match the devtools entry in OSTYPE(`linux') and OSTYPE(`mklinux'). From Tim Pierce of RootsWeb.com. CONFIG: Make sure FEATURE(`nullclient') doesn't use aliasing and forwarding to make it as close to the old behavior as possible. Problem noted by George W. Baltz of the University of Maryland. CONFIG: Added OSTYPE(`darwin') for Mac OS X and Darwin users. From Wilfredo Sanchez of Apple Computer, Inc. CONFIG: Changed the map names used by FEATURE(`ldap_routing') from ldap_mailhost and ldap_mailroutingaddress to ldapmh and ldapmra as underscores in map names cause problems if underscore is in OperatorChars. Problem noted by Bob Zeitz of the University of Alberta. CONFIG: Apply blacklist_recipients also to hosts in class {w}. 
Patch from Michael Tratz of Esosoft Corporation. CONFIG: Use A=TCP ... instead of A=IPC ... in SMTP mailers. CONTRIB: Add link_hash.sh to create symbolic links to the hash of X.509 certificates. CONTRIB: passwd-to-alias.pl: More protection from special characters; treat special shells as root aliases; skip entries where the GECOS full name and username match. From Ulrich Windl of the Universitat Regensburg. CONTRIB: qtool.pl: Add missing last_modified_time method and fix a typo. Patch from Graeme Hewson of Oracle. CONTRIB: re-mqueue.pl: Improve handling of a race between re-mqueue and sendmail. Patch from Graeme Hewson of Oracle. CONTRIB: re-mqueue.pl: Don't exit(0) at end so can be called as subroutine Patch from Graeme Hewson of Oracle. CONTRIB: Add movemail.pl (move old mail messages between queues by calling re-mqueue.pl) and movemail.conf (configuration script for movemail.pl). From Graeme Hewson of Oracle. CONTRIB: Add cidrexpand (expands CIDR blocks as a preprocessor to makemap). From Derek J. Balling of Yahoo,Inc. DEVTOOLS: INSTALL_RAWMAN installation option mistakenly applied any extension modifications (e.g., MAN8EXT) to the installation target. Patch from James Ralston of Carnegie Mellon University. DEVTOOLS: Add support for SunOS 5.9. DEVTOOLS: New option confLN contains the command used to create links. LIBSMDB: Berkeley DB 2.X and 3.X errors might be lost and not reported. MAIL.LOCAL: DG/UX portability. Problem noted by Tim Boyer of Denman Tire Corporation. MAIL.LOCAL: Prevent a possible DoS attack when compiled with -DCONTENTLENGTH. Based on patch from [email protected]. MAILSTATS: Fix usage statement (-p and -o are optional). MAKEMAP: Change man page layout as workaround for problem with nroff and -man on Solaris 7. Patch from Larry Williamson. RMAIL: AIX 4.3 has snprintf(). Problem noted by David Hayes of Black Diamond Equipment, Limited. RMAIL: Prevent a segmentation fault if the incoming message does not have a From line. VACATION: Read all of the headers before deciding whether or not to respond instead of stopping after finding recipient. Added Files: cf/ostype/darwin.m4 contrib/cidrexpand contrib/link_hash.sh contrib/movemail.conf contrib/movemail.pl devtools/OS/SunOS.5.9 test/t_snprintf.c 8.10.2/8.10.2 2000/06/07 SECURITY: Work around broken Linux setuid() implementation. On Linux, a normal user process has the ability to subvert the setuid() call such that it is impossible for a root process to drop its privileges. Problem noted by Wojciech Purczynski of elzabsoft.pl. SECURITY: Add more vigilance around set*uid(), setgid(), setgroups(), initgroups(), and chroot() calls. Added Files: test/t_setuid.c The following are the known problems and limitations in Sendmail 8.12.9, as provided by sendmail.org. For descriptions of bugs that have been fixed, see the ``Sendmail Version 8.12.9 Release Notes''. * Delivery to programs that generate too much output may cause problems If e-mail is delivered to a program which generates too much output, then sendmail may issue an error: timeout waiting for input from local during Draining Input Make sure that the program does not generate output beyond a status message (corresponding to the exit status). This may require a wrapper around the actual program to redirect output to /dev/null. Such a problem has been reported for bulk_mailer. *. * Header checks are not called if header value is too long or empty. 
If the value of a header is longer than 1250 (MAXNAME + MAXATOM - 6) characters or it contains a single word longer than 256 (MAXNAME) characters, then no header check is done, even if one is configured for the header.

* Sender addresses whose domain part causes a temporary A record lookup failure but which have a valid MX record will be temporarily rejected in the default configuration. Solution: fix the DNS at the sender side. If that's not easy to achieve, possible workarounds are: - add an entry to the access map: dom.ain OK - (only for advanced users) replace

    # Resolve map (to check if a host exists in check_mail)
    Kresolve host -a<OKR> -T<TEMP>

with

    # Resolve map (to check if a host exists in check_mail)
    Kcanon host -a<OKR> -T<TEMP>
    Kdnsmx dns -R MX -a<OKR> -T<TEMP>
    Kresolve sequence dnsmx canon

* Duplicate error messages. Sometimes identical, duplicate error messages can be generated. As near as I can tell, this is rare and relatively innocuous.

* Misleading error messages. If an illegal address is specified on the command line together with at least one valid address and PostmasterCopy is set, the DSN does not contain the illegal address, but only the valid address(es).

* Client ignores SIZE parameter. When sendmail acts as client and the server specifies a limit for the mail size, sendmail will ignore this and try to send the mail anyway. The server will usually reject the MAIL command which specifies the size of the message, and hence this problem is not significant.

* Set-user-ID files. Sendmail will deliver to a file if the file is owned by the DefaultUser or has the set-user-ID bit set.

* MAIL_HUB always takes precedence over LOCAL_RELAY. Despite the information in the documentation, MAIL_HUB ($H) will always be used if set instead of LOCAL_RELAY ($R). This will be fixed in a future version.

$Revision: 8.55.2.1 $, Last updated $Date: 2002/12/18 22:38:48 $
http://uw714doc.xinuos.com/en/MM_admin/sendmail-8.12.9-rn.html
CC-MAIN-2020-05
en
refinedweb
Hello, all! First off, this is program from the ACM programming contest, of which today I and two teammates competed. We managed to solve two of nine correctly, but sadly the two programs I was responsible for were never accepted. I am hoping that someone here can help me understand what I overlooked. Don't worry, the contest is over, so you are not helping me cheat. We had no access to the internet during the contest. The program I will focus on in this thread is as follows: Rank Order Your team has been retained by the director of a competition who supervises a panel of judges. The competition asks the judges to assign integer scores to competitors -- the higher the score, the better. Although the event has standards for what score values mean, each judge is likely to interpret those standards differently. A score of 100, say, may mean different things to different judges. The director's main objective is to determine which competitors should receive prizes for the top positions. Although absolute scores may differ from judge to judge, the director realizes that relative rankings provide the needed information -- if two judges rank the some competitors first, second, third, ... then they agree on who should receive the prizes. Your team is to write a program to assist the director by comparing the scores of pairs of judges. The program is to read two lists of integer scores in competitor order and determine the highest ranking place (first place being highest) at which the judges disagree. Input Input to your program will be a series of score list pairs. Each pair begins with a single integer giving the number of competitors N, 1 < N < 1,000,000. The next N integers are the scores from the first judge in competitor order. The are followed by the second judge's scores -- N more integers, also in competitor order. Scores are in the range 0 to 100,000,000 inclusive. Judges are not allowed to give ties, so each judge's scores will be unique. Values are separated from each other by one or more spaces and/or newlines. The last score list pair is followed by the end-of-file indicator. Output For each score pair, print a line with the integer representing the highest-ranking place at which the judges do not agree. If the judges agree on ever place, print a line containing only the word 'agree'. Use the format below: "Case", one space, the case number, a colon and one space, and the answer for that case with no trailing spaces. 
Sample Input 4 3 8 6 2 15 37 17 3 8 80 60 40 20 10 30 50 70 160 100 120 80 20 60 90 135 Sample Output Case 1: agree Case 2: 3 The following is (roughly, I'm retyping now my memory): The logic is fairly simple:The logic is fairly simple:Code:#include <iostream> using namespace std; struct contestant { int score, cont_num; }; void swap( contestant a[], int i, int j ); void qsort( contestant a[], int left, int right ); int num_of_conts; int case_num = 1; int main(){ cin >> num_of_conts; while( cin ) { contestant * judge1 = new contestant[num_of_conts]; // list for each judges scores contestant * judge2 = new contestant[num_of_conts]; // same index means same contestant, just score from diff judge for( int i = 0; i < num_of_conts; i++ ) { cin >> judge1[i].score; judge1[i].cont_num = i; } for( int i = 0; i < num_of_conts; i++ ) { cin >> judge2[i].score; judge2[i].cont_num = i; } // sort both arrays of contestants by score, from lowest score to highest qsort( judge1, 0, num_of_conts - 1 ); qsort( judge2, 0, num_of_conts - 1 ); int i; for( i = num_of_conts - 1; i >= 0; i-- ) if( judge1[i].cont_num != judge2[i].cont_num ) break; cout << "Case " << case_num++ << ": "; if( i == -1 ) cout << "agree" << endl; else cout << num_of_conts - i << endl; delete [] judge1; delete [] judge2; cin >> num_of_conts; } return 0; } void swap( contestant a[], int i, int j ) { contestant temp = a[i]; a[i] = a[j]; a[j] = temp; } void qsort( contestant a[], int left, int right ) { int i, last; if( left >= right ) return; swap( a, left, ( left + right ) / 2 ); last = left; for( i = left + 1; i <= right; i++ ) if( a[i].score < a[left].score ) swap( a, ++last, i ); swap( a, left, last ); qsort( a, left, last - 1 ); qsort( a, last + 1, right ); } *chug in the scores *sort contestants based on scores *iterate along the list of contestants, from best to worst, breaking on mismatch The judges never accepted any iteration of this code. All I could think of doing was switching most/all the ints to long and then long longs in case of any overflows. Any thoughts on why it was never accepted?
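For anyone revisiting this thread, here is a short reference re-implementation of the same logic in Python — not a judge-approved solution, just a way to cross-check the C++ against the samples. It reads whitespace-separated tokens until EOF as the statement describes, ranks each judge's competitors from best (highest score) to worst with the built-in sort, and reports the first place where the two orderings differ:

import sys

def main():
    data = sys.stdin.read().split()  # values separated by spaces and/or newlines
    pos, case, out = 0, 1, []
    while pos < len(data):
        n = int(data[pos]); pos += 1
        judge1 = [int(x) for x in data[pos:pos + n]]; pos += n
        judge2 = [int(x) for x in data[pos:pos + n]]; pos += n
        # competitor indices ordered best (highest score) to worst, per judge
        order1 = sorted(range(n), key=lambda c: -judge1[c])
        order2 = sorted(range(n), key=lambda c: -judge2[c])
        answer = "agree"
        for place, (a, b) in enumerate(zip(order1, order2), start=1):
            if a != b:
                answer = str(place)
                break
        out.append("Case %d: %s" % (case, answer))
        case += 1
    print("\n".join(out))

if __name__ == "__main__":
    main()

This reproduces "agree" and "3" for the two sample cases. One guess about the rejection (only a guess, not a confirmed diagnosis): with N up to a million, a hand-rolled recursive quicksort that always picks the middle element can degrade to quadratic time or deep recursion on adversarial inputs, so a library sort such as std::sort would be safer in the C++ version; slow unsynchronized cin input could also be a factor.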
https://cboard.cprogramming.com/cplusplus-programming/164914-acm-programming-contest-seeking-answers-post1216794.html?s=63e6ffd83e37f4cdd6d82b4e00257f9e
CC-MAIN-2020-05
en
refinedweb
Populating a DropDownList from DB (ASP.Net)
Hi, Me again :) Wanting to hook in a drop down list to DB data; I have the stored procedure ready to go. What's the best way to populate it? I've seen various methods, but what I haven't been able to figure out is how to hook it up with just the DropDownList.DataSource property? Anyone care to explain how?
Regards James
James 'Smiler' Farrer Thursday, August 7, 2003
Here's a decent tutorial:. If you're using Sql Server, use the SqlClient namespace as opposed to the OleDb one. Also, remember to dispose of your Command objects in addition to closing your database connection. You can probably find many similar ones with a quick search from Google.
rick Thursday, August 7, 2003
Here's a sub I use for a ddl. The Output parameter could be an Input parameter. Parameters need to be created in the same order they appear in the Stored Procedure.

Private Sub TypeLoad()
    'Fills the ddlTypes drop down list box
    Dim dr As SqlDataReader
    Dim sSql As String
    Dim scn As String
    Dim cmd As New SqlCommand()
    Dim i As Integer
    'ocosp_4000_GetAllContactTypes returns all the Types
    'from TypeDriver table
    sSql = "ocosp_4000_GetAllContactTypes"
    scn = Session("ConnectString").ToString
    With cmd
        .Connection = _
            New SqlConnection(scn)
        .CommandText = sSql
        .CommandType = CommandType.StoredProcedure
        .Connection.Open()
        With .Parameters.Add("@ct", SqlDbType.Int)
            .Direction = ParameterDirection.Output
        End With
        .Parameters("@ct").Value = 0
        dr = .ExecuteReader(CommandBehavior.CloseConnection)
    End With
    Try
        'Fills ddlTypes
        With ddlTypes
            .DataSource = dr
            .DataTextField = "TypeDesc"
            .DataValueField = "TypeCode"
            .DataBind()
        End With
    Catch
    End Try
    dr.Close()
    cmd.Connection.Close()
End Sub

Ed Roberts Tuesday, August 19, 2003
https://discuss.fogcreek.com/dotnetquestions/2099.html
CC-MAIN-2020-05
en
refinedweb
Backup/Restore RabbitMQ queues Project description This is a Python library and utilities for saving and loading RabbitMQ messages to and from disk. Why? - A program cannot publish a message to RabbitMQ, but does not want to lose the message. Using this library, the program can save the message to a file. Later, after fixing the problem, a person can use a utility in this package to publish the message. - The consumer has a bug and messages are piling up in a queue. You have fixed the bug and your tests seem to indicate the consumer will now work, but you want some insurance. Using a program in this package, you can back up the queue to a file. If the program processes the messages incorrectly, you can use use a program in this package to restore the queue from the file. - You want to edit messages in a queue. You can dump the queue to a file, use a text editor or a one-off program to change the file, and then write the messages back into the queue. Why not? This library is brand new and has not been battle tested. Proceed with caution. Backup format Queues are backed up and restored from text files containing json. Each line in the file is a JSON dictionary with this structure: { "body": "This is a test", "delivery_info": { "delivery_tag": 1 }, "properties": { "app_id": null, "cluster_id": null, "content_encoding": null, "content_type": "text/plain", "correlation_id": null, "delivery_mode": 2, "expiration": null, "headers": { "client_id": null, "host_ip": "10.0.0.15", "host_name": "treebeard", "library_id": "olio_msg (Python) 1.9.0", "metadata_version": "1.0.0", "program_name": "olio_msg_send_test_messages", "version": "1.0.0" }, "message_id": "1df3eb23ffeb476b8355d87b475eb627", "priority": null, "reply_to": null, "timestamp": 1434644774, "type": "test", "user_id": null } } The line was shown pretty-printed, but it’s actually just one line: {"body": "This is a test", "delivery_info": {...}, "properties: {...}} Using the command-line utilities To backup a queue rabbit_droppings --host localhost --queue jobs \ --file /path/to/save/file --dump To backup and purge a queue rabbit_droppings --host localhost --queue jobs \ --file /path/to/save/file --purge To restore a queue rabbit_droppings --host localhost --queue jobs \ --file /path/to/save/file --restore Saving a message with the pika library If you have a program using the pika library to publish messages to a RabbitMQ server, here’s how to save messages that could not be published: First, import the library: import rabbit_droppings If you have a program that needs to save messages that cannot be published using the pika library, it should create a rabbit_droppings.Writer (probably in the constructor): file = open('/path/to/my/file', 'a') self._rd_writer = rabbit_droppings.Writer(file) When you publish a message and a pika exception occurs, save the message: body = "Message body" properties = pika.spec.BasicProperties( content_type='text/plain' ) try: channel.basic_publish(exchange='', routing_key='some_queue_name', properties=properties, body=body) except (pika.exceptions.AMQPError, pika.exceptions.ChannelError) as e: pika_message = rabbit_droppings.PikaMessage(body, properties=properties) self._rd_writer.write(pika_message) Versioning This library practices Semantic Versioning. This library is currently in alpha; it’s versions look like “0.1.0”, “0.2.0”, etc. There are no guarantees with alpha versions: Any version bump could be any combination of bug fix, backwards compatible API change, or breaking API change. 
When the library becomes stable, its version number will be bumped to "1.0.0". Semantic versioning makes these promises for stable versions: - A patch-level version bump (e.g. "1.0.0" to "1.0.1") does not change the public API. - A minor-level version bump (e.g. "1.0.0" to "1.1.0") changes the public API in a backward-compatible manner. - A major-level version bump (e.g. "1.0.0" to "2.0.0") changes the public API in some way that is not backward compatible. Python version Known to work with Python versions: - 2.6.9 - 2.7.9 Development Running the tests requires a RabbitMQ server installed locally. The tests are known to pass with these RabbitMQ versions: - 3.4.1 To run the tests: ./setup.py test
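Since each line of a dump is a standalone JSON object, reading a backup file back in your own code does not require the library at all. A minimal sketch (these helper names are illustrative, not part of rabbit_droppings' documented API):

import json

def read_dump(path):
    """Yield one message dict per line of a rabbit_droppings dump file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

for msg in read_dump("/path/to/save/file"):
    print(msg["properties"].get("message_id"), msg["body"])

Each dict carries the body, delivery_info and properties keys shown in the backup-format example above.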
https://pypi.org/project/rabbit_droppings/
CC-MAIN-2020-05
en
refinedweb
Stephen Stewart (Pro Student, 4,840 Points) Stuck on this bad boy! I tried different attempts of trying to crack this and my head is about to pop :/ I can't think of what I'm doing wrong or it is the lack of oxygen going to my brain to think.

using Treehouse.Models;

namespace Treehouse.Data
{
    public class VideoGamesRepository
    {
        public VideoGame GetVideoGame(int id)
        {
            foreach(var videoGame in _videoGames)
            {
                if (videoGame.Id == id)
                {
                    return videoGame;
                }
            }
            return null;
        }
        + ")"; } } } }

1 Answer
Steven Parker (179,649 Points) Don't forget the caution given at the start of the challenge: "Important: In each task of this code challenge, the code you write should be added to the code from the previous task.". I see your task 2 code, but I don't see the function you must have written to pass task 1. Perhaps you replaced it instead of adding to it while working on task 2. It might help to leave the two comment lines in the code, and add each task's function below the related comment.
https://teamtreehouse.com/community/stuck-on-this-bad-boy
CC-MAIN-2020-05
en
refinedweb
By: Stanley B. There are two kinds of comments in C++: single-line and paired. A single-line comment starts with a double slash (//). Everything to the right of the slashes on the current line is a comment and ignored by the compiler. The other delimiter, the comment pair (/* */), is inherited from the C language. Such comments begin with a /* and end with the next */. The compiler treats everything that falls between the /* and */ as part of the comment:

#include <iostream>
/* Simple main function: Read two numbers and write their sum */
int main()
{
    // prompt user to enter two numbers
    std::cout << "Enter two numbers:" << std::endl;
    int v1, v2;           // uninitialized
    std::cin >> v1 >> v2; // read input
    return 0;
}

A comment pair can be placed anywhere a tab, space, or newline is permitted. Comment pairs can span multiple lines of a program but are not required to do so. When a comment pair does span multiple lines, it is often a good idea to indicate visually that the inner lines are part of a multi-line comment. Our style is to begin each line in the comment with an asterisk, thus indicating that the entire range is part of a multi-line comment. Programs typically contain a mixture of both comment forms. Comment pairs generally are used for multi-line explanations, whereas double slash comments tend to be used for half-line and single-line remarks. Too many comments intermixed with the program code can obscure the code. It is usually best to place a comment block above the code it explains. Comments should be kept up to date as the code itself changes. Programmers expect comments to remain accurate and so believe them, even when other forms of system documentation are known to be out of date. An incorrect comment is worse than no comment at all because it may mislead a subsequent reader.
https://www.java-samples.com/showtutorial.php?tutorialid=1445
CC-MAIN-2020-05
en
refinedweb
Predicting stock prices has always been an attractive topic to both investors and researchers. Investors always question if the price of a stock will rise or not, since there are many complicated financial indicators that only investors and people with good finance knowledge can understand, the trend of stock market is inconsistent and look very random to ordinary people. Machine learning is a great opportunity for non-experts to be able to predict accurately and gain steady fortune and may help experts to get the most informative indicators and make better predictions. The purpose of this tutorial is to build a neural network in TensorFlow 2 and Keras that predicts stock market prices. More specifically, we will build a Recurrent Neural Network with LSTM cells as it is the current state-of-the-art in time series forecasting. Alright, let's get start. First, you need to install Tensorflow 2 and other libraries: pip3 install tensorflow pandas numpy matplotlib yahoo_fin sklearn More information on how you can install Tensorflow 2 here. Once you have everything set up, open up a new Python file (or a notebook) and import the following libraries: from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, Dense, Dropout from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from yahoo_fin import stock_info as si from collections import deque import numpy as np import pandas as pd import matplotlib.pyplot as plt import random import time import os We are using yahoo_fin module, it is essentially a Python scraper that extracts finance data from Yahoo Finance platform, so it isn't a reliable API, feel free to use other data sources such as Alpha Vantage. Learn also: How to Build a Spam Classifier using Keras in Python. 
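Before writing any preprocessing code, it's worth taking a quick look at the raw data that yahoo_fin hands back. si.get_data() returns a pandas DataFrame indexed by date, containing the open, high, low, close, adjclose and volume columns we'll be using as features (the ticker below is just an example):

from yahoo_fin import stock_info as si

df = si.get_data("AAPL")
print(df.shape)     # (rows, columns)
print(df.columns)   # open, high, low, close, adjclose, volume, ticker
print(df.tail())    # the most recent trading days

If this prints a sensible table, the scraper is working and we can move on to building the dataset.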
As a first step, we need to write a function that downloads the dataset from the Internet and preprocess it: def load_data(ticker, n_steps=50, scale=True, shuffle=True, lookup_step=1, test_size=0.2, feature_columns=['adjclose', 'volume', 'open', 'high', 'low']): # see if ticker is already a loaded stock from yahoo finance if isinstance(ticker, str): # load it from yahoo_fin library df = si.get_data(ticker) elif isinstance(ticker, pd.DataFrame): # already loaded, use it directly df = ticker # this will contain all the elements we want to return from this function result = {} # we will also return the original dataframe itself result['df'] = df.copy() # make sure that the passed feature_columns exist in the dataframe for col in feature_columns: assert col in df.columns if scale: column_scaler = {} # scale the data (prices) from 0 to 1 for column in feature_columns: scaler = preprocessing.MinMaxScaler() df[column] = scaler.fit_transform(np.expand_dims(df[column].values, axis=1)) column_scaler[column] = scaler # add the MinMaxScaler instances to the result returned result["column_scaler"] = column_scaler # add the target column (label) by shifting by `lookup_step` df['future'] = df['adjclose'].shift(-lookup_step) # last `lookup_step` columns contains NaN in future column # get them before droping NaNs last_sequence = np.array(df[feature_columns].tail(lookup_step)) # drop NaNs df.dropna(inplace=True) sequence_data = [] sequences = deque(maxlen=n_steps) for entry, target in zip(df[feature_columns].values, df['future'].values): sequences.append(entry) if len(sequences) == n_steps: sequence_data.append([np.array(sequences), target]) # get the last sequence by appending the last `n_step` sequence with `lookup_step` sequence # for instance, if n_steps=50 and lookup_step=10, last_sequence should be of 59 (that is 50+10-1) length # this last_sequence will be used to predict in future dates that are not available in the dataset last_sequence = list(sequences) + list(last_sequence) # shift the last sequence by -1 last_sequence = np.array(pd.DataFrame(last_sequence).shift(-1).dropna()) # add to result result['last_sequence'] = last_sequence # construct the X's and y's X, y = [], [] for seq, target in sequence_data: X.append(seq) y.append(target) # convert to numpy arrays X = np.array(X) y = np.array(y) # reshape X to fit the neural network X = X.reshape((X.shape[0], X.shape[2], X.shape[1])) # split the dataset result["X_train"], result["X_test"], result["y_train"], result["y_test"] = train_test_split(X, y, test_size=test_size, shuffle=shuffle) # return the result return result This function is long but handy, it accepts several arguments to be as flexible as possible. The ticker argument is the ticker we want to load, for instance, you can use TSLA for Tesla stock market, AAPL for Apple and so on.. scale is a boolean variable that indicates whether to scale prices from 0 to 1, we will set this to True as scaling high values from 0 to 1 will help the neural network to learn much faster and more effectively. lookup_step is the future lookup step to predict, the default is set to 1 (e.g next day). We will be using all the features available in this dataset, which are the open, high, low, volume and adjusted close. The above function does the following: To understand even more better, I highly suggest you to manually print the output variable (result) and see how the features and labels are made. Related: How to Make a Speech Emotion Recognizer Using Python And Scikit-learn. 
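As suggested, printing pieces of the returned dictionary is a cheap sanity check before any training happens. Something along these lines (shapes will differ depending on the ticker and the parameters you pass) confirms that the split and the sequence shapes look right:

data = load_data("AAPL", n_steps=50, lookup_step=1, test_size=0.2)
print("X_train:", data["X_train"].shape)       # (samples, features, n_steps) after the reshape above
print("y_train:", data["y_train"].shape)
print("X_test: ", data["X_test"].shape)
print("last_sequence:", data["last_sequence"].shape)
print(data["df"].tail())                       # the original dataframe is kept under "df"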
Now that we have a proper function to load and prepare the dataset, we need another core function to build our model: def create_model(input_length, units=256, cell=LSTM, n_layers=2, dropout=0.3, loss="mean_absolute_error", optimizer="rmsprop"): model = Sequential() for i in range(n_layers): if i == 0: # first layer model.add(cell(units, return_sequences=True, input_shape=(None, input_length))) elif i == n_layers - 1: # last layer model.add(cell(units, return_sequences=False)) else: # hidden layers model.add(cell(units, return_sequences=True)) # add dropout after each layer model.add(Dropout(dropout)) model.add(Dense(1, activation="linear")) model.compile(loss=loss, metrics=["mean_absolute_error"], optimizer=optimizer) return model Again, this function is flexible too, you can change the number of layers, dropout rate, the RNN cell, loss and the optimizer used to compile the model. The above function constructs a RNN that has a dense layer as output layer with 1 neuron, this model requires a sequence of features of input_length (in this case, we will pass 50) consecutive time steps (which is days in this dataset) and outputs a single value which indicates the price of the next time step. Now that we have all the core functions ready, let's train our model, but before we do that, let's initialize all our parameters (so you can edit them later on your needs): # Window size or the sequence length N_STEPS = 50 # Lookup step, 1 is the next day LOOKUP_STEP = 1 # test ratio size, 0.2 is 20% TEST_SIZE = 0.2 # features to use FEATURE_COLUMNS = ["adjclose", "volume", "open", "high", "low"] # date now date_now = time.strftime("%Y-%m-%d") ### model parameters N_LAYERS = 3 # LSTM cell CELL = LSTM # 256 LSTM neurons UNITS = 256 # 40% dropout DROPOUT = 0.4 ### training parameters # mean squared error loss LOSS = "mse" OPTIMIZER = "rmsprop" BATCH_SIZE = 64 EPOCHS = 300 # Apple stock market ticker = "AAPL" ticker_data_filename = os.path.join("data", f"{ticker}_{date_now}.csv") # model name to save model_name = f"{date_now}_{ticker}-{LOSS}-{CELL.__name__}-seq-{N_STEPS}-step-{LOOKUP_STEP}-layers-{N_LAYERS}-units-{UNITS}" Let's make sure the results, logs and data folders exist before we train: # create these folders if they does not exist if not os.path.isdir("results"): os.mkdir("results") if not os.path.isdir("logs"): os.mkdir("logs") if not os.path.isdir("data"): os.mkdir("data") Also, you may train offline, if the dataset already exists in data folder, let's load it instead: # load the CSV file from disk (dataset) if it already exists (without downloading) if os.path.isfile(ticker_data_filename): ticker = pd.read_csv(ticker_data_filename) Finally, let's train the model: # load the data data = load_data(ticker, N_STEPS, lookup_step=LOOKUP_STEP, test_size=TEST_SIZE, feature_columns=FEATURE_COLUMNS) if not os.path.isfile(ticker_data_filename): # save the CSV file (dataset) data["df"].to_csv(ticker_data_filename) # construct the model model = create_model(N_STEPS, loss=LOSS, units=UNITS, cell=CELL, n_layers=N_LAYERS, dropout=DROPOUT, optimizer=OPTIMIZER) # some tensorflow callbacks checkpointer = ModelCheckpoint(os.path.join("results", model_name), save_best_only=True, verbose=1) tensorboard = TensorBoard(log_dir=os.path.join("logs", model_name)) history = model.fit(data["X_train"], data["y_train"], batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=(data["X_test"], data["y_test"]), callbacks=[checkpointer, tensorboard], verbose=1) model.save(os.path.join("results", model_name) + ".h5") We used 
ModelCheckpoint that saves our model in each epoch during the training. We also used TensorBoard to visualize the model performance in the training process. After running the above block of code, it will train the model for 300 epochs, so it will take some time, here is the first output lines: Epoch 1/300 3510/3510 [==============================] - 21s 6ms/sample - loss: 0.0117 - mean_absolute_error: 0.0515 - val_loss: 0.0065 - val_mean_absolute_error: 0.0487 Epoch 2/300 3264/3510 [==========================>...] - ETA: 0s - loss: 0.0049 - mean_absolute_error: 0.0352 Epoch 00002: val_loss did not improve from 0.00650 3510/3510 [==============================] - 1s 309us/sample - loss: 0.0051 - mean_absolute_error: 0.0357 - val_loss: 0.0082 - val_mean_absolute_error: 0.0494 Epoch 3/300 3456/3510 [============================>.] - ETA: 0s - loss: 0.0039 - mean_absolute_error: 0.0329 Epoch 00003: val_loss improved from 0.00650 to 0.00095, saving model to results\2020-01-08_NFLX-mse-LSTM-seq-50-step-1-layers-3-units-256 3510/3510 [==============================] - 14s 4ms/sample - loss: 0.0039 - mean_absolute_error: 0.0328 - val_loss: 9.5337e-04 - val_mean_absolute_error: 0.0150 Epoch 4/300 3264/3510 [==========================>...] - ETA: 0s - loss: 0.0034 - mean_absolute_error: 0.0304 Epoch 00004: val_loss did not improve from 0.00095 3510/3510 [==============================] - 1s 222us/sample - loss: 0.0037 - mean_absolute_error: 0.0316 - val_loss: 0.0034 - val_mean_absolute_error: 0.0300 After the training ends (or during the training), try to run tensorboard using this command: tensorboard --logdir="logs" Now this will start a local HTTP server "localhost:6006", after going to the browser, you'll see something similar to this: The loss is the Mean Squared Error as specified in the create_model() function, the orange curve is the training loss, whereas the blue curve is what we care about the most, the validation loss. As you can see, it is significantly decreasing over time, so this is working ! Now let's test our model: # evaluate the model mse, mae = model.evaluate(data["X_test"], data["y_test"]) # calculate the mean absolute error (inverse scaling) mean_absolute_error = data["column_scaler"]["adjclose"].inverse_transform(mae.reshape(1, -1))[0][0] print("Mean Absolute Error:", mean_absolute_error) Remember that the output will be a value between 0 to 1, so we need to get it back to a real price value, here is the output: Mean Absolute Error: 4.4244003 Not bad, in average, the predicted price is only far to the real price by 4.42$. 
Alright, let's try to predict the future price of Apple Stock Market: def predict(model, data, classification=False): # retrieve the last sequence from data last_sequence = data["last_sequence"][:N_STEPS] # retrieve the column scalers column_scaler = data["column_scaler"] # reshape the last sequence last_sequence = last_sequence.reshape((last_sequence.shape[1], last_sequence.shape[0])) # expand dimension last_sequence = np.expand_dims(last_sequence, axis=0) # get the prediction (scaled from 0 to 1) prediction = model.predict(last_sequence) # get the price (by inverting the scaling) predicted_price = column_scaler["adjclose"].inverse_transform(prediction)[0][0] return predicted_price This function uses the last_sequence variable we saved in the load_data() function, which is basically the last sequence of prices, we use it to predict the next price, let's call this: # predict the future price future_price = predict(model, data) print(f"Future price after {LOOKUP_STEP} days is {future_price:.2f}$") Output: Future price after 1 days is 308.20$ Sounds interesting ! The last price was 298.45$, the model is saying that the next day, it will be 308.20$. The model just used 50 days of features to be able to get that value, let's plot the prices and see: def plot_graph)) plt.plot(y_test[-200:], c='b') plt.plot(y_pred[-200:], c='r') plt.xlabel("Days") plt.ylabel("Price") plt.legend(["Actual Price", "Predicted Price"]) plt.show() This function plots the last 200 days of the test set (you can edit it as you wish) as well as the predicted prices, let's call it and see how it looks like: plot_graph(model, data) Result: Great, as you can see, the blue curve is the actual test set, and the red curve is the predicted prices ! Notice that the stock price recently is dramatically increasing, that's why the model predicted 308$ for the next day. Until now, we have used to predict only the next day, I have tried to build other models that use different lookup_steps, here is an interesting result in tensorboard: Interestingly enough, the blue curve is the model we used in the tutorial, which uses the next timestep stock price as the label, whereas the green and orange curves used 10 and 30 lookup steps respectively, for instance, in this example, the orange model predicts the stock price after 30 days, which is a great model for more long term investments (which is usually the case). Now you may think that, but what if we just want to predict if the price is going to rise or fall, not the actual price value as we did here, well you can do it using one of the two ways, one is you compare the predicted price with the current price and you make the decision, or you build an entire model and change the last output's activation function to sigmoid, as well as the loss and the metrics. 
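To sketch the second option — turning the network into an up/down classifier rather than a price regressor — only the output layer, the loss and the labels need to change. The following is an assumption about one reasonable way to do it, not code from this tutorial; it reuses the same imports and stacked-LSTM idea as create_model():

def create_direction_model(input_length, units=256, n_layers=2, dropout=0.3):
    model = Sequential()
    for i in range(n_layers):
        return_seqs = i < n_layers - 1   # last LSTM layer returns a single vector
        if i == 0:
            model.add(LSTM(units, return_sequences=return_seqs, input_shape=(None, input_length)))
        else:
            model.add(LSTM(units, return_sequences=return_seqs))
        model.add(Dropout(dropout))
    # binary "did the price go up?" head instead of a price regression head
    model.add(Dense(1, activation="sigmoid"))
    model.compile(loss="binary_crossentropy", metrics=["accuracy"], optimizer="rmsprop")
    return model

The labels would change accordingly, e.g. df['future_up'] = (df['adjclose'].shift(-LOOKUP_STEP) > df['adjclose']).astype(int), so that 1 means the price rose after LOOKUP_STEP days. The tutorial's own approach below sticks with the regression model and simply compares predicted and current prices.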
The below function calculates the accuracy score by converting the predicted price to 0 or 1 (0 indicates that the price went down, and 1 indicates that it went up): def get_accuracy)) y_pred = list(map(lambda current, future: int(float(future) > float(current)), y_test[:-LOOKUP_STEP], y_pred[LOOKUP_STEP:])) y_test = list(map(lambda current, future: int(float(future) > float(current)), y_test[:-LOOKUP_STEP], y_test[LOOKUP_STEP:])) return accuracy_score(y_test, y_pred) Now let's call the function: print(LOOKUP_STEP + ":", "Accuracy Score:", get_accuracy(model, data)) Here is the result for the three models that use different lookup_steps: 1: Accuracy Score: 0.5227156712608474 10: Accuracy Score: 0.6069779374037968 30: Accuracy Score: 0.704935064935065 As you may notice, the model predicts more accurately in long term prices, it reaches about 70.5% when we train the model to predict the price of the next month (30 lookup steps), and it reaches about 86.6% accuracy when using 50 lookup steps and 70 sequence length (N_STEPS). Alright, that's it for this tutorial, you can tweak the parameters and see how you can improve the model performance, try to train on more epochs, say 500 or even more, increase or decrease the BATCH_SIZE and see if does change to the better, or play around with N_STEPS and LOOKUP_STEPS and see which combination works best. You can also change the model parameters such as increasing the number of layers or the number of LSTM units, or even try the GRU cell instead of LSTM. Note that there are other features and indicators to use, in order to improve the prediction, it is often known to use some other information as features, such as the company product innovation, interest rate, exchange rate, public policy, the web and financial news and even the number of employees ! I encourage you to change the model architecture, try to use CNNs or Seq2Seq models, or even add bidirectional LSTMs to this existing model, see if you can improve it ! Also, use different stock markets, check the Yahoo Finance page and see which one you actually want ! If you're not using a notebook or an interactive shell, I have splitted the code to different Python files, each one for its purpose, check it here. Read also: How to Make an Image Classifier in Python using Keras. Happy Training ♥View Full Code
https://www.thepythoncode.com/article/stock-price-prediction-in-python-using-tensorflow-2-and-keras
CC-MAIN-2020-05
en
refinedweb
Piyush was here, I was ready with my cup of tea. I always used to worry about visualizations. For me, most of the visualization work was done by either my Visualization tool or the master of all i.e. MS Excel. Needless to say that I was nervous, because I had tried working on this part multiple times in the past, but have always found a good way to escape. My past experience with Piyush tells me that either I will get out of this day learning something worthy or else I will never learn to do it this way. I was getting a bit comfortable with Python, so that was a win for me. But the little milestone I was aiming for was right here. Lost in thoughts, I missed that Piyush was waiting for his Latte, I sipped and sat on the edge of the seat. I did not want him to know that I was nervous, but I guess he must have assumed my condition by this gesture. ‘So, nervous or excited? ‘Umm..nervous, a lot to be frank’ ‘Pihu, I have taken multiple Python session, I know people are scared as fuck to visualize things in a programming language. They always find some way to dodge the bullet. But the real beauty of visualization, and by visualization I mean a good informative graph and not something like a simple bar or line chart, is that you can tell a whole story with it’ ‘You can substitute a lot of table and a handful of graph with just one graph’ ‘That sounds fun’ I couldn’t show mush enthusiasm, but had to nod to show respect to his words. ‘I will start with a question, how or which graph can you use to show the population of each country by continents along with the GDP and life expectancy of each ? This is a classic example of how a graph can exhibit multiple information at one place’ He was looking for at least some sense in my words ‘Well, may be a scatter plot with axis as GDP and Health expectancy !!’ That was quick from my side and I waited for the response. ‘That is close, let me show you’ And he took the laptop from his bag, a silver colored Dell XPS,I used to have the same when I was in college. ‘Just look at the cool graph made by Gapminder’ ‘Wooooo, that is cool and I must say that I was close’ I was excited ‘This is what a good visualization looks like, you don’t have to make 4 graphs each showing a different data and make it interactive to escape creating this one’ ‘Chalo, let’s start with the basics of Python plotting and once we are good to go, we will try creating fancy graphs’ I was a bit excited for the class now, but we only had 40 minutes left and I wanted to make the most out of it. ‘To start with, import the best package available as per now i.e. matplot import matplotlib.pyplot as plt ‘and now create the first plot, a basic line graph with few data points’ age = (20,30,40,50,60) wage = (4000,5000,7000,1000,15000) plt.plot(age,wage) ‘Well that was really easy Piyush’ I was elated after creating my first graph within minutes of our class. ‘It’s always hard before you take the first step and the other way round, once you start exploring more. But, this graph is not even close to what we can make. 
Let’s add some more information to this graph’ ‘Before we move to label and other graphs, do know that if you have numbers or values on your axis in the range of thousands or millions or any large number, then it’s always better to scale your axis, See the way we scale our age = (20,30,40,50,60) wage = (4000,5000,7000,1000,15000) plt.plot(age,wage) plt.yscale(‘log’) plt.show() ‘Poor Pihu, now you have to mess up with some awesome histograms’ He chuckled, he knew that I never wanted to create a boring histogram. ‘Ohh, am soo excited’ I said with disappointment pouring down my face. ‘It’s not that bad, See, histogram makes you divide various data points in a bin and thus you can get an idea of the average values, the outliers, and many other things. Do one thing, make a list of 20 random values. I hope you know how to create a list. Do one more thing, create a list of random values by using the random function’ I can see doubts in his eyes But, I had done my homework, I knew how to import a random package and create a list out of it import random my_rand = random.sample(range(1,30),20) print(my_rand) print(type(my_rand)) [27, 7, 3, 21, 19, 4, 23, 26, 13, 28, 29, 24, 20, 14, 25, 8, 2, 1, 9, 16] <class 'list'> I showed the my_rand list to him and he was happy to say the least. ‘This is good Pihu, now just remember the basics that when you want to plot a histogram, you need to have two things, a list with values and the number of bins, you want your histogram to have. See the code below’ plt.hist(my_rand,bins = 6) ‘Pihu, why don’t you label the axis? I have no idea what the x and y-axis is showing’ ‘But, I don’t know how to label my graph?’ I had no clue what so ever. ‘In 1998, Larry Page…’ ‘Okay ‘Got it, see if I am correct’ It took me 3 minutes to get to the syntax plt.hist(my_rand,bins = 6) plt.xlabel(‘Year of Experience’) plt.ylabel(‘Number of company switch’) plt.title(‘Year of Exp. vs No. of Company changes ‘) plt.show() ‘The graph is good, but I really liked the labels’ He chuckled. ‘And you provided a graph title as well, good job Pihu’ ‘To make the graph more appealing om the axis label part, try the ticks thing’ ‘So, a tick actually give plt.hist(my_rand,bins = 6) plt.xlabel(‘Year of Experience’) plt.ylabel(‘Number of company switch’) plt.title(‘Year of Exp. vs No. of Company changes ‘) plt.xticks([5,10,15,20,25,30],[‘5 yrs.’,’10 yrs.’,’15 yrs.’,’20 yrs.’,’25 yrs.’,’25 yrs.’,’30 yrs.’]) plt.show()
http://thedatamonk.com/visualization-in-python/
CC-MAIN-2020-05
en
refinedweb
setting core profile gets a blank background
I was following the cube tutorial the other day, and was trying to get it to work with the OpenGL 3.3 core profile. I have set my QSurfaceFormat in main correctly, as you can see in the following code:

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QSurfaceFormat format;
    format.setDepthBufferSize(24);
    format.setVersion(3,3);
    format.setRenderableType(QSurfaceFormat::OpenGL);
    format.setProfile(QSurfaceFormat::CoreProfile);
    QSurfaceFormat::setDefaultFormat(format);

    app.setApplicationName("textured cube");
    app.setApplicationVersion("0.1.0");
#ifndef QT_NO_OPENGL
    Dialog w;
    w.show();
#else
    QLabel note("OpenGL Support required");
    note.show();
#endif
    return app.exec();
}

I also added #version 330 in both my vertex and fragment shaders, but all I get is a blank background. What did I do wrong?
Another question: my application is intended to be cross-platform on desktop in the future; should I choose OpenGL 2.0 ES over OpenGL 3.3 Core?
https://forum.qt.io/topic/69455/setting-core-profile-gets-a-blank-background
CC-MAIN-2020-05
en
refinedweb
Hi My application is a boot-start Android service. Up until now, I have been starting the service from MainActivity (which calls Forms.Init), this works fine. When the service is started by the system via a receiver on the android.intent.action.BOOT_COMPLETED intent, it obviously cannot call Forms.Init as there is no activity present until such a point as the user taps the notification and launches MainActivity. I have had a search of the forums and there appears to be no way around this other than a massive re-factoring effort to move platform independent business logic code into Xamarin.Android (and then to duplicate it for Xamarin.iOS). Before I undertake this massive re-factoring and code-duplication effort, is anyone able to confirm that this is, indeed the case? Background: My application's service connects to a device over BLE and maintains this connection. It notifies the user when events arrive at the service. All my code dealing with queue's, queue draining, the intermittent nature of a BLE connection etc was in a .net2.0std PCL, but since I cannot use the DependencyService from a service, it looks like I simple CANNOT CALL ANY SHARED CODE from a boot start service, as going so REQUIRES the DependencyService and the DependencyService can only be started from an activity?? Nigel Solved this thus, and tested on Android 6,7 and 9 (don't have an 8.0 target currently) Short Version From a new boot-started BootService, launch an activity which calls Forms.Init before launching your main-service (different to the BootService). BootService then self-stops, MainService inits, and can utilise DependenceService. In Normal-start (user taps icon) in MainActivity detect service running flag, if not running, start it, and schedule an intent to start MainActivity, if service already running, launch FormsApp. Note that I am SingleTop, NoHistory, your milage may differ! Long Version Usage: Works reliably on android 6,7,9 not been able to test on 8.0 as yet. Might well break in a future version, google seems to have gone to war on backgrounding. See attached zip for code scratch.veletron.com/DependencyServiceFromBootStartServiceAndroidWorkAround.zip Nigel Hi @NigelWebber , I'm trying to do the same service. I'm using dependecy service foreground service notificaiton? Can you help me? Thanks Hi All the necessary code is in the zip attached to my post above. I believe you should be able to fathom it out from this, and the text above. Pretty naff oversight on the part of Xamarin to tie the dependency service to forms.init, and insist that it is called from an Activity rather than an application. Since writing the above, I ditched most of my interfaces in favour of multi-platform shared code (via a .net 2.0 class lib), thus reducing the need for the dependency service. This shared code exists as a series of Singletons, for instance my BleComms class. Depending on your use-case and how different your code called from the Service is for Android (vs the iOS calls in AppDelegate) is, this might prove to be a better solution? 
My BleComms class now lives in shared code, accessed as a Singleton rather than via an interface: The above will self-instantiate when BleComms.SingletonBleComms is first used, but to avoid the pregnant pause @ init, I force instantiation before I need it in my Service.OnCreate and AppDelegate.FinishedLaunching calling: BleComms.SingletonBleComms.Init = true; You can then useBleComms.SingletonBleComms.AnyMethod() from anywhere in your application without worrying about the DependencyService. Beware thread-safe code if you are hitting the singleton from other threads. In my case, my only interaction with it from outside the class is via a couple of ConcurrentQueues for Rx and Tx, and some events that the external code can subscribe to (data arrived, BLE Off, On etc). Nigel Hi, Thanks for reply. I think I worked most of code. The application comes to the screen and closes after 1 second when boot completed. Is this normally? Is there any way to prevent this? Sounds like its force-closing or dropping right out the bottom of your service - I assume you are running with the debugger connected, and testing service startup works fine when you are starting it from an icon-tap before you start trying to debug boot-start stuff? I don't try and start Any visible activity at boot, I just display a notification. Are you calling the right method to start your service in the boot receiver? Also note that you must call StartForeground(); within 5 seconds after your service starts. Also, check your permissions in the manifest - have you requested the required permissions? Start the device log and filter for your app and locate the exception that caused it to exit. Hi @NigelWebber I'm shared all my code and my purpose. I'm already using foreground services notification with dependency service. The foreground notification working on app running. I want start foreground service notification on boot completed. Please can you edit my code for boot completed? Not working right now. only work toast message. something is bug again Thanks in advance manifest (edit) Bootupservice.cs ` [BroadcastReceiver] [IntentFilter(new[] { Android.Content.Intent.ActionBootCompleted })] public class BootUpReceiver : BroadcastReceiver { } ` Depedentservice `[assembly: Xamarin.Forms.Dependency(typeof(DependentService))] namespace Sample.Droid { [Service] public class DependentService : Service, IService { public static string messageBody = "test"; public static string title = "test"; ` add mainactivity { } ` You appear to be trying to start your MainActivity from your receiver. directly, rather than launching a BootService, and having that start your MainActivity. In My application there are two services BootService and MainService. BootService is started by the receiver, this launches MainActivity before calling 'stopself'. In my MainActivity I call Forms.Init, and check to see if my MainService is running, if not, then I start it before calling Finish on MainActivity, if MainService is already running then I launch my UI. If you don't need the service to continue running after starting your Activity then you would not need my 'MainService'. In my use-case, normal operation is MainService only to be running. The UI is rarely displayed to the end user. It works when changed the code below. 
But app started one second and off Bootupreceiver Intent i = new Intent(context, typeof(MainActivity)); i.AddFlags(ActivityFlags.NewTask); context.StartActivity(i); Not sure then, I have not tried to launch an actual UI at boot, only a service - I think auto-launching a UI would annoy my users a fair amount! You might want to check this out: Check that the manifest entries match yours - the above also shows how to replace the launcher with your app. Beware that if stuff works on Android 7, there is no guarantee it will work on 6,9 etc. The whole services thing and restrictions around it seems to get changed willy-nilly every release. Hi @NigelWebber public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId) { Intent resultIntent = new Intent(StaticDefs.Com_Spacelabs_ProteusPatientApp_Android_SwitchScreenIntent); resultIntent.PutExtra(PageId.PageIdStringIdent, (int)PageId.RequestedPageId.PatientEventListScreen); resultIntent.SetFlags(ActivityFlags.NoHistory | ActivityFlags.NewTask | ActivityFlags.SingleTop); I don't understand the your code here. Intent and pageid. What should i write? MainActivity? Yep, this is just something specific to my own app. I pass stuff into MainActivity via the extra's. I then use these to decide what xamarin.forms page should be loaded. You need: Intent resultIntent = new Intent(Application.Context, MainActivity); You should not need the extra's (these are a way of feeding parameters into the intent, for extraction by the destination. Note that my flags indicate SingleTop, NoHistory, this might not be what you want. Nigel Hi @NigelWebber Thank you for all your help.So my project won't be exactly what I want. I think not working xamarin forms messaging center and essential secure storage half starting. I think only working on runtime . do you agree? I wonder sqlite works on half starting? do you know? Hi For best practice, try to avoid the use of the MessagingCenter, it might seem like an easy way out, but it quickly becomes unmanageable. In my application, the classes running under my service simply fire events that the viewmodel subscribes to, you can do these subs/unsubs via either the dependency service, or, for platform-independent shared code (in my case, my BleComms class), by accessing via a singleton. The MessagingCenter works fine in a service ditto the DependencyService (both require Forms.Init to have been called). I use the DependencyService extensively. My service uses both SharedPrefs (Tip get Xamarin.Essentials nuget - makes it easy). And also SqlLite (via the Nuget). These both work just fine from the boot start service, which gets the MAC address for the previously connected BLE pheriperal, reconnects then reloads the existing database. Note that I had issues using Application.Context.GetSharedPreferences (ISharedPreferences) from my Service (even after having called forms.init. Initially, I rolled my own interface and implemented an android and ios-specific version accessed via an interface, before switching initially to Xam.Plugins.Settings and then to Xamarin.Essentials (which has basically incorporated the former). This works fine from a service, but needs you to call Xamarin.Essentials.Platform.Init(this, bundle); in an activity after forms.init See also secure storage in Xamarin.Essentials: Make sure you have the permissions, and if it crashes, look at the logs - they will tell you whats missing. 
Note that some Android permissions must be requested before use (the end user will be prompted to accept), there is no way in newer versions of android to get these auto-allocated via the manifest (although they still need to be listed in the manifest). I was fairly seasoned as a C#, C++ and firmware developer (30 years) when I came to Xamarin, but MVVM was alien to me, and Xamarin appeared to just be a barrier. Its a steep learning curve. I developed my app for Android initially, but was able to get up up/running on iOS within 2 days. This despite the fact that this is not the usual simple app accessing some online database for content, but rather includes BLE Comms, NFC for pairing etc. It helped that I also developed the other end of the BLE connection on an NRF52832 so I was responsible for both ends. Xamarin's strengths lie in code re-use. I not only have shared libraries for my Apps, but libraries that I share between native windows applications and the xamarin app, My entire comms protocol for instance was implemented as a .net2.0 class lib and is used everywhere via that same lib, over BLE and USB and only needs unit-testing the once. Nigel
https://forums.xamarin.com/discussion/comment/379752/
CC-MAIN-2020-05
en
refinedweb
factoring - To: mathgroup at smc.vnet.net - Subject: [mg115062] factoring - From: r_poetic <radford.schantz at mms.gov> - Date: Thu, 30 Dec 2010 04:09:17 -0500 (EST) Hello, an easy question: why does Factor[xy-xz+y^2-yz] fail to return (x+y)(y-z), and what command would do that? Thanks!
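For reference, the factorization being asked about is just grouping:

\[
xy - xz + y^{2} - yz = x(y - z) + y(y - z) = (x + y)(y - z).
\]

One note (the archive above does not include the replies): in Mathematica input, xy written without a space or * is parsed as a single symbol named xy, not as the product of x and y, so the expression has to be entered as x y - x z + y^2 - y z (or with explicit *) before Factor can return (x + y) (y - z).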
http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00740.html
CC-MAIN-2020-05
en
refinedweb
"Blogging is graffiti with punctuation" Contagion. I didn't know whether to laugh or kill myself! The question is, is this true? AnnabelleRC <snipped-no promotional links> I think blogging is very credible infact it may be the only form of media that isn't corrupt! blogging is really great. you can be short and to the point or just go off on a wild tangent and write about anything you're thinking about. It's also a good way to advertise about things without sinking a lot of money into it. I'm going to start blogging on other sites too, but I want to make sure all my content is original to each site. I remember when I first started blogging. What I thought and what I learned soon became very far separated. Soon I found myself lulled into the false hope of reciprocation, visiting like-blogs and leaving engaging comments that seldomly paid off in return visits. After spending so much time not blogging just to get my blog noticed, I began to question whether I had any right to write with authority on my passion, which was creative writing. My posts about the writing process and various writing exercises I had come up with seemed empty, even though they probably would have helped someone if only I'd had a larger audience. Time has taught me that it's better to just focus on your content and be true to what you believe is right. Traffic will come in time, and the people who choose to stick with you will be your validation. If that never comes, you've got to make a decision as to whether it's still worth your time. If you blog for yourself, it may still be time well spent. Now if you consider those points as a potential reader of a blog, you should realize that your patronage is worth something. In the end it is the reader who decides whether a blog is credible. (Other bloggers often come with their own motives and hopes of reciprocation, and shouldn't be counted on for validation.) I hate to say it, but I placed almost no stock in anything I read online. How many people do you know that write reviews on products just for affiliate commison? I cannot tell you how how many times I've looked at other writing sites and read reviews on them. Invariably, I find at least one person that touts the site as the greatest thing ever and has their referral link plastered all over the page. You go to their profile and find they've written two articles. There are some things in life I know very well. I've found just plain ridiculously wrong info on blogs, some of which is so far removed from the truth, it's almost painful. I do enjoy opinion pieces, provided they are not stating opinion and attempting to pass them off as factual. I have found great how-to type of info as well. It's a bit like saying "Are journals credible?" Define "journal." Blogging is a publishing platform, no more and no less. It usually lacks an editor, so for that reason it's less respected than literature that must be passed by an editor or editorial board. However, considering the shockingly poor quality control even on sites like TIME and the BBC these days -- to say nothing of the popularity of HuffPo -- you will find better-quality blog writing than some professional publishing platforms. It simply depends on the blog. Of course, I don't tend to think of blogging as a place to post reviews, although I suppose one could do that. I blog to discuss the same stuff I used to research and study in academia, relating it to contemporary issues and translating it into a style that's more accessible to non-academic readers. 
I've learned about SEO from well-researched blogs (and avoid less reputable ones). I follow my favorite sports team via a good blog. I read excellent fiction and get into thoughtful discussions of social issues and current events on friend's blogs. Define blog? Yes, it really does depend on the blog. I have written a blog for over 3 years on a health condition my son has. I don't give medical advice, just mom advice. That's something the doctor doesn't give. The blog is fairly popular for such a niche topic. I do reviews of things like books about this health condition and receive nothing in return (affiliate commission, etc). The moral of the story is that it's hard to have a "one size fits all" statement for blogs. There are so many that cover everything you can imagine and then some. I do get what you're saying though about people reviewing things positively just to get a commission. But every blog is not like that. Really. I too have a health blog that gets more views than my earlier blogs. I find that blogging has it good points and there are those people who prefer to get their info from blogs because they are more personal. Whereas a website publishes more technical articles, blogs are mostly from a personal point of view that most people can relate to. Blogging is freedom of speech, to some extent lol There are two types of bloggers - those who chatter and those who write with high quality content and hours of research. Sadly there is no name for those who care about the content. Why is the writer who spends hours sometimes days researching and writers 1200 words, adds in pertinent videos and photos called a blogger. I look at the extensive work of some of my fellow hubbers and feel the term "blogger" is not appropriate. As wordsmiths, we should coin a word that aptly describes the writer who seeks to deliver quality, unbiased content. Albeit we are being paid a commission whereas the journalists are being paid a salary. What should a high quality writer online be called? Blogger. There's no need to come up with a separate term, any more than there's a need to come up with a separate term for poets of good poetry and bad. They are poets. They write poetry. Leave the reader (again, there are good readers and bad) to judge. by erinshelby2 years ago What sites exist that are free to use, that allow writers to create any content (like HP), where you can make money? by Trudy Cooper2 years ago Please tell me what the difference is between writing a blog and writing a HubPage? If any? by Marisa Wright10 months ago I am always surprised that guest blogging is never discussed on HubPages. We all know it's important to have backlinks pointing to our Hubs and websites. We also know that Google is working hard to... yoshi977 years ago No links please, as they get flagged for self-promotion. I was just curious to know how many of us hubbers have blogs and how frequently you update them. I just started mine, but I'm at one update a day ... I just hope... by LittleFairy6 years ago How many of you have blogs and are they affected by google's new algorithm? Is it worth it having a blog and how do you make time for it along.
https://hubpages.com/community/forum/85048/is-blogging-credible-
CC-MAIN-2017-51
en
refinedweb
Natume 0.1.0 HTTP DSL Test Tool Natume is a http dsl test tool help you easily build test for mobile application. How to install Firstly download or fetch it form github then run the command in shell: cd natume # the path to the project python setup.py install Development Fork or download it, then run: cd natume # the path to the project python setup.py develop Compatibility Built and tested under Python 2.7 How to write your dsl http test The dsl rule most like ini file format. comment The line begins with “#” is a comment line method section The line begins with a [ and ends with ] a test method section: [add friend] # comment > POST /request fid=1233 access_token="Blabla" code: 200 content <- OK intialize your test instance variables You can intialize or bind the variables use intialize method: [intialize] @key = "key" @page = 2 All key begins with “@” will build to testcase instance attributes, like @key is compiled to “self.key”, and intialize method is called in SetUp method. http send command The line begins with > is a http request: > GET /post key="Blabla" page=1 > POST /profile name="Blabla" email="[email protected]" set request header Sometimes, the request requires to set headers, you can use “=>” command to set header referer => Referer => Accept-Encoding => gzip, deflate, sdch Accept_encoding => gzip, deflate, sdch Note the head key is caseinsentive and key parts can will auto trasfer to the http real key pattern. Assert the response Currently, supports content regex match assert, json data assert, and response header assert. and supports three assert tokens. : “”:”” assert token, it is compiled to assertEqual method, to check a header or response text, or response json data: code: 200 content_tpye : application/json charset: utf-8 <- it is compiled to assertIn method in response content test: content <- OK json <- ['data']['title'] = "Blabla" =~ it is compiled to to check a regex text in response content test, the regex value must begins and ends with “/”, and can combine with the regex options: content =~ /OK/ json =~ ['data']['title'] = /Blabla/i Note Currently supports three regex compile options “i”(re.I), “m” (re.M), “s” (re.S). Test response header info When set code command: code: 200 it will assert response status code. When set content_type command: content_tpye : application/json it will assert response content_type. When set charset command: charset: utf-8 it will assert response charset. Note When uses “:” to test response info, if the assert key not in (content, json, code, content_type, charset), it will test the response head info. 
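To make the token descriptions above concrete, the generated assertions correspond roughly to ordinary unittest calls. The snippet below is only a hand-written illustration (it is not Natume output, and response here stands in for any requests-style object with status_code, text and json(); Natume's own WebClient wraps the response differently):

import re
import unittest

class AssertTokensIllustration(unittest.TestCase):
    def check(self, response):
        # "code: 200"                         -> assertEqual on the status code
        self.assertEqual(response.status_code, 200)
        # "content <- OK"                     -> assertIn against the response text
        self.assertIn("OK", response.text)
        # "content =~ /Ok/i"                  -> a regex check with the re.I option
        self.assertTrue(re.search("Ok", response.text, re.I))
        # "json: ['data']['type_id'] = 1"     -> assertEqual on the parsed JSON
        self.assertEqual(response.json()['data']['type_id'], 1)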
content when we test the response text, supports the commands as below: content: OK content <- OK content =~ /Ok/i json When we test the response is json data, we can use json key to assert: json <- ['data']['title'] = 'title' json: ['data']['trackList'][0]['song_id'] = '1772167572' json: ['data']['type_id'] = 1 # date size json ~~ ['data'] = 56 DSLWebTestCase When you wanna write the dsl test in unittest testcase, please write test method in testcase class doc: from natume import DSLWebTestCase, WebClient import unittest class DSLWebTestCaseTest(DSLWebTestCase): u""" [index] > GET / content <- 虾米音乐网(xiami.com) [song api] > GET /song/playlist/id/1772167572/type/0/cat/json content_type: application/json charset: utf-8 json: ['data']['trackList'][0]['title'] = u'再遇见' json: ['data']['trackList'][0]['song_id'] = '1772167572' json: ['data']['type_id'] = 1 [search] > GET /search/collect key='苏打绿' code: 200 content <- 苏打绿歌曲: 最好听的苏打绿音乐试听 content =~ /Xiami.com/i [search page 2] > GET /search/collect/page/2 key=@key order='weight' code: 200 content <- 苏打绿歌曲: 最好听的苏打绿音乐试听 content =~ /XiaMi.com/i """ @classmethod def setUpClass(self): self.client = WebClient('') self.key = '苏打绿' def test_t(self): self.t(u""" > GET /search/collect/page/2 key=@key order='weight' code: 200 content <- 苏打绿歌曲: 最好听的苏打绿音乐试听 """) You can also use t method to build request section test. Note The WebClient will keep and fresh the cookies and etag when you use a same webclient to test your application. Run test in terminal Like unittest, natume can run in terminal also, can test directories and files. Here are the demos, the test file in project examples directory: $ python -m natume -u examples/xiami.smoke -d test_index (__builtin__.XiamiTest) ... ok test_search (__builtin__.XiamiTest) ... ok test_search_page_2 (__builtin__.XiamiTest) ... ok test_song_api (__builtin__.XiamiTest) ... ok ---------------------------------------------------------------------- Ran 4 tests in 0.674s OK $ python -m natume -u examples -d test_index (__builtin__.XiamiTest) ... ok test_search (__builtin__.XiamiTest) ... ok test_search_page_2 (__builtin__.XiamiTest) ... ok test_song_api (__builtin__.XiamiTest) ... ok ---------------------------------------------------------------------- Ran 8 tests in 2.893s OK: Thomas Huang - Keywords: http,test - License: GPL 2 - Categories - Development Status :: 5 - Production/Stable - Environment :: Web Environment - Intended Audience :: Developers - Programming Language :: Python - Programming Language :: Python :: 3 - Programming Language :: Python :: Implementation :: CPython - Programming Language :: Python :: Implementation :: PyPy - Topic :: Internet :: WWW/HTTP :: Dynamic Content - Package Index Owner: lyanghwy - DOAP record: Natume-0.1.0.xml
https://pypi.python.org/pypi/Natume/
CC-MAIN-2017-51
en
refinedweb
Preface This article was written by a friend on the MVC/MVP/MVVM framework to answer questions, aims to introduce the design idea of iOS under the MVC/MVP/MVVM three architectures and their advantages and disadvantages. The full text is about five thousand words, is expected to spend time reading 20 – 30 minutes. MVC - The concept of MVC MVC is the earliest in desktop applications, M refers to the business data, V refers to the user interface, C is the controller. In the specific business scenarios, C is between M and V connection, is responsible for obtaining business data input, then the processed data to the output do the corresponding interface display, in addition, in the data has been updated, C also need to submit timely updates to the corresponding interface display. In the process, because M and V are completely isolated, so in the business scene, usually only need to replace the corresponding C complex with existing M and V can to quickly build a new business scene. MVC because of its reusability, greatly improving the efficiency of development, has been widely used in the end of development. After the concept is over, take a look at the specific business scenarios in the MVC/MVP/MVVM are how to behave - MVC disappearance of the C layer screen snapshot 2017-03-04 PM 3.16.48.png on the page (business scenario) or similar page I believe we have done a lot, the specific implementation of each programmer may not be the same, here to talk about what I see part of the programmer’s writing: //UserVC - (void) viewDidLoad [[UserApi new] fetchUserInfoWithUserId:132 {[super viewDidLoad]; completionHandler:^ (NSError *error, ID result) {if (error) {[self showToastWithText:@ "failed to obtain user information ~"];} else {self.userIconIV.image} =... = self.userSummaryLabel.text... [[userApi new]...}]; fetchUserBlogsWithUserId:132 completionHandler:^ (NSError *error, ID result) {if (error [self showErrorInView:self.tableView) {info:} {[...]; else self.blogs addObjectsFromArray:result]; [self.tableView reloadData];}}];} / /... (omitted - UITableViewCell * (UITableView * TA) tableView:) BleView cellForRowAtIndexPath: (NSIndexPath * indexPath) {BlogCell *cell = [tableView dequeueReusableCellWithIdentifier:@ = "BlogCell"]; cell.blog self.blogs[indexPath.row]; return cell;} - (void) tableView: (UITableView * tableView) didSelectRowAtIndexPath: (NSIndexPath *) indexPath pushViewController:[BlogDetailViewController instanceWithBlog:self.blogs[indexPath.row]] {[self.navigationController animated:YES];} / / slightly... //BlogCell - (void) setBlog: (Blog) blog {_blog = blog; self.authorLabel.text = blog.blogAuthor; self.likeLebel.text = [NSString stringWithFormat:@ "%ld", blog.blogLikeCount];} Programmers quickly finished code, Command+R run, there is no problem, do other things with go. Then one day, the product requirements of the business needs change, the user is shown in the page in other information, see your information, a draft of the show, like this: screen snapshot 2017-03-04 PM 3.46.40.png so small white code will be changed to this: //UserVC - (void) viewDidLoad [super if ({viewDidLoad]; self.userId! = LoginUserId) {self.switchButton.hidden = self.draftTableView.hidden = YES; self.blogTableView.frame = [[UserApi}... New] fetchUserI... New] fetchUserBlogsWithUserId:132 completionHandler:^... [[UserApi (NSError *error, ID result) {//if Error... 
[self.blogs addObjectsFromArray:result] [self.blogTableView reloadData]; omitted;}]; [[userApi new] fetchUserDraftsWithUserId:132 completionHandler:^ (NSError *error, ID result) {//if Error... [self.drafts addObjectsFromArray:result] [self.draftTableView reloadData]; omitted;}];} - (NSInteger) tableView: (UITableView *) tableView numb ErOfRowsInSection: (NSInteger section) {return tableView = = self.blogTableView? Self.blogs.count: self.drafts.count;} / /... (omitted - UITableViewCell * (UITableView * tableView) tableView:) cellForRowAtIndexPath: (NSIndexPath * indexPath) {if (tableView = = self.blogTableView) {BlogCell *cell = [tableView dequeueReusableCellWithIdentifier:@ = "BlogCell"]; cell.blog self.blogs[indexPath.row]; return cell;} else {DraftCell *cell = [tableView dequeueReusableCellWithIdentifier:@ = "DraftCell"]; cell.draft self.drafts[indexPath.row]; return cell;}} - (void) tableView: (UITableView * tableView) didSelectRowAtIndexPath: (NSIndexPath * indexPath) {if (tableView = self.blogTableView) Slightly...} / /... //DraftCell - (void) setDraft: (Draft) draft {_draft = draft; self.draftEditDate =} //BlogCell - (void) setBlog: (Blog) blog {}} Then, the product that users see their pages plus a recycling station what will be very good, so programmers can add code logic, then… with the change of requirement, UserVC becomes more and more bloated, more and more difficult to maintain, develop and test is also very poor. Programmers also found that code write some problems, but the specific problem where? Isn’t it MVC? we will repeat the process with a diagram: screen snapshot 2017-03-04 PM 4.35.35.png through this map can be found, the user information page as a business scenario Scene needs to display a variety of data M (Blog/Draft/UserInfo), so the corresponding multiple View (blogTableView/draftTableView/image…), but each MV does not have a connection between the C layer should be distributed to each C layer processing logic are packaged into Scene this place, which is M-C-V to MM… -Scene-… VV, C layer so rather baffling disappeared. also, as V two cell M (blog/ direct coupled draft), which means that the input was the two V at the end of the corresponding M. Multiplex impossible. Finally, according to the business scene test abnormal trouble, because the business is bound to the initialization and destruction of the life cycle of VC, while the corresponding logic is also related to the And View click event, the test can only Command+R, point… - Correct MVC use posture Perhaps the UIViewController class has brought confusion to the new, people mistakenly think that VC must be C layer in MVC, or Button, Label or View is too simple without the need of a C layer with, in short, seen too much since I work experience of projects such as “MVC” so, what is the correct use of MVC position? 
is still cited above business scenarios, the correct MVC should look like this: screen snapshot 2017-03-04 PM 6.42.04.png UserVC as the business scene, to show the three kinds of data, corresponding to the three MVC, three MVC for each module of data acquisition, data processing and data display, and UserVC needs to do is to configure the three MVC layer, and data access from the notification of the C at the right time C, each layer to get the data of the corresponding treatment, after the treatment rendered to the respective View, each View UserVC will eventually have a good render the layout can be specific to the code as follows: @interface BlogTableViewHelper: NSObject< UITableViewDelegate, UITableViewDataSource> (instancetype) + helperWithTableView: (UITableView *) tableView userId: (NSUInteger) - (void) userId; fetchDataWithCompletionHandler: (completionHander; NetworkTaskCompletionHander) - (void) setVCGenerator: (ViewControllerGenerator) VCGenerator; @end @interface (BlogTableViewHelper) @property (weak, nonatomic) UITableView *tableView @property (copy, nonatomic); ViewControllerGenerator VCGenerator; @property (assign, nonatomic) NSUInteger userId @property (strong, nonatomic); NSMutableArray *blogs; @property (strong, nonatomic) UserAPIManager *apiManager; @end #define BlogCellReuseIdentifier @ "BlogCell" @implementation + BlogTableViewHelper (instancetype) (helperWithTableView: UITableView *) tableView userId: (NSUInteger) userId [[BlogTableViewHelper alloc] initWithTableView:tableView {return userId:userId];} - (instancetype) initWithTableView: (UITableView *) tableView userId: (NSUInteger userId) {if (self = [super init]) {self.userId = userId; tableView.delegat E = self; tableView.dataSource = self; self.apiManager = [UserAPIManager new]; self.tableView = tableView; __weak typeof (self) weakSelf = self; [tableView registerClass:[BlogCell class] forCellReuseIdentifier:BlogCellReuseIdentifier]; tableView.header = [MJRefreshAnimationHeader headerWithRefreshingBlock:^{// refreshUserBlogsWithUserId:userId completionHandler:^ (NSError pull-down refresh [weakSelf.apiManage *error, ID result) {/ /... Slightly}];}]; tableView.footer = [MJRefreshAnimationFooter headerWithRefreshingBlock:^{// load [weakSelf.apiManage loadMoreUserBlogsWithUserId:userId completionHandler:^ (NSError *error, ID result) {/ /... Slightly}]}]; return self;}}; #pragma mark - UITableViewDataSource & & Delegate; / /... - (NSInteger) tableView: slightly (UITableView *) tableView numberOfRowsInSection: (NSInteger) section {return self.blogs.count;} - (UITableViewCell *) tableView: (* UITableView) tableView (NSIndexPath * cellForRowAtIndexPath:) indexPath {BlogCell *cell = [tableView dequeueReusableCellWithIdentifier:BlogCellReuseIdentifier]; BlogCellHelper *cellHelper = self.blogs[indexPath.row]; if (cell.didLikeHandler!) __weak typeof (cell) {weakCell = cell; [cell setDidLikeHandler:^{cellHelper.likeCount = 1; weakCell.likeCountText = cellHelper.likeCoun TText;}]}; cell.authorText = cellHelper.authorText; / /... 
All set return cell;} - (void) tableView: (UITableView * tableView) didSelectRowAtIndexPath: (NSIndexPath *) indexPath [self.navigationController pushViewController: self.VCGenerator (self.blogs[indexPath.row]) {animated:YES]}; #pragma mark - Utils - (void) fetchDataWithCompletionHandler: (NetworkTaskCompletionHander) completionHander new] refreshUserBlogsWithUserId:self.userId completionHandler:^ {[[UserAPIManager (NSError *error ID, result) {if (error) {[self} else {showErrorInView:self.tableView info:error.domain]; for (Blog *blog in result) {[self.blogs addObject:[BlogCellHelper help ErWithBlog:blog]] [self.tableView reloadData];}}; completionHandler? CompletionHandler (error, result): Nil;}];} / /... @end @implementation BlogCell / /... - (void) onClickLikeButton: slightly (UIButton *) sender new] likeBlogWithBlogId:self.blogId userId:self.userId completionHandler:^ {[[UserAPIManager (NSError *error, ID result if (error)) {{//do error} else {/ / do success self.didLikeHandler (NIL)? Self.didLikeHandler: @end;}]}}; @implementation - BlogCellHelper (NSString * likeCountText) {return [NSString stringWithFormat:@ like%ld, self.blog.likeCount];} / /... (NSString * authorText) omitted - {return [NSString stringWithFormat:@ "Author: self.blog.authorName] @end;}% @". The Blog module is composed of BlogTableViewHelper (C), BlogTableView (V), Blogs (C), here is a bit special, blogs inside is not M, but Cell C CellHelper, this is because the Blog MVC actually is composed of several smaller MVC. M and V are not what to say. Talk about the main C as what TableVIewHelper did. In the actual development, each module of the View may be in the Scene in the corresponding Storyboard and new layout, this would not have to establish their own View modules (such as BlogTableViewHelper, let Scene here) to the C layer management on the line, of course, if you are a pure code, it requires the corresponding View the module was established. (for example, the following UserInfoViewController) to see their wishes, harmless. BlogTableViewHelper provides a constructing method of data acquisition interface and the necessary, within the corresponding initialization according to their own situation. when the external call interface fetchData, Helper will start the data access logic, because data acquisition may be related to some front page display (such as HUD), and specific display and is directly related to Scene (some Scene show HUD may show is a kind of style or no show), so this part will be in the form of CompletionHandler by Scene treatment. in Helper, the data acquisition failure will show the corresponding error page is established successfully and notify the smaller part MVC display data (i.e., notify the CellHelper driver Cell), in addition, TableView pull to refresh and down loading logic is also attached to the Blog The module, so processing in Helper. in the page logic, click the page by Scene directly through the VCGeneratorBlock configuration, so is decoupling (you can also use a didSelectRowHandler like mode to transfer data to the Scene layer, Scene made the jump, is the same. Finally, , V) (Cell) now only exposed Set method for external settings, and so M (Blog) is isolated, reuse is no problem. 
This series of processes are self management, in the future if the Blog module will be displayed in another SceneX, then SceneX only need to create a new BlogTableViewHelper, and then call helper.fetchData DraftTableViewHelper and BlogTableViewHelper logic is similar, do not paste, simply paste the logic of the UserInfo module: @implementation + UserInfoViewController (instancetype) instanceUserId: (NSUInteger) userId [[UserInfoViewController alloc] {return initWithUserId:userId];} - (instancetype) initWithUserId: (NSUInteger) userId [self addUI]; / / {...} / /... #pragma mark slightly - Action - (void) onClickIconButton: (UIButton *) sender [self.navigationController pushViewController:self.VCGenerator (self.user) {animated:YES]}; #pragma - mark Utils - (void) addUI UI {/ / self.userIconIV = [[UIImageView alloc] to initialize the layout of initWithFrame:CGRectZero]; self.friendCountLabel =......} - (void) fetchData new] fetchUserInfoWithUserId:self.userId completionHandler:^ {[[UserAPIManager (NSError *error, ID result) {if (error) {[self showErrorInView:self.view info:error.domain]} else {self.user = [User; objectWithKeyValues:result] = self.userIconIV.image; [UIImage imageWithURL:[NSURL URLWithString:self.user.url]]; / / self.friendCountLabel.text = [NSString stringWithFormat:@ data format like%ld, self.user.friendCount]; / / data format...}]};} @end UserInfoViewController in addition to more than two TableViewHelper multiple addUI sub control layout method, the other is their logic is similar, the management of MVC, only need to initialize can be used in any Scene. Now three self management module has been completed, UserVC needs only to do the corresponding assembly layout according to their own situation, and the building blocks: @interface (UserViewController) @property (assign, nonatomic) NSUInteger userId @property (strong, nonatomic); UserInfoViewController *userInfoVC; @property (strong, nonatomic) UITableView *blogTableView @property (strong, nonatomic); BlogTableViewHelper *blogTableViewHelper; @end @interface SelfViewController UserViewController @property (strong, nonatomic) UITableView *draftTableView; @property (strong, nonatomic) DraftTableViewHelper * draftTableViewHelper; @end #pragma mark - UserViewController @implementation UserViewController (instancetype) + instanceWithUserId: (NSUInteger userId) {if (userId = = LoginUserId) {return} else {[[SelfViewController alloc] initWithUserId:userId]; return [[UserViewContr Oller alloc] initWithUserId:userId];}} - {(void) viewDidLoad [super viewDidLoad]; [self addUI]; [self configuration]; [self fetchData];} #pragma mark - Utils (UserViewController) - (void) addUI {/ / here is just to express specific meaning logic layout certainly isn't as simple as self.userInfoVC [UserInfoViewController = instanceWithUserId:self.userId]; self.userInfoVC.view.frame = CGRectZero; [self.view addSubview:self.userInfoVC.view]; [self.view addSubview:self.blogTableView = [[UITableView alloc] initWithFrame:CGRectZero style:0]] (void);} - configuration {self.title = @ "user details"; / /... 
Other settings (ID params) [self.userInfoVC setVCGenerator:^UIViewController * { Return [UserDetailViewController instanceWithUser:params];}]; self.blogTableViewHelper = [BlogTableViewHelper helperWithTableView:self.blogTableView userId:self.userId]; [self.blogTableViewHelper setVCGenerator:^UIViewController * (ID params) {return [BlogDetailViewController instanceWithBlog:params];}];} - (void) fetchData {[self.userInfoVC fetchData]; //userInfo module does not require any page loading [HUD show]; //blog HUD [self.blogTableViewHelper fetchDataWithcompletionHandler:^ module may need (NSError *error, ID result) {[HUD hide]}]; @end #pragma mark;} - SelfViewController @implementation - SelfViewController (void viewDidLoad) {[super viewDidLoad]; [self addUI]; [self configuration]; [self fetchData];} #pragma mark - Utils (SelfViewController) - (void) addUI addUI] [self.view {[super; addSubview:switchButton]; / / special part of... / /... AddSubview:self.draftTableView = UITableView + [self.view settings alloc] initWithFrame:CGRectZero style:0]];} - {[super (void) configuration configuration]; self.draftTableViewHelper = [DraftTableViewHelper helperWithTableView:self.draftTableView [self.draftTableViewHelper setVCGenerator:^UIViewController (ID * userId:self.userId]; params) {return [DraftDetailViewController instanceWithDraft:params];}];} - {[super (void) fetchData fetchData]; [self.draftTableViewHelper fetchData] @end;} As the business scene of Scene (UserVC) to do something very simple, according to the configuration of its three modules (configuration), (addUI), and then notify the layout of each module to start (fetchData) can, because each module of the display and interaction is self managed, so Scene only can be responsible for and the business related. In addition, according to the access situation we create a subclass of UserVC SelfVC, SelfVC is doing something similar. MVC said this is similar to the above comparison error of MVC, we see what problems to solve: 1 code reuse: three small module V (cell/userInfoView) Set method of foreign exposure only, M and even C are isolated, no reuse problem. Three modules MVC can also be used for the rapid construction of similar business scenarios (reuse modules will be worse, than the small module below I will explain). 2 code bloat: because Scene most of the logic and layout are transferred to the corresponding MVC, we only have assembled MVC constructed two different business scenarios business, each scene can normal corresponding data show, there is also logic interaction accordingly, and these things, with space is about 100 lines of code (of course, here I ignored your Scene layout generation Code). 3 Excelstor Exhibition: both products in the future want to add the recycle bin or towers, I just need the new MVC module corresponding to the corresponding Scene, . 4 maintenance: separation of duties between each module, where the error change is completely does not affect other modules. In addition, each module the code is not much, even if one day write code people leave the guy according to error can quickly locate the error module. 5 testability: Unfortunately, the business is still bound initialization in the life cycle of Scene, and some logic still need to UI click event, we still only Command+R little… - Disadvantages of MVC As you can see, even the standard MVC framework is not perfect, there are still some problems difficult to solve, so the disadvantages of MVC are summarized as follows: where? 
1 excessive focus on isolation: the fact that MV (x) series has the shortcomings, in order to achieve the complete isolation of the V layer, V external exposure to Set method. Generally not what problem, but when many attributes need to be set, a large number of repeated Set method to write up or tiring. 2 business logic and business show strong coupling: as you can see, some of the business logic (page Jump / praise / share…) is directly scattered in the V layer, the that means we were testing the logic, we must first generate the corresponding V, and then tested. Obviously, this is not reasonable. Because the business logic is a change in the final data M, our focus should be on the M, and Not show M V. - MVP The disadvantage is that MVC did not distinguish between business logic and business show, this is not friendly to the unit test. MVP was optimized for the above shortcomings, it will show the business logic and business also made a layer of isolation, the corresponding M becomes MVCP. and V function unchanged, the original C now is only responsible for the layout, and all the logic of all transferred to the P layer. The corresponding relationship is shown in Figure: screen snapshot 2017-03-05 PM 2.57.53.png The business scene did not change, still is to show three types of data, only three MVC replaced three MVP (I’m only drew Blog module), UserVC is responsible for the allocation of three MVP (each new VP, created by VP C, C will be responsible for the binding relationship between VP), and notify the P each layer in the appropriate time (before the notice of C layer) for data acquisition, each layer in the P access to the data after the corresponding treatment, after the treatment, View data binding advice has been updated, V received the update notification from P formatted data access page rendering, each View UserVC finally have a good render the layout. In addition, the V layer C layer is no longer any business logic, all events trigger the corresponding command all calls to the P layer, specific to the code as follows: @interface BlogPresenter: NSObject + (instancetype) instanceWithUserId: (NSUInteger) - userId (NSArray *); allDatas; / / moved to the P business logic layer and business related to M with P layer (void) - refreshUserBlogsWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler; (void) - loadMoreUserBlogsWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler; @end @interface (BlogPresenter) @property (assign, nonatomic) NSUInteger userId @property (strong, nonatomic); NSMutableArray *blogs; @property (strong, nonatomic) UserAPIManager *apiManager; @end @implementation BlogPresenter (instancetype) + instanceWithUserId: (NSUInteger) userId [[BlogPresenter alloc] {return initWithUserId:userId];} - (instancetype) initWithUserId: (NSUInteger) {if (self = userId [super init]) {self.userId = userId; self.apiManager = UserAPIManager / new]; / / #pragma}}... Slightly - Interface - mark (NSArray * allDatas) {return self.blogs;} / / for outer commands - (void) refreshUserBlogsWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler {[self.apiMa Nager refreshUserBlogsWithUserId:self.userId completionHandler:^ (NSError *error, ID result) {if (! Error) {[self.blogs removeAllObjects]; / / empty data before for (Blog *blog in result [self.blogs addObject:[BlogCellPresenter) {presenterWithBlog:blog]]}}; completionHandler? 
CompletionHandler (error, result): Nil;}];} / / to the outer call command (void) loadMoreUserBlogsWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler self.apiManager loadMoreUserBlogsWithUserId:self.userId completionHandler {[@end]}... @interface BlogCellPresenter: NSObject + (instancetype) presenterWithBlog: (Blog * blog); - (NSString * authorText); - (NSString * likeCountText); - (void) likeBlogWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler; (void) - shareBlogWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler; @end @implementation - BlogCellPresenter (NSString * likeCountText) {return [NSString stringWithFormat:@ like%ld, self.blog.likeCount];} - {return (NSString *) authorText [NSString stringWithFormat:@ "author name:% @", self.blog.authorName];} / /... Slightly - (void) likeBlogWithCompletionHandler: (NetworkTaskCompletionHander) completionHandler new] likeBlogWithBlogId:self.blogId userId:self.userId completionHandler:^ {[[UserAPIManager (NSError *error, ID result if (error)) {fail} else {//do {//do success self.blog.likeCount + = 1;} completionHandler? CompletionHandler (error, result): Nil;}];} / /... @end BlogPresenter and BlogCellPresenter were used as the P layer of BlogViewController and BlogCell, is a series of business logic set. BlogPresenter is responsible for obtaining Blogs raw data and through the original data structure of BlogCellPresenter, and BlogCellPresenter provides a variety of data formats of good for Cell rendering, in addition, point praise and sharing business now are transferred to here. The business logic is transferred to the P layer, V layer is the only need to do two things: 1 monitor P layer data update notification, refresh the page display. 2 in the click event is triggered, the corresponding method calls to the P layer, and the method of implementation results show. @interface BlogCell: UITableViewCell @property (strong, nonatomic) BlogCellPresenter *presenter; @end @implementation BlogCell - (void) setPresenter: (BlogCellPresenter * presenter) {_presenter = presenter; / / get the formatted data from the Presenter show self.authorLabel.text = presenter.authorText; self.likeCountLebel.text = presenter.likeCountText;} / /... #pragma mark slightly - Action - (void) onClickLikeButton: (UIButton * sender) {[self.presenter likeBlogWithCompletionHandler:^ (NSError *error, ID result) {if (error!) {// page refresh self.likeCountLebel.text = self.presenter.likeCountText;} / /... 
Slightly}]}; @end Between the C layer to do is layout and the binding of PV (here may not be obvious, because the layout code inside the BlogVC is TableViewDataSource, PV bound, because I am lazy with the Block notification callback, so also not too obvious, if the Protocol callback is obvious), the code is as follows: @interface BlogViewController: NSObject + (instancetype) instanceWithTableView: (UITableView *) tableView presenter: (BlogPresenter) presenter; (void) - setDidSelectRowHandler: (void (^) (Blog * didSelectRowHandler)); - (void) fetchDataWithCompletionHandler: (NetworkCompletionHandler) completionHandler; @end @interface BlogViewController (<); UITableViewDataSource, UITabBarDelegate, BlogView> @property (weak, nonatomic) UITableView *tableView @property (strong, nonatomic); BlogPresenter presenter; @property (copy, nonatomic) void (^didSelectRowHandler) (Blog); @end @implementation (instancetype) BlogViewController + instanceWithTableView: (UITableView *) tableView presenter: (BlogPresenter) {presenter return [[BlogViewController alloc] initWithTableView: tableView presenter:presenter];} - (instancetype) initWithTableView: (UITableView *) tableView presenter: (BlogPresenter presenter) {if (self = [super init]) {self.presenter = presenter; self.tableView = tableView; tableView.delegate = self; tableView .dataSource = self; __weak typeof (self) weakSelf = self; [tableView registerClass:[BlogCell class] forCellReuseIdentifier:BlogCellReuseIdentifier]; tableView.header = [MJRefreshAnimationHeader headerWithRefreshingBlock:^{// weakSelf.presenter refreshUserBlogsWithCompletionHandler:^ (NSError pull-down refresh [*error, ID, result) {[weakSelf.tableView.header endRefresh]; if (! Error) {[weakSelf.tableView reloadData];} / /...}] omitted;}]; tableView.footer = [MJRefreshAnimationFooter headerWithRefreshingBlock:^{// load [weakSelf.presenter loadMoreUserBlogsWithCompletionHandler:^ (NSError *error, ID result) {[weakSelf.tableView.footer endRefresh]; if (! Error) {[weakSelf.tableView reloadData];} / /... Slightly}]}]; return self;}}; #pragma mark - Interface - (void) fetchDataWithCompletionHandler: (NetworkCompletionHandler) completionHandler refreshUserBlogsWithCompletionHandler:^ (NSError {[self.presenter *error, ID result) {if (error) {//show error info} else {[self.tableView reloadData];} completionHandler? CompletionHandler (error, result): Nil}]; #pragma mark;} - UITableViewDataSource & & Delegate (NSInteger) - tableView: (UITableView *) tableView numberOfRowsInSection: (NSInteger) section {return self.presenter.allDatas.count;} - (UITableViewCell *) tableView: (* UITableView) tableView cellForRowAtIndexPath: (NSIndexPath * indexPath) {BlogCell *cell = [tableView dequeueReusableCellWithIdentifier:BlogCellReuseIdentifier]; BlogCellPresenter *cellPresenter = self.presenter.allDatas[indexPath.row]; cell.present = cellPresenter; return cell;} - (void) tableView: (UITableView * tableView * NSIndexPath (didSelectRowAtIndexPath:) indexPath) {self.didSelectRowHandler? Self.didSelectRowHandler (self.presenter.allDatas[indexPath.row]): Nil; @end} BlogViewController is no longer responsible for the actual data acquisition logic, interface, data acquisition directly call Presenter in addition, because the business logic is also transferred to the Presenter, so the layout of TableView is also used for Presenter.allDatas. Cell display, we replaced the original large Set method, and make the Cell according to the binding of CellPresenter to do the show. 
After all, now the logic is moved to the P layer, P layer and V Layer command to make corresponding interaction must rely on the corresponding, but V and M are still isolated, just and P coupling, P layer can replace the M, obviously not, this is a compromise. Finally, Scene, it’s not much change, just replace the configuration MVC for configuration MVP, in addition to the data acquisition is to take the P layer, do not go C layer (however, the code is not the case): - (void) configuration BlogPresenter *blogPresenter {/ /... Other settings = [BlogPresenter self.blogViewController = [BlogViewController instanceWithUserId:self.userId]; instanceWithTableView:self.blogTableView presenter:blogPresenter]; setDidSelectRowHandler:^ [self.blogViewController (Blog *blog) {[self.navigationController pushViewController:[BlogDetailViewController instanceWithBlog:blog] animated:YES];}]; / /...} - slightly (void) fetchData {/ /... FetchData] [HUD show] [self.userInfoVC; [self.blogViewController; fetchDataWithCompletionHandler:^ (NSError *error, ID result) {[HUD hide]; / /}]; or because of lazy, with Block go C layer forwarding will write code, or if it is Protocol Who KVO will use a self.blogViewController.presenter / / but Never mind, because we replace the MVC MVP in order to solve the problem of unit testing, now the usage does not affect the unit test, and only the concept does not match it. Slightly} / /... The above example is actually a problem, we assume that all events are initiated by the V layer and one-time. This fact is not established, cite a simple example: similar to WeChat voice chat on the page, click on the Cell Cell voice began to play, show broadcast animation, broadcast animation complete stop, and then play a voice. In this play in the scene, if CellPresenter was like the above offer only a playWithCompletionHandler interface is not feasible because the play after the callback is definitely in the C layer, C layer will be found when the execution command CellPresenter cannot tell Cell theanimations stop after that is not a one-time event trigger. In addition, the play is complete, the C layer traversal to the next to be broadcast call CellPresenterX player interface, CellPre SenterX because it does not know who is the corresponding Cell, certainly will not be able to notify Cell to start animation, event sponsors are not necessarily V. layer for these non disposable or other layer initiated events, processing method is very simple, just add a Block attribute in the CellPresenter, because it is the property, Block can be repeated callback, Block also can capture Cell, so don’t worry about can not find the corresponding Cell. or so: @interface VoiceCellPresenter: NSObject @property (copy, nonatomic) void (^didUpdatePlayStateHandler) (NSUInteger); - (NSURL *) playURL; @end @implementation VoiceCell - (void) setPresenter: (VoiceCellPresenter * presenter) {_presenter = presenter; if (presenter.didUpdatePlayStateHandler!) __weak typeof (self) {weakSelf = self; [presenter setDidUpdatePlayStateHandler:^ (NSUInteger playState) {switch (playState) {case Buffering: break; case Playing: weakSelf.playButton... WeakSelf.playButton... Break case Paused:; weakSelf.playButton break;}}]...}}; When playing, VC only need to keep a CellPresenter, then introduced the corresponding playState call didUpdatePlayStateHandler can update the status of the Cell . Of course, if it is a VP binding Protocol way, so do these things is very common, do not write. 
MVP will be like this, compared to MVC, it actually do only one thing, namely segmentation show business and the business logic separate display and logic, as long as we can guarantee the V update notification from P in the data can refresh the page, then the entire business is no problem. Because the notice received by V it is from the P layer data access / update operation, so long as we guarantee that the operation P layer is normal. We can only test the P layer logic, do not care about the V layer. - MVVM In fact, MVP is already a very good framework, solves almost all known problems, so why MVVM? is still for example, assume that there is now a Cell, click on Cell button can be attention and attention, can also be cancelled in concern, cancel concern, SceneA the first pop and SceneB ask, do pop, then cancel the attention operation and business setting strong association, so the interface may not be V layer called directly, will rise to the Scene layer. Specific to the code, probably like this: @interface UserCellPresenter: NSObject @property (copy, nonatomic) void (^followStateHander) (BOOL isFollowing); @property (assign, nonatomic) BOOL isFollowing; (void) follow; @implementation UserCellPresenter (void follow) {if (self.isFollowing!) {// not to pay attention to / / follow user else} {// has concerned to cancel self.followStateHander? Self.followStateHander (YES): Nil; / / the first notification Cell display follow state [[FollowAPIManager new] unfollowWithUserId:self.userId completionHandler:^ (NSError *error, ID result) {if {self.followStateHander (error) self.followStateHander? (NO): Nil; //follow eles {self.isFollowing} failed state rollback = YES;}}} / /...}] slightly; @end @implementation UserCell - (void) setPresenter: (UserCellPresenter * presenter) {_presenter = presenter; if (_presenter.followStateHander!) __weak typeof (self) {weakSelf = self; [_presenter setFollowStateHander:^ (BOOL isFollowing) {[weakSelf.followStateButton: setImage:isFollowing?...];}];}} - (void) onClickFollowButton: (UIButton * button) {// will focus on the button click the upload [self routeEvent:@ "followEvent" userInfo:@{@ "presenter": self.presenter}] @end;} Confirm whether the implementation of event @implementation FollowListViewController / intercept click event after the judgment - (void) routeEvent: (NSString * eventName) userInfo: (NSDictionary * userInfo) {if ([eventName isEqualToString:@ "followEvent"]) {UserCellPresenter *presenter userInfo[@ = "presenter"]; [self showAlertWithTitle:@ "prompt" message:@ "to confirm the cancellation of his attention?" cancelHandler:nil confirmHandler: [presenter follow] ^{;}]; @end}} @implementation UIResponder (Router) / / the responder chain event upload event was eventually discarded without treatment or intercept along - (void) routeEvent: (NSString * eventName) userInfo: (NSDictionary *) userInfo [self.nextResponder routeEvent: eventName userInfo:userInfo] {@end}; Block looks a little cumbersome, we look at the Protocol: @protocol UserCellPresenterCallBack < NSObject> - (void) userCellPresenterDidUpdateFollowState: (BOOL) isFollowing; @end @interface UserCellPresenter: NSObject @property (weak, nonatomic) id< UserCellPresenterCallBack> view; @property (assign, nonatomic) - (void) BOOL isFollowing; follow; @end @implementation UserCellPresenter (void follow) {if (self.isFollowing!) 
{// not to pay attention to / / follow user else} {// has concerned to cancel concern BOOL isResponse = (userCellPresenterDidUpdateFollowState) [self.view respondsToSelector:@selector]; isResponse? [self.view userCellPresenterDidUpdateFollowState:YES] [[FollowAPIManager new] unfollowWithUserId:self.userId: Nil; completionHandler:^ (NSError *error, ID result) {if {isResponse (error) [self.view? UserCellPresenterDidUpdateFollowState:NO]: Nil;} eles {self.isFollowing = YES;}}} / /...}] slightly; @end @implementation UserCell - (void) setPresenter: (UserCellPresenter * presenter) {_presenter = presenter; _presenter.view = self;} #pragma mark - UserCellPresenterCallBack - (void) userCellPresenterDidUpdateFollowState: (BOOL) isFollowing: setImage:isFollowing {[self.followStateButton}];?... The removal of Alert Route and VC in such code, we can find that whether Block or Protocol because of the need to isolate the page display and business logic code, Shangrao in a circle, virtually added the amount of code a lot, this is just an event like this, if there is more than one? That writes really hurt… Look at the above code will be found, if we continue to add events, so most of the code is to do one thing: the P layer data updates to the V layer. Block will add a lot of properties in the P layer, add a lot of Block logic in the V layer. And Protocol P although only add layer a property, but inside the Protocol method would have been increased, corresponding to the V layer will need to increase the method. Since the problem is found, then try to solve it, to achieve low coupling communication between two objects to OC, except for Block and Protocol, generally thought we look at the KVO. KVO in the above example behave: @interface UserCellViewModel: NSObject @property (assign, nonatomic) BOOL isFollowing; (void) follow; @end @implementation UserCellViewModel (void follow) {if (self.isFollowing!) {// not to pay attention to / / follow user else} {// has concerned to cancel self.isFollowing = YES; / / Cell follow [[FollowAPIManager to notify the state of new] unfollowWithUserId: self.userId completionHandler:^ (NSError *error, ID result) {if (error) {self.isFollowing = NO;}//follow failure state back slightly}];}} / /... @end @implementation UserCell (void awakeFromNib) {@weakify (self); [RACObserve (self, viewModel.isFollowing) subscribeNext:^ (NSNumber *isFollowing) {@strongify (self); [self.followStateButton setImage:[isFollowing boolValue]?:];};}... The code is less about half, in addition, logical read more clearly, Cell was observed to bind ViewModel isFollowing state, and in the status change, update their display. three data notice a simple comparison, believe that what kind of way more friendly to the programmer, everyone knows not here. is now about the mention of MVVM will think of RAC, but they are not what, for MVVM RAC only provides a data binding way elegant safe, if you do not want to learn RAC, own a KVOHelper or something is also possible. In addition, the charm of RAC lies in function response type programming, we should not only apply it is limited to MVVM, use should also be used daily in the development of . On MVVM, I want to say is this What more, because MVVM is only MVP binding evolution body, removing the data binding mode, and the other MVP is exactly the same, only possible presentation is Command/Signal instead of CompletionHandler and the like, so not to repeat them. 
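For readers who would rather not pull in ReactiveCocoa, the hand-rolled KVO route mentioned above can look roughly like the sketch below. UserCell, UserCellViewModel, followStateButton and isFollowing are the names from the example above; the observer wiring itself is only an illustrative assumption, not code from the original project.

static void *UserCellKVOContext = &UserCellKVOContext;

@implementation UserCell

- (void)setViewModel:(UserCellViewModel *)viewModel {
    // Stop observing the previous view model before swapping it out.
    [_viewModel removeObserver:self forKeyPath:@"isFollowing" context:UserCellKVOContext];
    _viewModel = viewModel;
    // NSKeyValueObservingOptionInitial fires immediately, so the cell renders
    // the current state as soon as the binding is established.
    [viewModel addObserver:self
                forKeyPath:@"isFollowing"
                   options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionInitial
                   context:UserCellKVOContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context == UserCellKVOContext) {
        BOOL isFollowing = [change[NSKeyValueChangeNewKey] boolValue];
        UIImage *image = isFollowing ? nil /* "following" icon */ : nil /* "follow" icon */;
        [self.followStateButton setImage:image forState:UIControlStateNormal];
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

- (void)dealloc {
    [_viewModel removeObserver:self forKeyPath:@"isFollowing" context:UserCellKVOContext];
}

@end

The trade-off is exactly the boilerplate the author complains about: manual add/remove bookkeeping and a stringly-typed key path, which is what RACObserve hides.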
Finally, make a brief summary: 1.MVC as the old architecture, has the advantages of the business scene by the display data type is divided into several modules, each module in the C layer is responsible for business logic and business show, while M and V should be isolated from each other for reuse, other each module can also be handled properly reuse unit. The split is decoupled, the way to do is to isolate burden, reuse, improve development efficiency. The disadvantage is that there is no distinction between business logic and business friendly display, not for unit testing. 2.MVP as the advanced version of MVC, is proposed to distinguish between business logic and business presentation, business logic will be all transferred to the P layer, data receiving the P layer V layer updates page display. Has the advantages of good stratification brought a friendly unit test, shortcomings in the hierarchical logic advantage will make the code around, at the same time It also brings a lot of code, not friendly to the programmer. As 3.MVVM synthesizer, through binding data to update the data, reducing the amount of code, while optimizing the code logic, just learning a high cost, not friendly enough for beginners. 4.MVP and MVVM MVC will be established so because stratified two times the above documents, code management good. 5 in MVP and MVVM, between V and P or VM theory is many to many relationship, different layout in the same logic only need to replace the V layer, and the same layout of different logic only need to replace P or VM layer. But in the actual development of P or VM because of coupling of the V display logic layer to degenerate into a one-to-one relationship (such as the SceneA to display “xxx+Name”, VM Name will be formatted as “XXX + Name”. One day S CeneB also used this module, all the click event and page display are the same, just Name display “YYY + Name”, the VM for showing the logical coupling of SceneA, it is embarrassing), for such cases, there are usually two ways, one is to judge the output state in the VM layer and the state one is in the VM layer, adding a layer of FormatHelper. because the former may appear too many states code ugly, although the latter is elegant and expand, but too much of the data reduction when stratified in slightly clumsy, we should choose according to need. Here is just a nonsense, some articles to say that MVVM is the C layer in order to solve the problem of testing over MVC, in fact is not the case. According to the architecture evolution sequence, the C layer is not a good resolution of most bloated MVC module, a good resolution on the line, do not need MVVM. and MVC to test can also be used MVP to solve, only MVP is not perfect, in the data exchange between the VP is too complicated, so it raises MVVM. when MVVM the body, we can see the origin from the results, it was doing a lot of things, but not that its predecessors and a lot of effort! - Architecture so much, in the end how to choose the daily development? Whether it is MVC, MVP, MVVM or MVXXX, the final goal is to serve the people, we focus on the architecture, focusing on stratification is to develop efficiency, in order to ultimately happy. So, in the actual development should not rigidly adhere to a particular architecture, based on the actual project, the general MVC can meet most of the development as for the demand, MVP and MVVM, you can try, but do not force . In short, I hope you can do: design, aware of. Line code, just happy. This article with the demo address
http://w3cgeek.com/review-mvcmvpmvvm.html
CC-MAIN-2017-51
en
refinedweb
Most Web applications need to be secure because the apps allow users to sign up and log in and out of the site. Web application security is used to control access to all or part of a site. Beginning with ASP.NET 2.0, the Membership API was added to simplify adding such security to a Web application. Find out how to use the Membership API with a SQL Server backend. Essentials Prior to version 2.0, .NET allowed developers to implement site security by providing a way to use Windows authentication, as well as a forms-based model. An issue with these approaches is the amount of development work that's necessary to get them working. In ASP.NET 2.0, the Membership API has been added; it takes over where the forms-based approach ends. The Membership API also allows you to create, delete, and edit user properties. It includes two standard Membership providers that allow you to integrate with Active Directory or utilize a SQL Server backend. You may develop a custom provider to use with the Membership API as well. Programming The Membership API is available with the Membership class in the System.Web.Security namespace. It exposes the following methods for working with site users: - CreateUser: Allows you to create a new user. - DeleteUser: Allows you to delete a user. - FindUsersByEmail: Allows you to find users with a particular e-mail address. - FindUsersByName: Allows you to find users with a particular username. - GeneratePassword: Allows you to generate a random password. - GetAllUsers: Returns all users. - GetNumberOfUsersOnline: Returns the number of users currently on the site. - GetUser: Allows you to find a user by username. - GetUserNameByEmail: Returns the username associated with an e-mail address. - UpdateUser: Allows you to update a user. - ValidateUser: Allows you to validate a user and password. ValidateUser is used to log a user onto the site. These methods offer everything necessary to provide basic site security. Using SQL Server as the backend data store SQL Server is the default Membership provider; however, it does require setup to make it work. The .NET Framework includes a command-line tool (aspnet_regsql.exe) for adding the necessary database objects. The tool is available in this default directory: C:\<windows dir>\Microsoft.NET\Framework\<version>\aspnet_regsql.exe When this tool runs without command-line parameters, a wizard guides you through setup. Basically, you choose the database server and the database to use. Then, a number of tables, views, and stored procedures are added to the database; these are used by the Membership API. With the database set up, you may use it in your code as the data provider for membership services. The database connection string and the membership settings are configured in the application's web.config file. The database connection string is defined in the connectionStrings element. The following example connects to an instance of SQL Server 2005 using SQL Server security: <connectionStrings> <add name="Test" connectionString="Data Source=TestServer;User ID=Chester;Password=Tester;Initial Catalog=MembershipTest;"/></connectionStrings>
<authentication mode="Forms" /> <membership defaultProvider="TestProvider"> <providers> <add name="TestProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="Test" /> </providers></membership> (To conserve space, this snippet only contains a portion of the complete web.config.) Now with the backend connections set up, it can be used in an application. Combining with Login controls A great aspect of the Membership API is that the Login controls available within Visual Studio 2005/2008 are designed to work with it. You can easily drop one of the controls on an ASP.NET Web Form and tie it to a membership provider defined in the web.config file. These login controls include the following: - Login: Provides username and password textboxes for user logon. An error message is displayed if the logon fails. The ValidateUser method is used to check the user against the database. - LoginView: Retrieves a user's login status. LoginView uses this status to display content (defined in control) regardless of whether a user is logged in. - PasswordRecovery: Provides the functionality to retrieve or reset a user's password based on their username. - LoginStatus: Detects the user's authentication status and displays the appropriate login/logout option. - LoginName: Displays the currently authenticated user's name on the page. No value is displayed if the user is not logged on. - CreateUserWizard: Provides an interface for registering a user on the site. By default, it collects username, password, e-mail address, and validation question. It can be extended to include more fields and steps within the process. - ChangePassword: Allows the user to change their current password. As an example, the following code snippet shows how the CreateUserWizard control may be used on an ASP.NET Web Form. The MembershipProvider attribute is set to the value assigned to the provider in our web.config file. The WizardSteps allows you to customize the steps in the registration process — that is accomplished in the following example with the message that is displayed upon successful registration (asp:CompleteWizardStep element). You may define additional steps as well. <asp:CreateUserWizard <WizardSteps> <asp:CreateUserWizardStep </asp:CreateUserWizardStep> <asp:CompleteWizardStep </asp:CompleteWizardStep> </WizardSteps></asp:CreateUserWizard> All of the controls are easily tied to a membership provider, so it's simple to use the controls in a Web application without any code. Another good example is the LoginStatus control, which allows you to display custom messages according to a user's status. <asp:LoginStatus Easily secure your application One of the goals with new releases of the .NET Framework is to simplify common programming chores. Providing site security via registration and logon is a common aspect of most Web applications. The Membership API provides methods for providing this functionality, and these methods are tied to the Login controls available for use on ASP.NET Web Forms. What ASP.NET 2.0 features simplify your projects? What features would you like to see added to ASP.NET?.
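Although the article relies on the Login controls, the Membership methods listed in the Programming section can also be called directly from code-behind when the built-in controls are not a good fit. The sketch below is illustrative only — the page, control names and question/answer values are invented for the example — but the Membership and FormsAuthentication calls are the standard System.Web.Security APIs.

using System;
using System.Web.Security;

public partial class CustomLogin : System.Web.UI.Page
{
    protected void RegisterButton_Click(object sender, EventArgs e)
    {
        MembershipCreateStatus status;
        // Creates the user through the provider configured in web.config.
        Membership.CreateUser(UserNameBox.Text, PasswordBox.Text, EmailBox.Text,
                              "Favorite color?", "Blue", true, out status);

        if (status != MembershipCreateStatus.Success)
        {
            MessageLabel.Text = "Could not create user: " + status;
        }
    }

    protected void LoginButton_Click(object sender, EventArgs e)
    {
        // ValidateUser checks the credentials against the membership store.
        if (Membership.ValidateUser(UserNameBox.Text, PasswordBox.Text))
        {
            // Issues the forms-authentication cookie and redirects.
            FormsAuthentication.RedirectFromLoginPage(UserNameBox.Text, false);
        }
        else
        {
            MessageLabel.Text = "Invalid user name or password.";
        }
    }
}

Because the provider is resolved from web.config, the same code works whether the backing store is SQL Server or a custom provider.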
https://www.techrepublic.com/blog/software-engineer/secure-aspnet-20-sites-with-membership-api/
CC-MAIN-2017-51
en
refinedweb
import numpy as np treat = 330.8 treat_se = 99.7 placebo = 188.1 placebo_se = 55.5 n = 15 Formula for SE of difference: $$SE = \sqrt{\frac{SD_T^2}{n_T} + \frac{SD_C^2}{n_C}}$$ and $SE = SD/\sqrt{n}$, so for this example: $$SE = \sqrt{SE_T^2 + SE_C^2}$$ SE_diff = np.sqrt(99.7**2 + 55.5**2) SE_diff 114.10670444807351 Hence, a 95% confidence interval for the difference would be: d = treat-placebo d - 2*SE_diff, d + 2*SE_diff (-85.513408896147013, 370.91340889614708) Which, given n=15 and the variability, is pretty uncertain.
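As a quick cross-check on the ±2·SE rule above, a t-based interval using the Welch–Satterthwaite degrees of freedom gives almost the same answer. This cell assumes scipy is available and that the reported values are the standard errors of each group mean (n = 15 per group); it reuses the variables defined above.

from scipy import stats

# Welch-Satterthwaite approximation to the degrees of freedom, written in
# terms of the standard errors of the two group means.
df = (treat_se**2 + placebo_se**2)**2 / (treat_se**4 / (n - 1) + placebo_se**4 / (n - 1))
t_crit = stats.t.ppf(0.975, df)

# Roughly 22 degrees of freedom and t_crit about 2.07, so the t-based interval
# is only slightly wider than the +/- 2*SE one above.
df, t_crit, (d - t_crit * SE_diff, d + t_crit * SE_diff)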
http://nbviewer.jupyter.org/gist/fonnesbeck/09d12cd7df09eae63175/Untitled.ipynb
CC-MAIN-2017-51
en
refinedweb
Understand IList and IList<T> interface in C# In this article we will try to understand IList and IList<T> in C#. IList and IList<T> are both collection interfaces in C#. IList is a non-generic collection whose elements can be individually accessed by index. The IList interface derives from two other interfaces, ICollection and IEnumerable. The definition of the IList interface looks like this: public interface IList : ICollection, IEnumerable We can use the interface as an abstraction over another collection. For example, we can point an IList reference at an array. Try to understand the code below. class Program { static void Main(string[] args) { IList ilist = new int[10]; ilist.Add(100); Console.ReadLine(); } } We have used an IList reference to refer to an integer array. By nature, IList always works with object, so even if we pass an integer value it will be stored (boxed) as an object; in Visual Studio we can check it. (Note that Add on an array-backed IList actually fails at run time with a NotSupportedException, because arrays have a fixed size.) The non-generic interface also gives no compile-time type safety, as the example below shows. class Program { static void Main(string[] args) { IList ilist = new List<string>(); ilist.Add(100); Console.ReadLine(); } } In this example we are trying to insert an integer value through IList. It compiles (the int is boxed to object), but it throws an error at run time because the underlying collection is a List<string> and only accepts strings. There are many methods in the IList interface. A few of them are: Add: adds a new object to the IList. Clear: removes all the items from the IList. Insert: inserts a new object into the IList at a specific location. Remove: removes the first occurrence of a specific object from the IList. RemoveAt: removes the item at a particular index from the IList. A few popular properties of the IList interface are given below. Count: returns the number of objects in the list. IsReadOnly: indicates whether the IList is read-only or not. Item: returns the item at the specified index. The IList<T> interface is very similar to IList; the only difference is that it is generic. We can pass any type in place of T. Have a look at the code below. namespace Client{ class Program { static void Main(string[] args) { IList<string> List = new List<string>(); List.Add("Sourav"); List.Add("Ram"); List.Add("Shyam"); foreach (string name in List) { Console.WriteLine(name); } Console.ReadLine(); } }} Here we are passing string in place of T. The sample output is given below. As we said earlier, T can be any type; in another example we will use an object of one of our own classes as the type parameter of IList<T>. The code below is very similar to the code above. using System;using System.Collections.Generic;using System.Text;using System.Collections;namespace Client{ class Person { public string Name { get; set;} } class Program { static void Main(string[] args) { IList<Person> List = new List<Person>(); Person p = new Person(); p.Name = "Sourav"; List.Add(p); p = new Person(); p.Name = "Ram"; List.Add(p); foreach (Person O in List) { Console.WriteLine(O.Name); } Console.ReadLine(); } }} In this example we are storing objects of the Person class in an IList<T>. The output of this example is given below. In this article we have tried to understand IList and IList<T> with examples. Hope you have understood the difference between them.
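One practical payoff of programming against the interface, not shown in the article above, is that a method taking IList<T> accepts both arrays and List<T>. The following is a small illustrative sketch; the method and variable names are not from the original article.

using System;
using System.Collections.Generic;

class Program
{
    // Any indexable collection of ints can be passed here.
    static int Sum(IList<int> values)
    {
        int total = 0;
        for (int i = 0; i < values.Count; i++)
        {
            total += values[i];
        }
        return total;
    }

    static void Main()
    {
        int[] array = { 1, 2, 3 };
        List<int> list = new List<int> { 4, 5, 6 };

        Console.WriteLine(Sum(array)); // 6
        Console.WriteLine(Sum(list));  // 15
    }
}

If the method only needs to read items in order, IEnumerable<T> is an even looser contract; IList<T> is the right choice when index-based access or in-place modification is required.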
http://www.dotnetfunda.com/articles/show/2614/understand-ilist-and-ilistt-interface-in-csharp
CC-MAIN-2017-51
en
refinedweb
So I made this coin-flipping program. It generates a random number between 1 and 2, and checks to see how many "flips" it would take for it to get 15 flips in a row. However, every time I run it, I get the same number, 13505, every single time! How can I fix this, so that it doesn't do this every time? I tried changing the value of num to 1-100, but it still comes up with the same answer every time (this time it's 11341) #include <iostream> #include <string> #include <time.h> #include <windows.h> #include <iostream> #include <fstream> using namespace std; int main() { begin: system("cls"); system("TITLE Coin Flip"); int i=0; long int j=0; while(i<15) { int num=rand()%2+1; j++; if(num==1) { i++; } else if(num==2) { i=0; } } cout<<j<<endl; system("pause"); }
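No replies are included in this capture, but the behaviour described is the classic symptom of never seeding the generator: rand() always starts from the same default seed, so every run replays the same sequence. Note the original already includes <time.h>; it just never calls srand. Below is a hedged sketch of the usual fix, with the surrounding program simplified — seed exactly once, before the first rand() call.

// The usual fix: seed the generator once at program start so rand() does not
// replay the same sequence every run. This is a sketch of the idea, not a
// reply quoted from the original thread.
#include <cstdlib>   // rand, srand
#include <ctime>     // time
#include <iostream>

int main()
{
    std::srand(static_cast<unsigned>(std::time(0)));  // seed once, before any rand() call

    int streak = 0;
    long flips = 0;
    while (streak < 15)
    {
        ++flips;
        if (std::rand() % 2 + 1 == 1)   // same 1-or-2 "coin" as the original code
            ++streak;
        else
            streak = 0;
    }
    std::cout << flips << std::endl;
    return 0;
}

On newer compilers, <random> with std::mt19937 and std::uniform_int_distribution is the preferred replacement for rand() altogether.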
https://www.daniweb.com/programming/software-development/threads/243239/random-number-not-random
CC-MAIN-2017-51
en
refinedweb
: public class MyGenericClass<T> where T:IComparable { } Note For more information on the where clause in a query expression, see where clause.. class MyClass<T, U> where T : class where U : struct { }: public class MyGenericClass<T> where T : IComparable, new() { // The following line is not possible without new() constraint: T item = new T(); } The new() constraint appears last in the where clause. With multiple type parameters, use one where clause for each type parameter, for example: interface IMyInterface { } class Dictionary<TKey, TVal> where TKey : IComparable, IEnumerable where TVal : IMyInterface { public void Add(TKey key, TVal val) { } } You can also attach constraints to type parameters of generic methods, like this: public bool MyMethod<T>(T t) where T : IMyInterface { } Notice that the syntax to describe type parameter constraints on delegates is the same as that of methods: delegate T MyDelegate<T>() where T : new() For information on generic delegates, see Generic Delegates. For details on the syntax and use of constraints, see Constraints on Type Parameters. C# Language Specification For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage. See Also C# Reference C# Programming Guide Introduction to Generics new Constraint Constraints on Type Parameters
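Returning to the constraint examples above, here is a hedged sketch of how they play out at the call site; the IMyInterface, MyGenericClass<T> and MyMethod declarations are adapted from this page, while the Widget and Demo types are invented purely for illustration.

using System;

// Adapted from the article so the sketch stands alone:
public interface IMyInterface { }

public class MyGenericClass<T> where T : IComparable, new()
{
    public T Item = new T();   // only possible because of the new() constraint
}

// Widget is an invented type that happens to satisfy both constraints.
public class Widget : IComparable, IMyInterface
{
    public int Size { get; set; }

    public int CompareTo(object other)
    {
        return Size.CompareTo(((Widget)other).Size);
    }
}

public class Demo
{
    public bool MyMethod<T>(T t) where T : IMyInterface
    {
        return t != null;
    }

    public static void Main()
    {
        // Allowed: Widget implements IComparable and has a public parameterless
        // constructor, so it satisfies "where T : IComparable, new()".
        MyGenericClass<Widget> container = new MyGenericClass<Widget>();
        Console.WriteLine(container.Item);

        // Allowed: Widget implements IMyInterface, satisfying MyMethod's constraint.
        Console.WriteLine(new Demo().MyMethod(new Widget()));

        // Would NOT compile: string implements IComparable but has no public
        // parameterless constructor, so it fails the new() constraint.
        // MyGenericClass<string> bad = new MyGenericClass<string>();
    }
}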
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/where-generic-type-constraint
CC-MAIN-2017-51
en
refinedweb
well... As far as I know, we could use any variable name in the argument list or parameter, providing that the data type and the function name are the same, right? However, this code won't work if I replace factor_num with n in the argument_list of the function definition? This program doesn't work.

Code:
#include <iostream>
int triangle(int);

int main()
{
    using namespace std;
    // Declare a variable as fact_num that represents the factorial number
    int fact_num;
    // Prompt user for input
    cout << "Please enter a number: ";
    cin >> fact_num;
    cout << "Your factorial number is: " << triangle(fact_num) << endl;
    system("pause");
    return 0;
}

// The function body. That's the function definition
int triangle(int n){
    int n;
    int sum = 0;
    for(n = 1; n <= fact_num; n++)
        sum += n;
    return sum;
}

But this one, with different names for the argument_list in main() and in the function definition, works really well. I am just wondering why. That really baffles me. Besides, how come the author has to set while (1) {...} before the function call?

Code:
#include <iostream>
#include <cmath>
using namespace std;

int prime(int n);

int main(){
    int i;
    while(1){
        cout << "Enter a number (0 to exit)";
        cout << "and press ENTER:";
        cin >> i;
        if(i == 0)
            break;
        if(prime(i))   // This is what I am talking about, see
                       // the (i) variable in the argument_list
            cout << i << " is prime" << endl;
        else
            cout << i << " is not prime" << endl;
    }
    return 0;
}

int prime(int n){      // See this. the variable in the argument_list
    int i;             // is now n, not i.
    for(i = 2; i <= sqrt(static_cast<double>(n)); i++){
        if(n % i == 0)
            return false;
    }
    return true;
}
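For what it's worth, the first program fails for two separate reasons: the function body declares another int n on top of the parameter n (a redefinition error), and it refers to fact_num, which is a local variable of main() and simply not visible inside triangle(). The only way the caller's value reaches the function is through the parameter. A corrected sketch (mine, not from the thread):

#include <iostream>

int triangle(int n);   // the parameter name in a declaration is only documentation

int main()
{
    std::cout << "Please enter a number: ";
    int fact_num;
    std::cin >> fact_num;
    std::cout << "Your triangular number is: " << triangle(fact_num) << std::endl;
    return 0;
}

// Inside the definition, the caller's value is available only through the parameter n.
int triangle(int n)
{
    int sum = 0;
    for (int k = 1; k <= n; k++)
        sum += k;
    return sum;
}

As for the while (1) in the second program, it is just an endless input loop so the user can test several numbers in one run; the break statement exits it when 0 is entered. It has nothing to do with the parameter naming question.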
https://cboard.cprogramming.com/cplusplus-programming/110329-variable-argument_list.html
CC-MAIN-2017-51
en
refinedweb
As far as I understood the "static initialization block" is used to set values of static field if it cannot be done in one line. But I do not understand why we need a special block for that. For example we declare a field as static (without a value assignment). And then write several lines of the code which generate and assign a value to the above declared static field. Why do we need this lines in a special block like: static {...} The non-static block: { // Do Something... } Gets called every time an instance of the class is constructed. The static block only gets called once, when the class itself is initialized, no matter how many objects of that type you create. Example: public class Test { static{ System.out.println("Static"); } { System.out.println("Non-static block"); } public static void main(String[] args) { Test t = new Test(); Test t2 = new Test(); } } This prints: Static Non-static block Non-static block
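To answer the original question more directly: Java does not allow ordinary statements at class level, so any multi-step setup of a static field has to go into a block that the JVM runs once when the class is initialized — which is exactly what the static block is. A small sketch (hypothetical class, not from the question):

import java.util.HashMap;
import java.util.Map;

public class Config {
    // Cannot be filled by a single initializer expression...
    private static final Map<String, Integer> LIMITS = new HashMap<String, Integer>();

    // ...so the multi-line setup goes into a static block, executed once at class load time.
    static {
        LIMITS.put("small", 10);
        LIMITS.put("medium", 100);
        LIMITS.put("large", 1000);
        // A static block can also wrap initialization in try/catch,
        // which a plain field initializer cannot do.
    }

    public static int limitFor(String size) {
        return LIMITS.get(size);
    }
}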
https://codedump.io/share/WPwHnVtrAODm/1/static-initialization-blocks
CC-MAIN-2017-51
en
refinedweb
How do you check if a string contains only numbers? I've given it a go here, need it in the simplest way. Thanks. import string def main(): isbn = input("Enter you're 10 digit ISBN number: ") if len(isbn) == 10 and string.digits == True: print ("Works") else: print("Error, 10 digit number was not inputted and/or letters were inputted.") main() if __name__ == "__main__": main() input("Press enter to exit: ") You'll want to use the isdigit method on your str object: if len(isbn) == 10 and isbn.isdigit(): From the isdigit documentation: str.isdigit() Return true if all characters in the string are digits and there is at least one character, false otherwise. For 8-bit strings, this method is locale-dependent.
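Plugging the accepted suggestion back into the original program gives roughly the sketch below (assuming Python 3, where input() returns a string). One caveat worth knowing: str.isdigit() also returns True for some non-ASCII digit characters such as '²', so a check against string.digits is slightly stricter if only 0-9 should pass.

import string

def main():
    isbn = input("Enter your 10 digit ISBN number: ")
    if len(isbn) == 10 and isbn.isdigit():
        print("Works")
    else:
        print("Error, 10 digit number was not inputted and/or letters were inputted.")

    # Stricter variant: accept only the ASCII digits 0-9.
    strictly_ascii = len(isbn) == 10 and all(ch in string.digits for ch in isbn)
    print("ASCII digits only:", strictly_ascii)

if __name__ == "__main__":
    main()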
https://codedump.io/share/d1OuL8XwgELU/1/how-do-you-check-if-a-string-contains-only-numbers---python
CC-MAIN-2017-51
en
refinedweb
Java Server Pages (JSP) and MySQL Internationalization, Localization, and Turkish Language Support Bulut F. Ersavaş, M.S. [email protected] 21 February 2005 (This article has been published in in Turkish.) 1. Introduction There are two big challenges to build a web service that will provide content with multiple languages. These are (i) developing the application so that it supports many region and language specific elements such as various character sets and (ii) setting up the database so that it can handle those elements. Most basically, a multilingual web service should be able to receive, process, and display text including international characters. A common problem is that it is not always clearly described how to make a collection of tools (e.g. application server, database server) and technologies (e.g. java, SQL) work together to support many languages and character encodings. This article, based on working models, explains how to make java applications work with MySQL databases with multiple language and character sets. This article includes examples of an application that supports both English and Turkish interfaces (i.e. menus, links etc. are presented in a language preferred by user). The application includes dynamic content (stored in database) from both languages, therefore, should be able to support characters included in one language but not in the other (e.g. x and ö). I will use UTF-8 as the default character encoding. This is what I recommend for those who would like to have more than one language support in their applications (especially Turkish or any other European Language). UTF-8, which stands for Unicode Transformation Format- 8, is an 8-bit lossless encoding of Unicode characters. However, the information and sample code excerpts given in this article can be applied to any character encoding. Reader should replace any “utf-8” reference with his/her selection of encoding. 2. Using Emacs with International Character Sets A developer, working with JSPs, Tomcat and MySQL, is very likely to use Emacs for editing and managing source files. This section briefly describes how to customize Emacs to support a specific character encoding. To read and write all “.jsp” files using UTF-8 coding system, add the following line to your .emacs file (most likely in your home directory). You can repeat the same expression for different file extensions (e.g. .html) and coding systems (e.g. 'iso-8859-9). (modify-coding-system-alist 'file "\\.jsp\\'" 'utf-8) Once you attach .jsp files to UTF-8 encoding, you can use Turkish characters in your source file without any problem. In emacs, you can select one of many ways to input special characters of one language. To see the description of one input method, press “Ctrl-h Shift-i” or M-x and type describe- input-method and then select the language/input method (hit Tab to see a list of selections). To set the input method use “M-x set-input-method”. To toggle between two input methods, you can use the “Ctrl-\” shortcut. 
You can set the default language preferences for your emacs by adding the following lines into your .emacs initialization file: ;; init file should only have one instance of custom-set-variables (custom-set-variables '(case-fold-search t) ;; optional '(current-language-environment "Turkish") '(default-input-method "turkish-postfix") '(global-font-lock-mode t nil (font-lock)) ;; optional '(transient-mark-mode t) ;; optional ) Generally, a standard emacs installation includes support for most of the character encodings, if not all. However, if this is not the case for your copy, please refer to GNU Emacs Manual [1] to install additional character encodings. Other useful commands to use in Emacs for character set support are as follows: Note: M-x stands for meta-x and is the command prompt reached usually by pressing left- Alt key with x. M-x prefer-coding-system: This command reads the name of a coding system from the minibuffer, and adds it to the front of the priority list, so that it is preferred to all others. [1] M-x set-buffer-file-coding-system: If you want to write files from current buffer using a different coding system, you can specify it for the buffer using this command. [1] M-x set-keyboard-coding-system: Use this command to set keyboard coding system to insert characters directly in this coding. [1] 3. Internationalization and Localization Locale is a set of region specific elements such as character set, currency and time format represented in an application. Separating the locale dependencies from application’s source code is called internationalization. Localization is adapting such an application to a specific locale. [2] Java provides java.util.Locale class to represent a specific geographical, political and cultural region. To contain locale-specific objects, resource bundles are used (java.util.ResourceBundle). For more information on these classes please refer to [3]. A simple example of how to use Locale and ResourceBundle in your JSP code is given below. // Get Turkish Resource Bundle: Locale locale = new Locale("tr","TR"); ResourceBundle messages = ResourceBundle.getResource("Message", locale); // Get the string for “Hello World” translated to Turkish: String myString = messages.getString("messages.hello_world"); out.println(myString); In the example above, the Turkish translation of the hello_world message is included in an ASCII file called bundle (e.g. Message_tr_TR.properties). ResouceBundle's getString function retrieves the requested message from this file. For every locale selected, there must be one such file, otherwise, java uses the default bundle (i.e. any bundle with no country or language code). Here is a mapping for the bundle naming: BundleName + "_" + localeLanguage + "_" + localeCountry + "_" + localeVariant BundleName + "_" + localeLanguage + "_" + localeCountry BundleName + "_" + localeLanguage BundleName Here is an example excerpt from a bundle named Message_en_US.properties: movie.title=Title movie.starring=Starring movie.director=Director A bundle written in native character encoding should be converted to ASCII format by using Java's ”native2ascii” utility (located under /<java-sdk-install-dir>/bin/). This is necessary for the web application server (e.g. Tomcat) to correctly display the character set. A sample usage for this utility is as follows: native2ascii -encoding ISO-8859-9 tr.src Message_tr_TR.properties In this example the source bundle (tr.src) is written in ISO-8859-9 encoding. 
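The bundle lookup and fallback described above can be seen in action with a short standalone sketch (assuming the default Message.properties and the Turkish Message_tr_TR.properties bundles from this article are on the classpath):

import java.util.Locale;
import java.util.ResourceBundle;

public class BundleFallbackDemo {
    public static void main(String[] args) {
        // Looks for Message_tr_TR.properties first, then Message_tr.properties,
        // and only then the plain Message.properties default bundle.
        ResourceBundle turkish = ResourceBundle.getBundle("Message", new Locale("tr", "TR"));
        System.out.println(turkish.getString("movie.title"));   // "İsim" from the tr_TR bundle

        // A locale with no matching bundle eventually falls back to the default
        // Message.properties, so the application still runs, just untranslated.
        ResourceBundle fallback = ResourceBundle.getBundle("Message", new Locale("de", "DE"));
        System.out.println(fallback.getString("movie.title"));
    }
}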
You can keep your bundles under /<jakarta-install-dir>/webapps/<app-name>/WEB-INF/classes directory. Here is an excerpt from Message_tr_TR.properties bundle after running native2ascii. Notice that “İ” is converted to \u0130 and “ö” is converted to \u00f6: movie.title=\u0130sim movie.starring=Oyuncular movie.director=Y\u00f6netmen 3.1. How to Use Locale in JSP You can set a variable in client’s session to keep what locale he/she would like to use. Depending on your preference, you can also store user’s language choice as a cookie or pass it as a request parameter between JSP pages. Here is an example with session: Locale locale = (Locale) session.getValue("myLocale"); if (session.getValue("myLocale") == null) { // Default Language Setting locale = new Locale("tr","TR"); session.putValue("myLocale",locale); } Here is an example on how to use the ResourceBundle in your JSP code: <% ResourceBundle bundle = ResourceBundle.getBundle("Message",locale); %> <select name="searchColumn"> <option value="name"> <%=bundle.getString("movie.title")%></option> <option value="starring"> <%=bundle.getString("movie.starring")%></option> <option value="director"> <%=bundle.getString("movie.director")%></option> 4. JSP Character Set Handling Character handling can be split into two categories: displaying the characters and receiving the ones entered by the user. For JSP pages, setting the encoding for each category is done separately by using different directives and/or functions. 4.1. Displaying International Characters To set the character encoding for JSP page display, use the standard “page” directive with “contentType” parameter as follows: <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %> Page directive is used to control the structure of a servlet or a JSP by importing classes, customizing superclasses, and setting the content type, etc. I also recommend using the following HTML tag for the web browser to load the correct character set: <meta http- Meta tags with an http-equiv attribute are the same as HTTP headers. In general, they are used to control the action of browsers, and can be used to refine the information provided by the actual headers. [4] In the example given above, the HTTP content type is extended to specify the character set. Avoid using Java String functions (or constructors) such as the following to convert character encoding of a string, because it is both inefficient and unnecessary. Once you set all the options mentioned in this article correctly, you will not need to use such a conversion. str = new String(request.getParameter("value").getBytes("ISO-8859-1"), "UTF- 8"); 4.2. Retrieving Data from HTML Forms with International Characters You need to set the request encoding in JSP with setCharacterEncoding method to be able to receive user inputs entered through the forms properly. Below is the call needed in the JSP file: <% request.setCharacterEncoding("UTF-8"); %> You can verify that the character encoding is set correctly for JSP request by printing out the result of the following function call: <% request.getCharacterEncoding(); %> This should return “UTF-8”. If “setCharacterEncoding” does not work for you, you can also try to set the encoding directly in HTML form tag as follows: (see reference [6], section 17 - forms, for more details) <FORM action=”...” method=”...” …form content… </FORM> 5. 
JSP – MySQL Connection Setup When connecting to MySQL database from your JSP or Java Servlet class using mysql- connector-java or MySQL Connector/J driver (JDBC driver directly from MySQL) connect as follows (notice the useUnicode and characterEncoding argument) to correctly pass the right character encoding from your JSP pages to MySQL database server. (see reference [7] for details) import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; ... try { Connection conn = java.sql.DriverManager.getConnection( "jdbc:mysql://localhost/dbName?user=userName&password=usersPassword&useUnicode =true&characterEncoding=UTF-8"); // Use the Connection... } catch (SQLException ex) { // handle errors, if any System.out.println("SQLException: " + ex.getMessage()); System.out.println("SQLState: " + ex.getSQLState()); System.out.println("VendorError: " + ex.getErrorCode()); } If you are using MySQL version 4.1 or later (highly recommended), you can also set the default collation for the connection with the following statements. These function-calls set the variables used by MySQL to define the default collation. You can also specify what collation to use in your SQL statements. Please see section 6 of this article and MySQL documentation [8] for details on collation. conn.createStatement().execute("SET character_set_client=utf8"); conn.createStatement().execute("SET character_set_connection=utf8"); conn.createStatement().execute("SET character_set_results=utf8"); conn.createStatement().execute("SET character_set_database=utf8"); conn.createStatement().execute("SET character_set_server=utf8"); conn.createStatement().execute("SET collation_connection=utf8_turkish_ci"); conn.createStatement().execute("SET collation_database=utf8_turkish_ci"); conn.createStatement().execute("SET collation_server=utf8_turkish_ci"); It is optional to set these for connection, because you can specify character set / collation within your query or when you are creating your table. See next section for examples. 6. MySQL Character Sets and Collation As a good practice, if you are starting to develop your application from scratch, use UTF-8 character encoding because this standard encoding covers most of the international character sets and is supported by recent versions of all major web browsers (e.g. Netscape, Internet Explorer). Although ISO-8859-9 Turkish encoding (i.e. latin5) sounds very appealing, it is better to use UTF-8 with MySQL for Turkish characters. Furthermore, if you will adapt your application to many other languages, you will not have to change your character set and collation for every new language you localize to (as long as they are supported by UTF8). Starting from version 4.1, MySQL has extensive collation support. Collation is a collection of rules to compare characters in a character set. For example, when you are making a case- insensitive query, database should be able to match lowercase character (e.g. a) with the uppercase character (e.g. A). The real complication is with conflicting international characters. For example, in Turkish, uppercase for letter “i" is “İ” and lowercase for letter “I” is “ı”. A database server that does not know about this property of the Turkish character (i.e. that uses a collation table for English) will return wrong results for case-insensitive queries that involve “i", “ı”, “İ”, and “I” characters. This problem is well addressed in MySQL versions starting from 4.1. 
Under MySQL, UTF8 character set supports one case-insensitive collation for Turkish called “utf8_turkish_ci” and latin5 (ISO 8859-9) supports two collations that are “latin5_bin” and “latin5_turkish_ci”. For the data entered into and received from MySQL database server, you can set the default character set and collation at five levels: (i) server, (ii) database, (iii) table, (iv) column, and (v) connection. More information and example for each follows: i. When you start the database server: mysqld --default-character-set=utf8 \ --default-collation=utf8_turkish_ci ii. When you are creating the database (or with alter statement after creation): CREATE DATABASE db_name DEFAULT CHARACTER SET utf8 COLLATE utf8_turkish_ci; iii. When you are creating the table (or with alter statement after creation): CREATE TABLE tbl_name (column_list) DEFAULT CHARACTER SET utf8 COLLATE utf8_turkish_ci; iv. When you are describing the columns during table creation: CREATE TABLE tbl_name ( clm_name VARCHAR(5) CHARACTER SET utf8 COLLATE utf8_turkish_ci ); v. As described in section 5. My personal preference is to set the character set and the collation at table or column level. Depending on the structure and content of your application, you can establish these settings at the database or server level. For example, if your application supports many languages; you might want to create separate tables for certain entries in each language with different default character set and collation. 6.1. Using Collate in SQL Queries MySQL uses an expression called introducer in its SQL queries to tell to the parser what the character set is for the string, which follows it. Please note that introducer is not used for any conversion, rather for specifying the character set. Here is an example of its use: SELECT uid FROM users WHERE name = _utf8Ümit COLLATE utf8_turkish_ci; In this example, _utf8 that precedes ‘Ümit’ is the introducer which tells parser that string ‘Ümit’ is in character set utf8. Introducer is written as MySQL charset name preceded by an underscore. If the character set and collation is not specified by introducer and COLLATE clause in the query, then MySQL uses the ones set by character_set_connection and collation_connection system variables. COLLATE clause overwrites the default collation for the comparison and can be used in various parts of the SQL statements. Please see MySQL reference [8] (section 10.3.8 as of this article’s release date) for examples. 7. Conclusion This article describes how to develop and setup JSP applications and MySQL databases to support multiple languages and various character-sets, especially Turkish. Code examples given throughout the article are sufficient to initialize a web service going with different character encodings. Users who require more detailed information should refer to the resources given in corresponding sections. References: [1] GNU Emacs Manual, Free Software Foundation, 2004, Boston, MA, USA. ( ) [2] Designing Enterprise Applications with the J2EE Platform, Second Edition, Greg Murray, Sun Microsystems, Inc., 2002, Upper Saddle River, NJ, USA ( ) [3] Java TM 2 Platform, Standard Edition, v 1.4.2 API Specification, Sun Microsystems, Inc., 2003, Santa Clara, CA, USA ( ) [4] A Dictionary of HTML META Tags, 1996, Vancouver Webpages. 
( ) [5] Core Servlets and JavaServer Pages, First Edition, Marty Hall, Sun Microsystems Press/Prentice Hall PTR, May 2000 [6] HTML 4.01 Specification, W3C Recommendation, Copyright © 1994-2002, World Wide Web Consortium, ( ) [7] MySQL Connector/J Documentation, © 1995-2005 MySQL AB., ( ) [8] MySQL Reference Manual, © 1995-2005 MySQL
https://www.techylib.com/el/view/tieplantlimabeans/java_server_pages_jsp_and_mysql_internationalization_localization
CC-MAIN-2017-51
en
refinedweb
django-multilingual-news 2.5. From version 2.0 onwards it is tested and developed further on Django 1.6.2 and django-cms 3. This app is based on the great and re-used some of it’s snippets. Current features include - Entry authors based on a django-people Person - Entry attachments based on the django-document-library Document - Tagging via django-multilingual-tags with a tag based archive view - Entry categories - RSS Feeds for all news entries, just special authors or tag based. - SEO fields on the Entry for storing custom individual meta descriptions and titles.', 'people', 'hvad', 'multilingual_tags', 'document_library', ) Run the South migrations: ./manage.py migrate Twitter Bootstrap 3 List of Bootstrap compatible features: - A delete confirmation modal for deleting news entries. For support of the Twitter Bootstrap 3 functionality, you need to add the library to your template. <script type="text/javascript" src="{% static "multilingual_news/js/multilingual_news.bootstrap.js" %}"></script> Delete confirmation modal Add the following markup to your template. {% load static %} {# add this before multilingual_news.bootstrap.js #} <script type="text/javascript" src="{% static "django_libs/js/modals.js" %}"></script> <div id="ajax-modal" class="modal fade" tabindex="-1"> <div class="modal-content"> <div class="modal-header"> <button type="button" class="close" data-×</button> </div> <div class="modal-body"> </div> </div> </div> To trigger the modal, create a link that looks like this. <a href="{% url "news_delete" pk=news_entry.pk %}" data-Delete</a> Usage Using the apphook Simply create a django-cms page and select Multilingual News Apphook in the Application field of the Advanced Settings. To add a sitemap of your blog, add the following to your urlconf: from multilingual_news.sitemaps import NewsSitemap urlpatterns += patterns( '', url(r'^sitemap.xml$', 'django.contrib.sitemaps.views.sitemap', { 'sitemaps': { 'blogentries': NewsSitemap, }, }), ) RSS Feeds The app provides three different types of feeds, you can link to. - All news {% url "news_rss" %} - News from a specific author {% url "news_rss_author" author=author.pk %}, where author is an instance of a people.Person - All news {% url "news_rss_tagged" tag=tag.slug %}, where Tag is an instance of a multilingual_tags.Tag. Tagging You can simply add tags for a news entry from the NewsEntry admin page, which renders an inline form at the bottom. Settings NEWS_PAGINATION_AMOUNT Default: 10 Amount of news entries to display in the list view. Contribute If you want to contribute to this project, please perform the following steps: # Fork this repository # Clone your fork $ mkvirtualenv -p python2.7 django-multilingual-news $ - 346 downloads in the last week - 1915 downloads in the last month - Author: Tobias Lorenz - Keywords: django,news,blog,multilingual,cms - License: The MIT License - Platform: OS Independent - Package Index Owner: dkaufhold, mbrochh, Tobias.Lorenz - Package Index Maintainer: dkaufhold, mbrochh - DOAP record: django-multilingual-news-2.5.1.xml
https://pypi.python.org/pypi/django-multilingual-news/2.5.1
CC-MAIN-2015-35
en
refinedweb
Empty type Although we do have constructors here, they do nothing (in fact, Void is essentially id) and the only value here is bottom, which is impossible to This is theoretically equivalent to the previous type, but saves you keyboard wear and namespace clutter.
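For context, the definitions this fragment is contrasting are usually written as follows in Haskell (reconstructed here, since the excerpt above is truncated): a recursive newtype whose only constructor is essentially the identity, and an empty data declaration with no constructors at all.

{-# LANGUAGE EmptyDataDecls #-}

-- A recursive newtype: the constructor just wraps the type itself,
-- so the only inhabitant is bottom.
newtype Void = Void Void

-- The shorter spelling: no constructors and no extra names at all.
data Void'

-- Either way, the closest we can get to a value is bottom:
absurdity :: Void
absurdity = undefined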
https://wiki.haskell.org/index.php?title=Empty_type&diff=44074&oldid=41380
CC-MAIN-2015-35
en
refinedweb
when I try to compile my code... I get an error concerning the variable... I was playing around with some of the code in lessons 1-3 on the beginners guide... and wondered if you could input x yourself instead of the program declaring the variable itself. the error is on the line

Code:
cin>> X;

should be a simple fix, but I can't figure it out. the line I replaced was

Code:
x = 0;

hope my tags work... this was my 1st post...

Code:
#include <iostream>

using namespace std;

int main()
{
    int x;

    cin>> X;
    cin.ignore();

    do {
        // "Hello, world!" is printed at least one time
        // even though the condition is false
        cout<<"Hello, world!\n";
    } while ( x != 0 );

    cin.get();
}
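For anyone landing here with the same error: C++ identifiers are case-sensitive, so the variable declared as x cannot be read with cin >> X. A minimal corrected sketch (with one extra caveat noted in the comments):

#include <iostream>
using namespace std;

int main()
{
    int x;

    cin >> x;        // lower-case x, matching the declaration above
    cin.ignore();

    do {
        cout << "Hello, world!\n";
    } while ( x != 0 );   // careful: x never changes inside the loop, so any
                          // non-zero input loops forever; read x inside the
                          // loop if you want to keep asking.

    cin.get();
}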
http://cboard.cprogramming.com/cplusplus-programming/72691-very-basic-beginner-problem-printable-thread.html
CC-MAIN-2015-35
en
refinedweb
Originally posted by Jmannu gundawar: In the MAX's DVD project, the code listing is as follows: public interface DBClient { (methods decleared here do not throw RemoteExceptions) } Then there is another interface created for the RMI (remote) package: public interface DVDDatabaseRemote extends Remote, DBClient {} The class DVDDatabaseImpl in the remote package is decleared as: public class DVDDatabaseImpl extends UnicastRemoteObject implements DVDDatabaseRemote { Now the DBClient methods implemented in this class throws the RemoteException, where as the methods decleared in the interface dont. How is this possible? Why does it not give a compile time error,as the overridden methods (methods decleared in the interface) do not throw this exception?? If I try to implement interface method in the class, I get compile time error if class's method's throw the RemoteException and Interface method is not decleared to throw that exception. Please help. Thanks, Manoj [ October 20, 2003: Message edited by: Jmannu gundawar ] The Foo and RemoteFoo interfaces are certainly similar, but a RemoteFoo is not a Foo, and can't be, because RMI will end up requiring the RemoteFoo to be able to throw exceptions which a Foo may not throw. So, we can use a RemoteFooAdapter to convert a RemoteFoo to act like a Foo. This means you have to catch any RemoteExceptions and handle them somehow. The simplest thing I came up with is to throw a RuntimeException, though there are other strategies possible. Originally posted by Jmannu gundawar: And now I also know what is the bottom line(ha ha) It says: "I have a catapult. Give me all the money, or I will fling an enormous rock at your head." As I understand it we are free to generate our RecordNotFoundException, which we could make extend IOException - after all, it sounds like a type of IOException. RemoteException also extends IOException. Therefore we can write one adapter, DBAdapter, which throws IOException where appropriate instead of either RNFE or RemoteException. And hence both the RemoteClient and LocalClient could implement the same interface removing the need for an adapter for the remote and a separate adapter for the local (and making a nice little factory class to return a connection of the correct type). My project doesn't declare a DuplicateKeyException. Could you really argue that DuplicateKey is an IOException. Maybe you could I like consistency over everything else regarding exceptions. So as I liked your reasoning about RNFE, if seemed natural to me to do the same with DuplicateKeyException. If you think that Data belongs to some broad "IO domain", it makes sense IMO that all checked exceptions thrown by Data methods would extend IOException. findByCriteria() doesn't throw any exception, so it doesn't provide any IOException possible wrapper If you may defend that DuplicateKeyException would be an IOException, the contrary is hard to defend : in createRecord(), throwing DuplicateKeyException because of an IOException caused by some disk failure would be misleading IMO. Same reasoning may be applied to RNFE BTW.
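Sketched out, the adapter described above looks roughly like this (Foo and RemoteFoo are the placeholder names from the post; readRecord is an invented method just to make it concrete, and wrapping the checked RemoteException in an unchecked exception is only one of the possible strategies):

import java.rmi.Remote;
import java.rmi.RemoteException;

interface Foo {
    String readRecord(int recNo);                        // no RemoteException here
}

interface RemoteFoo extends Remote {
    String readRecord(int recNo) throws RemoteException; // RMI requires this
}

// Makes a RemoteFoo usable wherever a plain Foo is expected.
class RemoteFooAdapter implements Foo {
    private final RemoteFoo remote;

    RemoteFooAdapter(RemoteFoo remote) {
        this.remote = remote;
    }

    public String readRecord(int recNo) {
        try {
            return remote.readRecord(recNo);
        } catch (RemoteException e) {
            // simplest strategy: rethrow the checked exception as an unchecked one
            throw new RuntimeException("Lost connection to server", e);
        }
    }
}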
http://www.coderanch.com/t/184370/java-developer-SCJD/certification/NX-Max-DVD-project
CC-MAIN-2015-35
en
refinedweb
Ai Ayumi wrote:public class Stubborn implements Runnable { static Thread t1; static int x = 5; public void run() { if(Thread.currentThread().getId() == t1.getId()) shove(); else push(); } static synchronized void push() { shove(); } static void shove() { synchronized(Stubborn.class) { System.out.print(x-- + " "); try { Thread.sleep(2000); } catch (Exception e) { ; } if(x > 0) push(); } } public static void main(String[] args) { t1 = new Thread(new Stubborn()); t1.start(); new Thread(new Stubborn()).start(); } } From K&B Practice Exam 2, q16: Answer: there is only one lock, so no deadlock can occur. Ai Ayumi wrote: Aren't synchronize(Stubborn.class) and static synchronized void push() locked seperately? Ai Ayumi wrote: If only one lock, Does that mean if t1 is using synchronize(Stubborn.class), t2 won't be able to access static synchronized void push()? Ai Ayumi wrote: Also, the output prints 5 4 3 2 1 (by t1), then 0 (by t2). Why is it not alternatingly by t1, t2? Henry Wong wrote: Ai Ayumi wrote: Aren't synchronize(Stubborn.class) and static synchronized void push() locked seperately? Synchronized static methods use the instance of Class class (that represents the class which the method is declared) as its lock. So, a thread that calls a synchronized static method of the Stubborn class, and a thread that locks on the Stubborn.class instance, are locking on the same object. Henry
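Henry's answer boils down to the rule that a static synchronized method locks the Class object of its declaring class, so the two forms below acquire exactly the same monitor (a sketch reusing the class name from the question):

public class Stubborn implements Runnable {

    // This...
    static synchronized void push() {
        shove();
    }

    // ...acquires exactly the same monitor as this:
    static void pushExplicit() {
        synchronized (Stubborn.class) {
            shove();
        }
    }

    static void shove() { /* ... */ }

    public void run() { push(); }
}

That also explains why the output is 5 4 3 2 1 and then 0 rather than alternating: the lock is reentrant, so once t1 is inside shove() it keeps re-entering push()/shove() recursively without ever releasing the monitor (Thread.sleep() does not release locks). t2 only gets its turn — printing the final 0 — after t1's whole call chain unwinds.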
http://www.coderanch.com/t/587034/java-programmer-SCJP/certification/won-Deadlock-Exam
CC-MAIN-2015-35
en
refinedweb
{{{ #!html Form submission }}} '''Inner links''' {{{ #!html Part 1 Ajax basic form submission, Django server answers Ajax call. Part 2 Handling the form when JavaScript is deactivated. Part 3 Fixing the frozen fading when user resend the form without waiting for the first fading to end. }}} {{{ #!html Ajax basic form submission, Django server answers Ajax call. }}} '''Ajax form submission using Dojo for the client side javaScript toolkit, and simpleJson for the client/server communication.''' This will take you through all the steps required to get started and to develop a handy form submission without reload. == What do you need == - Django - Dojo (v0.3) [] an open source javascript toolkit. - SimpleJson (v1.3) [] used for javascript <-> python communication. == What do we want to achieve == I will use the model of my current website, a recipe one. When a registered user sees the details of a recipe, the user can rate (mark) it or update their rating (mark) by selecting the mark with a small drop down list. But when the user clicks ok the whole page reloads. We will use some Ajax to make it transparent and fancy effect to show the change to the user. we can add a fading status message like "Your rating has been updated". The select box proposes a rating from 1 to 5, it is actually a form which is sent to the server via POST, which in django links to a method in the view.py : ''def details(request)'' which does the innerwork for the details page generation. == Installing Json == get SimpleJson from svn: {{{ svn co json cd json sudo python setup.py install }}} or get the latest released version from the [ CheeseShop]: {{{ sudo python easy_install simplejson }}} update: changeset [ 3232] now offers v1.3 of simplejson in django's utils, so you do not need to download and install simplejson -- just use django.utils.simplejson. {{{ from django.utils import simplejson #in replace of import simple_json }}} == Django part == '''view.py''' {{{ #!python def details(request): [more stuff] if request.POST: # Get all the mark for this recipe list_mark = Mark.objects.values('mark').filter(recipe__pk=r.id) # loop to get the total total = 0 for element in list_mark: total+= element['mark'] # round it total = round((float(total) / len(list_mark)),1) # update the total r.total_mark= total # save the user mark r.save() # Now the intersting part for this tut import simple_json # it was a french string, if we dont't use unicode # the result at the output of json in Dojo is wrong. message = unicode( message, "utf-8" ) # jsonList = simple_json.dumps((my_mark, total, form_message ,message)) return HttpResponse(jsonList) [more stuff, if not POST return render_to_response('recettes/details.html' ...] }}} '''url.py''' Just normal url.py, remember the path which will point to the wanted method. {{{ #!python from django.conf.urls.defaults import * urlpatterns = patterns('', [...more...] (r'^recettes/(?P[-\w]+)/(?P[-\w]+)/$', 'cefinban.recettes.views.details'), [...more...] }}} == Html template with Dojo javascript == '''Dojo use''' {{{ {% load i18n %} {% extends "base.html" %} {% block script %} Score: {{ r.total_mark }}/5{% endif %} [....] {% if not user.is_anonymous %} {% ifnotequal user.id r.owner_id %} {{mark_message}} 1 2 3 4 5 Notez {{ mark_status }} {% endifnotequal %} {% endif %} }}} And, voila. To have a demo use guest as login, guest as password here [] or if not working here []. Go to index and pick a recipe, update the rating. 
You can also have a look at the screenshot here : [] == Dreamhost and Simplejson == If you are using dreamhost for hosting please be aware that simplejson is not installed. Instead you will have to install the source of simplejson in a folder in your home directory eg /proz/json/simple_json The simple_json directory contains the required __init__.py for it to be loaded as a python module. Then in your ~/.bash_profile add the directory to your python path like below. {{{ export PYTHONPATH=$PYTHONPATH:$HOME/django/django_src:$HOME/django/django_projects:$HOME/progz/json }}} That will allow yout to use simpl_json in python shell. But '''dont forget''' to change django.fcgi ! Add {{{ sys.path +=['/home/coulix/progz/json'] }}} log out/in and try import simple_json (or simplejson depends on what source you picked) {{{ #!html Handling the form when JavaScript is deactivated. }}} If a user has deactivated his browser's javascript support, or is using a text mode browser, we need a way of making the previous rating button submit the rating to the server which should this time return an html template instead of data to the Ajax call. == Updating the form HTML (details.html template) == This time we put a submit type input inside the form instead of the button type in part 1. type="submit" as indicates its name, submits the form to the server, we will need a way of stopping this behavior using javaScript. {{{ {{mark_message}} [...] }}} Now, how can we tell our details method in view.py to know if it comes from a normal submit request or an Ajax request ? Two solutions, The '''first''' uses a hidden variable in form.html and an added content element in the JS part. {{{ function sendForm() { dojo.byId("mark_status").innerHTML = "Loading ..."; dojo.io.bind({ url: '.', handler: sendFormCallback, content: {"js", "true"}, formNode: dojo.byId('myForm') }); } [...] [...] }}} With this, in our django method in view.py we can test for request["js"]=="true" it would means that Js is activatd and we return the appropriate answer to the Ajax request. The '''second''' uses the url to pass a new variable ajax_or_not to the detail method. {{{ #!python def details(request, r_slug, r_cat, ajax_or_not=None): [...] }}} We modify the url.py to accept this new parameter. {{{ #!python (r'^recettes/(?P[-\w]+)/(?P[-\w]+)/(?P.*)$', 'cefinban.recettes.views.details'), }}} The dojo binding needs to append a variable to the original document url, to make ajax_or_not not None. {{{ function sendForm() { dojo.byId("mark_status").innerHTML = "Loading ..."; dojo.io.bind({ url: './ajax/', handler: sendFormCallback, formNode: dojo.byId('myForm') }); } }}} == New details method in view.py == We just need to test for the existence of ajax_or_not {{{ #!python def details(request, r_slug, r_cat, ajax_or_not=None): [...] if request.POST: [...] same as part 1 # except here we check ajax_or_not if ajax_or_not: # use json for python js exchange # it was a french string, if we dont't use unicode # the result at the output of json in Dojo is wrong. message = unicode( message, "utf-8" ) jsonList = simple_json.dumps((my_mark, total, form_message ,message)) return HttpResponse(jsonList) return render_to_response('recettes/details.html', {'r': r, 'section':'index', 'mark_status':message , 'mark_message':form_message, 'my_mark': my_mark}, context_instance=RequestContext(request),) }}} {{{ #!html Fixing the frozen fading when user resend the form without waiting for the first fading to end. 
}}} If you haven't realised yet, if two or more calls are sent to the javascript function sendForm in a short time, the fading effect of the current sendForm Callback method might get stuck / froze / bugged. We need a way of avoiding this by desactivating the connection between the submit button and the sendForm method while the fading '''animation''' is active. Thanks Dojo there is such things ! in two lines of code its done. {{{ function sendFormCallback(type, data, evt) { [...as before ...] // and the fancy fading effect // first disconnect the listener ! dojo.event.disconnect(sendFormButton, 'onclick', 'sendForm'); // assign our fading effect to an anim variable. var anim = dojo.lfx.html.highlight("mark_status", [255, 151, 58], 700).play(300); // When this anim is finish, reconnect dojo.event.connect(anim, "onEnd", function() { dojo.event.connect(sendFormButton, 'onclick', 'sendForm'); }); } }}} how nice is this ! Careful, while talking about how to fix the problem using onEnd in Dojo IRC chanel, they realised play() method didnt behave properly and updated it to react to onEnd and such. su you need at least revision '''4286'''. Update your dojo source {{{ svn co dojo }}} '''Note''' It might be completely wrong. More questions / complaints: [email protected]
https://code.djangoproject.com/wiki/AjaxDojoFormSub?format=txt
CC-MAIN-2015-35
en
refinedweb
.mediarouter.player; 18 19 import android.content.BroadcastReceiver; 20 import android.content.Context; 21 import android.content.Intent; 22 import android.view.KeyEvent; 23 24 /** 25 * Broadcast receiver for handling ACTION_MEDIA_BUTTON. 26 * 27 * This is needed to create the RemoteControlClient for controlling 28 * remote route volume in lock screen. It routes media key events back 29 * to main app activity MainActivity. 30 */ 31 public class SampleMediaButtonReceiver extends BroadcastReceiver { 32 private static final String TAG = "SampleMediaButtonReceiver"; 33 private static MainActivity mActivity; 34 35 public static void setActivity(MainActivity activity) { 36 mActivity = activity; 37 } 38 39 @Override 40 public void onReceive(Context context, Intent intent) { 41 if (mActivity != null && Intent.ACTION_MEDIA_BUTTON.equals(intent.getAction())) { 42 mActivity.handleMediaKey( 43 (KeyEvent)intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT)); 44 } 45 } 46 } Except as noted, this content is licensed under Creative Commons Attribution 2.5. For details and restrictions, see the Content License. About Android Auto TV Wear Legal * Required Fields You have successfully signed up for the latest Android developer news and tips.
http://developer.android.com/samples/MediaRouter/src/com.example.android.mediarouter/player/SampleMediaButtonReceiver.html
CC-MAIN-2015-35
en
refinedweb
NetSurf is a CSS capable web browser for AmigaOS 4 and other platforms.The NetSurf developers are happy to announce the immediate availability of NetSurf 2.7. This release contains many bug fixes and improvements.It is available to download from is a change log detailing the important changes in this release: Core / All ---------- * Added WebP image support as build-time option. * Made logging include timing information. * Added treeview support. * Added global history manager. * Added hotlist manager. * Added cookie manager. * Added SSL certificate chain inspection display. * Improved stability. * Optimised plain text handling. * Cleaned up build infrastructure. * Fixed HTTP authentication issues. * Improved cache cleanup. * Improved detection of IP addresses in URLs. * Fixed handling of IPv6 addresses. * Updated rendering of local history. * Made the cache more robust. * Fixed building on OpenBSD. * Optimised count of current fetches for given host. * Added options for treeview rendering colours. * Added partial support for CSS :after pseudo element. * Fixed 'auto' top/bottom margins for tables. * Improved font API documentation. * Fixed float clearing bug. * Fixed browser_window destruction issue. * Added support for CSS system colours. * Fixed colour treatment in rsvg binding. * Improved portability. * Fixed copying from plain text to clipboard. * Improved core/front end interface for rendering into browser windows. * Improved core/front end interface for rendering thumbnails. * Optimised thumbnail rendering. * Made rendering calls pass clipping rectangle around as pointer. * Reduced floating point maths in the layout engine. * Added support for about: URL scheme. * Made cache more robust to strange server responses. * Added about:config and about:Choices displays. * Added about:licence and about:credits pages. * Made knockout rendering optimisation independent of content types. * Fixed clipping issue for HTML contents. * Fixed overflow:auto and overflow:scroll behaviour. * Set download filename according to Content-Disposition header. * Added resource: URL scheme. * Fixed poll loops for file: and data: URL scheme fetchers. * Fixed cache control invalidation. * Fixed text-indent layout issue. * Fixed layout issue where clear wrongly interacted with margins. * Improved cache performance. * Fixed handling of objects which fail to load. * Fixed various form submission issues. * Parallelised fetch and conversion of imported stylesheets. * Made content states more robust. * Optimised layout code to reduce calls to measure strings. * Improved layout code not to duplicate strings for text wrapping. * Improved box structure for HTML contents. * Optimised content message redraw requests. * Made various cache enhancements. * Text plot scaling handled in core. * Handle API diversity of iconv() implementations. * Optimise handling of child objects of an HTML content. * Avoided stalling during early stages of fetch caused by cURL. * Improved example of build configuration. * Added generation of build testament for about:testament. * Sanitised task scheduling. * Improved debugging infrastructure. * Fixed text/plain renderer to cope with scroll offsets. * Added generated list of about: content at about:about. * Allowed config. options to be set from the command line. * Hubbub library (HTML parser): + Added scoping for use from C++ programs. + Fixed example program. + Removed need for library initialisation and finalisation. + Generate entities tree at build time, rather than run time. 
+ Added clang build support. * LibCSS library (CSS parser and selection engine): + Fixed destruction of bytecode for clip property. + Added scoping for use from C++ programs. + Removed need for library initialisation and finalisation. + Added support for CSS2 system colours. + Added support for CSS2 system fonts. + Altered external representation of colours to aarrggbb. + Added support for CSS3 rgba() colour specifier. + Added support for CSS3 'transparent' colour keyword. + Added support for CSS3 hsl() and hsla() colour specifiers. + Added support for CSS3 'currentColor' colour keyword. + Added support for CSS3 'opacity' property. + Added support for CSS3 selectors. + Added support for CSS3 namespaces. + Enabled clients to fetch imported stylesheets in parallel. + Made internal bytecode 64-bit safe. + Fixed leaking of strings. + Rewritten property parsers. + Certain property parsers auto-generated at build time. + Added clang build support. + Various portability enhancements. + Fixed selection for pseudo elements. + Added simultaneous selection for base and pseudo elements. + Namespaced all global symbols. + Updated test suite. + Future-proofed ABI. + Ensured fixed point maths saturates instead of overflowing. + Fixed clip property handling. + Fixed selection and cascade of "uncommon" CSS properties. + Added structure versioning for client input. * LibNSBMP library (NetSurf BMP decoder): + Added missing include. + Made more robust when handling broken ICO files. + Added clang build support. * LibNSGIF library (NetSurf GIF decoder): + Added missing include. + Added clang build support. * LibParserUtils library (parser building utility functions): + Fixed input stream encoding issue. + Added scoping for use from C++ programs. + Removed need for library initialisation and finalisation. + Removed need for run time provision of external Aliases file. + Added clang build support. + Namespaced all global symbols. + Handle API diversity of iconv() implementations. * LibROSprite library (RISC OS Sprite support for non-RO platforms): + C89 compatibility. * LibSVGTiny library (SVG support): + Improved parsing of stroke-width. + Added clang build support. + Various portability enhancements. * LibWapcaplet library (String internment): + Added scoping for use from C++ programs. + Removed need for library initialisation and finalisation. + Added clang build support. RISC OS-specific ---------------- * Replaced hotlist with core hotlist. * Replaced global history with core global history. * Replaced cookie manager with core cookie manager. * Replaced SSL cert. inspection with core SSL cert. inspection. * Apply weighted averaging to download rate display. * Examine extension when fetching local file of type 'Data'. * Iconv module version 0.11 required. * Rewritten toolbar code. * Created simplified, self-contained gui widgets. * Obtain download filename from the core. * Set CSS system colours from desktop palette. * Added menu entries to load about:licence and about:credits pages. GTK-specific ------------ * Replaced global history with core global history. * Added bookmarks support, using core hotlist. * Added cookie manager. * Added SSL certificate inspection window. * Support GTK >= 2.21.6. * Improved full save implementation. * Made drags less jerky. * Made new tabs open to show homepage. * Improved text wrap handling. * Improved menu bar. * Improved context sensitive popup menu. * Made various thumbnailing fixes. * Obtain download filename from the core. 
* Updated to use resource: scheme for resources. * Fixed makefie's installation target. * Enabled tabbing between form inputs. * Updated About NetSurf dialogue. * Reduced overhead due to Pango when measuring text. AmigaOS-specific ---------------- * Improved bitmap caching. * Fixed menus in kiosk mode. * Improved filetype handling. * Fixed menu shortcuts. * Replaced hotlist with core hotlist. * Replaced global history with core global history. * Replaced cookie manager with core cookie manager. * Replaced SSL cert. inspection with core SSL cert. inspection. * Improved Cairo and non-Cairo plotters. * Added auto-scroll on selection drags beyond window boundaries. * Improved clipboard handling. * Improved icon usage. * Improved stability. * Some incomplete work towards AmigaOS 3 support. * Disabled iframes by default. * Set CSS system colours from the pens in the screen DrawInfo table. * Fixed kiosk mode to always fill screen. * Improved scheduler. * Made new tabs open to show homepage. * Obtain download filename from the core. * Added history content menus to back and forward buttons. * Bitmap rendering optimisations. * Improved download handling. * Runtime selection of graphics plot implementations. * Updated About requester. * Enabled fast scrolling for all content types. Mac OS X-specific ----------------- * New front end. BeOS/Haiku-specific ------------------- * Fixed Replicant instantiation. * Set CSS system colours according to current desktop settings. Windows-specific ---------------- * Improved sub-window creation. * Fixed redraw bugs. * Fixed bitmap plotting. * Fixed thumbnailing. * Fixed local history. * Fixed URL bar. * Cleaned up toolbar creation. * Improved native build. * Fixed CPU thrashing when idle. * Use NetSurf icon on window decoration. * Improved options dialogue. * Made various 'look and feel' enhancements. Atari-specific -------------- * New front end. Framebuffer-specific -------------------- * Improved toolbar. * Improved font selection. * Added glyph cache size configuration option. * Made click action happen on mouse button release. * Give browser widget input focus on startup. * Fixed cursor leaving root widget. * Dynamic detection of surface libraries. * Updated to use resource: scheme for resources. * Improved DPI handling. * Fixed font size in text widgets. * Added support for scaled rendering. Also included are many smaller bug fixes, improvements and documentation enhancements. Seems to work best with 32 bit colour. Scrolling was a bit slower with 16 bit colour.
http://www.amigans.net/modules/news/article.php?storyid=1374
CC-MAIN-2015-35
en
refinedweb
Python decorators are great, if not always straightforward to define. Flask is a great Python library for building REST APIs; it makes use of decorators heavily for things like this:

@route('/')
def index():
    return "index page! Woo!"

But when building a REST API around protected resources, you often need to require an API key for certain routes (like, for example, PUT to "/users/"). Now, you could go and write code in each view to validate an API key. You might even write a function that does that and just call the function within your views. Personally, I prefer to use decorators. It's just cleaner and simpler. Now, to do so, you have to define a decorator, and that is a bit ugly:

from functools import wraps
from flask import request, abort

# The actual decorator function
def require_appkey(view_function):
    @wraps(view_function)
    # the new, post-decoration function. Note *args and **kwargs here.
    def decorated_function(*args, **kwargs):
        if request.args.get('key') and request.args.get('key') == APPKEY_HERE:
            return view_function(*args, **kwargs)
        else:
            abort(401)
    return decorated_function

Now you can do things like this:

@route('/users/', methods=['PUT'])
@require_appkey
def put_user():
    ...

As is always the case with decorators in Flask, the route decorator must be the outermost decorator for this to work properly.

Well, this is odd. Looks like Coderwall munged the "at" in the decorators into links. weird.

Yeah code snippets got broken by coderwall... submit a bug to them :) Anyway the decorator way is really a clean and nice solution in situations like this.

Nice tip.
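A quick way to sanity-check the behaviour (a sketch, assuming a regular Flask app object, that the require_appkey decorator above is in scope, and an illustrative APPKEY_HERE value; the original snippet uses a bare @route, so adjust to your app or blueprint):

from flask import Flask

app = Flask(__name__)
APPKEY_HERE = 'secret123'   # illustrative value only

@app.route('/users/', methods=['PUT'])
@require_appkey
def put_user():
    return 'updated', 200

client = app.test_client()
print(client.put('/users/').status_code)                 # 401, no key supplied
print(client.put('/users/?key=secret123').status_code)   # 200, key accepted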
https://coderwall.com/p/4qickw/require-an-api-key-for-a-route-in-flask-using-only-a-decorator
CC-MAIN-2015-35
en
refinedweb
NAME libinn - InterNetNews library routines SYNOPSIS #include "inn/libinn.h" char * GenerateMessageID(domain) char *domain; ``CA'' stands for Client Active. CAopen opens the active ``list'' command to make a local temporary copy. The CAlistopen sends a ``list'' `. file. Server contains the name of the host; ``innconf->server''. HashMessageID returns hashed message-id using MD5. EXAMPLES char *p; char *Article; char buff[256], errbuff[256]; FILE *F; FILE *ToServer; FILE *FromServer; int port = 119;)
http://manpages.ubuntu.com/manpages/precise/man3/libinn.3.html
CC-MAIN-2015-35
en
refinedweb
bang() always returns "doesn't understand bang()" I've been trying to use Jitter ob3d objects in java with automatic set to zero. When I try to bang them using the .bang() method, which the API says they have, I get doesn't understand "bang" in the Max console. Does anyone else have this problem? Here is an example

import com.cycling74.max.*;
import com.cycling74.jitter.*;

public class textureTest extends MaxObject {

    JitterObject texture;
    JitterObject gridshape;

    private static final String[] INLET_ASSIST = new String[]{ "inlet 1 help" };
    private static final String[] OUTLET_ASSIST = new String[]{ "outlet 1 help" };

    public textureTest(Atom[] args) {
        declareInlets(new int[]{DataTypes.ALL});
        declareOutlets(new int[]{DataTypes.ALL});
        setInletAssist(INLET_ASSIST);
        setOutletAssist(OUTLET_ASSIST);

        texture = new JitterObject("jit.gl.texture");
        texture.setAttr("drawto", "test");
        texture.setAttr("name", "tex2");

        gridshape = new JitterObject("jit.gl.gridshape");
        gridshape.setAttr("drawto", "test");
        gridshape.setAttr("shape", "plane");
        gridshape.setAttr("texture", "tex2");
        gridshape.setAttr("automatic", 1);
    }

    public void bang() {
        //gridshape.bang(); //< -- this line causes trouble !!
    }

    public void read(String s) {
        texture.send("read", new Atom[] { Atom.newAtom(s) });
    }

    public void notifyDeleted() {
        texture.freePeer();
        gridshape.freePeer();
    }
}

Could someone try compiling this and explain why it raises does not understand bang?

import com.cycling74.max.*;
import com.cycling74.jitter.*;

public class test extends MaxObject {

    JitterObject sketch;

    private static final String[] INLET_ASSIST = new String[]{ "inlet 1 help" };
    private static final String[] OUTLET_ASSIST = new String[]{ "outlet 1 help" };

    public test(Atom[] args) {
        sketch = new JitterObject("jit.gl.sketch");
        declareInlets(new int[]{DataTypes.ALL});
        declareOutlets(new int[]{DataTypes.ALL});
        setInletAssist(INLET_ASSIST);
        setOutletAssist(OUTLET_ASSIST);
    }

    public void bang() {
        //just pass it on
        sketch.bang();
        //try again
        sketch.send("bang");
    }
}
Can you put up an example patch where jit.gl.gridshape supports bang? Btw, I shouldn’t use the term C object here. They are native objects, though surely written in C. This is an example ----------begin_max5_patcher---------- 579.3ocyVF0aaBCDG+YpT+NXwyYQXCDf8T22g81TUkC3QbEXSAiBaU669L1w LRKDnIgloHkSb9r4+8y2g8q2emk8VdCoxF7UvO.VVuJ8Xo705wx3vxNG2Dmg qTAZGyyyILg8pCCJHMB0.eutjA3LPJVP.BNnjvRHk.wNBHsjlTsCWP5lUFkQ h40L0TQFur5bJKiHTuInw6O4LACmSTukuURwY18l.uVXlgiwMMQEKe6yeA5a 2ecpn+VsNPz5tnKvh3cTV5SkjXgFFnv.43.HLRY78ZMtx+AOplzet+tVqzrZ 1fSvSSyH1mNWONgfCkPvtkPGo3WEDspsos77wSjWvPcBAcTITf4+KHsXj8RY 89xAUYfCvY3DF8g2bGhEQiihdb3Crua3yFEeza6HmEgOBvV4ORItZpZhYiH2 APT33HZKlkZu5eVsVNKtgT7Bh7aMaBtBX6J0sDbAMK5jxy2jMKVovK4DQIG3 5tj8J9STHbA66gP01t+Bhnmoh0oYqOblBoAmWjc05aPCfKuo6aNOj4hTXxKr 2ALKHz1SYI78eFDycoHl+F0wTa1nIFb4Ky5tuhgafGzOVjgYxGv0BdNVPiG6 3sqCQQiSToPeRpfRZy4yUji9C0tZhFMd+qd4UWZ6s2VTstsC7FbWwqKiMp8v Y0xqCzoyDRkfxjLjy5ED53f1QSRHritZWNMofK+z8AcbpZfYKM4Yj.zTRy6l IMm+ekFbJoEcbPeRRq8NxSisnaB17mizBuIRKXNRy+hkVqCo4ufwVCT2 -----------end_max5_patcher----------- Believe me now? I got an email back from technical support and apparently bang calls the objects draw method so I’m going to try calling that. Success, calling .call("draw") whenever I want it to draw does the trick. import com.cycling74.max.*; import com.cycling74.jitter.*; public class bangTest extends MaxObject { JitterObject gridshape; private static final String[] INLET_ASSIST = new String[]{ "inlet 1 help" }; private static final String[] OUTLET_ASSIST = new String[]{ "outlet 1 help" }; public bangTest(Atom[] args) { gridshape = new JitterObject("jit.gl.gridshape"); gridshape.setAttr("drawto", "example"); gridshape.setAttr("shape", "plane"); gridshape.setAttr("automatic", 0); declareInlets(new int[]{DataTypes.ALL}); declareOutlets(new int[]{DataTypes.ALL}); setInletAssist(INLET_ASSIST); setOutletAssist(OUTLET_ASSIST); } public void bang() { gridshape.send("draw"); } } ----------begin_max5_patcher---------- 690.3ocyWt0aaBCEG+YpT+NXwyYQXtF1SaeG5aSQUNvYDGA1LvrvZ09tOeIj kzlaM.MQQJVbrM7mebt4We7AK6E7Vn1F8UzOPVVuJsXosorX0YvxtfzljSp0 Kzt.pqIYf8jMSJfVgdhzJx5sVYMETVNHz6wsy5O4LAiT.50+8JJIe2MvaDc6 .2YllpWKewpu35scwlUJ9SIXztsMZ9tOiZ5K5YvtSc5rWRDIKorrmqfDgYat wgx4Q3nX0fWj9hYScPy064uO9fZTNLo+z4ER4nBG7fCm.GEOb80LxycZv3Am eSg007lpDXLYDd1PyHOOrgQ9pgH7.3.wf0Rs9dDUztBsfvxdBpEGFR3OLjbO DjBOAjlbsfBG6pAUTfZXlgZN8BTI7hBfIdOodpohg3LTFQ.HAGUArTnBIVBn rJZZ8RR4+cyxoLHg2vD6widxUmCw0.6qIHLzjgBqiEwAZ+LO+dANAOKKGNiO zEDMchLNTEOmeJ+gYw67dYx7p9eDBbztANHmwLyR7PmXoiOgZ9X9r2y3kiwG AZg7GTQpggJuh2APzIx8pRqoRszMZzx0UzxDm3pSyDFM.XafhVh5Qvh4kxOn 6sYzbE9UAHp3xpZiYrRvYbD5w2cSckvfQDQqnhoY4S2TSAZIEk4vXVO1+7wM WYuKlJx9yLIhiGYnslxR4q+LHl2XQr.Smvckjwiua119U53F5alKKyIL4EjF AufHnIGq71vPT2iSToPeVpfJZ60yU2nMnzTlK93wulautos2dpU88UMwav8l iUXTa2wHk8CrUnoxFpoLID4rcW09KZIMMEX60aWAMsjKycuQHmxI3h0l5DJm WapVzuAhK9NVapi+d2JN0GU2yoM+alzbtekF9bRKd+E8YEldQdaw2DrEbIRa 1MQZQWhzB5szTFjC+CPjSdZl -----------end_max5_patcher----------- This works Nice, strange that this method is not documented in the reference though. Is ‘draw’ a generic OpenGL function? It is documented here Messages and attributes common to the GL group Bang is supposed to call it, maybe this is handled by something that is not exposed to the jitter java api hence having to call it by name? If you try sending "draw" from a message box it has the same effect as bang. I suppose it’s left in to differentiate it from drawraw. Aha, I didn’t notice that page. You’re probably right about the method not being exposed, that is a bug in JitterObject then. 
Maybe it would be nice to create java wrappers for the jit.gl objects to call them more java-style? Methods could be ported like this, for instance: public void dim(int x, int y) { peer.send("dim", new Atom[]{ Atom.newAtom(x), Atom.newAtom(y) }); } Takes a lot of work though, maybe not worth it. A rough sketch of what such a wrapper could look like is below.
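Following up on that idea, here is a rough sketch of what a thin wrapper around jit.gl.gridshape could look like. It only uses JitterObject calls that already appear in this thread (setAttr, send, freePeer); the wrapper class itself and its method names are made up for illustration.

import com.cycling74.max.Atom;
import com.cycling74.jitter.JitterObject;

// Hypothetical convenience wrapper around a jit.gl.gridshape peer.
public class GridShapeWrapper {

    private final JitterObject peer;

    public GridShapeWrapper(String drawContext) {
        peer = new JitterObject("jit.gl.gridshape");
        peer.setAttr("drawto", drawContext);
        peer.setAttr("automatic", 0); // we drive drawing ourselves
    }

    public void setShape(String shape) {
        peer.setAttr("shape", shape);
    }

    public void setTexture(String textureName) {
        peer.setAttr("texture", textureName);
    }

    // As discussed above, "bang" on the native object is really a draw.
    public void draw() {
        peer.send("draw");
    }

    public void dim(int x, int y) {
        peer.send("dim", new Atom[]{ Atom.newAtom(x), Atom.newAtom(y) });
    }

    public void free() {
        peer.freePeer();
    }
}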
https://cycling74.com/forums/topic/bang-always-returns-doesnt-understand-bang/
CC-MAIN-2015-35
en
refinedweb
In-Depth The Silverlight 5 PivotViewer allows you to create unique and interactive data visualizations for your users. Find out how to effectively use this technology and incorporate semantic zoom into your data collections. If you've spent much time writing data-driven applications, then you're painfully aware of how complex users' requirements can become around visualization. Each user in a domain will have his own idea of how data should be sorted, filtered and displayed. When Microsoft released Silverlight 5 in December 2011, a new data-visualization control called the PivotViewer was among the additions to the platform. PivotViewer allows you to build solutions that empower users to create their own vision of their data. Originally developed by Live Labs as a standalone Silverlight 4 control, PivotViewer sets the bar in data visualization by providing an intuitive client-side interface for sorting and filtering data, and presenting two distinct data views with different zoom levels. Built on the Silverlight Deep Zoom technology, PivotViewer allows you to enable users to visualize collections of data in an interactive way. This type of data visualization presents a unique solution that allows users to not only create customized views of a data set, but to discover trends and relationships within the data that might have been missed in more traditional formats. In this article, I'll cover the fundamentals of using PivotViewer and show you how to incorporate Semantic Zoom into your data collections. Creating a PivotViewer Application Because the PivotViewer is now a part of the Silverlight SDK, the only required installations are Microsoft Visual Studio 2010 SP1 and the Microsoft Silverlight 5 Tools for Visual Studio 2010 SP1. You can download the developer tools here. Once you have everything installed, create a Silverlight 5 application by selecting File, New and Project from the Visual Studio menu. After the New Project dialog is displayed, select a Silverlight Application, as shown in Figure 1. For this project, the only reference you need to add is the PivotViewer control. Unlike the previous version, PivotViewer is now entirely contained in a single DLL, making it easier to add the control to your project. To add a reference to the PivotViewer, right-click References in the Silverlight project and add a reference to System.Windows.Controls.Pivot. Now the project is set up, and you're ready to build a PivotViewer application. You need only three things to create a PivotViewer application: data, properties and templates. Like other Silverlight item controls, a collection of data can be added to PivotViewer by setting the ItemsSource property (one of the most requested features after the release of the original version). With this feature, you're able to use any method that you choose to obtain data, whether it's Collection Extensible Markup Language (CXML, the original PivotViewer data source), Open Data Protocol (OData) or Rich Internet Application (RIA) services. In order to focus on the PivotViewer -- instead of on obtaining data -- the sample code (found in the code download and available at VisualStudioMagazine.com/Champion0912) is going to generate some sample data on the client at runtime. To accomplish this, you need to define a data class -- in this case, statistics on each table of diners during a single shift in a restaurant. Defining a Data Class The first order of business is to create a class in your project that will contain the dinner party statistics. 
Right-click on the Silverlight project and select Add, then Class. Once the Add New Item dialog screen appears, name the class DinnerParty.cs. Then add the following properties to the class: Id, StaffName, TimeSeated and SizeOfParty. In order to generate a collection of data for this example, you'll create a static method in the DinnerParty class that will return a collection of 100 DinnerParty objects. This method should handle each property uniquely so the data is somewhat meaningful to test against. For example, each party will get a unique Id; the SizeOfParty will be a random number between 1 and 5; and table assignments will rotate through each member of the wait staff. The final DinnerParty class is shown in Listing 1. Before adding the newly generated data to the PivotViewer, you must first add a PivotViewer to the project. Because this is a PivotViewer example, the application's UI will consist solely of a PivotViewer object. The first step is to open the MainPage.xaml and add a namespace to System.Windows.Controls.PivotViewer. Next, add an instance of the PivotViewer to the Grid control and name it pViewer: <UserControl x:Class="VSM_PivotViewer.MainPage" xmlns="" xmlns:x="" xmlns:pivot="clr-namespace:System.Windows.Controls.Pivot;assembly= System.Windows.Controls.Pivot" > <Grid x: <pivot:PivotViewer x: </pivot:PivotViewer > </Grid > </UserControl > The last step is to assign the ItemsSource property of pViewer to our generated collection. Open up the codebehind file, MainPage.xaml.cs. In the constructor, you can now add the generated data: public MainPage() { InitializeComponent(); pViewer.ItemsSource = DinnerParty.BuildData(); } After defining the data source for this example, you must define the properties PivotViewer will display. This is accomplished by setting the PivotViewerProperties property to a collection of PivotViewerProperty objects. There are four different PivotViewerProperty classes that are available: The purpose of a PivotViewerProperty is to add metadata to each property that you wish to display inside PivotViewer and tell it how you want the data to be treated. PivotViewer will only display data for the properties you define. If your object has 20 properties and you only define four, then you'll only see those four within the PivotViewer UI. A PivotViewerProperty is defined by a unique name, a display name, a binding statement that maps it to your object, and one or more options. Each option defines how the PivotViewerProperty will be used within the PivotViewer. The following list includes the available options and explains how they're handled by PivotViewer: Next, add all four properties to the PivotViewerProperties collection. You can define these properties in XAML or code. Listing 2 shows the complete XAML for declaring the PivotProperties. For Id and SizeOfParty, use a PivotViewerNumericProperty. TimeSeated is a PivotViewerDateTimeProperty, and StaffName is a PivotViewerStringProperty. All of the properties except for Id have the CanFilter option set, so they appear in the filter pane. In addition, the StaffName has the CanSearch option set, so you can search features of PivotViewer to look up a particular staff member. Designing Trading Cards The final -- and arguably most important -- part of using the PivotViewer is to define the visual representation of each object, otherwise called a trading card. This is accomplished by setting the ItemTemplates property of the PivotViewer to a collection of one or more PivotViewerItemTemplates. 
A PivotViewerItemTemplate is an extension of a Silverlight DataTemplate. It uses data binding to populate its UI from each object in the ItemsSource collection. Designing the trading cards is often the biggest challenge for developers. Let's face it: This jumps into the design world. In order to design an effective trading card, it's important to understand the data you're displaying and what's important to the user. Designed correctly, the trading cards can add an extra dimension to the data that enables users to visually identify relationships and trends. Size is an important design concept to keep in mind. When you're looking at an entire collection containing 1,000 items, each trading card will be very small. Because the purpose of the trading card is to convey information to the user, it's only possible to present one or two pieces of information. Any more than that and the cards will be too cluttered for the user to distinguish between them. As the size of the cards gets larger on the screen, there's more real estate available to add information for the user. Ideally, you'd like to present a more detailed card as the user zooms in closer and increases the size of the trading card on the screen. For example, in our sample data collection, the StaffName and SizeOfParty are two important pieces of information that stand out. The first trading card will focus on these two properties. The trading card itself will be a square. The SizeOfParty will be displayed in a large TextBlock that's centered in the square. The only remaining question: How do you show the StaffName? If you print the StaffName on the card, it will be unreadable when you zoom out to view the entire collection. This is where visualization comes into play. The No. 1 question I get asked about using PivotViewer is, "What happens if I don't have any pictures in my data?" Some developers feel that this will prevent them from having a usable collection. This notion couldn't be further from the truth. Developers routinely present other forms of data visualization that don't require images. When was the last time you needed a picture for each data point in a line or bar chart? It's possible to present effective visualizations with a combination of text, color and icons. Not only can you make the trading cards visually appealing, but you can also use these tools to help the user identify trends and relationships among the displayed data set. Trying to visualize the StaffName property is a good example. If each staff member is represented by a different color and you set the background of each trading card to the color representing that staff member, then you have created a visualization that will make sense and is useful to the user. With a large number of trading cards, a user can look at the histogram of tables per hour and can visually determine which staff member had the most tables for that time period. Implementing Value Converters The best way to accomplish this visualization is with a value converter. By creating a value converter that converts a StaffName string to a SolidColorBrush, you'll be able to bind the trading card's background to the StaffName. Start by adding a new class to the project named StaffNameValueConverter.cs. The class should inherit the IValueConverter interface. Visual Studio will add the code necessary to implement this interface for you. This can be done by right-clicking IValueConverter and selecting Implement Interface. The IValueConverter interface has two methods: Convert and ConvertBack. 
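Listing 3 itself is not reproduced in this excerpt, but based on the article's description (a switch that maps each of the three staff names to a unique SolidColorBrush), an implementation would look roughly like the following. The staff names and colors here are placeholders, since the actual values live in the downloadable sample code.

using System;
using System.Globalization;
using System.Windows.Data;
using System.Windows.Media;

// Sketch of the StaffNameValueConverter described in the text.
public class StaffNameValueConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Map each staff member to a unique brush (names are placeholders).
        switch (value as string)
        {
            case "Alice":
                return new SolidColorBrush(Colors.Blue);
            case "Bob":
                return new SolidColorBrush(Colors.Green);
            case "Carol":
                return new SolidColorBrush(Colors.Orange);
            default:
                return new SolidColorBrush(Colors.Gray);
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Never needed here: we do not convert from a color back to a staff name.
        throw new NotImplementedException();
    }
}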
The Convert property receives the raw value in a binding statement and lets you convert it into something different. For this example, it takes the StaffName string and returns a SolidColorBrush based on that string. In this instance, it isn't necessary to write an implementation for the ConvertBack because the binding doesn't need to convert from a color back to the StaffName. Listing 3 shows an implementation of a value converter that will map the staff names to unique colors. The method contains a switch statement that will map each of the three names of the staff members to a unique color. While a more flexible converter would need to be created in a real-world solution, this is suitable for demonstrating how to visualize a textual property. Before you can use the new value converter an instance of it must be added to your XAML. The first thing to add is a new namespace to the UserControl: xmlns:local="clr-namespace:VSM_PivotViewer" An instance of the value converter is then added to the UserControl by adding a resource to the UserControl: <UserControl.Resources > <local:StaffNameValueConverter x: </UserControl.Resources > Once the StaffNameValueConverter code is completed, you can create the first PivotViewerItemTemplate and add it to the ItemTemplates property, as demonstrated in Listing 4. As previously mentioned, the template will only contain a Grid and a TextBlock. The Grid's Background property is bound to the StaffName property using the StaffNameValueConverter. The TextBlock has its Text property bound to the SizeOfParty property. Running the project, the PivotViewer is loaded with 100 items displayed, as shown in Figure 2. You're able to change the sort, filter the collection, and see the item details in the detail pane on the right-hand side when you select an item. It's important to remember that all of this functionality came out of the box. All that's required is to supply the data, and describe what to show and how to show it. The first thing that should catch your eye is the division of tiles between all three of our staff members. Even with the simplistic trading cards, it's possible to begin inferring data relationships. For example, sorting by "Staff Name" and selecting the histogram view will show that each staff member has roughly the same number of tables in the system. Sorting by "Size of Party" in the histogram view lets you visualize which staff members had the larger or smaller parties. It should quickly become evident that this approach to data analysis can be useful to the user given properly designed cards. Using Semantic Zoom Another improvement in the latest release of PivotViewer is the concept of Semantic Zoom. Semantic Zoom is the ability to create different visualizations based on the size of an item. This lets you change the level of detail that the user sees as he zooms closer to an item. If you take a look at the current example, not a lot of detail is shown once you select a trading card. To take full advantage of the PivotViewer, you should provide more information as the user zooms in closer to a trading card. The solution is to add a second trading card with more details to the PivotViewer. Before adding a second template to the ItemTemplates collection, it's necessary to take a closer look at the PivotViewer workflow with regard to item templates. The PivotViewerItemTemplate has a MaxWidth property. This property is important to the workflow. When a trading card is rendered, the first template in the collection is used to render the card. 
If the MaxWidth property is set on the template, then once the trading card is wider in screen pixels than that value, the PivotViewer moves to the next card in the collection. This process continues until the correct size is found, or there are no more templates to check. If this happens, the trading card won't be generated and a blank image will be rendered to the screen. In order to present the user with more information, Listing 5 shows an updated ItemTemplates property for the PivotViewer. The template adds more details from our data set. For instance, both the staff name and the time the table was seated are displayed on the card. The template is added after the first template, so the PivotViewer will still load the original template first. Before the new template is used by the PivotViewer, you must first set a MaxWidth to the original template. Listing 5 shows how the original template was modified to define a MaxWidth equal to 300. Running the app will generate the same view as the first example. However, once you zoom in closer to the cards, the cards will change to the new, more detailed template as shown in Figure 3. If you zoom in slowly enough, you'll also notice that the PivotViewer creates a smooth transition between the two templates by blending in the new template over the original. If you take a look at what this second template provides you, you should begin to see the power of Semantic Zoom. Effectively, utilizing this approach allows you to bring another dimension to your users. Imagine if you only used a single template in your collection. You'd have to take one of two approaches. The first would be to use a simplistic card, as in the original collection. This would give you the ability to create graphical correlations between data points, but overall it would be lacking in details about a single data point. If you went the other direction and created a single detailed card, all of your cards would look identical when viewing the entire collection. This would hinder your ability to show relationships between data. In addition, you'd take a performance hit by attempting to render an entire collection with a complex template. With the enhancements in the latest version of the PivotViewer, it's now conceivable to give the user the ability to choose which templates to use and how those templates are defined. A single interface could be used in completely different ways to display data. The PivotViewer API allows you to set things like the current viewer, the filter state and the sorting property via code. Therefore, you can store this information for the user and restore settings for future sessions. The Silverlight 5 PivotViewer has come a long way since the original version. Its client-side XAML-based rendering, dynamic updates and enhanced API provide a solid foundation for creating interactive and informative data visualizations. Adding the PivotViewer to your apps will give your users a completely new way to experience their data and allow them to explore it in ways not previously possible. Printable Format > More TechLibrary I agree to this site's Privacy Policy. > More Webcasts
https://visualstudiomagazine.com/articles/2012/09/01/semantic-zoom.aspx
CC-MAIN-2015-35
en
refinedweb
Although nowadays most of us have broadband connections, resource caching is important as loading a resource from your local HD is (by now) still faster than fetching it remotely. In this post I'd like to explore how to control the ASP.net MVC caching behavior and its effects when using ajax requests for retrieving data. Default ASP.net MVC Caching Behavior If you don't specify anything at all and you have a plain normal action method like public JsonResult Details(long id) { //snip snip return Json(theResult, JsonRequestBehavior.AllowGet); } then ASP.net MVC will return the response with the following headers: Cache-Control:private Connection:Close Content-Length:81836 Content-Type:application/json; charset=utf-8 Date:Mon, 29 Oct 2012 08:08:44 GMT Server:ASP.NET Development Server/11.0.0.0 X-AspNet-Version:4.0.30319 X-AspNetMvc-Version:3.0 According to the offical W3.org docs, Cache-Control. As a result, the browser might cache such ajax request made by a JavaScript client, which most of the time might not be desired. Especially our beloved IE used a massive caching approach leading to strange effects. Disabling Caching Behavior Lets first take a look on how to completely disable caching. Globally on the Server (Custom Approach) If we want to completely disable any kind of caching behavior, we could employ a kind of brute force approach by implementing a custom global action filter that sets the corresponding headers: public class MvcApplication : System.Web.HttpApplication { ... public static void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new NoCacheGlobalActionFilter()); } ... } public class NoCacheGlobalActionFilter : ActionFilterAttribute { public override void OnResultExecuted(ResultExecutedContext filterContext) { ... HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache; cache.SetCacheability(HttpCacheability.NoCache); base.OnResultExecuted(filterContext); } } The response headers will vary as follows: Cache-Control:no-cache Connection:Close Content-Length:81836 Content-Type:application/json; charset=utf-8 Date:Mon, 29 Oct 2012 08:48:40 GMT Expires:-1 Pragma:no-cache Server:ASP.NET Development Server/11.0.0.0 X-AspNet-Version:4.0.30319 X-AspNetMvc-Version:3.0 Note the Cache-Control and Expires header that has been added. This will prevent any kind of caching, on the server as well as on the client side. Globally on the Server (OutputCache) But rather than creating a custom filter, why not just re-use something existing. ASP.net has already a build-in caching mechanism called OutputCache. It is quite powerful and I'll go into more detail soon. As a result you could annotate your controller as follows [OutputCache(Duration = 0)] public class SomeController : Controller { } in order to prevent caching. Consequently the response headers contain a Cache-Control: public, max-age=0 header. Attention, you could also specify the following in your web.config: <caching> <outputCache enableOutputCache="false" /> </caching> However, this won't prevent caching. Instead it just indicates that no kind of caching mechanism should be applied. By just disabling the output cache we get the default cache headers used by ASP.net MVC which falls back to Cache-Control: private, thus again opening the browser the possibility to cache requests. On the Client-Side using jQuery Beside disabling the cache on the server-side, you also have the possibility to control the caching behavior on the client-side. 
jQuery's ajax - for instance - allows you to specify a cache flag. Lets look at its effects. Executing an ajax request with $.ajax({ type: 'GET', url: '/nation', ... }); uses the following headers: GET /nation and answers with HTTP/1.1 200 OK Server: ASP.NET Development Server/11.0.0.0 Date: Mon, 29 Oct 2012 08:54:35 GMT X-AspNet-Version: 4.0.30319 X-AspNetMvc-Version: 3.0 Cache-Control: private Content-Type: application/json; charset=utf-8 Content-Length: 81836 Connection: Close Instead, by using setting cache: false like $.ajax({ type: 'GET', cache: false, url: '/nation', ... }); the request headers look as follows GET /nation?_=1351500913222 The response parameters don't vary. But note that jQuery adds an additional, random generated request parameter _=... which is used to enforce a cache invalidation. Enable Caching Before I illustrated how to disable caching behavior. Now I'd like to take a look on how to enable it. On the Server Side While you could easily write your custom caching attribute using action filters, there is no need at all. ASP.net has the OutputCache mechanism which also provides a OutputCacheAttribute that can be applied to Controller actions. For instance we could decorate our controller action like [OutputCache(Duration=3, VaryByParam="*")] public JsonResult Details(long id) { //snip snip return Json(theResult, JsonRequestBehavior.AllowGet); } This causes the following response headers to be injected: HTTP/1.1 200 OK Server: ASP.NET Development Server/11.0.0.0 Date: Mon, 29 Oct 2012 09:18:35 GMT X-AspNet-Version: 4.0.30319 X-AspNetMvc-Version: 3.0 Cache-Control: public, max-age=3 Expires: Mon, 29 Oct 2012 09:18:37 GMT Last-Modified: Mon, 29 Oct 2012 09:18:34 GMT Vary: * Content-Type: application/json; charset=utf-8 Content-Length: 81836 Connection: Close Those of interest here are Cache-Control, Expires and Last-Modified. Client Side Caching We could potentially also enforce client-side caching mechanisms. I'm thinking about using techniques that have been made available with HTML5 like - ApplicationCache - Client-side storage such as localStorage and sessionStorage To what regards the localStorage approach, there exists a nice jQuery plugin on GitHub which might be worth looking at: jQuery-ajax-jstorage-cache (with a fork by Paul Irish as well). Conclusion: Caching Where and When I need It In a real world application you'd obviously expect to have a proper combination of disabling/enabling the cache where appropriate. This implies to have the possibility to use an approach where you define a global default which can be overwritten where needed. The approach I applied is to enforce disabling of the cache on a global basis for then enabling it on an as-needed basis. That is, all our controllers inherit from a custom BaseController class which performs some additional common work. I then annotated that class as follows [OutputCache(Duration=0, VaryByParam="*")] public class BaseController : Controller { ... } On those action methods where I explicitly desire caching behavior I can now add the OutputCache attribute to enforce it: public class SomeController : BaseController { public JsonResult UnCachedAction(){ ... } [OutputCache(Duration=60, VaryByParam="id")] public JsonResult CachedAction(long id){ ... } } This turns out to be quite powerful as the amount of custom code involved is kept at a minimum. If you're interested in using the OutputCache mechanism I'd suggest you to also take a look at custom Cache Profiles. 
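As a quick taste of what that looks like, a cache profile moves the caching parameters out of the attribute and into web.config so they can be changed without recompiling. A minimal sketch follows; the profile name and duration are arbitrary, and I haven't verified every MVC version's corner cases (child actions in particular), so treat it as a starting point:

<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="ShortLived" duration="60" varyByParam="id" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>

The action then references the profile instead of hard-coding the values:

[OutputCache(CacheProfile = "ShortLived")]
public JsonResult CachedAction(long id){ ... }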
http://juristr.com/blog/2012/10/output-caching-in-aspnet-mvc/
CC-MAIN-2015-35
en
refinedweb
Examples > Strings

You can get the length of a String using the length() command, or eliminate extra characters using the trim() command. This example shows you how to use both commands.

There is no circuit for this example, though your Arduino must be connected to your computer via USB. (Image developed using Fritzing. For more circuit examples, see the Fritzing project page.)

length() returns the length of a String. There are many occasions when you need this. For example, if you wanted to make sure a String was less than 140 characters, to fit it in a text message, you could check its length() before sending it, as in the sketch below.

trim() is useful for when you know there are extraneous whitespace characters on the beginning or the end of a String and you want to get rid of them. Whitespace refers to characters that take space but aren't seen. It includes the single space (ASCII 32), tab (ASCII 9), vertical tab (ASCII 11), form feed (ASCII 12), carriage return (ASCII 13), and newline (ASCII 10). The sketch below also shows a String with whitespace, before and after trimming.
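A minimal sketch along those lines might look like this; the 140-character limit and the padded test string are just illustrative values:

String txtMsg = "Hello! This is a test message.";
String padded = "   lots of spaces around this   ";

void setup() {
  Serial.begin(9600);

  // length(): check a message's size before sending it somewhere.
  Serial.print("Message length: ");
  Serial.println(txtMsg.length());
  if (txtMsg.length() < 140) {
    Serial.println("Short enough to fit in a text message.");
  }

  // trim(): remove the leading and trailing whitespace in place.
  Serial.print("Before trim: <");
  Serial.print(padded);
  Serial.print(">  length = ");
  Serial.println(padded.length());

  padded.trim();

  Serial.print("After trim:  <");
  Serial.print(padded);
  Serial.print(">  length = ");
  Serial.println(padded.length());
}

void loop() {
}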
https://www.arduino.cc/en/Tutorial/StringLengthTrim
CC-MAIN-2015-35
en
refinedweb
AbstractCosJSB; serviceFlags; AbstractServiceFlagJSB; ServiceConfiguration and doServiceConfiguration; JavaSpace ServiceFlag Entry.

Services have access to the full range of features of a Rio JSB, and as such can be good citizens of a J2EE or CORBA environment, or a JSP Web application.

Figure 1. COS Admin Screen.

dawgTaskTracker.xml; AbstractClientApp; connect().

Once connected to the COS, your application provides a handy CosConnectable object with create, retrieve, update, delete, and other methods you can use to manage shared information.

TaskTrackerEntry

Let's take a quick look at the actual TaskTrackerEntry:

public class TaskTrackerEntry extends CosEntry implements Archivable

Figure 2. Task List.

CosEntry; CosEntries; creationEnsembleName; cos.create(entry); setCreationEnsembleName().
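Pulling those surviving pieces together, a client interaction along the lines the article describes might look roughly like the sketch below. Everything here beyond the names actually mentioned above (CosConnectable with create/retrieve/update/delete, CosEntry, Archivable, setCreationEnsembleName, connect) is invented for illustration, including the taskDescription field and all signatures, so treat it as a guess rather than the real API.

// Hypothetical sketch only; signatures are inferred, not taken from the SDK.
public class TaskTrackerEntry extends CosEntry implements Archivable {
    public String taskDescription;   // made-up field for illustration

    public TaskTrackerEntry() {
    }
}

// Somewhere in a client built on AbstractClientApp:
CosConnectable cos = connect();                    // obtain the COS connection

TaskTrackerEntry entry = new TaskTrackerEntry();
entry.taskDescription = "Walk the dog";
entry.setCreationEnsembleName("dawgTaskTracker");  // assumed usage of the setter

cos.create(entry);                                 // share the entry through the COS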
http://archive.oreilly.com/pub/a/onjava/2002/05/15/911proof.html
CC-MAIN-2015-35
en
refinedweb
Name | Synopsis | Description | Return Values | VALID STATES | Errors | TLI COMPATIBILITY | Attributes | See Also #include <xti.h> int t_rcvv(int fd, struct t_iovec *iov, unsigned int iovcount, int *flags); This function receives either normal or expedited data. The argument fd identifies the local transport endpoint through which data will arrive, iov points to an array of buffer address/buffer size pairs (iov_base, iov_len). The t_rcvv() function receives data into the buffers specified by iov0.iov_base, iov1.iov_base, through iov [iovcount-1].iov_base, always filling one buffer before proceeding to the next. Note that the limit on the total number of bytes available in all buffers passed:. The argument iovcount contains the number of buffers which is limited to T_IOV_MAX, which is an implementation-defined value of at least 16. If the limit is exceeded, the function will fail with TBADDATA. The argument flags may be set on return from t_rcvv() and specifies optional flags as described below. By default, t_rcvv() operates in synchronous mode and will wait for data to arrive if none is currently available. However, if O_NONBLOCK is set by means of t_open(3NSL) or fcntl(2), t_rcvv()_rcvv() or t_rcv(3NSL) calls. In the asynchronous mode, or under unusual conditions (for example, the arrival of a signal or T_EXDATA event), the T_MORE flag may be set on return from the t_rcvv() call even when the number of bytes received is less than the total size of all the receive buffers. Each t_rcvv() with the T_MORE flag set indicates that another t_rcvv() must follow to get more data for the current TSDU. The end of the TSDU is identified by the return of a t_rcvv() the amount of buffer space passed in iov is greater than zero on the call to t_rcvv(), then t_rcvv()_rcvv() which will return with T_EXPEDITED set in flags. The end of the ETSDU is identified by the return of a t_rcvv() call with T_EXPEDITED set and T_MORE cleared. If the entire ETSDU is not available it is possible for normal data fragments to be returned between the initial and final fragments of an ETSDU. If a signal arrives, t_rcvv() returns, giving the user any data currently available. If no data is available, t_rcvv() returns –1, sets t_errno to TSYSERR and errno to EINTR. If some data is available, t_rcvv() via the EM interface. On successful completion, t_rcvv() returns the number of bytes received. Otherwise, it returns –1 on failure and t_errno is set to indicate the error. T_DATAXFER, T_OUTREL. On failure, t_errno is set to one of the following: iovcount is greater than T_IOV_MAX.: fcntl(2), t_getinfo(3NSL), t_look(3NSL), t_open(3NSL), t_rcv(3NSL), t_snd(3NSL), t_sndv(3NSL), attributes(5) Name | Synopsis | Description | Return Values | VALID STATES | Errors | TLI COMPATIBILITY | Attributes | See Also
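To make the calling convention concrete, a receive into two buffers could be structured roughly as follows. Error handling is abbreviated and the buffer sizes are arbitrary; consult the rest of this page (and t_look(3NSL)) for the details this sketch glosses over.

#include <xti.h>
#include <stdio.h>

/* Sketch: scatter a single receive into two buffers with t_rcvv(). */
int receive_into_two_buffers(int fd)
{
    char hdr[64];
    char body[4096];
    struct t_iovec iov[2];
    int flags = 0;
    int n;

    iov[0].iov_base = hdr;
    iov[0].iov_len  = sizeof(hdr);
    iov[1].iov_base = body;
    iov[1].iov_len  = sizeof(body);

    n = t_rcvv(fd, iov, 2, &flags);
    if (n < 0) {
        /* t_errno describes the failure, e.g. TNODATA in non-blocking mode. */
        return -1;
    }

    if (flags & T_EXPEDITED)
        printf("received %d bytes of expedited data\n", n);
    else
        printf("received %d bytes of normal data\n", n);

    if (flags & T_MORE)
        printf("more of this TSDU/ETSDU remains; call t_rcvv() again\n");

    return n;
}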
http://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et6a/index.html
CC-MAIN-2015-35
en
refinedweb
Every thread of execution in an application is created by instantiating an object of a class derived from the Thread class. #include <thread.h> Inherited by ost::PosixThread, ost::SerialService, ost::SocketService, ost::TCPSession, ost::ThreadQueue, ost::TTYSession, and ost::UnixSession.. class PosixThread class DummyThread class Cancellation class postream_type class Slog class ThreadImpl void operator++ (Thread &th) Signal the semaphore that the specified thread is waiting for before beginning execution. void operator-- (Thread &th) Every. How work cancellation. How work suspend. How to raise error. How work cancellation. Enumerator: How work suspend. Enumerator: How to raise error. Enumerator: This is actually a special constructor that is used to create a thread 'object' for the current execution context when that context is not created via an instance of a derived Thread object itself. This constructor does not support First. Parameters: When: A thread of execution can also be specified by cloning an existing thread. The existing thread's properties (cancel mode, priority, etc), are also duplicated. Parameters: The thread destructor should clear up any resources that have been allocated by the thread. The desctructor of a derived thread should begin with Terminate() and is presumed to then execute within the context of the thread causing terminaton. clear parent thread relationship. Start a new thread as 'detached'. This is an alternative start() method that resolves some issues with later glibc implimentations which incorrectly impliment self-detach. Returns: Parameters: Examples: thread2.cpp. This is used to help build wrapper functions in libraries around system calls that should behave as cancellation points but don't. Returns: Used to properly exit from a Thread derived run() or initial() method. Terminates execution of the current thread and calls the derived classes final() method. Examples: bug2.cpp, and tcpservice.cpp. This is used to restore a cancel block. Parameters:...) Note: See also: run Reimplemented in ost::ThreadQueue. Examples: tcpthread.cpp. Referenced by ost::getThread(). Used to retrieve the cancellation mode in effect for the selected thread. Returns: Get exception mode of the current thread. Returns:(). Returns: Get system thread numeric identifier. Returns: Get the name string for this thread, to use in debug messages. Returns: Gets the pointer to the Thread class which created the current thread object. Returns:. See also: final Reimplemented in ost::TCPSession, and ost::UnixSession. Check if this thread is detached. Returns: Verifies if the thread is still running or has already been terminated but not yet deleted. Returns: Tests to see if the current execution context is the same as the specified thread object. Returns: Blocking call which unlocks when thread terminates. When a thread terminates, it now sends a notification message to the parent thread which created it. The actual use of this notification is left to be defined in a derived class. Parameters:. See also: Examples: bug1.cpp, bug2.cpp, tcpservice.cpp, tcpstr1.cpp, tcpthread.cpp, thread1.cpp, and thread2.cpp. Sets thread cancellation mode. Threads can either be set immune to termination (cancelDisabled), can be set to terminate when reaching specific 'thread cancellation points' (cancelDeferred) or immediately when Terminate is requested (cancelImmediate). Parameters: Set exception mode of the current thread. Returns: Set the name of the current thread. 
If the name is passed as NULL, then the default name is set (usually the object pointer).

Set the base stack limit before manual stack sizes have effect.

Sets the thread's ability to be suspended from execution. The thread may either have suspend enabled (suspendEnable) or disabled (suspendDisable).

Examples: tcpservice.cpp and tcpstr1.cpp.

Signal the semaphore that the specified thread is waiting for before beginning execution. Reimplemented in ost::PosixThread.

Generated automatically by Doxygen for GNU CommonC++ from the source code.
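As a concrete illustration of the derive-and-implement-run() model this page describes, a minimal worker thread could be sketched as follows. The include mirrors the synopsis above; join() stands in for the blocking call that unlocks when the thread terminates, and setName() is the debug-name setter listed here, so adjust the details to whatever your CommonC++ version actually provides.

#include <thread.h>      // as in the synopsis above
#include <iostream>

// Minimal sketch of a thread derived from ost::Thread.
class Worker : public ost::Thread
{
public:
    Worker()
    {
        setName("worker");               // give the thread a name for debug messages
    }

protected:
    void run()
    {
        for (int i = 0; i < 5; ++i)
            std::cout << "working, step " << i << std::endl;
        // Falling off the end of run() terminates the thread...
    }

    void final()
    {
        // ...and final() is then called as the last step of the lifecycle.
        std::cout << "worker finished" << std::endl;
    }
};

int main()
{
    Worker w;
    w.start();    // execution of run() only begins once start() is called
    w.join();     // block until the thread has terminated (see the note above)
    return 0;
}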
http://www.makelinux.net/man/3/O/ost_Thread
CC-MAIN-2015-35
en
refinedweb
Ext Certified Developer hi team and community, i am not sure if this topic came up already. in germany, some companies are searching for developers, who have skills in developing apps with your framework. but since there is no official certified test or an education for this, it is not easy to determine who is a good choice and who is not. i am not only interested to get such an certification, but i am willing to help to build up something like this in germany. kind regards, tobiu Some folks have asked about this in the past and I don't recall the exact response, but I believe it was not favorable. How you determine who is a good choice is by asking techinical questions during an interview. Questions that Only an Ext JS developer with the proper experience can answer. For instance (it took me only a minute or two to generate these): Q: What are the three phases of the component lifecycle? Q: What is the lowest component class that can participate as a child in a layout? Q: What is the lowest component class that can manage other child items? Q: What does the Data Reader class provide for the Data Store class? Q: Name five layouts in the framework. Q: What layout allows for multiple children, each taking 100% of the parent's available body space but only allows one child to be shown at a time Q: What do Fn.createDelegate, Fn.createInterceptor and Fn.createSquence do? Q: Explain what Ext.Element's role is in the framework I think you get the point If the interviewer does not know the framework, they can create questions from Learning Ext JS () or Ext JS in Action () No ballgame is not complete without its curveballs dude. trick question (I still can't believe I didn't see the answer and tried to calculate it). I never cottoned to "interview questions" like that. Nor, of course, to "certifications." My "question" has always been: "show me a sample of your work." Then: "give me the names of three good references who have been your former clients." Pick up the phone, preferably while they're there, and just see if you can talk to them. A good workman's reputation follows him. So does a bad one's. If the person has built a web-site, then by going to that site I will be able to see the source code that runs it. (It's actually fairly unlikely that the builder bothered to compress it.) Get this person actively involved in conversation about how they did it, and what they did. You can sniff out a fake pretty readily, i-f you know the stuff yourself. Then, just make it clear (in writing) from the start that there will be a 30-day probationary period. Anyone who can "get on their feet" will get on their feet within such time. In my experience, quite frankly,"certifications" are worthless. They're just a product. When all is said and done, whoever is cranking them out has such an extraordinary incentive to do just that, that it renders their little pieces of paper utterly useless as a discriminating factor. Your mileage, of course, may vary... Asking for examples of their work is secondary. Any monkey can go out there and copy someone's code. Hell many developers here do that with the Ext JS Examples, where they build entire applications with the ExtJS Copyright and Ext.example namespace written all over it. The original post was about an Ext JS certification, which does nothing to display one's ability to effectively and efficiently use the product. 
https://www.sencha.com/forum/showthread.php?88415-Ext-Certified-Developer
CC-MAIN-2015-35
en
refinedweb
BUS_SETUP_INTR, bus_setup_intr, BUS_TEARDOWN_INTR, bus_teardown_intr -- create, attach and teardown an interrupt handler

#include <sys/param.h>
#include <sys/bus.h>

int BUS_SETUP_INTR(device_t dev, device_t child, struct resource *irq, int flags, driver_intr_t *intr, void *arg, void **cookiep);
int bus_setup_intr(device_t dev, struct resource *r, int flags, driver_intr_t handler, void *arg, void **cookiep);
int BUS_TEARDOWN_INTR(device_t dev, device_t child, struct resource *irq, void *cookiep);
int bus_teardown_intr(device_t dev, struct resource *r, void *cookiep);

The method BUS_SETUP_INTR will create and attach an interrupt handler to an interrupt previously allocated by the resource manager's BUS_ALLOC_RESOURCE(9) method. The flags are found in <sys/bus.h>, and give the broad category of interrupt. The flags also tell the interrupt handlers about certain device driver characteristics. INTR_FAST means the handler is for a timing-critical function. Extra care is taken to speed up these handlers. Use of this implies INTR_EXCL. INTR_EXCL marks the handler as being an exclusive handler for this interrupt. INTR_MPSAFE tells the scheduler that the interrupt handler is well behaved in a preemptive environment (``SMP safe''), and does not need to be protected by the ``Giant Lock'' mutex. INTR_ENTROPY marks the interrupt as being a good source of entropy - this may be used by the entropy device /dev/random. The handler intr is called with arg as its argument when the interrupt fires; a cookie identifying the established handler is returned through cookiep and is later passed to BUS_TEARDOWN_INTR to detach the handler. Zero is returned on success, otherwise an appropriate error is returned.

random(4), device(9), driver(9)

FreeBSD 5.2.1 March 28, 2003 FreeBSD 5.2.1
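In a driver, the usual pattern is to allocate the IRQ resource and then attach the handler in the attach routine, roughly as follows. The softc layout, the flag choice and the error handling are illustrative only, and bus_alloc_resource_any() is used here for brevity (older trees spell the same thing out with bus_alloc_resource()).

/* Illustrative attach-time setup matching the synopsis above. */
static void
mydev_intr(void *arg)
{
        struct mydev_softc *sc = arg;   /* hypothetical driver softc */

        /* acknowledge and service the device here */
}

static int
mydev_attach(device_t dev)
{
        struct mydev_softc *sc = device_get_softc(dev);
        int rid = 0;
        int error;

        sc->irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
            RF_ACTIVE | RF_SHAREABLE);
        if (sc->irq_res == NULL)
                return (ENXIO);

        error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_MISC | INTR_MPSAFE,
            mydev_intr, sc, &sc->irq_cookie);
        if (error)
                return (error);

        return (0);
}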
http://nixdoc.net/man-pages/FreeBSD/man9/bus_teardown_intr.9.html
CC-MAIN-2015-35
en
refinedweb
Unsuccessful append to an empty NumPy array Solution 1 numpy.append is pretty different from list.append in python. I know that's thrown off a few programers new to numpy. numpy.append is more like concatenate, it makes a new array and fills it with the values from the old array and the new value(s) to be appended. For example: import numpy old = numpy.array([1, 2, 3, 4]) new = numpy.append(old, 5) print old # [1, 2, 3, 4] print new # [1, 2, 3, 4, 5] new = numpy.append(new, [6, 7]) print new # [1, 2, 3, 4, 5, 6, 7] I think you might be able to achieve your goal by doing something like: result = numpy.zeros((10,)) result[0:2] = [1, 2] # Or result = numpy.zeros((10, 2)) result[0, :] = [1, 2] Update: If you need to create a numpy array using loop, and you don't know ahead of time what the final size of the array will be, you can do something like: import numpy as np a = np.array([0., 1.]) b = np.array([2., 3.]) temp = [] while True: rnd = random.randint(0, 100) if rnd > 50: temp.append(a) else: temp.append(b) if rnd == 0: break result = np.array(temp) In my example result will be an (N, 2) array, where N is the number of times the loop ran, but obviously you can adjust it to your needs. new update The error you're seeing has nothing to do with types, it has to do with the shape of the numpy arrays you're trying to concatenate. If you do np.append(a, b) the shapes of a and b need to match. If you append an (2, n) and (n,) you'll get a (3, n) array. Your code is trying to append a (1, 0) to a (2,). Those shapes don't match so you get an error. Solution 2 I might understand the question incorrectly, but if you want to declare an array of a certain shape but with nothing inside, the following might be helpful: Initialise empty array: >>> a = np.zeros((0,3)) #or np.empty((0,3)) or np.array([]).reshape(0,3) >>> a array([], shape=(0, 3), dtype=float64) Now you can use this array to append rows of similar shape to it. Remember that a numpy array is immutable, so a new array is created for each iteration: >>> for i in range(3): ... a = np.vstack([a, [i,i,i]]) ... >>> a array([[ 0., 0., 0.], [ 1., 1., 1.], [ 2., 2., 2.]]) np.vstack and np.hstack is the most common method for combining numpy arrays, but coming from Matlab I prefer np.r_ and np.c_: Concatenate 1d: >>> a = np.zeros(0) >>> for i in range(3): ... a = np.r_[a, [i, i, i]] ... >>> a array([ 0., 0., 0., 1., 1., 1., 2., 2., 2.]) Concatenate rows: >>> a = np.zeros((0,3)) >>> for i in range(3): ... a = np.r_[a, [[i,i,i]]] ... >>> a array([[ 0., 0., 0.], [ 1., 1., 1.], [ 2., 2., 2.]]) Concatenate columns: >>> a = np.zeros((3,0)) >>> for i in range(3): ... a = np.c_[a, [[i],[i],[i]]] ... >>> a array([[ 0., 1., 2.], [ 0., 1., 2.], [ 0., 1., 2.]]) Solution 3 This error arise from the fact that you are trying to define an object of shape (0,) as an object of shape (2,). If you append what you want without forcing it to be equal to result[0] there is no any issue: b = np.append([result[0]], [1,2]) But when you define result[0] = b you are equating objects of different shapes, and you can not do this. What are you trying to do? Solution 4 Here's the result of running your code in Ipython. Note that result is a (2,0) array, 2 rows, 0 columns, 0 elements. The append produces a (2,) array. result[0] is (0,) array. Your error message has to do with trying to assign that 2 item array into a size 0 slot. Since result is dtype=float64, only scalars can be assigned to its elements. 
In [65]: result=np.asarray([np.asarray([]),np.asarray([])]) In [66]: result Out[66]: array([], shape=(2, 0), dtype=float64) In [67]: result[0] Out[67]: array([], dtype=float64) In [68]: np.append(result[0],[1,2]) Out[68]: array([ 1., 2.]) np.array is not a Python list. All elements of an array are the same type (as specified by the dtype). Notice also that result is not an array of arrays. Result could also have been built as ll = [[],[]] result = np.array(ll) while ll[0] = [1,2] # ll = [[1,2],[]] the same is not true for result. np.zeros((2,0)) also produces your result. Actually there's another quirk to result. result[0] = 1 does not change the values of result. It accepts the assignment, but since it has 0 columns, there is no place to put the 1. This assignment would work in result was created as np.zeros((2,1)). But that still can't accept a list. But if result has 2 columns, then you can assign a 2 element list to one of its rows. result = np.zeros((2,2)) result[0] # == [0,0] result[0] = [1,2] What exactly do you want result to look like after the append operation? Solution 5 numpy.append always copies the array before appending the new values. Your code is equivalent to the following: import numpy as np result = np.zeros((2,0)) new_result = np.append([result[0]],[1,2]) result[0] = new_result # ERROR: has shape (2,0), new_result has shape (2,) Perhaps you mean to do this? import numpy as np result = np.zeros((2,0)) result = np.append([result[0]],[1,2]) - Cupitor over 2 years I am trying to fill an empty(not np.empty!) array with values using append but I am gettin error: My code is as follows: import numpy as np result=np.asarray([np.asarray([]),np.asarray([])]) result[0]=np.append([result[0]],[1,2]) And I am getting: ValueError: could not broadcast input array from shape (2) into shape (0)
https://9to5answer.com/unsuccessful-append-to-an-empty-numpy-array
CC-MAIN-2022-40
en
refinedweb
6.2 The Two Flavors of Variables

Here is an example of the use of `:=' in conjunction with the shell function; it also uses the variable MAKELEVEL, which is changed when it is passed down from level to level. (See the section on communicating variables to a sub-make for information about `MAKELEVEL'.)

ifeq (0,${MAKELEVEL})
whoami := $(shell whoami)
host-type := $(shell arch)
MAKE := ${MAKE} host-type=${host-type} whoami=${whoami}
endif

An advantage of this use of `:=' is that a typical `descend into a directory' command can then simply invoke ${MAKE}: the host-type and whoami settings are passed along automatically. Simply expanded variables also let you use the expansion functions much more efficiently (see Functions for Transforming Text).

There is another assignment operator for variables, `?='. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement:

FOO ?= bar

is exactly equivalent to this (see The `origin' Function):

ifeq ($(origin FOO), undefined)
FOO = bar
endif

Note that a variable set to an empty value is still defined, so `?=' will not set that variable.
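As a quick check of that last point, this fragment shows `?=' taking effect only when a variable is truly undefined (the variable names are arbitrary):

# CC is not defined anywhere yet, so the conditional assignment applies:
CC ?= gcc              # CC becomes gcc

# EMPTY is defined, even though its value is empty, so `?=' does nothing:
EMPTY =
EMPTY ?= fallback      # EMPTY is still empty

# Equivalent test using origin, as shown above:
# $(origin CC) would have been `undefined' before the first assignment,
# while $(origin EMPTY) is `file' after the `EMPTY =' line.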
http://osr600doc.xinuos.com/cgi-bin/info2html?(make.info.gz)Flavors&lang=en
CC-MAIN-2022-40
en
refinedweb
Subclassing Introduction Both Python and C++, being object-oriented programming languages, take advantage of the concept known as “Inheritance”, to allow for a class to subclass one or more other classes. This allows for the creation of a sub-class (or descendent class) that is said to “inherit” all the attributes of the super class (or ancestor class), usually with the purpose of expanding upon them. Subclassing pure-python classes from python or C++ classes from C++ is fairly straightforward and there’s plenty of literature on the subject. The Wikipedia article on inheritance is a good starting point before proceeding to the language-specific documentation. Special care however must be taken when creating a Python class that subclasses from a C++ class, as there are limitations to it. The Theory The C++ classes do not exactly exist in the Python namespace. They can’t; they’re C++ objects, not Python objects. Instead, for each C++ class that must be available through Python, a wrapper class that has the same name as the C++ class and all of the same methods has been created. When you call one of the methods on the Python wrapper, it turns around and calls the underlying C++ method of the same name. Thus, it looks like you’re actually dealing directly with the C++ object, even though you’re really dealing with a Python object. When you inherit from a C++ class, you are actually inheriting from the Python wrapper class. You can’t actually inherit from the C++ class itself, since you’re writing a Python class, not a C++ class. This means that whenever you create an instance of your new inherited class, you’re creating an instance of the C++ class, the Python wrapper, and your Python inherited class. But then if you pass a pointer of your instance to some C++ method, all it receives is a pointer to the C++ class. In the context of Panda, if you create an instance of a new “node” class and store it in the scene graph, you are really only storing the underlying C++ object in the scene graph–the Python part of the object gets left behind. This makes sense, because the C++ structures can only store pointers to C++ objects, not Python objects. So, when you pull the node out of the scene graph later, it creates a new Python wrapper around it and returns that new wrapper. Now all you have is the original C++ node–it’s not your new node class anymore, it’s just the Python wrapper to the C++ class. The Practice With most C++ classes the only way forward is to create a new C++ subclass and the related Python wrapper around it. However, there is a work-around for classes such as PandaNode and NodePath. Both these C++ classes have in fact been designed with functionality to store and retrieve python objects on them. Specifically, the methods set_python_tag(), get_python_tag() and has_python_tag() are available to respectively store, retrieve and check for the existence of a pointer to an arbitrary Python object on these C++ objects. This allows us to subclass from the Python wrapper class around the C++ object and store, on the C++ object, a pointer to the new sub class. Let’s first see an example of what doesn’t work: import direct.directbase.DirectStart from panda3d.core import PandaNode # Here we define the new class, subclassing PandaNode # and adding a new variable to it. 
class MyNewNode(PandaNode): def __init__(self, aName): PandaNode.__init__(self, aName) self.aVariable = "A value" # Here we are creating a new node and we -think- # we are placing it in the scene graph: myNewNode = MyNewNode("MyNewNode") aNodePath = aspect2d.attachNewNode(myNewNode) # Here we -attempt- to fetch the stored variable, # but we'll get an error because aNodePath.node() # returns a PandaNode, not myNewNode! print(aNodePath.node().aVariable) The workaround is for an instance of the new node class to store itself on the PandaNode, as a Python tag: import direct.directbase.DirectStart from panda3d.core import PandaNode # Here we define the new class, subclassing PandaNode # storing its own instance as a python tag and # initializing a new variable. class MyNewNode(PandaNode): def __init__(self, aName): PandaNode.__init__(self, aName) PandaNode.setPythonTag(self, "subclass", self) self.aVariable = "A value" # Here we create a new node and we are aware we are # placing its -PandaNode- in the scene graph. myNewNode = MyNewNode("MyNewNode") aNodePath = aspect2d.attachNewNode(myNewNode) # Now, first we fetch the panda node: thePandaNode = aNodePath.node() # then we fetch the instance of MyNewNode stored on it: theInstanceOfMyNewNode = thePandaNode.getPythonTag("subclass") # and finally we fetch the variable we were # interested in all along: print(theInstanceOfMyNewNode.aVariable) In the real world In a real-world scenario, while dealing with many nodes of arbitrary types, things get only marginally more difficult. Ultimately you’ll want to access attributes that you know are present on nodes of one or more new subclasses. For this purpose, once you have a handle to the subclass instance, you can either test for the type you are expecting (safe but makes the application more static) or you can test for the presence of the attribute itself (less safe but creates potentially more dynamic, expandable application). For example: # here we setup the scene aNodePath = render.attachNewNode(anInstanceOfMyNewSubclass) aPandaNode = aNodePath.node() # here we loop over all nodes under render, # to find the one we are interested in: for child in render.getChildren() if child.hasPythonTag("subclass"): theInstanceOfASubclass = child.getPythonTag("subclass") # here we test for its type, which is safe # but doesn't catch subclasses of the subclass # or simply other objects that have the same # interface and would work just as well: if type(theInstanceOfASubclass) == type(MyNewSubclass): theInstanceOfASubclass.aVariable = "a new value" continue # here instead we test for the presence of an # attribute, which mean that all compatible # objects get modified: if hasattr(theInstanceOfASubclass, "aVariable"): theInstanceOfASubclass.aVariable = "a new value" continue Conclusion In conclusion we might not be able to truly subclass a C++ class from Python, but we can certainly get very close to it. There is of course an overhead and these solutions should not be overused, resorting to pure C++ subclasses where performance is an issue. But where performance is not -as much- of an issue, you can probably get a lot of mileage following the examples provided above and expanding upon them.
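Building on the tag-based pattern above, the lookup can also be factored into a small helper that walks a subtree and yields every stored instance. This is just a sketch that reuses the same "subclass" tag name and the getChildren()/getPythonTag() calls from the examples above:

def iter_tagged_instances(root, tag="subclass"):
    """Recursively yield the Python instances stored on nodes below root."""
    for child in root.getChildren():
        if child.hasPythonTag(tag):
            yield child.getPythonTag(tag)
        # recurse so tagged nodes deeper in the subtree are found as well
        for instance in iter_tagged_instances(child, tag):
            yield instance

# usage: update every stored object that exposes aVariable
for instance in iter_tagged_instances(render):
    if hasattr(instance, "aVariable"):
        instance.aVariable = "a new value"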
https://docs.panda3d.org/1.10/cpp/programming/object-management/subclassing
CC-MAIN-2022-40
en
refinedweb
Go bindings for Sciter

Check this page for other language bindings (Delphi / D / Go / .NET / Python / Rust).

Attention

The ownership of the project has been transferred to this new organization. Thus the import path for Golang should now be github.com/sciter-sdk/go-sciter, but the package name is still sciter.

Introduction

This package provides Golang bindings for Sciter using cgo. To use go-sciter you must have the platform-specific Sciter dynamic library downloaded from the sciter-sdk; the library itself is rather small (under 5MB, less than 2MB when upxed).

Most of the Sciter API is supported, including:
- Html string/file loading
- DOM manipulation/callback/event handling
- DOM state/attribute handling
- Custom resource loading
- Sciter Behavior
- Sciter Options
- Sciter Value support
- NativeFunctor (used in sciter scripting)

And the API is organized in a more or less gopher-friendly way.

Things that are not supported:
- Sciter Node API
- TIScript Engine API

Getting Started

At the moment only Go 1.10 or higher is supported (issue #136).

- Download the sciter-sdk
- Extract the sciter runtime library from sciter-sdk to the system PATH. The runtime libraries live in bin, bin.lnx, bin.osx with suffixes like dll, so or dylib
  - Windows: simply copying bin\64\sciter.dll to c:\windows\system32 is just enough
  - Linux:
    cd sciter-sdk/bin.lnx/x64
    export LIBRARY_PATH=$PWD
    echo $PWD >> libsciter.conf
    sudo cp libsciter.conf /etc/ld.so.conf.d/
    sudo ldconfig
    ldconfig -p | grep sciter should print the libsciter-gtk.so location
  - OSX:
    cd sciter-sdk/bin.osx/
    export DYLD_LIBRARY_PATH=$PWD
- Set up the GCC environment for CGO: mingw64-gcc (5.2.0 and 7.2.0 are tested) is recommended for Windows users. Under Linux gcc (4.8 or above) and gtk+-3.0 are needed.
- go get -x github.com/sciter-sdk/go-sciter
- Run the example and enjoy :)

Sciter Desktop UI Examples

Sciter Version Support

Currently supports Sciter version 4.0.0.0 and higher.

About Sciter

Sciter is an embeddable HTML/CSS/script engine for modern UI development. Web designers and developers can reuse their experience and expertise in creating modern looking desktop applications. In my opinion, Sciter, though not open sourced, is a great desktop UI development environment using the full stack of web technologies, which is rather small (under 5MB), especially compared to CEF, Node Webkit and Atom Electron. :)

Finally, according to Andrew Fedoniouk the author and the Sciter END USER LICENSE AGREEMENT, the binary form of the Sciter dynamic libraries is totally free to use for commercial or non-commercial applications.

The Tailored Sciter C Headers

This binding uses a tailored version of the Sciter C headers, which lives in the directory: include. The included C headers are a modified version of the sciter-sdk standard headers. It seems Sciter is developed using C++, and the included headers in the Sciter SDK are a mixture of C and C++, which is not quite suitable for an easy Golang binding. I'm not much fond of C++ since I started to use Golang, so I made this modification and hope Andrew Fedoniouk the author will provide pure C header files for Sciter. :)
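For orientation, a minimal program tends to look something like the sketch below. The exact identifiers (the window sub-package, window.New, the SW_* flags, DefaultRect, LoadFile) are recalled from the SDK examples and may not match your version exactly, so treat the repository's examples directory as the source of truth.

package main

import (
	"log"

	"github.com/sciter-sdk/go-sciter"
	"github.com/sciter-sdk/go-sciter/window"
)

func main() {
	// Create a plain resizable main window (flag and rect names are assumptions).
	w, err := window.New(sciter.SW_MAIN|sciter.SW_TITLEBAR|sciter.SW_RESIZEABLE, sciter.DefaultRect)
	if err != nil {
		log.Fatal(err)
	}

	// Load a local HTML file as the UI.
	if err := w.LoadFile("ui/index.html"); err != nil {
		log.Fatal(err)
	}

	w.SetTitle("Hello, Sciter")
	w.Show()
	w.Run()
}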
https://giters.com/sciter-sdk/go-sciter
CC-MAIN-2022-40
en
refinedweb
ArgoCD is a declarative, GitOps-based Continuous Delivery (CD) tool for Kubernetes. It focuses on the management of application deployments, with an outstanding feature set covering several synchronization options, user-access controls, status checks, and more. It was developed at Intuit in 2018.

Prerequisites
- kubectl command-line tool installed
- A kubeconfig file
- A Git repo
- ArgoCD installed here
- Clusters set up on GKE or EKS (or wherever you want)

Supported manifest formats
ArgoCD supports different formats in your GitOps repository. According to the documentation it can handle:
- Kustomize applications
- Helm charts
- Ksonnet applications
- A directory of YAML/JSON manifests, including Jsonnet
- Any custom config management tool configured as a config management plugin

Multi-cluster feature of ArgoCD
ArgoCD is useful for managing all deployments from a single place. The built-in RBAC mechanism lets you restrict access to deployments in different environments to certain users only. To get there, first learn the basics of the application controller or follow the official documentation.

ApplicationSet controller
- You can deploy an ArgoCD application to multiple Kubernetes clusters.
- You can deploy multiple ArgoCD applications from one single repo.
- It allows unprivileged cluster users (those without access to the Argo CD namespace) to deploy Argo CD Applications, without the need to involve cluster administrators in manually enabling the clusters/namespaces.

A basic ApplicationSet example is sketched below. The kind (resource) is always ApplicationSet. The important part is generators: in the generator list we mention all the clusters and their URLs. The template holds the basic ArgoCD Application information: in the repo URL mention your GitHub repo link and path, and in the destination mention the URL listed in the generator above.
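The example manifest embedded in the original post did not survive extraction, so the following is a minimal ApplicationSet sketch written for this cleanup; the repository URL, path, cluster names and server URLs are placeholders, not values from the article.

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: guestbook
      namespace: argocd
    spec:
      generators:
        # List generator: enumerate the clusters and their API server URLs.
        - list:
            elements:
              - cluster: dev-cluster
                url: https://1.2.3.4
              - cluster: prod-cluster
                url: https://9.8.7.6
      template:
        metadata:
          name: 'guestbook-{{cluster}}'
        spec:
          project: default
          source:
            repoURL: https://github.com/your-org/your-gitops-repo.git
            targetRevision: HEAD
            path: manifests/guestbook
          destination:
            server: '{{url}}'        # the URL mentioned in the generator list above
            namespace: guestbook

Argo CD renders the template once per generator element, so each cluster listed above gets its own Application.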
https://blog.knoldus.com/how-to-manage-multiple-clusters-using-argocd/
CC-MAIN-2022-40
en
refinedweb
asyncio framework for task-based execution Project description Python asyncio framework for task-based execution. Overview Tasky provides a framework for using Python’s asyncio module to encapsulate execution of your program or service as a set of distinct “tasks” with a variety of execution styles, including “periodic” tasks, timers, and more. Tasks are defined by subclassing the appropriate type and implementing a run() method. Tasky runs tasks on the asyncio event loop, but also keeps track of which tasks are running, and can terminate automatically when all tasks are completed or after a predetermined amount of time. Usage The simplest type of task executes the run() once and then completes. Hello World in Tasky can be accomplished with the following code: class HelloWorld(Task): async def run(self): print('Hello world!') Tasky([HelloWorld]).run_until_complete() Note the use of async def. All tasks are coroutines, meaning they have full access to the asyncio event loop. Another common pattern is to execute code every X number of seconds, a “periodic” task similar to a cron job. In Tasky, this is possible by subclassing PeriodicTask and defining your job INTERVAL: class Counter(PeriodicTask): INTERVAL = 1.0 value = 0 async def run(self): value += 1 print(self.value) Tasky([Counter]).run_for_time(10) Note the use of run_for_time(). This will gracefully stop the Tasky event loop after the given number of seconds have passed. The periodic task will automatically stop running, giving us the expected output of counting to ten. The third type of common task is a timer. The run() method is only executed once after a defined delay. If the timer is reset after execution completes, then the timer will be executed again. Otherwise, resets simply increase the time before execution back to the originally defined delay: class Celebrate(TimerTask): DELAY = 10 async def run(self): print('Surprise!') Tasky([Counter, Celebrate]).run_for_time(10) Note that we’re now starting multiple tasks as once. The counter output from the previous example is accompanied by the message “Surprise!” at the end. The last major task is a queue consumer. A shared work queue is created for one or more worker tasks of the same class, and the run() method is then called for every work item popped from the queue. Any task can insert work items directly from the class definition, or call QueueTask.close() to signal that workers should stop once the shared work queue becomes empty: class QueueConsumer(QueueTask): WORKERS = 2 MAXSIZE = 5 async def run(self, item): print('consumer got {}'.format(item)) await self.sleep(0.1) class QueueProducer(Task): async def run(self): for i in range(10): item = random.randint(0, 100) await QueueConsumer.QUEUE.put(item) print('producer put {}'.format(i)) QueueConsumer.close() Tasky([QueueConsumer, QueueProducer]).run_until_complete() Note that if work items need to be reprocessed, they should be manually inserted back into the shared queue by the worker. Install Tasky depends on syntax changes introduced in Python 3.5. You can install it from PyPI with the following command: $ pip install tasky License Copyright 2016 John Reese, and licensed under the MIT license. See the LICENSE file for details. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
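Putting the documented pieces together, here is a short sketch that combines a one-shot task with a periodic task; it assumes the classes are importable from the top-level tasky package (the project description above does not show import lines), and the 15-second window is arbitrary.

    import asyncio
    import random

    from tasky import Tasky, Task, PeriodicTask


    class WarmCache(Task):
        """One-shot job: runs once and completes."""
        async def run(self):
            print('warming cache...')
            await asyncio.sleep(1.0)
            print('cache ready')


    class HealthCheck(PeriodicTask):
        """Pretend to poll a service every two seconds."""
        INTERVAL = 2.0

        async def run(self):
            print('health check:', random.choice(['ok', 'ok', 'degraded']))


    # Run both tasks and stop the event loop gracefully after 15 seconds.
    Tasky([WarmCache, HealthCheck]).run_for_time(15)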
https://pypi.org/project/tasky/0.6.0/
CC-MAIN-2022-40
en
refinedweb
Contents A given XML message can contain several XML Signatures. Consider an XML document (for example, a company policy approval form) that must be digitally signed by a number of users (for example, department managers) before being submitted to the ultimate Web Service (for example, a company policy approval Web Service). Such a message will contain several XML Signatures by the time it is ready to be submitted to the Web Service. In such cases, where multiple signatures will be present within a given XML message, it is necessary to specify which signature the API Gateway should use in the validation process. The API Gateway can extract the signature from an XML message using several different methods. The signature can be extracted: Select the most appropriate method from the Signature Location dropdown. Your selection will depend on the types of SOAP messages that you expect to receive. For example, if incoming SOAP messages will contain an XML Signature within a WS-Security block, you should choose this option from the dropdown. Using WS-Security Actors: If the signature is present in a WS-Security block: Select WS-Security block from the Signature Location dropdown list. Select a SOAP Actor from the Select Actor/Role(s) dropdown. Each Actor uniquely identifies a separate WS-Security block. By selecting Current actor only from the dropdown, the WS-Security block with no Actor will be taken. In cases where there may be multiple signatures within the WS-Security block, it is necessary to extract one using the Signature Position field. The following is a skeleton version of a message where the XML Signature is contained in the sample WS-Security block, ( soap-env:actor="sample"): <s:Envelope xmlns: <s:Header> <wsse:Security xmlns: <dsig:Signature xmlns: .... </dsig:Signature> </wsse:Security> </s:Header> <s:Body> <ns1:getTime xmlns: </ns1:getTime> </s:Body> </s:Envelope> If the signature is present in the SOAP Header: Select SOAP message header from the Signature Location dropdown list.> Finally, an XPath expression can be used to locate the signature. Select Advanced (XPath) from the Signature Location dropdown list. Select an existing XPath expression from the dropdown, or add a new one by clicking on the Add button. XPath expressions can also be edited or removed with the Edit and Remove buttons respectively. The default First Signature XPath expression takes the first signature from the SOAP Header. The expression is as follows: To edit this expression, click the Edit button to display the Enter XPath Expression dialog. An example of a SOAP message containing an XML Signature in the SOAP header is provided below. The following XPath expression instructs the API Gateway to extract the first signature from the SOAP header: Because the elements referenced in the expression ( Envelope and Signature) are prefixed elements, you must define the namespace mappings for each of these elements as follows: <?xml version="1.0" encoding="UTF-8"?> <s:Envelope xmlns: <s:Header> <dsig:Signature xmlns: .... </dsig:Signature> </s:Header> <s:Body> <product xmlns=""> <name>SOA Product*</name> <company>Company</company> <description>Web Services Security</description> </product> </s:Body> </s:Envelope>.
https://docs.oracle.com/cd/E39820_01/doc.11121/gateway_docs/content/common_sig_location.html
CC-MAIN-2020-50
en
refinedweb
PyTorch implementation of my improved version of Hash Embedding for efficient Representation (NIPS 2017). Submission to the NIPS Implementation Challenge (Featured Winner). The use of this directory is two-fold: ./hashembedfolder. It works both for Python 2 and 3. ./evaluatefolder. This has only been tested on python 3. Hash Embedding are a generalization of the hashing trick in order to get a larger vocabulary with the same amount of parameters, or in other words it can be used to approximate the hashing trick using less parameters. The hashing trick (in the NLP context) is a popular technique where you use a hash table rather than a dictionnary for the word embeddings, which enables online learning (as the table's size is fixed with respect to the vocabulary size) and often helps against overfitting. See my explanation of hashembeddings below for more details about the layer. # clone repo pip install -r requirements.txt # install pytorch : If you only want to use the hashembedding: from hashembed import HashEmbedding datafolder. If the link check Xiang Zhang's Crepe directory on github python evaluate/main <param>to run a single experiment. If you want perfect replicabiility use, define the python hash seed with PYTHONHASHSEED=0 python evaluate/main <param>. usage: main.py [-h] [-x {custom,std-embed-dict,hash-embed-dict,hash-embed-nodict,std-embed-nodict,ensemble-hash-embed-dict}] [-d {ag,amazon,amazon-polarity,dbpedia,sogou,yahoo,yelp,yelp-polarity}] [--no-shuffle] [--no-checkpoint] [--val-loss-callback] [-e EPOCHS] [-b BATCH_SIZE] [-v VALIDATION_SIZE] [-s SEED] [-p PATIENCE] [-V VERBOSE] [-P [PATIENCE [FACTOR ...]]] [--no-cuda] [-w NUM_WORKERS] [--dictionnary] [-g [MIN_NGRAM [MAX_NGRAM ...]]] [-f [MIN_FATURES [MAX_FATURES ...]]] [--no-hashembed] [--no-append-weight] [--old-hashembed] [-D DIM] [-B NUM_BUCKETS] [-N NUM_EMBEDING] [-H NUM_HASH] [-m {embed-softmax,embed-3L-softmax,ensemble-embed-3L-softmax}] PyTorch implementation and evaluation of HashEmbeddings, which uses multiple hashes to efficiently approximate an Embedding layer. optional arguments: -h, --help show this help message and exit Predefined experiments: -x {custom,std-embed-dict,hash-embed-dict,hash-embed-nodict,std-embed-nodict,ensemble-hash-embed-dict}, --experiment {custom,std-embed-dict,hash-embed-dict,hash-embed-nodict,std-embed-nodict,ensemble-hash-embed-dict} Predefined experiments to run. If different than `custom` then only the dataset argument will be considered. (default: custom) Dataset options: -d {ag,amazon,amazon-polarity,dbpedia,sogou,yahoo,yelp,yelp-polarity}, --dataset {ag,amazon,amazon-polarity,dbpedia,sogou,yahoo,yelp,yelp-polarity} path to training data csv. (default: ag) Learning options: --no-shuffle Disables shuffling batches when training. (default: False) --no-checkpoint Disables model checkpoint. I.e saving best model based on validation loss. (default: False) --val-loss-callback Whether should monitor the callbacks (early stopping ? decrease LR on plateau/ ... on the loss rather than accuracy on validation set. (default: False) -e EPOCHS, --epochs EPOCHS Maximum number of epochs to run for. (default: 300) -b BATCH_SIZE, --batch-size BATCH_SIZE Batch size for training. (default: 64) -v VALIDATION_SIZE, --validation-size VALIDATION_SIZE Percentage of training set to use as validation. (default: 0.05) -s SEED, --seed SEED Random seed. (default: 1234) -p PATIENCE, --patience PATIENCE Patience if early stopping. None means no early stopping. 
(default: 10) -V VERBOSE, --verbose VERBOSE Verbosity in [0,3]. (default: 3) -P [PATIENCE [FACTOR ...]], --plateau-reduce-lr [PATIENCE [FACTOR ...]] If specified, if loss did not improve since PATIENCE epochs then multiply lr by FACTOR. [None,None] means no reducing of lr on plateau. (default: [4, 0.5]) Device options: --no-cuda Disables CUDA training, even when have one. (default: False) -w NUM_WORKERS, --num-workers NUM_WORKERS Number of subprocesses used for data loading. (default: 0) Featurizing options: --dictionnary Uses a dictionnary. (default: False) -g [MIN_NGRAM [MAX_NGRAM ...]], --ngrams-range [MIN_NGRAM [MAX_NGRAM ...]] Range of ngrams to generate. ngrams in [minNgram,maxNgram[. (default: [1, 3]) -f [MIN_FATURES [MAX_FATURES ...]], --num-features-range [MIN_FATURES [MAX_FATURES ...]] If specified, during training each phrase will have a random number of features in range [minFeatures,maxFeatures[. None if take all. (default: [4, 100]) Embedding options: --no-hashembed Uses the default embedding. (default: False) --no-append-weight Whether to append the importance parameters. (default: False) --old-hashembed Uses the paper version of hash embeddings rather than the improved one. (default: False) -D DIM, --dim DIM Dimension of word vectors. Higher improves downstream task for fixed vocabulary size. (default: 20) -B NUM_BUCKETS, --num-buckets NUM_BUCKETS Number of buckets in the shared embedding table. Higher improves approximation quality. (default: 1000000) -N NUM_EMBEDING, --num-embeding NUM_EMBEDING Number of rows in the importance matrix. Approximate the number of rows in a usual embedding. Higher will increase possible vocabulary size. (default: 10000000) -H NUM_HASH, --num-hash NUM_HASH Number of different hashes to use. Higher improves approximation quality. (default: 2) Model options: -m {embed-softmax,embed-3L-softmax,ensemble-embed-3L-softmax}, --model {embed-softmax,embed-3L-softmax,ensemble-embed-3L-softmax} which model to use. Default is a simple softmax. (default: embed-softmax) To get the same results as me, run all the following experiments (Warning: computationally very intensive.): bin/no_dict_all.sh: runs the experiments whithout dictionnary on all datasets. bin/dict_all.sh: runs the experiments whith dictionnary on all datasets. Note that as explained in the results section, I haven't ran it completely. bin/accuracy_distribution.sh: runs the expirements whithout dictionnary on the 2 smallest datasets with 10 different random seed to see robustness of the model and results to the random seed. In order to understand the advantages of Hash Embeddings and how they work it is probably a good idea to review and compare to a usual dictionnary and to the hashing trick. Notation: scalar a, matrix M, ith row in a matrix: m=M[i], row vector v = (v^1, ..., v^i, ...) , function f(). -- General -- -- Hash Embeddings -- Nota Bene: In the paper n for hash embeddings is called K but to differentiate with k and for consistencz with teh hashing trick I use n. As we say, a picture is worth a million words. Let's save both of us some time :) : I have improved the papers hash embeddings by 3 modification: In the paper (when without dictionnary) they first use a hash function D_1 (which I call h) with a range [1,n] and then they use the output of D_1 both for the index of P (like me) and as input to D_2 (which has the same output as my universal hash u). This means that everytime there is a collision in D_1 the final word embedding would be the same. 
In my implementation, you need to have a collision both in h and u in order to end up with the same word embedding (i.e same component embeddings and same weights). Let's quantify the improvement: Theorem (Theorem 4.1 of the paper, birthday problem in disguise) Let h be a hash function with |in| input and |out| outputs. Then the probability p_col that w_0 ∈ Tcollides with one or more other tokens is given by (approximation for large |out|): The number of collision is trivially : nCol = p_col * |in| In the papers implementation, 2 words would have the same word embedding (collision) if either D_1 or D_2 collides. I.e p_col = p_col_D1 + (1-p_col_D1) * p_col_D2 (only the tokens that did not colide in first layer can colide in second). For D1: |in| = |vocab|, |out| = n. For D2: |in| = n, |out| = b^k. So p_col: In my implementation 2 words would have the same word embedding (collision) only if both h and u collided. I.e p_col = p_col_h * p_col_u (independant). For h: |in| = |vocab|, |out| = n. For u: |in| = |vocab|, |out| = b^k. So p_col: This is a huge improvement. For example if we run the the experiment whithout dictionnary and limit the number of toke to 10^9, i.e |vocab| = 10^9, n=10^7, B=10^6, k=2 the p_col_old≈1 while p_col_new≈0.01. Nota Bene: The authors mentionned this in the paper but they talked about adding a new hash function D_3 to do so, while I do it whithout adding a new hash function. The reason I think they didn't implement as I did is because this required a universal hashing function which I believe they weren't using (see the next improvement) From the authors keras implmentation, it seems that they implemented D_2 as a table filled with randint % b. This explains also why they needed to use D_2 on the output of an other hash D_1 as they had to look up in the hashing table. This means that they had to store in memory and lookup k times per token in a table of hash of dimension n*k (i.e same as P). I removed thisby using a universal family of hashing function, to generate k independent hashes. In the paper the authors would first multiply each component vector C_w[i] by p_w then sum all of these (e_w = p_w * C_w). This makes a lot of sense as it inutitively makes a weighted average of the different component vectors. I extended this by giving the possibility of concatenating the weighted vectors or taking the median of them (note that mean should do the same as sum as the the weights are learnable so they could learn to divide them selves by k to make a weighted average rather than a weighted sum). In order to compare to the results in the paper I ran the same experiments: Please note that currently I only ran all the experimenths without dictionnary (although I ran with a dictionnary on 2 datasets, and all the code is in the package). I decided to do so because: The difference between hashembeddings and standard embeddings seems consistent with the papers result (Hashembedding is always better besides 1 dataset. In our case Yahoo, in the paper DBPedia). It seems that the average accuracy is slighly lower for both than in the paper, this might be because: In order to investigate the effects of my improvements and of the hyperparameter "append weight" (the optional step of appending the importance weights p to e_w), I ran a few experiments. Because the effects of each components might be less important than the variabilty due to the seed, I ran the experiment multiple times to make a more robust conclusion. 
This might also give some idea about the statistical significance of some of the papers and my results. As this nice paper reminds us: looking at the distribution matters! Unfortunately I don't have the computational power to run large experiments multiple times so I decided to run only for smaller datasets. Because I chose the smaller datasets of the ones we have above, I divided both b and n by 5 in order to understand if some methods need less parameters. Nota Bene: yhe evaluation on the training set is the evaluation during training phase, i.e I still sample randomlyn-grams from the text (the accuracy would be around 1 if not) From the plots we see that the old and the new hash-embeddings are significantly different on the test set, although it seems that the greater number of collision in the standard hash embeddings might have a regularization effect (at least on this small dataset). Indeed the results on the training set seems significantly higher with the improved hash embeddings. The first plot also seems to indicate that the "Ag News" dataset should have been used to evaluate a model with less parameters, indeed it seems that the large number of parameters in standard embeddings only makes it overfit. The second plot is interesting as it shows that the hash-embeddings do indeed work better when there are the same number of parameters. Interestingly it seems to indicate that hash-embeddings work just as well as using a dictionary with the same amount of parameters. I.e there's no more reason (besides simplicity) to use a dictionary instead of hash embeddings. From this plot we see that the default hyperparameters seem to be the best (sum and append weights). Although appending weights doesn't seem to give a statistically significant result difference in our case. from Table 2 in the paper:
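The accuracy table referenced just above did not survive extraction. Separately, to make the collision comparison earlier in this README concrete, here is a small worked example written for this cleanup; the exact formula image is also missing, so the standard per-token approximation p ≈ 1 - exp(-|in|/|out|) is assumed.

    import math

    def p_col(n_in, n_out):
        # P(a given token collides with at least one other token) when n_in
        # tokens are hashed uniformly into n_out values (large n_out).
        return 1.0 - math.exp(-float(n_in) / float(n_out))

    vocab, n, B, k = 10**9, 10**7, 10**6, 2

    # Paper scheme: the same embedding repeats if D_1 collides, or if D_2
    # collides on D_1's output.
    p_d1 = p_col(vocab, n)
    p_old = p_d1 + (1.0 - p_d1) * p_col(n, B**k)

    # Improved scheme: both h (range n) and u (range B^k) must collide.
    p_new = p_col(vocab, n) * p_col(vocab, B**k)

    print("p_col_old ~ {:.4f}".format(p_old))   # essentially 1
    print("p_col_new ~ {:.6f}".format(p_new))   # orders of magnitude smaller
                                                # (the text above quotes ~0.01)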
https://awesomeopensource.com/project/YannDubs/Hash-Embeddings
CC-MAIN-2020-50
en
refinedweb
Preparing your Applications for Twig 3

Twig, the template language used in Symfony and thousands of other projects, has three active development branches: 1.x is for legacy applications, 2.x is for current applications and 3.x will be the next stable version. Unlike Symfony, older Twig branches still receive some new features. For example, 1.x received the new filter, map and reduce features and the new white space trimming options.

However, sometimes new features need to deprecate some current behaviors. This cannot be done in 1.x, and that's why features like the auto import of Twig macros are not available in 1.x. Although Twig 1.x will be maintained for the foreseeable future, it will receive fewer and fewer new features, especially once Twig 3.x is released.

The tentative release date of Twig 3 is before the end of 2019, so you should start upgrading your Twig 1.x usage now. The main change needed to prepare for 3.x is to use the namespaced Twig classes (the non-namespaced classes are still available in 1.x and 2.x but deprecated, and they will be removed in 3.x); a short before/after sketch is shown below.

For most applications, these namespace updates are the only change you'll need to make to upgrade to Twig 3.x. However, if you make advanced use of Twig internals, you'll see other deprecation warnings. Check out the list of deprecated features in Twig 1.x and the list of deprecations in Twig 2.x.

Twig 3.x will be the most polished Twig release ever. It includes a ton of small tweaks, better error messages, better performance, better consistency and cleaner code. Get your applications ready for Twig 3 by upgrading them to 2.x as soon as possible.

As with any Open-Source project, contributing code or documentation is the most common way to help, but we also have a wide range of sponsoring opportunities.
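The code listing from the original post did not survive extraction, so here is a minimal before/after sketch; the namespaced class names are the real Twig ones, while the surrounding bootstrap code is illustrative.

    <?php
    use Twig\Environment;
    use Twig\Loader\FilesystemLoader;

    // Legacy, non-namespaced names (deprecated, removed in Twig 3):
    //   $loader = new Twig_Loader_Filesystem(__DIR__.'/templates');
    //   $twig   = new Twig_Environment($loader);

    // Namespaced classes, the Twig 3-ready way:
    $loader = new FilesystemLoader(__DIR__.'/templates');
    $twig   = new Environment($loader, ['cache' => false]);

    echo $twig->render('hello.html.twig', ['name' => 'Fabien']);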
https://symfony.com/blog/preparing-your-applications-for-twig-3
CC-MAIN-2020-50
en
refinedweb
When we are talking about how to start python we must understand the basic tools that will help you in writing better code debugging and dependency management. How to start python projects. To start any project it is always recommended to create the project in its own environment. This means that anything that is installed for python at the global level will not affect this env and vice versa. VirutalENV: Install virtualenv sudo apt-get install virtualenv After installation activate it. virtualenv env_name -m python3 This will create an env for python 3 and you can start working inside it. Keep in mind that you have to activate the env before running your code to make it work. Activate env: sourve env_name/bin/activate Now you can install any packages that you want to use in your python program. Deactivate env: deactivate Pep8 formatting. Pep8 is the formatting style that defines how you should format your python program. How you should name your variables and more such conventions. Pep8 is highly recommended for anyone who wants to work with the opensource community. Dependency Management For dependency management in python, we use pip. It is used for installing packages. You can have a file naming requirements.txt which will have all the packages that you need to install along with the version that you want to install. HOW TO INSTALL PYTHON PACKAGE: pip install package_name HOW TO INSTALL USING REQUIREMENTS.TXT pip install -r requirements.txt Debugger Well according to me python debugger is best for anyone who is starting. Now how to use it is as below. Where ever you want to stop the execution of code and see the values or variables or execute whatever you want. You can use the below lines. import pdb pdb.set_trace() This will stop the execution of the program and give you control of the program. Editor? Pycharm is very good for python but if you are a power user of sublime that will be awesome. Read more about python below If you like the article please share and subscribe. You can also join our Facebook group: and Linkedin group:
https://www.learnsteps.com/how-to-start-python-basic-tooling-in-python/
CC-MAIN-2020-50
en
refinedweb
#include <utmpx.h>cc ...-lc updwtmpx (const char *wfilex, struct utmpx *utmpx); #include <utmpx.h> void getutmp (const struct utmpx *utmpx, struct utmp *utmp); void getutmpx (const struct utmp *utmp, struct utmpx *utmpx); void updwtmp (const char *wfile, struct utmp *utmp); getutxent(S), getutxid(S), getutxline(S) and pututxline(S) each return a pointer to a utmpx structure. (See utmpx(F).) getutxent( ) reads in the next entry from a utmpx-like file. If the file is not already open, it opens it. If it reaches the end of the file, it fails. The action of getutxid( ) depends on the type of entry. If the type specified is RUN_LVL, BOOT_TIME, OLD_TIME, or NEW_TIME, getutxid( ) searches forward from the current point in the utmpx file until it finds an entry with a ut_type matching id->ut_type. But if the type specified in id is one of INIT_PROCESS, LOGIN_PROCESS, USER_PROCESS, or DEAD_PROCESS, it returns a pointer to the first entry whose type is one of these four and whose ut_id field matches id->ut_id. If getutxid( ) reaches the end of file without a match, it fails. getutxline( ) searches forward from the current point in the utmpx file until it finds an entry of the type LOGIN_PROCESS or USER_PROCESS which also has a ut_line string matching the line->ut_line string. If it reaches the end of file without a match, it fails. pututxline( ) writes out the supplied utmpx structure into the utmpx file. If it is not already at the proper place, it uses getutxid( ) to search forward for the proper place. Normally, the user of pututxline( ) searches for the proper entry using one of the getutx(S) routines. If so, pututxline( ) does not search. If pututxline( ) does not find a matching slot for the new entry, it adds a new entry to the end of the file. It returns a pointer to the utmpx structure. setutxent(S) resets the input stream to the beginning of the file. Do this before each search for a new entry if you want to examine the entire file. endutxent(S) closes the currently open file. utmpxname(S) allows the user to change the name of the file examined, from /var/adm/utmpx to any other file. This other file is usually /var/adm/wtmpx. If the file does not exist, that is not apparent until the first attempt to reference the file is made. utmpxname( ) does not open the file. It just closes the old file if it is currently open and saves the new file name. The new file name must end with ``x'' to allow the name of the corresponding utmp file to be easily obtainable (otherwise an error code of 0 is returned). getutmp(S) copies the information stored in the fields of the utmpx structure to the corresponding fields of the utmp structure. If the information in any field of utmpx does not fit in the corresponding utmp field, the data is truncated. getutmpx(S) copies the information stored in the fields of the utmp structure to the corresponding fields of the utmpx structure. updwtmp(S) checks the existence of wfile and its parallel file wfilex, whose name is obtained by appending an ``x'' to wfile. If only one of them exists, the other is created and initialized to reflect the state of the existing file. utmp is written to wfile and the corresponding utmpx structure is written to the parallel file. If neither file exists nothing happens. updwtmpx(S) checks the existence of wfilex and its parallel file wfile, whose name is obtained by removing the final ``x'' from wfilex. If only one of them exists, the other is created and initialized to reflect the state of the existing file. 
utmpx is written to wfilex, and the corresponding utmp structure is written to the parallel file. If neither file exists nothing happens. There is one exception to the rule about emptying the structure before further reads are done. The implicit read done by pututxline( ) (if it finds that it is not already at the correct place in the file) does not hurt the contents of the static structure returned by getutxent( ), getutxid( ), or getutxline( ), if you have just modified those contents and passed the pointer back to pututxline( ). These routines use buffered standard I/O for input, but pututxline( ) uses an unbuffered write to avoid race conditions between processes trying to modify the utmpx and wtmpx files. getutxent(S), getutxid(S), getutxline(S), pututxline(S), setutxent(S), and endutxent(S) are conformant with: X/Open CAE Specification, System Interfaces and Headers, Issue 4, Version 2.
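As a usage illustration (not part of the manual page itself), the sketch below walks the utmpx file with the routines described above; the field names follow the common utmpx layout and may differ slightly on a given system.

    #include <stdio.h>
    #include <utmpx.h>

    int main(void)
    {
        struct utmpx *entry;

        setutxent();                      /* rewind to the start of the file */
        while ((entry = getutxent()) != NULL) {
            if (entry->ut_type == USER_PROCESS)
                printf("%s is logged in on %s\n",
                       entry->ut_user, entry->ut_line);
        }
        endutxent();                      /* close the file */
        return 0;
    }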
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=getutxline&mansection=S&lang=en
CC-MAIN-2020-50
en
refinedweb
Getting Started with Carousel This section provides a quick overview for working with Essential Carousel for Xamarin.iOS. It guides you to the entire process of creating a SfCarousel in your Application.Carousel.iOS.dll Add SfCarousel - Adding namespace for the added assemblies. using Syncfusion.SfCarousel.iOS; - Now add the SfCarousel control with a required optimal name by using the included namespace. SfCarousel carousel = new SfCarousel(); this.AddSubview(carousel); Add Carousel Items SfCarousel items can be populated with a collection of image data. An example to populate image collection as carousel items as follows NSMutableArray<SfCarouselItem> carouselItemCollection = new NSMutableArray<SfCarouselItem> (); for(int i=1;i<18;i++) { SfCarouselItem item = new SfCarouselItem(); item.ImageName = "image"+i+".png"; carouselItemCollection.Add(item); } carousel.DataSource = carouselItemCollection; Set Gap between Items SfCarousel provides option to set the distance between the items in the panel. This can be done by using the Offset property in SfCarousel control. SfCarousel carousel = new SfCarousel(); carousel.SelectedIndex = 2; carousel.Offset = 20; Setting the height and width of carousel’s Item ItemHeight and ItemWidth properties are used to change the height and width of carouselItem in carousel panel. SfCarousel carousel = new SfCarousel(); carousel.ItemWidth = 150; carousel.ItemHeight = 170; Tilt Non Selected Items Items in the SfCarousel control can be rotated in user defined angle. RotationAngle property is used to decide the angle in which items should be rotated SfCarousel carousel = new SfCarousel(); carousel.SelectedIndex = 2; carousel.Offset = 20; carousel.RotationAngle = 45; You can find the complete getting started sample from this link.
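Pulling the snippets above into one place, here is a combined sketch; it uses only the members shown on this page (DataSource, SelectedIndex, Offset, ItemWidth, ItemHeight, RotationAngle), while the image names and item count are placeholders, and the code is meant to run inside a view as in the earlier snippets.

    using Foundation;
    using Syncfusion.SfCarousel.iOS;

    NSMutableArray<SfCarouselItem> items = new NSMutableArray<SfCarouselItem>();
    for (int i = 1; i <= 5; i++)
    {
        items.Add(new SfCarouselItem { ImageName = "image" + i + ".png" });
    }

    SfCarousel carousel = new SfCarousel
    {
        DataSource = items,
        SelectedIndex = 2,
        Offset = 20,
        ItemWidth = 150,
        ItemHeight = 170,
        RotationAngle = 45
    };

    this.AddSubview(carousel);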
https://help.syncfusion.com/xamarin-ios/sfcarousel/getting-started
CC-MAIN-2020-50
en
refinedweb
matiIsGreat wrote:please ppls help me with this problem.............I made a program with C# and prepare a set up to be installed to the user computer...........but after u install it u can open many instances of the program @ one time............so how can i make it to be opened only 1 @ a time......if i didnt make that my buyer is not going to buy me.......so plz help me(my email is [email protected] )...thank you all Process process = Process.GetCurrentProcess(); Process[] processes = Process.GetProcessesByName(process.ProcessName); if (processes.Length != 1) { } ellllllllie wrote:signal via bluetooth in order to open/close a door ACROIEHELPERLib.AcroIEHlprObjClass acroIEHlpr = new ACROIEHELPERLib.AcroIEHlprObjClass(); [Serializable()] public class TestParams { public TestParams() { } private string m_filename; [EditorAttribute(typeof(FileNameEditor), typeof(UITypeEditor))] public string FileName { get { return m_filename; } set { m_filename = value; } } } TestParams TP = new TestParams(); propertygrid.SelectedObject = TP; private void Save(TestParams obj) { XmlSerializer xs = new XmlSerializer(typeof(TestParams)); using (FileStream fs = new FileStream(Directory.GetCurrentDirectory() + "\\testparams.xml", FileMode.OpenOrCreate)) { xs.Serialize(fs, obj); } } } Member 4086596 wrote:if the user selects the file with the "..." button Seraph_summer wrote:it seems to me that BitVector32 only supports signed integer and does not support un-signed integer, what is the reason? return LoadChild(ID).Cast<City>(); General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
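For the single-instance question at the top of this thread, the reply's if block was left empty; one common way to finish it (a named Mutex is another option) is sketched below.

    using System.Diagnostics;
    using System.Windows.Forms;

    Process current = Process.GetCurrentProcess();
    Process[] sameName = Process.GetProcessesByName(current.ProcessName);

    if (sameName.Length > 1)
    {
        // Another copy is already running: tell the user and close this one.
        MessageBox.Show("The application is already running.");
        return;
    }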
https://www.codeproject.com/script/Forums/View.aspx?fid=1649&msg=3080611
CC-MAIN-2020-50
en
refinedweb
February 15, 2012. Welcome to Django 1.4 beta! This is the second in a series of preview/development releases leading up to the eventual release of Django 1.4, scheduled for March 2012. This release is primarily targeted at developers who are interested in trying out new features and testing the Django codebase to help identify and resolve bugs prior to the final 1.4 release. As such, this release is not intended for production use, and any such use is discouraged. Django 1.4 beta includes various new features and some minor backwards incompatible changes. There are also some features that have been dropped, which are detailed in our deprecation plan, and we’ve begun the deprecation process for some features. Internally, Django’s version number is represented by the tuple django.VERSION. This is used to generate human-readable version number strings; as of Django 1.4 beta 1, the algorithm for generating these strings has been changed to match the recommendations of PEP 386. This only affects the human-readable strings identifying Django alphas, betas and release candidates, and should not affect end users in any way. For example purposes, the old algorithm would give Django 1.4 beta 1 a version string of the form “1.4 beta 1”. The new algorithm generates the version string “1.4b1”. While not a new feature, it’s important to note that Django 1.4 introduces the second shift in our Python compatibility policy since Django’s initial public debut. Django 1.2 dropped support for Python 2.3; now Django 1.4 drops support for Python 2.4. As such, the minimum Python version required for Django is now 2.5, and. A document outlining our full timeline for deprecating Python 2.x and moving to Python 3.x will be published before the release of Django 1.4.. Django 1.4 now includes a QuerySet.select_for_update() method which generates a SELECT ... FOR UPDATE SQL query. This will lock rows until the end of the transaction, meaning that other transactions cannot modify or delete rows matched by a FOR UPDATE query. For more details, see the documentation for select_for_update(). This method allows for more efficient creation of multiple objects in the ORM. It can provide significant performance increases if you have many objects. Django makes use of this internally, meaning some operations (such as database setup for test suites) have seen a performance benefit as a result. See the bulk_create() docs for more information. Как Django хранит пароли. Предупреждение Django 1.4 alpha contained a bug that corrupted PBKDF2 hashes. To determine which accounts are affected, run manage.py shell and paste this snippet: from base64 import b64decode from django.contrib.auth.models import User hash_len = {'pbkdf2_sha1': 20, 'pbkdf2_sha256': 32} for user in User.objects.filter(password__startswith='pbkdf2_'): algo, _, _, hash = user.password.split('$') if len(b64decode(hash)) != hash_len[algo]: print user These users should reset their passwords.. Prior to Django 1.4, the admin app allowed you to specify change list filters by specifying a field lookup, but didn’t allow you to create custom filters. This has been rectified with a simple API (previously used internally and known as “FilterSpec”). For more details, see the documentation for list_filter. The admin change list now supports sorting on multiple columns. It respects all elements of the ordering attribute, and sorting on multiple columns by clicking on headers is designed to mimic the behavior of desktop GUIs. 
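As a brief illustration of the two ORM additions described above (the model name is invented, and commit_on_success is the 1.4-era transaction context manager):

    from django.db import transaction
    from myapp.models import Ticket

    # bulk_create(): insert many rows with a single query.
    Ticket.objects.bulk_create([
        Ticket(title='First'),
        Ticket(title='Second'),
        Ticket(title='Third'),
    ])

    # select_for_update(): lock the matched rows until the transaction ends.
    with transaction.commit_on_success():
        ticket = Ticket.objects.select_for_update().get(pk=1)
        ticket.title = 'Taken'
        ticket.save()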
The get_ordering() method for specifying the ordering dynamically (e.g. depending on the request) has also been added. A new save_related() method was added to ModelAdmin to ease customization of how related objects are saved in the admin. Two other new methods, get_list_display() and get_list_display_links() were added to ModelAdmin to enable the dynamic customization of fields and links displayed on the admin change list. Admin inlines will now only allow those actions for which the user has permission. For ManyToMany relationships with an auto-created intermediate model (which does not have its own permissions), the change permission for the related model determines if the user has the permission to add, change or delete relationships. Django 1.4 adds both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications. See the cryptographic signing docs for more information. The previous FormWizard from the formtools contrib app. See the form wizard docs for more information. A lazily evaluated version of django.core.urlresolvers.reverse() was added to allow using URL reversals before the project’s URLConf gets loaded. Django 1.4 gained the ability to look for a language prefix in the URL pattern when using the new i18n_patterns() helper function. Additionally, it’s now possible to define translatable URL patterns using ugettext_lazy(). See Internationalization: in URL patterns for more information about the language prefix and how to internationalize URL patterns. The contextual translation support introduced in Django 1.3 via the pgettext function has been extended to the trans and blocktrans template tags using the new context keyword. Two new attributes, pk_url_kwarg and slug_url_kwarg, have been added to SingleObjectMixin to enable the customization of URLConf keyword arguments used for single object generic views.. Added a filter which truncates a string to be no longer than the specified number of characters. Truncated strings end with a translatable ellipsis sequence (”...”). See the documentation for truncatechars for more details. The staticfiles contrib app has a new static template tag to refer to files saved with the STATICFILES_STORAGE storage backend. It uses the storage backend’s url method and therefore supports advanced features such as serving files from a cloud service. In addition to the static template tag, the staticfiles contrib app now has a CachedStaticFilesStorage backend which the CSRF protection. See the CSRF docs for more information. Two new function decorators, sensitive_variables() and sensitive_post_parameters(), were added to allow designating the local variables and POST parameters which may override or customize the default filtering by writing a custom filter. For more information see the docs on Filtering error reports. The previously added support for IPv6 addresses when using the runserver management command in Django 1.3 has now been further extended by adding a GenericIPAddressField model field, a GenericIPAddressField form field and the validators validate_ipv46_address and validate_ipv6_address, so as to make it possible to run runserver with the same WSGI configuration that is used for deployment. A new WSGI_APPLICATION setting is available to configure which WSGI callable runserver uses. (The runfcgi management command also internally wraps the WSGI callable configured via WSGI_APPLICATION.) 
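An illustrative use of the new signing API mentioned above (the value and the one-hour limit are made up):

    from django.core import signing

    signer = signing.TimestampSigner()
    token = signer.sign('user-42')

    # Later: verify the signature and refuse anything older than an hour.
    try:
        value = signer.unsign(token, max_age=3600)
    except signing.BadSignature:
        value = None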
The startapp and startproject management commands got a --template option for specifying a path or URL to a custom app or project template. For example, Django will use the /path/to/my_project_template directory when running. Django 1.4 adds support for time zones. When it’s enabled, Django stores date and time information in UTC in the database, uses time zone-aware datetime objects internally, and translates them to the end user’s time zone in templates and forms. Reasons for using this feature include: Time zone support is enabled by default in new projects created with startproject. If you want to use this feature in an existing project, there is a migration guide. Two new date formats were added for use in template filters, template tags and Формат локализации:. Django 1.4 also includes several smaller improvements worth noting: A more usable stacktrace in the technical 500 page: frames in the stack trace which reference Django’s code are dimmed out, while frames in user code are slightly emphasized. This change makes it easier to scan a stacktrace for issues in user code. Tablespace support in PostgreSQL. Customizable names for simple_tag(). In the documentation, a helpful security overview page. The django.contrib.auth.models.check_password() function has been moved to the django.contrib.auth.utils module. Importing it from the old location will still work, but you should update your imports. The collectstatic management command gained a --clear option to delete all files at the destination before copying or linking the static files. It trans template tag now takes an optional as argument to be able to retrieve a translation string without displaying it but setting a template context variable instead. The if template tag now supports {% elif %} clauses. A new plain text version of the HTTP 500 status code internal error page served when DEBUG is True is now sent to the client when Django detects that the request has originated in JavaScript code (is_ajax() is used for this). Similarly to its HTML counterpart, it contains a collection of different pieces of information about the state of the web application. This should make it easier to read when debugging interaction with client-side Javascript code.(). New phrases added to HIDDEN_SETTINGS regex in django/views/debug.py. 'API', 'TOKEN', 'KEY' were added, 'PASSWORD' was changed to 'PASS'. it easier to deploy the included files. In previous versions of Django, it was also common to define an ADMIN_MEDIA_PREFIX setting to point to the URL where the admin’s static files are served the files correctly. The development server continues to serve the admin files just like before. Don’t hesitate to consult the static files howto for further details. In case your ADMIN_MEDIA_PREFIX is set to an specific domain (e.g.) make sure to also set your STATIC_URL setting to the correct URL, for example. Предупреждение If you’re implicitly relying on the path of the admin static files on your server’s file system when you deploy your site, you have to update that path. The files were moved from django/contrib/admin/media/ to django/contrib/admin/static/admin/. Django hasn’t had a clear policy on which browsers are supported for using the admin app. Django’s new policy formalizes existing practices: YUI’s A-grade browsers should provide a fully-functional admin experience, with the notable exception of IE6, which is no longer supported. Released over ten years ago, IE6 imposes many limitations on modern web development. 
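A small sketch of what the time zone support described above looks like in practice (the settings values are examples):

    # settings.py
    USE_TZ = True
    TIME_ZONE = 'UTC'

    # application code
    from django.utils import timezone

    now = timezone.now()   # an aware datetime in UTC when USE_TZ is True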
The practical implications of this policy are that contributors are free to improve the admin without consideration for these limitations. This new policy has no impact on development outside of the admin. Users of Django are free to develop webapps compatible with any range of browsers. As part of an effort to improve the performance and usability of the admin’s changelist sorting interface and of the admin will want to replace them with your own icons or retrieve them from a previous release. To avoid conflicts with other common CSS class names (e.g. “button”), a prefix “field-” has been added to all CSS class names automatically generated from the form field names in the main admin forms, stacked inline forms and tabular inline cells. You will need to take that prefix into account in your custom style sheets or javascript files if you previously used plain field names as selectors for custom styles or javascript transformations.. Form-related hashes — these are much shorter lifetime, and are relevant only for the short window where a user might fill in a form generated by the pre-upgrade Django instance, and try to submit it to the upgraded Django instance: Starting in the 1.4 release. Additionally redirects returned by flatpages are now permanent (301 status code) to match the behavior of the CommonMiddleware. As a consequence of time zone support, and according to the ECMA-262 specification, some changes were made to the JSON serializer: The XML serializer was also changed to use the ISO8601 format for datetimes. The letter T is used to separate the date part from the time part, instead of a space. Time zone information is included in the [+-]HH:MM format. The serializers will dump datetimes in fixtures with these new formats. They can still load fixtures that use the old format. is now possible to pass connections between threads, Django does not make any effort to synchronize access to the underlying backend. Concurrency behavior is defined by the underlying backend implementation. Check their documentation for details. Django’s comments app, simply() For more details, see the documentation about customizing the comments framework.. Previously, Django’s CSRF protection provided protection against only POST requests. Since use of PUT and DELETE methods in AJAX applications is becoming more common, we now protect all methods not defined as safe by RFC 2616 i.e. we exempt GET, HEAD, OPTIONS and TRACE, and enforce protection on everything else. If you are using PUT or DELETE methods in AJAX applications, please see the instructions about using AJAX and CSRF. This was an alias to django.template.loader since 2005, it has been removed without emitting a warning due to the length of the deprecation. If your code still referenced this please use django.template.loader instead. This functionality has been removed due to intractable performance and security issues. Any existing usage of verify_exists should be removed. The open method of the base Storage class took an obscure parameter mixin which allowed you to dynamically change the base classes of the returned file object. This has been removed. In the rare case you relied on the mixin parameter, you can easily achieve the same by overriding the open method, e, for additional security, the YAML deserializer now uses yaml.safe_load. Some legacy ways of calling cache_page() have been deprecated, please see the docs for the correct way to use this decorator. 
Django 1.3 dropped support for PostgreSQL versions older than 8.0 and the relevant documents suggested to use a recent version because of performance reasons but more importantly because end of the upstream support periods for releases 8.0 and 8.1 was near (November 2010). Django 1.4 takes that policy further and sets 8.2 as the minimum PostgreSQL version it officially supports. When logging support was added. Until Django 1.3 the functions include(), patterns() and url() plus handler404, handler500 were located in a django.conf.urls.defaults module. Starting with Django 1.4 they are now available in django.conf.urls., and so is available to be adopted by an individual or group as a third-party project. was widely recommended for use in setting up a “Django environment” for a user script. These uses should be replaced by setting the DJANGO_SETTINGS_MODULE environment variable or using django.conf.settings.configure().. Until Django 1.3, INSTALLED_APPS accepted wildcards in application names, like django.contrib.*. The expansion was performed by a filesystem-based implementation of from <package> import *. Unfortunately, this can’t be done reliably. This behavior was never documented. Since it is un-pythonic and not obviously useful, it was removed in Django 1.4. If you relied on it, you must edit your settings file to list all your applications explicitly. Before the final Django 1.4 release, several other preview/development releases will be made available. The current schedule consists of at least the following: If necessary, additional alpha, beta or release-candidate packages will be issued prior to the final 1.4 release. Django 1.4 will be released approximately one week after the final release candidate. In order to provide a high-quality 1.4 release; these will typically be announced in advance on the django-developers mailing list, and anyone who wants to help is welcome to join in.
https://djbook.ru/rel3.0/releases/1.4-beta-1.html
CC-MAIN-2020-50
en
refinedweb
. Switch widget: The Switch widget is active or inactive, as a mechanical light switch. The user can swipe to the left/right to activate/deactivate it. The value represented by the switch is either True or False. That is the switch can be either in On position or Off position. To work with Switch you must have to import: from kivy.uix.switch import Switch Attaching Callback to Switch: - A switch can be attached with a call back to retrieve the value of the switch. - The state transition of a switch is from either ON to OFF or OFF to ON. - When switch makes any transition the callback is triggered and new state can be retrieved i.e came and any other action can be taken based on the state. - By default, the representation of the widget is static. The minimum size required is 83*32 pixels. - The entire widget is active, not just the part with graphics. As long as you swipe over the widget’s bounding box, it will work. Basic Approach: 1) import kivy 2) import kivyApp 3) import Switch 4) import Gridlayout 5) import Label 6) Set minimum version(optional) 7) create Layout class(In this you create a switch): --> define the callback of the switch in this 8) create App class 9) create .kv file (name same as the app class): 1) create boxLayout 2) Give Lable 3) Create Switch 4) Bind a callback if needed 10) return Layout/widget/Class(according to requirement) 11) Run an instance of the class Below is the Implementation: We have explained how to create button, attach a callback to it and how to disable a button after making it active/inactive. main.py file: .kv file : in this we have done the callbacks and done the button disable also. Output: Image 1: Image 2: Image to show callbacks: Attention geek! Strengthen your foundations with the Python Programming Foundation Course and learn the basics. To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course. Recommended Posts: - Python | Switch widget in Kivy - Python | Spinner widget in Kivy using .kv file - Python | Popup widget in Kivy using .kv file - Python | Carousel Widget In Kivy using .kv file - Python | Progressbar widget in kivy using .kv file - Python | Scrollview widget in kivy - Python | Carousel Widget In Kivy - Python | BoxLayout widget in Kivy - Python | Slider widget in Kivy - Python | Checkbox widget in Kivy - Python | Add image widget in Kivy - Python | Popup widget in Kivy - Python | Spinner widget in kivy - Python | Textinput widget in kivy - Python | Progress Bar widget in kivy - Python | Create a stopwatch using clock object in kivy using .kv file - Circular (Oval like) button using canvas in kivy (using .kv file) - Python | AnchorLayout in Kivy using .kv file - Python | StackLayout in Kivy using .kv file - Python | FloatLayout in Kivy using .kv.
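Because the main.py and .kv listings (and the output screenshots) from the original article did not survive extraction, here is a minimal self-contained sketch of the same idea — a Switch next to a Label, with a callback bound to the switch's active property. It follows the basic approach above but is not the article's original code.

    import kivy
    from kivy.app import App
    from kivy.uix.boxlayout import BoxLayout
    from kivy.uix.label import Label
    from kivy.uix.switch import Switch


    class SwitchDemo(BoxLayout):
        def __init__(self, **kwargs):
            super(SwitchDemo, self).__init__(**kwargs)
            self.label = Label(text='Switch is OFF')
            self.switch = Switch(active=False)
            # The callback fires on every ON <-> OFF transition.
            self.switch.bind(active=self.on_switch)
            self.add_widget(self.label)
            self.add_widget(self.switch)

        def on_switch(self, instance, value):
            self.label.text = 'Switch is ON' if value else 'Switch is OFF'


    class SwitchApp(App):
        def build(self):
            return SwitchDemo()


    if __name__ == '__main__':
        SwitchApp().run()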
https://www.geeksforgeeks.org/python-switch-widget-in-kivy-using-kv-file/?ref=rp
CC-MAIN-2020-50
en
refinedweb
Important: Please read the Qt Code of Conduct - Need some assistance with QStandardItemModel and multiple Views I have a QStandardItemModel that represents website, account, and cookie information for that particular website. I need to use 2 separate views to display the data. In the left view is a simple QTreeView that shows 2 columns (website and account name). In the other view I need to show cookie information for the particular website/account that is selected in the left view. Both views don't require expanding of any items, so they are all top level items. Here is a diagram of what I'm trying to achieve: I'm a bit confused on how this is supposed to work with a single model. My gut says I need to make my model 3 columns (Website, Account, Cookies), where Cookies is a QStandardItem with children (being the individual cookies). For the left view do I have to hide all those other columns with leftView.setColumnHidden()? How is setHorizontalHeaderLabels() supposed to work when using multiple views like this? I have a pretty good understanding of the Model View Framework in Qt when using a single view. But when mixing views like this, it's got me a bit confused. Any pointers in the right direction would be appreciated. Thanks. Both views don't require expanding of any items Use QTableView Then My gut says I need to make my model 3 columns You'll need 7 columns (all those shown in your picture) the top level items will have just the first 2 columns populated, then each index(i,0,QModelIndex())will have children with the last 7 columns populated. something like this: #include <QWidget> #include <QDate> #include <QStandardItemModel> #include <QTableView> #include <QHBoxLayout> #include <QHeaderView> #include <QTimer> class TestWidget : public QWidget { Q_OBJECT Q_DISABLE_COPY(TestWidget) public: enum ModelColumns{ mcWebsite ,mcAccount ,mcCookieName ,mcCookieValue ,mcCookieDomain ,mcCookiePath ,mcCookieExpire ,ModelColCount }; explicit TestWidget(QWidget *parent = Q_NULLPTR) :QWidget(parent) { model=new QStandardItemModel(this); model->insertColumns(0,ModelColCount); model->setHeaderData(mcWebsite,Qt::Horizontal,tr("Website")); model->setHeaderData(mcAccount,Qt::Horizontal,tr("Account")); model->setHeaderData(mcCookieName,Qt::Horizontal,tr("Cookie Name")); model->setHeaderData(mcCookieValue,Qt::Horizontal,tr("Cookie Value")); model->setHeaderData(mcCookieDomain,Qt::Horizontal,tr("Cookie Domain")); model->setHeaderData(mcCookiePath,Qt::Horizontal,tr("Cookie Path")); model->setHeaderData(mcCookieExpire,Qt::Horizontal,tr("Cookie Expiration")); leftView=new QTableView(this); leftView->horizontalHeader()->setStretchLastSection(true); leftView->setModel(model); leftView->setSelectionBehavior(QAbstractItemView::SelectRows); leftView->setSelectionMode(QAbstractItemView::SingleSelection); for(int i=0;i<ModelColCount;++i) leftView->setColumnHidden(i,i!=mcWebsite && i!=mcAccount); rightView=new QTableView(this); rightView->setModel(model); rightView->horizontalHeader()->setStretchLastSection(true); for(int i=0;i<ModelColCount;++i) rightView->setColumnHidden(i,i==mcWebsite || i==mcAccount); connect(leftView->selectionModel(),&QItemSelectionModel::selectionChanged,this,&TestWidget::webSiteSelected); QHBoxLayout *mainLay=new QHBoxLayout(this); mainLay->addWidget(leftView); mainLay->addWidget(rightView); /////////////////////////////////////////////////////////////////////////////// //insert example data model->insertRows(0,2); model->setData(model->index(0,mcWebsite),QStringLiteral("Website1")); 
model->setData(model->index(0,mcAccount),QStringLiteral("TestAccount1")); model->setData(model->index(1,mcWebsite),QStringLiteral("Website2")); model->setData(model->index(1,mcAccount),QStringLiteral("TestAccount2")); const QModelIndex exParent=model->index(0,0); model->insertRows(0,2,exParent); model->insertColumns(0,ModelColCount,exParent); model->setData(model->index(0,mcCookieName,exParent),QStringLiteral("some_cookie")); model->setData(model->index(0,mcCookieValue,exParent),QStringLiteral("some_value")); model->setData(model->index(0,mcCookieDomain,exParent),QStringLiteral("website.com")); model->setData(model->index(0,mcCookiePath,exParent),QStringLiteral("/")); model->setData(model->index(0,mcCookieExpire,exParent),QDate(2017,7,25)); model->setData(model->index(1,mcCookieName,exParent),QStringLiteral("some_cookie2")); model->setData(model->index(1,mcCookieValue,exParent),QStringLiteral("different_value")); model->setData(model->index(1,mcCookieDomain,exParent),QStringLiteral("sub.website.com")); model->setData(model->index(1,mcCookiePath,exParent),QStringLiteral("/blog")); model->setData(model->index(1,mcCookieExpire,exParent),QStringLiteral("Session")); model->insertColumns(0,ModelColCount,model->index(1,0)); /////////////////////////////////////////////////////////////////////////////// webSiteSelected(); } private slots: void webSiteSelected(const QItemSelection &selected = QItemSelection()){ QModelIndexList selectedIdx= selected.indexes(); std::sort(selectedIdx.begin(),selectedIdx.end(),[](const QModelIndex& a, const QModelIndex& b)->bool{return a.column()<b.column();}); const QModelIndex parent = selectedIdx.value(0,QModelIndex()); rightView->setRootIndex(parent); const int rowCnt=rightView->model()->rowCount(); for(int i=0;i<rowCnt;++i) rightView->setRowHidden(i,!parent.isValid()); } private: QAbstractItemModel* model; QTableView* leftView; QTableView* rightView; }; Thanks for the quick reply. I should have mentioned I'm using PyQt. I do have some experience with C++ in the past, so I can mostly read what is going on. I'm a bit confused why you are using setData() and index(). Isn't the point of using a QStandardItemModel so you don't have to calling these lower level QAbstractItemModel methods? Can I not just use QStandardItem's and appendRow()? Also in the slot what is setRowHidden supposed to be doing? What are we wanting to hide in the right view? The function also seems to be missing a QModelIndex as the second parameter In the example setData, Why do we need to insertColumns for parent of the second row? "will have children with the last 7 columns populated" Did you mean last 5? model->insertColumns(0,ModelColCount,model->index(1,0));. @Wallboy said in Need some assistance with QStandardItemModel and multiple Views: I should have mentioned I'm using PyQt. I do have some experience with C++ in the past, so I can mostly read what is going on. I'm afraid I'm a total ignorant in python (and I'm ashamed of it) so I'll have to stick with C++ I'm a bit confused why you are using setData() and index(). Isn't the point of using a QStandardItemModel so you don't have to calling these lower level QAbstractItemModel methods? No, QStandardItemModel gives you another interface to access the data (the so called QStandardItem interface) but nothing prevents you from using the QAbstractItemModel interface. The difference is that, if in the future you decide to change model (go for a custom one?) 
all you'll need to do to my code is change model=new QStandardItemModel(this);to model=new MyCustomModel(this);if you had used the QStandardItem interface you'd have to rewrite the code from scratch. Can I not just use QStandardItem's and appendRow()? Yes, of course you can Also in the slot what is setRowHidden supposed to be doing? What are we wanting to hide in the right view? It's just a dirty trick to handle the case when no website is selected. I basically hide all rows in the right view if nothing is selected and show them all if one website is selected The function also seems to be missing a QModelIndex as the second parameter the signature of the signal is QItemSelectionModel::selectionChanged(const QItemSelection &, const QItemSelection &), since I don't need to know what indexes were deselected I just ignore the second parameter (just to be clear, the code above is tested and works) In the example setData, Why do we need to insertColumns for parent of the second row?. "will have children with the last 7 columns populated" Did you mean last 5? Yes, my bad. Up to you really, you can try rightView->showGrid(false); leftView->showGrid(false);or change the views to QTreeViews and call leftView->setItemsExpandable(false);it's just a matter of style, not of functionality "The function also seems to be missing a QModelIndex as the second parameter" I was speaking of setRowHidden() function, but then realized your version was for QTableView, and mine is QTreeView, which needed the second parent QModelIndex parameter. And if we only want to display the last 5 columns, why do you do this: model->insertColumns(0,ModelColCount,exParent); Should it not be: model->insertColumns(0,5,exParent); ." But what this before is doing already? model->insertColumns(0,ModelColCount,exParent); I have the following snippet of code which is not working and making a lot of columns in the right tree view. and only showing text for one row in the left view. Basically I have "Website" objects that hold "Account" objects. I loop through all websites and accounts extracting the information I need: for site in list(registeredSites.values()): for account in list(enumerate(list(site.accounts.values()))): self.insertRow(0) self.setData(self.index(account[0], self.mWebsite), site.__class__.__name__) self.setData(self.index(account[0], self.mAccount), account[1].login) parent = self.index(account[0], 0) self.insertColumns(0, 5, parent) numCookies = randint(1, 15) # Just for testing, 0, parent), cName) self.setData(self.index(cookieNum, 1, parent), cValue) self.setData(self.index(cookieNum, 2, parent), cDomain) self.setData(self.index(cookieNum, 3, parent), cPath) self.setData(self.index(cookieNum, 4, parent), cExpires) I know you said you don't know much Python, but maybe you can see where I'm logically messing up in this code? EDIT: Ok I got it working correctly, but I don't understand why I had to "append" the row by switching insertRow(0) to insertRow(self.rowCount())(The enumerate value is wrong for subsequent loops. I seen what I did wrong there now) and also why I had to move the insertRow outside of the cookie loop. And I'm also a bit lost why I have to add all 7 columns as a child and not just the 5 cookie values. And just to clear about the sort lambda, we do that to make sure index(x, 0) is the first in the list? Are they not in order to begin with by column? 
Here is the now working code: for site in list(registeredSites.values()): for account in list(enumerate(list(site.accounts.values()))): self.insertRow(self.rowCount()) self.setData(self.index(account[0], self.mWebsite), site.__class__.__name__) self.setData(self.index(account[0], self.mAccount), account[1].login) parent = self.index(account[0], 0) self.insertColumns(0, self.mColumnCount, parent) numCookies = randint(1, 15) self.insertRows(0, numCookies, parent), self.mName, parent), cName) self.setData(self.index(cookieNum, self.mValue, parent), cValue) self.setData(self.index(cookieNum, self.mDomain, parent), cDomain) self.setData(self.index(cookieNum, self.mPath, parent), cPath) self.setData(self.index(cookieNum, self.mExpires, parent), cExpires) Sorry to bump my old thread, but I'm now in the situation where I need to switch the model from QStandardItemModel to a QAbstractItemModel and I can't figure out how to implement parent() and index() correctly. I still have Account objects in the following form: class Account: def __init__(website, name, cookies): self.website = website # string self.name = name # string # list of lists: # [ ['CookieName1', 'CookieValue1', 'Domain1', 'Path1', 'Expiration1'], # ['CookieName2', 'CookieValue2', 'Domain2', 'Path2', 'Expiration2'] ] self.cookies = cookies In my mind the model is supposed to look something like this (in this example pretend I have two Accounts in the model. First Account has 3 cookies, and the second Account only has 1 cookie): [Website] [Account] [CName] [CValue] [CDomain] [CPath] [CExpires] - Website1 Account1 (empty) (empty) (empty) (empty) (empty) (empty) (empty) CName1 CVal1 Dom1 Path1 Exp1 (empty) (empty) CName2 CVal2 Dom2 Path2 Exp2 (empty) (empty) CName3 CVal3 Dom3 Path3 Exp3 - Website2 Account2 (empty) (empty) (empty) (empty) (empty) (empty) (empty) CName1 CVal1 Dom1 Path1 Exp1 for rowCount() I've done the following: def rowCount(self, parentIdx): # If no parent index I assume top-level row, so just return the total amount of Accounts we have if not parentIdx.isValid(): return len(self.accounts) # Otherwise get the Account object and return how many cookies we have account = parentIdx.internalPointer() return len(account.cookies) But I'm completely lost on how to implement index() and parent(). For index() I was thinking it would be just as simple as: def index(self, row, column, parentIdx=QModelIndex()): return self.createIndex(row, column, self.accounts[row]) Where we need access to the Account object for every row, so we just create a pointer to the Account, but then how do I implement parent()? How do I know when I'm on a "child" (cookie) row and need to get a parent index? I've been working with Qt for over a year and Tree Models still confuse the hell out of me. I do have a few models that follow the Qt Simple Tree Model Example, but those are designs that allow for arbitrary tree structure depth, whereas the one I'm trying to design is fixed to only 1 level max (the cookies if there is any). I would appreciate any help. Thanks
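The thread ends here without a reply to the QAbstractItemModel question. For what it's worth, below is one way a fixed two-level model like this is commonly written in PyQt5. This is an editorial sketch, not an answer from the forum: the AccountModel name, the trick of tagging child indexes with their owning Account via createIndex(), and the 7-column layout are all assumptions built on the Account class shown above (which, as a side note, is missing the self parameter in its __init__).

from PyQt5.QtCore import QAbstractItemModel, QModelIndex, Qt

class AccountModel(QAbstractItemModel):
    """Fixed two-level model: top-level rows are Accounts, child rows are their cookies."""

    def __init__(self, accounts, parent=None):
        super().__init__(parent)
        self.accounts = accounts                    # list of Account objects, kept alive here

    def columnCount(self, parentIdx=QModelIndex()):
        return 7                                    # Website, Account + 5 cookie fields

    def rowCount(self, parentIdx=QModelIndex()):
        if parentIdx.column() > 0:
            return 0                                # by convention only column 0 has children
        if not parentIdx.isValid():
            return len(self.accounts)               # top level: one row per Account
        if parentIdx.internalPointer() is None:
            # parentIdx is a top-level row: its children are that Account's cookies
            return len(self.accounts[parentIdx.row()].cookies)
        return 0                                    # cookie rows have no children

    def index(self, row, column, parentIdx=QModelIndex()):
        if not self.hasIndex(row, column, parentIdx):
            return QModelIndex()
        if not parentIdx.isValid():
            return self.createIndex(row, column)    # top level: internalPointer() stays None
        # child: remember which Account owns this cookie row
        return self.createIndex(row, column, self.accounts[parentIdx.row()])

    def parent(self, index):
        account = index.internalPointer()
        if account is None:
            return QModelIndex()                    # top-level rows have no parent
        return self.createIndex(self.accounts.index(account), 0)

    def data(self, index, role=Qt.DisplayRole):
        if role != Qt.DisplayRole:
            return None
        account = index.internalPointer()
        if account is None:                         # top-level row: website / account name
            acc = self.accounts[index.row()]
            if index.column() == 0:
                return acc.website
            if index.column() == 1:
                return acc.name
            return None
        if index.column() >= 2:                     # cookie row: columns 2..6
            return account.cookies[index.row()][index.column() - 2]
        return None

The key point of the sketch is that internalPointer() is None exactly for top-level indexes, so parent(), rowCount() and data() can tell the two levels apart without any extra bookkeeping.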
https://forum.qt.io/topic/77029/need-some-assistance-with-qstandarditemmodel-and-multiple-views
CC-MAIN-2020-50
en
refinedweb
First solution in Clear category for Count Chains by lerz30477

from typing import List, Tuple
import math


def calculate_distance(x1, y1, x2, y2):
    return math.sqrt(((x2 - x1) ** 2) + ((y2 - y1) ** 2))


def count_chains(circles: List[Tuple[int, int, int]]) -> int:
    groups = len(circles)
    for circle in circles:
        i = circles.index(circle)
        x1 = circle[0]
        y1 = circle[1]
        r1 = circle[2]
        while i < len(circles) - 1:
            if groups > 1:
                x2 = circles[i + 1][0]
                y2 = circles[i + 1][1]
                r2 = circles[i + 1][2]
                d = calculate_distance(x1, y1, x2, y2)
                if d < (r1 + r2):
                    if d + min(r1, r2) > max(r1, r2):
                        groups -= 1
            i += 1
    return groups

June 24, 2020
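A quick sanity check on a made-up input (not part of the original submission): two overlapping circles plus one separate circle should count as two chains.

# circles are (x, y, r); the first two intersect, the third stands alone
print(count_chains([(0, 0, 3), (4, 0, 2), (10, 0, 1)]))   # expected output: 2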
https://py.checkio.org/mission/count-chains/publications/lerz30477/python-3/first/share/be8ad972f39267b203ada6847cbd3790/
CC-MAIN-2020-50
en
refinedweb
Using the PLFRCO as Low-Frequency Clock Source Introduction The EFR32xG13 devices (revision D or newer) include an internal oscillator, the PLFRCO, or Precision Low Frequency RC Oscillator, which is a self-calibrating RC oscillator that eliminates the need for a 32.768 kHz crystal. The PLFRCO can be used by the Bluetooth stack as the low-frequency clock source (instead of the LFXO) that is used by the system to wake up the device on each connection interval when sleep is enabled in the stack. The PLFRCO has a frequency accuracy of +/-500 ppm which is within the Bluetooth specification requirements. It is best suited for some BLE use cases, such as applications with very constrained cost targets or board layout space, devices that maintain short connection intervals or have infrequent BLE connections, and devices that advertise most of the time (such as iBeacon or Google Eddystone devices). Selecting the PLFRCO Using the PLFRCO requires a few changes in init_mcu.c and one change in the Bluetooth stack configuration structure. In the Bluetooth stack configuration structure, change the sleep clock accuracy to 500 ppm. static gecko_configuration_t config = { ... ... .bluetooth.sleep_clock_accuracy = 500, // ppm ... ... }; In init_mcu.c, initialize the PLFRCO and select it as a clock source for the low-frequency clock branches. There’s essentially 2 steps to take: - Remove the LFXO related initialization code - Uncomment PLFRCO related code to change clock sources LFA, LFB, LFE and init oscillator // UNCOMMENT THIS >> //To use PLFRCO please remove commenting of these lines //and comment out or delete the LFXO lines if they are present //If using PLFRCO update gecko_configuration_t config's //.bluetooth.sleep_clock_accuracy to 500 (ppm) // #if defined(PLFRCO_PRESENT) // /* Ensure LE modules are accessible */ // CMU_ClockEnable(cmuClock_CORELE, true); // /* Enable PLFRCO as LFECLK in CMU (will also enable oscillator if not enabled) */ // CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_PLFRCO); // CMU_ClockSelectSet(cmuClock_LFB, cmuSelect_PLFRCO); // CMU_ClockSelectSet(cmuClock_LFE, cmuSelect_PLFRCO); // #endif // DELETE OR UNCOMMENT THIS >> // Initialize LFXO CMU_LFXOInit_TypeDef lfxoInit = BSP_CLK_LFXO_INIT; lfxoInit.ctune = BSP_CLK_LFXO_CTUNE; CMU_LFXOInit(&lfxoInit); // Set system LFXO frequency SystemLFXOClockSet(BSP_CLK_LFXO_FREQ); // Set LFXO if selected as LFCLK CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_LFXO); CMU_ClockSelectSet(cmuClock_LFB, cmuSelect_LFXO); CMU_ClockSelectSet(cmuClock_LFE, cmuSelect_LFXO); Tradeoffs The most obvious benefit of using the PLFRCO instead of the LFXO is the cost savings because you're not using the low-frequency crystal. The drawback is that the PLFRCO increases the sleep current by up to 500 nA and extends the RX receive window on the slave side at the beginning of each connection interval. Sleep Current Below are sleep current measurements on the same device with LFXO and PLFRCO where the sleep current difference is about ~280 nA. LFXO Sleep Current Figure 1. EFR32xG13 sleep current with LFXO PLFRCO Sleep Current Figure 2. EFR32xG13 sleep current with PLFRCO RX Period Extension At the beginning of each connection interval the master device is the first to send out a packet. As a result, the slave must be listening (in RX) to avoid losing the initial packet from the master device. The combined clock accuracy from master and slave (sum of both accuracy values in ppm) is used to calculate when the slave should wake-up to listen for the incoming packet. 
When the clock accuracy is lower, the slave must wake up earlier. As a result, the RX receive window is longer when using the PLFRCO compared to the LFXO. The images below show an empty packet captured on the slave side with a 1 second connection interval and 0 dBm output power. When using the PLFRCO, the RX receive window is extended by roughly 250 µs. The longer the connection interval, the more pronounced the RX window extension compared to using the LFXO.

LFXO Empty Packet

Figure 3. Empty packet using LFXO

PLFRCO Empty Packet

Figure 4. Empty packet using PLFRCO

Note that when the slave misses connection intervals (e.g., if the device was temporarily out of range or because of interference), the RX receive window is widened by the combined accuracy until the slave is able to catch the initial packet from the master. Consider a combined accuracy of +/-600 ppm: if the connection interval is 1 s and the slave misses a connection interval, the next interval's RX receive window will be widened by 600 µs, i.e., 600 parts per million of the 1 s (1,000,000 µs) interval.
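As a back-of-the-envelope sketch (not from the application note), the widening scales linearly with the combined accuracy and the time since the last received anchor point; the small Python helper below just restates the 600 ppm x 1 s = 600 µs arithmetic above.

# Illustrative only: RX window widening in microseconds for a given
# combined sleep-clock accuracy (ppm) and elapsed time since the last
# received anchor point (seconds).
def rx_widening_us(combined_ppm, elapsed_s):
    return combined_ppm * 1e-6 * elapsed_s * 1e6

print(rx_widening_us(600, 1.0))   # 600.0 -> the 600 us widening quoted above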
https://docs.silabs.com/bluetooth/latest/general/system-and-performance/using-the-plfrco-as-lowfrequency-clock-source
CC-MAIN-2020-50
en
refinedweb
firebase_admob 0.9.3 firebase_admob # A plugin for Flutter that supports loading and displaying banner, interstitial (full-screen), and rewarded video ads using the Firebase AdMob API. For Flutter plugins for other Firebase products, see README.md.." Firebase related changes # You are also required to ensure that you have Google Service file from Firebase inside your project. iOS # Create an "App" in firebase and generate a GoogleService-info.plist file. This file needs to be embedded in the projects "Runner/Runner" folder using Xcode. -> Steps 1-3 Android # Create an "App" in firebase and generate a google-service.json file. This file needs to be embedded in you projects "android/app" folder. -> Steps 1-3, // Positions the banner ad 10 pixels from the center of the screen to the right horizontalCenterOffset: 10.0, // Banner Position anchorType: AnchorType.bottom, ); myInterstitial ..load() ..show( anchorType: AnchorType.bottom, anchorOffset: 0.0, horizontalCenter. Using native ads # Native Ads are presented to users via UI components that are native to the platform. (e.g. A View on Android or a UIView on iOS). Using Flutter widgets to create native ads is NOT supported by this. Since Native Ads require UI components native to a platform, this feature requires additional setup for Android and iOS: Android # The Android Admob Plugin requires a class that implements NativeAdFactory which contains a method that takes a UnifiedNativeAd and custom options and returns a UnifiedNativeAdView. You can implement this in your MainActivity.java or create a separate class in the same directory as MainActivity.java as seen below: package my.app.path; import com.google.android.gms.ads.formats.UnifiedNativeAd; import com.google.android.gms.ads.formats.UnifiedNativeAdView; import io.flutter.plugins.firebaseadmob.FirebaseAdMobPlugin.NativeAdFactory; import java.util.Map; class NativeAdFactoryExample implements NativeAdFactory { @Override public UnifiedNativeAdView createNativeAd( UnifiedNativeAd nativeAd, Map<String, Object> customOptions) { // Create UnifiedNativeAdView } } An instance of a NativeAdFactory should also be added to the FirebaseAdMobPlugin. This is done slightly differently depending on whether you are using Embedding V1 or Embedding V2. If you're using the Embedding V1, you need to register your NativeAdFactory with a unique String identifier after calling GeneratedPluginRegistrant.registerWith(this);. You're MainActivity.java should look similar to: package my.app.path; import android.os.Bundle; import io.flutter.app.FlutterActivity; import io.flutter.plugins.GeneratedPluginRegistrant; import io.flutter.plugins.firebaseadmob.FirebaseAdMobPlugin; public class MainActivity extends FlutterActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); GeneratedPluginRegistrant.registerWith(this); FirebaseAdMobPlugin.registerNativeAdFactory(this, "adFactoryExample", new NativeAdFactoryExample()); } } If you're using Embedding V2, you need to register your NativeAdFactory with a unique String identifier after adding the FirebaseAdMobPlugin to the FlutterEngine. (Adding the FirebaseAdMobPlugin to FlutterEngine should be done in a GeneratedPluginRegistrant in the near future, so you may not see it being added here). You should also unregister the factory in cleanUpFlutterEngine(engine). 
You're MainActivity.java should look similar to: package my.app.path; import io.flutter.embedding.android.FlutterActivity; import io.flutter.embedding.engine.FlutterEngine; import io.flutter.plugins.firebaseadmob.FirebaseAdMobPlugin; public class MainActivity extends FlutterActivity { @Override public void configureFlutterEngine(FlutterEngine flutterEngine) { flutterEngine.getPlugins().add(new FirebaseAdMobPlugin()); FirebaseAdMobPlugin.registerNativeAdFactory(flutterEngine, "adFactoryExample", NativeAdFactoryExample()); } @Override public void cleanUpFlutterEngine(FlutterEngine flutterEngine) { FirebaseAdMobPlugin.unregisterNativeAdFactory(flutterEngine, "adFactoryExample"); } } When creating the NativeAd in Flutter, the factoryId parameter should match the one you used to add the factory to FirebaseAdMobPlugin. An example of displaying a UnifiedNativeAd with a UnifiedNativeAdView can be found here. The example app also inflates a custom layout and displays the test Native ad. iOS # Native Ads for iOS require a class that implements the protocol FLTNativeAdFactory which has a single method createNativeAd:customOptions:. You can have your AppDelegate implement this protocol or create a separate class as seen below: /* AppDelegate.m */ #import "FLTFirebaseAdMobPlugin.h" @interface NativeAdFactoryExample : NSObject<FLTNativeAdFactory> @end @implementation NativeAdFactoryExample - (GADUnifiedNativeAdView *)createNativeAd:(GADUnifiedNativeAd *)nativeAd customOptions:(NSDictionary *)customOptions { // Create GADUnifiedNativeAdView } @end Once there is an implementation of FLTNativeAdFactory, it must be added to the FLTFirebaseAdMobPlugin. This is done by importing FLTFirebaseAdMobPlugin.h and calling registerNativeAdFactory:factoryId:nativeAdFactory: with a FlutterPluginRegistry, a unique identifier for the factory, and the factory itself. The factory also MUST be added after [GeneratedPluginRegistrant registerWithRegistry:self]; has been called. If this is done in AppDelegate.m, it should look similar to: #import "FLTFirebaseAdMobFirebaseAdMobPlugin registerNativeAdFactory:self factoryId:@"adFactoryExample" nativeAdFactory:nativeAdFactory]; return [super application:application didFinishLaunchingWithOptions:launchOptions]; } @end Dart Example # When creating a Native Ad in Dart, setup is similar to Banners and Interstitials. You can use MobileAdTargetingInfo to target ads, create a listener to respond to MobileAdEvents, and test with a test ad unit id. Your factoryId should match the id used to register the NativeAdFactory in Java/Kotlin/Obj-C/Swift. An example of this implementation is seen below: ); final NativeAd nativeAd = NativeAd( adUnitId: NativeAd.testAdUnitId, factoryId: 'adFactoryExample', targetingInfo: targetingInfo, listener: (MobileAdEvent event) { print("$NativeAd event $event"); }, ); Limitations # This plugin currently has some limitations: -.9.3 # - Support Native Ads on iOS. 0.9.2+1 # - Added note about required Google Service config files. 0.9.2 # - Add basic Native Ads support for Android. 0.9.1+3 # - Replace deprecated getFlutterEnginecall on Android. 0.9.1+2 # - Make the pedantic dev_dependency explicit. 0.9.1+1 # - Enable custom parameters for rewarded video server-side verification callbacks. 0.9.1 # - Support v2 embedding. This will remain compatible with the original embedding and won't require app migration. 0.9.0+10 # - Remove the deprecated author:field from pubspec.yaml - Migrate the plugin to the pubspec platforms manifest. 
- Bump the minimum Flutter version to 1.10.0. 0.9.0+9 # - Updated README instructions for contributing for consistency with other Flutterfire plugins. 0.9.0+8 # - Remove AndroidX warning. 0.9.0+7 # - Update Android gradle plugin, gradle, and Admob versions. - Improvements to the Android implementation, fixing warnings about a possible null pointer exception. - Fixed an issue where an advertisement could incorrectly remain displayed when transitioning to another screen. 0.9.0+6 # - Remove duplicate example from documentation. 0.9.0+5 # - Update documentation to reflect new repository location. 0.9.0+4 # - Add the ability to horizontally adjust the ads banner location by specifying a pixel offset from the centre. 0.9.0+3 # - Update google-services Android gradle plugin to 4.3.0 in documentation and examples. 0.9.0+2 # - On Android, no longer crashes when registering the plugin if no activity is available. 0.9.0+1 # - Add missing template type parameter to invokeMethodcalls. - Bump minimum Flutter version to 1.5.0. 0.9.0 # - Update Android dependencies to latest. 0.8.0+4 # - Update documentation to add AdMob App ID in Info.plist - Add iOS AdMob App ID in Info.plist in example project 0.8.0+3 # - Log messages about automatic configuration of the default app are now less confusing. 0.8.0+2 # - Remove categories. // Copyright 2017 The Chromium Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. // ignore_for_file: public_member_api_docs import 'dart:io'; import 'package:flutter/material.dart'; import 'package:firebase_admob/firebase_admob.dart'; // You can also test with your own ad unit IDs by registering your device as a // test device. Check the logs for your device's ID value. const String testDevice = 'YOUR_DEVICE_ID'; class MyApp extends StatefulWidget { @override _MyAppState createState() => _MyAppState(); } class _MyAppState extends State<MyApp> { static const MobileAdTargetingInfo targetingInfo = MobileAdTargetingInfo( testDevices: testDevice != null ? 
<String>[testDevice] : null, keywords: <String>['foo', 'bar'], contentUrl: '', childDirected: true, nonPersonalizedAds: true, ); BannerAd _bannerAd; NativeAd _nativeAd; InterstitialAd _interstitialAd; int _coins = 0; BannerAd createBannerAd() { return BannerAd( adUnitId: BannerAd.testAdUnitId, size: AdSize.banner, targetingInfo: targetingInfo, listener: (MobileAdEvent event) { print("BannerAd event $event"); }, ); } InterstitialAd createInterstitialAd() { return InterstitialAd( adUnitId: InterstitialAd.testAdUnitId, targetingInfo: targetingInfo, listener: (MobileAdEvent event) { print("InterstitialAd event $event"); }, ); } NativeAd createNativeAd() { return NativeAd( adUnitId: NativeAd.testAdUnitId, factoryId: 'adFactoryExample', targetingInfo: targetingInfo, listener: (MobileAdEvent event) { print("$NativeAd event $event"); }, ); } @override void initState() { super.initState(); FirebaseAdMob.instance.initialize(appId: FirebaseAdMob.testAppId); _bannerAd = createBannerAd()..load(); RewardedVideoAd.instance.listener = (RewardedVideoAdEvent event, {String rewardType, int rewardAmount}) { print("RewardedVideoAd event $event"); if (event == RewardedVideoAdEvent.rewarded) { setState(() { _coins += rewardAmount; }); } }; } @override void dispose() { _bannerAd?.dispose(); _nativeAd?.dispose(); _interstitialAd?.dispose(); super.dispose(); } @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( appBar: AppBar( title: const Text('AdMob Plugin example app'), ), body: SingleChildScrollView( child: Center( child: Column( crossAxisAlignment: CrossAxisAlignment.center, mainAxisSize: MainAxisSize.min, children: <Widget>[ RaisedButton( child: const Text('SHOW BANNER'), onPressed: () { _bannerAd ??= createBannerAd(); _bannerAd ..load() ..show(); }), RaisedButton( child: const Text('SHOW BANNER WITH OFFSET'), onPressed: () { _bannerAd ??= createBannerAd(); _bannerAd ..load() ..show(horizontalCenterOffset: -50, anchorOffset: 100); }), RaisedButton( child: const Text('REMOVE BANNER'), onPressed: () { _bannerAd?.dispose(); _bannerAd = null; }), RaisedButton( child: const Text('LOAD INTERSTITIAL'), onPressed: () { _interstitialAd?.dispose(); _interstitialAd = createInterstitialAd()..load(); }, ), RaisedButton( child: const Text('SHOW INTERSTITIAL'), onPressed: () { _interstitialAd?.show(); }, ), RaisedButton( child: const Text('SHOW NATIVE'), onPressed: () { _nativeAd ??= createNativeAd(); _nativeAd ..load() ..show( anchorType: Platform.isAndroid ? AnchorType.bottom : AnchorType.top, ); }, ), RaisedButton( child: const Text('REMOVE NATIVE'), onPressed: () { _nativeAd?.dispose(); _nativeAd = null; }, ), RaisedButton( child: const Text('LOAD REWARDED VIDEO'), onPressed: () { RewardedVideoAd.instance.load( adUnitId: RewardedVideoAd.testAdUnitId, targetingInfo: targetingInfo); }, ), RaisedButton( child: const Text('SHOW REWARDED VIDEO'), onPressed: () { RewardedVideoAd.instance.show(); }, ), Text("You have $_coins coins."), ].map((Widget button) { return Padding( padding: const EdgeInsets.symmetric(vertical: 16.0), child: button, ); }).toList(), ), ), ), ), ); } } void main() { runApp(MyApp()); } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: firebase_admob: ^0.9 Mar 27, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.1 - pana: 0.13.6 - Flutter: 1.12.13+hotfix.8
https://pub.flutter-io.cn/packages/firebase_admob
CC-MAIN-2020-16
en
refinedweb
In a production application you frequently can find yourself working with objects that have a large accessor chain like student.School.District.Street.Name But when you want to program defensively you need to always do null checks on any reference type. So your accessing chain looks more like this instead if (student.School != null) { if (student.School.District != null) { if (student.School.District.Street != null) { s += student.School.District.Street.Name; } } } Which sucks. Especially since its easy to forget to add a null check, and not to mention it clutters the code up. Even if you used an option type, you still have to check if it’s something or if its nothing, and dealing with huge option chains is just as annoying. One solution is to use the maybe monad, which can be implemented using extension methods and lambdas. While this is certainly better, it can still can get unwieldy. What I really want is a way to just access the chain, and if any part of it is null for it to return null. The magic of Castle Dynamic Proxy This is where the magic of castle dynamic proxy comes into play. Castle creates runtime byte code that can subclass your class and intercept method calls to it. This means you can now control what happens each time a method is invoked on your function, both by manipulating the return value and by choosing whether or not to even invoke the function. Lots of libraries use castle to do neat things, like the moq library from google and NHibernate. For my purposes, I wanted to create a null safe proxy that lets me safely iterate through the function call chain. Before I dive into it, lets see what the final result is: var user = new User(); var name = user.NeverNull().School.District.Street.Name.Final(); At this point name can be either null, or the street name. But since this user never set any of its public properties everything is null, so name here will be null. At this point I can do one null check and move on. The start NeverNull is an extension method that wraps the invocation target (the thing calling the method) with a new dynamic proxy. public static T NeverNull<T>(this T source) where T : class { return (T) _generator.CreateClassProxyWithTarget(typeof(T), new[] { typeof(IUnBoxProxy) }, source, new NeverNullInterceptor(source)); } I’m doing a few things here. First I’m making a proxy that wraps the source object. The proxy will be of the same type as the source. Second, I’m telling castle to also add the IUnBoxProxy interface to the proxy implementation. We’ll see why that’s used later. All it means is that the proxy that is returned implements not only all the methods of the source, but is also going to be of the IUnBoxProxy interface. Third, I am telling castle to use a NeverNullInterceptor that holds a reference to the source item. This interceptor is responsible for manipulating any function calls on the source object. The method interceptor The interceptor isn’t that complicated. Here is the whole class: public class NeverNullInterceptor : IInterceptor { private object Source { get; set; } public NeverNullInterceptor(object source) { Source = source; } public void Intercept(IInvocation invocation) { try { if (invocation.Method.DeclaringType == typeof(IUnBoxProxy)) { invocation.ReturnValue = Source; return; } invocation.Proceed(); var returnItem = Convert.ChangeType(invocation.ReturnValue, invocation.Method.ReturnType); if (!PrimitiveTypes.Test(invocation.Method.ReturnType)) { invocation.ReturnValue = invocation.ReturnValue == null ? 
ProxyExtensions.NeverNullProxy(invocation.Method.ReturnType) : ProxyExtensions.NeverNull(returnItem, invocation.Method.ReturnType); } } catch (Exception ex) { invocation.ReturnValue = null; } } } The main gist of this class is that whenever a function gets called on a proxy object, the interceptor can capture the function call. We created the specific proxy to be tied to this interceptor as part of the proxy generation. When a function is captured by the interceptor, the interceptor can choose to invoke the actual underlying function if it wants to (via the proceed method). After that, the interceptor tests to see if the function return value was null or not. If the value wasn’t null, the interceptor then proxies the return value (creating a chain of proxy objects). This means that the next function call in the accessor chain is now also on a proxy! But, if the return value was null we still need to continue the accessor chain. Unlike the maybe monad, we can’t bail in the middle of the call. So, what we do instead is to create an empty proxy of the same type. This just gives us a way to capture invocations onto what would otherwise be a null object. Castle can give you a proxy that doesn’t wrap any target. This is what moq does as well. If anyone calls a function on this proxy, the interceptor’s intercept method gets called and we can choose to not proceed with the actual invocation! There’s no underlying wrapped target, it’s just the interceptor catching calls. In the scenario where the return result is null, here is the function to proxy the type public static object NeverNullProxy(Type t) { return _generator.CreateClassProxy(t, new[] { typeof(IUnBoxProxy) }, new NeverNullInterceptor(null)); } Now, you may notice that I’m passing null to the constructor of the interceptor, but previously I passed a source object to the constructor. This is because I want the interceptor to know what is the underlying proxied target. This is how I’m going to be able to unbox the final value out of the proxy chain when it’s requested. This is also the reason for the IUnBoxProxy interface we added. Getting the value out! At this point there is an entire proxy chain set up. Once you enter the proxy chain, all other functions on that object are also proxies. But at some point you want to get the actual value out, whether its null or not. This is where that special interface comes in. Using an extension method on all object types we can cast the object to the special interface (remembering that the object we’re working on is actually a proxy and that it should have implemented the special interface we told it to) and execute a function on it. It really doesn’t matter which function, just a function public static T Final<T>(this T source) { var proxy = (source as IUnBoxProxy); if (proxy == null) { return source; } return (T)proxy.Value; } Since the proxy is actually a dynamic proxy that was created we get caught back in the interceptor. This is why this block exists if (invocation.Method.DeclaringType == typeof(IUnBoxProxy)) { invocation.ReturnValue = Source; return; } If the declaring type (i.e. the thing calling the function) is of that type (which it is since we explicitly cast it to it) then return the internal stored unboxed proxy. If the proxy contained null then a null gets returned, otherwise the last thing in the chain gets returned. I specificailly excluded primitives during the proxy boxing phase since a primitive implies the final ending of the chain. 
That and castle kept throwing me an error saying that it Could not load type 'Castle.Proxies.StringProxy' from assembly 'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' because the parent type is sealed. But thats OK since we don’t need to proxy primitives in this scenario. Performance tests Now this is great and all, but if it incurs an enormous performance penalty then we can’t really use it. This is where I ran some unscientific tests. In a unit test run in release I checked the relative execution time of the following 3 functions: Create an empty user and use the never null proxy to check a string some amount of times. The console writeline exists only to make sure the compiler doesn’t optimize out unused variables. private void NullWithProxy(int amount) { var user = new User(); var s = "na"; for (int i = 0; i < amount; i++) { s += user.NeverNull().School.District.Street.Name.Final() ?? "na"; } Console.WriteLine(s.FirstOrDefault()); } Test a non null object chain with the proxy private void TestNonNullWithProxy(int amount) { var student = new User { School = new School { District = new District { Street = new Street { Name = "Elm" } } } }; var s = "na"; for (int i = 0; i < amount; i++) { s += student.NeverNull().School.District.Street.Name.Final(); } Console.WriteLine(s.FirstOrDefault()); } And finally test a bunch of if statements on a non null object private void NonNullNoProxy(int amount) { var student = new User { School = new School { District = new District { Street = new Street { Name = "Elm" } } } }; var s = "na"; for (int i = 0; i < amount; i++) { if (student.School != null) { if (student.School.District != null) { if (student.School.District.Street != null) { s += student.School.District.Street.Name; } } } } Console.WriteLine(s.FirstOrDefault()); } And the results are You can see on iteration 1 that there is a big spike in using the proxy. That’s because castle has to initially create and then cache dynamic proxies. After that things level out and grow linearly. While you do incur a penalty hit, its not that far off from regular if checks. Doing 4 chained proxy checks 5000 times runs about 200 milliseconds, compared to 25 milliseconds with direct if checks. While its 8 times longer, you get the security of knowing you won’t accidentally have a null reference exception. For lower amounts of accessing the time is pretty comparable. Conclusion Unfortunately a downside to all of this is that castle can only proxy methods and properties that are marked as virtual. Also I had a lot of difficulty getting proxying of enumerables to work. I was only able to get it to work with things that are declared as IEnumerable or List but not Dictionary or HashSet or anything else. If you know how to do this please let me know! Because of those limitations I wouldn’t suggest using this in a production application. But, maybe, one of these days a language will come out with this built in and I’ll be pretty stoked about that. For full source check out my github. Also I’d like to thank my coworker Faisal for really helping out on this idea. It was his experience with dynamic proxies that led to this post. 1 thought on “Minimizing the null ref with dynamic proxies”
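Editorial aside, not part of the original post: the null-propagation idea, stripped of the Castle-specific runtime proxying, can be sketched in a few lines of Python using __getattr__. The NeverNull and unwrap names below are invented for the illustration and do not mirror any real API; this version also only handles property-style access, not method calls with arguments.

class NeverNull:
    """Wraps a value so that attribute access never raises on None."""

    def __init__(self, value):
        self._value = value

    def __getattr__(self, name):
        if self._value is None:
            return NeverNull(None)                  # swallow the rest of the chain
        return NeverNull(getattr(self._value, name, None))

    def unwrap(self):
        return self._value                          # plays the role of Final() in the post

# usage: NeverNull(user).School.District.Street.Name.unwrap() returns the name or None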
https://onoffswitch.net/2013/05/20/minimizing-null-ref/
CC-MAIN-2020-16
en
refinedweb
Nette Framework: First Impressions? >>IMAGE. NOTE: We will base our review on the official Getting Started tutorial. Installation and bootstrapping Nette uses a self-bootstrap approach (similar to Laravel) with the support of composer: composer create-project nette/sandbox demo This will create a demo directory in the current one, and a sandbox project will be loaded into said folder. Nette’s Getting Started tutorial guides us through building a simple blog app which features basic blog functions like: list all posts, view an individual post, create/edit a post, comments, security etc. Let me show you what the app will look like when you finish the tutorial (I have not added any CSS to it so the overall appearance is quite rudimentary): NOTE: This is served in a Vagrant box. In the next few sections, we will look at some of the fundamental concepts in Nette. As I am a long-time user of Symfony2 (SF2), I will use that for comparison most of the time. Please note that the comparison notes are purely my personal view. Project structure Nette is considered to be an MVC framework, though its “Model” layer is almost missing. Its project structure also reflects this but is organized in a very different way: Above project structure is taken from Nette’s tutorial Like in SF2, a dedicated www ( web in SF2) directory is there to hold the entry PHP file: index.php and also .htaccess rules to provide rewrite instructions for Apache. It will also contain static resources (CSS, JS, fonts, images, etc). vendor will hold all the vendor libraries, as usual. Many other folders will go under app: config: As its name suggests, all the configuration resides here. Nette uses config.neonand config.local.neonto provide configuration information related to database, security, services, app-wide parameters, etc. Nette will load config.neonfirst and then config.local.neon. The latter will override the same parameters defined in the former, as is common in other frameworks as well. You can find out about the Neon file format here. presentersand presenters/templates: these two folders cover the controller and the template (view) portion. Nette uses Latte as its template engine. More on Latte later, and no Cappuccino – sorry. router: it holds the route factory class to customize pretty URIs and thus creates a bridge between a URI and a controller/action. More on this later. Start from database Nette comes with a handy tool called “Adminer” to mimic a PHPMyAdmin kind of functionality. The interface is clean and easy to use: As an “embedded” tool, Adminer’s capability is limited so you may want to switch to your preferred database administration tool if this one doesn’t cut it. Also, it should be noted that we access Adminer from the adminer sub-directory in www. This may not be a good approach, especially in a production environment. This folder should be ignored in deployment phases – either via .gitignore, .gitattributes or otherwise and Nette should point this out in their docs. Router We are developing a simple blog app. We would want a URI showing a specific post (identified by its postId) to look like this: post/show/4 but not like this: post/show?postId=4. Nette recommends using a router factory to manage the link between a URI (or a URI pattern) and its corresponding controllers/actions. 
The router factory is defined in app/router/RouterFactory.php: class RouterFactory { /** * @return \Nette\Application\IRouter */ public static function createRouter() { $router = new RouteList(); $router[] = new Route('post/show/<postId>', 'Post:Show'); $router[] = new Route('<presenter>/<action>[/<id>]', 'Homepage:default'); return $router; } } The definition of a route is straightforward: - A URI “pattern” with parameter(s): post/show/<postId>. - and a string in the form of “Controller:Action”: Post:Show. The detailed Nette routing documentation can be found here. To use this router factory, we must register a service in app/config/config.neon: services: router: App\RouterFactory::createRouter To generate a link in our template based on a route, we can use the syntax below: <a href="{link Post:Show $post->id}">{$post->title}</a> I must admit this Latte syntax is a bit shorter than the corresponding Twig syntax. It uses {} for both echo statement and control statement. For example: //To display a variable {$var1} //To run a foreach loop {foreach $items as $item} ... {/foreach} Latte also has a powerful macro system to facilitate some common tasks. For example, the below code snippet will only display a list when $items is not null: <ul n: ... </ul> This can be handy when the user wants to display a certain section based on a returned result set. Controllers and Actions A presenter in Nette is the controller. All presenters are in the app/presenters folder and the file name should end with Presenter.php. As an example, we have PostPresenter for post related actions, SignPresenter for sign in / sign out related actions. In a presenter file, we define a class to hold all the actions (methods) that can be invoked. For example, to show a particular post identified by its postId (and its related comments), the method will look like this: namespace App\Presenters; use Nette; use Nette\Application\UI\Form; class PostPresenter extends BasePresenter { private $database; public function __construct(Nette\Database\Context $database) { $this->database = $database; } public function renderShow($postId) { $post = $this->database->table('posts')->get($postId); if (!$post) { $this->error('Post not found'); } $this->template->post = $post; $this->template->comments = $post->related('comments')->order('created_at'); } ... ... } In renderShow($postId), a $post is grabbed from the database by matching $postId. Then, a template will be rendered with variables (the post and related comments in this case). We notice that this process is simple but hides a lot of details. For example, where is this database coming from? In app/config/config.local.neon, we can see this section (following the tutorial): database: dsn: 'mysql:host=127.0.0.1;dbname=quickstart' user: root password: xxxxxx options: lazy: yes This is a familiar database connection setup. When a controller/action is to be invoked, Nette transforms this DSN into a database object (or database context) and injects it into the constructor of that controller class. Thus, the database is accessible to all methods by means of Dependency Injection. What about the template rendering? We just see the variable assignments but no explicit call to a “render” method. Well, this is also part of the Nette convention. When an action is given a render prefix, this action will render a template at the return of the method call. In this case, the method is renderShow. 
This method is linked to URI like “post/show/3” as we defined earlier (in route definitions, the render prefix is ignored): $router[] = new Route('post/show/<postId>', 'Post:Show');. renderShow will start to look for a template under app/presenters/templates looking for: - A directory named “Post” because this is the controller name of this renderShowaction. - Then a template named Show.latteto populate all the variables and display it. So let’s summarize the naming and mapping conventions used in Nette in the below chart: The Latte template engine If you are familiar with Twig, you will find that Latte is quite easy to learn. It uses {...} pair to escape from the regular HTML parsing and does not differentiate from a pure print (Twig equivalent: {{...}}) or a control ( {%...%}). Also, a variable must be prefixed with the $ sign, or a string literal inside the { } pair will be treated as a macro and most likely cause a syntax error, saying “Unknown macro {xxxx}”. There’s a handy feature when we’re dealing with an iteration on a result set, which is very common: <ul n: {foreach $items as $item} <li id="item-{$iterator->counter}">{$item|capitalize}</li> {/foreach} </ul> Besides the regular usage of a foreach loop, a condition has been put in place to decide if the below <ul> section should be displayed or not. The <ul> section will only be there when there is at least one item in $items. With the help of this macro, we can save some lines and avoid using an if...endif pair. Latte supports template inheritance, template including, filters, and many other cool features. Please visit its official documentation for details. Auth and Forms The official documentation on access control is a good starting point for us. Nette supports in-memory and database credentials. By using in-memory authentication, we use the below snippet: $authenticator = new Nette\Security\SimpleAuthenticator(array( 'john' => 'IJ^%4dfh54*', 'kathy' => '12345', // Kathy, this is a very weak password! )); $user->setAuthenticator($authenticator); Then, the system can explicitly make a user log in using: $user->login($username, $password); where the username and password can be obtained from a form submission. Nette supports roles and ACL (Access Control List) and uses an “Authorizator” to enforce the authorization. Firstly, we can create some roles with hierachy: $acl = new Nette\Security\Permission; //Define a guest role and a registered user role $acl->addRole('guest'); $acl->addRole('registered', 'guest'); In the above code, role register inherits from guest. Then, we define a few resources that a user may access: $acl->addResource('article'); $acl->addResource('comments'); $acl->addResource('poll'); Finally, we set authorization rules: $acl->allow('guest', array('article', 'comments', 'poll'), 'view'); $acl->allow('registered', 'comments', 'add'); So a guest can view an article, comments and a poll and a registered user, besides the privileges inherited from guest, can also add a comment. I really don’t like this kind of access control. Even an annotation outside of a controlled method itself or the use of a decorator would be better than this, in my opinion. And I would say a centralized file (SF2’s security.yml) is the best practice: neat, clean, and flexible. The forms are generated in their respective presenters. In particular, the form creation includes a callback event handler to process a successful form submission. 
protected function createComponentCommentForm() { $form = new Form; $form->addText('name', 'Your name:')->setRequired(); $form->addText('email', 'Email:'); $form->addTextArea('content', 'Comment:')->setRequired(); $form->addSubmit('send', 'Publish'); $form->onSuccess[] = [$this, 'commentFormSucceeded']; return $form; } But, this is not the action of that form to be rendered. For example, let’s look at the above code for renderShow to display a post detail page and a form for readers to enter comments. In the presenter, we only assigned a post variable and a comments variable to hold related comments. The comment input form is rendered in the template app/presenters/templates/Post/Show.latte: <h2>Post new comments</h2> {control commentForm} The source of that page is extracted below: <h2>Post new comments</h2> <form action="/sandbox/www/post/show/4" method="post" id="frm-commentForm"> <table> <tr class="required"> <th><label for="frm-commentForm-name" class="required">Your name:</label></th> <td><input type="text" name="name" id="frm-commentForm-name" required</td> </tr> ... <tr> <th></th> <td><input type="submit" name="send" value="Publish" class="button"></td> </tr> </table> <div><input type="hidden" name="do" value="commentForm-submit"></div> </form> We see clearly that the form action assigned is /sandbox/www/post/show/4, which is essentially the URI that displays the post itself. There is no place in the source code to indicate that a hook to the commentFormSucceeded method exists. This kind of “inside linking” may confuse Nette beginners a lot. I mean, to have a separate method to process the form is a common practice, and thus to have a URI assigned for such a process is also reasonable. Nette using a callback/event handler to do this is also fine but there is certainly something missing or not clearly explained between when a user clicks the “Submit” button and the input is persisted in the database. We know the persistence is performed in a method called commentFormSucceeded and we implemented that feature by ourselves. But how they are hooked up is not clear. Other cool features Nette comes with a debugger called “Tracy“. In debug mode, we will see a small toolbar at the bottom right corner of our page, telling us important page information: It can be disabled in production mode by changing app/bootstrap.php: $configurator->setDebugMode(false); // "true" for debug mode NOTE: Please purge the temp/cache folder contents if you encounter any issues after changing from development mode to production mode. Nette also includes a test suite called “Tester”. The usage is also straightforward. See here for details. Final Thoughts Nette is a relatively new framework. Its 2.0 release was about 3 years ago. It came to be noticed by many of us thanks to the SitePoint survey. Its Github issue tracker is very active. My two questions posted there got answered in less than 10-30 minutes and both lead to the correct solution, but its documentation needs a lot of work. During my attempts to set up the tutorial app following its docs, I found a lot of typos and missing explanations. If I could give SF2 a score of 10 – not saying SF2 is perfect but just for comparison’s sake – my initial score for Nette is between 7 to 8. It is mature, well written, easy to learn, equipped with advanced features but also has a few areas that need improving. Are you familiar with Nette? Feel free to share your views and comments, too.
https://www.sitepoint.com/nette-framework-first-impressions/
CC-MAIN-2020-16
en
refinedweb
Synopsis Return the estimated exposure map value by weighting an ARF by a spectral model. Syntax estimate_weighted_expmap( id=None, arf=None, elo=None, ehi=None, specresp=None, fluxtype="photon", par=None, pvals=None ) Description The estimate_weighted_expmap() command returns an estimate of the weighted exposure map using the spectral model associated with the dataset as the weighting function. The exposure map is estimated by using an ARF; this can either be the ARF associated with the PHA dataset or can be explicitly given, as a filename, crate, or set of three arrays for the bin edges and bin value. The routine can be used to estimate the exposure map with the current set of model parameters, or it can be given an array of values for a single parameter and return the exposure map for each parameter value. The routine can be loaded into Sherpa by saying: from sherpa_contrib.utils import * Arguments Estimating the source flux If source_counts is the number of counts in the source, emap is the return value of estimate_weighted_expmap(), and exposure is the exposure time in seconds, then source_counts / (exposure * emap) is an estimate of the source flux, with units of either photon / cm^2 / s or erg / cm^2 / s, depending on the value of the fluxtype argument. This is only an estimate of the source flux, since it depends on how closely the source resembles the model spectrum, as well as the energy range used and how the background is estimated. For instance, the background is likely to have a significantly different spectrum to the source, and the correction factor from counts to flux can vary significantly (by 50% or more) with spectral shape. Note: no 'de-reddening' Note that the calculation above does not "de-redden" the source flux; if the model includes Galactic absorption then you will want to account for this in calculating the intrinsic flux of the source. Estimating the sensitivity of the exposure map to the input spectrum The estimate_weighted_expmap() routine can be used to estimate how sensitive the exposure map, and hence the flux conversion factor, is to energy range or spectral model. As an example, if the source model is a power law called "pl", with the parameter gamma, then the conversion factor can be estimated for a range of gamma values: gamma_vals = np.arange(0.1,5,0.1) expmap_vals = estimate_weighted_expmap(par=pl.gamma, pvals=gamma_vals) The curve gamma_vals, expmap_vals shows how the conversion factor (i.e. weighted exposure map) value varies as gamma changes from 0.1 to 4.9. Examples Example 1 sherpa> load_data("src.pi") sherpa> notice(0.5, 7) sherpa> set_source(xsphabs.gal * xsmekal.clus) sherpa> gal.nh = 0.45 sherpa> clus.kt = 2 sherpa> clus.abundanc = 0.5 sherpa> plot_instmap_weights() sherpa> estimate_weighted_expmap() 531.66632 sherpa> estimate_weighted_expmap(fluxtype="erg") 1.722337e+11 sherpa> clus.kt = 5 sherpa> estimate_weighted_expmap() 487.28174 sherpa> estimate_weighted_expmap(fluxtype="erg") 1.1962892e+11 A dataset is loaded, an energy range chosen, and a spectral model set up. The plot_instmap_weights() call is used to show what the weights look like, and then the estimate_weighted_expmap() call is used to estimate the exposure map for two different model parameters (kT = 2 keV and then kT = 5 keV). The first calculation for each parameter value calculates a value with units of cm^2 count / photon (with values around 500), and the second call calculates a value of cm^2 count / erg (values around 1e11). 
The values were calculated using the ARF that was read in with the data file (i.e. given by the ANCRFILE keyword in the file src.pi). The values can vary strongly given the ARF, spectral model, and energy grids used in the analysis. Approximate count fluxes The estimated source flux (for the kT=2 keV case) is therefore source_counts / (exposure * 531.66632) in units of photon / cm^2 / s and source_counts / (exposure * 1.722337e+11) in units of erg / cm^2 / s, where source_counts is the measured number of source counts in the 0.5 to 7 keV band, and exposure is the exposure time of the observation, in seconds. Please note that these values are only approximate; please see the Calculating Spectral Weights thread and related documentation for a discussion of the reliability of this approach. Example 2 sherpa> dataspace1d(2.5,5,0.1) sherpa> set_source(xsphabs.gal * xsmekal.clus) sherpa> gal.nh = 0.45 sherpa> clus.kt = 2 sherpa> clus.abundanc = 0.5 sherpa> estimate_weighted_expmap(arf="src.arf") 421.69598 sherpa> estimate_weighted_expmap(arf="src.arf", fluxtype="erg") 7.9162262e+10 sherpa> clus.kt = 5 sherpa> estimate_weighted_expmap(arf="src.arf") 420.05133 sherpa> estimate_weighted_expmap(arf="src.arf", fluxtype="erg") 7.5432468e+10 In this example we manually create a data space for the model evaluation. Since there is no associated ARF in this case one has to be given when calling estimate_weighted_expmap(); we chose to give the file name via the arf argument. It would have been slightly more efficient to read in the ARF using the Crates routine read_file (or read_arf) and then send the crate to the routine: for example sherpa> cr = read_file("src.arf") sherpa> estimate_weighted_example(arf=cr) The exposure-map values are more consistent as the kT parameter is varied in this example compared to the previous one because the Chandra ARF shows much less variation over the energy range 2.5 to 5 keV than the 0.5 to 7 keV band used previously. The degree of variation with model parameters also depends on whether the output is in "photon" or "erg" units and the shape of the spectral model. Example 3 sherpa> estimate_weighted_expmap(2, elo=x1, ehi=x2, specresp=arf) In this example the ARF is defined by the three arrays x1, x2, and arf. The x1 and x2 arrays give the low and high edges of each bin, and are in keV, and the arf array gives the ARF for each bin in cm^2. All three arrays should contain the same number of elements. The ARF does not have to have the same energy gridding as the dataspace, but should cover the same energy range. Example 4 sherpa> kts = 10**np.linspace(-1,1,11) sherpa> evals = estimate_weighted_expmap(2, elo=x1, ehi=x2, specresp=arf, par=clus.kt, parvals=kts) In this example the exposure map is evaluated as the clus.kt parameter is varied from 0.1 to 10, using logarithmically-spaced bins. Changes in the scripts 4.8.2 (January 2016) release The routine has been updated to work in CIAO 4.8. Bugs See the bugs pages on the Sherpa website for an up-to-date listing of known bugs. See Also - contrib - get_instmap_weights, plot_instmap_weights, save_instmap_weights, sherpa_utils - data - get_arf, get_bkg, load_arf, load_bkg_arf, load_multi_arfs, set_arf, unpack_arf - info - list_response_ids - modeling - get_response - plotting - plot_arf - tools - aprates, modelflux - utilities - calc_chisqr, calc_energy_flux, calc_model_sum, calc_photon_flux, calc_source_sum, calc_stat, gamma, igam, igamc, incbet, lgam
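To make the counts-to-flux conversion from the Example 1 discussion concrete, here is a small stand-alone Python calculation. The 200 net counts and 50 ks exposure are invented for illustration; only the two exposure-map values are taken from Example 1 (the kT = 2 keV case).

counts   = 200.0                  # assumed net source counts in 0.5-7 keV
exposure = 50e3                   # assumed exposure time in seconds

emap_photon = 531.66632           # cm^2 count / photon, from Example 1
emap_erg    = 1.722337e+11        # cm^2 count / erg,    from Example 1

photon_flux = counts / (exposure * emap_photon)   # photon / cm^2 / s
energy_flux = counts / (exposure * emap_erg)      # erg / cm^2 / s

print(photon_flux)                # ~7.5e-06 photon / cm^2 / s
print(energy_flux)                # ~2.3e-14 erg / cm^2 / s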
https://cxc.cfa.harvard.edu/sherpa4.11/ahelp/estimate_weighted_expmap.html
CC-MAIN-2020-16
en
refinedweb
Maximum stock price using MARR

What is the maximum stock price that should be paid for a stock today that is expected to consistently pay a $5 quarterly dividend (paid out to the investor) if its price is expected to be $100 in 3 years (12 quarters)? The investor expects a minimum acceptable rate of return (MARR) of 10% with quarterly compounding.

Solution Preview

The first step is to find the present value of the dividend payments. We can find this by computing the present value of an annuity, which is equal to A * [1 - (1 + r/m)^(-n*m)] / (r/m), where A = $5, r = 0.10, n = 3, and m = 4. Given this, the ...

Solution Summary

The maximum stock price is determined in this solution.
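Completing the arithmetic that the solution preview starts (a short Python check using only the figures given in the question: quarterly rate i = r/m = 0.10/4 = 0.025 over 12 quarters, with the maximum price being the present value of all cash flows at the MARR):

A, r, m, n = 5.0, 0.10, 4, 3
i = r / m                          # 0.025 per quarter
periods = n * m                    # 12 quarters

pv_dividends = A * (1 - (1 + i) ** -periods) / i      # PV of the $5 quarterly dividends
pv_price     = 100.0 * (1 + i) ** -periods            # PV of the $100 price in quarter 12

print(round(pv_dividends + pv_price, 2))              # about 125.64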
https://brainmass.com/economics/finance/maximum-stock-price-410561
CC-MAIN-2020-16
en
refinedweb
#include <log_event.h> This event is created to contain the file data. One LOAD_DATA_INFILE can have 0 or more instances of this event written to the binary log depending on the size of the file. Primitive to apply an event to the database. This is where the change to the database is made. Reimplemented from Log_event. Reimplemented in Begin_load_query_log_event.
https://dev.mysql.com/doc/dev/mysql-server/latest/classAppend__block__log__event.html
CC-MAIN-2020-16
en
refinedweb
Check if a number is odd or even in C sharp

How to check if a number is even or odd in C# : In this tutorial, we will learn how to check if a number is even or odd using C#. The program will take the number as input from the user and print out the result, i.e. whether the number is even or odd. With this example program, you will learn how to print anything on the console, how to read user input in C#, how to use if-else conditions and how to use the modulus operator in C#.

% Operator in C# :

The % operator computes the remainder after dividing one number by another. a % b will return the remainder after dividing a by b. For example :

Console.WriteLine(5 % 4);
Console.WriteLine(8 % 4);
Console.WriteLine(10 % 4);
Console.WriteLine(10 % -9);
Console.WriteLine(-10 % 9);
Console.WriteLine(-10 % -9);

The above program will print the below output :

1
0
2
1
-1
-1

As you can see, the sign of the result is the same as the sign of the first operand. This example deals only with integers, but we can also use % with floating point numbers. For floating point numbers, the result will be a floating point value.

Finding out if a number is even or odd :

A number is called an even number if it is divisible by 2. Otherwise, it is an odd number. We will use the % operator to find out if a number is even or odd: for a number n, n % 2 is 0 if n is even, and non-zero if n is odd.

C# Program :

using System;

namespace dotnet_sample
{
    class Program
    {
        static void Main(string[] args)
        {
            //1
            int n;

            //2
            Console.WriteLine("Enter a number to check : ");

            //3
            n = int.Parse(Console.ReadLine());

            //4
            if (n % 2 == 0)
            {
                //5
                Console.WriteLine(n + " is an even number");
            }
            else
            {
                //6
                Console.WriteLine(n + " is an odd number");
            }
        }
    }
}

Explanation :

The commented numbers in the above program denote the step numbers below :

- Create one integer variable n to hold the user input.
- Ask the user to enter a number.
- Read the number and store it in n.
- Using an if-else statement, check if the number is divisible by 2 or not.
- If yes, print that the number is an even number.
- Else, print that the number is an odd number.

Sample Output :

Enter a number to check :
10
10 is an even number

Enter a number to check :
12
12 is an even number

Enter a number to check :
13
13 is an odd number

Enter a number to check :
12543
12543 is an odd number

This program is available on Github.

Conclusion :

We have learned how to check if a number is even or not in C#. Try to run the program and drop a comment below if you have any queries.
https://www.codevscolor.com/c-sharp-check-odd-even-number/
CC-MAIN-2020-16
en
refinedweb
Opened 9 years ago Closed 8 years ago #10902 closed defect (fixed) proof=False unnecessary in factor() Description (last modified by ) There are currently no known counter examples where Singular would return an incorrect factorisation. It might be very very slow but does not return wrong answers as far as we know. Hence, we should drop proof=False. Attachments (1) Change History (33) comment:1 follow-up: ↓ 2 Changed 9 years ago by comment:2 in reply to: ↑ 1 Changed 9 years ago by Dear Martin, I reported this upstream: this ticket is empty on my browser (firefox). Also, is there a workaround to factor bivariate polynomials over a prime field from within Sage, in particular over GF(2)? Paul comment:3 follow-up: ↓ 4 Changed 9 years ago by comment:4 in reply to: ↑ 3 Changed 9 years ago by Paul, the Singular developers replied on that ticket and state that this bug is fixed in Singular 3-1-2. Updating to 3-1-2 is now #10903. As for a workaround, I cannot immediately think of any. It's a pretty grim situation, but at least now there is a Singular developer working on factoring. thanks Martin. Once Singular is updated to 3-1-2, I will try to find examples where the factorization returned by Singular is incomplete. It would help to have concrete examples in the documentation. Paul comment:5 Changed 9 years ago by 3-1-2 should also improve this, but there are some issues that are only resolved in whatever is the next version of Singular, cf. comment:6 follow-up: ↓ 7 Changed 9 years ago by Is this really so bad? def check(f0, f1): CP = CartesianProduct(*[f0.base_ring()]*len(f0.variables())) for xvec in CP: f0_val = f0.subs(dict((v, xv) for v, xv in zip(f0.variables(), xvec))) f1_val = f1.subs(dict((v, xv) for v, xv in zip(f0.variables(), xvec))) print xvec, f0_val, f1_val, f0_val == f1_val sage: f0 = x^5 * y^8 * (x*y^4 + y^3 + 1) sage: f1 = x^5 * y^6 * (x*y^4 + y^3 + 1) sage: check(f0, f1) [0, 0] 0 0 True [0, 1] 0 0 True [1, 0] 0 0 True [1, 1] 1 1 True The factor of y^2 has no effect, so aren't the two polynomials equivalent in GF(2)? This happens pretty frequently: sage: f0 = x**8+y**8+1 sage: f1 = f0.factor(proof=False) sage: f1 (x + y + 1)^6 sage: f1.expand() x^6 + x^4*y^2 + x^2*y^4 + y^6 + x^4 + y^4 + x^2 + y^2 + 1 sage: check(f0, f1.expand()) [0, 0] 1 1 True [0, 1] 0 0 True [1, 0] 0 0 True [1, 1] 1 1 True Is there a case where the two expressions aren't equivalent as functions? Or am I missing something obvious like usual? comment:7 in reply to: ↑ 6 Changed 9 years ago by Is there a case where the two expressions aren't equivalent as functions? Or am I missing something obvious like usual? But polynomials over GF(2) are not merely boolean functions. There's a one-to-one mapping between square-free polynomials over GF(2) and boolean functions, but polynomials are more than that. So I'd say it is very bad. 
comment:8 Changed 9 years ago by here is another example over GF(2): sage: R.<x,y> = GF(2)[] sage: p=x^8 + y^8; q=x^2*y^4 + x; f=p*q sage: lf = f.factor(proof=False) sage: f-lf x^10*y^4 + x^2*y^12 + x^8*y^4 + x^6*y^6 + x^4*y^8 + x^2*y^10 + x^9 + x*y^8 + x^7 + x^5*y^2 + x^3*y^4 + x*y^6 An example over GF(3): sage: R.<x,y> = GF(3)[] sage: p = -x*y^9 + x sage: q = -x^8*y^2 sage: f = p*q sage: f x^9*y^11 - x^9*y^2 sage: f.factor(proof=False) y^2 * (y - 1)^6 * x^6 comment:9 Changed 9 years ago by and an example over GF(5): sage: R.<x,y> = GF(5)[] sage: p=x^27*y^9 + x^32*y^3 + 2*x^20*y^10 - x^4*y^24 - 2*x^17*y sage: q=-2*x^10*y^24 + x^9*y^24 - 2*x^3*y^30 sage: f=p*q; f-f.factor(proof=False) -2*x^37*y^33 - 2*x^42*y^27 + x^36*y^33 - 2*x^30*y^39 + x^41*y^27 - 2*x^35*y^33 + x^30*y^34 + 2*x^29*y^34 + x^23*y^40 + 2*x^14*y^48 - x^13*y^48 + 2*x^7*y^54 + 2*x^37*y^18 + 2*x^42*y^12 - x^36*y^18 + 2*x^30*y^24 - x^41*y^12 + 2*x^35*y^18 - x^27*y^25 - 2*x^26*y^25 - x^20*y^31 - x^30*y^19 - 2*x^29*y^19 - x^23*y^25 - 2*x^14*y^33 + x^13*y^33 - 2*x^7*y^39 + x^27*y^10 + 2*x^26*y^10 + x^20*y^16 and one over GF(7): sage: R.<x,y> = GF(7)[] sage: p=-3*x^47*y^24 sage: q=-3*x^47*y^37 - 3*x^24*y^49 + 2*x^56*y^8 + 3*x^29*y^15 - x^2*y^33 sage: f=p*q sage: f-f.factor(proof=False) 2*x^94*y^61 + 2*x^71*y^73 + x^103*y^32 - 2*x^59*y^61 - 2*x^76*y^39 - 2*x^36*y^73 + 3*x^49*y^57 - x^68*y^32 + 2*x^41*y^39 - 3*x^14*y^57 Examples can be constructed at will with the following code (change GF(7) by whatever you want): R.<x,y> = GF(7)[] nterms=2 for d in range(1,10^6): print d sys.stdout.flush() p = R.random_element(degree=d,terms=nterms) while p.degree()<=0: p = R.random_element(degree=d,terms=nterms) q = R.random_element(degree=d,terms=nterms) while q.degree()<=0: q = R.random_element(degree=d,terms=nterms) f = p*q lp = p.factor(proof=False) lq = q.factor(proof=False) lf = f.factor(proof=False) if lp*lq <> lf: print p, q raise ValueError comment:10 Changed 9 years ago by - Cc malb SimonKing added comment:11 Changed 9 years ago by - Keywords sd34 added comment:12 Changed 9 years ago by - Dependencies set to #10903 comment:13 follow-up: ↓ 14 Changed 9 years ago by - Status changed from new to needs_review comment:14 in reply to: ↑ 13 Changed 9 years ago by The attached patch removes all proof=False from multivariate polynomial factor() calls (being bold!) and adds examples from this ticket as doctests. Paul, can you install #11339, #10903 and this patch and try to break it? the 2nd patch from #11339 fails to apply on 4.7.1: sage: hg_sage.import_patch("/tmp/trac_11339_refcount_singular_polynomials.patch") cd "/localdisk/tmp/install/sage/devel/sage" && hg status cd "/localdisk/tmp/install/sage/devel/sage" && hg status cd "/localdisk/tmp/install/sage/devel/sage" && hg import "/tmp/trac_11339_refcount_singular_polynomials.patch" applying /tmp/trac_11339_refcount_singular_polynomials.patch patching file sage/rings/polynomial/multi_polynomial_libsingular.pyx Hunk #12 FAILED at 765 Hunk #13 FAILED at 783 Hunk #15 FAILED at 820 Hunk #16 FAILED at 862 Hunk #17 FAILED at 883 Hunk #18 succeeded at 832 with fuzz 2 (offset -72 lines). 5 out of 103 hunks FAILED -- saving rejects to file sage/rings/polynomial/multi_polynomial_libsingular.pyx.rej abort: patch failed to apply I could try 4.7.2.alpha, where can I download it? 
Paul comment:15 Changed 9 years ago by Yes, you'll need 4.7.2.alpha3: comment:16 Changed 9 years ago by - Status changed from needs_review to needs_work - Work issues set to spkg fails to build Martin, with 4.7.2.alpha3 the two patches from #11339 apply ok, but the new skpg fails to compile (on a 64-bit Core2 under Ubuntu): ... make install in Singular make[2]: Entering directory `/localdisk/tmp/install/sage/spkg/build/singular-3-1-3.svn-algnumbers/src/Singular' svnversion >svnver svnversion: relocation error: /usr/lib/libldap_r-2.4.so.2: symbol gnutls_certificate_get_x509_cas, version GNUTLS_1_4 not defined in file libgnutls.so.26 with link time reference make[2]: *** [svnver] Error 127 make[2]: Leaving directory `/localdisk/tmp/install/sage/spkg/build/singular-3-1-3.svn-algnumbers/src/Singular' make[1]: *** [install] Error 1 make[1]: Leaving directory `/localdisk/tmp/install/sage/spkg/build/singular-3-1-3.svn-algnumbers/src' make: *** [/localdisk/tmp/install/sage/local/bin/Singular-3-1-3] Error 2 Unable to build Singular. real 5m33.608s user 4m24.880s sys 0m15.360s sage: An error occurred while installing singular-3-1-3.svn-algnumbers Please email sage-devel explaining the problem and send the relevant part of of /localdisk/tmp/install/sage/install.log. Describe your computer, operating system, etc. If you want to try to fix the problem yourself, *don't* just cd to /localdisk/tmp/install/sage/spkg/build/singular-3-1-3.svn-algnumbers and type 'make check' or whatever is appropriate. Instead, the following commands setup all environment variables correctly and load a subshell for you to debug the error: (cd '/localdisk/tmp/install/sage/spkg/build/singular-3-1-3.svn-algnumbers' && '/localdisk/tmp/install/sage/sage' -sh) When you are done debugging, you can type "exit" to leave the subshell. Any clue? Paul comment:17 Changed 9 years ago by The new SPKG should fix this issue. comment:18 Changed 9 years ago by - Work issues changed from spkg fails to build to Proof=... still present I finally managed to apply the 4 patches, I ran sage -b, then installed the spkg, then ran sage -b again. However Proof=... seems to be still present somewhere: sage: R.<x,y> = GF(2)[] sage: p=x^8 + y^8; q=x^2*y^4 + x; f=p*q sage: f.factor(proof=False) x * (x + y)^8 * (x*y^4 + 1) sage: f.factor() ... NotImplementedError: proof = True factorization not implemented. Call factor with proof=False. Paul comment:19 Changed 9 years ago by for the proof thingy you'll also need to apply the patch from this ticket, i.e.,trac_10902_factor_fixed.patch comment:20 Changed 8 years ago by it seems to work much better (no bug found so far) but some factorizations are very slow (or hang), for example: sage: K=GF(4,'a') sage: a=K.gens()[0] sage: R.<x,y> = K[] sage: f=(a + 1)*x^145*y^84 + (a + 1)*x^205*y^17 + x^32*y^112 + x^92*y^45 sage: f.factor() comment:21 Changed 8 years ago by comment:22 Changed 8 years ago by I've rebased the patch to 5.0.beta13 and made sure that sig_on/sig_off are always called, so that one can interrupt the factorisation if it hangs. comment:23 Changed 8 years ago by Sage 5.0.beta13 fails to build on my computer, thus I'll have to wait for beta14 to review that ticket... Paul comment:24 Changed 8 years ago by thanks to Jeroen, I've managed to build Sage 5.0.beta13. 
However the issue reported in comment 20 is still present, whereas it took about one second in Sage 4.8: sage: K=GF(4,'a') sage: a=K.gens()[0] sage: R.<x,y> = K[] sage: f=(a + 1)*x^145*y^84 + (a + 1)*x^205*y^17 + x^32*y^112 + x^92*y^45 sage: time r=f.factor(proof=False) Time: CPU 1.08 s, Wall: 1.23 s sage: r y^17 * x^32 * (y^67 + x^60) * ((a + 1)*x^113 + y^28) I'll be happy when this will be reported in another ticket. Paul comment:25 Changed 8 years ago by I've created comment:26 Changed 8 years ago by - Status changed from needs_review to needs_work - Work issues set to doctest failure one doctest fails with sage-5.0.beta13: tarte% ../../../sage -t devel/sage-10902/sage/categories/quotient_fields.py ... Expected: Traceback (most recent call last): ... NotImplementedError: proof = True factorization not implemented. Call factor with proof=False. Got: (x + y)^-1 * y * x Paul Changed 8 years ago by comment:27 Changed 8 years ago by - Keywords sd34 removed - Status changed from needs_work to needs_review - Work issues doctest failure deleted I've updated the patch. comment:28 Changed 8 years ago by Martin, why did you remove the sd34 keyword? I had put it since I worked on this ticket during SD34. Paul comment:29 Changed 8 years ago by - Keywords sd34 added Oh, I figured I should remove it since it wasn't resolved at SD34. I added it back. comment:30 Changed 8 years ago by - Reviewers set to Paul Zimmermann - Status changed from needs_review to positive_review good work, thanks Martin! Paul PS: I removed my name as author since the patch is entirely from you. comment:31 Changed 8 years ago by - Milestone changed from sage-5.0 to sage-5.1 comment:32 Changed 8 years ago by - Merged in set to sage-5.1.beta0 - Resolution set to fixed - Status changed from positive_review to closed I reported this upstream:
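As a side note for readers following along, the consistency check used in several of the comments above (computing f - f.factor() or comparing the expanded factorization with the original polynomial) can be wrapped in a couple of lines of Sage. The ring and polynomial below are illustrative choices, not taken from the ticket.

    # Sketch of the check used in the comments above: factor, expand, compare.
    # Run inside a Sage session; GF(5) and f are arbitrary illustrative choices.
    R.<x,y> = GF(5)[]
    f = (x^3*y + 2*x*y^2 - 1) * (x^2 - y^5)
    lf = f.factor()              # with this ticket merged, no proof=False is needed
    assert lf.expand() == f      # the factors must multiply back to the original
    print(lf)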
https://trac.sagemath.org/ticket/10902
CC-MAIN-2020-16
en
refinedweb
Hide Forgot The program segfaults if you link the pthread library as the first library, because on intel it works in this order I think it a bug in the egcs alpha. It needs KDE but this is not a bug in KDE, because it works on my Intel. /* This program runs in both versions on my Intel RedHat 5.2 but on the alpha it gives a segfault if the pthread library is linked at first. I think this is a bug. Intel works: RH5.2 Alpha Debian segfault. * egcs-2.90.29 980515 (egcs-1.0.3 release) * RedHat 52 segfaults */ // KDE include files #include <kapp.h> #include <pthread.h> /* Success : g++ -I/opt/kde/include -I/opt/qt/include -I/usr/X11R6/include -L/opt/kde/lib -L/opt/qt/lib -L/usr/X11R6/lib test.cpp -lkfile -lkfm -lkdeui -lkdecore -lqt -lXext -lX11 -lpthread SEGFAULT : g++ -I/opt/kde/include -I/opt/qt/include -I/usr/X11R6/include -L/opt/kde/lib -L/opt/qt/lib -L/usr/X11R6/lib test.cpp -lpthread -lkfile -lkfm -lkdeui -lkdecore -lqt -lXext -lX11 The only difference is the linking order of the pthread library. */ static void *writerThread(void *arg){ printf("a\n"); return NULL; } int main(int nargs,char** args) { printf("hi\n"); KApplication a( nargs, args, "kmpg" ); pthread_t tr; printf("hello\n"); pthread_create(&tr,NULL,writerThread,NULL); printf("b\n"); a.exec(); return 0; } Are the other kde libraries linked in the example thread safe? Is this still valid on the RH 6.0? assigned to pbrown for follow ups. This still happens under RHL 6.0 on alpha. Cristian, even if the kde libraries aren't thread safe, all the KDE/X stuff is being done in one thread, so it shouldn't segfault, right? I am no linking expert, but this behaviour does seem strange to me too. Jim, can you please take a look into this? What libraries do I need to use to reproduce this bug? I tried it on a few systems here and I just got things like: /usr/include/kconfigbase.h:80: qcolor.h: No such file or directory It looks like it may be a real bug (link order does matter, but getting it wrong should not - usually - cause core dumps). Preston, is there a machine at redhat (say a chroot jails on porky/jetson?) which can reproduce this?
https://bugzilla.redhat.com/show_bug.cgi?id=816
CC-MAIN-2020-24
en
refinedweb
Different ways to terminate a program in C In this article, we are going to learn about various methods by which we can terminate a C program which is currently in execution. Starting with the most widely used and most obvious function that is by using the exit() function. Some of the common ways to terminate a program in C are: - exit _Exit() - quick_exit - abort - at_quick_exit We will, now, go through each of the above methods in detail. exit() This function requires the declaration of the C library stdlib.h in which it is defined. And to use this function in C++ we may have to include the C++ library cstdlib. It should be noted that this exit() function is not a program control statement used in C programs like break/goto/continue. This function doesn't affect the control flow rather it exits the / closes the current program in execution. This function forces forcefully termination of the current program and the control is transferred to the Operating system. This function immediately interrupts and closes the current program in execution. The general prototype of the exit() is :- void exit(int returning_value) Now let's look at a pseudo code to identify when we actually use a exit() statement during normal execution of our program. Here the example we will be considering the example of is graphic card is present or not because graphic card is must for running high end games. #include<stdlib.h> int main(void) { if(!graphic_card_present)exit(1); letsplay(); /* Any code we can define */ return 0; } Note that the "returning_value" is returned to the calling process , it means mostly it will return to the operating system. The argument is taken to be 0 to indicate normal execution and if some other value is used then that's value is used to show some kind of error. Two Macros that are used as argument here are :- EXIT_SUCCESS This macro means successful execution of the program. EXIT_FAILURE This macro indicates unsuccessful execution of the program. These above two macros are used in the case of passing the argument to the exit function. Sample code using exit() :- void menu(void) { char ch; printf("1. Check Spelling\n"); printf("2. Correct Spelling Errors\n"); printf("3. Display Spelling Errors\n"); printf("4. Quit\n"); printf(" Enter your choice: "); do { ch = getchar(); /* read the selection from the keyboard */ switch(ch) { case '1': check_spelling(); break; case '2': correct_errors(); break; case '3': display_errors(); break; case '4': exit(0); /* return to OS */ } } while(ch!='1' && ch!='2' && ch!='3'); } _Exit() The major difference between this _Exit() and exit(argument) is that the exit(argument) perform cleaning before termination but the _Exit() performs no cleaning before termination of program. Cleaning here means closing all the pointers , opened files ,flushes the buffer etc. The general prototype of the above function :- void _Exit( int exit_argument ); Sample code using _Exit() :- #include <stdlib.h> #include <stdio.h> int main(void) { int t=2; while(t--) { printf("Enter P to run program and E to terminate the program\n"); char c; scanf("%c",&c); if(c=='E') { printf("Program Terminated\n"); _Exit(0); } else printf("Program running\n"); } } The result of the above code for the given inputs :- Inputs:- P E Outputs:- Enter P to run program and E to terminate the program Program running Enter P to run program and E to terminate the program Program running quick_exit This function does cleaning before exiting but the cleaning before exiting is not complete cleaning. 
The most important characteristic of this function is it exits the called functions is the order opposite to the order in which functions were called. The above statement would be clear from the sample code mentioned below :- #include <stdlib.h> #include <stdio.h> void fun1(void) { puts("Deleted the first function"); fflush(stdout); } void fun2(void) { puts("Deleted the second function"); } int main(void) { at_quick_exit(fun1); at_quick_exit(fun2); quick_exit(0); } The result of the above code for the given inputs :- Output:- Deleted the second function Deleted the first function In the above output just notice that the functions are terminated in the reverse order of the orderin which they were called. This functions can be assumed analogous to destructor in C++. abort This function doen't close the files that are open . This function also doesn't delete the buffers before closing the program neither does it clears the buffer. Prototype of the Function:- void abort ( no arguments ); at_quick_exit This function requires the declaration of the header <stdlib.h> . Prototype of the Function:- int at_quick_exit( void (*func)(void) ); Now notice that the prototype of this statement is quiet different from the ones we saw recently. Here the argument void (*func)(void) is actually a function pointer . So , whats the importance of the argument passed in the above function prototype and what exactly does that indicate ? The void (*func)(void) argument passed in the above function is actually a pointer to a function and this function is called just before the program is ended. The function int at_quick_exit( void (*func)(void) ) returns 0 if execution is successful else the any other value is returned. Now let's see the functions that help in interacting with the enviroment :- - system - getenv, getenv_s 1. SYSTEM This function requires the declaration of the header <stdlib.h> . This function helps in calling the enviroment's command processor by passing the argument . If system() is called with a nullpointer,it will return non-zero if a command processor is present, and zero otherwise. In simple words it means we can pass the commands that can be passed using the terminal of Operating System and returns the command after the execution is done. Prototype of the Function:- int system( const char *command ); Now let's see an example code for better understanding. #include <stdio.h> #include <string.h> int main () { char command[50]; strcpy( command, "dir" ); system(command); return(0); } Output Note that change the key word used for the respective Operating SYstem Terminals otherwise error will be shown. 2. GET_ENV It is a related function to system(). This function requires the declaration of the header <stdlib.h> and if we are using C++ then we have to use the library < cstdlib >. The getenv() function returns a pointer to environmental information associated with the string pointed to by name in the implementation-defined environmental information table. The string returned must never be changed by the program. Return value when it gets successfully executed this function shall return a pointer to a string containing the value for the specified name. Prototype of the Function:- char *getenv(const char *name); A sample code to help you understand better the code :- #include <stdio.h> #include <stdlib.h> int main(void) { char *env_p = getenv("PATH");//the required function is being used here if (env_p) // if the function is not null printf("PATH = %s\n", env_p); } 3. 
GETENV_S()

This getenv_s() function searches for the environment variable with a given name in the environment that we are using. The detailed function definition (as specified in C11, Annex K) is :-

errno_t getenv_s( size_t *restrict len, char *restrict value, rsize_t valuesz, const char *restrict name );

This function is rarely used, so it is usually better to avoid it.

Now let's see signal management. C provides support for some signal management through the following :-

1. sig_atomic_t

It is a typedef present in the header <signal.h>. It is an integer type that can be accessed as an atomic entity even in the presence of asynchronous interrupts.

2. SIG_DFL AND SIG_IGN

Both of these macros are present in the header <signal.h>. They define the two built-in signal-handling strategies: default handling (SIG_DFL) and ignoring the signal (SIG_IGN).

3. signal

It is a function present in the header <signal.h>. It helps in setting the signal handler for a particular signal.

4. raise

It is a function present in the header <signal.h>. It sends (raises) the given signal, which causes the handler that has been installed for that signal to run.

5. SIG_ERR

It is a macro present in the header <signal.h>. It is the value returned by signal() to indicate that an error has occurred.

Finally, let's see some of the signals that are present in C. These are all defined as macro constants :-

- SIGABRT
- SIGSEGV
- SIGILL
- SIGINT
- SIGFPE
- SIGTERM

With this article at OpenGenus, you must have a complete idea of the different ways to terminate a C program which is in execution. Enjoy.
https://iq.opengenus.org/ways-to-terminate-a-program-in-c/
CC-MAIN-2020-24
en
refinedweb
Wikiversity:Meetings/Learning on Wikiversity/03June2008/log < Wikiversity:Meetings | Learning on Wikiversity | 03June2008(Redirected from Wikiversity:Meetings/Learning on Wikiversity/log)Jump to navigation Jump to search This is a log of a meeting on IRC on 3rd June, 2008 about 'learning in Wikiversity' - see Wikiversity:Meetings/Learning on Wikiversity. A short summary you can find here. intro[edit] - [5:05pm] cormaggio: ok, maybe we can start and people can join in as they arrive (McCormack might be here) - [5:07pm] cormaggio: I'm hoping to get some discussion going around 'learning on Wikiversity' - how it works, what problems exist, what we've experienced, and what we could do about it.. - [5:08pm] cormaggio: this is partly towards my research - hence this meeting being logged - and I'd like to use some of this discussion as part of my PhD - [5:09pm] cormaggio: but I'd like to make clear that I will check this with everyone afterwords to see whether it is ok that I use something, or whether it should be anonymized etc - [5:09pm] cormaggio: is that ok, or any questions so far? - [5:09pm] assassingr: I'm ok with that - [5:10pm] Erkan_Yilmaz: what is the topic of your PhD ? and why the discussion now ? did there anything special happen? - [5:10pm] cormaggio: (that's not a request for consent, btw - just to make sure) - [5:10pm] Erkan_Yilmaz: with using the info: ok - [5:10pm] cormaggio: the topic of my PhD is 'Developing Wikiversity through action research' - [5:11pm] cormaggio: (but you don't need to read that!) - [5:11pm] Erkan_Yilmaz: no worries :-) - [5:12pm] Erkan_Yilmaz: so and now you are at the end of your PHD and want some more info? - [5:12pm] cormaggio: why the discussion now - I'm hoping to get an *action plan* together for improving our understanding of how learning works on Wikiversity, and implementing some actions to improve learning on Wikiversity - [5:12pm] cormaggio: no - I still see this PHD as in process - [5:12pm] cormaggio: very much so - [5:13pm] Erkan_Yilmaz: and final question: - [5:13pm] cormaggio: there has been a lot of discussion - and action - but I'm not sure I can fit it all into my research question - [5:13pm] Erkan_Yilmaz: do you also agree that we can publish the chat afterwards? - [5:14pm] Erkan_Yilmaz: (well we can talk also later about that) - [5:14pm] cormaggio: oh yes, i agree of course - but I think it's only fair to check *afterwards* just in case someone isn't comfortable with something they said earlier - [5:14pm] Erkan_Yilmaz: of course - [5:14pm] Erkan_Yilmaz: you mean like: Osama bin Laden? - [5:15pm] cormaggio: that's what i meant earlier about feeling comfortable - [5:15pm] cormaggio: well, I can't see how Osama fits in here, but.. :-) - [5:15pm] Erkan_Yilmaz: :-) - [5:16pm] assassingr: and a question from me if Erkan finished with his - [5:16pm] _Lou: hi you all ! - [5:16pm] Erkan_Yilmaz: temporarily I am finished :-) - [5:16pm] cormaggio: and another thing I'd like to discuss is the research itself - how you see it and your own relation to it; whether it's been of any influence to you; whether you feel like you're involved .. - [5:16pm] assassingr: You are mostly focusing in en.wv? 
- [5:17pm] cormaggio: yes assassingr - that's a good question - it's mainly on en (even though I'm very interested in other projects) - [5:17pm] Erkan_Yilmaz: well that would mean you would have to participate there also :-) - [5:18pm] Erkan_Yilmaz: or get the knowledge from there to here - [5:18pm] cormaggio: yes, exactly - though it;s interesting to hear other projects' participants' reports of their experiences - [5:18pm] cormaggio: well, you're both here - so you've brought some knowledge ;-) - [5:19pm] Erkan_Yilmaz: another question: how long will this meeting now go ? is there a time limit? - [5:19pm] cormaggio: ok - i had thought about 1.5 hours - is that ok, or do you have a cut off point? - [5:19pm] Erkan_Yilmaz: that is ok, as long I have coffee - [5:19pm] cormaggio: maybe 1 hour could be enough - we'll see - [5:20pm] assassingr: we can see that as the conversation rolls what is the major obstacle to learning in Wikiversity?[edit] - [5:20pm] cormaggio: so maybe let's start with a problem ;-) - what is the major obstacle to learning in Wikiversity? - [5:21pm] cormaggio: what difficulties have we observed people having? - [5:21pm] cormaggio: or what difficulties have we had ourselves? - [5:21pm] Erkan_Yilmaz: depends on the person - how he learns - one point can be: the used means for learning - [5:21pm] Erkan_Yilmaz: WV is mostly just a collection of pages, static - [5:21pm] Erkan_Yilmaz: they can, but must not contain multimedia - [5:22pm] cormaggio: so, it's a case of not having stimulating enough materials, or..? - [5:23pm] Erkan_Yilmaz: yes, that is one - I mean take for example vocabulary learning programs like Rosetta they use many things which stimulate the senses - [5:23pm] Erkan_Yilmaz: such things is a little rare at WV - [5:24pm] Erkan_Yilmaz: the more senses you use - e.g. when eating - you get a better experience - [5:24pm] cormaggio: multimedia has been a long source of frustration for me (and many other people) - our lack of an easy to embed media player is annoying - [5:24pm] assassingr: well, at the current stage I think that the lack of material is one problem, but that's not due to the learning model of WV - [5:24pm] Erkan_Yilmaz: media upload: yes, that is really bad from commons side to not allow more formats :-( - [5:24pm] cormaggio: Erkan - right, so we could be making more interactive materials.. - [5:25pm] cormaggio: assassingr - that's great - I'd like to know what you think of the 'learning model" of WV - is it one model, or..? - [5:25pm] Erkan_Yilmaz: lack of material: that depends what you are searching for I guess - I think stuff like mathematics, IT and so were (e.g. on de.WV) the first things to come - which category/topics you mean assassingr? - [5:26pm] assassingr: Erkan_Yilmaz: well, the math, physics and engineering departments which I have checked are not so developed I think - [5:27pm] cormaggio: Erkan - the formats issue is a long-standing and very complex one - this is a discussion that's been around the Foundation for a long time - there is a sliding scale from ultra-hardcore people to semi-hardcore people to people that want any content as long as it's free... - [5:27pm] Erkan_Yilmaz: depends who is reading it - e.g. 
most people think the materials must be for university level, since WV contains 'versity' - [5:27pm] assassingr: but I think that's about to change in 1 or 2 years from now - take WP as an example - [5:27pm] Erkan_Yilmaz: but in general assassingr you are right: the things at WV are less then elsewhere - [5:28pm] Erkan_Yilmaz: I also think that material will increase with time: people just think of the image of WP and arrive here at WV and will extend the WV pages - [5:28pm] cormaggio: well, on 'not much material', we can take for granted that it's "early days" at WV, right? - [5:28pm] Erkan_Yilmaz: yup - [5:28pm] Erkan_Yilmaz: WP is now also about 7 years old or? - [5:29pm] cormaggio: yeah - Jan 15 2001 - [5:29pm] assassingr: cormaggio: for starters, I don't believe than any learning model should be static or that there should exist just one - [5:29pm] cormaggio: I'm with you there assassingr :-) - [5:30pm] assassingr: we could try to develop more learning models and try to categorize them on how people can learn - [5:31pm] cormaggio: back to the not much material question - I wonder if there is something holding back content development, as well as it being early days (why, here's McCormack now!) - [5:31pm] assassingr: (i don't know if that does make any sense in English, I'm not familiar with education vocabulary so much) - [5:31pm] cormaggio: I understand assassingr - and I agree again - [5:32pm] Erkan_Yilmaz: any task requires time - [5:32pm] Erkan_Yilmaz: and to give for a task x time that task x must have prio for person y - [5:33pm] Erkan_Yilmaz: I think people are not so much motivated probably because WV is still young and not like WP or they just don't want to share or didn't learn to share - [5:33pm] McCormack: Nothing is holding back content development except lack of contributor retention. - [5:33pm] Erkan_Yilmaz: because when they press the save button the things don't belong to them anymore - [5:33pm] Erkan_Yilmaz: anyone can theoretically edit it - [5:34pm] ¥ McCormack says "contributor retention" again. - [5:34pm] cormaggio: well, that's interesting McCormack - that begs the question why we're not retaining contributors.. - [5:34pm] McCormack: Possibly go back a step and ask first why it is that contributor retention is the problem and not something else? - [5:35pm] cormaggio: hold on Erkan - are you saying that this dynamic is different on WV than WP? - [5:35pm] McCormack: Successful projects have exponential development - which is what we would like to see. - [5:35pm] Erkan_Yilmaz: of course - [5:35pm] assassingr: also - when we were about to begin el.wv there was someone that told me that it wouldn't be easy to approach academic people. That way "expert knowledge" wouldn't be freely available and still it would be a prerogative for some few people - [5:35pm] Erkan_Yilmaz: one point is: WV and WP have different goals - [5:35pm] McCormack: WV has linear development, upwards, steady, but a straight line. - [5:35pm] assassingr: I don't know if I agree with that, but it's something to think about - [5:35pm] darkcode: it's probably because people try to treat Wikiversity like any other classroom/college/university oriented website when it's not - [5:36pm] cormaggio: Erkan - I just want to make sure I understand - the fact that anyone can edit affects contributors on WV more than those on WP? 
- [5:36pm] cormaggio: hi darkcode - [5:37pm] ¥ assassingr waves at McCormack and darkcode - [5:37pm] Erkan_Yilmaz: yes, because on WP people try to be objective because they are creating an encyclopedia article, but on WV it is about learning in all aspects. That means also non-objective things could be posted - [5:37pm] darkcode: I think it's the mix of the fact anyone can edit and people trying to treat Wikiversity like other university oriented websites that is the problem - [5:37pm] Erkan_Yilmaz: which then could be edited immediately because someone doesn't like it - [5:37pm] cormaggio: assassingr - what was the perceived problem with academic people again? - [5:37pm] Erkan_Yilmaz: in WP they have some kind of "guarantee" that when they post neutral enough that their contribution stays - [5:38pm] darkcode: people contributing to university oriented websites are probably use to writing papers that aren't edited without there approved review - [5:39pm] Erkan_Yilmaz: yeah, people have when thinking of university some kind of fixed/structured/hierarchical viewpoint - [5:39pm] assassingr: Academic people wouldn't be interested in doing research in Wikiversity, that was the problem - [5:39pm] cormaggio: ok Erkan - I think I see better what you're saying - so have you any thoughts on how this subjectivity can be harnessed in a productive and perhaps wiki way? - [5:39pm] cormaggio: (or anyone else for that matter!) - [5:40pm] darkcode: Wikiversity probably needs to work on encouraging more NPOV if that's an issue, I think original research can be done in NPOV way, by also including the fact that people disagree with said original research or interpretations, and by including what other conclusions are possible - [5:40pm] cormaggio: assassingr - was this because it could be edited, or..? - [5:40pm] Erkan_Yilmaz: by having a common goal: that we welcome anyone and any edit - [5:40pm] Erkan_Yilmaz: even if it may be not neutral - [5:40pm] ¥ Erkan_Yilmaz sees it now more form an individual POV - [5:41pm] Erkan_Yilmaz: because most of the time think when thinking of WV on a group/ group of student - [5:41pm] cormaggio: ok - just to be provocative - would we accept a Nazi POV? - [5:41pm] Erkan_Yilmaz: s - [5:41pm] darkcode: nothing is learned when everyone agrees - [5:41pm] assassingr: cormaggio: no, because there are interested in doing research inside a bricks-and-mortar university, and research there could make more profit - [5:42pm] Erkan_Yilmaz: I: yes, because it brings then people together, people interact then, and they learn on the topic and share their viewpoints - [5:42pm] Erkan_Yilmaz: because: when people do such things in private you can not bring them to others viewpoints - [5:42pm] Erkan_Yilmaz: you could not convince them perhaps - [5:42pm] Erkan_Yilmaz: when they do it openly here: it attracts others (pro + contra) - [5:42pm] cormaggio: darkcode - that's interesting - but surely a wiki is a form of peer review - different from academia sure, but no less/more susceptible to conflict :-) - [5:43pm] darkcode: Wikiversity shouldn't be about trying to convince anyone of there personal POV, but rather about including all POVs in order for people to make up there own conclusions - [5:44pm] cormaggio: assassingr - oh sure, the profit motive will play its role - but there are others who see research as something which *must* be in the public domain. indeed, such research is being increasingly funded.. 
- [5:45pm] cormaggio: darkcode - that's a good point (or at least that's how I see it) - to allow the possibility to recognize and co-critique different perspectives on any given subject - [5:45pm] assassingr: sure, but from personal experience I could tell that there would be may professors that would mock wiki research - [5:45pm] cormaggio: oh yes assassingr - I know plenty myself :-) - [5:46pm] darkcode: there are professors who use to mock Wikipedia as well and now they tell there students to use Wikipedia to research stuff for there class - [5:46pm] cormaggio: but darkcode's idea of NPOV research is interesting - how would that work? - [5:47pm] cormaggio: for me the issue of "neutrality" is always problematic - but I see an analogy between WP's method of NPOV and academia's method of 'academic/scholarly practice' - [5:47pm] darkcode: neutral in this sense would mean including conflicting perspectives for any research done, and letting the audience make up there own conclusions from the research - [5:47pm] assassingr: (about the Nazis, there are some guidelines about fringe research at betawikiversity:Wikiversity:Research_guidelines/En#Fringe_research - just to make sure everyone knows) - [5:48pm] cormaggio: mmm, interesting - [5:48pm] darkcode: even in academic/scholarly research not everyone agrees with conclusions made - [5:49pm] cormaggio: yes, the 'Nazi' issue was the main question over whether/how we would allow for research on WV - one of the main questions at its set-up - [5:49pm] cormaggio: (not just Nazi - I'm just using it as an extreme example) - [5:49pm] Erkan_Yilmaz: so that means that some people (perhaps a majority) agrees to suppress a smaller group's idea? - [5:49pm] assassingr: oh, I know, I'm sure you don't have anything personal with Nazis :) - [5:49pm] cormaggio: so where are we - how to allow for subjectivity while promoting collaboration - yes? - [5:50pm] cormaggio: I'm just trying to keep track of multiple threads - [5:50pm] darkcode: by including all conflicting views/perspectives the research can be made less bias, also including any supporting facts/data from the research or from previous research, could booster its reliability - [5:50pm] cormaggio: does this relate to contributor retention perhaps? - [5:51pm] wknight8111: and we could all sit around the campfire and sing kumbaya - [5:51pm] cormaggio: Erkan - you make a good point - how do you validate who decides what is ok and what is not..? - [5:51pm] cormaggio: wknight8111 (!!) - [5:52pm] wknight8111: there are going to be an infinite number of conflicting views, and some of them are always going to be obvious quackery - [5:53pm] Erkan_Yilmaz: you forget one thing: one person has now a viewpoint x (which may be wrong), but by allowing him/her to post it, a process of learning starts, who says that x months later the person doesn't realize the viewpoint was wrong? 
by not allowing to post/stay you take away this chance - [5:53pm] darkcode: using Nazi as an example, could include the Nazi point of view, include information like historical info that may have influenced the Nazi perspective, include any scientific data that might be available that supports there view, as well as any data that is in conflict with the Nazi perspective - [5:53pm] Erkan_Yilmaz: people should not limit others freedoms right from the begin, we don't know what will happen with something in the future - [5:53pm] cormaggio: I'm going to semi-answer the question I just raised - if it is a community-consensus-driven process, doesn't that eliminate some of the ethical problems with majority rule? - [5:54pm] assassingr: I think Erkan's answer is different than yours - [5:54pm] wknight8111: and include the perspective that there were no Nazis, no holocaust. Include the idea that Nazis are aliens. Include the idea that Nazis are demons that return to earth every 666 years, include the idea that Jews are made of candy and need to be killed to release the candy, etc - [5:55pm] darkcode: cormaggio: not really because the majority can be equated with the most popular POV, and the minority with the least popular POV - [5:55pm] assassingr: wknight8111: o.O - [5:55pm] cormaggio: ok - I seem to have pressed the 'Nazi button' :-) - [5:55pm] Erkan_Yilmaz: well, even if someone posts false data, it is something they have fun with are interested in, they can use WV as their sandbox to spend their time here - we give them an atmosphere to live their dreams - [5:56pm] wknight8111: I'm just saying, there are a million points of view, and you can't include them all because all of them aren't serious - [5:56pm] assassingr: well, Erkan_Yilmaz, you are extremely liberal for my taste - [5:56pm] Erkan_Yilmaz: assassingr probably I also will behave different from situation to situation - [5:56pm] Erkan_Yilmaz: but when talking abstract, we should also be open for others ideas - [5:56pm] assassingr: that would be better - [5:57pm] cormaggio: I'm sure Erkan also has boundaries - like he says - [5:57pm] darkcode: well assume good faith plays some role in that, we can assume there serious until reason is given to believe there not being serious - [5:57pm] Erkan_Yilmaz: abstract is a high level and when you eliminate right there something you lose many other things below - [5:57pm] assassingr: don't take me wrong, I'm a liberal too, but using WV as a sandbox, that's.... new - [5:57pm] darkcode: well Wikiversity does have a sandbox server - [5:57pm] Erkan_Yilmaz: well it depends: JWS e.g. had some idea with a WV for vandals - [5:58pm] darkcode: vandals.wikiversity.org ;p - [5:58pm] Erkan_Yilmaz: WV should be for anyone - people should not hinder others form being here - [5:58pm] assassingr: darkcode was very right about that - the assume-good-faith part - [5:58pm] cormaggio: can I reframe this in pure terms of learning? I think Erkan is raising the point of how a person learns and participates as a *process* - I quite like that lens... 
- [5:58pm] Erkan_Yilmaz: yup, because I speak personally also - [5:58pm] Erkan_Yilmaz: when I joined the wiki verse: - [5:58pm] Erkan_Yilmaz: I saw it just as a medium to make ads - [5:58pm] Erkan_Yilmaz: but since then: - [5:59pm] Erkan_Yilmaz: I have had process changes - [5:59pm] Erkan_Yilmaz: and now as you all know: devote my time for the health of WV - [5:59pm] Erkan_Yilmaz: when someone right at begin had blocked me, I wouldn't be here - [5:59pm] Erkan_Yilmaz: and I guess you all would say: that is a loss - [5:59pm] Erkan_Yilmaz: you never know how someone develops over time - [5:59pm] cormaggio: absolutely! (loss) - [6:00pm] Erkan_Yilmaz: and WV is for me besides doing academic research/learning a place for the individual to learn and to develop - [6:00pm] cormaggio: thanks Erkan - [6:00pm] Erkan_Yilmaz: it goes a little more in the direction like: it is a virtual world which allows people to be home - [6:00pm] assassingr: well, I'm sure that everyone is acting the same way as regards ads - leave a message in the talk page - [6:01pm] Erkan_Yilmaz: but that is just my POV :-) - [6:01pm] cormaggio: (I'm not sure I understand "ads", btw) - [6:01pm] Erkan_Yilmaz: ads: I explain - [6:02pm] Erkan_Yilmaz: <comment removed, per Erkan's request. --Cormaggio> - [6:03pm] cormaggio: ah yes - ok - [6:04pm] cormaggio: so yes exactly - people who come in for purposes alien to the spirit of the project can become acculturated etc and become great contributors - [6:04pm] darkcode: one way I guess a page could be evaluated is by asking the question "is this a serious page that I can learn/study and allows me to easily make up my own conclusions?" - [6:04pm] cormaggio: thanks for bringing that up darkcode - [6:05pm] cormaggio: I was going to return to the question of what Wikiversity should include - as we were discussing - which is one part of this - [6:05pm] Erkan_Yilmaz: rehi assassingr :-) - [6:06pm] cormaggio: are there ways that we can help a random clicker evaluate a page perhaps? - [6:06pm] cormaggio: could/should we incorporate a feedback mechanism into every page? - [6:06pm] Erkan_Yilmaz: evaluate in regards to ? design/content quality/understandability? - [6:07pm] cormaggio: Erkan - right - could/should we ask all of these questions? Or would it put people off? - [6:07pm] Erkan_Yilmaz: feedback mechanism would be nice, e.g.a simple one: make a template on the main page asking: did you like the content ? click here and share your view - and the here goes to the talk page - [6:07pm] darkcode: well a random clicker is already going to evaluate it by different things, like by its appearance, if it catches there interest, if it sounds plausible, if it includes information they need or if it's lacking in information to backup any claims, etc. - [6:07pm] Erkan_Yilmaz: someone who is willing to contribute will add his opinion anyway - [6:08pm] cormaggio: hmm, surely "willing to contribute" spans many types of people in many different 'states'.. - [6:08pm] darkcode: different people will evaluate it in different ways, depending on what there needs are and purpose in looking for it on Wikiversity - [6:09pm] Erkan_Yilmaz: I think any kind of feedback is good even if it's vandalism (because vandalism shows, that the page got known) - [6:09pm] Erkan_Yilmaz: because so far you don't know besides edits on talk or main page that users saw it - [6:09pm] cormaggio: sure darkcode - perhaps simply a "tell us what you think" link which is a new section on the talk page.. 
- [6:09pm] Erkan_Yilmaz: we don't have access to server logs :-( - [6:10pm] Erkan_Yilmaz: we could go over search engine hits which appear at begin of a search or so, but still this is not reliable - [6:10pm] cormaggio: Erkan - yes, this is an issue - but WP has it and presumably so will we eventually.. - [6:11pm] Erkan_Yilmaz: implementing an easy to use and simple feedback mechanism which rewards people in some kind of form would be something that also non-altruistic people would then use - [6:11pm] McCormack is now known as Sleeping. - [6:11pm] Erkan_Yilmaz: the reward can happen e.g. already: by increasing edit count, because some only work for that :-) - [6:12pm] darkcode: well its been discussed before about having some kind of comments extension that acts similar to the cite extension, adding a comments section to the bottom of the page when say <comments/> is used on it - [6:13pm] Erkan_Yilmaz: interesting: I just thought: the talk page takes the user from the page where he is and takes him out of his thoughts probably - he doesn't see the relation so much more? - [6:13pm] Erkan_Yilmaz: I mean if you read the page and you are on the page your mind is still "there" - [6:13pm] cormaggio: that's another interesting point Erkan :-) - [6:13pm] Erkan_Yilmaz: and not radically - by your eyes - split - [6:14pm] Erkan_Yilmaz: comment on the main page sounds ok - [6:14pm] cormaggio: and I hadn't noticed that proposal darkcode - [6:14pm] Erkan_Yilmaz: but then: would that people like ? e.g. the authors? - [6:14pm] darkcode: its only been discussed before on IRC cormaggio - [6:14pm] cormaggio: ok - [6:15pm] cormaggio: but feedback is only one part of this, surely - [6:16pm] cormaggio: if we're discussing how to make learning resources more easily usable for people, there are other considerations.. - [6:16pm] darkcode: just as the Classes Clock idea has only been discussed on IRC - [6:17pm] ¥ Erkan_Yilmaz thinks we have lost some of the participants and should try to bring them in again? - [6:17pm] Erkan_Yilmaz: IRC participants now - [6:18pm] darkcode: I think some of the problems are that people try too hard to anticipate what people want and divide people into imaginary stereotypes - [6:18pm] Erkan_Yilmaz: hi Devourer - [6:18pm] cormaggio: how/where do you see that, darkcode? - [6:19pm] Erkan_Yilmaz: perhaps when they try to make a 100% perfect page instead of developing it slowly with errors - everybody can see? - [6:19pm] darkcode: well for instance I see in the Schools vs Topics vs Portals namespaces, it's assumed that people are only interested in a certain thing and would not want to be distracted by other things - [6:20pm] ¥ Erkan_Yilmaz has successfully avoided that discussion :-) - [6:20pm] darkcode: it's also in the page about the different type of Wikiversity participants, in calling people teachers, learners, whatever - [6:20pm] cormaggio: I wonder if "having errors" is seen as the key to other people jumping in and collaborating? Is that our model for participation? 
- [6:20pm] Erkan_Yilmaz: darkcode: yes, how they title others :-) - [6:20pm] Erkan_Yilmaz: but people just need such stereotypes to make the world look easier, simpler to understand - [6:21pm] Erkan_Yilmaz: seeing errors and jumping could mean 2 things: either I want to help or I want to show the other has made an error - [6:21pm] cormaggio: well, I think there's benefit in identifying uses of Wikiversity - because people will have clear needs to use Wikiversity for - [6:22pm] darkcode: it encourages the attitude that people are just learners, just students or just teachers, rather than encouraging people to teach what they know, while trying to learn what they don't know - [6:22pm] cormaggio: but I think there's always been the attitude that people's roles and needs will always be flexible in Wikiversity - changing from a teacher to a learner to a browser, etc - [6:23pm] atglenn: I kind of like the approach of collective learning, where there is not someone teaching to an audience but, while someone learns a topic they are recording their experiences learning it and each participant is... - [6:23pm] atglenn: ... adding content to the others' contributions - [6:23pm] darkcode: well that is a problem too, people's roles don't necessarily change either, people may need to browse in order to learn or teach something - [6:24pm] cormaggio: atglenn - that's something that I'd like to see more of actually - and develop a style of working that might encourage people to take control of their learning - [6:24pm] darkcode: someone way want to build on existing work and avoid duplicating what someone else has already done - [6:25pm] atglenn: with the "record after you learn it" model, (thanks Erkan) people have to wait for one person to - [6:25pm] atglenn: have complete knowledge of the topic (and then be dependent on that person's knowledge) - [6:25pm] atglenn: with a collective learning process there is no waiting and each person can contribute as they experiment, - [6:26pm] atglenn: so that we benefit from their struggles during the learning process - [6:26pm] darkcode: ya - [6:26pm] atglenn: rather than seeing only the finished product - [6:26pm] cormaggio: atglenn - I'd really like to start to define that model - that would be a great action step for this meeting - [6:26pm] darkcode: everyone should be learning while mentoring those who want to learn, what they already have learned, as they've learned it - [6:26pm] assassingr: for me teacher/learner attributes are just to indicate if someone works towards providing resources or he wants to get more resources, nothing more than that - [6:26pm] atglenn: and since none of us actually learns bu producing a finished product straight away... - [6:27pm] atglenn: we all learn by encountering obstacles and working through them... this seems like an added benefit. - [6:27pm] Erkan_Yilmaz: yup, learning is a mental change - [6:28pm] ¥ assassingr darkcode confused me... - [6:28pm] assassingr: oh, I give up - [6:29pm] cormaggio: well, there are probably a few things I haven't understood fully myself - [6:29pm] Erkan_Yilmaz: only a few? 
- [6:29pm] darkcode: well like say your learning C++, while your learning C++, you could be mentoring someone on how to declare variables in C++ because you've learned that concept, but your still learning, and there learning from you, so your both learning and a mentor for another learner - [6:29pm] * Erkan_Yilmaz understands many things not - [6:29pm] cormaggio: but I'm aware that this meeting has been going a while now and I think we might want to put our heads together for some action - [6:30pm] Erkan_Yilmaz: darkcode: yes - [6:30pm] Erkan_Yilmaz: by seeing how someone else does it, you have another viewpoint instead just learning yourself alone - [6:31pm] darkcode: you can also explain what worked for you, with many people explaining what worked for them, people have more methods of learning - [6:32pm] Erkan_Yilmaz: darkcode - by any chance your example was in relation to pair programming? - [6:32pm] cormaggio: I'm not sure atglenn's model and darkcode's model are the same... - [6:33pm] darkcode: also I was thinking of an alternative to primary school, secondary school, etc. that might be a better way to organize resources - [6:33pm] cormaggio: perhaps they are.. - [6:33pm] atglenn: I think my model is different. they are not mutually exclusive, certainly. - [6:33pm] cormaggio: ..but my point is: maybe we should write up this model explicitly on-wiki? - [6:34pm] darkcode: organize them by more general category like Resources for young children, Resources for older children, Resources for teens, Resources for young adults, Resources for adults, Resources for the elderly - [6:35pm] darkcode: those are more easily recognized categories that don't rely on as much on geographical location, but still rely some though on culture - [6:35pm] darkcode: might be a little less confusing - [6:35pm] darkcode: could also have Resources for everyone - [6:36pm] cormaggio: darkcode - yeah, but there's always going to be problems - [6:36pm] cormaggio: there are plenty of adult learners who don't know how to read advanced texts (or at all).. - [6:37pm] cormaggio: (but I'm just being my usual 'problematic self') - [6:37pm] darkcode: well the way I see it, these names make it easier for people to tell what kind of resources they might understand - [6:38pm] darkcode: with the educational levels approach more guesswork is involved I think - [6:39pm] darkcode: but I still think A depends on B, so B is categorized in Resources depending on A, is the best approach - [6:39pm] atglenn: can someone scour JWS' pages to see if he hasn't written up something like this already? - [6:39pm] cormaggio: right - well, we're not going to solve this one now perhaps.. - [6:39pm] atglenn: (sorry, was occupied) - [6:40pm] Erkan_Yilmaz: about the categorizing there is hot discussion atm at en.wv - [6:40pm] cormaggio: ..but I'd like to start to bring this to a conclusion of 'what next' - [6:40pm] Erkan_Yilmaz: just browse through the colloquium - [6:41pm] atglenn: *already - [6:41pm] cormaggio: I definitely like the idea of clarifying how collective learning would work - without the need for 'waiting' for others - [6:41pm] Erkan_Yilmaz: what means collective learning ? learning together or creating a collective mind which knows all what was learned? - [6:42pm] cormaggio: atglenn - I'm pretty sure JWS hasn't written it up quite like this - but I'll send on some relevant links - [6:42pm] atglenn: one downside with that model is that it requires a group that learns together (ideally)... 
and with our current numbers this is difficult. but this is the model we could strive for. - [6:42pm] atglenn: I'll try to write something, where should it go? - [6:42pm] cormaggio: Erkan - that's an interesting question - one for the wiki :-) - [6:42pm] atglenn: learning together, Erkan, and leaving a record of that learning process - [6:42pm] atglenn: in my head, anyways - [6:42pm] cormaggio: I think we should set up a page Wikiversity:Collective learning? - [6:42pm] Erkan_Yilmaz: leaving a record: like in a blog? - [6:43pm] cormaggio: or we can get a better name if we like - [6:43pm] Erkan_Yilmaz: page name: it is a wiki it can be changed anytime - [6:43pm] atglenn: not necessarily a blog. the on-wikt writings of the participants would be that record - [6:43pm] cormaggio: but there's also some other pages, two are linked in the channel info - [6:43pm] cormaggio: "learning model" and "discussion group" - [6:44pm] atglenn: well, I vote for the collective learning page and it can be linked to from anywhere else that's appropriate. - [6:45pm] cormaggio: I'd also like to pick up on this theme of subjectivity, personal development, and wiki inclusion/exclusion.. - [6:45pm] cormaggio: Erkan - do you have a place for that, or a new page required? - [6:45pm] atglenn: please elaborate - [6:45pm] Erkan_Yilmaz: place for? - [6:46pm] cormaggio: Erkan - the notion of how to include people into the process of learning, when they might first appear to be radical POV's or vandals etc - [6:46pm] cormaggio: atglenn - that was the theme in a nutshell ;-) - [6:46pm] atglenn: thanks! - [6:46pm] Erkan_Yilmaz: well isn't there - assume good faith and don't bite newcomers? - [6:47pm] cormaggio: right PLE[edit] - [6:47pm] cormaggio: I also think the notion of a PLE is relevant here.. - [6:47pm] Erkan_Yilmaz: :-) - [6:47pm] Erkan_Yilmaz: well PLEs as the name PLE are a new wave at WV - [6:47pm] Erkan_Yilmaz: it was started by you :-) - [6:47pm] cormaggio: - [6:48pm] cormaggio: yes :-) - [6:48pm] Sleeping is now known as McCormack. - [6:48pm] Erkan_Yilmaz: rehi McCormack - [6:48pm] Erkan_Yilmaz: well I have a PLE - [6:48pm] Erkan_Yilmaz: but Cormac - I don't see your PLE or? - [6:48pm] Erkan_Yilmaz: or it is a little hidden or not declared as PLE? - [6:49pm] cormaggio: but I think it's relevant to the question of how the individual interacts with the community/space, and how community facilitators can help people on their personal learning curves - [6:49pm] cormaggio: hmm, interesting observation Erkan! - [6:49pm] Erkan_Yilmaz: well every interaction/edit in WV is a step in the PLE - [6:49pm] cormaggio: I have to admit - a lot of my reflections are in a diary on my desktop - not much/all of it goes onto my blog :-( - [6:50pm] Erkan_Yilmaz: so, why are they private? - [6:50pm] Erkan_Yilmaz: just lazy? - [6:50pm] cormaggio: that's a long discussion - but basically so I don't have to think about what I'm writing - [6:51pm] cormaggio: whether it's ok to be read etc - [6:51pm] cormaggio: but really, that's another discussion.. - [6:51pm] Erkan_Yilmaz: ok - [6:52pm] cormaggio: I think contributor retention brought up by McCormack is an interesting point to take up - but it was left unexplored.. - [6:52pm] cormaggio: maybe next meeting, or on-wiki - [6:53pm] Erkan_Yilmaz: could someone explain the meaning of c.r. ? because when I search the translator there is much words :-( - [6:53pm] atglenn: are these discussions only about models for en wikt? 
- [6:53pm] cormaggio: anything else that anyone wants to focus on - arising from this meeting? - [6:53pm] atglenn: or are they meant to be cross-wv models? - [6:53pm] atglenn: *discussions - [6:53pm] cormaggio: atglenn - it would be great if ideas could cross-pollinate between projects - [6:54pm] atglenn: I wonder if we might want to have these discussions in the main channel, rather than here - [6:54pm] cormaggio: ok - I'll bear that in mind for the next meeting - thanks atglenn - [6:54pm] atglenn: ok - [6:55pm] cormaggio: oh yes, NPOV was a big theme here - and whether/how it's useful.. - [6:56pm] cormaggio: what's c.r. Erkan? where did you see it? - [6:56pm] Erkan_Yilmaz: c.r. = contributor retention - [6:57pm] cormaggio: ok - that would be 'how to keep (active) contributors (actively) contributing' - [6:57pm] Erkan_Yilmaz: I see - [6:57pm] Erkan_Yilmaz: making them feel comfortable and valued - [6:58pm] Erkan_Yilmaz: but we can elaborate next meeting also? - [6:58pm] cormaggio: that would be great, yes - [6:58pm] cormaggio: so, I'm not sure of the action coming from this meeting.. - [6:58pm] Erkan_Yilmaz: you could make some points where we could talk for the next meeting? - [6:58pm] cormaggio: I see the page on collective learning as a great step - but what else? - [6:58pm] Erkan_Yilmaz: and everybody can suggest also themselves - [6:58pm] cormaggio: yes, I'll do that - [6:58pm] cormaggio: exactly - [7:00pm] Erkan_Yilmaz: well one action is: we have laid the basics for the next meeting - [7:00pm] Erkan_Yilmaz: people can see what it is we talk about and then from that they can bring more input - we were just 4-5 persons now - [7:01pm] McCormack: So did you reach any conclusions? - [7:01pm] cormaggio: yes - I'll have to read this log and see what actions/problems we've defined - and it would be great if people could add to the list - [7:01pm] cormaggio: I'll post a summary of the meeting as well as the log on-wiki - [7:02pm] cormaggio: conclusions McCormack? well, we at least have something to define and something to talk about - that was the purpose here :-) - [7:02pm] cormaggio: so, unless someone has something burning to say, maybe we should call this (meeting) to a close for now..? - [7:03pm] Erkan_Yilmaz: 2h is enough - [7:03pm] cormaggio: it's been 2 hours - sorry Erkan :-) - [7:03pm] Erkan_Yilmaz: :-) - [7:04pm] cormaggio: well, thanks very much to everyone - it's been very interesting, and, I think, useful - [7:04pm] atglenn: thanks for having us. - [7:04pm] cormaggio: but the test of that is in the action to come of course - [7:04pm] cormaggio: atglenn :-) - [7:05pm] cormaggio: oops - phone - [7:06pm] cormaggio: sorry everyone! thanks again! - [7:06pm] Erkan_Yilmaz: bye then for this meeting, now IRC life begins
https://en.wikiversity.org/wiki/Wikiversity:Meetings/Learning_on_Wikiversity/log
CC-MAIN-2020-24
en
refinedweb
In this tutorial we’ll add a voice of reason to the chorus of people crying FAKE NEWS all day long. Let’s make a fake news detector that actually works with better reliability!. (2018). Building a Fake News Detector. A tech blog about fun things with Python and embedded electronics. Retrieved from: Introduction ‘Fake News’ is an umbrella term for an epidemic that is running rampant. At its core lie disinformation and distrust in experts. On the funny end of the spectrum dangle conspiracy nuts claiming lizard people rule the world, which would actually be flat, and which we never really left for the (strangely round) moon. Our neighbouring planet Mars is also round according to these people, it’s just that earth is flat because [reasons]. All in all it has pretty good entertainment value, and if anything it gives a sobering insight into the limitless nature of human ignorance and willful stupidity. On the other end of the spectrum lie dangerous ideas though. Vaccines have never caused autism, not a single shred of evidence is available for that. Yet, easily preventable disease is on the rise because parents trust random people sharing dubious links on Facebook more than they do actual doctors. Politics isn’t immune either. As I’m writing this, more and more links between Russia, Trump and alt-right media outlets are appearing. All the while more disinformation leaks into Facebook and Twitter feeds through what appear to be organised misinformation campaigns from various origins. People are being influenced in insidious ways to vote one way or the other, something that will have real implications for the next years to decades. It’s happening everywhere, also in my country. A growing party in the Netherlands for example claims on their website as a major talking point that there are over 150.000 illegal immigrants living in the country. That sure sounds bad. However, in reality the exact number is unknown. The most recent estimates are from a 2015 report and put it at 35 thousand, down from 41 something thousand in 2009, note that the numbers have been declining since at least 1997, when estimates put the number between 112.000 and 163.000. But that doesn’t fit the narrative of the party, because foreigners=bad, and shouting that things are getting worse fits well with the cynical “back in my day it used to be so much better” sentiment in the aging voter base. Numbers? Meh just write down something big (citing high, fictional numbers appears to be somewhat of a track record). Also omit the reference to the source of the data while you’re at it. It’s part of a larger trend, a toxic mix of voters lacking in critical thinking skills and politicians for whom only the number of votes matter. Now I’m a trained data scientist that works with lots of data daily, so it’s become a habit to always go for the numbers and sources. If an article makes a grand claim but the source reference is missing (it generally is), that is suspect right away. If a quick internet search doesn’t produce comparable estimates it is even more suspect. I appreciate that most people don’t do this. It takes time and effort to do so and our lives are busy enough as it is. If anything, this shows how flippin’ easy it is to slip falsehoods into a narrative. In the age of (dis)information excess it’s easy to find articles supporting whatever viewpoint is desired. ‘Personalised content feeds’ only serve to reinforce the viewer’s current beliefs by showing articles and videos similar to what’s been viewed before. 
It’s a sure way of never having to encounter something critical of held ideas or beliefs. It’s a comfortable, brainless confirmation bias in action. This promotes closed-mindedness, and is incredibly dangerous as it can lead to polarisation easily. We need a fix. Maybe deep learning can help us assimilate the data more effectively? Let’s take a look! In this post we’ll explore one way of building a fake news detector, as well as the caveats it brings. The main problem from the outset is that the data sets out there are not very big, but the classification task we want to perform relies on language which is very complex. Generally, getting a deep learning net to learn more complicated patterns means you need to give it more examples: you’d need a lot of data. We can either spend months and a lot of money to make our own dataset, or be smart about it: transfer learning with word embeddings! Getting Started In this tutorial we will: - Go over the theory of how to make a word embedding matrix; - Discuss why using pre-trained word embeddings is helpful; - Load up a fake news dataset; - Build two network architectures and evaluate; - Discuss the subtleties of fake news detection. To follow along with the code, you’ll need: - Python 3+ (Anaconda recommended); - Tensorflow (or Theano); - Keras; - A reasonable GPU to speed up training. Not necessary but highly recommended. Background: Word Embeddings Encode Semantic Relations For those familiar with principal component analysis (PCA), word embeddings will make intuitive sense. Using word embeddings means creating an n-dimensional vector space where each word is associated with its own vector. Each dimension in this vector space, then, serves a role similar to a ‘component’ from PCA: words that load similarly on it share some relationship. If this doesn’t make a whole lot of sense to you, don’t worry about it, read on below. Building a word embedding matrix is done using a shallow neural network with a bottleneck in the middle. Data generation for this task is done by mining a large body of text for word-target pairs. For each word in the text, a series of word pairs are generated by sampling a random target word within a window. Let’s visualise how it works to make it a bit more clear. Consider the sentence “The general lack of critical thinking ability helps fake news spread like cancer“. Generating word-pairs is usually done with a window function, meaning for each word in the body of text one or several target words are selected from the surrounding words (let’s say 5). Generating word pairs could then look something like this: Words like “the”, “of”, “and” are essentially meaningless words in this context and can be excluded. Whether this is done is up to the creator of the embedding matrix. The job of the network is to associate input words with target words. What the model essentially will learn to predict are the probabilities that certain words co-occur nearby each other. Words that co-occur do so because they share some relationship, which is exactly what we’re interested in because this carries semantic meaning. Background: The Making of an Embedding Matrix So how do we go from word pairs to an embedding matrix? Easy, actually! We need to make two decisions: how many words are in our dictionary, and how many dimensions the embedding vectors will have. We could then train a shallow neural network with these properties: To train the network we create a keyed dictionary. 
The input is a one-hot vector the same size as the dictionary, with all zeros except for the index location of the input word in the dictionary. For example if you encode “Aardvark”, which happens to be the second word in this example dictionary, the input vector looks like [0, 1, 0, .., 0]. The output layer has a shape equal to the input vector (size of dictionary) and needs to have a softmax activation function. Softmax scales the entire output vector so that the total sum is 1. We need this because here the output classes are mutually exclusive (each time we have one input word and one target word). Using softmax means the network outputs probabilities for each word given as an input. The network also has a single hidden layer of the size we want the embedding layer dimensionality to be. It is trained on the word pairs. Once training is complete the output layer is removed. We’re not interested in predicting a target word from an input word at all. We’re interested in the weights of the hidden layer, because from these we can make an embedding matrix. Doing something simple in an incredibly convoluted and dumb way After training we have a network that encodes for the individual word vectors. Each time we present the neural net with an input vector encoding a word, the corresponding weights in the hidden layer are selected (the word’s feature vector). There’s a catch though. For those with some background in neural nets: note that because the input vector is all zeros except for one position, we waste tremendous resources by multiplying all weight matrices in all neurons with almost all zeroes, except for a single weight in each matrix in each neuron corresponding to the position of the word in the dictionary. Why is this dumb? Because we’re just emulating a look-up table in an overly complicated manner! By multiplying with all zeros except for the input word entry, what the network is doing is selecting one weight in every neuron weight matrix by zeroing out the rest and multiplying the one weight with 1. A hidden layer with 300 units and a dictionary of 10.000 words will entail 3.000.000 multiplications, while we only use the output from 0.1% of all these multiplications (300 weights, one selected in each neuron). We can do better, and linguistic scientists certainly have. To make this process less idiotic the hidden layer weight matrices are extracted and encoded into a look-up table the of shape(size_of_dictonary, n_dimensions). This way, each word in the dictionary has an associated n-dimensional vector attached. To get a word vector, we only need to find its index in the dictionary, which is computationally a very cheap operation. Each vector describes that word’s position and orientation in n-dimensional space, which encodes the information on word relations into the spatial relations between the vectors. Consider this 3-dimensional example: The above figure shows a hypothetical visualisation of a 3-dimensional embedding where Cats and Dogs share a dimension (e.g. pets), lions and wolves share a dimension (e.g. wild animals), but there is also a relationship among them. Here you can visually see how an algorithm would solve the problem “Cats are to Lions, as Dogs are to ____”. It needs to find a vector with a similar length and orientation going from Dog to the target word! In mathematical terms, it needs to find the vector pair with the highest cosine similarity. So what good does it do the fake news detector? 
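Before moving on to that question, the analogy search just described fits in a few lines of NumPy. The 3-dimensional vectors below are made-up toy values for the figure's cat/lion/dog/wolf example, not real GloVe embeddings:

    import numpy as np

    # Toy 3-dimensional embedding lookup table (rows are word vectors).
    emb = {
        'cat':  np.array([0.9, 0.1, 0.2]),
        'lion': np.array([0.9, 0.8, 0.2]),
        'dog':  np.array([0.1, 0.1, 0.9]),
        'wolf': np.array([0.1, 0.8, 0.9]),
    }

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # "Cats are to Lions as Dogs are to ___": add the cat->lion offset to dog,
    # then pick the word whose vector has the highest cosine similarity.
    query = emb['dog'] + (emb['lion'] - emb['cat'])
    best = max((w for w in emb if w != 'dog'), key=lambda w: cosine(emb[w], query))
    print(best)  # 'wolf' with these toy numbers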
The word embeddings process will produce a numerically efficient way of representing semantic relations. That is great, since because they’re numbers we can do all sorts of cool math with it like the reasoning task mentioned above. But perhaps the most important feature is that they encode a lot of linguistic knowledge. Incorporating such an embedding matrix in a deep learning architecture for other tasks is a form of transfer learning. Transfer learning means that in stead of starting from scratch when training the network, we inject it with knowledge learned from another related task. This will help our fake news detector. In stead of presenting it with plain text and having it learn everything from scratch, by presenting it with each word’s embedding vector we offer it a much more information-rich diet. In practice this translates to much lower time and data requirements to fit the network properly. In essence, the network can now focus solely on learning to discriminate fake and real news, without having to first learn what the hell language is. Getting the Data First we need data. In this tutorial we’ll use the ‘train.csv’ dataset from here. Download it and extract it. The dataset is nicely balanced, with 10.387 real news articles, and 10.413 biased/fake articles. Now let’s write a function to load up the data. #LOAD THE DATA import pandas as pd import numpy as np import random def load_kagglefakenews(): #load training data and put into arrays df = pd.read_csv('Kaggle_FakeNews/train.csv', encoding='utf8') # be sure to point to wherever you put your file train_data = df['text'].values.tolist() #'text' column contains articles train_labels = df['label'].values.tolist() #'label' column contains labels #Randomly shuffle data and labels together combo = list(zip(train_data, train_labels)) random.shuffle(combo) train_data, train_labels = zip(*combo) del df #clear up memory return np.asarray(train_data).tolist(), np.asarray(train_labels).tolist() Call the function to load the data: train_data, train_labels = load_kagglefakenews() After loading the data we need to tokenize it, meaning we’ll split the text into separate words and remove punctuation and other unwanted characters. We then assign each word a unique numerical identifier that corresponds to the word’s position in our dictionary. We also set a few constants with values that will help construct the training sets and embedding layer, then tokenize our loaded data: from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.layers import Embedding from keras.utils import to_categorical import pickle MAX_NB_WORDS=50000 #dictionary size MAX_SEQUENCE_LENGTH=1500 #max word length of each individual article EMBEDDING_DIM=300 #dimensionality of the embedding vector (50, 100, 200, 300) tokenizer = Tokenizer(num_words=MAX_NB_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~') def tokenize_trainingdata(texts, labels): tokenizer.fit_on_texts(texts) pickle.dump(tokenizer, open('Models/tokenizer.p', 'wb')) sequences = tokenizer.texts_to_sequences(texts) word_index = tokenizer.word_index print('Found %s unique tokens.' 
% len(word_index)) data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) labels = to_categorical(labels, num_classes=len(set(labels))) return data, labels, word_index #and run it X, Y, word_index = tokenize_trainingdata(train_data, train_labels) And split the data into training, validation and test sets: #split the data (90% train, 5% test, 5% validation) train_data = X[:int(len(X)*0.9)] train_labels = Y[:int(len(X)*0.9)] test_data = X[int(len(X)*0.9):int(len(X)*0.95)] test_labels = Y[int(len(X)*0.9):int(len(X)*0.95)] valid_data = X[int(len(X)*0.95):] valid_labels = Y[int(len(X)*0.95):] Labeling is done so that 0 = real news, and 1 = fake or biased news Getting the Embeddings We’ll be using a 300-dimensional embedding matrix, trained on a text dump of the full wikipedia site as it was in 2014 + the Gigaword set (a large set of news article content). It has a vocabulary of 400.000 words, so it’s also a lot smarter than I am. Grab the dataset from the Stanford GloVe project page: (heads up: 822MB). The official Keras blog has a nice article about loading up word embeddings and using them, so let’s adapt the code to our needs. Extract the downloaded ZIP file, and consider the following block of code to build our embedding layer: def load_embeddings(word_index, embeddingsfile='wordEmbeddings/glove.6B.%id.txt' %EMBEDDING_DIM): embeddings_index = {} f = open(embeddingsfile, 'r', encoding='utf8') for line in f: #here we parse the data from the file values = line.split(' ') #split the line by spaces word = values[0] #each line starts with the word coefs = np.asarray(values[1:], dtype='float32') #the rest of the line is the vector embeddings_index[word] = coefs #put into embedding dictionary f.close() print('Found %s word vectors.' % len(embeddings_index)) embedding_layer = Embedding(len(word_index) + 1, EMBEDDING_DIM, weights=[embedding_matrix], input_length=MAX_SEQUENCE_LENGTH, trainable=False) return embedding_layer #and build the embedding layer embedding_layer = load_embeddings(word_index) With all that squared away we can go on to train a model! Baseline Performance Let’s set a simple convnet architecture as a baseline model. This will train fast and give us a baseline performance that a simple model will achieve. Remember that deep learning is a very empirical discipline: prototype quickly and refine along the way. Let’s perform some more imports and set the model architecture: from keras import Sequential, Model, Input from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D, Flatten, Dense, GlobalAveragePooling1D, Dropout, LSTM, CuDNNLSTM, RNN, SimpleRNN, Conv2D, GlobalMaxPooling1D from keras import callbacks def baseline_model(sequence_input, embedded_sequences, classes=2): x = Conv1D(64, 5, activation='relu')(embedded_sequences) x = MaxPooling1D(5)(x) x = Conv1D(128, 3, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(256, 2, activation='relu')(x) x = GlobalAveragePooling1D()(x) x = Dense(2048, activation='relu')(x) x = Dropout(0.5)(x) x = Dense(512, activation='relu')(x) x = Dropout(0.5)(x) preds = Dense(classes, activation='softmax')(x) model = Model(sequence_input, preds) return model Now let’s train it on the loaded Kaggle set and monitor for signs of overfitting. The telltale sign is that validation loss will start increasing steadily, and often validation accuracy will decrease. We will use ‘early stopping’ when we observe this. Early stopping simply means aborting training before the set number of epochs is reached. 
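In this tutorial the stopping point is picked by hand from the training logs, but Keras can automate the same decision through callbacks (the `callbacks` module is already imported above). A possible sketch, where the checkpoint filename is just an example:

    from keras.callbacks import EarlyStopping, ModelCheckpoint

    # Stop once validation loss hasn't improved for 2 epochs, and keep the
    # best weights seen so far on disk.
    my_callbacks = [
        EarlyStopping(monitor='val_loss', patience=2),
        ModelCheckpoint('Models/baseline_best.h5', monitor='val_loss',
                        save_best_only=True),
    ]
    # Passed to training via: model.fit(..., callbacks=my_callbacks)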
#put embedding layer into input of the model sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) model = baseline_model(sequence_input, embedded_sequences, classes=2) model.compile(loss='categorical_crossentropy', optimizer='adamax', metrics=['acc']) print(model.summary()) model.fit(train_data, train_labels, validation_data=(valid_data, valid_labels), epochs=25, batch_size=64) For me the model converged very quickly (thanks word embeddings!), and overfitting started occurring after a few epochs: Train on 18720 samples, validate on 1040 samples Epoch 1/25 18720/18720 [8==============================] - 8s 420us/step - loss: 0.2927 - acc: 0.8676 - val_loss: 0.1233 - val_acc: 0.9452 Epoch 2/25 18720/18720 [8==============================] - 7s 390us/step - loss: 0.0876 - acc: 0.9679 - val_loss: 0.0902 - val_acc: 0.9644 Epoch 3/25 18720/18720 [8==============================] - 7s 392us/step - loss: 0.0462 - acc: 0.9850 - val_loss: 0.0875 - val_acc: 0.9692 Epoch 4/25 18720/18720 [8==============================] - 7s 392us/step - loss: 0.0230 - acc: 0.9923 - val_loss: 0.0913 - val_acc: 0.9692 Epoch 5/25 18720/18720 [8==============================] - 7s 392us/step - loss: 0.0157 - acc: 0.9950 - val_loss: 0.1004 - val_acc: 0.9712 Great! In just a few epochs the model reaches very impressive performance statistics. We end up choosing the model from epoch 3 because afterwards, validation loss increases steadily. Time to test it on the held out set: model.evaluate(test_data, test_labels) 1040/1040 [8==============================] - 0s 228us/step [0.11665208717593206, 0.9634615384615385] Looking good! There are no big deviations from validation statistics so that indicates good performance. Now on to see what it thinks of a few real world examples. Luckily there’s a site that maintains a database of known sites known for fake news content:. Now let’s test the model out on an obviously fake article that came out a day or so ago. I’ve simply taken the first front page article (at the time of writing this) from the first site mentioned on opensources.co. First let’s write a function to tokenize the text of any article: def tokenize_text(text): sequences = tokenizer.texts_to_sequences(text) data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) return data Then paste the text into a string and tokenize it (be careful not to paste ads in it as well) text = """ """ #put the article text here #tokenize tok = tokenize_text([text]) #ask the model that it thinks model.predict(tok) #out comes array([[0.04552918, 0.9544708 ]], dtype=float32) Remember that 0=real, 1=fake. This indicates that the model gives it a 95.45% chance of being fake (or biased) news. That is correct, dear model! A less obvious example Sweden has been a recent target of a stream of fake news in Europe. According to some less-than-reputable sources, it’s on the brink of civil war, overrun by migrants, and contains the rape-capital of the world. While Sweden has problems, like any other country does, what is presented in biased articles is often a gross exaggeration of the existing situation or factually incorrect. Let’s see if the model can spot some things! We’ll take a look at this Daily Telegraph article and see what the model thinks of it: array([[1.7819532e-06, 9.9999821e-01]], dtype=float32) 99.99% change of being fake or biased. That certainly is interesting! 
Going through the article there is some loaded language and hyperboles, but without detailed knowledge of the situation it is difficult to spot what’s out of place. However, keep in mind that: - the telegraph receives lots of money (£900.000 a year) from Russia in exchange for reproducing content from the Russian state-controlled newspaper. - its chief political commentator resigned in 2015 in protest to the paper publishing articles heavily influenced by the interests and wishes of the large advertisers in the paper. - it has the highest number of upheld complaints regarding factual inaccuracies of all UK newspapers. - it has been implicated in regularly reproducing Chinese communist party propaganda following a payment deal from China that brings in £800.000,- a year. See the Wikipedia here for more info and references. I’ll leave it up to you to decide whether you think it’s a trustworthy source. I don’t think it is, and frankly I found it impressive that the model also seems to pick this up. The Wikipedia section linked, by the way, is marked real by the model with 99.27% certainty: wikitext = "" newspapers.[63] In October 2017, a number of major western news organisations whose coverage has irked Beijing were excluded from Xi Jinping's speech event launching new politburo. However, the Daily Telegraph, which regularly publishes Communist party propaganda in the UK in an advert section as part of a reported £800,000 annual contract with Beijing’s China Daily, has been granted an invitation to the event.[64] """ tok = tokenize_text([wikitext]) model.predict(tok) array([[0.99272037, 0.0072796 ]], dtype=float32) Now let’s look at reporting on the same topic (hand grenade possessions and misuse by migrants in Sweden) as reported by a reputable source: the BBC, the article here. Pasting the text and running the model, my version gives me: array([[0.99580574, 0.0041943 ]], dtype=float32) The model would classify this article as real with a 99.58% certainty. Let’s do one last random article, which happened to be front page CNN when I was writing this. Rated real with 89.01% certainty. Improving Performance Before wrapping up let’s take a look at a somewhat more complicated architecture using ‘smart’ neurons. LSTM layers implement a learnable forgetting mechanism that allows them to spot relationships between different elements in a series quite efficiently. They are widely used in time-series analysis, but can of course also be applied to language since it also shows strong relationships between individual elements. Consider this architecture including LSTM layers: def LSTM_model(sequence_input, embedded_sequences, classes=2): x = CuDNNLSTM(32, return_sequences=True)(embedded_sequences) x = CuDNNLSTM(64, return_sequences=True)(x) x = CuDNNLSTM(128)(x) x = Dense(4096, activation='relu')(x) x = Dense(1024, activation='relu')(x) preds = Dense(classes, activation='softmax')(x) model = Model(sequence_input, preds) return model I’ve trained the model on a synthesis of five large datasets: - the Kaggle Fake News set - the FakeNewsNet set - the Kaggle Fake News Detection set - the Fake Or Real News set - the Kaggle Getting Real About Fake News set In the end it reached an aggregate test accuracy of about 95% (the baseline model got stuck at 90%). Be cautious when combining these datasets: the labels are reversed for some. Concluding – What does this all mean, really? 
We’ve trained two models on a single dataset, and then expanded the LSTM model training to a synthesis of five available datasets to make it more versatile. Despite the impressive accuracies of the models in this tutorial they do come with some important disclaimers: the datasets the models have been trained on contain mostly articles of a specific type, namely politically themed and reporting on societal issues. Deep learning models are pretty bad at doing tasks outside what they’ve been trained for. As such, it is difficult for the model to generalise to news articles that fall outside the categories they’re trained on. Take for example a random BBC article about cats: It is classified as a fake with 65.23% certainty by the baseline model, most likely because of the hyperboles and strongly worded segments throughout the article. In this case they serve as a writing device that makes the article funny to read, but they are employed by many fake news writers as well to stir up emotion and make grand claims. The more complex LSTM was trained on a more diverse dataset and didn’t make the same mistake, even though the narrowly trained baseline got a higher test accuracy. This highlights the importance of being cautious when deploying these types of systems in the real world: even a high model accuracy doesn’t necessarily translate into high real-world performance, if the training sets are different from the data encountered in real-world applications. Performance cannot be guaranteed on just any text the fake news detector is presented with, it may be compromised if writing styles change, or if the fake news detector judges on a topic it is unfamiliar with. A word of caution Is a fake news detector with about 95% accuracy useable? After all, on the test set it will get about one in every 20 articles wrong. I think we would need to be very cautious when implementing it. One possibility is that false negatives (fake articles labeled real) can be misused by politicians to further their agenda (see! This fake article is not fake after all!). Do we really want that? Our model certainly hasn’t reached 100% accuracy on a diverse dataset, so this happening remains a very real danger. Another serious issue is that publicly available fake news detectors open up the possibility for false negative mining. Imagine you are writing an intentionally fake or biased news article and have fake news detectors available. What do you do? Bingo! You would just keep re-writing and adjusting the wording in your article until it fools the detector. Now the detector has become useless. This underscores another important thing: you cannot just implement a fake news detector on a website and be done with it. More harm than good will likely come from it that way. You also need to be aware of how people will abuse it (no, not might, people will abuse it), and need to implement some form of protection against this (and other) kind of abuse. But, perhaps the biggest hidden danger lies in the power vested in whomever hosts a robust fake news detector. If it works well enough and gains mainstream adoption, and if the wrong people gain control of it, there will be a tremendous opportunity to expand the spread of disinformation exponentially. If the general population trusts the machine and someone nefarious sabotages it, will we notice? Probably not for at least a while. The biggest and best fake news detector is still your brain. Remember to check your sources, be critical of what you read, and be vigilant. 
This book is an excellent resource on learning to be better at critical thinking. In the end, who is to say what is fake and what is real? The LSTM model certainly isn’t perfect, especially when you show it a text far outside of its expertise: it rated this blog post FAKE NEWS with a certainty of 99.91%. But don’t tell anyone, ok? 6 Comments AlfredJanuary 24, 2019 the load_kagglefakenews() function is returning a MemoryError palkabFebruary 6, 2019 Hi Alfred. Are you running a 32-bit Python maybe? How much memory is available, what OS are you using? – Paul TejasFebruary 8, 2019 same for me I’m running python 3.6 64 bit. It’s returning a memory error. palkabFebruary 8, 2019 Then it seems your machine doesn’t have enough RAM available. You could try increasing available swap space so that the OS can still allocate enough. What machine and OS are you running on? Cheers, Paul Parthesh SoniAugust 10, 2019 Man…you are the best. Your articles are very pleasant to read and are very informative. I didn’t used my local machine for running your code as it has very low specifications and I encountered errors like MemoryError. I instead used Google Colab for the whole process. Colab enviornment can be easy to use for your code. Thanks for this wonderful article. palkabAugust 16, 2019 Yes Colab is awesome! Thanks for the kind words
http://www.paulvangent.com/2018/08/31/building-a-fake-news-detector/?replytocom=1711
CC-MAIN-2020-24
en
refinedweb
BizTalk Does Not Validate My Message?!?! The ability to rigidly define the content of a data field is an essential element in any structured data definition. It is no surprise therefore one can define data constraints on elements and attributes in a schema through the means of various facets, such as enumerations, min length, regular expressions, and many more. The implementation of facets in a schema provides a large amount of value in terms of the data aspect of the contract to be used in an integration. Whether the integration is WCF-based or not, this data contract must be adhered to and messages that do not adhere to this contract must not be processed. There is, however, no value in defining the data constraints if these constraints can not be enforced at the time the data is received, as allowing "invalid" data to be processed will invariably lead to errors in the processing of that data. It is for this reason that validating a message against its schema is oftentimes a necessary requirement. Some might say that this validation should be performed by default, and any messages that do not validate should be rejected as early in the processing of the message as possible. BizTalk does not do this by default. At this point you may be saying what do you mean "BizTalk does not validate my message?" ... hence, the title of this post. The point is that BizTalk CAN validate your message, but it does not do so by default. One of the reasons for this is flexibility: you may prefer to do the validation later in your process, perhaps in an orchestration, where you can handle the validation errors in your own way; or you may want the receive pipeline to do the validation and have BizTalk route the failed message. This decision is not made for you by BizTalk ... your solution design will determine the appropriate implementation. In this post I will present two ways in which you can get BizTalk to fully validate your message against its schema as early in the receiving process as possible: at the receive pipeline. Using the XMLReceive Pipeline When using the standard XMLReceive pipeline in a receive location you can click on the ellipsis button to the right of the pipeline selection box to display the "Configure Pipeline" window. The screenshot below shows how the XMLReceive pipeline makes use of the XML disassembler pipeline component and the Party resolution pipeline component. When first setting up the receive location the ValidateDocument and DocumentSpecNames properties are False and blank respectively. In this default configuration the XMLReceive pipeline will try and match the received XML document's target namespace and root node name to the published schemas in BizTalk, and the received document's structure will be validated against the schema's structure. No data validation is performed on the received document, however. By setting the ValidateDocument property to True the XML disassembler pipeline component will be instructed to validate the data contained in the received document as well. Setting this property to true also requires that at least one DocumentSpecName is also provided, as the XML disassembler uses the schema specified by the fully qualified assembly name in the DocumentSpecNames property to identify the schema with which to perform the validation. With these properties set, if the receive location processes a message that fails validation an entry like the one below will be logged to the event log. 
While this approach achieves the objective it falls short for an ideal solution, because: - The XML disassembler pipeline component has logic built into it to identify the matching schema for a received message, so why does it not use this schema to perform validation? In other words, I would like to be able to change the Validate flag to true, without needing to specify a value for the DocumentSpecNames property. - Every time the schema changes (possibly due to version changes) the developer will need to remember to go and change the configuration of the XMLReceive pipeline to ensure that the correct fully qualified assembly name is stored in the DocumentSpecNames property. - If I have multiple receive locations that each need to do schema validation the previous task would be compounded as I would need to remember to set the properties for each receive location. This leads on to the second method to ensure that BizTalk validates the content of the received message: a custom pipeline that can be used in any receive location to do full schema validation, without requiring any additional configuration. Creating and Using the XMLValidatingReceive Pipeline The concept behind the XMLValidatingReceive pipeline is to have a generic pipeline that one can use in any project, and in any receive location where the received document's structure and data content needs to be validated against an existing schema. As the standard functionality of matching a received document with a deployed schema is still required, the XMLReceive pipeline is an excellent starting point. As the XMLReceive pipeline uses the XML disassembler and Party resolution pipeline components, the XMLValidatingReceive pipeline will need to use these two pipeline components as a starting point as well. In addition to these components, the XMLValidatingReceive pipeline will also make use if the XML validator pipeline component. To create the XMLValidatingReceive pipeline, create a new receive pipeline. I created this pipeline in a generic BizTalk solution and project, so that I could use it for any solution, at any customer. In the new receive pipeline, add the XML disassembler, XML validator and Party resolution pipeline components, as per the screenshot below. There is no need to change any of the default properties for any of these components. The pipeline can now be deployed and used in any BizTalk application. A message processed by this pipeline will now use the XML disassembler's schema resolution functionality and it will validate the structure of the document against the resolved schema. The XML validator component will then use the resolved schema name to validate the received document's data content, in accordance with any facets defined in the schema. Where a received document fails validation, the same error as in the previous method is shown (as below), except that the pipeline referred to will now be the new custom pipeline. The result is a common pipeline that can be used in any receive location, and which does not require any configuration of pipeline properties to ensure that the received document is full validated. At the same time, this pipeline does not require any changes to be effected in the event of a change to a schema. Summary In conclusion, don't assume that BizTalk will enforce those facets you diligently applied to your schema. You need to instruct BizTalk to validate a received message against the data constraints you define in your schema, and there are two simple ways in which this can be achieved. 
If it is a regular requirement to ensure that the received document is fully validated against the schema, then create a custom XMLValidatingReceive pipeline that can be used in any BizTalk development project.
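As a side note for readers unfamiliar with what the validation step actually enforces: the facet checking described in this post is standard XSD validation, and outside BizTalk the same check can be reproduced with any XSD-aware validator. A minimal, hypothetical Java sketch follows; the file names are invented for illustration and this is not BizTalk pipeline code:

    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import java.io.File;

    public class FacetValidationDemo {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("Order.xsd"));
            Validator validator = schema.newValidator();
            // Throws a SAXException if the document violates the schema,
            // including facet constraints such as enumerations or patterns.
            validator.validate(new StreamSource(new File("Order.xml")));
            System.out.println("Document is valid against the schema.");
        }
    }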
https://docs.microsoft.com/en-us/archive/blogs/nabeelp/biztalk-does-not-validate-my-message
CC-MAIN-2020-24
en
refinedweb
Full project mirroring between GitLab instances DescriptionDescription GitLab has ~Geo, which is a product for multi-region replication of GitLab data. This includes all database contents as well as files and repository + wiki data. Geo has a "selective sync" feature, which is used to replicate a subset of an instance elsewhere. GitLab is gaining a "bidirectional sync" feature: - this can be used as a sort of poor man's multi-region, multi-master replication of a subset of repositories between two non-Geo GitLab instances, but files and database contents (issues, MRs, memberships, etc) aren't part of this. ProposalProposal Enhance bidirectional replication with instance, namespace and project-level federation of database contents and files. We could start by only supporting it at project-level though. An admin or owner on gitlab-a.com would also have an account on gitlab-b.com. They would set up an instance, group or project-level integration on the latter, using a personal access token from the former. Whenever a change happens on one instance, it is replicated asynchronously to the other, using webhooks to notify that a change has happened. Obviously, conflicts can occur, as we see with bidirectional repository mirroring. We may need an explicit federation object on both sides to support read-write on both sides; if set up on only one side, it could act as a read-only replica. A major source of conflicts in the multi-master version would be IIDs of issues, MRs, etc. This can be worked around using the same hack as mysql multi-master replication with N members - fixed offsets. If you have 2 members of the federation, the first only uses odd IIDs, the second only uses even IIDs. Artifacts and pipelines are more difficult. We might just have to disable CI on all but one node to begin with. File conflicts won't happen as we add random hex to every upload. We'd need to tell the other nodes to pull the file each time one was uploaded, though. Repository conflicts are being handled orthogonally. We can apply the same logic to both main and wiki repository. Memberships could be left out-of-scope to begin with, but we could consider automatic linking by email address or a fixed map of user equivalences between instances/groups/projects too. What else? This feature proposal represents a "less-trust" form of Geo selective sync. It's something you can set up between two independent GitLab instances. Both sides would be read-write, and it could be set up entirely in the GitLab UI with no need for sysadmin work or postgresql replication on the respective instances. Since only someone who is an instance/namespace/project admin can set this up, I don't think there are permissions problems to worry about.
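For reference, the MySQL multi-master "fixed offsets" hack mentioned above is usually configured with two server variables; a sketch for a two-member setup is shown below. The values are illustrative only, and the analogous IID logic for issues and merge requests would live in GitLab itself rather than in the database:

    -- Member 1: use only odd auto-increment IDs
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset = 1;

    -- Member 2: use only even auto-increment IDs
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset = 2;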
https://gitlab.com/gitlab-org/gitlab/-/issues/4517
CC-MAIN-2020-24
en
refinedweb
package forest;
import java.util.*;

public class Tiger
{
    public void getDetails(String nickName, int weight)
    {
        System.out.println("Tiger nick name is " + nickName);
        System.out.println("Tiger weight is " + weight);
    }
}

Compile Tiger.java with the -d option so that the compiled class is placed into its package directory, then add that directory to the classpath:

C:\snr > javac -d . Tiger.java
D:\sumathi> set classpath=C:\snr;%classpath%;
D:\sumathi> notepad Animal.java

The above statement creates a file called Animal.java; write the code in it, say, as follows.

import forest.Tiger;

public class Animal
{
    public static void main(String args[])
    {
        Tiger t1 = new Tiger();
        t1.getDetails("Everest", 50);
    }
}
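To finish the example, Animal.java (which lives outside the forest package) is compiled and run as usual; with C:\snr on the classpath the JVM can locate forest.Tiger. The drive letters and directories follow the tutorial's own example, and the expected output comes directly from the getDetails() call above:

D:\sumathi> javac Animal.java
D:\sumathi> java Animal
Tiger nick name is Everest
Tiger weight is 50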
https://way2java.com/packages/create-user-defined-packages/
CC-MAIN-2020-24
en
refinedweb
connectivity

Sample usage:

import 'package:connectivity/connectivity.dart' as connectivity;

var connectivityResult = await connectivity.checkConnectivity();
if (connectivityResult == ConnectivityResult.mobile) {
  // I am connected to a mobile network.
} else if (connectivityResult == ConnectivityResult.wifi) {
  // I am connected to a wifi network.
}

Getting Started

For help getting started with Flutter, view our online documentation. For help on editing plugin code, view the documentation.

A fragment of the accompanying example app triggers the check from initState():

@override
void initState() {
  super.initState();
  initConnectivity();
}
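The initConnectivity() helper called above is not included in this excerpt; presumably it wraps the checkConnectivity() call shown in the sample usage. A minimal sketch of what such a helper might look like, where the _connectionStatus field is an invented name and error handling is omitted:

// Hypothetical helper, assumed to live in a State subclass that declares
// a String _connectionStatus field.
Future<void> initConnectivity() async {
  var result = await connectivity.checkConnectivity();
  setState(() {
    _connectionStatus = result.toString();
  });
}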
https://dart-pub.mirrors.sjtug.sjtu.edu.cn/packages/connectivity/versions/0.0.1+1
CC-MAIN-2020-24
en
refinedweb
Warehouse Apps

232 Apps found. category: Warehouse, version: 13.0

Integrate Multi-Warehouse Access Control with Manufacturing
Multi-Warehouse Access Control - Manufacturing

Integrate Multi-Warehouse Access Control with Purchase
Multi-Warehouse Access Control - Purchase

Integrate & Manage your bpost Shipping Operations from Odoo
bpost Odoo Shipping Integration

Stock Force Date Inventory force date Inventory Adjustment force date Stock Transfer force date stock picking force date receipt force date shipment force date delivery force date in stock backdate stock back date inventory back date receipt back date
Force date in Stock Transfer and Inventory Adjustment

Show stock Stock Incoming Outgoing quantity in product form.
Stock Incoming Outgoing, 显示预计出入库数量 (display expected incoming/outgoing stock quantities)

import stock serialno from csv, inventory lot no from excel, stock serial number from xls, inventory lot no from xlsx app, import stock lot module odoo
https://apps.odoo.com/apps/modules/category/Warehouse/browse?order=Newest&series=13.0&amp%3Bcategory=&amp%3Bamp%3Bsearch=ecosoft&amp%3Bversion=&amp%3Bamp%3Bseries=6.1
CC-MAIN-2020-24
en
refinedweb
These articles provide tutorials and usage documentation for XPCOM, including how to use it in your own projects and how to build XPCOM components for your Firefox add-ons and the like.

- mozilla::services namespace - The services C++ namespace offers an easy and efficient alternative for obtaining a service as compared to the indirect XPCOM approach: the GetService(), CallGetService(), etc. methods are expensive and should be avoided when possible.
- Receiving startup notifications - Sometimes it's necessary for XPCOM components to receive notifications as to the progress of the application's startup process, so they can start new services at appropriate times, for example.
- XPCOM array guide
- XPCOM changes in Gecko 2.0 - Several changes that affect XPCOM component compatibility are taking place in Gecko 2. This article details those changes, and provides suggestions for how to update your code.
- XPCOM hashtable guide - A hashtable is a data construct that stores a set of items. Each item has a key that identifies the item. Items are found, added, and removed from the hashtable by using the key. Hashtables may seem like arrays, but there are important differences:
- XPCOM Stream Guide - In Mozilla code, a stream is an object which represents access to a sequence of characters. It is not that sequence of characters, though: the characters may not all be available when you read from the stream.
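To illustrate the mozilla::services point above, here is a small sketch of the two styles written from memory of the Gecko C++ APIs rather than taken from these articles; treat the exact headers and signatures as assumptions to verify against the source tree:

#include "mozilla/Services.h"
#include "nsCOMPtr.h"
#include "nsIObserverService.h"

// Indirect XPCOM lookup: string-keyed, performs a registry lookup each call.
nsCOMPtr<nsIObserverService> obs1 =
    do_GetService("@mozilla.org/observer-service;1");

// mozilla::services accessor: cheaper, cached, and type-safe.
nsCOMPtr<nsIObserverService> obs2 = mozilla::services::GetObserverService();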
https://developer.cdn.mozilla.net/en-US/docs/Mozilla/Tech/XPCOM/Guide
CC-MAIN-2020-24
en
refinedweb
Daydream offers motion controller support for Unity and Unreal. These features include: Controller visualization: A 3D model of the Daydream controller that displays which button the user is currently pressing and where the user is currently touching the Daydream controller's touchpad. Laser and reticle visualization: Displays a laser and reticle so the user can easily interact with the VR environment. Arm model: A mathematical model to make the 3D controller model in VR approximate the physical location of the Daydream controller. Input System: A standard and extensible framework for raycasting from the controller model. The input system integrates with the laser and reticle visualization to make it easy to interact with the UI and other objects in VR. All visualization elements are optional and reskinnable. Controller support in Unreal Currently this functionality is only available in Unreal with Google VR. Motion Controller with visualization support - Enable the Google VR Motion Controller plugin. (instructions). - Open the Blueprint for the Player Pawn. - Add the GoogleVRMotionControllerto the Components list at the same level as the VR Camera root. - Modify the properties on the GoogleVRMotionControllerComponent to adjust it. Cardboard apps should use UGoogleVRGazeReticleComponent instead, for a gaze-based reticle. Motion Controller without visualization support Use the official Unreal MotionControllerComponent. Input system - Open the Blueprint for the Player Pawn. - Add the GoogleVRPointerInputComponent to the Blueprint. - Use the GoogleVRPointerInputComponent's API to listen and react to events triggered by the pointer. - If desired, subclass the GoogleVRPointerInputComponent in C++ to add additional events, to add custom processing of the raycast, or to override the raycast implementation. The GoogleVrPointerInput Component works with both GoogleVRGazeReticle and GoogleVRMotionController. It is also integrated with UE4 Widgets, allowing you to interact with the standard UE4 UI with the pointer. To respond to events generated by the GoogleVRPointerInput Component, use the interfaces IGoogleVRActorPointerResponder and IGoogleVRComponentPointerResponder in either C++ or Blueprint. Adjusting the Arm Model Blueprint: - Open your Player Pawn Blueprint. - Create a node, and search for the term "ArmModel” to see what tuning parameters are available. - Attach the node to the BeginPlayevent. C++ - Add #include "GoogleVRControllerFunctionLibrary.h"to your code. - Include GoogleVRControlleras a dependency in your Build.csfile. Call tuning functions, for example: UGoogleVRControllerFunctionLibrary::SetArmModelPointerTiltAngle(20.0f); Disabling the Arm Model You can disable or enable the Arm Model by calling the function SetArmModelEnabled either in a Blueprint or in code as described in the “Adjusting the Arm Model” section of this document. If disabled, the MotionControllerComponent will behave the same as it did the previous version of Unreal, in that orientation will change based on the controller.
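Putting the C++ steps above together, a minimal sketch of tuning and then disabling the arm model from a pawn's BeginPlay might look like the following. The pawn class name is hypothetical, the exact SetArmModelEnabled signature should be checked against the plugin headers, and the 20-degree tilt value comes from the example earlier on this page:

// GoogleVRController must be listed in the module's Build.cs dependencies.
#include "GoogleVRControllerFunctionLibrary.h"

void AMyPlayerPawn::BeginPlay()
{
    Super::BeginPlay();

    // Tune the pointer tilt, as in the example above.
    UGoogleVRControllerFunctionLibrary::SetArmModelPointerTiltAngle(20.0f);

    // Or turn the arm model off entirely.
    UGoogleVRControllerFunctionLibrary::SetArmModelEnabled(false);
}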
https://developers.google.com/vr/develop/unreal/arm-model
CC-MAIN-2020-24
en
refinedweb
Changelog

Contents

- Changes since 2018.02
- 2018.02
- Dependency changes
- New features
- Changes and improvements
- Build system
- Bug fixes
- Deprecated APIs
- Potential compatibility breakages, removed APIs
- Documentation
- 2015.05
- 2014.06
- 2014.01
- 2013.10
- 2013.08
- 2012.02

Changes since 2018.02

Documentation

- A new Developers guide page containing step-by-step checklists for maintainers and core developers

2015.05

Released 2015-05-09, tagged as v2015.05. See the release announcement for a high-level overview.

Dependency changes

No dependency changes in this release.

New features

- Support for plugin aliases in PluginManager library.
- Range-based-for support in Containers::LinkedList.
- Added convenience PluginManager::Manager::loadAndInstantiate() function.
- Added Containers::*Array::slice() and friends.
- Utility::Debug is able to reuse existing operator<< overloads for std::ostream (see mosra/corrade#12)
- Added Utility::String::beginsWith() and Utility::String::endsWith().

Changes

- TestSuite::Compare::Container is now able to compare non-copyable containers such as Containers::Array (see mosra/corrade#9).
- Using const char instead of const unsigned char for raw binary data.
- Better algorithm for comparing floating-point values in TestSuite.

Build system

- CMake now always installs FindCorrade.cmake to a library-specific location, making it usable without providing own copy of the file in depending projects. The WITH_FIND_MODULE option is no longer needed. See mosra/corrade#17.
- Displaying all header files, plugin metadata files and resource files in project view to make use of some IDEs less painful (such as QtCreator)
- Properly checking for C++ standard compiler flags to avoid adding conflicting ones (see mosra/corrade#10)
- Gentoo ebuild (see mosra/corrade#16)
- Better handling of RPATH on macOS (see mosra/corrade#18)

Bug fixes

- Removed static initializers to avoid memory corruption in static builds (see mosra/magnum#90).
- Plugin manager now correctly follows dependency order when unloading plugins.
- Fixed issues with plugin manager having multiple global instances in static builds (see mosra/corrade#15)
- Fixed a crash in Clang caused by overly clever code (see mosra/magnum#84)

Deprecated APIs

No API was deprecated in this release.

Potential compatibility breakages, removed APIs

- Removed unused plugin replacement feature, as it had questionable benefits and no real use.
- All functionality deprecated in 2014.01 has been removed, namely:
  - Removed deprecated ability to use relative includes (e.g. #include <Utility/Debug.h>), use absolute paths (#include <Corrade/Utility/Debug.h>) instead.
  - Removed deprecated Utility::String::split() overload, use either split() or splitWithoutEmptyParts() instead.

2014.06

Released 2014-06-30, tagged as v2014.06. See the release announcement for a high-level overview.

Dependency changes

- Minimal required GCC version is now 4.7. Support for GCC 4.6 has been moved to the compatibility branch.
New features

- MSVC 2013 support in the compatibility branch
- Adapted Utility::Configuration test to work under MinGW (see mosra/corrade#7)

Deprecated APIs

- Utility::String::split(..., bool) is deprecated, use separate functions split() and splitWithoutEmptyParts() instead.

Potential compatibility breakages, removed APIs

2013.10

Released 2013-10-30, tagged as v2013.10. See the release announcement for a high-level overview.

Dependency changes

No dependency changes in this release.

New features

- macOS port (thanks to David Lin, see mosra/corrade#4)

Build system

- Added a WITH_FIND_MODULE CMake option to install Find modules for Corrade into a system location (see mosra/corrade#2)
- GCC*_COMPATIBILITY sometimes needs to be set explicitly (see mosra/corrade#1)
- Installing *.dll and *.lib files to proper locations on Windows (see mosra/corrade#5)

Bug fixes

- CORRADE_VERIFY() macro in TestSuite can now be conveniently used also on classes with explicit operator bool()
- Fixed assertion failure on --long-arguments parsing in Utility::Arguments

Deprecated APIs

- Interconnect::Emitter::connect() is deprecated, use Interconnect::connect() instead.

Potential compatibility breakages, removed APIs

No deprecated API was removed in this release.
http://doc.magnum.graphics/corrade/corrade-changelog.html
CC-MAIN-2018-13
en
refinedweb
#include <plurrule.h> keyword = <identifier> ( ... ) Definition at line 104 of file plurrule.h.
http://icu.sourcearchive.com/documentation/4.4.1-4/classPluralRules.html
CC-MAIN-2018-13
en
refinedweb
The idea is very simple. We keep two variables row and col for the range of rows and cols. Specifically, row is the number of rows of vec2d and col is the number of columns of the current 1d vector in vec2d. We also keep two variables r and c to point to the current element. In the constructor, we initialize row and col as above and initialize both r and c to 0 (pointing to the first element). In hasNext(), we just need to check whether r and c are still in the range limited by row and col. In next(), we first record the current element, which is returned later. Then we update the running indexes and possibly the range if the current element is the last element of the current 1d vector.

A final and important note: since next() records the current element, we need to guarantee that there is an element. So we implement a helper function skipEmptyVector() to skip the empty vectors. It is also important to handle the case where vec2d is empty (in this case, we set col = -1). The time complexity of hasNext() is obviously O(1) and the time complexity of next() is also O(1) in an amortized sense. The code is as follows.

    class Vector2D {
    public:
        Vector2D(vector<vector<int>>& vec2d) {
            data = vec2d;
            r = c = 0;
            row = vec2d.size();
            col = (row == 0 ? -1 : data[r].size());
            skipEmptyVector();
        }
        int next() {
            int elem = data[r][c];
            if (c == col - 1) {
                r++;
                c = 0;
                col = data[r].size();
            }
            else c++;
            skipEmptyVector();
            return elem;
        }
        bool hasNext() {
            return col != -1 && (r < row && c < col);
        }
    private:
        vector<vector<int>> data;
        int row, col, r, c;
        void skipEmptyVector(void) {
            while (!col) {
                r++;
                col = data[r].size();
            }
        }
    };

    /**
     * Your Vector2D object will be instantiated and called as such:
     * Vector2D i(vec2d);
     * while (i.hasNext()) cout << i.next();
     */

Updates: Since we need to copy the vec2d, we can just copy it into a 1d vector<int>.

    /**
     * Your Vector2D object will be instantiated and called as such:
     * Vector2D i(vec2d);
     * while (i.hasNext()) cout << i.next();
     */

Of course, the problem can also be solved in O(1) memory (see here for a better solution).

Your data is a copy of vec2d. If you copy all the data anyway, you might as well just copy it into a single simple vector<int>. Makes things easier.

    class Vector2D {
        vector<int> data;
        int i = 0;
    public:
        Vector2D(vector<vector<int>>& vec2d) {
            for (auto& v : vec2d)
                data.insert(end(data), begin(v), end(v));
        }
        int next() {
            return data[i++];
        }
        bool hasNext() {
            return i < data.size();
        }
    };

Java

    public class Vector2D {
        private LinkedList<Integer> data = new LinkedList();
        public Vector2D(List<List<Integer>> vec2d) {
            for (List v : vec2d)
                data.addAll(v);
        }
        public int next() {
            return data.pop();
        }
        public boolean hasNext() {
            return !data.isEmpty();
        }
    }

And Python :-)

    class Vector2D:
        def __init__(self, vec2d):
            self.data = sum(vec2d, [])[::-1]
        def next(self):
            return self.data.pop()
        def hasNext(self):
            return bool(self.data)

Though self.data = [i for v in vec2d for i in v][::-1] would be faster for long vec2d.

Hi, Stefan. Great thanks! In fact, I had come up with this idea at the very beginning. Then I wrote down the following code, which, however, meets Memory Limit Exceeded ...

Now I realize that the code has a bug in next(), which should be return data[idx++];.
Well, your use of insert is much more concise and the Python code is just cool :-)

Haha, yeah, I can see how that MLE can be misleading :-) If you learn to love range-based loops, it can be nice and concise without insert:

    for (auto& v : vec2d)
        for (int i : v)
            data.push_back(i);

Little fix: auto& instead of auto for vectors, to prevent needless copies.

    for (auto& v : vec2d)

"since elements in vector are stored in a contiguous range of memory, the problem can be solved in O(1) memory"

I don't understand. I could similarly solve it in O(1) memory with a linked list. How does the contiguous range of memory matter?

@jianchao.li.fighter said in 20ms C++ Solution with Explanations:

    return col != -1 && (r < row && c < col);

Is col != -1 needed in the hasNext() return statement?
https://discuss.leetcode.com/topic/20681/20ms-c-solution-with-explanations
CC-MAIN-2017-43
en
refinedweb
I'm so confused with global.

    class ProcessObject:
        RR = 0
        def b(self):
            self.RR = 5
            print("valor: ", self.RR)
        def c(self):
            print("final: ", self.RR)
        def d(self):
            global RR
            RR = 3
            print("valor: ", RR)
        print(RR)

    proce = ProcessObject()
    proce.b()
    proce.c()
    proce.d()
    proce.c()

The output is:

    0
    valor: 5
    final: 5
    valor: 3
    final: 5

This has nothing to do with immutability... But anyway:

    class ProcessObject:
        # this "RR" lives in the `class` statement namespace,
        # it's accessible as 'RR' in this namespace. After the
        # `class` statement is executed, it becomes an attribute
        # of the `ProcessObject` class, accessible as `ProcessObject.RR`,
        # and thru instances as `ProcessObject().RR`.
        #
        RR = 0

        def b(self):
            # this creates an "RR" instance attribute
            # on the current instance (`self`), which shadows
            # the "RR" class attribute
            self.RR = 5
            print("valor: ", self.RR)

        def c(self):
            print("final: ", self.RR)

        def d(self):
            # The two following lines create a module-global
            # name "RR", which is distinct from the two previous
            # ones.
            global RR
            RR = 3
            print("valor: ", RR)

        # this prints the value of the `RR` living in the class
        # statement scope - NOT the value of the yet non-existing
        # global RR
        print(RR)

    proce = ProcessObject()
    proce.b()  # this creates the `proce.RR` instance attribute
    proce.c()
    proce.d()
    proce.c()

But I not understand why with "c" the value is 5 if the RR is an object immutable.

It prints '5' because you assigned that value to proce.RR when calling proce.b(). You're confusing names and values... RR is not an object, it's a name which is bound to an object. The fact that it's at one point bound to an immutable object doesn't mean you cannot rebind it to another object (mutable or not, that's irrelevant here).

And why "d" using global no mute the value of RR

And here you are confusing binding (assignment) and mutating. Binding (assignment) means "make that name point to this object"; mutating means "change the state of this object". An example of mutation is adding or removing an element to/from a list, or reversing a list in place. FWIW, the call to proce.d DOES rebind (and actually bind on the first call) the module-global "RR".

You may want to run this "extended" version of your script to find out what really happens:

    print("before : globals = {}".format(globals()))

    class ProcessObject:
        RR = 0
        print("RR defined in the class namespace - not in globals: {}".format(globals()))

        def __init__(self):
            print("in init")
            print("  self.__dict__ : {}".format(self.__dict__))
            print("  ProcessObject.__dict__ : {}".format(ProcessObject.__dict__))

        def b(self):
            print("before calling b - self.__dict__ : {}".format(self.__dict__))
            self.RR = 5
            print("valor: ", self.RR)
            print("after calling b - self.__dict__ : {}".format(self.__dict__))

        def c(self):
            print("before calling c - self.__dict__ : {}".format(self.__dict__))
            print("final: ", self.RR)

        def d(self):
            print("before calling d : globals = {}".format(globals()))
            global RR
            RR = 3
            print("valor: ", RR)
            print("after calling d : globals = {}".format(globals()))

        print(RR)

    print("after class statement: globals : {}".format(globals()))

    proce = ProcessObject()
    proce.c()
    proce.b()
    proce.c()
    proce.d()
    proce.c()
https://codedump.io/share/DQqPUzPZWlos/1/confusion-about-global-and-immutable
CC-MAIN-2017-43
en
refinedweb
This is the mail archive of the [email protected] mailing list for the glibc project.

Hi,

There is a bug report that ld.so in GLIBC 2.24 built by Binutils 2.29 will crash on arm-linux-gnueabihf. This is confirmed, and the details are at:. I could also reproduce this crash using GLIBC master.

As analyzed in the PR, the old code was written with the assumption that the assembler won't set bit 0 of a Thumb function address if it comes from PC-relative instructions and the calculation can be finished during assembling. This assumption, however, no longer holds after PR gas/21458.

I think the ARM backend in GLIBC should be fixed to be more portable, so that it works with various combinations of GLIBC and Binutils.

OK for master and backport to all release branches?

2017-07-12  Jiong Wang  <[email protected]>

        * sysdeps/arm/dl-machine.h (elf_machine_load_address):
        Also strip bit 0 of pcrel_address under Thumb mode.

diff --git a/sysdeps/arm/dl-machine.h b/sysdeps/arm/dl-machine.h
index 7053ead16ed0e7dac182660f7d88fa21f2b4799a..5b67e3d004818308d9bf93effb13d23a762e160f 100644
--- a/sysdeps/arm/dl-machine.h
+++ b/sysdeps/arm/dl-machine.h
@@ -56,11 +56,19 @@ elf_machine_load_address (void)
   extern Elf32_Addr internal_function __dl_start (void *) asm ("_dl_start");
   Elf32_Addr got_addr = (Elf32_Addr) &__dl_start;
   Elf32_Addr pcrel_addr;
+  asm ("adr %0, _dl_start" : "=r" (pcrel_addr));
 #ifdef __thumb__
-  /* Clear the low bit of the funciton address.  */
+  /* Clear the low bit of the funciton address.
+
+     NOTE: got_addr is from GOT table whose lsb is always set by linker if it's
+     Thumb function address.  PCREL_ADDR comes from PC-relative calculation
+     which will finish during assembling.  GAS assembler before the fix for
+     PR gas/21458 was not setting the lsb but does after that.  Always do the
+     strip for both, so the code works with various combinations of glibc and
+     Binutils.  */
   got_addr &= ~(Elf32_Addr) 1;
+  pcrel_addr &= ~(Elf32_Addr) 1;
 #endif
-  asm ("adr %0, _dl_start" : "=r" (pcrel_addr));
   return pcrel_addr - got_addr;
 }
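As a side note for readers following along, the arithmetic at issue can be illustrated in isolation. The following standalone sketch is not glibc code; it is just a plain-C restatement of the idea that both addresses must have bit 0 stripped before the subtraction, using uint32_t in place of Elf32_Addr.

    #include <stdint.h>

    /* Standalone illustration of the fix: if only one of the two addresses
       of the same Thumb function carries the Thumb bit (bit 0), the computed
       load offset is off by one, so both are stripped before subtracting.  */
    static uint32_t
    load_offset (uint32_t got_addr, uint32_t pcrel_addr)
    {
      got_addr &= ~(uint32_t) 1;    /* lsb set by the linker for Thumb */
      pcrel_addr &= ~(uint32_t) 1;  /* lsb may or may not be set by GAS */
      return pcrel_addr - got_addr;
    }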
https://sourceware.org/ml/libc-alpha/2017-07/msg00518.html
CC-MAIN-2017-43
en
refinedweb
#include "ltwia.h"

L_LTWIA_API L_INT EXT_FUNCTION L_WiaSetProperties(hSession, pItem, pProperties, pfnCallBack, pUserData)

Sets the properties specified through the LWIAPROPERTIES structure into the WIA device's item passed through the pItem parameter. This feature is available in version 16 or higher.

The function sets the property values given in the LWIAPROPERTIES structure on the WIA device's item passed through the pItem parameter. If the user wants to keep the changed properties, the flag that suppresses the manufacturer's image acquisition dialog and acquires directly from the specified source item should not be set.

Required DLLs and Libraries

Platforms

LEADTOOLS WIA supports both 32-bit and 64-bit image acquisition for both WIA 1.0 (XP and earlier) and WIA 2.0 (VISTA and later).

For an example, refer to L_WiaGetRootItem.
https://www.leadtools.com/help/leadtools/v19m/wia/api/l-wiasetproperties.html
CC-MAIN-2017-43
en
refinedweb
This chapter provides detailed descriptions of the various classes of Motif manager widgets. Examples explore the various methods of positioning children within the BulletinBoard, Form, and RowColumn widgets. As their name implies, manager widgets manage other widgets, which means that they control the size and location (geometry) and input focus policy for one or more widget children. The relationship between managers and the widgets that they manage is commonly referred to as the parent-child model. The manager acts as the parent and the other widgets are its children. Since manager widgets can also be children of other managers, this model produces the widget hierarchy, which is a framework for how widgets are laid out visually on the screen and how resources are specified in the resource database. While managers are used and explained in different contexts throughout this book, this chapter discusses the details of the different manager widget classes. Chapter 3, Overview of the Motif Toolkit, discusses the general concepts behind manager widgets and how they fit into the broader application model. You are encouraged to review the material in this and other chapters for a wider range of examples, since it is impossible to deal with all of the possibilities here. For an in-depth discussion of the X Toolkit Composite and Constraint widget classes, from which managers are subclassed, see Volume Four, X Toolkit Intrinsics Programming Manual . The Manager widget class is a metaclass for a number of functional subclasses. The Manager widget class is never instantiated; the functionality it provides is inherited by each of its subclasses. In this chapter, we describe the general-purpose Motif manager widgets, which are introduced below: The MessageBox, SelectionBox, FileSelectionBox, and Command widgets are also Motif manager widgets. These widgets are used for predefined Motif dialogs and are discussed in Chapter 5, Introduction to Dialogs; Chapter 6, Selection Dialogs; and Chapter 7, Custom Dialogs. A manager widget may be created and destroyed like any other widget. The main difference between using a manager and other widgets involves when the widget is declared to be managed in the creation process. While we normally suggest that you create widgets using XtVaCreateManagedWidget(), we recommend that you create a manager widget using XtVaCreateWidget() instead, and then manage it later using XtManageChild(). To understand why this technique can be important, you need to understand how a manager widget manages its children. A manager widget manages its children by controlling the sizes and positions of the children. The process of widget layout only happens when the child and the parent are both in the managed state. If a child is created as an unmanaged widget, the parent skips over that widget when it is determining the layout until such time as the child is managed. However, if a manager widget is not itself managed, it does not perform geometry management on any of its children regardless of whether those children are managed. To be precise, a manager does not actually manage its children until it is both managed and realized. If you realize all of your widgets at once, by calling XtRealizeWidget() on the top-level widget of the application, as described in Chapter 2, The Motif Programming Model, it should not make a difference whether a manager is managed before or after its children are created. 
However, if you are adding widgets to a tree of already-realized widgets, the principles set forth in this section are important. If you are adding children to an already-realized parent, the child is automatically realized when it is managed. If you are adding a manager widget as a child of a realized widget, you should explicitly realize the widget before you manage it. Otherwise, the resize calculations may be performed in the wrong order. In a case such as this, it is essential to use XtManageChild() rather than XtVaCreateManagedWidget(), since doing so allows you to make the explicit realize call before managing the widget. To demonstrate the problems that you are trying to avoid, consider creating a manager as a managed widget before any of its children are created. The manager is going to have a set of PushButtons as its children. When the first child is added using XtVaCreateManagedWidget(), the manager widget negotiates the size and position of the PushButton. Depending on the type of manager widget being used, the parent either changes its size to accommodate the new child or it changes the size of the child to its own size. In either case, these calculations are not necessary because the geometry needs to change as more buttons are added. The problem becomes complicated by the fact that when the manager's size changes, it must also negotiate its new size with its own parent, which causes that parent to negotiate with its parent all the way up to the highest-level shell. If the new size is accepted, the result goes back down the widget tree with each manager widget resizing itself on the way down. Repeating this process each time a child is added almost certainly affects performance. Because of the different geometry management methods used by the different manager widgets, there is the possibility that all of this premature negotiation can result in a different layout than you intended. For example, as children are added to a RowColumn widget, the RowColumn checks to see if there is enough room to place the new child on the same row or column. If there isn't, then a new row or column is created. This behavior depends heavily on whether the RowColumn is managed and also on whether its size has been established by being realized. If the manager parent is not managed when the children are added, the whole process can be avoided, yet you still have the convenience of using XtVaCreateManagedWidget() for all of the widget children. When the manager is itself managed, it queries its children for their size and position requests, calculates its own size requirements, and communicates that size back up the widget tree. For best results, you should use XtVaCreateWidget() to create manager widgets and XtVaCreateManagedWidget() to create primitive widgets. Creating a primitive widget as an unmanaged widget serves no purpose, unless you explicitly want the widget's parent to ignore it for some reason. If you are adding another manager as a child, the same principle applies; you should also create it as an unmanaged widget until all its children are added as well. The idea is to descend as deeply into the widget tree and create as many children as possible before managing the manager parents as you ascend back up. Once all the children have been added, XtManageChild() can be called for the managers so that they only have to negotiate with their parents once, thus saving time, improving performance, and probably producing better results. 
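To make the pattern concrete, here is a short sketch along these lines; the RowColumn, the button labels, and the create_buttons() wrapper are invented for illustration, and parent is assumed to be an existing shell or manager widget.

    #include <Xm/RowColumn.h>
    #include <Xm/PushB.h>

    static void
    create_buttons (Widget parent)
    {
        /* Create the manager unmanaged so that no layout negotiation
         * takes place while the children are being added. */
        Widget rowcol = XtVaCreateWidget ("rowcol",
            xmRowColumnWidgetClass, parent, NULL);

        /* Primitive children can be created managed; the parent ignores
         * them until it is managed itself. */
        XtVaCreateManagedWidget ("One", xmPushButtonWidgetClass, rowcol, NULL);
        XtVaCreateManagedWidget ("Two", xmPushButtonWidgetClass, rowcol, NULL);
        XtVaCreateManagedWidget ("Three", xmPushButtonWidgetClass, rowcol, NULL);

        /* Manage the parent last, so the layout is negotiated only once. */
        XtManageChild (rowcol);
    }

The same ordering applies when the manager is itself nested inside another manager: manage the parents on the way back up the tree.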
Despite all we've just said, realize that the entire motivating factor behind this principle is to optimize the method by which managers negotiate sizes and positions of their children. If a manager only has one child, it does not matter if you create the manager widget as managed or not. Also, the geometry management constraints of some widgets are such that no negotiation is required between the parent and the children. In these situations, it is not necessary to create the manager as an unmanaged widget, even though it has children. We will explain these cases as they arise. In the rest of this chapter, we examine the basic manager widget classes and present examples of how they can be used. While geometry management is the most obvious and widely used aspect of the widget class, managers are also responsible for keyboard traversal, gadget display, and gadget event handling. Many of the resources of the Manager metaclass are inherited by each of its subclasses for handling these tasks. The BulletinBoard is the most basic of the manager widget subclasses. The BulletinBoard widget does not enforce position or size policies on its children, so it is rarely used by applications as a general geometry manager for widgets. The BulletinBoard is the superclass for the Form widget and all of the predefined Motif dialog widgets. To support these roles, the BulletinBoard has a number of resources that are used specifically for communicating with DialogShells. The BulletinBoard has callback resources for FocusIn, FocusOut, and MapNotify events. These callbacks are invoked when the user moves the mouse or uses the TAB key to traverse the widget hierarchy. The events do not require much visual feedback and they only require application-specific callback routines when an application needs to set internal states based on the events. The XmNfocusCallback and XmNmapCallback resources are used extensively by DialogShells. Despite the low profile of the BulletinBoard as a manager widget, there is a lot to be learned from it, since the principles also apply to most other manager widgets. In this spirit, let's take a closer look at the BulletinBoard widget and examine the different things that can be done with it as a manager widget. If you want to use a BulletinBoard directly in an application, you must include the file <Xm/BulletinB.h>. The following code fragment shows the recommended way to create a BulletinBoard: Widget bboard; bboard = XtVaCreateWidget ("name", xmBulletinBoardWidgetClass, parent, resource-value-list, NULL); /* Create children */ XtManageChild (bboard);The parent parameter is the parent of the BulletinBoard, which may be another manager widget or a shell widget. You can specify any of the resources that are specific to the BulletinBoard, but unless you are using the widget as a dialog box, your choices are quite limited. Of the few BulletinBoard resources not tied to DialogShells, the only visual one is XmNshadowType. When used in conjunction with the XmNshadowThickness resource, you can control the three-dimensional appearance of the widget. There are four possible values for XmNshadowType: XmSHADOW_IN XmSHADOW_OUT XmSHADOW_ETCHED_IN XmSHADOW_ETCHED_OUT The default value for XmNshadowThickness is 0, except when the BulletinBoard is the child of a DialogShell, in which case the default value is 1. In either case, the value can be changed by the application or by the user. The XmNbuttonFontList resource may be set to a font list as described in Chapter 19, Compound Strings. 
This font list is used for each of the button children of the BulletinBoard, when the button does not specify its own font. If the resource is not specified, its value is taken from the XmNbuttonFontList of the nearest ancestor that is a subclass of BulletinBoard, VendorShell, or MenuShell. Similarly, the XmNlabelFontList and XmNtextFontList resources can be set for the Labels and Text widgets, respectively, that are direct children of the BulletinBoard. Since the BulletinBoard does not provide any geometry management by default, you must be prepared to manage the positions and sizes of the widgets within a BulletinBoard. As a result, you must set the XmNx and XmNy resources for each child. You may also have to set the XmNwidth and XmNheight resources if you need consistent or predetermined sizes for the children. In order to maintain the layout, you must add an event handler for resize (ConfigureNotify) events, so that the new sizes and positions of the children can be calculated. the source code shows the use of an event handler with the BulletinBoard. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /*B.h> char *corners[] = { "Top Left", "Top Right", "Bottom Left", "Bottom Right", }; static void resize(); main(argc, argv) int argc; char *argv[]; { Widget toplevel, bboard; XtAppContext app; XtActionsRec rec; int i; XtSetLanguageProc (NULL, NULL, NULL); /*ButtonWidget) Widget w; /* The widget (BulletinBoard) that got resized */ XEvent *event; /* The event struct associated with the event */ String args[]; /* unused */ int *num_args; /* unused */ { WidgetList children; Dimension w_width, w_height; short margin_w, margin_h; XConfigureEvent *cevent = (XConfigureEvent *) event; int width = cevent->width; int height = cevent->height; /*); }The program uses four widgets, labeled Top Left, Top Right , Bottom Left, and Bottom Right. The positions of the buttons in the BulletinBoard correspond to their names. Since the widgets are not positioned when they are created, the geometry management only happens when the widget is resized. the figure shows the application before and after a resize event. When a resize event occurs, X generates a ConfigureNotify event. This event is interpreted by Xt and the translation table of the widget corresponding to the resized window is searched to see if the application is interested in being notified of the event. We have indicated interest in this event by calling XtAppAddActions() and XtOverrideTranslations(), as shown below: XtActionsRec rec; ... rec.string = "resize"; rec.proc = resize; XtAppAddActions (app, &rec, 1); XtOverrideTranslations (bboard, XtParseTranslationTable ("<Configure>: resize()"));As described in Volume Four, X Toolkit Intrinsics Programming Manual , a translation table pairs a sequence of one or more events with a sequence of one or more functions that are called when the event sequence occurs. In this case, the event is a ConfigureNotify event and the function is resize(). Translations are specified as strings and then parsed into an internal format with the function XtParseTranslationTable(). The routine creates an internal structure of events and the functions to which they correspond. Xt provides the table for translating event strings such as <Configure> to the actual ConfigureNotify event, but Xt cannot convert the string resize() to an actual function unless we provide a lookup table. The XtActionsRec type performs this task. 
The structure is defined as follows: typedef struct { String string; XtActionProc proc; } XtActionsRec;The action list is initialized to map the string resize to the actual function resize() using XtAppAddActions(). We install the translation table on the widget using XtOverrideTranslations() so that when a ConfigureNotify event occurs, the resize() function is called. The resize() function takes four arguments. The first two arguments are a pointer to the widget in which the event occurred and the event structure. The args and num_args parameters are ignored because we did not specify any extra parameters to be passed to the function when we installed it. Since the function is called as a result of the event happening on the BulletinBoard widget, we know that we are dealing with a composite widget. We also know that there is only one event type that could have caused the function to be called, so we cast the event parameter accordingly. The task of the function is to position the children so that there is one per corner in the BulletinBoard. We get a handle to all of the children of the BulletinBoard. Since we are going to place the children around the perimeter of the widget, we also need to know how far from the edge to place them. This distance is taken from the values for XmNmarginWidth and XmNmarginHeight. All three resource values are retrieved in the following call: XtVaGetValues (w, XmNchildren, &children, XmNmarginWidth, &margin_w, XmNmarginHeight, &margin_h, NULL); The remainder of the function simply places the children at the appropriate positions within the BulletinBoard. The routine uses a very simple method for geometry management, but it does demonstrate the process. The general issue of geometry management for composite widgets is not trivial. If you plan on doing your own geometry management for a BulletinBoard or any other composite widget, you should be very careful to consider all the resources that could possibly affect layout. In our example, we considered the margin width and height, but there is also XmNallowOverlap, XmNborderWidth (which is a general Core widget resource), XmNshadowThickness (a general manager widget resource) and the same values associated with the children of the BulletinBoard. There are also issues about what to do if a child decides to resize itself, such as if a label widget gets wider. In this case, you must first evaluate what the geometry layout of the widgets would be if you were to grant the Label permission to resize itself as it wants. This evaluation is done by asking each of the children how big they want to be and calculating the hypothetical layout. The BulletinBoard either accepts or rejects the new layout. Of course, the BulletinBoard may have to make itself bigger too, which requires asking its parent for a new size, and so on. If the BulletinBoard cannot resize itself, then you have to decide whether to force other children to be certain sizes or to reject the resize request of the child that started all the negotiation. Geometry management is by no means a simple task; it is explained more completely in Volume Four, X Toolkit Intrinsics Programming Manual. The Form widget is subclassed from the BulletinBoard class, so it inherits all of the resources that the BulletinBoard has to offer. 
Accordingly, the children of a Form can be placed at specific x,y coordinates and geometry management can be performed as in the source code However, the Form provides additional geometry management features that allow its children to be positioned relative to one another and relative to specific locations in the Form. In order to use a Form, you must include the file < Xm/Form.h>. A Form is created in a similar way to other manager widgets, as shown below: Widget form; form = XtVaCreateWidget ("name", xmFormWidgetClass, parent, resource-value-list, NULL); /* create children */ XtManageChild (form); Geometry management in a Form is done using attachment resources. These resources are constraint resources, which means that they are specified for the children of the Form. The resources provide various ways of specifying the position of a child of a Form by attaching each of the four sides of the child to another entity. The side of a widget can be attached to another widget, to a fixed position in the Form, to a flexible position in the Form, to the Form itself, or to nothing at all. These attachments can be considered hooks, rods, and anchor points, as shown in the figure. In this figure, there are three widgets. The sizes and types of the widgets are not important. What is important is the relationship between the widgets with respect to their positions in the Form. Widget 1 is attached to the top and left sides of the Form by creating two attachments. The top side of the widget is hooked to the top of the Form. It can slide from side to side, but it cannot be moved up or down (just like a shower curtain). The left side can slide up and down, but not to the right or to the left. Given these two attachment constraints, the top and left sides of the widget are fixed. The right and bottom edges of the widget are not attached to anything, but other widgets are attached to those edges. The left side of Widget 2 is attached to the right side of Widget 1. Similarly, the top side of Widget 2 is attached to the top side of Widget 1. As a result, the top and left sides of the widget cannot be moved unless Widget 1 moves. The same kind of attachments hold for Widget 3. The top side of this widget is attached to the bottom of Widget 1 and its left side is attached to the left side of Widget 1. Given these constraints, no matter how large each of the widgets may be, or how the Form may be resized, the positional relationship of the widgets is maintained. In general, you must attach at least two adjacent edges of a widget to keep it from moving unpredictably. If you attach opposing sides of the widget, the widget will probably be resized by the Form in order to satisfy the attachment policies. The following resources represent the four sides of a widget: XmNtopAttachment XmNbottomAttachment XmNrightAttachment XmNleftAttachmentFor example, if we want to specify that the top of a widget is attached to something, we use the XmNtopAttachment resource. Each of the four resources can be set to one of the following values: XmATTACH_FORM XmATTACH_OPPOSITE_FORM XmATTACH_WIDGET XmATTACH_OPPOSITE_WIDGET XmATTACH_NONE XmATTACH_SELF XmATTACH_POSITIONWhen an attachment is set to XmATTACH_FORM, the specified side is attached to the Form as shown in the figure. If the resource that has this value is XmNtopAttachment, then the top side of the widget is attached to the top of the Form. The top attachment does not guarantee that the widget will not move from side to side. 
If XmNbottomAttachment is also set to XmATTACH_FORM, the bottom of the widget is attached to the bottom side of the Form. With both of these attachments, the widget is resized to the height of the Form itself. The same would be true for the right and left edges of the widget if they were attached to the Form. When an attachment is set to XmATTACH_OPPOSITE_FORM, the specified side of the widget is attached to the opposite side of the Form. For example, if XmNtopAttachment is set to XmATTACH_OPPOSITE_FORM, the top side of the widget is attached to the bottom side of the Form. This value must be used with a negative offset value (discussed in the next section) or the widget is placed off of the edge of the Form and it is not visible. While it may seem confusing, this value is the only one that can be applied to an attachment resource that allows you to specify a constant offset from the edge of a Form. The XmATTACH_WIDGET value indicates that the side of a widget is attached to another widget. The other widget must be specified using the appropriate resource from the following list: XmNtopWidget XmNbottomWidget XmNleftWidget XmNrightWidgetThe value for one of these resources must be the widget ID. For example, the figure shows how to attach the right side of Widget 1 to the left side of Widget 2. This attachment method is commonly used to chain together a series of adjacent widgets. Chaining widgets horizontally does not guarantee that the widgets will be aligned vertically, or vice versa. The XmATTACH_OPPOSITE_WIDGET value is just like XmATTACH_WIDGET, except that the widget is attached to the same edge of the specified widget, as shown in the figure. In this case, the right side of Widget 1 is attached to the right side of Widget 3. This attachment method allows you to align the edges of a group of widgets. As with XmATTACH_WIDGET, the other widget must be specified using XmNtopWidget, XmNbottomWidget, XmNleftWidget, or XmNrightWidget . XmATTACH_NONE specifies that the side of a widget is not attached to anything, which is the default value. This case could be represented by a dangling hook that is not attached to anything. If the entire widget moves because another side is attached to something, then this side gets dragged along with it so that the widget does not need resizing. Unless a particular side of a widget is attached to something, that side of the widget is free-floating and moves proportionally with the other parts of the widget. When the side of a widget is attached using XmATTACH_POSITION, the side is anchored to a relative position in the Form. This value works by segmenting the Form into a fixed number of equally-spaced horizontal and vertical positions, based on the value of the XmNfractionBase resource. The position of the side must be specified using the appropriate resource from the following list: XmNtopPosition XmNbottomPosition XmNleftPosition XmNrightPositionSee Section #sformpos for a complete discussion of position attachments. When an attachment is set to XmATTACH_SELF, the side of the widget is attached to its initial position in the Form. You position the widget initially by specifying its x,y location in the Form. After the widget has been placed in the Form, the attachment for the side reverts to XmATTACH_POSITION, with the corresponding position resource set to the relative position of the x,y coordinate in the Form. Now that we have explained the concept of Form attachments, we can reimplement the four corners example from the previous section. 
Unlike in the previous version, we no longer need a resize procedure to calculate the positions of the widgets. By specifying the correct attachments, as shown in the source code the widgets are placed and managed correctly by the Form when it is resized. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* form_corners.c -- demonstrate form layout management. Just as * in corners.c, there are four widgets each labeled top-left, * top-right, bottom-left and bottom-right. Their positions in the * form correspond to their names. As opposed to the BulletinBoard * widget, the Form manages this layout management automatically by * specifying attachment types for each of the widgets. */ #include <Xm/PushB.h> #include <Xm/Form.h> char *corners[] = { "Top Left", "Top Right", "Bottom Left", "Bottom Right", }; main(argc, argv) char *argv[]; { Widget toplevel, form; XtAppContext app; XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); form = XtVaCreateManagedWidget ("form", xmFormWidgetClass, toplevel, NULL); /* Attach the edges of the widgets to the Form. Which edge of * the widget that's attached is relative to where the widget is * positioned in the Form. Edges not attached default to having * an attachment type of XmATTACH_NONE. */ XtVaCreateManagedWidget (corners[0], xmPushButtonWidgetClass, form, XmNtopAttachment, XmATTACH_FORM, XmNleftAttachment, XmATTACH_FORM, NULL); XtVaCreateManagedWidget (corners[1], xmPushButtonWidgetClass, form, XmNtopAttachment, XmATTACH_FORM, XmNrightAttachment, XmATTACH_FORM, NULL); XtVaCreateManagedWidget (corners[2], xmPushButtonWidgetClass, form, XmNbottomAttachment, XmATTACH_FORM, XmNleftAttachment, XmATTACH_FORM, NULL); XtVaCreateManagedWidget (corners[3], xmPushButtonWidgetClass, form, XmNbottomAttachment, XmATTACH_FORM, XmNrightAttachment, XmATTACH_FORM, NULL); XtRealizeWidget (toplevel); XtAppMainLoop (app); }In this example, two sides of each widget are attached to the Form. It is not necessary to attach the other sides of the widgets to anything else. If we attach the other sides to each other, the widgets would have to be resized so that they could stretch to meet each other. With the specified attachments, the output of the program looks just like the output in the figure. A more complex example of Form attachments is shown in the source code This example implements the layout shown in the figure. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* attach.c -- demonstrate how attachments work in Form widgets. 
*/ #include <Xm/PushB.h> #include <Xm/Form.h> main(argc, argv) int argc; char *argv[]; { Widget toplevel, parent, one, two, three; XtAppContext app; XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); parent = XtVaCreateManagedWidget ("form", xmFormWidgetClass, toplevel, NULL); one = XtVaCreateManagedWidget ("One", xmPushButtonWidgetClass, parent, XmNtopAttachment, XmATTACH_FORM, XmNleftAttachment, XmATTACH_FORM, NULL); two = XtVaCreateManagedWidget ("Two", xmPushButtonWidgetClass, parent, XmNleftAttachment, XmATTACH_WIDGET, XmNleftWidget, one, /* attach top of widget to same y coordinate as top of "one" */ XmNtopAttachment, XmATTACH_OPPOSITE_WIDGET, XmNtopWidget, one, NULL); three = XtVaCreateManagedWidget ("Three", xmPushButtonWidgetClass, parent, XmNtopAttachment, XmATTACH_WIDGET, XmNtopWidget, one, /* attach left of widget to same x coordinate as left side of "one" */ XmNleftAttachment, XmATTACH_OPPOSITE_WIDGET, XmNleftWidget, one, NULL); XtRealizeWidget (toplevel); XtAppMainLoop (app); }The example uses three PushButton gadgets inside of a Form widget. The output of the program is shown in the figure. You should notice that the widgets are packed together quite tightly, which might not be how you expected them to appear. In order to space the widgets more reasonably, we need to specify some distance between them using attachment offsets. Attachment offsets control the spacing between widgets and the objects to which they are attached. The following resources represent the attachment offsets for the four sides of a widget: XmNleftOffset XmNrightOffset XmNtopOffset XmNbottomOffsetthe figure shows the graphic representation of attachment offsets. By default, offsets are set to 0 (zero), which means that there is no offset, as shown in the output for the source code To make the output more reasonable, we need only to set the left offset between widgets One and Two and the top offset to between widgets One and Three. The resources values can be hard-coded in the application or set in a resource file, using the following specification: *form.One.leftOffset: 10 *form.One.topOffset: 10 *form.Two.leftOffset: 10 *form.Three.topOffset: 10 Our choice of the value 10 was arbitrary. The widgets are now spaced more appropriately, as shown in the figure. While the layout of the widgets can be improved by setting offset resources, it is also possible to disrupt the layout. Consider the following resource specifications: *form*leftOffset: 10 *form*topOffset: 10While it might seem that these resource values are simply a terser way to specify the offsets shown earlier, the figure makes it clear that these specifications do not produce the desired effect. An application should hard-code whatever resources may be necessary to prevent the user from setting values that would make the application non-functional or aesthetically unappealing. Offset resource values can be tricky because they apply individually to each side of each widget in a Form. The problem with the resource specifications used to produce the figure is that the offsets are being applied to each side of every widget, when some of the alignments need to be precise. 
In order to prevent this problem, we need to hard-code the offsets for particular attachments, as shown in the following code fragment: two = XtVaCreateManagedWidget ("Two", xmPushButtonWidgetClass, parent, XmNleftAttachment, XmATTACH_WIDGET, XmNleftWidget, one, XmNtopAttachment, XmATTACH_OPPOSITE_WIDGET, XmNtopWidget, one, XmNtopOffset, 0, NULL); three = XtVaCreateManagedWidget ("Three", xmPushButtonWidgetClass, parent, XmNtopAttachment, XmATTACH_WIDGET, XmNtopWidget, one, XmNleftAttachment, XmATTACH_OPPOSITE_WIDGET, XmNleftWidget, one, XmNleftOffset, 0, NULL);The use of zero-length offsets guarantees that the widgets they are associated with are aligned exactly with the widgets to which they are attached, regardless of any resource specifications made by the user. A general rule of thumb is that whenever you use XmATTACH_OPPOSITE_WIDGET, you should also set the appropriate offset to zero so that the alignment remains consistent. In some situations it is necessary to use negative offsets to properly arrange widgets in a Form. The most common example of this situation occurs when using the XmATTACH_OPPOSITE_FORM attachment. Unless you use a negative offset, as shown in the figure, the widgets are placed off the edge of the Form and are not visible. Form positions provide another way to position widgets within a Form. The concept is similar to the hook and rod principle described earlier, but in this case the widgets are anchored on at positions that are based on imaginary longitude and latitude lines that are used to segment the Form into equal pieces. The resource used to partition the Form into segments is XmNfractionBase. Although the name of this resource may suggest complicated calculations, you just need to know that the Form is divided horizontally and vertically into the number of partitions represented by its value. For example, the figure shows how a Form is partitioned if XmNfractionBase is set to 5. As you can see, there are an equal number of horizontal and vertical partitions, but the size of the horizontal partitions is not the same as the size of the vertical partitions. It is currently not possible to set the number of horizontal partitions separately from the number of vertical ones, although it is possible to work around this shortcoming, as we will describe shortly. Widgets are placed at the coordinates that represent the partitions by specifying XmATTACH_POSITION for the attachment resource and by specifying a coordinate value for the corresponding position resource. The position resources are XmNtopPosition, XmNbottomPosition, XmNleftPosition , and XmNrightPosition. For example, if we wanted to attach the top and left sides of a PushButton to position 1, we could use the following code fragment: XtVaCreateManagedWidget ("name", xmPushButtonWidgetClass, form, XmNtopAttachment, XmATTACH_POSITION, XmNtopPosition, 1, XmNleftAttachment, XmATTACH_POSITION, XmNleftPosition, 1, NULL);The right and bottom attachments are left unspecified, so those edges of the widget are not explicitly positioned by the Form. If attachments had been specified for these edges, the widget would have to be resized by the Form in order to satisfy all the attachment constraints. One obvious example of using position attachments is to create a tic-tac-toe board layout, as is done in the source code. /* tictactoe.c -- demonstrate how fractionBase and XmATTACH_POSITIONs * work in Form widgets. 
*/ #include <Xm/PushB.h> #include <Xm/Form.h> main(argc, argv) int argc; char *argv[]; { XtAppContext app; Widget toplevel, parent, w; int x, y; extern void pushed(); /* callback for each PushButton */ XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); parent = XtVaCreateManagedWidget ("form", xmFormWidgetClass, toplevel, XmNfractionBase, 3, NULL); for (x = 0; x < 3; x++) for (y = 0; y < 3; y++) { w = XtVaCreateManagedWidget (" ", xmPushButtonWidgetClass, parent, XmNtopAttachment, XmATTACH_POSITION, XmNtopPosition, y, XmNleftAttachment, XmATTACH_POSITION, XmNleftPosition, x, XmNrightAttachment, XmATTACH_POSITION, XmNrightPosition, x+1, XmNbottomAttachment, XmATTACH_POSITION, XmNbottomPosition, y+1, NULL); XtAddCallback (w, XmNactivateCallback, pushed, NULL); } XtRealizeWidget (toplevel); XtAppMainLoop (app); } void pushed(w, client_data, call_data) Widget w; /* The PushButton that got activated */ XtPointer client_data; /* unused -- NULL was passed to XtAddCallback() */ XtPointer call_data; { char buf[2]; XmString str; XmPushButtonCallbackStruct *cbs = (XmPushButtonCallbackStruct *) call_data; /* Shift key gets an O. (xbutton and xkey happen to be similar) */ if (cbs->event->xbutton.state & ShiftMask) buf[0] = '0'; else buf[0] = 'X'; buf[1] = 0; str = XmStringCreateLocalized (buf); XtVaSetValues (w, XmNlabelString, str, NULL); XmStringFree (str); } The output of this program is shown in the figure. As you can see, the children of the Form are equally sized because their attachment positions are segmented equally. If the user resizes the Form, all of the children maintain their relationship to one another. The PushButtons simply grow or shrink to fill the form. One common use of positional attachments is to lay out a number of widgets that need to be of equal size and equal spacing. For example, you might use this technique to arrange the buttons in the action area of a dialog. Chapter 7, Custom Dialogs , provides a detailed discussion of how to arrange buttons in this manner. There may be situations where you would like to attach widgets to horizontal positions that do not match up with how you'd like to attach their vertical positions. Since the fraction base cannot be set differently for the horizontal and vertical orientations, you have to use the least common multiple as the fraction base value. For example, say you want to position the tops and bottoms of all of your widgets to the 2nd and 4th positions, as if the Form were segmented vertically into 5 parts. But, you also want to position the left and right edges of those same widgets to the 3rd, 5th, 7th, and 9th positions, as if it were segmented into 11 parts. You would have to apply some simple arithmetic and set the value for XmNfractionBase to 55 (5x11). The top and bottom edges would be set to the 22nd (2x11) and 44th (4x11) positions and the left and right edges would be set to the 15th (3x5), 25th ( 5x5), 35th (7x5), and 45th (9x5) positions. There are a few other useful Form resources that we have not covered so far. The XmNhorizontalSpacing resource can be used to specify the distance between horizontally adjacent widgets, while XmNverticalSpacing specifies the distance between vertically adjacent widgets. These values only apply when the left and right offset values are not specified, so they are intended to be used as global offset values global for a Form. 
The following resource specification: *horizontalSpacing: 10is equivalent to: *leftOffset: 10 *rightOffset: 10The XmNrubberPositioning resource specifies the default attachments for widgets in the Form. The default value of False indicates that the top and left edges are attached to the form by default. If XmNrubberPositioning is set to True, the top and left attachments are set to XmATTACH_POSITION by default. If the XmNtopAttachment or XmNleftAttachment resource is explicitly set for a widget, then the default attachment has no effect. The XmNresizable resource is another constraint resource that can be set on the children of a Form widget. This resource indicates whether or not the Form tries to grant resize requests from the child. Some widget layouts are difficult to create using a single Form widget. Since a manager widget can contain other managers, it is often possible to generate the desired layout by using a Form within a Form. One common problem is that there are no Form attachments available to align two widgets horizontally if they have different heights. We need a middle attachment resource, but one doesn't exist. For example, if you have a series of Labels and Text widgets that you want to pair off and stack vertically, it would be nice to align each pair of widgets at their midsections. To solve this problem, we can place each Label-Text widget pair in a separate Form. If the top and bottom edges of the widgets are attached to the Form, the widgets are stretched to satisfy the constraints, which means that they are aligned horizontally. All of these smaller Form widgets can be placed inside of a larger Form widget. the source code shows an implementation of this idea. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* text_form.c -- demonstrate how attachments work in Form widgets * by creating a text-entry form type application. */ #include <Xm/LabelG.h> #include <Xm/Text.h> #include <Xm/Form.h> char *prompts[] = { "Name:", "Phone:", "Address:", "City:", "State:", "Zip Code:", }; main(argc, argv) int argc; char *argv[]; { Widget toplevel, mainform, subform, label, text; XtAppContext app; char buf[32]; int i; XtSetLanguageProc (NULL, NULL, NULL);); /* Note that the label here contains a colon from the prompts * array above. This makes it impossible for external resources * to be set on these widgets. Here, that is intentional, but * be careful in the general case. */); } The output of the program is shown in the figure. Notice that the Labels are centered vertically with respect to their corresponding Text widgets. This arrangement happened because each Label was stretched vertically in order to attach it to the top and bottom of the respective Form. Of course, if the Labels were higher than the Text widgets, the Text widgets would be stretched instead. Later, we'll show another version of this program that gives better results. As you can imagine, there are many different ways for a Form, or any other manager widget, to manage the geometry of its children to produce the same layout. Later, when we discuss the RowColumn widget, we will show you another solution to the problem of horizontal alignment. It is important to remember that there is no right or wrong way to create a layout, as long as it works for your application. However, you should be very careful to experiment with resizing issues as well as with resources that can be set by the user that might affect widget layout, such as fonts and strings. 
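As a rough sketch of the nested-Form technique used for the Label-Text pairs above (the names, the create_pair() helper, and the exact attachments are assumptions for illustration, not the book's text_form.c), each pair gets its own Form, and both children are attached to the top and bottom of that subform so the shorter widget is stretched into vertical alignment with the taller one:

    #include <Xm/Form.h>
    #include <Xm/LabelG.h>
    #include <Xm/Text.h>

    static Widget
    create_pair (Widget mainform, char *prompt)
    {
        Widget subform, label;

        /* One subform per Label-Text pair, created unmanaged. */
        subform = XtVaCreateWidget ("subform", xmFormWidgetClass, mainform, NULL);

        /* Attaching both children to the top and bottom of the subform
         * stretches them to the same height, which centers them on
         * each other. */
        label = XtVaCreateManagedWidget (prompt, xmLabelGadgetClass, subform,
            XmNtopAttachment,    XmATTACH_FORM,
            XmNbottomAttachment, XmATTACH_FORM,
            XmNleftAttachment,   XmATTACH_FORM,
            NULL);
        XtVaCreateManagedWidget ("text", xmTextWidgetClass, subform,
            XmNtopAttachment,    XmATTACH_FORM,
            XmNbottomAttachment, XmATTACH_FORM,
            XmNleftAttachment,   XmATTACH_WIDGET,
            XmNleftWidget,       label,
            XmNrightAttachment,  XmATTACH_FORM,
            NULL);

        XtManageChild (subform);
        return subform;
    }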
With a Form widget, you can specify a virtually unlimited number of attachments for its children. The dependencies inherent in these attachments can lead to various errors in the layout of the widgets. One common problem involves circular dependencies. The following code fragment shows a very simple example of a circular dependency: w1 = XtVaCreateManagedWidget ("w1", xmLabelGadgetClass, form, NULL); w2 = XtVaCreateManagedWidget ("w2", xmLabelGadgetClass, form, NULL); XtVaSetValues (w1, XmNrightAttachment, XmATTACH_WIDGET, XmNrightWidget, w2, NULL); XtVaSetValues (w2, XmNleftAttachment, XmATTACH_WIDGET, XmNleftWidget, w1, NULL); In this example, the left widget is attached to the right widget and the right widget is attached to the left one. If you do mistakenly specify a circular dependency, it is unlikely that it will be as obvious as this example. Fortunately, in most cases, the Motif toolkit catches circular dependencies and displays an error message if one is found. When this situation occurs, you need to reconsider your widget layout and try to arrange things such that the relationship between widgets is less complex. One rule to remember is that adjacent widgets should only be attached in one direction. When you attach the side of a widget to another widget in a Form, you need to be careful about how you specify the attached widget. If you specify this widget in the application code, you need to make sure that the widget has been created before you specify it as a resource value. With Motif 1.1, you cannot specify a widget ID in a resource file unless you have installed your own widget-name-to-widget-ID converter. (See Volume Four, X Toolkit Intrinsics Programming Manual, for information about resource converters.) In Motif 1.2, the toolkit provides a name-to-widget converter, so you can specify widget IDs in a resource file. Another common problem arises with certain Motif compound objects, such as ScrolledList and ScrolledText objects. XmCreateScrolledText() and XmCreateScrolledList() return the corresponding Text or List widget, but it is the parent of this widget that needs to be positioned within a Form. The following code fragment shows an example of positioning a ScrolledList incorrectly: form = XmCreateForm (parent, "form", NULL, 0); list = XmCreateScrolledList (form, "scrolled_list", NULL, 0); XtVaSetValues(list, /* <- WRONG */ XmNleftAttachment, XmATTACH_FORM, XmNtopAttachment, XmATTACH_FORM, NULL);Since the List is a child of the ScrolledWindow, not the Form, specifying attachments for the List has no effect on the position of the List in the Form. The attachments need to be specified on the ScrolledWindow, as shown in the following code fragment: XtVaSetValues (XtParent (list), XmNleftAttachment, XmATTACH_FORM, XmNtopAttachment, XmATTACH_FORM, NULL);If you specify attachments for two opposing sides of a widget, the Form resizes the widget as needed, so that the default size of the widget is ignored. In most cases, the Form can resize the widget without a problem. However, one particular case that can cause a problem is a List widget that has its XmNvisibleItemCount resource set. This resource implies a specific size requirement, so that when the List is laid out in the Form widget, the negotiation process between the Form and the List may not be resolved. See Chapter 12, The List Widget, for a complete discussion of the List widget. 
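Putting that advice together, a minimal sketch of placing a ScrolledList inside a Form might look like the following; the function name and the particular attachments are illustrative assumptions, and the important point is that the constraint resources go on XtParent(list), the ScrolledWindow, not on the List itself.

    #include <Xm/Form.h>
    #include <Xm/List.h>

    static Widget
    add_scrolled_list (Widget form)
    {
        Widget list = XmCreateScrolledList (form, "scrolled_list", NULL, 0);

        /* The Form constraints belong on the ScrolledWindow parent. */
        XtVaSetValues (XtParent (list),
            XmNtopAttachment,    XmATTACH_FORM,
            XmNleftAttachment,   XmATTACH_FORM,
            XmNrightAttachment,  XmATTACH_FORM,
            XmNbottomAttachment, XmATTACH_FORM,
            NULL);

        XtManageChild (list);
        return list;
    }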
Attachments in Form widgets can be delicate specifications, which means that you must be specific and, above all, complete in your descriptions of how widgets should be aligned and positioned. Since resources can be set from many different places, the only way to guarantee that you get the layout you want is to hard-code these resource values explicitly. Even though it is important to allow the user to specify as many resources as possible, you do not want to compromise the integrity of your application. Attachments and attachment offsets are probably not in the set of resources that should be user-definable. Although attachments can be delicate, they are also provide a powerful, convenient, and flexible way to lay out widgets within a Form, especially when the widgets are grouped together in some abstract way. Attachments make it easy to chain widgets together, to bind them to the edge of a Form, and to allow them to be fixed on specific locations. You do not need to use a single attachment type exclusively; it is perfectly reasonable, and in most cases necessary, to use a variety of different types of attachments to achieve a particular layout. If you specify too few attachments, you may end up with misplaced widgets or widgets that drift when the Form is resized, while too many attachments may cause the Form to be too inflexible. In order to determine the best way to attach widgets to one another, you may find it helpful to a draw picture first, with all of the hooks and offset values considered. The RowColumn widget is a manager widget that, as its name implies, lays out its children in a row and/or column format. The widget is also used internally by the Motif toolkit to implement a number of special objects, such as the Motif menus, including PopupMenus, PulldownMenus, MenuBars, and OptionMenus. Many of the resources for the RowColumn widget are used to control different aspects of these objects. The Motif convenience functions for creating these objects set most of these resources automatically, so they are generally hidden from the programmer. The resources are not useful when you are using the RowColumn as a simple manager widget anyway, so we do not discuss them here. The XmNrowColumnType resource controls how a particular instance of the RowColumn is used. The resource can be set to the following values: XmWORK_AREA XmMENU_BAR XmPULLDOWN XmMENU_POPUP XmMENU_OPTIONThe default value is XmWORK_AREA; this value is also the one that you should use whenever you want to use a RowColumn widget as a manager. The rest of the values are for the different types of Motif menus. If you want to create a particular menu object, you should use the appropriate convenience function, rather than try to create the menu yourself using a RowColumn directly. We discuss menu creation in in Chapter 4, The Main Window, and Chapter 15, Menus. The RowColumn widget is also used to implement RadioBoxes and CheckBoxes, which are collections of ToggleButtons. See Chapter 11, Labels and Buttons, for more information on these objects. The RowColumn is useful for generic geometry management because it requires less fine tuning than is necessary for a Form or a BulletinBoard widget. Although the RowColumn has a number of resources, you can create a usable layout without specifying any resources. In this case, the children of the RowColumn are automatically laid out vertically. In the source code we create several PushButtons as children of a RowColumn, without specifying any RowColumn resources. 
XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* rowcol.c -- demonstrate a simple RowColumn widget. Create one * with 3 pushbutton gadgets. Once created, resize the thing in * all sorts of contortions to get a feel for what RowColumns can * do with its children. */ #include <Xm/PushB.h> #include <Xm/RowColumn.h> main(argc, argv) int argc; char *argv[]; { Widget toplevel, rowcol; XtAppContext app; XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); rowcol = XtVaCreateManagedWidget ("rowcolumn", xmRowColumnWidgetClass, toplevel, NULL); (void) XtVaCreateManagedWidget ("One", xmPushButtonWidgetClass, rowcol, NULL); (void) XtVaCreateManagedWidget ("Two", xmPushButtonWidgetClass, rowcol, NULL); (void) XtVaCreateManagedWidget ("Three", xmPushButtonWidgetClass, rowcol, NULL); XtRealizeWidget (toplevel); XtAppMainLoop (app); }What makes the RowColumn widget unique is that it automates much of the process of widget layout and management. If you display the application and resize it in a number of ways, you can get a better feel for how the RowColumn works. the figure shows a few configurations of the application; the first configuration is the initial layout of the application. As you can see, if the application is resized just so, the widgets are oriented horizontally rather than vertically. The orientation of the widgets in a RowColumn is controlled by the XmNorientation resource. The default value of the resource is XmVERTICAL. If we want to arrange the widgets horizontally, we can set the resource to XmHORIZONTAL. The orientation can be hard-coded in the application, or we can specify the value of the resource in a resource file. The following resource specification sets the orientation to horizontal: *RowColumn.orientation: horizontalAlternatively, we can specify the resource on the command line as follows: % rowcol -xrm "*orientation: horizontal"the figure shows the output of the source code with a horizontal orientation. As before, the figure shows a few different configurations of the application, with the first configuration being the initial one. If you use a RowColumn widget to manage more objects than can be arranged in a single row or column, you can specify that the widgets should be arranged in both rows and columns. You can also specify whether the widgets should be packed together tightly, so that the rows and columns are not necessarily the same size, or whether the objects should be placed in identically-sized boxes. As with the Form and BulletinBoard widgets, objects can also be placed at specific x, y locations in a RowColumn widget. The RowColumn widget does not provide a three-dimensional border, so if you want to provide a visual border for the widget, you should create it as a child of a Frame widget. The RowColumn widget can be quite flexible in terms of how it lays out its children. The advantage of this flexibility is that all of its child widgets are arranged in an organized fashion, regardless of their widget types. The widgets remain organized when the RowColumn is resized and in spite of constraints imposed by other widgets or by resources. One disadvantage of the flexibility is that sometimes the children need to be arranged in a specific layout so that the user interface is intuitive. the source code shows how to lay out widgets in a spreadsheet-style format using a RowColumn. 
This layout requires that each of the widgets be the same size and be spaced equally in a predetermined number of rows and columns. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* spreadsheet.c -- This demo shows the most basic use of the RowColumn * It displays a table of widgets in a row-column format similar to a * spreadsheet. This is accomplished by setting the number ROWS and * COLS and setting the appropriate resources correctly. */ #include <Xm/LabelG.h> #include <Xm/PushB.h> #include <Xm/RowColumn.h> #define ROWS 8 #define COLS 10 main(argc, argv) int argc; char *argv[]; { Widget toplevel, parent; XtAppContext app; char buf[16]; int i, j; XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); parent = XtVaCreateManagedWidget ("rowcolumn", xmRowColumnWidgetClass, toplevel, XmNpacking, XmPACK_COLUMN, XmNnumColumns, COLS, XmNorientation, XmVERTICAL, NULL); /* simply loop thru the strings creating a widget for each one */ for (i = 0; i < COLS; i++) for (j = 0; j < ROWS; j++) { sprintf (buf, "%d-%d", i+1, j+1); if (i == 0 || j == 0) XtVaCreateManagedWidget (buf, xmLabelGadgetClass, parent, NULL); else XtVaCreateManagedWidget ("", xmPushButtonWidgetClass, parent, NULL); } XtRealizeWidget (toplevel); XtAppMainLoop (app); }The output of this example is shown in the figure. The number of rows is specified by the ROWS definition and the number of columns is specified by COLS. In order to force the RowColumn to lay out its children in the spreadsheet format, we set the XmNpacking, XmNnumColumns, and XmNorientation resources. The value for XmNpacking is set to XmPACK_COLUMN, which specifies that each of the cells should be the same size. The heights and widths of the widgets are evaluated and the largest height and width are used to determine the size of the rows and columns. All of the widgets are resized to this size. If you are mixing different widget types in a RowColumn, you may not want to use XmPACK_COLUMN because of size variations. XmPACK_COLUMN is typically used when the widgets are exactly the same, or at least similar in nature. The default value of XmPACK_TIGHT for XmNpacking allows each widget to keep its specified size and packs the widgets into rows and columns based on the size of the RowColumn widget. Since we are packing the widgets in a row/column format, we need to specify how many columns (or rows) we are using by setting the value of XmNnumColumns to the number of columns. In this case, the program defines COLS to be 10, which indicates that the RowColumn should pack its children such that there are 10 columns. The widget creates as many rows as necessary to provide enough space for all of the child widgets. Whether XmNnumColumns specifies the number of columns or the number of rows depends on the orientation of the RowColumn. In this program, XmNorientation is set to XmVERTICAL to indicate that the value of XmNnumColumns specifies the number of columns to use. If XmNorientation is set to XmHORIZONTAL, XmNnumColumns indicates the number of rows. If we wanted to use a horizontal orientation in our example, we would set XmNnumColumns to ROWS and XmNorientation to XmHORIZONTAL. The orientation also dictates how children are added to the RowColumn; when the orientation is vertical, children are added vertically so that each column is filled up before the next one is started. 
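As a sketch of the horizontal variation just described (this is not a listing from the chapter, only the creation call rewritten under that assumption), the RowColumn in spreadsheet.c would be created as follows:

/* Horizontal variant of the spreadsheet RowColumn: with XmHORIZONTAL,
 * XmNnumColumns counts rows rather than columns.
 */
parent = XtVaCreateManagedWidget ("rowcolumn",
    xmRowColumnWidgetClass, toplevel,
    XmNpacking,     XmPACK_COLUMN,
    XmNnumColumns,  ROWS,
    XmNorientation, XmHORIZONTAL,
    NULL);

With this orientation, children are added row by row, so each row fills up before the next one is started.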
If you need to insert a child in the middle of an existing RowColumn layout, you can use the XmNpositionIndex constraint resource to specify the position of the child. Since this resource is used most often with menus, it is discussed in Chapter 15, Menus. In our example, we explicitly set the value of XmNorientation to the default value of XmVERTICAL. If we do not hard-code this resource, an external resource specification can reset it. Since the orientation and the value for XmNnumColumns need to be consistent, you should always specify these resources together. Whether you choose to hard-code the resources, to use the fallback mechanism, or to use a specification in a resource file, you should be sure that both of the resources are specified in the same place. In the spreadsheet example, we can use either a horizontal or vertical orientation. However, orientation may be significant in other situations, since it affects how the RowColumn adds its children. For example, if we want to implement the text-entry form from the source code using a RowColumn, the order of the widgets is important. In this case, there are two columns and the number of rows depends on the number of text entry fields provided by the application. We specify the orientation of the RowColumn as XmHORIZONTAL and set XmNnumColumns to the number of entries provided by the application, as shown in the source code XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* Code:", }; main(argc, argv) int argc; char *argv[]; { Widget toplevel, rowcol; XtAppContext app; char buf[8]; int i; XtSetLanguageProc (NULL, NULL, NULL);); }The output of this example is shown in the figure. The labels for the text fields are initialized by the text_labels string array. When the RowColumn is created, it is set to a horizontal orientation and the number of rows is set to the number of items in text_labels. As you can see, the output of this program is slightly different from the output for the text_form example. The example uses the XmNisAligned and XmNentryAlignment resources to control the positioning of the Labels in the RowColumn. These resources control the alignment of widgets that are subclasses of Label and LabelGadget. When XmNisAligned is True (the default), the alignment is taken from the XmNentryAlignment resource. The possible alignment values are the same as those that can be set for the Label's XmNalignment resource: XmALIGNMENT_BEGINNING XmALIGNMENT_CENTER XmALIGNMENT_ENDBy default, the text is left justified. While the alignment of the Labels could also be specified using the XmNalignment resource for each widget, it is convenient to be able to set the alignment for the RowColumn and have it propagate automatically to its children. In our example, we use XmALIGNMENT_END to right justify the Labels so that they appear to be attached to the Text widgets. In Motif 1.2, there is an additional resource for controlling the alignment of various children. The XmNentryVerticalAlignment resource controls the vertical positioning of children that are subclasses of Label, LabelGadget, and Text. The possible values for this resource are: XmALIGNMENT_BASELINE_BOTTOM XmALIGNMENT_BASELINE_TOP XmALIGNMENT_CENTER XmALIGNMENT_CONTENTS_BOTTOM XmALIGNMENT_CONTENTS_TOPIn the example, we do not specify this resource because the default value, XmALIGNMENT_CENTER, produces the layout that we want. The RowColumn can be set up so that it only manages one particular type of widget or gadget. 
In many cases, this feature facilitates layout and callback management. For example, a MenuBar consists entirely of CascadeButtons that all act the same way and a RadioBox contains only ToggleButtons. The XmNisHomogeneous resource indicates whether or not the RowColumn should only allow one type of widget child. The widget class that is allowed to be managed is specified by the XmNentryClass resource. XmNisHomogeneous can be set at creation-time only. Once a RowColumn is created, you cannot reset this resource, although you can always get its value. These resources are useful for ensuring consistency; if you attempt to add a widget as a child of a RowColumn that does not permit that widget class, an error message is printed and the widget is not accepted. The Motif toolkit uses these mechanisms to ensure consistency in certain compound objects, to prevent you from doing something like adding a List widget to a MenuBar, for example. In this case, the XmNentryClass is set to xmCascadeButtonWidgetClass. As another example, when XmNradioBehavior is set, the RowColumn only allows ToggleButton widgets and gadgets to be added. The XmCreateRadioBox() convenience function creates a RowColumn widget with the appropriate resources set automatically. (See Chapter 11, Labels and Buttons .) You probably do not need to use XmNisHomogeneous unless you are providing a mechanism that is exported to other programmers. If you are writing an interactive user-interface builder or a program that creates widgets by scanning text files, you may want to ensure that new widgets are of a particular type before they are added to a RowColumn widget. In such cases, you may want to use XmNisHomogeneous and XmNentryClass. Unless there is some way for a user to to dynamically create widgets while an application is running, these resources are not particularly useful. The RowColumn does not provide any specific callback routines that react to user input. While there are no callbacks for FocusIn and FocusOut events, the widget does have XmNmapCallback and XmNunmapCallback callback resources. These callbacks are invoked when the window for the RowColumn is mapped and unmapped. The callbacks are similar to those for the BulletinBoard, but since the RowColumn is not designed specifically to be a child of a DialogShell, the routines are invoked regardless of whether the parent of the RowColumn is a DialogShell. The XmNentryCallback is the only other callback that is associated specifically with the RowColumn widget. This callback resource makes it possible to install a single callback function that acts as the activation callback for each of the children of a RowColumn widget. The routine specified for the XmNentryCallback overrides the XmNactivateCallback functions for any PushButton or CascadeButton children and the XmNvalueChangedCallback functions for ToggleButtons. The XmNentryCallback is a convenience to the programmer; if you use it, you don't have to install separate callbacks for each widget in the RowColumn. XmNentryCallback functions must be installed before children are added to the RowColumn, so be sure you call XtAddCallback() before you create any child widgets. The callback procedure takes the standard form of an XtCallbackProc. 
The call_data parameter is an XmRowColumnCallbackStruct, which is defined as follows:

typedef struct {
    int      reason;
    XEvent  *event;
    Widget   widget;
    char    *data;
    char    *callbackstruct;
} XmRowColumnCallbackStruct;

The reason field of this data structure is set to XmCR_ACTIVATE when the XmNentryCallback is invoked. The event indicates the event that caused the notification. The entry callback function is called regardless of which widget within the RowColumn was activated. Since an entry callback overrides any previously-set callback lists for PushButtons, CascadeButtons, and ToggleButtons, the parameters that would have been passed to these callback routines are provided in the RowColumn callback structure. The widget field specifies the child that was activated, the widget-specific callback structure is placed in the callbackstruct field, and the client data that was set for the widget is passed in the data field. The source code shows the installation of an entry callback and demonstrates how the normal callback functions are overridden. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4.

/* entry_cb.c -- demonstrate how the XmNentryCallback resource works
 * in RowColumn widgets. When a callback function is set for this
 * resource, all the callbacks for the RowColumn's children are reset
 * to point to this function. Their original functions, even if they
 * had been set, are no longer called; the entry-callback function is
 * called instead.
 */
#include <Xm/PushBG.h>
#include <Xm/RowColumn.h>

char *strings[] = {
    "One", "Two", "Three", "Four", "Five",
    "Six", "Seven", "Eight", "Nine", "Ten",
};

void
called(widget, client_data, call_data)
Widget widget;
XtPointer client_data;
XtPointer call_data;
{
    XmRowColumnCallbackStruct *cbs =
        (XmRowColumnCallbackStruct *) call_data;
    Widget pb = cbs->widget;

    printf ("%s: %d\n", XtName (pb), cbs->data);
}

static void
never_called(widget, client_data, call_data)
Widget widget;
XtPointer client_data;
XtPointer call_data;
{
    puts ("This function is never called");
}

main(argc, argv)
int argc;
char *argv[];
{
    Widget toplevel, parent, w;
    XtAppContext app;
    int i;

    XtSetLanguageProc (NULL, NULL, NULL);

    toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0,
        &argc, argv, NULL, NULL);

    parent = XtVaCreateManagedWidget ("rowcolumn",
        xmRowColumnWidgetClass, toplevel, NULL);

    XtAddCallback (parent, XmNentryCallback, called, NULL);

    /* simply loop thru the strings creating a widget for each one */
    for (i = 0; i < XtNumber (strings); i++) {
        w = XtVaCreateManagedWidget (strings[i],
            xmPushButtonGadgetClass, parent, NULL);
        /* Call XtAddCallback() to install client_data only! */
        XtAddCallback (w, XmNactivateCallback, never_called, i+1);
    }

    XtRealizeWidget (toplevel);
    XtAppMainLoop (app);
}

The RowColumn is created and its XmNentryCallback is set to called(). This routine ignores the client_data parameter, as none is provided. However, we do use the data field of the cbs because this is the data that is specified in the call to XtAddCallback() for each of the children. We install the never_called() routine for each PushButton and pass the position of the button in the RowColumn as the client_data. Even though the entry callback overrides the activate callback, the client_data is preserved. Our example is a bit contrived, so it may seem pointless to call XtAddCallback() for each PushButton and specify an XmNentryCallback as well.
The most compelling reason for using an entry callback is that you may want to provide client data for the RowColumn as a whole, as well as for each child widget. Remember that the RowColumn widget is also used for a number of objects implemented internally by the Motif toolkit, such as the Motif menu system, RadioBoxes, and CheckBoxes. Many of the resources for the widget are specific to these objects, so they are not discussed here. For more information on menus, see Chapter 4, The Main Window, and Chapter 15, Menus; for information on RadioBoxes and CheckBoxes, see Chapter 11, Labels and Buttons. The Frame is a simple manager widget; the purpose of the Frame is to draw a three-dimensional border around its child. In Motif 1.1, a Frame can contain only one child. With Motif 1.2, the widget can have two children: a work area child and a title child. The Frame shrink wraps itself around its work area child, adding space for a title if one is specified. The children are responsible for setting the size of the Frame. The Frame is useful for grouping related control elements, so that they are separated visually from other elements in a window. The Frame is commonly used as the parent of RadioBoxes and CheckBoxes, since the RowColumn widget does not provide a three-dimensional border. the figure shows a portion of a dialog box that uses Frames to segregate three groups of ToggleButtons. To use Frame widgets in an application, you must include the file <Xm/Frame.h>. Creating a Frame widget is just like creating any other manager widget, as shown in the following code fragment: Widget frame; frame = XtVaCreateManagedWidget ("name", xmFrameWidgetClass, parent, resource-value-list, NULL);Since the Frame performs only simple geometry management, you can create a Frame widget as managed using XtVaCreateManagedWidget() and not worry about a performance loss. The Frame widget is an exception to the guidelines about creating manager widgets that we presented earlier in the chapter. The principal resource used by the Frame widget is XmNshadowType. This resource specifies the style of the three-dimensional border that is placed around the work area child of the Frame. The value may be any of the following: XmSHADOW_IN XmSHADOW_OUT XmSHADOW_ETCHED_IN XmSHADOW_ETCHED_OUTIf the parent of the Frame is a shell widget, the default value for XmNshadowType is set to XmSHADOW_OUT and the value for XmNshadowThickness is set to 1. Otherwise, the default shadow type is XmSHADOW_ETCHED_IN and the thickness is 2. Of course, these values may be overridden by the application or the user. In Motif 1.2, the Frame provides some constraint resources that can be specified for its children. The XmNchildType resource indicates whether the child is the work area or the title child for the Frame. The default value is XmFRAME_WORKAREA_CHILD . To specify that a child is the title child, use the value XmFRAME_TITLE_CHILD. The XmNchildHorizontalAlignment and XmNchildHorizontalSpacing resources control the horizontal positioning of the title. The possible values for horizontal alignment are: XmALIGNMENT_BEGINNING XmALIGNMENT_END XmALIGNMENT_CENTERThe XmNchildVerticalAlignment resource specifies the vertical positioning of the title child relative to the top shadow of the Frame. 
The possible values for this resource are: XmALIGNMENT_BASELINE_BOTTOM XmALIGNMENT_BASELINE_TOP XmALIGNMENT_CENTER XmALIGNMENT_WIDGET_TOP XmALIGNMENT_WIDGET_BOTTOMthe source code demonstrates many of the different shadow and alignment styles that are possible with the Frame widget. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. This example also uses functionality that is new in Motif 1.2; to take advantage of this functionality, define the symbol MOTIF_1_2 when you compile the program. /* frame.c -- demonstrate the Frame widget by creating * four Labels with Frame widget parents. */ #include <Xm/LabelG.h> #include <Xm/RowColumn.h> #include <Xm/Frame.h> main(argc, argv) int argc; char *argv[]; { Widget toplevel, rowcol, frame; XtAppContext app; XtSetLanguageProc (NULL, NULL, NULL); /* Initialize toolkit and create TopLevel shell widget */ toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); /* Make a RowColumn to contain all the Frames */ rowcol = XtVaCreateWidget ("rowcolumn", xmRowColumnWidgetClass, toplevel, XmNspacing, 5, NULL); /* Create different Frames each containing a unique shadow type */ XtVaCreateManagedWidget ("Frame Types:", xmLabelGadgetClass, rowcol, NULL); frame = XtVaCreateManagedWidget ("frame1", xmFrameWidgetClass, rowcol, XmNshadowType, XmSHADOW_IN, NULL); XtVaCreateManagedWidget ("XmSHADOW_IN", xmLabelGadgetClass, frame, NULL); #ifdef MOTIF_1_2 XtVaCreateManagedWidget ("XmALIGNMENT_CENTER", xmLabelGadgetClass, frame, XmNchildType, XmFRAME_TITLE_CHILD, XmNchildVerticalAlignment, XmALIGNMENT_CENTER, NULL); #endif frame = XtVaCreateManagedWidget ("frame2", xmFrameWidgetClass, rowcol, XmNshadowType, XmSHADOW_OUT, NULL); XtVaCreateManagedWidget ("XmSHADOW_OUT", xmLabelGadgetClass, frame, NULL); #ifdef MOTIF_1_2 XtVaCreateManagedWidget ("XmALIGNMENT_BASELINE_TOP", xmLabelGadgetClass, frame, XmNchildType, XmFRAME_TITLE_CHILD, XmNchildVerticalAlignment, XmALIGNMENT_BASELINE_TOP, NULL); #endif frame = XtVaCreateManagedWidget ("frame3", xmFrameWidgetClass, rowcol, XmNshadowType, XmSHADOW_ETCHED_IN, NULL); XtVaCreateManagedWidget ("XmSHADOW_ETCHED_IN", xmLabelGadgetClass, frame, NULL); #ifdef MOTIF_1_2 XtVaCreateManagedWidget ("XmALIGNMENT_WIDGET_TOP", xmLabelGadgetClass, frame, XmNchildType, XmFRAME_TITLE_CHILD, XmNchildVerticalAlignment, XmALIGNMENT_WIDGET_TOP, NULL); #endif frame = XtVaCreateManagedWidget ("frame4", xmFrameWidgetClass, rowcol, XmNshadowType, XmSHADOW_ETCHED_OUT, NULL); XtVaCreateManagedWidget ("XmSHADOW_ETCHED_OUT", xmLabelGadgetClass, frame, NULL); #ifdef MOTIF_1_2 XtVaCreateManagedWidget ("XmALIGNMENT_WIDGET_BOTTOM", xmLabelGadgetClass, frame, XmNchildType, XmFRAME_TITLE_CHILD, XmNchildVerticalAlignment, XmALIGNMENT_WIDGET_BOTTOM, NULL); #endif XtManageChild (rowcol); XtRealizeWidget (toplevel); XtAppMainLoop (app); }The output of this example is shown in the figure. The program creates four Frame widgets. Each Frame has two Label children, one for the work area and one for the title. Each Frame uses a different value for the XmNshadowType and XmNchildVerticalPlacement resources, where these values are indicated by the text of the Labels. Although we have used a Label as the work area child of a Frame in this example, it is not a good idea to put a border around a Label. The shadow border implies selectability, which can confuse the user. The PanedWindow widget lays out its children in a vertically-tiled format. 
The Motif Style Guide also provides for a horizontally-oriented paned window, but the Motif toolkit does not yet support it. The idea behind the PanedWindow is that the user can adjust the individual panes to provide more or less space as needed on a per-child basis. For example, if the user wants to see more text in a Text widget, he can use the control sashes (sometimes called grips) to resize the area for the Text widget. When the user moves the sash, the widget above or below the one being resized is resized smaller to compensate for the size change. The width of the widget expands to that of its widest managed child and all of the other children are resized to match that width. The height of the PanedWindow is set to the sum of the heights of all of its children, plus the spacing between them and the size of the top and bottom margins. In Motif 1.1, widgets are placed in a PanedWindow in the order that you create them, with the first child being placed at the top of the PanedWindow. With Motif 1.2, you can set the XmNpositionIndex constraint resource to control the position of a child in a PanedWindow if you do not want to use the default order. An application that wants to use the PanedWindow must include the file <Xm/PanedW.h>. An instance of the widget may be created as usual for manager widgets, as shown in the following code fragment: Widget paned_w; paned_w = XtVaCreateWidget ("name", xmPanedWindowWidgetClass, parent, resource-value-list, NULL); ... XtManageChild (paned_w); The PanedWindow widget provides constraint resources that allow its children to indicate their preferred maximum and minimum sizes. the source code shows three widgets that are set in a PanedWindow. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* paned_wind1.c --there are two Label widgets that are positioned * above and below a Text widget. The Labels' minimum and maximum * sizes are set to 25 and 45 respectively, preventing those * panes from growing beyond those bounds.); XtVaCreateManagedWidget ("Hello", xmLabelWidgetClass, pane, XmNpaneMinimum, 25, XmNpaneMaximum, 45, NULL); XtVaCreateManagedWidget ("text", xmTextWidgetClass, pane, XmNrows, 5, XmNcolumns, 80, XmNpaneMinimum, 35, XmNeditMode, XmMULTI_LINE_EDIT, XmNvalue, "This is a test of the paned window widget.", NULL); XtVaCreateManagedWidget ("Goodbye", xmLabelWidgetClass, pane, XmNpaneMinimum, 25, XmNpaneMaximum, 45, NULL); XtManageChild (pane); XtRealizeWidget (toplevel); XtAppMainLoop (app); }The two Label widgets are positioned above and below a Text widget in a PanedWindow. The minimum and maximum sizes of the Labels are set to 25 and 45 pixels respectively, using the resources XmNpaneMinimum and XmNpaneMaximum. No matter how the PanedWindow or any of the other widgets are resized, the two Labels cannot grow or shrink beyond these bounds. The Text widget, however, only has a minimum size restriction, so it may be resized as large or as small as the user prefers, provided that it does not get smaller than the 35-pixel minimum. the figure shows two configurations of this application. One problem with setting the maximum and minimum resources for a widget involves determining exactly what those extents should be. The maximum size of 45 for the Label widgets in the source code is an arbitrary value that was selected for demonstration purposes only. If other resources had been set on one of the Labels such that the widget needed to be larger, the application would definitely look unbalanced. 
For example, an extremely high resolution monitor might require the use of unusually large fonts in order for text to appear normal. There are two choices available at this point. One is to specify the maximum and minimum values in a resolution-independent way and the other is to ask the Label widget itself what height it wants to be. Specifying resolution-independent dimensions requires you to carefully consider the type of application you are creating. When you specify resolution-independent values, you must specify the values in either millimeters, inches, points, or font units. The value of the XmNunitType Manager resource controls the type of units that are used. the source code demonstrates the use of resolution-independent dimensions. XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* unit_types.c --the same as paned_win1.c except that the * Labels' minimum and maximum sizes are set to 1/4 inch and * 1/2 inch respectively. These measurements are retained * regardless of the pixels-per-inch resolution of the user's * display. */ , XmNunitType, Xm1000TH_INCHES, NULL); XtVaCreateManagedWidget ("Hello", xmLabelWidgetClass, pane, XmNpaneMinimum, 250, /* quarter inch */ XmNpaneMaximum, 500, /* half inch */ NULL); XtVaCreateManagedWidget ("text", xmTextWidgetClass, pane, XmNrows, 5, XmNcolumns, 80, XmNpaneMinimum, 250, XmNeditMode, XmMULTI_LINE_EDIT, XmNvalue, "This is a test of the paned window widget.", NULL); XtVaCreateManagedWidget ("Goodbye", xmLabelWidgetClass, pane, XmNpaneMinimum, 250, /* quarter inch */ XmNpaneMaximum, 500, /* half inch */ NULL); XtManageChild (pane); XtRealizeWidget (toplevel); XtAppMainLoop (app); }The second technique that we can use is to query the Label widgets about their heights. This technique requires the use of the Xt function XtQueryGeometry(), as shown in the source code XtSetLanguageProc() is only available in X11R5; there is no corresponding function in X11R4. /* paned_wind2.c --there are two label widgets that are positioned * above and below a Text widget. The labels' desired heights are * queried using XtQueryGeometry() and their corresponding maximum * and minimum sizes are set to the same value. This effectively * prevents those panes from being resized., label; XtWidgetGeometry size;); label = XtVaCreateManagedWidget ("Hello", xmLabelWidgetClass, pane, NULL); size.request_mode = CWHeight; XtQueryGeometry (label, NULL, &size); XtVaSetValues (label, XmNpaneMaximum, size.height, XmNpaneMinimum, size.height, NULL); printf ("hello's height: %d0, size.height); XtVaCreateManagedWidget ("text", xmTextWidgetClass, pane, XmNrows, 5, XmNcolumns, 80, XmNresizeWidth, False, XmNresizeHeight, False, XmNpaneMinimum, 35, XmNeditMode, XmMULTI_LINE_EDIT, XmNvalue, "This is a test of the paned window widget.", NULL); label = XtVaCreateManagedWidget ("Goodbye", xmLabelWidgetClass, pane, NULL); size.request_mode = CWHeight; XtQueryGeometry (label, NULL, &size); XtVaSetValues (label, XmNpaneMaximum, size.height, XmNpaneMinimum, size.height, NULL); printf ("goodbye's height: %d0, size.height); XtManageChild (pane); XtRealizeWidget (toplevel); XtAppMainLoop (app); }XtQueryGeometry() asks a widget what size it would like to be. This routine takes the following form: XtGeometryResult XtQueryGeometry(widget, intended, preferred_return) Widget widget; XtWidgetGeometry *intended; XtWidgetGeometry *preferred_return;Since we do not want to resize the widget, we pass NULL for the intended parameter. 
We are not interested in the return value of the function, since the information that we want is returned in the preferred_return parameter. This parameter is of type XtWidgetGeometry, which is defined as follows: typedef struct { XtGeometryMask request_mode; Position x, y; Dimension width, height, border_width; Widget sibling; int stack_mode; } XtWidgetGeometry;We tell the widget what we want to know by setting the request_mode field of the size variable that we pass to the routine. The request_mode field is checked by the query_geometry function within the called widget. Depending on which bits that are specified, the appropriate fields are set within the returned data structure. In the source code we set request_mode to CWHeight, which tells the Label widget's query_geometry method to return the desired height in the height field of the data structure. If we had wanted to know the width as well, we could have set request_mode as follows: size.request_mode = (CWHeight | CWWidth);In this case, the width and height fields would be filled in by the Label widget. Once we have the Label's desired height, we can set the constraint resources XmNpaneMaximum and XmNpaneMinimum to the height of the Label. By making these two values the same, the pane associated with the Label cannot be resized. In most cases, the XtQueryGeometry() method can be used reliably to determine proper values for minimum and maximum pane extents. In Motif 1.1, many of the Motif widgets do not have query_geometry methods, so they do not return sensible values when XtQueryGeometry() is called. In Motif 1.2, the query_geometry method has been implemented for all Motif widgets. Setting extents is useful, since without them, the user can adjust a PanedWindow so that the size of a widget is unreasonable or unaesthetic. If you are setting the extents for a scrolled object (ScrolledText or ScrolledList), you do not need to be as concerned about the maximum extent, since these objects handle larger sizes appropriately. Minimum states are certainly legitimate though. For example, you could use the height of a font as a minimum extent for Text or a List. The PanedWindow widget can be useful for building your own dialogs because you can control the size of the action area. The action area is always at the bottom of the dialog and its size should never be changed. See Chapter 7, Custom Dialogs, for a complete discussion of how a PanedWindow can be used in in this manner. The Sashes in a PanedWindow widget are in fact widgets, even though they are not described or defined publicly. While the Motif Style Guide says that the Sash is part of the PanedWindow widget, the Motif toolkit defines the object privately, which means that technically the Sash is not supported and it may change in the future. However, it is possible to get a handle to a Sash if you absolutely need one. In order to retrieve a Sash, you need to include the header file <Xm/SashP.h>. The fact that the file ends in an uppercase P indicates that it is a private header file, which means that an application program should not include it. However, there is no public header file for the Sash widget, so unless you include the private header file, you cannot access the Sashes in a PanedWindow. If you retrieve all of the children from a PanedWindow using XtVaGetValues() on the XmNchildren resource, you can use the XmIsSash() macro to locate the Sash children. 
This macro is defined as follows: #define XmIsSash(w) XtIsSubclass(w, xmSashWidgetClass)Although XtIsSubclass() is a public function, xmSashWidgetClass is not declared publicly. One reason that you might want to get handles to the Sashes in a PanedWindow is to turn off keyboard traversal to the Sashes, as described in the next section. The Motif Style Guide specifies methods by which the user can interact with an application without using the mouse. These methods provide a way for the user to navigate through an application and activate user-interface elements on the desktop using only the keyboard. Such activity is known as keyboard traversal and is based on the Common User Access (CUA) interface specifications from Microsoft Windows and Presentation Manager. These specifications make heavy use of the TAB key to move between elements in a user interface; related interface controls are grouped into what are called tab groups. Some examples of tab groups are a set of ToggleButtons or a collection of PushButtons. Just as only one shell on the screen can have the keyboard focus, only one widget at a time can have the input focus. When keyboard activity occurs in a window, the toolkit knows which tab group is current and directs the input focus to the active item within that group. The user can move from one item to the next within a tab group using the arrow keys. The user can move from one tab group to the next using the TAB key. To traverse the tab groups in the reverse direction, the user can use SHIFT-TAB. The CTRL key can be used with the TAB key in a Text widget to differentiate between a traversal operation and the use of the TAB key for input. The SPACEBAR activates the item that has the keyboard focus. To illustrate the keyboard traversal mechanisms, let's examine tictactoe.c from the source code This program contains one tab group, the Form widget. Because the PushButtons inside of it are elements in the tab group, the user can move between the items in the tic-tac-toe board using the arrow keys on the keyboard, as illustrated in the figure. Pressing the TAB key causes the input focus to be directed to the next tab group and set to the first item in the group, which is known as the home element. Since there is only one tab group in this application, the traversal mechanism moves the input focus to the first element in the same group. Thus, pressing the TAB key in this program always causes the home item to become the current input item. The conceptual model of the tab group mechanism corresponds to the user's view of an application. With tab groups, the widget tree is flattened out into two simple layers: the first layer contains tab groups and the second layer contains the elements of those groups. In this model, there is no concept of managers and children or any sort of widget hierarchy. But as you know, an application is based on a very structured widget hierarchy. The implementation of tab groups is based on lists of widget pointers that refer to existing widgets in the widget tree. These lists, known as navigation groups, are maintained by the VendorShell and MenuShell widgets and are accessed by the input-handling mechanisms of the Motif toolkit. Each widget class in the Motif toolkit is initialized either as a tab group itself or as a member of a tab group. Manager widgets, Lists, and Text widgets are usually tagged as tab groups, since they typically contain subelements that can be traversed. 
For example, the elements in a List can be traversed using the arrow keys on the keyboard; the up arrow moves the selection to the previous element in the List widget. In a Text widget, the arrow keys move the insertion cursor. The other primitive widgets, such as PushButtons and ToggleButtons, are usually tagged as tab group members. Output-only widgets are not tagged at all and are excluded from the tab group mechanism, since you cannot traverse to an output-only widget. These default settings are not permanent. For example, a PushButton or a ToggleButton can be a tab group, although this setting is uncommon and should only be done when you have a special reason for forcing the widget to be recognized as a separate tab group. When the TAB key is pressed, the next tab group in the list of tab groups becomes the current tab group. Since manager widgets are normally tab groups, the order of tab group traversal is typically based on the order in which the manager widgets are created. This entire process is automated by the Motif toolkit, so an application does not have to do anything unless it wants to use a different system of tab groups for some reason. In order to maintain Motif compliance, we recommend that you avoid interfering with the default behavior. We are discussing keyboard traversal in the chapter on manager widgets because managers play the most visible role in keyboard traversal from the application programmer's perspective. Managers, by their nature, contain other widgets, which-tac-toe->event->xbutton.state & ShiftMask) letter = buf[0] = '0'; else letter = buf[0] = 'X'; buf[1] = 0; str = XmStringCreateLocalized (buf); XtVaSetValues (w, XmNlabelString, str, XmNuserData, letter, XmNshadowThickness, 0, XmNtraversalOn, False, NULL); XmStringFree (str); }The user can still click on a previously-selected item with the mouse button, but the routine causes an error bell to sound in this situation. Output-only-mouse-drivenPrior to this release, the PanedWindow and its Sashes were created in such a way that you could not override the traversability of the Sashes using hard-coded values in the widget creation call or using a resource specification in a resource file. In fact, the internals of the PanedWindow widget hard-coded-- > 0) if (XmIsSash (children[num_children])) XtVaSetValues (children[num_children], XmNtraversalOn, False, NULL); }There are some applications that might actually have to be used without a mouse, just as there are some users who prefer to use the keyboard, so you should be careful about turning off keyboard traversal for the Sashes in a PanedWindow widget. If you do turn off Sash traversal, we recommend that you document the behavior and provide a way for the user to control this behavior. For example, you could provide an application-specific resource that controls whether or not Sashes can be traversed using the keyboard. As noted earlier, XmNtraversalOn can be set on tab groups (which tend to be manager widgets) as well as tab group members. If traversal is off for a tab group, none of its members can be traversed. If keyboard traversal is something that you need to modify in your application, you should probably hard-code XmNtraversalOn values directly into individual widgets as you create them. Turning off traversal is typically not something that is done on a per-widget-class basis. 
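Spelled out in full, the Sash fragment above amounts to something like the following sketch; the routine name, the declarations, and the XtVaGetValues() call are filled in here for illustration, since only part of the original listing is shown:

#include <Xm/PanedW.h>
#include <Xm/SashP.h>    /* private header, needed only for XmIsSash() */

/* Fetch the PanedWindow's children and disable keyboard traversal
 * on every Sash among them.
 */
static void
turn_off_sash_traversal (pane)
Widget pane;
{
    Widget   *children;
    Cardinal  num_children;

    XtVaGetValues (pane,
        XmNchildren,    &children,
        XmNnumChildren, &num_children,
        NULL);
    while (num_children-- > 0)
        if (XmIsSash (children[num_children]))
            XtVaSetValues (children[num_children],
                XmNtraversalOn, False,
                NULL);
}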
When you turn traversal off in application code, be careful to make sure that there is no reason that a user would want to traverse to the particular widgets because once you hard-code the resource values, they cannot be modified by the user in a resource file. The XmNnavigationType resource controls whether a widget is a tab group itself or is a member of a tab group. When this resource is set to XmNONE, the widget is not a tab group, so it defaults to being a member of one. As a member, its XmNtraversalOn resource indicates whether or not the user can direct the input focus to the widget using the keyboard. This value is the default for most primitive widgets. When the resource is set to XmTAB_GROUP, the widget is a tab group itself, so it is included in keyboard navigation. This value is the default for managers, Lists, and Text widgets. By modifying the default value of the XmNnavigationType resource for a widget, you can specify that a primitive widget is a tab group. As a result, the user traverses to the widget using the TAB key rather than one of the arrow keys. For example, you can modify tictactoe.c by setting the XmNnavigationType to XmTAB_GROUP for each PushButton. There are two other values for XmNnavigationType that are used for backwards compatibility with older versions of the toolkit. They are not generally used unless you are porting programs from Motif 1.0. In this version of the toolkit, there is an application called XmAddTabGroup() to make a widget a tab group. With Motif 1.0, the programmer was required to specify precisely which widgets were tab groups, which were members of a tab group, and which were not traversable. As a result, XmAddTabGroup() had to be called for all manager widgets. To maintain backwards compatibility, whenever XmAddTabGroup() is called, the toolkit assumes the programmer is using the old Motif 1.0 specifications and disables the new, automatic behavior. Unless your application is currently using the old API, you can probably skip to the next section. Calling XmAddTabGroup() is equivalent to setting XmNnavigationType to XmEXCLUSIVE_TAB_GROUP. If this value is set on a widget or if XmAddTabGroup() is called, new widgets are no longer added as tab groups automatically. Basically, the toolkit reverts to the old behavior. An exclusive tab group is much the same as a normal tab group, but Motif recognizes this special value and ignores all widgets that have the newer XmTAB_GROUP value set. You can think of this value as setting exclusivity on the tab group behavior. The value XmSTICKY_TAB_GROUP can also be used for XmNnavigationType in Motif 1.0. If this value is used on a widget, the widget is included automatically in keyboard traversal, even if another widget has its navigation type set to XmEXCLUSIVE_TAB_GROUP or if XmAddTabGroup() has been called. This value provides a partial workaround for the new behavior, but not exactly. You can set a widget to be a sticky tab group without completely eliminating the old behavior and without interfering with the new behavior. You can ignore these two values for all intents and purposes. If you need to port an old application to a newer version of the Motif toolkit, you should consider removing all of the calls to XmAddTabGroup() and just going with the new behavior. If you need to change the default behavior, you should use XmNONE and XmTAB_GROUP to control whether or not a widget is a tab group or a member of one. 
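For instance, the change to tictactoe.c mentioned above comes down to creating each PushButton with its navigation type set explicitly; this is a sketch of that single change, with the widget and parent names assumed from that example:

pb = XtVaCreateManagedWidget ("pb",
    xmPushButtonWidgetClass, form,
    XmNnavigationType, XmTAB_GROUP,  /* each button is its own tab group */
    NULL);

The user then moves between the buttons with the TAB and SHIFT-TAB keys instead of the arrow keys.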
To control whether the widget is part of the whole keyboard traversal mechanism, use the XmNtraversalOn resource. In order for manager widgets to implement keyboard traversal, they have their own event translation tables that specify what happens when certain events occur. As discussed in Chapter 2, The Motif Programming Model, a translation table specifies a series of one or more events and an action that is invoked if the event occurs. The X Toolkit Intrinsics handles event translations automatically; when the user presses the TAB key, Xt looks up the event <Key>Tab in the table and invokes the corresponding action procedure. In this case, the procedure changes the input focus from the current tab group to the next one on the list. This mechanism is dependent on the window hierarchy of the widget tree. Events are first delivered to the widget associated with the window where the event took place. If that widget (or its window) does not handle the type of event delivered, it passes the event up the window tree to its parent, which then has the option of dealing with the event. Assuming that the parent is a manager widget of some kind, it now has the option to process the event. If the event is a keyboard traversal event, the appropriate action routine moves the input focus. The default event translations that manager widgets use to handle keyboard traversal are currently specified as follows: <Key>osfBeginLine: ManagerGadgetTraverseHome() <Key>osfUp: ManagerGadgetTraverseUp() <Key>osfDown: ManagerGadgetTraverseDown() <Key>osfLeft: ManagerGadgetTraverseLeft() <Key>osfRight: ManagerGadgetTraverseRight() Shift ~Meta ~Alt <Key>Tab: ManagerGadgetPrevTabGroup() ~Meta ~Alt <Key>Tab: ManagerGadgetNextTabGroup() <EnterWindow>: ManagerEnter() <LeaveWindow>: ManagerLeave() <FocusOut>: ManagerFocusOut() <FocusIn>: ManagerFocusIn()The OSF-specific keysyms are vendor-defined, which means that the directional arrows must be defined by the user's system at run-time. Values like <Key>osfUp and <Key>osfDown may not be the same as <Key>Up and <Key>Down. The routines that handle keyboard traversal are prefixed by ManagerGadget. Despite their names, these functions are not specific to gadgets; they are used to handle keyboard traversal for all of the children in the manager. If a primitive widget inside of a manager widget specifies an event translation that conflicts with one of the manager's translations, the primitive widget can interfere with keyboard traversal. If the primitive widget has the input focus, the user cannot use the specified event to move the input focus with the keyboard. The following code fragment shows how the translation table for a PushButton can interfere with the keyboard traversal mechanism in its parent: Widget pb; XtActionRec action; extern void do_tab(); actions.string = "do_tab"; actions.proc = do_tab; XtAddActions (&actions, 1); pb = XtVaCreateManagedWidget ("name", xmPushButtonWidgetClass, parent, resource-value-list, NULL); XtOverrideTranslations (pb, XtParseTranslationTable ("<Key>Tab: do_tab"));The translation table is merged into the existing translations for the PushButton widget. This translation table does not interfere with the translation table in the manager widget, but it does interfere with event propagation to the manager. When the TAB key is pressed, the action routine do_tab() is called and the event is consumed by the PushButton widget. The event is not propagated up to the manager widget so that it can perform the appropriate keyboard traversal action. 
The workaround for this problem is to have do_tab() process the keyboard traversal action on its own, in addition to performing its own action. This technique is discussed in the next section. Since a manager can also contain gadgets, the manager widget must also handle input that is destined for gadgets. Since gadgets do not have windows, they cannot receive events. Only the manager widget that is the parent of a gadget can receive events for the gadget. The manager widget has the following additional translations to handle input on behalf of gadgets: <Key>osfActivate: ManagerParentActivate() <Key>osfCancel: ManagerParentCancel() <Key>osfSelect: ManagerGadgetSelect() <Key>osfHelp: ManagerGadgetHelp() ~Shift ~Meta ~Alt <Key>Return: ManagerParentActivate() ~Shift ~Meta ~Alt <Key>space: ManagerGadgetSelect() <Key>: ManagerGadgetKeyInput() <BtnMotion>: ManagerGadgetButtonMotion() <Btn1Down>: ManagerGadgetArm() <Btn1Down>,<Btn1Up>: ManagerGadgetActivate() <Btn1Up>: ManagerGadgetActivate() <Btn1Down>(2+): ManagerGadgetMultiArm() <Btn1Up>(2+): ManagerGadgetMultiActivate() <Btn2Down>: ManagerGadgetDrag()Unlike with keyboard traversal translations, widget translations cannot interfere with the manager translations that handle events destined for gadgets. If a widget had the input focus, the user's actions cannot be destined for a gadget, since the user would have to traverse to the gadget first, in which case the manager would really have the input focus. In Chapter 10, The DrawingArea Widget, we discuss the problems involved in handling input events on the DrawingArea widget. The problems arise because the widget can be used for interactive drawing, as well as serve as a manager. There may be events that you want to process in your application, but they could also be processed by the DrawingArea itself. The problem is really a semantic one, as there is no way to determine which action procedure should be invoked for each event if the DrawingArea has a manager-based action and the application defines its own action. For more information on translation tables and action routines, see Chapter 2, The Motif Programming Model, and Volume Four, X Toolkit Intrinsics Programming Manual. At times, an application may want to move the input focus as a result of something that the user has done. For example, you might have an action area where each PushButton invokes a callback function and then sets the input focus to the home item in the tab group, presumably to protect the user from inadvertently selecting the same item twice. the source code demonstrates how this operation can be accomplished. /* proc_traverse.c -- demonstrate how to process keyboard traversal * from a PushButton's callback routine. This simple demo contains * a RowColumn (a tab group) and three PushButtons. If any of the * PushButtons are activated (selected), the input focus traverses * to the "home" item. 
*/ #include <Xm/PushB.h> #include <Xm/RowColumn.h> main(argc, argv) int argc; char *argv[]; { Widget toplevel, rowcol, pb; XtAppContext app; void do_it(); XtSetLanguageProc (NULL, NULL, NULL); toplevel = XtVaAppInitialize (&app, "Demos", NULL, 0, &argc, argv, NULL, NULL); rowcol = XtVaCreateManagedWidget ("rowcolumn", xmRowColumnWidgetClass, toplevel, XmNorientation, XmHORIZONTAL, NULL); (void) XtVaCreateManagedWidget ("OK", xmPushButtonWidgetClass, rowcol, NULL); pb = XtVaCreateManagedWidget ("Cancel", xmPushButtonWidgetClass, rowcol, NULL); XtAddCallback (pb, XmNactivateCallback, do_it, NULL); pb = XtVaCreateManagedWidget ("Help", xmPushButtonWidgetClass, rowcol, NULL); XtAddCallback (pb, XmNactivateCallback, do_it, NULL); XtRealizeWidget (toplevel); XtAppMainLoop (app); } /* callback for pushbuttons */ void do_it(widget, client_data, call_data) Widget widget; XtPointer client_data; XtPointer call_data; { /* do stuff here for PushButton widget */ (void) XmProcessTraversal(widget, XmTRAVERSE_HOME); }The three frames in the figure show the movement of keyboard focus in the program. In the figure, the current input focus is on the Cancel button; when it is selected, the input focus is changed to the OK button. The callback routine associated with the PushButtons does whatever it needs and then calls XmProcessTraversal() to change the input item to the home item, which happens to be the OK button. This function can be used when an application needs to set the current item in the tab group to another widget or gadget or it can be used to traverse to a new tab group. The function takes the following form: Boolean XmProcessTraversal(widget, direction) Widget widget; int direction;The function returns False if the VendorShell associated with the widget has no tab groups, the input focus policy doesn't make sense, or if there are other extenuating circumstances that would be considered unusual. It is unlikely that you'll ever have this problem. The direction parameter specifies where the input focus should be moved. This parameter can take any of the following values: XmTRAVERSE_CURRENT XmTRAVERSE_NEXT XmTRAVERSE_PREV XmTRAVERSE_HOME XmTRAVERSE_UP XmTRAVERSE_DOWN XmTRAVERSE_LEFT XmTRAVERSE_RIGHT XmTRAVERSE_NEXT_TAB_GROUP XmTRAVERSE_PREV_TAB_GROUPAll but the last two values are for traversing to items within the current tab group; the last two are for traversing to the next or previous tab group relative to the current one. In the case of the source code the call to XmProcessTraversal() forces the home element to be the current item in the current tab group. For a more sophisticated example of manipulating the input focus, see Section #stextcbs in Chapter 14, Text Widgets. One problem with XmProcessTraversal() is that you can only move in a relative direction from the item that has the input focus. This functionality is sufficient in most cases, since the logic of your application should not rely on the user following any particular input sequence. If you need to traverse to a specific widget regardless of the current item, in most cases you can make the following call: XmProcessTraversal (desired_widget, XmTRAVERSE_CURRENT);This calling sequence specifies that the desired_widget takes the input focus, but only if the shell that contains the widget already has the keyboard focus. If the shell does not have the focus, nothing happens until the shell obtains the keyboard focus. When it does, the desired_widget should have the input focus. 
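As a small, hypothetical example of that calling sequence, a PushButton callback might finish its work and then hand the input focus to a particular Text widget in the same shell; the names here are invented for illustration:

void
ok_callback (widget, client_data, call_data)
Widget widget;
XtPointer client_data;
XtPointer call_data;
{
    /* the Text widget was passed as client data when the callback was added */
    Widget name_field = (Widget) client_data;

    /* ... act on the button press ... */

    XmProcessTraversal (name_field, XmTRAVERSE_CURRENT);
}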
Under certain conditions, this function may appear not to work. For example, if you create a dialog and want to set the input focus to one of its subwidgets, you may or may not get this to happen, depending on whether or not the dialog has been realized and mapped to the screen and whether or not keyboard focus has been accepted. Unfortunately, there is no general solution to this problem because the Motif toolkit isn't very robust about the programmer changing input focus out from under it. You cannot call generic X functions like XSetInputFocus() to force a widget to take input focus or you will undermine Motif's attempt at monitoring and controlling the input policy on its own. In Motif 1.2, there are some new functions that make it easier for an application to control keyboard traversal. The XmGetFocusWidget() routine returns the widget that has the input focus, while XmGetTabGroup() returns the widget that is the tab group for the specified widget. You can also call XmIsTraversable() to determine whether or not a particular widget is eligible to receive the input focus. With Motif 1.1, you often cannot determine which widget has the input focus or where a particular widget is in the widget tree relative to the current input item. Manager widgets are the backbone of an application. Without them, primitive widgets have no way of controlling their size, layout, and input focus. While the Motif toolkit provides many different manager widget classes, you may find that there are some things that you cannot do with them. Experienced toolkit programmers have found that it is possible to port Constraint class widgets from other toolkits to the Motif toolkit, by subclassing them from the generic Manager widget class. This topic is beyond the scope of this book. This chapter introduces the Motif manager widgets, but it does not discuss in detail some of the basic issues of geometry management. If the basic concepts presented in this chapter are still somewhat foreign to you, see Volume Four, X Toolkit Intrinsics Programming Manual, for a more in-depth discussion of composite widgets and geometry management.
http://www.oreilly.com/openbook/motif/vol6a/Vol6a_html/ch08.html
CC-MAIN-2017-43
en
refinedweb
I have this function in python, and this function computes the sum of the integers in the list. def runningSum(aList): theSum = 0 for i in aList: theSum = theSum + i return theSum >>runningSum([1,2,3,4,5]) = 15 Instead, I want a function that returns a list of the running totals. E.g.: [1,2,3,4,5] -> [1,3,6,10,15] E.g.: [2,2,2,2,2,2,2] -> [2,4,6,8,10,12,14] Appending the running sum to a list in a loop and returning the list: >>> def running_sum(iterable): ... s = 0 ... result = [] ... for value in iterable: ... s += value ... result.append(s) ... return result ... >>> running_sum([1,2,3,4,5]) [1, 3, 6, 10, 15] Or, using a yield statement: >>> def running_sum(iterable): ... s = 0 ... for value in iterable: ... s += value ... yield s ... >>> running_sum([1,2,3,4,5]) <generator object running_sum at 0x0000000002BDF798> >>> list(running_sum([1,2,3,4,5])) # Turn the generator into a list [1, 3, 6, 10, 15] If you're using Python 3.2+, you can use itertools.accumulate. >>> import itertools >>> list(itertools.accumulate([1,2,3,4,5])) [1, 3, 6, 10, 15] where the default operation of accumulate with an iterable is a running sum. Optionally you can also pass an operator as needed.
https://codedump.io/share/vI8Zssj59SY0/1/a-function-that-takes-a-list-of-integers-as-a-parameter-and-returns-a-list-of-running-totals
CC-MAIN-2017-43
en
refinedweb
Successful software projects do not end with the product's rollout. In most projects, new versions that are based on their predecessors are periodically released. Moreover, previous versions have to be supported, patched, and adjusted to operate with new operating systems, locales, and hardware. Web browsers, commercial databases, word processors, and multimedia tools are examples of such products. It is often the case that the same development team has to support several versions of the same software product simultaneously. Namespace aliases can be used in these cases to switch swiftly from one version to another. Namespace aliases can provide dynamic namespaces; that is, a namespace alias can point at a given time to a namespace of version X and, at another time, it can refer to a different namespace. For example: namespace ver_3_11 //16 bit { class Winsock{/*..*/}; class FileSystem{/*..*/}; }; namespace ver_95 //32 bit { class Winsock{/*..*/}; class FileSystem{/*..*/}; } int main()//implementing 16 bit release { namespace current = ver_3_11; // current is an alias of ver_3_11 using current::Winsock; using current::FileSystem; FileSystem fs; // ver_3_11::FileSystem //... } In this example, the alias 'current' is a symbol that can refer to either ver_3_11 or ver_95. To switch to a different version, you only have to assign a different namespace to 'current'.
http://www.devx.com/tips/Tip/13060
CC-MAIN-2017-43
en
refinedweb
The following sections describe how to use Web Services Reliable Messaging: Web Service reliable messaging is a framework whereby an application running in one application server can reliably invoke a Web Service running on another application server, assuming that both servers implement the WS-ReliableMessaging specification. Reliable is defined as the ability to guarantee message delivery between the two Web Services. WebLogic Web Services conform to the WS-ReliableMessaging specification (February 2005), which describes how two Web Services running on different application servers can communicate reliably in the presence of failures in software components, systems, or networks.. A reliable WebLogic Web Service provides the following delivery assurances: See the WS-ReliableMessaging specification for detailed documentation about the architecture of Web Service reliable messaging. Using Web Service Reliable Messaging: Main Steps describes how to create the reliable and client Web Services and how to configure the two WebLogic Server instances to which the Web Services are deployed. WebLogic Web Services use WS-Policy files to enable a destination endpoint to describe and advertise its Web Service reliable messaging capabilities and requirements. The WS-Policy specification provides a general purpose model and syntax to describe and communicate the policies of a Web service. These WS-Policy files are XML files that describe features such as the version of the supported WS-ReliableMessaging specification, the source endpoint’s retransmission interval, the destination endpoint’s acknowledgment interval, and so on. You specify the names of the WS-Policy files that are attached to your Web Service using the @Policy JWS annotation in your JWS file. Use the @Policies annotation to group together multiple @Policy annotations. For reliable messaging, you specify these annotations only at the class level. WebLogic Server includes two simple WS-Policy files that you can specify in your JWS file. You cannot change these pre-packaged files, so if their values do not suit your needs, you must create your own WS-Policy file. See Creating the Web Service Reliable Messaging WS-Policy File for details about creating your own WS-Policy file if you do not want to one included with WebLogic Server. See Web Service Reliable Messaging Policy Assertion Reference for reference information about the reliable messaging policy assertions. <> <?xml version="1.0"?> <wsp:Policy xmlns:wsrm="" xmlns:wsp="" xmlns: <wsrm:RMAssertion > <wsrm:InactivityTimeout <wsrm:BaseRetransmissionInterval <wsrm:ExponentialBackoff /> <wsrm:AcknowledgementInterval <beapolicy:Expires </wsrm:RMAssertion> </wsp:Policy> Configuring reliable messaging for a WebLogic Web Service requires standard JMS tasks such as creating JMS servers and Store and Forward (SAF) agents, as well as Web Service-specific tasks, such as adding additional JWS annotations to your JWS file. Optionally, you create WS-Policy files that describe the reliable messaging capabilities of the reliable Web Service if you do not use the pre-packaged ones. If you are using the WebLogic client APIs to invoke a reliable Web Service, the client application must run on WebLogic Server. Thus, configuration tasks must be performed on both the source WebLogic Server instance on which the Web Service that includes client code to invoke the reliable Web Service reliably is deployed, as well as the destination WebLogic Server instance on which the reliable Web Service itself is deployed. 
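As a quick illustration of the class-level annotations mentioned above, the sketch below shows how a service might group one of the pre-packaged reliability policies with a second, application-supplied policy file. It is a sketch only: the class, operation, and MySecurityPolicy.xml names are invented, the policy: prefix form for pre-packaged files is described later in this section, and the array-valued @Policies grouping syntax is assumed rather than quoted from this document.

import javax.jws.Oneway;
import javax.jws.WebMethod;
import javax.jws.WebService;
import weblogic.jws.Policies;
import weblogic.jws.Policy;

// Hypothetical service that advertises reliable messaging plus another policy.
@WebService(name="OrderPortType", serviceName="OrderService")
@Policies({
  @Policy(uri="policy:DefaultReliability.xml", attachToWsdl=true),
  @Policy(uri="MySecurityPolicy.xml", attachToWsdl=true)
})
public class OrderServiceImpl {

  @WebMethod
  @Oneway
  public void placeOrder(String order) {
    // Business logic goes here; the operation is one-way so that it can be
    // invoked reliably without the asynchronous request-response feature.
  }
}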
The following procedure describes how to create a reliable Web Service, as well as a client Web Service that in turn invokes an operation of the reliable Web Service reliably. The procedure shows how to create the JWS files that implement the two Web Services from scratch; if you want to update existing JWS files, use this procedure as a guide. The procedure also shows how to configure the source and destination WebLogic Server instances. It is assumed that you have created a WebLogic Server instance where you have set up an Ant-based development environment and that you have a working build.xml file to which you can add targets for running the jwsc Ant task and deploying the generated reliable Web Service. It is further assumed that you have a similar setup for another WebLogic Server instance that hosts the client Web Service that invokes the Web Service reliably. For more information, see: This is the WebLogic Server instance to which the reliable Web Service is deployed. See Configuring the Destination WebLogic Server Instance. This is the WebLogic Server instance to which the client Web Service that invokes the reliable Web Service is deployed. See Configuring the Source WebLogic Server Instance. See Creating the Web Service Reliable Messaging WS-Policy File for details about creating your own WS-Policy file. See Programming Guidelines for the Reliable JWS File. build.xml fileto include a call to the jwscAnt task which will compile the reliable JWS file into a Web Service. See Running the jwsc WebLogic Web Services Ant Task for general information about using the jwsc task. prompt> ant build-mainService deploy-mainService See Programming Guidelines for the JWS File That Invokes a Reliable Web Service. build.xmlfile that builds the client Web Service. See Updating the build.xml File for a Client of a Reliable Web Service. prompt> ant build-clientService deploy-clientService Configuring the WebLogic Server instance on which the reliable. Take note of the JNDI name you define for the JMS queue because you will later use it when you program the JWS file that implements your reliable Web Service. See Create JMS modules and Create queues. When you create the SAF agent: Bothto enable both sending and receiving agents. See Create Store and Forward agents. If you are using the Web Service reliable messaging feature in a cluster, you must: Configuring the WebLogic Server instance on which the client. Be sure when you create the SAF agent that you set the Agent Type field to Both to enable both sending and receiving agents. See Create Store and Forward agents. A WS-Policy file is an XML file that contains policy assertions that comply with the WS-Policy specification. In this case, the WS-Policy file contains Web Service reliable messaging policy assertions. You can use one of the two default reliable messaging WS-Policy files included in WebLogic Server; these files are adequate for most use cases. However, because these files cannot be changed, if they do not suit your needs, you must create your own. See Use of WS-Policy Files for Web Service Reliable Messaging Configuration for a description of the included WS-Policy files. The remainder of this section describes how to create your own WS-Policy file. 
The root element of the WS-Policy file is <Policy> and it should include the following namespace declarations for using Web Service reliable messaging policy assertions: <wsp:Policy xmlns:wsrm="" xmlns:wsp="" xmlns: You wrap all Web Service reliable messaging policy assertions inside of a <wsrm:RMAssertion> element. The assertions that use the wsrm: namespace are standard ones defined by the WS-ReliableMessaging specification. The assertions that use the beapolicy: namespace are WebLogic-specific. See Web Service Reliable Messaging Policy Assertion Reference for details. All Web Service reliable messaging assertions are optional, so only set those whose default values are not adequate. The order in which the assertions appear is important. You can specify the following assertions; the order they appear in the following list is the order in which they should appear in your WS-Policy file: <wsrm:InactivityTimeout>—Number of milliseconds, specified with the Millisecondsattribute, which defines an inactivity interval. After this amount of time, if the destination endpoint has not received a message from the source endpoint, the destination endpoint may consider the sequence to have terminated due to inactivity. The same is true for the source endpoint. By default, sequences never timeout. <wsrm:BaseRetransmissionInterval>—Interval, in milliseconds, that the source endpoint waits after transmitting a message and before it retransmits the message if it receives no acknowledgment for that message. Default value is set by the SAF agent on the source endpoint’s WebLogic Server instance. <wsrm:ExponentialBackoff>—Specifies that the retransmission interval will be adjusted using the exponential backoff algorithm. This element has no attributes. <wsrm:AcknowledgmentInterval>—Maximum interval, in milliseconds, in which the destination endpoint must transmit a stand-alone acknowledgement. The default value is set by the SAF agent on the destination endpoint’s WebLogic Server instance. <beapolicy:Expires>—Amount of time after which the reliable Web Service expires and does not accept any new sequence messages. The default value is to never expire. This element has a single attribute, Expires, whose data type is an XML Schema duration type. For example, if you want to set the expiration time to one day, use the following: <beapolicy:Expires. <beapolicy:QOS>—Delivery assurance level, as described in Overview of Web Service Reliable Messaging. The element has one attribute, QOS, which you set to one of the following values: AtMostOnce, AtLeastOnce, or ExactlyOnce. You can also include the InOrderstring to specify that the messages be in order. The default value is ExactlyOnce InOrder. This element is typically not set. The following example shows a simple Web Service reliable messaging WS-Policy file: <?xml version="1.0"?> <wsp:Policy" /> </wsrm:RMAssertion> </wsp:Policy> This section describes how to create the JWS file that implements the reliable Web Service. The following JWS annotations are used in the JWS file that implements a reliable Web Service: @weblogic.jws.Policy—Required. See Using the @Policy Annotation. @javax.jws.Oneway—Required only if you are using Web Service reliable messaging on its own, without also using the asynchronous request-response feature. See Using the @Oneway Annotation and Using the Asynchronous Features Together. @weblogic.jws.BufferQueue—Optional. See Using the @BufferQueue Annotation. @weblogic.jws.ReliabilityBuffer—Optional. 
See Using the @ReliabilityBuffer Annotation The following example shows a simple JWS file that implements a reliable Web Service; see the explanation after the example for coding guidelines that correspond to the Java code in bold.); } } In the example, the ReliableHelloWorldPolicy.xml file is attached to the Web Service at the class level, which means that the policy file is applied to all public operations of the Web Service. The policy file is applied only to the request Web Service message (as required by the reliable messaging feature) and it is attached to the WSDL file. The JMS queue that WebLogic Server uses internally to enable the Web Service reliable messaging has a JNDI name of webservices.reliable.queue, as specified by the @BufferQueue annotation. The helloWorld() method has been marked with both the @WebMethod and @Oneway JWS annotations, which means it is a public operation called helloWorld. Because of the @Policy annotation, the operation can be invoked reliably. The Web Services runtime attempts to deliver reliable messages to the service a maximum of 10 times, at 10-second intervals, as described by the @ReliabilityBuffer annotation. The message may require re-delivery if, for example, the transaction is rolled back or otherwise does not commit. Use the @Policy annotation in your JWS file to specify that the Web Service has a WS-Policy file attached to it that contains reliable messaging assertions. See Use of WS-Policy Files for Web Service Reliable Messaging Configuration for descriptions of the two WS-Policy files ( DefaultReliability.xml and LongRunningReliability.xml) included in WebLogic Server that you can use instead of writing your own. You must follow these requirements when using the @Policy annotation for Web Service reliable messaging: @Policyannotation only at the class-level. directionattribute of the @Policyannotation only to its default value: Policy.Direction.both. Use the uri attribute to specify the build-time location of the policy file, as follows: @Policy(uri="ReliableHelloWorldPolicy.xml", direction=Policy.Direction.both, attachToWsdl=true) The example shows that the ReliableHelloWorldPolicy.xml file is located in the same directory as the JWS file. policy:prefix along with the name and path of the policy file. This syntax tells the jwscAnt task at build-time not to look for an actual file on the file system, but rather, that the Web Service will retrieve the WS-Policy file from WebLogic Server at the time the service is deployed. Use this syntax when specifying one of the pre-packaged WS-Policy files or when specifying a WS-Policy file that is packaged in a shared Java EE library. http:prefix along with the URL, as shown in the following example: @Policy(uri="" direction=Policy.Direction.both, attachToWsdl=true) You can also set the attachToWsd attribute of the @Policy annotation to specify whether the policy file should be attached to the WSDL file that describes the public contract of the Web Service. Typically you want to publicly publish the policy so that client applications know the reliable messaging capabilities of the Web Service. For this reason, the default value of this attribute is true. If you plan on invoking the reliable. Conversely, if the method is not annotated with the @Oneway annotation, then you must invoke it using the asynchronous request-response feature. If you are unsure how the operation is going to be invoked, consider creating two flavors of the operation: synchronous and asynchronous. 
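Since the example code referred to above survives only in fragment form, the following is a hedged reconstruction of what such a class plausibly looks like, assembled from the details given in this section and the ones that follow; any names, annotation placements, or method bodies not stated in the text (for example, the contextPath values and the exact @BufferQueue attribute) are assumptions rather than quotations.

package examples.webservices.reliable;

import javax.jws.Oneway;
import javax.jws.WebMethod;
import javax.jws.WebService;
import weblogic.jws.BufferQueue;
import weblogic.jws.Policy;
import weblogic.jws.ReliabilityBuffer;
import weblogic.jws.WLHttpTransport;

// Reconstruction sketch: a reliable service with the ReliableHelloWorldPolicy.xml
// WS-Policy file attached at the class level and a one-way helloWorld operation.
@WebService(name="ReliableHelloWorldPortType",
            serviceName="ReliableHelloWorldService")
@WLHttpTransport(contextPath="ReliableHelloWorld",
                 serviceUri="ReliableHelloWorld",
                 portName="ReliableHelloWorldServicePort")
@Policy(uri="ReliableHelloWorldPolicy.xml", attachToWsdl=true)
@BufferQueue(name="webservices.reliable.queue")  // JNDI name from the prose; attribute name assumed
public class ReliableHelloWorldImpl {

  // Delivery is retried up to 10 times at 10-second intervals, per the prose.
  @WebMethod
  @Oneway
  @ReliabilityBuffer(retryCount=10, retryDelay="10 seconds")
  public void helloWorld(String input) {
    // The original method body is not recoverable; a trivial action stands in.
    System.out.println("Hello " + input);
  }
}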
See Invoking a Web Service Using Asynchronous Request-Response, and Using the Asynchronous Features Together.. Use the @BufferQueue annotation to specify the JNDI name of the JMS queue which WebLogic Server uses to store reliable messages internally. The JNDI name is the one you configured when creating a JMS queue in step 4 in LongRunningReliability.xml WS-Policy File. The @BufferQueue annotation is optional; if you do not specify it in your JWS file then WebLogic Server uses a queue with a JNDI name of weblogic.wsee.DefaultQueue. You must, however, still explicitly create a JMS queue with this JNDI name using the Administration Console. Use this annotation to specify the number of times WebLogic Server should attempt to deliver the message from the JMS queue to the Web Service implementation (default 3) and the amount of time that the server should wait in between retries (default 5 seconds). Use the retryCount attribute to specify the number of retries and the retryDelay attribute to specify the wait time. The format of the retryDelay attribute is a number and then one of the following strings: For example, to specify a retry count of 20 and a retry delay of two days, use the following syntax: @ReliabilityBuffer(retryCount=20, retryDelay="2 days") If you are using the WebLogic client APIs, you must invoke a reliable Web Service from within a Web Service; you cannot invoke a reliable Web Service from a stand-alone client application. The following example shows a simple JWS file for a Web Service that invokes a reliable operation from the service described in Programming Guidelines for the Reliable JWS File; see the explanation after the example for coding guidelines that correspond to the Java code in bold. package examples.webservices.reliable; import java.rmi.RemoteException; import javax.jws.WebMethod; import javax.jws.WebService; import weblogic.jws.WLHttpTransport; import weblogic.jws.ServiceClient; import weblogic.jws.ReliabilityErrorHandler; import examples.webservices.reliable.ReliableHelloWorldPortType; import weblogic.wsee.reliability.ReliabilityErrorContext; import weblogic.wsee.reliability.ReliableDeliveryException; @WebService(name="ReliableClientPortType", serviceName="ReliableClientService") @WLHttpTransport(contextPath="ReliableClient", serviceUri="ReliableClient", portName="ReliableClientServicePort") public class ReliableClientImpl { @ServiceClient( serviceName="ReliableHelloWorldService", portName="ReliableHelloWorldServicePort") private ReliableHelloWorldPortType port; @WebMethod public void callHelloWorld(String input, String serviceUrl) throws RemoteException { port.helloWorld(input); System.out.println(" Invoked the ReliableHelloWorld.helloWorld operation reliably." ); } ); } Follow these guidelines when programming the JWS file that invokes a reliable Web Service; code snippets of the guidelines are shown in bold in the preceding example: @ServiceClientand @ReliabitliyErrorHandlerJWS annotations: import weblogic.jws.ServiceClient; import weblogic.jws.ReliabilityErrorHandler; <clientgen>child element of the jwscAnt task, of the port type of the reliable Web Service you want to invoke. The stub package is specified by the packageNameattribute of <clientgen>, and the name of the stub is determined by the WSDL of the invoked Web Service. 
import examples.webservices.reliable.ReliableHelloWorldPortType; import weblogic.wsee.reliability.ReliabilityErrorContext; import weblogic.wsee.reliability.ReliableDeliveryException; @ServiceClientJWS annotation to specify the name and port of the reliable Web Service you want to invoke. You specify this annotation at the field-level on a private variable, whose data type is the JAX-RPC port type of the Web Service you are invoking. @ServiceClient( serviceName="ReliableHelloWorldService", portName="ReliableHelloWorldServicePort") private ReliableHelloWorldPortType port; @ServiceClientannotation, invoke the reliable operation: port.helloWorld(input);port.helloWorld(input); Because the operation has been marked one-way, it does not return a value. @weblogic.jws.ReliabilityErrorHandlerannotation: ); } This method takes ReliabilityErrorContext as its single parameter and returns void. See weblogic.jws.ReliabilityErrorHandler for details about programming this error-handling method. When programming the client Web Service, be sure you do not: @ReliabilityErrorHandler) or use any reliable messaging assertions in the associated WS-Policy files. wsdlLocationattribute of the @ServiceClientannotation. This is because the runtime retrieval of the specified WSDL might not succeed, thus it is better to let WebLogic Server use a local WSDL file instead. WebLogic Server provides a utility class for use with the Web Service Reliable Messaging feature. Use this class to perform common tasks such as set configuration options, get the sequence id, and terminate a reliable sequence. Some of these tasks are performed in the reliable Web Service, some are performed in the Web Service that invokes the reliable Web Service. See weblogic.wsee.reliability.WsrmUtils for details. To update a build.xml file to generate the JWS file that invokes the operation of a reliable Web Service, add taskdefs and a build-reliable-client targets that look something like the following; see the description after the example for details: <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" /> <target name="build-reliable-client"> <jwsc enableAsyncService="true" srcdir="src" destdir="${client-ear-dir}" > <jws file="examples/webservices/reliable/ReliableClientImpl.java"> <clientgen wsdl="{wls.destination.host}:${wls.destination.port}/ReliableHelloWorld/ReliableHelloWorld?WSDL" packageName="examples.webservices.reliable"/> < ReliableHelloWorld Web Service. The jwsc Ant task automatically packages them in the generated WAR file so that the client Web Service can immediately access the stubs. You do this because the ReliableClientImpl JWS file imports and uses one of the generated classes. WebLogic Server supports production redeployment, which means that you can deploy a new version of an updated reliable reliable Web Service, its work is considered complete when the existing reliable messaging sequence is explicitly ended by the client or because of a time-out. For additional information about production redployment and Web Service clients, see Client Considerations When Redeploying a Web Service. Client applications that invoke reliable Web Services..
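The body of the reliability error handler shown in the guidelines above was likewise reduced to a fragment. Based only on the signature described there (a method annotated with @ReliabilityErrorHandler that takes a single ReliabilityErrorContext parameter and returns void), such a handler inside the client Web Service class might look roughly like the sketch below; the method name and body are assumptions, and the imports belong at the top of the client JWS file.

import weblogic.jws.ReliabilityErrorHandler;
import weblogic.wsee.reliability.ReliabilityErrorContext;

// Placed inside the client Web Service class (e.g., ReliableClientImpl above):
@ReliabilityErrorHandler
public void onReliableDeliveryFailure(ReliabilityErrorContext context) {
  // Invoked when the container gives up on delivering a reliable message;
  // real code might inspect the context or trigger compensating logic here.
  System.out.println("Reliable delivery of a queued message was abandoned.");
}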
http://docs.oracle.com/cd/E13222_01/wls/docs100/webserv_adv/rm.html
CC-MAIN-2015-06
en
refinedweb
10 May 2012 07:55 [Source: ICIS news] TOKYO (ICIS)--Sumitomo Chemical registered extraordinary losses of Y26.0bn during the full year to 31 March 2012 due to a write-down of investment in an unnamed affiliated company, it said in a statement. Operating profit during the 12 months fell 31% year on year to Y60.7bn from Y88.0bn, while net sales were down 1.7% to Y1,947.9bn from Y1,982.4bn. Full-year operating profit in the petrochemicals and plastics segment declined 45% to Y11.1bn from the previous year, while net sales rose 3.1% to Y679.6bn. This was a result of decrease in shipments of synthetic resins and petrochemical products year on year due to the scheduled maintenance shutdowns of Sumitomo Chemical's plants in Besides, the impact of the 11 March 2011 earthquake and subsequent lower demand also contributed to the decline in earnings. On the other hand sales rose on the back of higher market prices overseas and increased selling prices in (
http://www.icis.com/Articles/2012/05/10/9558111/japans-sumitomo-chemical-full-year-net-profit-falls.html
CC-MAIN-2015-06
en
refinedweb
17 October 2011 08:31 [Source: ICIS news] SINGAPORE (ICIS)--Shanxi Sanwei owns and operates a 100,000 tonne/year BDO plant at Hongdong in After expansion works are completed, the company's BDO capacity will be increased to 250,000 tonnes/year, the source added. The company also plans to bring onstream a 30,000 tonne/year polytetramethylene ether glycol (PTMEG) plant at the same time at Hongdong, which will raise its PTMEG capacity to 85,000 tonnes/year, the source said. "Construction work on the BDO plant and the new PTMEG plant has started," said the source. Shanxi Sanwei announced its expansion plans in a statement to the Shenzhen Stock Exchange in mid-September. The expansion work on the BDO plant and the construction of the PTMEG plant are expected to cost around yuan (CNY) 1.5bn ($235m). ($1 = CNY6.38) Please visit the complete ICIS plants and projects database
http://www.icis.com/Articles/2011/10/17/9500472/chinas-shanxi-sanwei-aims-to-complete-bdo-expansion-by-2013.html
CC-MAIN-2015-06
en
refinedweb
Inheritance is what happens when a subclass receives variables or methods from a superclass. Java does not support multiple inheritance, except in the case of interfaces. The Cat class in the following example is the subclass and the Animal class is the superclass. Cat inherits the eat() method from Animal, so it would receive that method even if we did not write it inside the class; in this example, however, Cat overrides eat() with its own implementation. public class Animal { public void eat() { System.out.println("Eat for Animal"); } } public class Cat extends Animal { public void eat() { System.out.println("Eat for Cat"); } }
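A small driver class, not part of the original tip, makes the effect visible at the call site; the class name Main is arbitrary.

public class Main {
    public static void main(String[] args) {
        Animal animal = new Animal();
        animal.eat();            // prints "Eat for Animal"

        Animal pet = new Cat();  // a Cat "is an" Animal, so this assignment is allowed
        pet.eat();               // dynamic dispatch prints "Eat for Cat"
    }
}

If Cat did not declare its own eat() method, the second call would simply print "Eat for Animal", because the inherited implementation would be used.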
http://www.java-tips.org/java-se-tips/java.lang/what-is-inheritance.html
CC-MAIN-2015-06
en
refinedweb
Hi! I am having a small (and possibly really simple) problem with my code (I am using java)... Here is the assignment given by my teacher: Use a Stack to write a method allSeqs(int max, int len) that will generate all sequences, consisting of the numbers between 1 and max, of length len. For instance, calling allSeqs(3,2) gives: [1, 1][1, 2][1, 3][2, 1][2, 2][2, 3][3, 1][3, 2][3, 3] while calling allSeqs(2, 4) gives [1, 1, 1, 1][1, 1, 1, 2][1, 1, 2, 1][1, 1, 2, 2][1, 2, 1, 1][1, 2, 1, 2][1, 2, 2, 1][1, 2, 2, 2][2, 1, 1, 1][2, 1, 1, 2][2, 1, 2, 1][2, 1, 2, 2][2, 2, 1, 1][2, 2, 1, 2][2, 2, 2, 1][2, 2, 2, 2] You can use a backtracking method, very similar to what we did for the N Queens problem: Push the proper number of 1's onto a Stack, and then repeatedly print out the stack contents (that's one permutation), and pop the top element, and if it's not equal to max, add one to it, and push it back. And here is my code: import java.util.*; public class NewStack { private Stack<Integer> s; public String toString() { String result = "[ "; while ( !s.isEmpty()) result = result + s.pop() + ","; result = result + "]"; return result; } public void allSeqs(int max, int len) { System.out.println(s); while (s.size() < len){ s.push(1); } System.out.println(s); for (int i=0; i<len; i++){ try{ int a = s.pop(); if (a < max) { a = a + 1; s.push(a); } else { s.push(1); } System.out.print(s); }//end try block catch (NullPointerException e) { System.out.println("The last slot has been reached, the next one is empty so, we are done!"); } }//end for loop } //end method public static void main(String[] args) { NewStack nt = new NewStack(); nt.allSeqs(2,3); } } It compiles, but I can't get it to work properly, it gives me a NullPointerException whenever I try to run it... Do you know what's wrong?? Thank you!
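Judging only from the code in the post, the most likely cause of the NullPointerException is that the Stack field s is declared but never assigned, so the very first s.size() or s.push(1) call fails; the pop() loop in toString() is also destructive, emptying the stack as it prints. A minimal sketch of those two fixes (the rest of allSeqs is left as in the post):

import java.util.Stack;

public class NewStack {

    // Initialize the field; in the posted code it was null.
    private Stack<Integer> s = new Stack<Integer>();

    // Print without popping: Stack inherits a usable toString() from Vector,
    // so delegating to it leaves the contents intact for later iterations.
    public String toString() {
        return s.toString();
    }

    // allSeqs(int max, int len) and main(...) would remain as in the post.
}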
http://www.javaprogrammingforums.com/whats-wrong-my-code/18534-whats-wrong-my-code-generate-sequences-numbers-using-stack.html
CC-MAIN-2015-06
en
refinedweb
Next: Introduction, Up: (dir) [Contents][Index] This manual describes gzochid, the gzochi reference manual documents gzochid, the server component of the gzochi massively multiplayer online game development framework. Next: Conceptual overview, Previous: Top, Up: Top [Contents][Index] gzochid is the server component for the gzochi massively multiplayer online game development framework. It is responsible for hosting the server-side portions of game applications developed against the framework, for managing the data persisted by these applications and provisioning computing resources for their needs at run-time, and for exposing a set of container services that are especially useful for online game development. gzochid is a container for game server applications the same way that a web container is a container for web applications: It manages the lifecycles of the applications it hosts, routes client requests to application endpoints, and provides services to ease the interaction of the applications with external resources. This manual expects a familiarity with the Scheme programming language, which is the language used to write applications that can be hosted by gzochid. gzochid uses the GNU GUile extension language platform to provide its Scheme run-time environment; the Guile manual is an excellent resource for users who are new to Scheme or to functional programming in general. New users of gzochi may wish to pay special attention to the following section of this manual, “Conceptual overview,” which explores some of the rationale for the design of the gzochi framework and the services it provides. Next: Installation, Previous: Introduction, Up: Top [Contents][Index] Developing games that are played over a computer network poses challenges distinct from those that arise from games whose extent is limited to a single process or a single machine. The characteristics of the network link that connects the machines involved the game give rise, by necessity, to some primary constraints on the design of the game: To create a dynamic model of the game state that is shared between the client and the server, those systems must exchange messages, and this process is naturally limited by the rate of message delivery. Will messages be delivered reliably? Is message order guaranteed to be preserved? Answers to these questions will also have an impact on game server architecture. When multiple players are allowed to interact with a networked game system simultaneously, an additional set of difficulties emerges. How can their interactions with components of the game system be properly synchronized such that each player receives a fair and consistent view of the world? How can this synchronization be scaled to maintain a responsive user interface for up to thousands of simultaneously connected clients? Some additional background is presented in the following sections. To address these issues, gzochid provides a unified set of software tools, which are described in a subsequent chapter (see Application services). Next: Transactions and atomicity, Up: Conceptual overview [Contents][Index] If the state of a game is to “outlive” the server process that manages it, the data that makes up the state must be written to non-volatile storage, such as a hard disk. There are many reasons for a server process to terminate. 
Some shutdowns are planned, such as those performed for system maintenance or upgrades; others are spontaneous—software bugs are inevitable, and whether they cause the server process to terminate outright or leave the game system in an unplayable state, the net effect is the same. When a player’s commitment to an online game can span months or even years, losing their play history can be quite disheartening. Several questions must be answered in order for persistence to be implemented: How often must data be persisted? If players are notified of changes to game state before these changes are made persistent, then the potential exists for a recovered game state to “erase” progress made by players between the last persistence event and the shutdown of the game. On the other hand, non-volatile storage media are often quite a bit slower than random-access memory, and thus forcing persistence too often can create a bottleneck on game performance. How much of the game state must be persisted at each persistence point to ensure that the entire state can be reconstituted at some later date? For large, complex games, the game state may be so large that persisting the entire state takes so long that even infrequent persistence events must cause the game to “pause.” If only portions of the game state are persisted, then care must be taken to ensure that enough state is persisted to ensure logical consistency between entities within the game. gzochid addresses these issues with an automatic persistence mechanism that treats game state as an object graph in which updates are tracked and persisted transparently, with full transactional semantics. See Managed records, for more information. Next: Network communication, Previous: Data persistence, Up: Conceptual overview [Contents][Index] It is often important that certain sequences of events occur as a unit, such that either every event in the sequence takes place (and players receive notification) or, in the event of an error, none of the events take place—no matter where in the sequence the error arises. One example of this is transferring an object from the inventory of one player to the inventory of another. Depending on how the transfer is implemented, there may be an interval after the object is taken from the first player but before it is given to the second player. If the execution of game code is interrupted at this point, the object may be destroyed. A similar problem arises if a reference to the object is given to the second player before being removed from the first. Even when errors are detected, recovery can be difficult: It may not be possible to determine what the state of individual connected clients is once they have lost consistency with the server’s model of the game world, and enabling clients to resolve conflicting updates to game state presents significant architectural challenges. In general, once the side effects—e.g., client notifications or persistent changes to data—have been committed, they cannot easily be undone. gzochid executes all game application code in a transactional context, meaning that the side effects produced by a portion of code are delayed until its successful completion, at which point all of them are guaranteed to take effect together; or the side effects are “rolled back” and the block of code is either retried, with no indication to the client that an error occurred, or abandoned.
And other portions of code that are concurrently accessing the same bits of data are guaranteed a consistent view of that data for the duration of their execution. Next: Concurrent execution, Previous: Transactions and atomicity, Up: Conceptual overview [Contents][Index] The reliable delivery of messages between clients and the server is fundamental to the operation of an online game, but the network communication services provided by most programming languages and operating systems expose game applications to nuanced, unpredictable behaviors. Some communication channels do not guarantee packet delivery; nor is in-order delivery always ensured. Even if a system promises ordered and reliable delivery of packets, it is difficult to predict their time of delivery. When a packet of data arrives on the client or server, the bytes it contains must be interpreted, a process that is governed by a shared protocol. The set of possible communications between hosts in an online game is naturally dependent upon the specific mechanics of that game, but some types of messages are likely common to all games: Clients need to establish their identities with the server; the server needs to respond to authentication attempts. The client and server may need messages representing attempts to disconnect or log out of the game gracefully. Some types of messages, which trigger transitions in the state of a client’s connection to the server, are only valid when the connection is in a particular state. What should be the response from the server if it receives a login request from an already-authenticated client in the course of regular gameplay? If there are multiple points in the flow of game execution at which messages can be received, does semantic validation of message content have to be applied at all of them? Because the state of a network connection is dependent on the state of multiple independent hosts and their operators, it is inherently asynchronous with game state. In the likely case that a protocol message is larger than a single packet, an entire message may not become available all at once, and thus participants in the communication will need to keep a buffer of partial messages while they wait for the remaining bytes. A connection may be interrupted before all of the message’s constituent packets have been delivered, and if the processing of network data is mixed in with the processing of game application logic, recovery from a network failure may be complicated. There are aspects of multiplayer games for which it is not very important that every player have a view of the world that is consistent with that of every other player. For example, depending on the context, it may not be critical that every player have the same information about the scenery in a particular region—this is information that is related to the game world, but has no bearing on the outcome of the game. There are other elements of gameplay for which it is necessary that all involved players are kept in a consistent state. If some but not all players receive notification of a change that does affect the outcome of the game, not only are they at a strategic disadvantage, but losing synchronization with the server’s model of the world may make it difficult for them to interpret subsequent notifications. gzochid provides a network management layer and a low-level client-server protocol that work in concert to hide the tricky details of message delivery.
Reasonably large messages may be transactionally queued for delivery to the server or its clients, and messages can be addressed to individual clients or to arbitrarily large groups of clients with a single procedure invocation. See Client session management, for information on gzochid’s representation of client connections; see Channel management, for a description of efficient message broadcast services. Previous: Network communication, Up: Conceptual overview [Contents][Index] While some portions of a game can be driven entirely by client-generated events, there are often flows of execution that are best managed as “background processes” that proceed independent of any action by a client. Some examples of this type of processing include: Time and weather systems, in which regular changes to the game world take place at scheduled intervals; or the actions of non-player characters that act as autonomous agents within the game world, with a flow of logic that determines their behavior based on various dynamic game states. As you can infer from these examples, different modes of scheduling and execution are possible for tasks within the same application. Some tasks need to begin executing at predetermined points in the future. Others need to run as soon as CPU time can be allocated to them. Certain tasks, such as those implementing an in-game timer tick, for example, may need to execute on a repeating basis. This type of asynchronous parallelism could be implemented using an explicitly threaded execution model, but this approach has some significant limitations. For one, allocating a new thread for every asynchronous bit of processing to be done consumes valuable process memory and CPU cycles, and effectively constrains the amount of work that can be done concurrently. A thread pool can mitigate some of these issues, but few low-level thread libraries include time-based scheduling as part of their thread management APIs—you can specify that some bit of code should run in a separate thread, but you usually can’t control when it runs or how often it repeats without building some abstractions around the system’s thread primitives. gzochid provides scheduling and execution services that allow tasks to be queued for immediate or deferred processing, either one time only or repeating on a configurable interval. Furthermore, gzochid provides durability guarantees about scheduled tasks to the effect that tasks that fail with a recoverable error can be retried, and that the universe of scheduled tasks can survive a failure and restart of the application server. See Task scheduling, for more information. Next: Running gzochid, Previous: Conceptual overview, Up: Top [Contents][Index] See the INSTALL file included in the gzochid distribution for detailed instructions on how to build gzochid. In most cases, if you have the requisite toolchain and dependencies in place, you should simply be able to run ./configure make make install This will install the gzochid executable gzochid (as well as the database tools gzochi-dump, gzochi-load, and gzochi-migrate) to a standard location, depending on the installation prefix—on most Unix-like systems, this will be /usr/local, with executable files being copied to /usr/local/bin. A server configuration file with the default settings will be installed to the /etc directory under the installation prefix—by default, /usr/local/etc.
This configuration file will be processed prior to installation to set references to the gzochid deployment and data directories—where the server looks for deployed games and where it stores game state data, respectively—to locations relative to the installation prefix. See The server configuration file, for more information. Next: Application deployment, Previous: Installation, Up: Top [Contents][Index] The format for running the gzochid program is: gzochid option … With no options, gzochid scans its deployment directory looking for game applications, each of which is configured and initialized or restarted depending on its state, and then begins listening for client connections. By default, the monitoring web application is also started. In the absence of command line arguments, the port numbers on which these servers listen for connections, as well as other aspects of their behavior, are modifiable via a configuration file (see below). gzochid supports the following options: Specify an alternate location for the server configuration file. Print an informative help message on standard output and exit successfully. Print the version number and licensing information of gzochid on standard output and then exit successfully. Up: Running gzochid [Contents][Index] The gzochid server configuration file is usually named gzochid.conf and is installed (and searched for) by default in the /etc/ directory of the installation prefix. It is an .ini-style configuration file, meaning it consists of several named sections of key-value pairs, like so: [section] key1 = value1 key2 = value2 The configuration options currently understood by gzochid are as follows, organized by section. admin These settings control various aspects of gzochid’s administrative and monitoring functionality. Set to true to enable the administrative context, which is responsible for collecting and reporting on a variety of statistics about the state of a running gzochid server and its hosted games. Disabling this feature (by setting this key to false) will net a small improvement in process memory consumption and CPU performance. Set to true to enable the debugging server. Has no effect if ‘context.enabled’ is not true. The local port on which the debugging server should listen for incoming telnet connections. Set to true to enable the monitoring web server. Has no effect if ‘context.enabled’ is not true. The local port on which the monitoring web server should listen for incoming HTTP connections. game These settings control the primary game server module of gzochid. The local port on which to listen for incoming TCP connections from gzochi clients. The filesystem directory in which to store game state data for hosted game applications. The user associated with the gzochid process must have read and write access to this directory, as the server will attempt to create sub-directories rooted at this location for each game, and will read and write data files in those sub-directories. The filesystem directory to search for deployed game applications. The user associated with the gzochid process must have read and execute access to this directory—but it does not need to be able to write files here. The maximum duration, in milliseconds, for time-limited transactions executed on behalf of a game application. Any task or callback whose execution time exceeds this value will fail and be retried (if it has not used up its maximum retries). 
This setting also bounds the amount of time the container’s data services will wait to obtain a lock for exclusive access to any single datum such as a managed reference; if the wait time expires, the transaction attempting to access the data will be marked for rollback. By design, this setting has an impact on the task execution throughput of game applications. The longer a task takes to execute, and the more data it accesses as part of its execution, the more constraints it places on the transactional work that can be done concurrent with its execution. As a corollary, the shorter a task’s duration and the less data it accesses, the less risk there is that it will conflict with other tasks. Before increasing this value beyond its default, consider refactoring a failing task to perform fewer operations and access less data. Certain lifecycle events, such as application initialization, are executed in transactions without time limits; this setting has no effect on the processing of those events. log These settings control the system-wide logging behavior of gzochid. The lowest severity of log message that will be recorded in the logs (both to console and to the server’s log file). Valid values of this settings are, in order of decreasing severity: ERR, WARNING, NOTICE, INFO, and DEBUG. Next: Application services, Previous: Running gzochid, Up: Top [Contents][Index] Game applications hosted by gzochid are discovered by the server when it scans its deployment directory for sub-directories containing game application descriptor files, which are discussed in the following section. If gzochid’s game deployment directory is “/var/gzochid/deploy” (the default), then the server will scan and discover the game descriptor file “/var/gzochid/deploy/my-game/game.xml” on startup. The other required components of a game application are the Guile Scheme modules that contain the game logic and callbacks. These are typically included in sub-directory trees rooted in the game application directory, but alternate locations to search for modules can be specified in the game application descriptor file. Up: Application deployment [Contents][Index] “game.xml,” the game application descriptor, is an XML document that is deployed as part of a game application and provides game configuration information and metadata to gzochid. It is a rough analog to the “web.xml” descriptor file that accompanies Java web applications. Like “web.xml,” “game.xml” is used by the server to find the locations of the application’s code libraries and the entry points to the application. The game descriptor file must appear in the deployed application’s root directory. A DTD that can be used to validate the structure of your application descriptors is included in the gzochid source distribution. Some explanation of the semantics of the descriptor file follows. The application descriptor’s document element is game, and it must include a name attribute that specifies a unique “short name” for the application; among other things, this name will be used to identifier your application in log messages. The description element allows you to specify as its content some longer, more descriptive text to identify your game. (This text will be displayed in the gzochid monitoring console.) 
The load-paths element is a wrapper element for zero or more load-path element, each of which specifies as its content an absolute or relative path to be (temporarily) added to the default Guile %load-path variable, which contains the locations that are searched during module resolution. Note that the game application root directory is always added to the load path, so you can leave the load-paths element empty if your application library tree is rooted at the application root. Next there are two “lifecycle callbacks” that must be specified, procedures for handling the “initialization” and “logged in” events. Callbacks are specified via a callback element, which has no content but requires the procedure and module arguments which specify a publicly-visible Scheme procedure to be used to handle an event. For example, the following XML snippet: <callback procedure="handler" module="my-code app handlers" /> ...identifies the handler procedure exported from the Guile module (my-code app handlers). Modules are resolved via the configured load paths. Initialization occurs once per application—even across server restarts—and the initialization callback is passed an R6RS hashtable containing application properties. (In the current release, this table will always be empty.) The logged in callback will be called when a client connects and successfully authenticates, and it will be passed a client session record that can be used to communicate with the connected client. Next: Communication protocol, Previous: Application deployment, Up: Top [Contents][Index] The follow sections describe the services that gzochid exposes to the applications it hosts. As an application author, you are free to employ or ignore them in whatever proportion you choose, but the expectation is that most non-trivial games will depend heavily on all of the services the container provides. Next: Managed records, Up: Application services [Contents][Index] All game application code that executes within the gzochid container executes in the context of a transaction, meaning that all of its side effects, such as sending messages or modifying data, have the four ACID properties: Atomicity, consistency, isolation, and durability. The services described in the following sections are the primary means by which side effects are achieved in a gzochi application, and all of them participate in the container’s two-phase commit protocol, which verifies that each participant is prepared to commit before issuing the request to do so. When a transaction commits, the side effects requested from each service by application code are made permanent: Messages are sent to clients, changes to data are saved to the data store. When a transaction fails to commit, each service participating in the commit rolls back any transaction-local state that was created during the lifetime of the transaction. A transaction may fail to commit for any of several reasons: It may have exceeded the maximum running specified by the tx.timeout setting; it may have attempted to access data in a way that brought it into conflict with another currently-executing transaction; an external resource (such as a network connection) required by the transaction may be unavailable; code executing in a transactional context may exit non-locally. If a transactional task or callback fails to commit but the container determines that it is safe and desirable to re-attempt it, it will automatically be returned to the queue of items eligible for execution. 
A transactional unit of work that fails in a “retryable” way will be retried up to a configured maximum number of times (3, by default) before being abandoned. Next: Data binding and storage, Previous: Transaction management, Up: Application services [Contents][Index] gzochid exposes its transactional persistence services via flexible, user-defined data structures called “managed records.” A managed record is just like an R6RS record—in fact, a managed record is an R6RS record that has the record-type gzochi:managed-record as its parent—with a few additional rules. First of all, the fields of a managed record must be annotated to assist the container in persisting them to the data store. This is done via a serialization clause added to every field sub-clause in the managed record definition. The serialization clause gives a reference to the serialization to use to convert the field value to and from a stream of bytes as it is written to and read from the data store. A serialization is a record that provides serializer and deserializer procedures that are called by the container during the persistence phrase for managed records that have been modified in a transaction. The serialization clause may be omitted for fields whose value will always be a managed record. In this case, a built-in managed record serialization will be used. See gzochi io, for more information on serialization. Second, managed record-type definitions must be nongenerative (unique) with respect to their serialized type. Every managed record type is associated with a symbol (called a “serial UID”) that identifies it to the serialization system, and no two managed record types may share the same serial UID. A record type’s serial UID may be given explicitly, via the serial-uid keyword argument or clause, or one will be selected automatically: By taking the record type’s R6RS uid (if the type is nongenerative) or by taking the record type name. All managed record types definitions are added to a type registry, from which they are looked up by their serial UIDs during serialization and deserialization. Record types that do not specify a target type registry via the type-registry keyword argument or clause will be added to a default type registry. The (gzochi data) data module exports procedures that allow developers to create new type registries, but this is not typically useful outside the scope of a database migration, since during normal operation of the gzochid container, only the default type registry is used. See Migrating data, for more information on data migration. For parity with the R6RS record libraries, the gzochid Scheme API provides both syntactic and procedural facilities for working with managed records. See gzochi data, for more information on creating managed records. Next: Client session management, Previous: Managed records, Up: Application services [Contents][Index] gzochid provides a few different ways of accessing data persisted as managed records. You can access persistent data implicitly, in the form of a managed record used as a callback for a lifecycle event (e.g., client messages); or you can bind managed objects to names for explicitly retrieval. In either case, the container will also manage the persistence of managed records reachable from the root record—that is to say, gzochid knows when a field in one managed record contains a reference to another managed record and only persists the portions of the object graph that change during the execution of application code. 
Like the rest of the operations provided by the container, access to data follows transactional semantics. Any single flow of execution within an application is guaranteed a consistent view of data over the lifetime of its execution, and the set of changes it makes to data during this lifetime will be persisted together or not at all. These operations are also subject to transactional constraints, such as the requirement that they complete within a configured timeout and that they not conflict with operations accessing the same data in other transactions. If these constraints are violated, the operation will fail and the task will be retried or abandoned. Bindings can be manipulated using the gzochid:set-binding!, gzochid:get-binding and gzochid:remove-binding! procedures. For example, the following code snippet introduces a new binding that refers to a field of the managed record obj: (define obj (gzochi:get-binding "my-object")) (gzochi:set-binding! "f" (my-object-field obj)) Note that the container tracks references to managed records such that at any subsequent point (provided the bindings above are not mutated), the following expression will evaluate to #t: (eq? (my-object-field (gzochi:get-binding "my-object")) (gzochi:get-binding "f")) See gzochi data, for more information. Next: Channel management, Previous: Data binding and storage, Up: Application services [Contents][Index] A connected and authenticated user is exposed to game code in the form of a client session, a managed record that can be used to manipulate the state of that user. Messages may be queued transactionally for delivery to individual client sessions or to groups of sessions (see below); sessions can be explicitly disconnected; and handlers may be registered for session-related events, such as incoming messages and client-side disconnections. Because client sessions are managed records, they can be used any place in game application code that managed records may be used, such as automatically serialized fields of other managed records or as the data for an event handler. See gzochi client, for more information. Next: Task scheduling, Previous: Client session management, Up: Application services [Contents][Index] Message broadcasting is a pattern that appears frequently in massively multiplayer game development. There may be a logical set of players that should be managed and addressed by the server as a group for the purposes of messaging. This group may include every player in the game, in the case of system notifications that need to be sent globally; or it may correspond to a logical grouping arising from game logic, such as broadcasting messages only to players gathered in a particular room or region. gzochid addresses this case by providing an abstraction for managing communication with arbitrarily large numbers of grouped client sessions. gzochid “channels” are managed records, and, like client sessions, can be used any place in game application code that supports managed records. See gzochi channel, for more information. Next: Transactional logging, Previous: Channel management, Up: Application services [Contents][Index] gzochid provides an explicit task scheduling API that is used in place of other mechanisms for asynchronous processing, such as threads. 
A task in gzochid is a callback similar to the ones invoked for application lifecycle events, such as initialization; it specifies a procedure to execute (qualified by a Guile module name), accompanied optionally by an arbitrary managed record to supply contextual information.

The task API provides several options for task scheduling. Scheduling is a transactional operation with respect to the code doing the scheduling, so the earliest a scheduled task can run is following a successful commit of the transactional code (a task or a lifecycle callback) that scheduled it. Additionally, the execution of a task may be delayed by a user-specified number of milliseconds. If a task needs to run more than once, you can reschedule it explicitly as part of its execution, or you can schedule it as a periodic task. Periodic tasks are automatically rescheduled to run on a repeating basis with each execution following the previous one after a specified number of milliseconds. Task scheduling is persistent, such that the schedule of pending tasks will survive a restart of the gzochid container. See gzochi task, for more information.

Previous: Task scheduling, Up: Application services [Contents][Index]

Because gzochid executes application code transactionally, any side effects (e.g., queued messages, scheduled tasks) will be rolled back if the enclosing transaction fails to commit. And a single task may be executed several times before committing successfully if there is heavy contention for data or other resources. As such, if log messages are written non-transactionally, it can be difficult to use them to trace application events. The gzochid container provides an API for writing log messages that will only be flushed on a successful commit. For example, if the following segment of transactional code is attempted and rolled back twice before committing:

(display "A log message.") (newline)
(gzochi:log-info "A transactional log message.")

...the output might include:

A log message.
A log message.
A log message.
A transactional log message.

See gzochi log, for more information.

Next: User authentication, Previous: Application services, Up: Top [Contents][Index]

The protocol used for gzochi client-server communication is documented below. Note that this description is provided merely for the curious; the responsibility of gzochi game developers is to handle the messages that this underlying protocol delivers—in the form of byte arrays—to the client or server. The format and encoding of these byte arrays is left up to game developers to determine.

The low-level protocol described below conforms to a general pattern of a two-byte prefix encoding the length of the message payload, if any, followed by an “opcode” indicating the purpose of the message. (The opcode byte is not included in the length prefix.) The maximum size of a message body sent between components in a gzochi game is thus 65535. Note that this is not necessarily the size of the packet that delivers the message; the client and server may break a larger protocol message up into smaller packets that are re-assembled by the recipient. (Naturally, this behavior is transparent to gzochi game developers.)

The following table lists the message types and structures that make up the low-level gzochi byte protocol.

LOGIN_REQUEST (0x10)
A request from a client to login to a gzochid application endpoint.
The message payload must include the UTF-8-encoded endpoint name followed by a null byte (0x0), followed by a byte sequence that will be passed uninterpreted to the authentication plugin configured for the specified endpoint.

LOGIN_SUCCESS (0x11)
A message from the server indicating that a LOGIN_REQUEST was received and that authentication was successful. There is no message payload for this message.

LOGIN_FAILURE (0x12)
A message from the server indicating that a LOGIN_REQUEST was received but that the login could not be completed successfully.

LOGOUT_REQUEST (0x20)
A request from an authenticated client to perform a graceful disconnect from a gzochid server. There is no message payload for this message.

LOGOUT_SUCCESS (0x21)
A message from the server indicating that a LOGOUT_REQUEST message was received. There is no message payload for this message. Following the dispatch of this message, the server will close the client’s socket connection.

SESSION_DISCONNECTED (0x30)
A message from the server notifying the client that the server will be immediately closing the client’s socket connection. There is no message payload for this message.

SESSION_MESSAGE (0x31)
A message that may be sent from the server or the client that carries a message to be delivered, respectively, to a gzochid game application endpoint or to client application code. The message payload follows the opcode as a byte sequence that will be passed uninterpreted to a handler registered by game code.

Next: Monitoring, Previous: Communication protocol, Up: Top [Contents][Index]

gzochid employs a pluggable authentication mechanism to allow each game application endpoint to specify an authentication scheme that meets its security requirements. Authentication plugins are built and distributed as shared libraries and installed to the “gzochid” directory under the library directory of the installation prefix; for example, ‘/usr/local/lib/gzochid’. This directory is scanned on server startup and any discovered plugins are dynamically loaded.

Plugins are configured for use by individual game applications via the auth element in the game descriptor file. Nested property elements can be used to pass application-specific configuration to the plugin like so:

<auth type="password_file">
  <property name="path" value="/path/to/passwords.txt" />
</auth>

The plugins currently available are described in the following sections.

Next: Password file authentication, Up: User authentication [Contents][Index]

The pass-thru authentication plugin performs no actual authentication—all login requests are accepted—and the contents of the byte sequence included as part of the login request are interpreted as a UTF-8-encoded text string and are used as the name portion of the identity for the resulting client sessions.

Note that pass-thru authentication is built into the gzochid server container and does not need to be loaded or otherwise configured. This scheme is what will be used by applications that do not explicitly specify another authentication plugin.

Next: Kerberos v5 authentication, Previous: Pass-thru authentication, Up: User authentication [Contents][Index]

The password file authentication plugin allows a game application to authenticate users against a list defined in a text file. The location of this file is given by the required “path” property described below. Each row of the password file gives a username, followed by the equals (“=”) character, followed by a password, and terminated by a newline.
testuser=password123

This plugin interprets the byte string passed by the client during authentication as follows: All characters up to (but not including) a NULL byte (0x00) are interpreted as a username and are used as the name portion of the identity for the client session following successful authentication; all characters following this NULL byte are interpreted as a password and must match the password component of the row corresponding to the username in the password file.

To use the password file plugin to authenticate clients of an application, specify “password_file” as the value of the type attribute in an auth element in the application’s game descriptor file. This plugin accepts the following configuration properties.

Previous: Password file authentication, Up: User authentication [Contents][Index]

The Kerberos v5 authentication plugin allows a game application to act as a host-based service in a Kerberos authentication realm, and verify tickets generated for a user by a ticket granting server. The contents of the byte sequence included as part of the login request are interpreted as an AP-REQ request and passed to the krb5_rd_req Kerberos API function. The name portion of the identity created for the resulting client session is obtained by calling the krb5_unparse_name function on the parsed request ticket.

See the README file included in the gzochi source distribution for instructions for building the Kerberos authentication plugin and linking with the Kerberos libraries.

To use the Kerberos plugin to authenticate the users of an application, specify “krb5” as the value of the type attribute in an auth element in the application’s game descriptor file. This plugin accepts the following configuration properties. The configured service principal name must match the target service principal for which the client generated the ticket.

The following sample code extracts the credentials for the current user principal from a local credentials cache and encodes a request that can be read by the Kerberos authentication plugin. It may be useful as a basis for building clients of applications that use this plugin.

#include <krb5.h>
#include <stddef.h>

void
authenticate ()
{
  krb5_auth_context auth_context;
  krb5_ccache ccache;
  krb5_context k5_ctx;
  krb5_data outbuf;

  krb5_init_context (&k5_ctx);
  krb5_cc_default (k5_ctx, &ccache);
  krb5_auth_con_init (k5_ctx, &auth_context);

  krb5_mk_req (k5_ctx, &auth_context, 0, "gzochi", "localhost.localdomain",
               NULL, ccache, &outbuf);

  /* The contents of outbuf.data should be sent as the payload of an
     authentication request to the gzochid server. */

  ...
}

Note that because of the one-way nature of the gzochi client authentication protocol, game applications cannot prove their identity to an authenticating client, and the AP_OPTS_MUTUAL_REQUIRED flag should not be used by clients.

Next: Remote debugging, Previous: User authentication, Up: Top [Contents][Index]

When enabled, the gzochid administrative module collects various bits of statistical information and makes resources related to active game applications available for reporting. The module’s architecture supports the registration of sub-modules—described below—that expose this information in different ways.

The monitoring web server

When the monitoring web server is enabled, it listens for HTTP connections on its configured port (8080 by default) and serves HTML pages with information about the server and the games it runs.
In particular, a browser for the game applications’ data stores is provided, which renders the contents of the store in a format similar to a “hex editor” application. There are two ways of accessing the data store through the monitoring web server. The binding list can be accessed by visiting the following URL in a web browser: Provided the server is running on the local machine and listening on port 8080, the URL above will return a list of bound names in the data store for ‘my-game’, with links to the data bound to each name. For a lower-level view of the store, you can visit the URL: ...which serves a list of all of the managed records in the store, indexed by the internal object identifiers the container uses to track them.

Future plans for the monitoring web server include per-game statistical reporting on data such as transactional throughput and client session volume.

Next: Application design with gzochid, Previous: Monitoring, Up: Top [Contents][Index]

Application code deployed to gzochid executes in a context with unusual characteristics. Even when unit tests—which should be the primary means of detecting and preventing bugs—have been written, situations present themselves in which inspecting the state of a running application is the most effective way to troubleshoot an error or incorrect behavior. To this end, gzochid offers a remote debugging interface that allows a developer to connect to a running gzochid instance using a telnet client and to evaluate Scheme code within the context of an application hosted by that instance.

The debugging server listens on the port specified by the module.debug.port setting in the server configuration file. When a client connects to the debug port, a new Scheme REPL (Read Evaluate Print Loop) is launched and associated with a fresh “sandbox” environment in which the (guile), (gzochi), and (gzochi admin) modules have been pre-loaded. A user can interact with the debugger using the full complement of Scheme syntax and Guile REPL commands (e.g., ,pretty-print). A typical debugging session might have the form:

julian@navigator:~$ telnet localhost 37146
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GNU Guile 2.0.6.31-2446f
Copyright (C) 1995-2012 Free Software Foundation, Inc.

Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.

Enter `,help' for help.
scheme@(#{ g109}#)> (gzochi:applications)
$1 = (#<r6rs:record:gzochi:application-context>)
scheme@(#{ g109}#)> (gzochi:with-application (car $1) (lambda () (gzochi:get-binding "scoreboard")))
$2 = #<r6rs:record:my-game:scoreboard>

Next: Scheme API reference, Previous: Remote debugging, Up: Top [Contents][Index]

As the introduction to this manual explains, gzochid is a server container for gzochi game applications. To take full advantage of this architecture, it may be useful to understand the way gzochid interacts with your application.

There are several entry points to a game application. When an application is started for the first time, its initialization procedure is invoked. This procedure is responsible for setting up any global state required by the game and creating any managed record bindings that must be present before client connections can be accepted. The game application descriptor also registers a callback procedure that is invoked when a new, authenticated client connection is established.
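As a minimal sketch of what such an initialization entry point might look like (the procedure and record names here are illustrative, and make-scoreboard stands in for the constructor of some application-defined managed record type), the callback simply creates the root bindings the rest of the game will rely on:

;; Sketch only. The initialization callback receives a hash table of
;; game properties taken from the game descriptor.
(define (initialize-my-game properties)
  ;; Bind a root managed record to a well-known name so that later
  ;; tasks and session callbacks can retrieve it with gzochi:get-binding.
  (gzochi:set-binding! "scoreboard" (make-scoreboard)))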
The following paragraphs describe some best practices for game programming in gzochid.

Avoid top-level Scheme bindings for application state. Modifications to data accessed via bindings in the top-level module environment (i.e., variables created using define) cannot be tracked by gzochid, whether the data is part of a managed record or not. To share mutable state between different execution paths in a game application, bind the state to a name with the gzochid binding API and retrieve and modify its value—creating a local definition via a let form, perhaps—within the scope of the code that needs it. There is no need to synchronize access to data managed by gzochid; the container will prevent race conditions by rolling back task execution as necessary.

Express concurrency through task scheduling. Likewise, you should never need to explicitly launch a thread to handle a bit of application work. Scheduling work via gzochid’s task API ensures that it is executed in the proper transactional context, that it will be transparently rescheduled following a non-fatal error, and that its status will survive the failure and restart of the container.

Limit task scope. The longer a task runs and the more resources it accesses, the more likely it is to disrupt the behavior of other simultaneously-executing tasks, and the less likely its transaction will be able to commit successfully—tasks whose execution time exceeds the configured value of tx.timeout will be aborted or otherwise prevented from committing. If there is a lot of work to be done or there are many objects to be modified, try breaking a large task into a series of smaller tasks, each of which does a portion of the work and then schedules the next portion to run as soon as possible.

Avoid external side effects. Because application tasks are executed within the scope of a transaction that may be rolled back and re-attempted arbitrarily many times, any side effects these tasks have that are not managed by gzochid may be “played back” multiple times along with the rest of the task’s execution, possibly duplicating their impact.

Next: An example application, Previous: Application design with gzochid, Up: Top [Contents][Index]

The following sections describe the Scheme API exposed to the server-side components of gzochi applications by the gzochid container. The API consists of a set of use-specific R6RS libraries that can be imported independently or all together via the (gzochi) composite library.

Next: gzochi app, Up: Scheme API reference [Contents][Index]

The (gzochi admin) module exports procedures and data types that enable introspection of game application state and the execution of Scheme code within the context of a running game application. These functions are intended for use with the remote debugging interface provided by the gzochid container. Because their use cases as part of deployed application code are not well defined, they are not exported from the (gzochi) composite library.

Returns a list of application context objects representing the set of applications running in the container. Returns an application context representing the “current” application, e.g. as set by gzochi:with-application, or #f if the current application has not been set. Returns #t if obj is an application context object (as returned by gzochi:applications or gzochi:current-application), #f otherwise. Returns the name of the application represented by the application context object context.
Calls the zero-argument procedure thunk with the current application set temporarily to the application represented by the application context object context. The value yielded by thunk is returned. The application that was current before calling gzochi:with-application (or #f if there was none) will be restored when thunk exits locally or non-locally.

Next: gzochi channel, Previous: gzochi admin, Up: Scheme API reference [Contents][Index]

The (gzochi app) module exports procedures and data types common to other components of the API.

Callbacks are serializable references to Scheme procedures with some optionally associated data. They are used in several places to indicate actions to be taken by game application code upon notification of an asynchronous event such as a new authenticated client connection. Procedures are represented within callbacks as Scheme symbols; the modules that export them are represented as lists of symbols. The data associated with a callback—if any—must take the form of a managed record, so that its lifecycle may be managed by the container.

Constructs a new gzochi:callback with the specified procedure, module, and optional data. Expands to an invocation of gzochi:make-callback with the specified procedure, module, and optional data. If module is omitted, the module in which procedure is defined will be used. Returns #t if obj is a gzochi:callback, #f otherwise. Returns the module specification for the gzochi:callback callback as a list of symbols. Returns the procedure name for the gzochi:callback callback as a symbol. Returns the managed record that encapsulates the data for the gzochi:callback callback or #f if the callback was created without any. The value of this fluid is set to the root deployment directory of the application to which the currently executing code belongs. (By default, this is “/var/gzochid/deploy/[application name]”.)

Next: gzochi client, Previous: gzochi app, Up: Scheme API reference [Contents][Index]

The (gzochi channel) library exports procedures that are useful for managing client communication channels.

Creates and returns a new channel bound to the string specified by name. A &gzochi:name-exists condition will be raised if there is already a channel with that name. Returns the channel bound to the specified string name, which must have been created previously via a call to gzochi:create-channel. A &gzochi:name-not-bound condition will be raised if there is no channel with that name. Returns #t if obj is a channel, #f otherwise. Returns the name of the channel channel as a string. Adds the client session session to the channel channel. session is guaranteed to receive any messages successfully committed following the successful commit of the join operation. Removes the client session session from the channel channel. session is guaranteed not to receive any messages committed following the successful commit of the leave operation. Enqueues a message to be sent, in the form of the bytevector msg, to all client sessions that are members of the channel channel at the time this procedure is called. Destroys the channel channel, unbinding its associated name and removing all constituent client sessions.

Next: gzochi conditions, Previous: gzochi channel, Up: Scheme API reference [Contents][Index]

The (gzochi client) library provides procedures that are useful for working with individual client sessions. See (gzochi channel), for functionality related to groups of client sessions.
Client sessions are managed records; as such, you may store a session as the value of fields in other managed records or as the data for tasks or other callbacks. When a client is disconnected, the session record is removed from the data store, and attempts to access it will result in a &gzochi:object-removed condition being raised.

Returns #t if obj is a client session record, #f otherwise. Returns the identity associated with the session session as a string. The game’s authentication plugin is responsible for setting the session’s identity. Enqueues a message to be sent, in the form of the bytevector msg, to the client session session.

Client session listeners are managed records that are used by the gzochid container to notify game application code of events related to client sessions. A listener is returned from the callback for the logged in event to indicate a successful login handshake (#f may be returned instead to signal a login failure.) This listener must in turn be configured with two gzochi:callback objects, one to be invoked when a new message is received from the specified session, the other to be invoked when the session disconnects from the server.

The record type, for introspection or use as a base type. Constructs a new gzochi:client-session-listener with the specified gzochi:callback objects received-message and disconnected, which will be invoked, respectively, when a message is received from a client and when a client disconnects from the server. Returns #t if obj is a gzochi:client-session-listener, #f otherwise. Returns the gzochi:callback registered with the gzochi:client-session-listener listener for handling received messages. Returns the gzochi:callback registered with the gzochi:client-session-listener listener for handling session disconnection.

Next: gzochi data, Previous: gzochi client, Up: Scheme API reference [Contents][Index]

The (gzochi conditions) library exposes condition types and related procedures for conditions that may be raised during the execution of game application code.

A &gzochi:name-exists condition is raised in contexts in which an attempt is being made to introduce a new binding, but the target name is already bound. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:name-exists condition for the specified name string name. Returns #t if obj is a &gzochi:name-exists condition, #f otherwise. Returns the name associated with the specified &gzochi:name-exists condition cond.

A &gzochi:name-not-bound condition is raised in contexts in which a non-existent named binding is looked up. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:name-not-bound condition for the specified name string name. Returns #t if obj is a &gzochi:name-not-bound condition, #f otherwise. Returns the name associated with the specified &gzochi:name-not-bound condition cond.

A &gzochi:object-removed condition is raised when a reference to a managed record is accessed after the record has been removed. This may happen for a number of reasons: A client session record used as a field value or binding and which is subsequently disconnected; a named binding whose target value is explicitly removed; or a managed record used as a value that is accessed after it has been explicitly removed. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:object-removed condition.
Returns #t if obj is a &gzochi:object-removed condition, #f otherwise.

A &gzochi:no-current-application condition is raised when an operation requiring an application context is attempted and none is present. This happens most frequently because of user error in an interactive remote debugging session—for example, a call to a transaction-aware API function that is not wrapped in an invocation of gzochi:with-application. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:no-current-application condition. Returns #t if obj is a &gzochi:no-current-application condition, #f otherwise.

A &gzochi:transaction-aborted condition is raised when a transactional operation is attempted and the transaction bound to the current thread has been found to be in an inconsistent state—that is to say, it has been rolled back or marked for rollback. Although this condition will almost always be raised in a non-continuable way, aborted transactions are an expected part of the control flow of a gzochi game application. A transaction may be aborted for a number of reasons; the code executing as part of the transaction may have accessed data in a way that brought it into conflict with code executing as part of another transaction, or its execution time may have exceeded a configured threshold and the scheduler has pre-emptively aborted its transaction to prevent it from interfering with other transactions. A &gzochi:transaction-aborted condition does not typically require explicit handling. Rather, it is best to allow the code that triggered the condition to exit non-locally; gzochid’s task scheduler will observe the state of the transaction upon exit and reschedule the task or callback accordingly. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:transaction-aborted condition. Returns #t if obj is a &gzochi:transaction-aborted condition, #f otherwise.

A &gzochi:transaction-retry condition is raised when a transactional operation or an interaction with the gzochi API fails but is safe to retry and worth retrying because it might succeed on a subsequent execution attempt. This condition is often raised as a composite condition with a &gzochi:transaction-timeout condition when a task’s execution times out. The container’s Scheme interface recognizes this condition type, and the container uses it as part of determining whether to reschedule or abandon a task. Application code may raise this condition explicitly, so as to release resources if it can determine ahead of time that the current transaction will not be able to commit. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:transaction-retry condition. Returns #t if obj is a &gzochi:transaction-retry condition, #f otherwise.

A &gzochi:transaction-timeout condition is raised when a transactional operation takes longer than the time remaining for the current transaction to run, or when the current transaction attempts a transactional operation after its allotted running time has elapsed. The condition type, for introspection or use as a base type for other conditions. Constructs a new &gzochi:transaction-timeout condition. Returns #t if obj is a &gzochi:transaction-timeout condition, #f otherwise.
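Because these are ordinary R6RS condition types, application code can trap them with guard. The sketch below shows a common “get or create” pattern for a named binding; the predicate name gzochi:name-not-bound-condition? is an assumption made for the sake of the example (the library exports a predicate for &gzochi:name-not-bound, but its exact name is not reproduced in this excerpt).

;; Sketch only. gzochi:name-not-bound-condition? is an assumed name for
;; the &gzochi:name-not-bound predicate; make-default is any thunk that
;; constructs a managed record.
(define (get-or-create-binding name make-default)
  (guard (c ((gzochi:name-not-bound-condition? c)
             (let ((obj (make-default)))
               (gzochi:set-binding! name obj)
               obj)))
    (gzochi:get-binding name)))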
Next: gzochi io, Previous: gzochi conditions, Up: Scheme API reference [Contents][Index]

The (gzochi data) library exports the data structures and procedures that make up gzochid’s data storage and serialization API. As discussed in a previous section, managed records are the foundation of any game application’s interaction with the container’s data services. The syntactic and procedural APIs for managed records are discussed below. Note that because managed records are implemented on top of R6RS records, they may be treated as such for the purposes of the procedures in the (rnrs records inspection) library.

Defines a new managed record type, introducing bindings for a record-type descriptor, a record constructor descriptor, a constructor procedure, a record predicate, and managed accessor and mutator procedures for the new managed record type’s fields. The structure of name-spec and the record-clause sub-forms are the same as in the R6RS define-record-type, with the following exceptions:

First, each field definition that appears in a fields declaration of a record-clause sub-form of a managed record definition may include a serialization specification that provides the serialization to use for the field when it is marshalled to or from persistent storage. This specification has the form (serialization s), where s is a gzochi:serialization record. If the serialization specification is omitted, the container will use a default serialization that requires all field values to be managed records themselves or #f.

A custom serial UID may be specified by adding a serial-uid clause as one of the record-clause sub-forms. This UID will be used to register the record type definition in the default type registry, or in the registry specified by the type-registry clause, if one is provided. Custom serial UIDs and non-default type registries are mainly useful when implementing migration processes. See Migrating data, for more information.

Evaluates to the managed record-type descriptor associated with the type specified by record-name. Evaluates to the managed record-constructor descriptor associated with the type specified by record-name. Returns a new record-type descriptor for the managed record with the specified properties, which have the same semantics as they do when passed to R6RS’s make-record-type-descriptor, with the exceptions that the field descriptors in fields may contain serialization specifiers as described above, and that the optional keyword arguments #:serial-uid and #:type-registry may be present and have the effect described above. Returns the managed record field accessor procedure for the kth field of the managed record-type descriptor mrtd. Returns the managed record field mutator procedure for the kth field of the managed record-type descriptor mrtd. An &assertion condition will be raised if this field is not mutable. Returns a constructor procedure for the managed record constructor descriptor mrcd. This procedure returns the same value as the record-constructor procedure in the (rnrs records procedural) library. These procedures return the same values as their counterparts in the (rnrs records procedural) library.

A constructor and type predicate for managed record type registries, which can be used to control the visibility of managed record types in different contexts; in particular, across different schemas during a database migration. gzochi:make-managed-record-type-registry constructs a new type registry object. gzochi:managed-record-type-registry?
returns #t if obj is a managed record type registry, #f otherwise.

Returns #t if obj is a managed record, #f otherwise. Returns the record-type descriptor for mr. This procedure returns the same value as the record-rtd procedure in the (rnrs records procedural) library.

References to managed records within an object graph of other managed records will be tracked transparently by the container. The following procedures can be used to explicitly store and retrieve references to managed records by name.

Returns the managed record associated with the string name name. A &gzochi:name-not-bound condition will be raised if there is no binding for name. Creates an association between the string name and object, which must be a managed record, replacing any previous binding that may exist for name. Removes the binding for the string name. A &gzochi:name-not-bound condition will be raised if there is no binding for name. Note that this procedure only removes the association between name and a managed record; to remove the record itself, you must call gzochi:remove-record!.

Many of the operations provided by (gzochi data) are supported only for managed records. For example, you cannot use gzochi:set-binding! to store a primitive value; you must first “wrap” that value as a field in a managed record type that includes information about how it should be serialized. In lieu of creating distinct managed record types to wrap every unmanaged type that needs to be stored, the procedures below are provided to support the use of the generic gzochi:managed-serializable type that encapsulates the serialization information for unmanaged types.

Constructs a new gzochi:managed-serializable object with the specified value. The serializer-callback and deserializer-callback arguments must be gzochi:callback instances specifying procedures to be used for serialization and deserialization, respectively. Returns #t if obj is a managed serializable record, #f otherwise. Returns the value wrapped by the managed serializable record ms.

Compound data types such as vectors and hash tables are necessary components of any non-trivial application, but designing structures that support highly concurrent access to and modification of their contents is challenging. The following API describes “managed” implementations of R6RS vectors and hash tables, which partition their elements such that conflicts between transactions accessing different elements of the same container are minimized. For applications that require access to sequentially-ordered elements and need the container to resize itself as elements are added or removed, a managed sequence type is provided which features an API based on the SRFI-44 collections proposal.

Returns a newly allocated managed vector of len elements. The initial contents of each position are set to #f. Returns a newly allocated managed vector composed of the given arguments. If any vector element is not a managed record, the keyword arguments #:serializer and #:deserializer must be given (in the form of gzochi:callback records), and managed serializable wrappers will be generated to hold any unmanaged elements. Returns #t if obj is a managed vector, #f otherwise. Returns the contents of position k of vec. k must be a valid index of vec. Stores obj at position k of vec. k must be a valid index of vec. The value returned by gzochi:managed-vector-set! is unspecified.
If obj is not a managed record, the keyword arguments #:serializer and #:deserializer must be given (in the form of gzochi:callback records), and a managed serializable wrapper will be generated to hold obj. Returns the number of elements in vector as an exact integer. Returns a newly allocated list composed of the contents of v.

Constructs a new hash table that uses the procedure specified by the gzochi:callback argument equiv-callback to compare keys and the procedure specified by hash-callback as a hash function. equiv-callback must specify a procedure that accepts two arguments and returns a true value if they are equivalent, #f otherwise; hash-callback must specify a procedure that accepts one argument and returns a non-negative integer. Returns #t if obj is a managed hash table, #f otherwise. Returns the number of keys currently in the managed hash table hashtable. Returns the value associated with key in the managed hash table hashtable or default if none is found. Associates the key key with the value obj in the managed hash table hashtable, and returns an unspecified value. If key is not a managed record, the keyword arguments #:key-serializer and #:key-deserializer must be given (in the form of gzochi:callback records), and a managed serializable wrapper will be generated to hold key. Likewise, if obj is not a managed record, #:value-serializer and #:value-deserializer must be provided. Removes any association found for the key key in the hash table hashtable, and returns an unspecified value. Returns #t if the managed hash table hashtable contains an association for the key key, #f otherwise. Associates with key in the managed hash table hashtable the result of calling proc, which must be a procedure that takes one argument, on the value currently associated with key in hashtable—or on default if no such association exists. If key is not a managed record, the keyword arguments #:key-serializer and #:key-deserializer must be given. If proc returns something that is not a managed record—or if default becomes the value of the association—then #:value-serializer and #:value-deserializer must be provided. Removes all of the associations from the managed hash table hashtable. Returns a managed vector of the keys with associations in the managed hash table hashtable, in an unspecified order. Returns two values—a managed vector of the keys with associations in the managed hash table hashtable, and a managed vector of the values to which these keys are mapped, in corresponding but unspecified order. Returns a gzochi:callback record that represents the equivalence predicate used by hashtable. Returns a gzochi:callback record that represents the hash function used by hashtable.

Returns a newly allocated managed sequence. Returns #t if obj is a managed sequence, #f otherwise. Returns a newly allocated list composed of the contents of seq. Appends obj to the end of seq. The value returned by gzochi:managed-sequence-add! is unspecified. If obj is not a managed record, the keyword arguments #:serializer and #:deserializer must be given (in the form of gzochi:callback records), and a managed serializable wrapper will be generated to hold obj. Returns #t if seq contains at least one element equivalent to obj as determined by the equivalence procedure pred, #f otherwise. If pred is not specified, eq? will be used to check equivalence. Removes the first element of seq equivalent to obj, as determined by the equivalence procedure pred. If pred is not specified, eq? will be used to check equivalence.
The positions of all elements that come after the removed element are decreased by one. The value returned by gzochi:managed-sequence-delete! is unspecified. Removes the element at position i of seq. The positions of all elements that come after i are decreased by one. The value returned by gzochi:managed-sequence-delete-at! is unspecified. Applies fold-fn to each element in seq. fold-fn should accept a single element from the sequence as its first argument, and the values passed as seeds as its remaining arguments. The function must return either #f, indicating that the folding iteration should halt; or the values to be passed as seed values on the next invocation of fold-fn. The fold completes when fold-fn has been applied to every element of seq, or when fold-fn returns #f. gzochi:managed-sequence-fold-left applies fold-fn to every element in seq starting with position 0; gzochi:managed-sequence-fold-right begins with the element at the end of the sequence and works backwards to the first. These procedures return the final set of seed values returned by fold-fn as the values of the fold. Inserts obj into seq at position i, which must be between zero and the number of elements currently in the sequence, inclusive. The positions of all elements that come after i are increased by one. The value returned by gzochi:managed-sequence-insert! is unspecified. If obj is not a managed record, the keyword arguments #:serializer and #:deserializer must be given (in the form of gzochi:callback records), and a managed serializable wrapper will be generated to hold obj. Returns the element at position i in seq. Replaces the element in seq at position i, which must be between zero and the number of elements currently in the sequence, with obj. The value returned by gzochi:managed-sequence-set! is unspecified. If obj is not a managed record, the keyword arguments #:serializer and #:deserializer must be given (in the form of gzochi:callback records), and a managed serializable wrapper will be generated to hold obj. Returns the number of entities currently in the managed sequence seq.

Next: gzochi log, Previous: gzochi data, Up: Scheme API reference [Contents][Index]

The (gzochi io) library exports structures and procedures useful for building serializations, which are used by the data management features of the gzochid container for storing and retrieving application data. The serializations described below may be used directly as field serializations for a managed record-type descriptor or composed to form serializations for more complex types.

A serialization is an R6RS record with fields for a serialization procedure and a deserialization procedure. Serialization procedures should take two arguments, an output port and the value to be serialized, and should write the serialized form of the value to the port as a sequence of bytes; deserialization procedures will be passed an input port and should read the bytes necessary to return a value of the required type. No type or length information—or delimiters—other than what the serializers themselves add to the output will be persisted. In particular, deserializer procedures should take care not to read more bytes than necessary from the port, as doing so will disrupt the operation of subsequent deserializers.

The base serialization record-type, constructor, predicate, and field accessors. A serialization that uses an efficient, variable-length encoding format to read and write integer values of arbitrary size.
A serialization that encodes boolean values as integers, with 1 representing true and 0 representing false. Note that the deserializer procedure will treat any non-zero value as encoding true. A serialization that reads and writes text strings using the UTF-8 encoding, with an integer prefix giving the number of characters in the original string. A serialization that reads and writes symbols by converting them to and from strings and delegating to the procedures in gzochi:string-serialization. A serialization that reads and writes R6RS bytevectors as sequences of bytes with encoded integer prefixes giving the lengths of the original vectors. Returns a new gzochi:serialization record that can be used as a serialization for a list of arbitrary length in which every car of the list has its serialization performed by the serialization ser.

Next: gzochi task, Previous: gzochi io, Up: Scheme API reference [Contents][Index]

The (gzochi log) library provides procedures for performing application-level message logging, at various levels of priority. All of the procedures in this library do their writes transactionally, which means that the messages will only be persisted (to the console and/or a file on disk, depending on configuration) if the current transaction commits. These procedures interpret their “rest” arguments as values with which to replace escapes in the message string. Formatting and replacement is done according to the rules of Guile’s simple-format procedure.

Writes a transactional message at the priority level priority, a symbol that must be one of err, warning, notice, info, and debug. Each of these convenience procedures delegates to gzochi:log, passing a respective priority value.

Next: gzochi, Previous: gzochi log, Up: Scheme API reference [Contents][Index]

The (gzochi task) library provides procedures for creating and scheduling transactional tasks. Tasks are gzochi:callback records, and represent blocks of code to be executed on behalf of a game application. All tasks are executed in a fully transactional context, meaning that their side effects enjoy guarantees of atomicity.

Transactionally schedules the gzochi:callback callback for future execution. If delay is specified, it must be a non-negative integer giving the number of milliseconds by which the execution of callback should be delayed. If period is also specified, callback will be executed repeatedly, and period must be a non-negative integer giving the number of milliseconds that must elapse between executions of callback. If period is specified, this procedure furthermore returns a “periodic task handle,” an opaque managed record that can be used to control the scheduling behavior of the resulting task. Note that the execution delay of a scheduled task, as well as the period of a periodic task, is “best effort.” The task will not be eligible for execution until at least the specified number of milliseconds have elapsed, but may be delayed further depending on the current state of the server. Returns #t if obj is a task handle, #f otherwise. Cancels the periodic task associated with the periodic task handle handle. Note that depending on the state of the container’s task execution schedule, the task may still be eligible for execution for a short period of time following its cancellation. After this procedure returns, handle will be removed from the data store.
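As a rough sketch of how a periodic task is assembled: the task procedure and module names below are illustrative, and the scheduling call is shown with an assumed name, since the exact exported name is not reproduced in this excerpt; consult the (gzochi task) exports for the actual procedure.

;; Sketch only. (my-game tasks) is a hypothetical application library
;; exporting respawn-monsters; region is a managed record supplying
;; context to the task.
(define (respawn-monsters region)
  ;; Transactional work over managed records reachable from `region'
  ;; goes here.
  (gzochi:log-info "respawn pass complete"))

(define (schedule-respawns region)
  ;; gzochi:schedule-task is an ASSUMED name for the scheduling
  ;; procedure described above; it takes the callback plus optional
  ;; delay and period arguments in milliseconds and, when a period is
  ;; given, returns a periodic task handle.
  (gzochi:schedule-task
   (gzochi:make-callback 'respawn-monsters '(my-game tasks) region)
   0 60000))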
Previous: gzochi task, Up: Scheme API reference [Contents][Index]

The (gzochi) library is a composite of all of the other public gzochi libraries, with the exception of ‘(gzochi admin)’. It imports and re-exports all of their exported procedures and syntactic forms.

Next: Database tools, Previous: Scheme API reference, Up: Top [Contents][Index]

The following sample code is a complete gzochid application that implements a simple “Hello, world!” behavior. Authenticated clients receive a simple greeting and may send a single message to the server before being disconnected.

#!r6rs

(library (gzochi example hello-world)
  (export initialize-hello-world hello-client client-message client-disconnected)
  (import (gzochi) (rnrs))

  (define (initialize-hello-world properties)
    (gzochi:notice "Hello, world!"))

  (define (hello-client session)
    (gzochi:client-session-send
     session (string->utf8
              (string-append
               "Hello, " (gzochi:client-session-name session) "!")))
    (gzochi:make-client-session-listener
     (gzochi:make-callback 'client-message '(gzochi example hello-world))
     (gzochi:make-callback 'client-disconnected '(gzochi example hello-world))))

  (define (client-message session msg)
    (let ((name (gzochi:client-session-name session)))
      (gzochi:notice "~a: ~a" name (utf8->string msg))
      (gzochi:client-session-send
       session (string->utf8 (string-append "Goodbye, " name "!"))))
    (gzochi:disconnect session))

  (define (client-disconnected session) (if #f #f))
)

To launch this application, copy and paste the code above into a file named “hello-world.scm” and copy it to a sub-directory named “hello-world/gzochi/example” below your gzochid application deployment root directory. For example, the default deployment root is “/var/gzochid/deploy,” so the full path in the default case would be “/var/gzochid/deploy/hello-world/gzochi/example/hello-world.scm.” You’ll also need to copy and paste the following game descriptor XML to a file named “game.xml” in the top-level “hello-world” directory.

<?xml version="1.0"?>

<game name="hello-world">
  <description>Hello, world!</description>
  <load-paths />

  <initialized>
    <callback module="gzochi example hello-world"
              procedure="initialize-hello-world" />
  </initialized>

  <logged-in>
    <callback module="gzochi example hello-world"
              procedure="hello-client" />
  </logged-in>
</game>

The descriptor begins by giving the application’s short name and a description. The load-paths element is a list of additional locations in which the system should look for Scheme modules when module dependencies are being resolved. The application’s deployment directory is implicitly on the load path, so this element is empty in the descriptor for “hello-world.”

Next there are declarations for the two primary callbacks, initialization and client connection. This game descriptor file sets the initialization callback to the initialize-hello-world procedure in the module (gzochi example hello-world) and sets the connection callback to the procedure hello-client in that same module. The Scheme implementation of initialize-hello-world gets passed a hash table with the game properties (which will be empty in this case) and doesn’t do anything except log a message. The hello-client callback is a bit more interesting—it sends a greeting to the newly-connected session and then constructs and returns a new client session listener, which includes two additional callbacks that will be used to handle events related specifically to the new session.
The first listener callback will be called when a message is received from a client session; in this case, the client-message procedure is used. The other callback (client-disconnected in the example above) will be called when the session disconnects.

Next: GNU Free Documentation License, Previous: An example application, Up: Top [Contents][Index]

The data for a gzochi game application is stored in three separate databases: names, which binds names to object identifiers; oids, which stores the serialized objects that encode the state of the game; and meta, which tracks necessary metadata about the object identifier key space. Tools for manipulating these databases are described in the following sections.

Next: Importing data, Up: Database tools [Contents][Index]

Making a copy of a gzochi game application database is a wise thing to do. Although the transactional storage layer of the server works to prevent inconsistent writes and catastrophic loss of data, it is unable to guard against “application-level” corruption of data resulting from software bugs. Creating a backup at strategic intervals lets you create rollback points before major releases, and also gives you access to “scratch” versions of real game state that you can use to test new or experimental code.

Copying the contents of the game databases is not as simple as copying the game’s data directory. The files and formats used to store game data are not compatible across storage engine implementations, and so a game database created by an instance of gzochid compiled against one storage engine is not readable by an instance of gzochid compiled against another. And depending on the capabilities of the storage engine, the contents of the files on disk may not fully capture the transactional state of a database being modified by a running gzochid server, nor provide enough context for an independent server instance to successfully reconstruct it.

The gzochi-dump utility allows you to perform a transactional export of the contents of any (or all) of the gzochi game databases in a portable, storage engine-independent format. To export all three of the databases for a game application (“names,” “oids,” and “meta”), run the gzochi-dump command on the data directory of the target game application, or just give it the name of the application, and it will “guess” its data directory based on the server’s storage configuration.

gzochi-dump /var/gzochid/data/mygame

The files names.dump, oids.dump, and meta.dump will be created in the current directory. To export a single game database, qualify the target (path or application name) with the name of the database.

gzochi-dump /var/gzochid/data/mygame:names

The contents of the specified database will be written to standard output. See the gzochi-dump man page for more information on the flags understood by the tool.

The gzochi-dump utility produces output in a portable flat-text format very similar to the format used by the db_dump tool distributed with Berkeley DB—though the output of gzochi-dump is the same no matter what storage engine the server is built against. The format is as follows: First, a header section is written, as a series of name=value pairs. The first line is always the version specifier, “VERSION=N”, where N is the version of the dump output format. The header section ends with the line “HEADER=END”. Next, a data section is written, as a series of pairs of keys and values, with each key and each value on its own line.
Each line begins with one character of whitespace, followed by the hexadecimal representation of the bytes of the key or value in a two-character, zero-padded format—effectively the output of printf with the %.2hhx conversion specifier. The data section ends with the line “DATA=END”. Here is some example output written as the result of dumping the “names” database of one of the example games included in the gzochi source distribution:

VERSION=3
format=bytevalue
type=btree
HEADER=END
 6f2e6d617a6500
 33643000
 732e6368616e6e656c2e6d61696e00
 33643100
 732e696e697469616c697a657200
 3000
DATA=END

Next: Migrating data, Previous: Exporting data, Up: Database tools [Contents][Index]

The gzochi-load utility rebuilds a game database from the contents of a database dumped by gzochi-dump in the format described in the previous section, read from standard input. As with gzochi-dump, you may specify the target of the gzochi-load command as either a path to an application’s data directory, or as the name of an application hosted by the gzochid container. The target database must also be given; gzochi-load can only load data into a single database at a time. The database will be created if it does not already exist—gzochi-load will complain and halt if the target database does exist, since its primary use case is rebuilding a complete database from scratch. This behavior can be overridden by running gzochi-load with the --force argument, but note that doing so runs the risk of corrupting existing game state, and so this option should be used with care.

The following example shows how to use the gzochi-load command to rebuild the “oids” database for the application “mygame2.” (gzochi-load will resolve the data directory for this application by reading the server configuration file and scanning the application deployment directory.)

cat oids.dump | gzochi-load mygame2:oids

To make a complete clone of a game application’s databases, dump all three databases transactionally with gzochi-dump, then load each dump file into the corresponding database in the target application data directory as per the shell script included below.

#!/bin/sh

DUMPDIR=`mktemp -d`

gzochi-dump -o $DUMPDIR $1

gzochi-load $2:meta < $DUMPDIR/meta.dump \
  && gzochi-load $2:oids < $DUMPDIR/oids.dump \
  && gzochi-load $2:names < $DUMPDIR/names.dump

rm -rf $DUMPDIR

Previous: Importing data, Up: Database tools [Contents][Index]

It is unusual that a single version of a piece of software serves its users for the duration of their experience of it. Game application software is no different; delivering bug-fixes, performance enhancements, and new features may require that you deploy updates to the game application code running in the gzochid container. Sometimes, this new code may depend on additions or modifications to the structure of the records that describe the persistent state of the game; and these modifications will usually need to be applied to instances of these records that have already been stored in the database.

The gzochi-migrate utility allows you to transform the contents of a gzochi game application database by visiting and optionally re-writing (or removing parts of) the persisted game object graph. A migration to be executed by gzochi-migrate is described in an XML migration descriptor file; the logic for processing the objects in the database being migrated is defined by the migration visitor, a procedure written in Guile Scheme.
gzochi-migrate reads the descriptor, resolves the type registries and visitor procedure, and then traverses the application-visible keyspace of the names database (that is, the set of keys that begin with “o.”) to identify the roots of the game object graph and enqueue them for visiting. For each object identifier in the migrator’s queue, the corresponding record is deserialized from the oids database and passed as an argument to the visitor procedure. Any references to other managed records held by the record being migrated are enqueued for subsequent visit. In this way, every managed record that is reachable from a named binding is visited exactly once.

Next: The migration descriptor, Up: Migrating data [Contents][Index]

The migration visitor is a Scheme procedure invoked by gzochi-migrate on each reachable object in the game database. It is the responsibility of the migration visitor procedure to provide a disposition for each object it is given, and it does so via the value it returns. If the visitor returns:

#f
the object is removed from the object graph.

a managed record
the object is replaced by this value.

an unspecified value
the object is left unmodified.

any other value
an error is signaled.

The simplest implementation of a migration visitor is a pure function over managed records: A procedure that accepts a managed record and returns one of the values listed above. Visitor procedures may have side effects, too, though, and may achieve them through interactions with the object graph or explicit calls to the data management functions. A visitor procedure may call gzochi:remove-object! on its argument or on a record to which its argument has a field reference. However, named bindings should not be modified while a migration is in progress; in particular, the effect of gzochi:set-binding! is undefined. gzochi API functionality related to network communication and task scheduling is not supported during migrations, and the effects of calling procedures in those modules are also undefined.

Next: A sample migration, Previous: The migration visitor procedure, Up: Migrating data [Contents][Index]

The XML migration descriptor has the schema described in the following paragraphs. A DTD that can be used to validate the structure of your migration descriptors is included in the gzochid source distribution.

The migration descriptor’s document element is migration, and it must include a target attribute giving the name of the deployed game application to be migrated. (The directories containing the game database files are computed using the gzochid.conf file and the conventions of the storage engine against which the server and database toolchain were built.)

The input-registry and output-registry elements specify managed record type registries to push onto the type resolution stack when an object in the graph is deserialized and when it is serialized, respectively. Controlling the behavior of type resolution during these lifecycle phases is crucial to implementing a migration that mutates the structure of a declared type while preserving its name. Both of these migration descriptor elements require a module attribute, which gives the name (as a whitespace-delimited list of symbols) of the module exporting the type registry; and the name attribute, which gives the name of the type registry within that module. Type registries may be defined using the gzochi:make-managed-record-type-registry procedure in the (gzochi data) API.
The callback element gives the procedure name and module (via the procedure and module attributes, respectively) of the migration visitor procedure.

The load-paths element is a wrapper for zero or more load-path elements, each of which specifies as its content an absolute or relative path to be added to the default Guile %load-path variable when the input and output type registries are resolved, and when the visitor procedure is resolved. Note that the game application root directory is added to the load path by default.

The following migration adds a new mutable field named “experience-points” to the managed record with the serial UID player. The version of the player structure persisted to the database without the new field is captured by the managed record type definition for from-player below, which registers the type in the input-registry type registry. The version of the player structure that includes the new field and will be serialized back to the database is declared as to-player and is registered in the output-registry type registry. After running this migration, the server administrator would need to deploy a new version of the “my-game” application whose type definition for the player structure includes the field added by to-player.

Note that while the re-use of the player UID allows this migration to modify the structure of a record without changing its type, the migration is not strictly idempotent. Re-executing it on an already-migrated database will cause the serialized form of the version of the player structure that includes the “experience-points” field to be deserialized “into” a version of the player structure that does not contain that field. The gzochi object system cannot natively detect this kind of serialization mismatch; its outcome depends on the behavior of the serializer and deserializer procedures.

Here is the Scheme module containing the type registries and visitor procedure:

#!r6rs

(library (my-game migration player)
  (export input-registry output-registry migrate-player)
  (import (gzochi) (rnrs))

  (define input-registry (gzochi:make-managed-record-type-registry))
  (define output-registry (gzochi:make-managed-record-type-registry))

  (gzochi:define-managed-record-type from-player
    (fields (immutable name (serialization gzochi:string-serialization))
            (mutable hit-points (serialization gzochi:integer-serialization)))
    (serial-uid player)
    (type-registry input-registry)
    (protocol (lambda (p) (lambda (name) (p name 0)))))

  (gzochi:define-managed-record-type to-player
    (fields (immutable name (serialization gzochi:string-serialization))
            (mutable hit-points (serialization gzochi:integer-serialization))
            (mutable experience-points
                     (serialization gzochi:integer-serialization)))
    (serial-uid player)
    (type-registry output-registry)
    (protocol (lambda (p) (lambda (name hit-points) (p name hit-points 0)))))

  (define (migrate-player obj)
    (if (from-player? obj)
        (make-to-player (from-player-name obj) (from-player-hit-points obj))))
)

The migration descriptor for this migration:

<?xml version="1.0"?>

<migration target="my-game">
  <input-registry module="my-game migration player" name="input-registry" />
  <output-registry module="my-game migration player" name="output-registry" />
  <callback module="my-game migration player" procedure="migrate-player" />
</migration>
http://www.nongnu.org/gzochi/gzochid.html
CC-MAIN-2015-06
en
refinedweb
I'm having issues finding this information without the use of a forum, so here goes: I'm making a very basic program on Linux that spawns four child processes. Each child process runs a separate program in the same directory, which just keeps printing its ID to the screen over and over again. After the parent process spawns these, it just does the same thing until about 20 seconds have passed since the program started, at which point it kills all processes and terminates. The code for the two programs is this:

Parent Process Program:
Code:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <time.h>
#include <signal.h>

#ifndef FALSE
#define FALSE (0)
#endif

#ifndef TRUE
#define TRUE (!FALSE)
#endif

/********************************************************************************
* Method:   void newProcesses()                                                 *
* Purpose:  to make four child processes, each of which will execute a certain  *
*           program in the same directory                                       *
* Returns:  N/A                                                                 *
*                                                                               *
* Approach: This method first creates an array for four separate process ID's.  *
*           Then it goes through a four-iteration for loop, with each iter-     *
*           ation involving the creation of a child process. Each time a        *
*           process is supposedly created, the loop checks to see if the        *
*           creation was successful. If not an error message is printed.        *
*           If so, and if it is the child process that is currently execut-     *
*           ing, then the child process is made to execute a program in the     *
*           same path as this program, the former being called "Subprogram".    *
*           Otherwise it is the parent process that is currently executing,     *
*           so the for loop will just use the parent to print a message of      *
*           successful creation, including the child process's ID.              *
********************************************************************************/
void newProcesses()
{ /* method for making new processes */
    int id[4]; /* process ID array */
    int i;     /* for loop iterator */

    for (i = 0; i < 4; i++)
    {
        id[i] = fork();
        if (id[i] == -1) /* an error occurred */
        {
            printf("Creation of Child Process failed.\n");
        }
        else if (id[i] == 0) /* this is the child process currently acting */
        {
            execlp("/jroberts/Subprogram", "/jroberts/Subprogram");
        }
        else /* this is the parent process currently acting */
        {
            printf("Creation of Child Process #%d succeeded!\n",id[i]);
        }
    }
}

/********************************************************************************
* Method:   int main()                                                          *
* Purpose:  to serve as the driving method for the entire program               *
* Returns:  0 to the operating system to signal successful termination          *
*                                                                               *
* Approach: First this method finds the current time in seconds and stores it   *
*           in two different variables, initial and current. Then it gets       *
*           the current (parent) process's ID and prints a message with         *
*           that ID out to the screen. Then it calls void newProcesses()        *
*           to make child processes and handle each of them accordingly.        *
*           Finally it enters a loop, each iteration of which checks the        *
*           clock and updates current. After it has been found that 20          *
*           seconds have gone by, the loop is ended, all the processes are      *
*           killed, and the method is basically finished.                       *
********************************************************************************/
int main ()
{
    /* setting up the program's "timer" for the program to know when to kill processes */
    time_t initial; // starting time of the program in seconds
    time_t current; // current time in seconds
    time(&initial); /* initializing both initial and current to the starting time current = initial; */

    /* the master process will give out its ID, subsequently making new processes */
    int ownID = getpid();
    printf("Parent Process created with ID# %d.\n", ownID);

    /* now the child processes will be made. */
    newProcesses();

    /* at this point there should be child processes. Now the parent process will wait until the
       program has lasted for about 20 seconds, subsequently killing all processes. */
    time(&current);
    while(current - initial < 20) // the program has executed for less than 20 seconds
    {
        printf("Parent Process with #%d currently executing.\n",ownID);
        time(&current);
    }

    kill(0,SIGKILL);

    return 0;
}

Child Process Program:
Code:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef FALSE
#define FALSE (0)
#endif

#ifndef TRUE
#define TRUE (!FALSE)
#endif

int main ()
{
    int id = getpid();
    while (TRUE)
    {
        printf("Process #%d\n",id);
    }
    return 0;
}

Yet when I execute the main program, I get this kind of output:
Code:
Creation of Child Process #32039 succeeded!
Creation of Child Process #32038 succeeded!
Creation of Child Process #32041 succeeded!
Creation of Child Process #32046 succeeded!
Creation of Child Process #32043 succeeded!
Creation of Child Process #32042 succeeded!
Creation of Child Process #32040 succeeded!
Creation of Child Process #32047 succeeded!
Creation of Child Process #32045 succeeded!
Creation of Child Process #32044 succeeded!
Creation of Child Process #32051 succeeded!
Creation of Child Process #32048 succeeded!
Creation of Child Process #32049 succeeded!
Creation of Child Process #32050 succeeded!
Creation of Child Process #32052 succeeded!

This is followed by the Parent Process printing its ID innumerable times, and finally a Kill statement.

It shouldn't be making more than four child processes, and each of them should be taking turns printing their IDs, not just the Parent Process. What's going wrong here?
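For reference, execlp's variadic argument list has to be terminated by a null pointer, and any code after the call only runs if the exec itself failed. The stand-alone sketch below is not taken from the thread; it just illustrates that calling convention, with a hypothetical ./Subprogram path standing in for /jroberts/Subprogram:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == -1)                      /* fork failed */
    {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0)                       /* child: replace this process image */
    {
        /* the final (char *)NULL sentinel marks the end of the argument list */
        execlp("./Subprogram", "Subprogram", (char *)NULL);

        /* execlp returns only on failure, so report the error and leave the child */
        perror("execlp");
        _exit(EXIT_FAILURE);
    }

    /* parent: wait for the child so it does not become a zombie */
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}

Without the sentinel the call's behavior is undefined, and without the _exit a child whose exec failed would fall through and keep running the parent's code.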
http://cboard.cprogramming.com/c-programming/120550-why-isnt-execlp-function-doing-anything-printable-thread.html
CC-MAIN-2015-06
en
refinedweb
Hello

The problem is that there is a proxy between Xerces and the outside world, and I need Xerces to perform XML validation against a schema which includes online references to an external schema (e.g., <xs:import namespace="" schemaLocation=""/>).

So, I need to know the compiler options and network library to include for xerces-c-3.0.1 to support setting the proxy host and port, similar to how Xerces-J does with the two system properties "http.proxyHost" and "http.proxyPort". Also, can someone provide me with a link to the documentation on the API to use to set these properties?

Thanks

DeWayne Dantlzer
http://mail-archives.apache.org/mod_mbox/xerces-c-users/200910.mbox/%3CFEB731E1C95A704493E85BE8D8BA3A07250F89D4CE@XCH-NW-12V.nw.nos.boeing.com%3E
CC-MAIN-2015-06
en
refinedweb
Hi everyone,

I'm a bit of a beginner to programming and was wondering whether anyone could help me with this snippet of some code I've been working on. I want it to work with a large array, but at the moment the program works fine for smaller values of the num_nodes variable and crashes completely at the creation of the larger nodes[num_nodes] array. It's very frustrating, and no, as you can see from my very archaic debugging tools, I don't have the luxury of installing Visual Studio, and have been using the gcc-based Dev-C++ (MinGW version) compiler. The bizarre thing is that it works fine for larger values when compiled in Linux. Here's the code.

wrote:
#include <iostream>
#include <fstream>

using namespace std;

int main(){
    ofstream debug_file("debug.txt");

    struct vertex {
        int element_id;
        int element_type; // could be a very large value
        int corner;
    };

    struct element_aware_node {
        float x, y;
        int node_type;
        int row_below;
        int num_elements;
        vertex vertex_of[10];
        int fem_pos;
    };

    cout << "input the number of nodes to create..." << endl;
    int num_nodes = 0;
    while ((!(cin >> num_nodes)) || (num_nodes <= 0)){
        cin.clear();
        cin.ignore(1000, '\n'); //ignore first 1000- an arbitrary large value- characters or characters uptill the first carriage return, whichever occurs first.
        cout << "input the number of nodes to create, number must be a positive integer!" << endl;
        debug_file << "\nNumber of nodes entered by user is " << num_nodes << endl;
    }

    element_aware_node node1;
    vertex v;
    debug_file << "sizeof(vertex)= " << sizeof(v) << endl << endl;

    debug_file << "\nNow creating " << num_nodes << " nodes..." << endl;
    debug_file << "sizeof(a node)= " << sizeof(node1) << " or " << num_nodes << " nodes take up " << (num_nodes*sizeof(node1)) << " bytes." << endl;

    element_aware_node nodes[num_nodes];

    debug_file << "\n\n" << num_nodes << " nodes were successfully created." << endl;

    return 1;
}

I'd appreciate any help. mem/c yields the following:

wrote:
Conventional Memory :

Name          Size in Decimal       Size in Hex
-------------  ---------------------  -------------
MSDOS          12352   ( 12.1K)       3040
KBD             3296   (  3.2K)        CE0
HIMEM           1248   (  1.2K)        4E0
COMMAND         3728   (  3.6K)        E90
KB16            6096   (  6.0K)       17D0
FREE             112   (  0.1K)         70
FREE             944   (  0.9K)        3B0
FREE          627376   (612.7K)      992B0

Total FREE :  628432   (613.7K)

Upper Memory :

Name          Size in Decimal       Size in Hex
-------------  ---------------------  -------------
SYSTEM        196592   (192.0K)      2FFF0
MOUSE          12528   ( 12.2K)       30F0
MSCDEXNT         464   (  0.5K)        1D0
REDIR           2672   (  2.6K)        A70
DOSX           34848   ( 34.0K)       8820
FREE             928   (  0.9K)        3A0
FREE           79504   ( 77.6K)      13690

Total FREE :   80432   ( 78.5K)

Total bytes available to programs (Conventional+Upper) : 708864 (692.3K)
Largest executable program size : 627376 (612.7K)
Largest available upper memory block : 79504 ( 77.6K)

1048576 bytes total contiguous extended memory
0 bytes available contiguous extended memory
941056 bytes available XMS memory
MS-DOS resident in High Memory Area

Is mem/c relevant? Because I've got 256 MB RAM, and those 692.3K are much closer to the small values of num_nodes causing the crash. Is the solution to declare the array in dynamic memory? How do I do that ("new element_aware_nodes..." doesn't work) and how do I de-allocate it from memory?

Thanks, everyone...

J.

PS: Sorry for the "[quote user=""]" cheat. What tags are allowed and where do I find them?
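One observation that may help: element_aware_node nodes[num_nodes] is a variable-length array, a compiler extension that places the whole array on the stack, so large values of num_nodes can overflow the stack (default stack sizes differ between the MinGW and Linux toolchains, which may explain the difference in behavior). The sketch below is not from the thread; it uses a trimmed-down stand-in for the struct and shows the two usual ways to put the array on the heap instead. Note the singular type name and the brackets: new element_aware_node[num_nodes], paired later with delete [].

#include <iostream>
#include <vector>

// trimmed-down stand-in for the element_aware_node struct in the post
struct element_aware_node {
    float x, y;
    int node_type;
};

int main()
{
    int num_nodes = 0;
    std::cout << "input the number of nodes to create..." << std::endl;
    if (!(std::cin >> num_nodes) || num_nodes <= 0)
        return 1;

    // option 1: raw heap allocation; new[] must be matched by delete []
    element_aware_node *nodes = new element_aware_node[num_nodes];
    nodes[0].node_type = 1;            // use the array exactly as before
    delete [] nodes;                   // de-allocate when finished

    // option 2: let std::vector own the memory; it frees itself automatically
    std::vector<element_aware_node> heap_nodes(num_nodes);
    std::cout << heap_nodes.size() << " nodes allocated on the heap." << std::endl;

    return 0;
}

With std::vector there is nothing to de-allocate by hand, since the storage is released when the vector goes out of scope.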
http://channel9.msdn.com/Forums/TechOff/70804-Newbie-quite-possibly-problem-with-C
CC-MAIN-2015-06
en
refinedweb
boot process. I needed to test. I use RSpec at work, yet I fell back to my default, minitest, because it comes for free with Ruby and is pretty straightforward for small work. I noticed, for the first time, that minitest has a BDD-style syntax. Feeling brave, I used it. I'm glad I did. It will confuse me when I switch context back to work, then back to the gem. Nevertheless, I enjoyed using something so simple but slightly nicer to type, read, and comprehend than:

def test_it_wants_to_have_lots_of_underscores
end

You would think that using a bunch of differently named expectations would be annoying. It wasn't. It stretched my mind in a short amount of time, which is more satisfying than the string of passing tests in my little personal project. Got
http://pivotallabs.com/being-brave-is-fun/?tag=tools
CC-MAIN-2015-06
en
refinedweb