------------------------------------------------------------------
This file is part of bzip2/libbzip2, a program and library for
lossless, block-sorting data compression.

bzip2/libbzip2 version 1.0.6 of 6 September 2010
Copyright (C) 1996-2010 Julian Seward <[email protected]>

Please read the WARNING, DISCLAIMER and PATENTS sections in the
README file. This program is released under the terms of the license
contained in the file LICENSE.
------------------------------------------------------------------

0.9.0
~~~~~
First version.

0.9.0a
~~~~~~
Removed 'ranlib' from Makefile, since most modern Unix-es
don't need it, or even know about it.

0.9.0b
~~~~~~
Fixed a problem with error reporting in bzip2.c. This does not
affect the library in any way. The problem is: versions 0.9.0 and
0.9.0a (of the program proper) compress and decompress correctly,
but give misleading error messages (internal panics) when an I/O
error occurs, instead of reporting the problem correctly. This
shouldn't cause any data loss (as far as I can see), but is
confusing.

Made the inline declarations disappear for non-GCC compilers.

0.9.0c
~~~~~~
Fixed some problems in the library pertaining to some boundary
cases. This makes the library behave more correctly in those
situations. The fixes apply only to features (calls and parameters)
not used by bzip2.c, so the non-fixedness of them in previous
versions has no effect on the reliability of bzip2.c.

In bzlib.c:
* made zero-length BZ_FLUSH work correctly in bzCompress().
* fixed bzWrite/bzRead to ignore zero-length requests.
* fixed bzread to correctly handle read requests after EOF.
* wrong parameter order in call to bzDecompressInit in
  bzBuffToBuffDecompress. Fixed.

In compress.c:
* changed setting of nGroups in sendMTFValues() so as to do a bit
  better on small files. This _does_ affect bzip2.c.

0.9.5a
~~~~~~
Major change: add a fallback sorting algorithm (blocksort.c)
to give reasonable behaviour even for very repetitive inputs.
Nuked --repetitive-best and --repetitive-fast since they are no
longer useful.

Minor changes: mostly a whole bunch of small changes/bugfixes in the
driver (bzip2.c). Changes pertaining to the user interface are:

* allow decompression of symlink'd files to stdout
* decompress/test files even without .bz2 extension
* give more accurate error messages for I/O errors
* when compressing/decompressing to stdout, don't catch control-C
* read flags from BZIP2 and BZIP environment variables
* decline to break hard links to a file unless forced with -f
* allow -c flag even with no filenames
* preserve file ownerships as far as possible
* make -s -1 give the expected block size (100k)
* add a flag -q --quiet to suppress nonessential warnings
* stop decoding flags after --, so files beginning in - can be handled
* resolved inconsistent naming: bzcat or bz2cat?
* bzip2 --help now returns 0

Programming-level changes are:

* fixed syntax error in GET_LL4 for Borland C++ 5.02
* let bzBuffToBuffDecompress return BZ_DATA_ERROR{_MAGIC}
* fix overshoot of mode-string end in bzopen_or_bzdopen
* wrapped bzlib.h in #ifdef __cplusplus ... extern "C" { ... }
* close file handles under all error conditions
* added minor mods so it compiles with DJGPP out of the box
* fixed Makefile so it doesn't give problems with BSD make
* fix uninitialised memory reads in dlltest.c

0.9.5b
~~~~~~
Open stdin/stdout in binary mode for DJGPP.

0.9.5c
~~~~~~
Changed BZ_N_OVERSHOOT to be ... + 2 instead of ... + 1. The + 1
version could cause the sorted order to be wrong in some extremely
obscure cases. Also changed setting of quadrant in blocksort.c.

0.9.5d
~~~~~~
The only functional change is to make bzlibVersion() in the library
return the correct string. This has no effect whatsoever on the
functioning of the bzip2 program or library. Added a couple of casts
so the library compiles without warnings at level 3 in MS Visual
Studio 6.0. Included a Y2K statement in the file Y2K_INFO. All other
changes are minor documentation changes.
1.0
~~~
Several minor bugfixes and enhancements:

* Large file support. The library uses 64-bit counters to count the
  volume of data passing through it. bzip2.c is now compiled with
  -D_FILE_OFFSET_BITS=64 to get large file support from the C
  library. -v correctly prints out file sizes greater than 4
  gigabytes. All these changes have been made without assuming a
  64-bit platform or a C compiler which supports 64-bit ints, so,
  except for the C library aspect, they are fully portable.

* Decompression robustness. The library/program should be robust to
  any corruption of compressed data, detecting and handling _all_
  corruption, instead of merely relying on the CRCs. What this means
  is that the program should never crash, given corrupted data, and
  the library should always return BZ_DATA_ERROR.

* Fixed an obscure race-condition bug only ever observed on Solaris,
  in which, if you were very unlucky and issued control-C at exactly
  the wrong time, both input and output files would be deleted.

* Don't run out of file handles on test/decompression when large
  numbers of files have invalid magic numbers.

* Avoid library namespace pollution. Prefix all exported symbols
  with BZ2_.

* Minor sorting enhancements from my DCC2000 paper.

* Advance the version number to 1.0, so as to counteract the
  (false-in-this-case) impression some people have that programs
  with version numbers less than 1.0 are in some way experimental,
  pre-release versions.

* Create an initial Makefile-libbz2_so to build a shared library.
  Yes, I know I should really use libtool et al ...

* Make the program exit with 2 instead of 0 when decompression fails
  due to a bad magic number (ie, an invalid bzip2 header). Also exit
  with 1 (as the manual claims :-) whenever a diagnostic message
  would have been printed AND the corresponding operation is
  aborted, for example:
      bzip2: Output file xx already exists.
When a diagnostic message is printed but the operation is not
aborted, for example:
    bzip2: Can't guess original name for wurble -- using wurble.out
then the exit value 0 is returned, unless some other problem is also
detected. I think this corresponds more closely to what the manual
claims now.

1.0.2
~~~~~
A bug fix release, addressing various minor issues which have
appeared in the 18 or so months since 1.0.1 was released. Most of
the fixes are to do with file-handling or documentation bugs. To
the best of my knowledge, there have been no data-loss-causing bugs
reported in the compression/decompression engine of 1.0.0 or 1.0.1.

Here are the changes in 1.0.2. Bug-reporters and/or patch-senders
in parentheses.

* Fix an infinite segfault loop in 1.0.1 when a directory is
  encountered in -f (force) mode. (Trond Eivind Glomsrod, Nicholas
  Nethercote, Volker Schmidt)
* Avoid double fclose() of output file on certain I/O error paths.
  (Solar Designer)
* Don't fail with internal error 1007 when fed a long stream
  (> 48MB) of byte 251. Also print a useful message suggesting that
  1007s may be caused by bad memory. (noticed by Juan Pedro Vallejo,
  fixed by me)
* Fix uninitialised variable silly bug in demo prog dlltest.c.
  (Jorj Bauer)
* Remove 512-MB limitation on recovered file size for bzip2recover
  on selected platforms which support 64-bit ints. At the moment all
  GCC supported platforms, and Win32. (me, Alson van der Meulen)
* Hard-code header byte values, to give correct operation on
  platforms using EBCDIC as their native character set (IBM's
  OS/390). (Leland Lucius)
* Copy file access times correctly. (Marty Leisner)
* Add distclean and check targets to Makefile. (Michael Carmack)
* Parameterise use of ar and ranlib in Makefile. Also add
  $(LDFLAGS). (Rich Ireland, Bo Thorsen)
* Pass -p (create parent dirs as needed) to mkdir during make
  install. (Jeremy Fusco)
* Dereference symlinks when copying file permissions in -f mode.
  (Volker Schmidt)
* Majorly simplify implementation of uInt64_qrm10. (Bo Lindbergh)
* Check the input file still exists before deleting the output one,
  when aborting in cleanUpAndFail(). (Joerg Prante, Robert Linden,
  Matthias Krings)

Also a bunch of patches courtesy of Philippe Troin, the Debian
maintainer of bzip2:

* Wrapper scripts (with manpages): bzdiff, bzgrep, bzmore.
* Spelling changes and minor enhancements in bzip2.1.
* Avoid race condition between creating the output file and setting
  its interim permissions safely, by using fopen_output_safely().
  No changes to bzip2recover since there is no issue with file
  permissions there.

1.0.3 (15 Feb 05)
~~~~~~~~~~~~~~~~~
Fixes:

* Fixes CAN-2005-0758 to the extent that it applies to bzgrep.
* Use 'mktemp' rather than 'tempfile' in bzdiff.
* Tighten up a couple of assertions in blocksort.c following
  automated analysis.
* Fix minor doc/comment bugs.

1.0.5 (10 Dec 07)
~~~~~~~~~~~~~~~~~
Security fix only. Fixes CERT-FI 20469 as it applies to bzip2.

1.0.6 (6 Sept 10)
~~~~~~~~~~~~~~~~~
* Security fix for CVE-2010-0405. This was reported by Mikolaj
  Izdebski.
* Make the documentation build on Ubuntu 10.04.
http://opensource.apple.com/source/bzip2/bzip2-27/bzip2/CHANGES
Interpreters Software
Brainfuck Center
Brainfuck Center is an IDE and compiler for your Brainfuck scripts. It includes a full debugger with step-by-step debugging and much more. MDI window
Rexx service wrapper
Wrapper to run Regina Rexx as a Windows service. Should also support other Rexx versions, especially ooRexx.
PHP++
PHP++ is a new programming language with a syntax similar to PHP, but it's completely rewritten in C++ and comes with a lot of new features, like namespaces and its own easily extendable object-oriented framework.
http://sourceforge.net/directory/development/interpreters/os:mswin_server2003/os:os_groups/
Hi,
I found this simple code.
I am new to threads and I want to know how this code can be modified so that 5 balls can be created and bounced off the boundaries.
Will that require multiple threads - one for each ball?
Do help me with this. I have been on this since yesterday!
Code :
import java.applet.Applet;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Rectangle;

/** An applet that displays a simple animation */
public class BouncingCircle extends Applet implements Runnable {
    int x = 150, y = 50, r = 50; // Position and radius of the circle
    int dx = 11, dy = 7;         // Trajectory of circle
    Thread animator;             // The thread that performs the animation
    volatile boolean pleaseStop; // A flag to ask the thread to stop

    /** This method simply draws the circle at its current position */
    public void paint(Graphics g) {
        g.setColor(Color.red);
        g.fillOval(x - r, y - r, r * 2, r * 2);
    }

    /**
     * This method moves (and bounces) the circle and then requests a redraw.
     * The animator thread calls this method periodically.
     */
    public void animate() {
        // Bounce if we've hit an edge.
        Rectangle bounds = getBounds();
        if ((x - r + dx < 0) || (x + r + dx > bounds.width)) dx = -dx;
        if ((y - r + dy < 0) || (y + r + dy > bounds.height)) dy = -dy;
        // Move the circle.
        x += dx;
        y += dy;
        // Ask the browser to call our paint() method to draw the circle
        // at its new position.
        repaint();
    }

    /**
     * This method is from the Runnable interface. It is the body of the thread
     * that performs the animation. The thread itself is created and started in
     * the start() method.
     */
    public void run() {
        while (!pleaseStop) {              // Loop until we're asked to stop
            animate();                     // Update and request redraw
            try { Thread.sleep(100); }     // Wait 100 milliseconds
            catch (InterruptedException e) { } // Ignore interruptions
        }
    }

    /** Start animating when the browser starts the applet */
    public void start() {
        animator = new Thread(this); // Create a thread
        pleaseStop = false;          // Don't ask it to stop now
        animator.start();            // Start the thread.
        // The thread that called start now returns to its caller.
        // Meanwhile, the new animator thread has called the run() method
    }

    /** Stop animating when the browser stops the applet */
    public void stop() {
        // Set the flag that causes the run() method to end
        pleaseStop = true;
    }
}
Thanks!
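For the multi-ball question: you don't need one thread per ball. A common pattern is a single animator thread that moves a list of balls each tick and then calls repaint() once. Below is a hypothetical, stripped-down sketch (no applet plumbing; the Ball and BallDemo names are illustrative) reusing the same bounce arithmetic as animate() above:

```java
import java.util.ArrayList;
import java.util.List;

class Ball {
    int x, y, r, dx, dy;

    Ball(int x, int y, int r, int dx, int dy) {
        this.x = x; this.y = y; this.r = r; this.dx = dx; this.dy = dy;
    }

    /** Bounce off the edges of a width x height box, then advance one step. */
    void move(int width, int height) {
        if (x - r + dx < 0 || x + r + dx > width) dx = -dx;
        if (y - r + dy < 0 || y + r + dy > height) dy = -dy;
        x += dx;
        y += dy;
    }
}

public class BallDemo {
    public static void main(String[] args) {
        List<Ball> balls = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            balls.add(new Ball(50 + 40 * i, 50, 20, 5 + i, 3 + i));
        }
        // One tick of the animation loop: move every ball, then you'd
        // repaint() once. In the applet, run() would do this in its loop.
        for (Ball b : balls) b.move(400, 300);
        System.out.println(balls.get(0).x); // prints 55
    }
}
```

In the applet, run() would iterate over such a list inside its while loop instead of updating single x/y fields, so one thread animates all five balls.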
http://www.javaprogrammingforums.com/%20threads/9207-bouncing-balls-printingthethread.html
Import excel data into SQL server -- ASP.net, C#, DTS -- how??
Discussion in 'ASP .Net' started by Rathtap, Jun 30, 2003.
http://www.thecodingforums.com/threads/import-excel-data-into-sql-server-asp-net-c-dts-how.58568/
Can anyone please explain to me how to change the insertion point of the text ("Hello, World!") to wherever the cursor happens to be whenever the plugin is activated?
import sublime, sublime_plugin
class ExampleCommand(sublime_plugin.TextCommand):
def run(self, edit):
self.view.insert(edit, 0, "Hello, World!")
You will use view.sel() to get the cursors. This returns a list of regions, since multiple cursors are supported. Anyways, if you want to assume the first cursor, and the beginning of the region, you can do the following.
import sublime, sublime_plugin
class ExampleCommand(sublime_plugin.TextCommand):
def run(self, edit):
self.view.insert(edit, self.view.sel()[0].begin(), "Hello, World!")
Probably worthwhile to take a look at the api if you haven't already.
ST2: sublimetext.com/docs/2/api_reference.html
ST3: sublimetext.com/docs/3/api_reference.html
Thank you -- here are my lessons learned so far:
Variables are set AFTER self.view.insert, not in the middle of it.
Python is sensitive to indentations in the form of white space, with incremental white space indentations.
Saving a file with the Python console open helps to debug.
To try out the plugin in the Python console, I need to be aware that this syntax is no longer supported in ST2 [view.runCommand('example')] -- instead, I should use: view.run_command('example')
To utilize dir() in the Python console, there are many variables that could be inserted between the parentheses -- I will need to read more documentation regarding the various usages.
Examples of code that will work in Sublime Text plugins can be found in unrelated Python scripts, because the language is the same. docs.python.org/2/library/os.path.html
You must be looking at old versions of the api documentation. You want view.run_command. Not view.runCommand. Check the links I provided. Also, as kind of a getting started for plugins, consider going through the following tutorial (link). You may also want to go through some general python tutorials first anyways. Diving straight into plugin development without some understanding of the language can make it more difficult than it needs to be. "dir(variable)" will list the attributes (well I guess it really depends on how the dir method was implemented for that class) for that class. Normally this will show you things like methods and variables associated with that class.
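To illustrate the dir() point outside Sublime, here's a plain-Python sketch (the Greeter class is hypothetical, not part of the Sublime API): dir() on an instance lists both its methods and its instance variables.

```python
class Greeter:
    """A tiny class to inspect with dir()."""

    def __init__(self):
        self.name = "world"  # instance variable

    def greet(self):         # method
        return "hello, " + self.name


g = Greeter()
# Filter out the dunder attributes to see just the interesting names.
attrs = [a for a in dir(g) if not a.startswith("_")]
print(attrs)  # ['greet', 'name']
```

The same call against a Sublime view object (dir(view)) is what surfaces names like run_command and sel.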
As a future reference, you may want to look at the porting guide, for when you decided to write plugins for ST3 (link). There are some changes to the ST3 api, when compared to ST2, so you should probably look at that as well.
This tutorial has been extremely helpful. I have updated my list of lessons learned so far. Thank you very much!
https://forum.sublimetext.com/t/chage-hello-world-to-insert-at-cursor-position/9270/5
Hi, fellow RxJS streamer! 👋
Today I want to share a JS/TS package that allows you to access props of objects on Observables:
source$.subscribe(o => console.log(o?.a?.b?.c))
// turn ↑ into ↓
source$.a.b.c.subscribe(console.log)
tl;dr: github.com/kosich/rxjs-proxify
A simple use case: read the msg property of each value on the stream
import { proxify } from "rxjs-proxify";
import { of } from "rxjs";

const source = of({ msg: 'Hello' }, { msg: 'World' });
const stream = proxify(source);
stream.msg.subscribe(console.log); // 'Hello', 'World'
proxify will create a Proxy for the given Observable.
The package has good TypeScript support, so all props are intelli-sensed by cats, dogs, and IDEs:
import { of } from 'rxjs';
import { proxify } from 'rxjs-proxify';

const source = of({ a: 1, b: 1 }, { a: 2, b: 2 });
const stream = proxify(source);
stream. // <- will suggest .a .b .pipe .subscribe etc
It's also possible to call methods on values (even those using the this keyword), e.g.:
import { proxify } from "rxjs-proxify";
import { of } from "rxjs";

const source = of({ msg: () => 'Hello' }, { msg: () => 'World' });
const stream = proxify(source);
// calls msg() fn on each value of the stream
stream.msg().subscribe(console.log); // 'Hello', 'World'
And you are still free to apply RxJS operators at any depth:
import { proxify } from "rxjs-proxify";
import { of } from "rxjs";
import { scan } from "rxjs/operators";

const source = of({ msg: 'Hello' }, { msg: 'World' });
const stream = proxify(source);
stream.msg.pipe(scan((a, c) => a + c)).subscribe(console.log); // 'HelloWorld'
The package uses Proxies under the hood, recursively applying them to sub-properties and method results, so the chain can be indefinitely deep. And you can apply .subscribe or .pipe at any time!
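The recursive-Proxy idea can be sketched without RxJS at all. This is not the library's actual implementation, just a toy illustration (plain callback "streams" stand in for Observables): each property access returns a new proxy whose stream maps values to that sub-property.

```javascript
// Toy proxify: `subscribe` is a function (observer) => void standing in
// for an Observable. Every property access recurses with a mapped stream.
const proxify = (subscribe) =>
  new Proxy(function () {}, {
    get(_, key) {
      if (key === 'subscribe') return subscribe;
      // Recurse: the sub-stream forwards each value's `key` property.
      return proxify((observer) => subscribe((v) => observer(v?.[key])));
    },
  });

// A tiny synchronous "source" that emits two objects.
const source = (observer) =>
  [{ msg: { text: 'Hello' } }, { msg: { text: 'World' } }].forEach(observer);

proxify(source).msg.text.subscribe(console.log); // "Hello" then "World"
```

The real package layers this on actual Observables (so the mapped streams are built with RxJS operators) and also proxies method calls, but the get-trap recursion is the core trick.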
🎹 Try it
You can install it via
npm i rxjs-proxify
Or test it online: stackblitz.com/edit/rxjs-proxify-repl
📖 Repository
The source code and more examples are available on github repo:
github.com/kosich/rxjs-proxify
Outro
Thank you for reading this article! Stay reactive and have a nice day 🙂
If you enjoyed reading — please, indicate that with ❤️ 🦄 📘 buttons — it helps a lot!
Soon I'll post a more detailed review of the lib and how it works
Follow me here and on twitter for more RxJS, React, and JS posts:
🗣 Would love to hear your thoughts!
Psst.. need something more to read?
I got you covered:
"Turn a Stream of Objects into an Object of Streams"
"Fetching Data in React with RxJS and <$> fragment"
"Queries for Observables: Crazy & Simple!"
"Intro to Recks: Rx+JSX experiment"
😉
Cya 👋
Discussion
Does it play nicely with the <$> component? It would be a great combination. It's a similar idea to Hookstate. And what about cats and dogs? And what about Subject and the .next method?
Hi, Franciszek!
Great question! Especially since I haven't tried it with <$> myself yet! 😄 (the idea came up for use in the recksjs framework) Now I did! Here's an example:
^ online playground — I've added two precautions in comments, please be advised!
Hm, I haven't considered this before 🤔 The original idea was to provide a selector, independently from the provider. And the .next method would be lost after the first subproperty access, e.g. stream.a — since it's map-ped under the hood. Though having the initial Subject, one can still do:
THEORETICAL PART
Yet there's something to be discovered w/ state management, like:
Quite interesting concept, need to think & play around w/ this! 🤓
EOF THEORETICAL PART
Let me know what you think!
P.S: And thx for Hookstate, didn't know about it!
THEORETICAL UPDATE:
~ Well typed, though has bugs 🙂
Here's a playground: stackblitz.com/edit/rstate?file=in...
Wow, it looks like MobX now 🤯. I checked the code. What about changing the approach? Instead of chaining observables, we can chain the property names and then just pluck them for subscribing. I made a demo. I also used Immer to deliver the next state 😛
Awesome! I especially like that you've united read & write: I wanted to do it too! (but I reused rxjs-proxify because of its already-implemented TypeScript support)
And I agree, a single pluck is nicer & more performant, though we might need a custom pluck operator if we want to support a.b.c() — since pluck won't keep the call context: this will be lost, not sure if that's a big deal
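To illustrate the lost call context in plain JS (no RxJS; pluckPath is a hypothetical helper, not from either library): plucking the leaf value detaches a method from its receiver, while plucking the parent and invoking through it keeps this intact.

```javascript
// Naive deep pluck: walks a path of keys down into an object.
const pluckPath = (obj, path) =>
  path.reduce((acc, key) => (acc == null ? undefined : acc[key]), obj);

const state = {
  user: {
    name: 'Ada',
    greet() { return 'hi, ' + this.name; }, // relies on `this` === user
  },
};

// Plucking the method itself detaches it from `state.user`:
const fn = pluckPath(state, ['user', 'greet']);
// calling fn() here would misbehave: `this` is no longer `state.user`.

// To keep the call context, pluck the *parent* and invoke through it:
const parent = pluckPath(state, ['user']);
console.log(parent.greet()); // "hi, Ada"
```

That parent-then-invoke step is what a custom pluck operator would have to do to support a.b.c() on streams.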
I still have mixed feelings regarding whether pushing to state should be a.b.c = 1 or a.b.c.next(1) — the latter is uglier, I admit, though it has two benefits: you can still set the whole root state with a.next({ b: { c: 1 } }), and the write is explicit (a.b = 1 is not obviously effectful).
Regardless of implementation details, do you think such an approach could be useful as a package? I think in Recks it can be handy, maybe in <$> too, not sure where else.
Will try to compile a unified solution early next week 👀
I don't get the first point. Which this will be lost? Help me understand 😅. I updated the demo with the apply trap and everything seems ok to me. Besides, your proxify does not keep the this of the root object, if that's what you mean.
When it comes to = vs .next: IMHO, = in the company of .next, or just .next. Keeping only =, as you just said, disallows setting the root state and makes the proxy less usable in higher-order scenarios, like passing the chain as a prop to further set the next value.
Could it be handy as a package? Using RxJS as a standalone state manager is very rare these times. Subjects in comparison to their relatives from MobX seems poor. Although mixing MobX with RxJs feels a little cumbersome due to the very different subscription fashion. Maybe it would be better to create a store by nesting observables as MobX does?
Sorry, now I'm confused 🙂 I see that the updated demo has the "this" is lost example — I meant exactly that. With o.f() I'd expect this in f() to be o (as in the object o, not an Observable of o). Here's a test from proxify that might better explain it.
Yet, this is a really minor issue (if it is an issue in the first place!), easy to fix and not worth much attention.
Can you share an example of this? I might be missing something obvious here 🙁
Will have to educate myself better on MobX to appreciate this (haven't worked with MobX yet, only tutorials 😅)
Give me a hint if you have some particular use case in mind 🤔
P.S: Thanks for fn?.() — I didn't know that optional chaining can be applied to function calls! That's great!
Apologies for my wicked accusations 😌. proxify does keep the this. I didn't test it before, I just got the wrong claims from reading the source.
Phew 😅 ! That's cool! Thanks for proofreading the sources! 🙏
Hey, @fkrasnowski , sorry for bothering you again 🙂
Want to give an update:
State:
👂 listen to distinct state value changes
📝 write to state (sub)values
💫 reset state
👀 sync read current state value
👓 TS support coming
/ I've dropped fn calls with the whole this issue for now 😅 and used your cool approach with pluck! 👍 /
🔗 work in progress
Autorun:
Another THEORETICAL thing born in discussions (here and on twitter for the very same proxify 🙂)
A function that is re-evaluated when an Observable used in it changes
(I think MobX's autorun does a similar thing)
🔗 work in progress #2
Mix:
The two might look cool together:
Let me know what you think 🤔
Take care!
I've got mixed feelings about all this autorun thing. You could use the combineLatest operator for the same purpose: less cool, still clear, and convenient.
A comparison between some approaches:
RxJS:
MobX:
Svelte:
It might look like the MobX and Svelte strategy is better, due to their terseness. But RxJS is about operators! And its full force lies in them. Your solution:
Best of both worlds??
Glad you added distinctUntilChanged. And I think run could be better named computed or derived or autopiped 😨😝
Totally agree with all three points!
And I love your examples — this gives some perspective! Thanks 👍
I've started this only because it was a fun dev challenge, I was sure as an API it's useless and error-prone! Then when it was ready... I began to have doubts 🤔😄
The shorter notation might make sense in templates, like in <$> or Recks...
Probably it's a maker's affection of some sort: when you've been painting something all day long, and you look at it -- and it's crap, you know it's crap, but still you kinda like it 😔
I think I'll finish and post the state thing. Maybe even include it in the proxify lib (bad naming here too 🤦♂️)
Still not sure what to do w/ autorun — will polish it, then we'll see...
BTW, I've been sharing all this on twitter too, here's a thread
Víctor Oliva made another cool autorun concept — check it out!
(I'd have pinged you long ago, though I haven't found you there)
and here's latest concept w/ state & autorun from twitter:
Thanks for taking your time to look into this on Friday evening 🙂
Have a good weekend!
SATURDAY: okay... I've shared autorun on github github.com/kosich/rxjs-autorun (no npm package, naming is still a problem)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/rxjs/turn-a-stream-of-objects-into-an-object-of-streams-2aed
Does anybody know if it's possible to rsync files into an annex folder without constantly overwriting already annexed files? I know that the -L option will de-reference links on the source side, but I don't see any option for doing the same on the target side.
I have two machines -- alpha and beta. Alpha doesn't know anything about git annex, but knows how to rsync. I have a folder on alpha that I regularly sync over to beta with:
rsync -av folder/ beta:/.../annex/folder
That way I know that I can eventually safely delete any material in the folder on alpha because it's been backed up/archived to my annex. The problem is, every time I add the files into the annex on beta, they get replaced with symlinks, which wouldn't be a problem except for the fact that now the next time I run the rsync command all of the files get retransmitted because they don't match the symlinks over on beta. This doesn't cause a problem on the git annex side, but it significantly slows down the rsync process.
Is this a case for an rsync remote? (I haven't really figured out special remotes yet.) Or is there a typical workflow on the git annex side that I could be using to fix this (like import rather than add)?
Thanks!
Right after posting this last night I came across this forum entry, which led me to the tip on how to create a cache annex, which eventually led me to a little more detail on the annex.hardlink and annex.thin options.
It sounds like they may kind of be what I'm looking for, but I'm not sure how to find more details (docs) on how to use them or specifically what they do. They aren't listed in the supported options when I run (on v6.2):
I google git annex.hardlink and git annex.thin, but those just point me to bugs or forums or tips that refer to the settings.
Are these annex wide settings? (that seems to be the case). Is it possible to apply them at a folder level? Am I maybe just missing the point of lock/unlock?
I'll keep looking and run some experiments on my own.
Thanks again!
The options are listed in the man git-annex output (which is also available at).
I think conceptually that's a good fit. You could set importtree=yes with the special remote and ingest changes with git annex import on beta's side. However, the rsync special remote doesn't support importtree yet.
In your followup comment, you mention unlocked files. That would get you around the link problem. You could call rsync with --checksum to limit what is transferred, though that might be expensive depending on how big your files are.
You can set them at the repository level in the repo's .git/config.
Oh boy -- or should I say, oh "man"... Now I feel like a bit of an idiot for not checking the actual high-level man pages... Thanks for that tip.
I did some experimenting and noticed another thing that I hadn't noticed -- although my binary version is v6.20180227 my annexes were all using the v5 index. That came as a bit of a surprise. Once I upgraded my annex to v6 the annex.thin settings started working.
As for rsync, I had tried the -c (--checksum) option, but it wasn't dereferencing the links on the target side, so the files still registered as different (at least I think I tried this, but I may go back and check again, because I was doing a lot of different things...) Nevermind, I just checked my history and I never actually tried it -- I had been using
to try to confirm that the contents of the two folders matched so that I'd know I could safely delete my local copy, but that didn't work because of the links. I didn't add the -L option until I reversed the direction and ran the command from the server side with alpha as the target.
Thanks for all the help -- this should work well enough for my stage folder issue, but it also solves a separate problem that I'd been struggling with for making my photos available to a self hosted photo webserver tool that I was trying out (photoprism). It can't currently handle symlinks and my local drive was getting filled up with all the extra copies of my photos directory tree!
Just to clarify: My comment was in the context of unlocked files (in v6+ repos). In that case, symlinks aren't used: the content is kept in the working tree (and a pointer file is tracked by git).
Also, since it sounds like you may want all files to be unlocked, you might want to look into git annex adjust --unlock to enter an adjusted branch with all files in an unlocked state.
FWIW, if you don't need importing for this use case, I think using git annex export with an rsync special remote configured with exporttree=yes would work well.
https://git-annex.branchable.com/forum/rsync_into_annex_without_overwriting_links__63__/
I have my own static library which has two versions - lite & pro.
It's in private repo.
I've added separate private Podspec for each version.
The libs are compiled static .a files with header files (not open source).
Adding to project like this:
# common cocoapods stuff here
abstract_target 'CommonPods' do
# some other pods here
target 'App' do
pod 'BaseSDK'
end
target 'AppPro' do
pod 'ProSDK'
end
end
The project has 2 targets, each with its own .xcconfig file that #includes the CocoaPods .xcconfig files. But the mistake was to #include both CocoaPods .xcconfig files for the Base & Pro Podfile targets, like so:
#include "Pods/Target Support Files/Pods-CommonPods-App/Pods-CommonPods-App.debug.xcconfig"
#include "Pods/Target Support Files/Pods-CommonPods-AppPro/Pods-CommonPods-AppPro.debug.xcconfig"
And 'Other LD flags' gets overwritten by the latest include.
So I've added new .xcconfig files in order to separate the configuration per target: one includes Pods-CommonPods-App.debug.xcconfig and the other includes Pods-CommonPods-AppPro.debug.xcconfig (same for release, of course).
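The per-target split can be sketched like this (a minimal illustration; the App.debug.xcconfig / AppPro.debug.xcconfig file names are hypothetical): each target's own .xcconfig includes only its matching CocoaPods file, so the generated OTHER_LDFLAGS never collide.

```
// App.debug.xcconfig (set as the base configuration of the App target only)
#include "Pods/Target Support Files/Pods-CommonPods-App/Pods-CommonPods-App.debug.xcconfig"

// AppPro.debug.xcconfig (set as the base configuration of the AppPro target only)
#include "Pods/Target Support Files/Pods-CommonPods-AppPro/Pods-CommonPods-AppPro.debug.xcconfig"
```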
https://codedump.io/share/tcT9EgoUJHXo/1/how-to-add-lite-amp-pro-version-of-library-via-cocoapods
It's a beginner's problem. I am trying to calculate the sum of the first 15 factorials. I know int isn't large enough for the sum, so I used unsigned long long, but it still didn't work. Why? Thanks for helping!
#include <iostream>
#include <cmath>
#include <iomanip>
#include <cstring>
using namespace std;

int main() {
    const int fac = 15;
    unsigned long long sum = 0;
    int start = 1;
    for (int i = 1; i <= fac; i++) {
        start = start * i;
        sum += start;
        cout << fixed << setprecision(0) << i << " " << sum << endl;
    }
}
|
https://www.daniweb.com/programming/software-development/threads/475544/why-doesn-t-the-unsigned-long-long-work
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
OGSF library - query (lower level functions) More...
#include <math.h>
#include <grass/gis.h>
#include <grass/ogsf.h>
Go to the source code of this file.
OGSF library - query (lower level functions)
GRASS OpenGL gsurf OGSF Library
(C) 1999-2008 by the GRASS Development Team
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file gs_query.c.
Definition at line 35 of file gs_query.c.
Referenced by RayCvxPolyhedronInt().
Definition at line 34 of file gs_query.c.
Referenced by gs_setlos_enterdata(), and RayCvxPolyhedronInt().
Values needed for Ray-Convex Polyhedron Intersection Test below originally by Eric Haines, erich.nosp@[email protected]@m..com.
Definition at line 29 of file gs_query.c.
Referenced by I_cluster_assign(), I_cluster_reassign(), and RayCvxPolyhedronInt().
Definition at line 33 of file gs_query.c.
Referenced by gs_setlos_enterdata(), and RayCvxPolyhedronInt().
Get data bounds for plane.
Definition at line 469 of file gs_query.c.
References b, DOT3, gs_get_xrange(), gs_get_yrange(), GS_get_zrange(), t, W, X, Y, and Z.
Referenced by gs_setlos_enterdata().
Crude method of intersecting line of sight with closest part of surface.
This version uses the shadow of the los projected down to the surface to generate a line_on_surf, then follows each point in that line until the los intersects it.
Definition at line 191 of file gs_query.c.
References ATT_TOPO, b, FROM, G_debug(), gs_get_att_typbuff(), gs_get_surf(), GS_v3dir(), GS_v3eq(), gsdrape_get_allsegments(), NULL, num, segs_intersect(), TO, viewcell_tri_interp(), X, g_surf::x_trans, Y, g_surf::y_trans, Z, and g_surf::z_trans.
Referenced by GS_get_selected_point_on_surface().
Crude method of intersecting line of sight with closest part of surface.
Uses los vector to determine the point of first intersection which is returned in point. Returns 0 if los doesn't intersect.
Definition at line 52 of file gs_query.c.
References ATT_TOPO, b, FROM, G_debug(), GS_distance(), gs_get_att_typbuff(), gs_get_surf(), GS_v3dir(), NULL, TO, viewcell_tri_interp(), X, g_surf::x_trans, Y, g_surf::y_trans, Z, and g_surf::z_trans.
Referenced by GS_get_selected_point_on_surface().
Gets all current cutting planes & data bounding planes
Intersects los with resulting convex polyhedron, then replaces los[FROM] with first point on ray inside data.
Definition at line 529 of file gs_query.c.
References FROM, FRONTFACE, GS_distance(), gs_get_databounds_planes(), GS_v3add(), GS_v3dir(), GS_v3mult(), gsd_get_cplanes(), MISSED, num, RayCvxPolyhedronInt(), and TO.
Referenced by GS_get_selected_point_on_surface().
Ray-Convex Polyhedron Intersection Test.
Originally by Eric Haines, erich.nosp@[email protected]@m..com
This test checks the ray against each face of a polyhedron, checking whether the set of intersection points found for each ray-plane intersection overlaps the previous intersection results. If there is no overlap (i.e. no line segment along the ray that is inside the polyhedron), then the ray misses and 0 is returned; otherwise 1 is returned if the ray is entering the polyhedron, -1 if the ray originates inside the polyhedron. If there is an intersection, the distance and the number of the face hit are returned.
Definition at line 384 of file gs_query.c.
References BACKFACE, DOT3, FRONTFACE, HUGE_VAL, MISSED, t, and W.
Referenced by gs_setlos_enterdata().
|
http://grass.osgeo.org/programming7/gs__query_8c.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Get the mute status of the audio input (to the far end) of the current voice call.
#include <audio/audio_manager_volume.h>
int audio_manager_get_voice_input_mute(bool *mute)
true if the input of the voice call is being muted, false otherwise.
The audio_manager_get_voice_input_mute() function returns the mute status of the audio input (to the far end) of the current voice call.
|
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.audiomanager.lib_ref/topic/audio_manager_get_voice_input_mute.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Opened 10 years ago
Closed 10 years ago
#6541 closed (fixed)
wrong description of default Manager in documentation
Description
On it is written
"If you use custom Manager objects, take note that the first Manager Django encounters (in order by which they’re defined in the model) has a special status. Django interprets the first Manager defined in a class as the “default” Manager. Certain operations — such as Django’s admin site — use the default Manager to obtain lists of objects, so it’s generally a good idea for the first Manager to be relatively unfiltered."
It doesn't work here, and it seems the code has changed in trunk with respect to this.
On irc i've been told to do "manager =" in class Admin, and indeed it works. They also asked me to report it here :-)
Change History (3)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
(Sorry, i was in a hurry)
With the following code, the admin interface will not use MoniteurManager, but the usual, unfiltered, default one.
class MoniteurManager(models.Manager):
    def get_query_set(self):
        raise ValueError
        return super(MoniteurManager, self).get_query_set().filter(annee__in=[1,2,3])

class Moniteur(models.Model):
    objects_current = MoniteurManager()
    annee = models.IntegerField("Année", choices=ANNEE_CHOICES, default='1', core=True)
    class Admin:
        etc....
It works if i do :
class Moniteur(models.Model):
    class Admin:
        manager = MoniteurManager()
"it doesn't work" is not really specific enough to work on this problem. What exactly are you doing and what doesn't work? That is, please give us some steps to duplicate the problem. A small example (a model with one field, say), would be ideal.
|
https://code.djangoproject.com/ticket/6541
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
In 2017 we made two of our web optimisation products – Mirage and Rocket Loader – even faster! Combined, these products speed up around 1.2 billion web-pages a week. The products are both around 5 years old, so there was a big opportunity to update them for the brave new world of highly-tuned browsers, HTTP2 and modern Javascript tooling. We measured a performance boost that, very roughly, will save visitors to sites on our network between 50-700ms. Visitors that see content faster have much higher engagement and lower bounce rates, as shown by studies like Google’s. This really adds up, representing a further saving of 380 years of loading time each year and a staggering 1.03 petabytes of data transfer!
Cycling image Photo by Dimon Blr on Unsplash.
What Mirage and Rocket Loader do
Mirage and Rocket Loader both optimise the loading of a web page by reducing and deferring the number of assets the browser needs to request for it to complete HTML parsing and rendering on screen.
Mirage
With Mirage, users on slow mobile connections will be quickly shown a full page of content, using placeholder images with a low file-size which will load much faster. Without Mirage visitors on a slow mobile connection will have to wait a long time to download high-quality images. Since it’ll take a long time, they will perceive your website as slow:
With Mirage visitors will see content much faster, will thus perceive that the content is loading quickly, and will be less likely to give up:
Rocket Loader
Browsers will not show content until all the Javascript that might affect it has been loaded and run. This can mean users wait a significant time before seeing any content at all, even if that content is the only reason they're visiting the page!
Rocket Loader transparently defers all Javascript execution until the rest of the page has loaded. This allows the browser to display the content the visitors are interested in as soon as possible.
How they work
Both of these products involve a two step process: first our optimizing proxy-server will rewrite customers’ HTML as it’s delivered, and then our on-page Javascript will attempt to optimise aspects of the page load. For instance, Mirage’s server-side rewrites image tags as follows:
Since browsers don’t recognise
data-cfsrc, the Mirage Javascript can control the whole process of loading these images. It uses this opportunity to intelligently load placeholder images on slow connections.
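The rewritten markup itself hasn't survived in this copy, but the idea is that the proxy renames each image's src attribute to data-cfsrc. A toy string-level sketch of that rewrite (illustrative only, not Cloudflare's actual implementation):

```javascript
// Toy sketch of Mirage's server-side rewrite: rename src to data-cfsrc so
// the browser's preload scanner ignores the image and the on-page Mirage
// script can decide what to load. Not Cloudflare's actual implementation.
function rewriteImgTags(html) {
  return html.replace(/<img\b([^>]*?)\ssrc=/gi, '<img$1 data-cfsrc=');
}
```

Because no src attribute survives, the preload scanner requests nothing for these images, leaving the on-page script in full control.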
Rocket Loader uses a similar approach to de-prioritise Javascript during page load, allowing the browser to show visitors the content of the page sooner.
The problems
The Javascript for both products was written years ago, when ‘rollup’ brought to mind a poor lifestyle choice rather than an excellent build-tool. With the big changes we’ve seen in browsers, protocols, and JS, there were many opportunities to optimise.
Dynamically… slowing things down
Designed for the ecosystem of the time, both products were loaded by Cloudflare’s asynchronous-module-definition (AMD) loader, called CloudflareJS, which also bundled some shared libraries.
This meant the process of loading Mirage or Rocket Loader looked like:
- CFJS inserted in blocking script tag by server-side rewriter
- CFJS runs, and looks at some on-page config to decide at runtime whether to load Rocket/Mirage via AMD, inserting new script tags
- Rocket/Mirage are loaded and run
Fighting browsers
Dynamic loading meant the products could not benefit from optimisations present in modern browsers. Browsers now scan HTML as they receive it instead of waiting for it all to arrive, identifying and loading external resources like script tags as quickly as possible. This process is called preload scanning, and is one of the most important optimisations performed by the brower. Since we used dynamic code inside CFJS to load Mirage and Rocket Loader, we were preventing them from benefitting from the preload scanner.
To make matters worse, Rocket Loader was being dynamically inserted using that villain of the DOM API, document.write, a technique that creates huge performance problems. Understanding exactly why is involved, so I've created a diagram. Skim it, and refer back to it as you read the next paragraph:
As said, using document.write to insert scripts is particularly damaging to page load performance. Since the document.write that inserts the script is invisible to the preload scanner (even if the script is inline, which ours isn't, preload scanning doesn't even attempt to scan JS), at the instant it is inserted the browser will already be busy requesting resources the scanner found elsewhere in the page (other script tags, images etc). This matters because a browser encountering a non-deferred or asynchronous Javascript, like Rocket Loader, must block all further building of the DOM tree until that script is loaded and executed, to give the script a chance to modify the DOM. So Rocket Loader was being inserted at an instant in which it was going to be very slow to load, due to the backlog of requests from the preload scan, and therefore caused a very long delay until the DOM parser could resume!
Aside from this grave performance issue, it became more urgent to remove document.write when Chrome began to intervene against it in version 55, triggering a very interesting discussion. This intervention would sometimes prevent Rocket Loader from being inserted on slow 2G connections, stopping any other Javascript from loading at all!
Clearly, document.write needed to be extirpated!
Unused and over-general code. Since we've launched the new, shiny Cloudflare Apps, CFJS had no other important products dependent upon it.
A plan of action
Before joining Cloudflare in July this year, I had been working in TypeScript, a language with all the lovely new syntax of modern Javascript. Taking over multiple AMD, ES5-based projects using Gulp and Grunt was a bit of a shock. I really thought I'd written my last define(['writing', 'very-bug'], function(twice, prone) {}), but here I was in 2017 seeing it again!
So it was very tempting to do a big-bang rewrite and get back to playing with the new ECMAScript 2018 toys. However, I've been involved in enough rewrites to know they're very rarely justified, and instead identified the highest priority changes we'd need to improve performance (though I admit I wrote a few git checkout -b typescript-version branches to vent).
So, the plan was:
- identify and inline the parts of CFJS used by Mirage and Rocket Loader
- produce a new version of the other dependencies of CFJS (our logo badge widget is actually hardcoded to point at CloudflareJS)
- switch from AMD to Rollup (and thus ECMAScript import syntax)
The decision to avoid making a new shared library may be surprising, especially as tree-shaking avoids some of the code-size overhead from unused parts of our dependencies. However, a little duplication seemed the lesser evil compared to cross-project dependencies given that:
- the overlap in code used was small
- over-general, library-style functions were part of why CFJS became too big in the first place
- Rocket Loader has some exciting things in its future…
Sweating kilobytes out of the minified + Gzipped Javascript files is a waste of time for most applications. However, in the context of code that'll be run literally millions of times in the time it takes you to read this article, it really pays off. This is a process we'll be continuing in 2018.
Switching out AMD
Switching out Gulp, Grunt and AMD was a fairly mechanical process of replacing syntax like this:
define(['cloudflare/iterator', 'cloudflare/dom'], function(iterator, dom) { // ... return { Mirage: Mirage, }; })
with ECMAScript modules, ready for Rollup, like:
import * as iterator from './iterator'; import { isHighLatency } from './connection'; // ... export { Mirage }
Post refactor weigh-in
Once the parts of CFJS used by the projects were inlined into the projects, we ended up with both Rocket and Mirage being slightly larger (all numbers minified + GZipped):
So we made a significant file-size saving (about half a jQuery’s worth) vs the original file-size required to completely load either product.
New insertion flow
Before, our original insertion flow looked something like this:
// on page embed, injected into customers' pages
Inside cloudflare.min.js we found the dynamic code that, once run, would kick off the requests for Mirage and Rocket Loader:
// cloudflare.min.js
if (cloudflare.rocket) {
  require("cloudflare/rocket");
}
Our approach is now far more browser friendly, roughly:
// on page embed
If you compare the new insertion sequence diagram, you can see why this is so much better:
Measurement
Theory implied our smaller, browser-friendly strategy should be faster, but only by doing some good old empirical research would we know for sure.
To measure the results, I set up a representative test page (including Bootstrap, custom fonts, some images, text) and calculated the change in the average Lighthouse performance scores out of 100 over a number of runs. The metrics I focussed on were:
- Time till first meaningful paint (TTFMP) – FMP is when we first see some useful content, e.g. images and text
- Overall – this is Lighthouse’s aggregate score for a page – the closer to 100, the better
Assessment
So, improved metrics across the board! We can see the changes have resulted in solid improvements, e.g. a reduction in our average time till first meaningful paint of 694ms for Rocket, and 49ms for Mirage.
Conclusion
The optimisations to Mirage and Rocket Loader have resulted in less bandwidth use, and measurably better performance for visitors to Cloudflare optimised sites.
Footnotes
- The following are back-of-the-envelope calculations. Mirage gets 980 million requests a week, TTFMP reduction of 50ms. There are 1000 ms in a second * 60 seconds * 60 minutes * 24 hours * 365 days = 31.5 billion milliseconds in a year. So (980e6 * 50 * 52) / 31.5e9 = in aggregate, 81 years less waiting for first-paint. Rocket gets 270 million requests a week, average TTFMP reduction of 694ms, (270e6 * 694 * 52) / 31.5e9 = in aggregate, 301 years less waiting for first-meaningful-paint. Similarly 980 million savings of 16kb per week for Mirage = 817.60 terabytes per year and 270 million savings of 15.2kb per week for Rocket Loader = 213.79 terabytes per year for a combined total of 1031 terabytes or 1.031 petabytes.
- and a tiny 1.5KB file for our web badge – written in TypeScript 👍 – which previously was loaded on top of the 21.6KB CFJS
- shut it Hume
- Thanks to Peter Belesis for doing the initial work of identifying which products depended upon CloudflareJS, and Peter, Matthew Cottingham, Andrew Galloni, Henry Heinemann, Simon Moore and Ivan Nikulin for their wise counsel on this blog post.
VISIT THE SOURCE ARTICLE
How we made our page-load optimisations even faster
|
https://networkfights.com/2018/02/02/how-we-made-our-page-load-optimisations-even-faster/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
By default, Visual C++ links C and C++ applications and DLLs to its C (and C++) Runtime Libraries. All of the C language and most of the basic C++ language infrastructure is provided in MSVCRT(D).dll. The STL components are provided in MSVCP50(D).dll for Visual C++ 5.0, and MSVCP60(D).dll for Visual C++ 6.0. I'll refer to these collectively as the CRT Library. Whether linked statically or dynamically, there are associated costs that are often worth avoiding where possible. Many developers want to keep the size of their executables and libraries at a minimum, especially so when such components are to be downloaded. Also, in the modern component-based, Internet-distributed computing world, reducing dependencies and potential dynamic incompatibilities is very important.
Linking statically to the CRT Library always increases the size of the application/library, sometimes dramatically so, particularly when building small application/libraries. Also, where multiple dynamic modules form part of an application, there can be multiple statically linked copies of the same code throughout the working set of a process, which is not only costly in space terms, but can cause memory locality problems. In such circumstances, the memory allocated by one module's CRT Library will cause a crash if it is passed to another module's CRT Library for deallocation.
Linking dynamically can cause dependency problems (including version incompatibilities and distribution problems) in addition to increases in load times. Because the CRT DLL is not part of the Win32 system libraries, it is even possible to encounter older systems in which it is not installed. (Windows 95 OSR1 does not ship with MSVCRT.dll as part of the operating system distribution.) Furthermore, Microsoft has encountered program-breaking incompatibilities between versions of MSVCRT.dll ("Bug++ of the Month," WDJ, May 1999), which is also something we developers are very keen to avoid. Finally, since the DLL version is only available in multithreaded form, it can also lead to subtle, but significant, performance costs.
There are many ways in which an application/library may be dependent on the C Runtime Library. However, in most circumstances the vast majority of the contents of the library are not required by the application/library being built, and it can be beneficial to remove any dependencies. (Indeed, the Active Template Library wizard-generated components make some efforts in this direction via their discrimination of the _ATL_MIN_CRT preprocessor symbol.) Most of these dependencies can be eliminated with the use of a variety of mostly simple techniques.
This article describes a number of such simple techniques, covering the issues of entry points, memory, intrinsic functions, strings, exceptions, implicit functions, large integers, floating-point types, global variables, C++ classes, virtual methods, and use of the C++ Standard Library.
Additionally, the techniques described in this article may be informative, both in terms of how Win32 executable components are generated and used, and also in providing some insights into how the C++ language implementation is layered on top of that of the C Library.
Achieving Independence
The CRT provides the following to application developers:
- It sets up the module entry point, such as preparing argc/argv for console applications.
- It initializes the stdio libraries, and other supporting libraries, global variables, and memory-management functions.
- For C++ components, it handles the C++ language infrastructure, such as the construction and destruction of static objects.
- It provides some of the implicit functions and constants used in floating point and large integer code.
- It provides a number of functions that may be explicitly used by application code, such as the stdio library functions and, for C++ programs, some parts of the STL.
There are four ways in which independence from the run-time libraries may be achieved:
- Eliminate things we don't use the library functions not called in our module.
- Eliminate things we don't need such as the C and C++ Library initialization code.
- Replace some of the things we do need with other implementations substituting strcpy() with lstrcpy().
- Replace some of the things we do need with our own implementations such as providing lightweight implementations of operators new() and delete().
Detaching the CRT
Before I describe some of the various coding techniques that can be used to avoid the CRT, I'd first like to explain the mechanism of unlinking the CRT Libraries. If you are working from the command line, or with makefiles, then add the /nodefaultlib linker switch. If you are working within the Visual Studio IDE, then check the "Ignore all default libraries" checkbox (as shown in Figure 1) in the Link tab on the Project Settings dialog, which sets this flag for the project's linker phase. This removes the default libraries from the list of libraries that the linker searches when linking the process/library.
If you compile your project now, you will see one of the following errors, depending on whether you are developing a console application:
LINK : error LNK2001: unresolved external symbol _mainCRTStartup
a GUI application:
LINK : error LNK2001: unresolved external symbol _WinMainCRTStartup
or a DLL:
LINK : error LNK2001: unresolved external symbol _DllMainCRTStartup@12
This is because the entry points for console applications, GUI applications, and DLLs that are built with the Visual C++ compiler are, by default, not the normal Win32 entry points of main, WinMain, and DllMain, but are, rather, mainCRTStartup, WinMainCRTStartup, and _DllMainCRTStartup, respectively. Applications built for Unicode have the wWinMainCRTStartup and wmainCRTStartup entry-point functions. These are functions that are automatically linked in by Visual C++ (unless you specify /nodefaultlib) and that perform additional startup and shutdown tasks outside the scope of your code, including parsing the command line, handling static object construction and destruction, and setting up C global constants. To specify that your entry point be used, you need to specify it to the linker via the /entry switch (i.e., /entry:"myWinMain"), or by setting the "Entry-point symbol" (shown in Figure 2) in the Link tab on the Project Settings dialog.
The Visual C++ compiler and linker insert these so that you can write your normal entry points without concern for what they are doing, and the startup and shutdown code that these functions implement is hidden from your code.
An important issue is whether to detach the CRT in release builds only, or in both debug and release builds. The reasons to do the former are usually practical. For example, you may wish to use stdio functions (i.e., vsprintf) for debug tracing that are not required in release mode. In addition, with Visual C++ 6.0, the /GZ flag brings in some CRT Library functions to debug mode only, which you may wish to avoid.
Generally, the problem with having code exist in debug mode and not in release mode, or vice versa, is that you increase the chances for having errors only appear in release builds, which can be extremely difficult to fix, even assuming that you have detected them before your product ships! When detaching the CRT, the chances of this happening are greatly increased, so I would recommend doing so in both debug and release builds wherever possible.
Entry Points
As well as providing various aspects of the C++ run-time support, the entry points provided by the CRT Library also perform some processing in order to provide some information to your entry points. A little known fact is that the Win32 system does not pass the command line, the module instance handle (the previous instance handle is always NULL in Win32), or the window show state to the WinMain() entry point. This is only provided when linking to the CRT entry point WinMainCRTStartup() (or its Unicode analogue wWinMainCRTStartup()).
These three parameters may be synthesized in the following ways. The module instance handle is available by simply calling GetModuleHandle() and passing NULL for the module name.
hinst = GetModuleHandle(NULL);
The window show state can also be simply obtained, as shown in Listing 1. Win32 makes the command line available via the GetCommandLine() function. However, there is a complication in that the CRT Library "helpfully" strips off the executable (argv[0]) from the command line, so that to write code that works correctly with and without CRT linking, the technique shown in Listing 2 must be used in all circumstances. The code in Listing 2 is an extract from the Visual C++ 6.0 CRT implementation, and you would need to include something similar in your application to ensure consistency.
A far more useful approach is to always access the command line from GetCommandLine() and then parse it into an argc/argv form. Indeed, this is generally the approach I take in code, even when working with the CRT libraries, so that applications can be switched from console to GUI, or vice versa, which, whilst occurring rarely, is nevertheless often enough to make this worthwhile. In any case, having your arguments parsed by tried and tested code is always a benefit, saving both coding and debugging effort.
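A minimal sketch of such an argc/argv parser, written as a pure function (a hypothetical helper for illustration; it handles only whitespace separation and simple double quotes, not the full Windows quoting rules implemented by CommandLineToArgvW()):

```cpp
#include <string>
#include <vector>

// Split a raw command line (as returned by GetCommandLine()) into
// argv-style tokens. Handles whitespace-separated arguments and simple
// double-quoted arguments only; the real Windows quoting rules have more
// cases (escaped quotes, backslash runs) than this sketch covers.
std::vector<std::string> parse_command_line(const std::string& cmdline) {
    std::vector<std::string> args;
    std::string current;
    bool in_quotes = false;
    bool in_token = false;
    for (char c : cmdline) {
        if (c == '"') {
            in_quotes = !in_quotes;
            in_token = true;               // "" yields an empty argument
        } else if ((c == ' ' || c == '\t') && !in_quotes) {
            if (in_token) {                // whitespace ends the token
                args.push_back(current);
                current.clear();
                in_token = false;
            }
        } else {
            current += c;
            in_token = true;
        }
    }
    if (in_token) args.push_back(current);
    return args;
}
```

With this in place, args[0] is the executable name and args[1] onwards are the program arguments, matching the argc/argv convention.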
It should be clear that writing these same three blocks of code for each and every GUI program would become tedious. It is much more useful to bind them into a common implementation to be compiled and linked into your program, in the form of an entry point that you would specify. An example is shown in Listing 3.
For DLLs, the system does provide the correct values for module instance handle, reason, and implicit indicator parameters, so there is no need to provide additional facilities around your DllMain().
memset, memcmp, and Other Intrinsics
The functions memset(), memcmp(), memcpy(), strcat(), strcmp(), strcpy(), and strlen() are among the CRT functions that are implemented as intrinsics. This means that when the compiler is directed to do so (by specification of the /Oi flag, or one of its overriding flags, notably the /O2 "maximize speed" flag), the compiler inserts the whole code for the function in place at its call point(s). (Intrinsics are thus equivalent to mandatory inlines.) When in debug mode, or when building for minimum size (which is generally the preferred option, since smaller programs are often faster due to caching issues; see "Guru of the Week #33"), the linker naturally reports that a symbol is missing when detached from the CRT Library.
There are three options. First, you can provide your own implementation of the function(s). For example, Listing 4 shows a fully compatible plug-in for memcpy().
The second option is to define the /Oi compiler flag on a per-project, or per-file, basis, which causes all the intrinsics used in the affected compilation units to be called in place.
The third option is to use the #pragma intrinsic statement, which applies intrinsic on a function-by-function basis. The following statement:
#pragma intrinsic(memset, strlen)
would cause only memset() and strlen() to be called in place.
Memory
The C memory functions malloc(), realloc(), and free() are all found in the CRT Library. When using code that links to these functions they can simply be synthesized as shown in Listing 5. Note that HeapReAlloc() does not support the semantics of realloc() with regard to being passed a NULL memory block pointer, hence the conditional tests.
It is also possible to implement them in terms of the GlobalAlloc() family of functions, or the LocalAlloc() family, but the MSDN documentation notes that they are slower than the HeapAlloc() family, which should be preferred.
Memory Leaks
One of the most useful services that the CRT Library provides, and one that developers least like to do without, is that of memory-leak detection. Nevertheless, it is also possible to implement a rudimentary form of memory-leak detection with only the Win32 Heap API, by making use of its HeapWalk() function.
The code in Listing 6 can be inserted into an outer scope of your main/WinMain function, and will perform basic leak detection. It will catch all leaks in your code, but the picture may be muddied since certain blocks will have been allocated by the system for infrastructure tasks, such as the application path name (as retrieved by the GetCommandLine() function). However, you can get around this by creating a new heap, via HeapCreate(), at program startup, and then allocating all memory (including in malloc()/operator new()) from that heap. In that case, only genuine leaks from your own code will be reported.
Listing 3 demonstrates using the trace_leaks() leak detector function outside the scope of your WinMain() function.
String Operations
All of the C Standard, and a number of other Microsoft proprietary, string manipulation functions are implemented in the CRT Library and are, therefore, off-limits when not linking to it.
For the string functions _strset(), strcat(), strcmp(), strcpy(), and strlen() you can use the intrinsic technique just described for the memory functions. However, for all other operations you are left with three options.
The first option is to write your own versions of the function. For example, the strncat() function could be implemented as shown in Listing 7.
The second option, which is the preferred one where applicable (since the functions are well tested and already in linked DLLs so your module will be smaller), is to use the string functions available from the Win32 API (in the KERNEL32 and USER32 DLLs). KERNEL32 exports the following string functions, lstrcat(), lstrcmp(), lstrcmpi(), lstrcpy(), lstrcpyn(), and lstrlen(), in both ANSI (i.e., lstrcatA()) and Unicode (i.e., lstrcatW()) forms. All of these functions can replace their C Standard counterparts: strcat()/wcscat(), strcmp()/wcscmp(), stricmp()/wcsicmp(), strcpy()/wcscpy(), strncpy()/wcsncpy() and strlen()/wcslen(). However, lstrcpynA/W() has subtly different semantics to strncpy()/wcsncpy(), which can lead to some nasty bugs. lstrcpynA/W() always appends a NULL character in the given space, irrespective of whether all of the string to be copied can fit or not. Thus, it is not possible to construct a string from in-place assembled fragments with lstrcpynA/W(), since every fragment will contain a NULL. For this reason, and from bitter personal experience, I recommend that you steer clear of this function entirely.
The third option is to provide replacement functionality using code that is not a direct replacement. For example, you can provide string concatenation via the wsprintf() function, in the following way.
char result[101]; wsprintf(result, "%s%s", str1, str2); str1 = result;
Other operations may also be synthesized. For example, in C++ compilation units, strchr() can be replaced by a call to an STL Library char_traits::find(), if implemented as demonstrated in Listing 8. This is an extract of the stlsoft::char_traits class, available at.
Of course, some char_traits specializations are implemented in terms of strchr(), so you need to watch out for this (though the linker will warn you if this is the case).
Other replacements are useful for converting integers to strings. The STLSoft conversion library () contains the integer_to_string() suite of template functions that convert from any of the eight (8-, 16-, 32-, and 64-bit signed and unsigned) integers to strings via a highly efficient, inline implementation. These can be used instead of (s)printf() for such simple conversions.
The USER32 functions wsprintfA/W() are useful for more than just concatenating strings. Indeed, they are intended as a (near complete) plug-in replacement for the C Standard sprintf() and swprintf() functions. However, it should be noted that they do not handle floating-point numbers of any kind. This is less of a hassle than it sounds for our purposes, since, as we will see, there is little point trying to avoid the CRT Library when using floating-point numbers. For all other types, wsprintfA/W() provides an excellent replacement for sprintf()/swprintf().
64-Bit Integers and Floating Points
If you are using floating-point numbers in all their glory, then there is no choice but to use the CRT Library, because the complex supporting infrastructure functions and variables reside within it. However, many uses of floating-point numbers are as fractional intermediates in calculations of integral numbers. In these circumstances, it is often possible to emulate the calculation by some clever use of the Win32 MulDiv() function, which multiplies two 32-bit parameters into a 64-bit intermediate and then divides that by a third 32-bit parameter. Consider the following code snippet, containing the alternate styles, from inside a function in the Synesis Software painting libraries:
#ifndef _SYB_MWPAINT_NO_CRT
  ... ((double)lpGDS->nGradWidth * cx * i / (range * lpGDS->nGradGran));
#else
  ... MulDiv(MulDiv(lpGDS->nGradWidth, cx, range), i, lpGDS->nGradGran);
#endif /* _SYB_MWPAINT_NO_CRT */
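To see why MulDiv() can stand in for the floating-point version, here is a sketch of its essential behaviour (muldiv_sketch is a hypothetical name; the real Win32 MulDiv() additionally rounds to nearest and returns -1 on overflow or division by zero, which this sketch omits):

```cpp
#include <cstdint>

// Widen to 64 bits for the multiplication so that a * b cannot
// overflow before the division, which is the trick MulDiv() relies on.
std::int32_t muldiv_sketch(std::int32_t a, std::int32_t b, std::int32_t c)
{
    return static_cast<std::int32_t>((static_cast<std::int64_t>(a) * b) / c);
}
```

A naive 32-bit (a * b) / c would overflow for operands as small as 50,000 each; the 64-bit intermediate avoids that without touching floating point.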
The 64-bit integer type, __int64, has been a built-in type in the Visual C++ compiler since Version 2.0. Simple operations on the type, including bitwise and logical (Boolean) operations, addition, and subtraction, can induce the compiler to emit inline bit/byte-wise manipulation. However, the arithmetic operations multiply, divide, and modulo, and the shift operators, are all implemented as calls to CRT Library functions (_allmul(), _alldiv(), _allrem(), _allshl(), and _allshr()). If you are using any of those operations, and cannot convert them to their 32-bit equivalents without losing accuracy, then you must accept linking to the CRT Library.
Exceptions and SEH
Both C++ exceptions and the compiler-supplied structured exception handling (SEH) link in significant parts of the CRT Library. It is just about possible to code exception handling directly without the CRT (as described in "A Crash Course on the Depths of Win32 Structured Exception Handling," Matt Pietrek, Microsoft Systems Journal, January 1997), but it is certainly not worth the effort. The philosophy of not linking to the CRT is that, for the benefits of skipping the CRT Library, we expect a little discomfort, but not such levels of pain.
Also, given that most uses of the techniques described here are for supporting DLLs and small utility programs, the need for exceptions is little, if any.
As well as not writing your own try-catch or __try-__except/__try-__finally constructs, you should also remove the /GX compiler flag (the "Enable exception handling" checkbox in the C/C++ options tab), since many libraries (including the STL implementation that ships with Visual C++) discriminate their functionality via the preprocessor symbol _CPPUNWIND, inserting exception-handling code in its presence.
Stacking Verification with _chkstk()
From Visual C++ 2.0 onwards, the Visual C++ compiler has, under certain conditions, inserted a call to a function named _chkstk(). Briefly, this function touches contiguous stack pages in order to ensure that the virtual memory system doesn't leave any uncommitted blocks, since it commits stack pages as they are first accessed.
There is only one option here if you wish to avoid the CRT: keep all your stack frames smaller than the system page size, so that the compiler will not insert the call and you have no worries. (The page size can be obtained by calling GetSystemInfo(). For most architectures it is 4096 bytes, but this should not be assumed.) In practice this can often be achieved, given good software-engineering practices of modest-sized functions, declaring frame arrays only of things that are genuinely of fixed size, and dynamically allocating those that are not. (You have to be careful with fixed-size buffers. "Counting NULL Termination in Path Length Computations" ("Tech Tips," WDM, September 2002) illustrates a mistake in the development of Windows NT/2000 that can lead to Explorer crashes on maximally large paths.)
It can be necessary on occasion to allocate even such fixed-sized blocks via the heap, because their total size exceeds the page size, but this generally brings into question the modularity (or lack thereof) of the code in question. Such code can often be better structured to avoid this requirement.
It is possible to link a program requiring _chkstk() by providing your own implementation, but since this will not perform the required stack touching, the program will always crash! The following code illustrates this.
extern "C" void _chkstk(void)
{}

int APIENTRY WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    volatile char sz[100000];

    sz[99999] = 0; // Crash!!

    return 0;
}
Stacking Verification with _chkesp
From Visual C++ 6.0 onwards, the Visual C++ compiler has provided the /GZ compiler option, which is intended to assist in finding release-build errors whilst in debug builds. It introduces auto-initialization of local variables and various call-stack validations. Part of its mechanism lies in the implicit calling of a CRT function called _chkesp(), which validates the ESP register as part of its stack checking on the entry and exit of (most) function calls. The signature of the function is as follows:
extern "C" void __declspec(naked) __cdecl _chkesp(void);
If you can write code without precipitating the insertion of _chkesp(), then you needn't worry. However, the calling of almost any function within your code causes this call to be inserted, so in practice it is not possible to write any worthwhile program that does not cause the compiler to link it in.
There are three options here. The first option is to still link to the CRT in debug mode, and to not do so in release mode. As discussed previously, however, this is fraught with danger and is generally a bad idea.
The second option is simply to not use /GZ. However, /GZ can be a valuable facility, especially when using GetProcAddress() (as this can easily lead to calling-convention mistakes), so a useful compromise is to test a debug version built with /GZ and then one without, so as to avoid debug/release differential problems.
The third option is, as with _purecall() (described later), to provide your own implementation. As with that function, you can make it as simple or as complex as you like. The CRT-provided implementation pops up a dialog warning that the value of ESP was not properly saved across a function call, and offers the standard Abort, Retry, and Ignore options. The simplest implementation that takes some action (raising an int 3 breakpoint) is shown in Listing 9.
CRT Global Variables
The global variables, such as errno, _osver, _winmajor, _winminor, _winver, _pgmptr, _wpgmptr, _environ, and so on, are all set up and manipulated by the CRT Library (see "Special Global Variables for Common Windows Programming Tasks," Eugene Gershnik, WDM, July 2002). If your code is heavily dependent on these variables, then there is no point trying to detach from the CRT Library, since they will not be updated correctly (in addition to the fact that they will be missing symbols at link time).
However, some of these variables are constant, in particular the operating system version variables _osver, _winmajor, _winminor, and _winver. These variables can be declared in your code and initialized in your main function just as they are in the CRT itself (an extract of which is shown in Listing 10).
C++ Classes
You are able to use many C++ features and not run aground on a lack of the CRT. Simple ADT (Abstract Data Type) classes, those that primarily encapsulate and manipulate resources without using polymorphism, can survive quite nicely, as their methods are simply compiled and linked as normal.
Classes with a limited level of polymorphism can also be used without any additional effort. Where such classes have virtual members other than their destructors, the virtual mechanism can exist cleanly without the CRT Library. While it is usually a bad idea to declare virtual methods without declaring a virtual destructor, due to likely problems of incomplete destruction, there are cases where it is acceptable; for example, in the definition of COM interface implementing classes.
Templates are also happily implemented by the compiler and, in and of themselves, do not rely on the CRT Library.
Virtual Destructors
If any of the classes you instantiate have virtual destructors, then the compiler will build in a hidden call to ::operator delete(). Use of the Source Browser lends a clue as to why this is. If you build a project with one or more virtual destructors, and then browse for "operator delete," the browser tool will take you to the end of the class definition.
What appears to be happening is that the compiler creates a per-class operator delete() (where you have not explicitly provided one) for each and every class that has a virtual destructor, and implements it in terms of the global operator delete(). This is in accordance with the C++ Standard, which stipulates that "operator delete shall be looked up in the scope of the destructor's class" (ISO/IEC 14882:1998(E), section 12.4.11).
If you do not allocate and, therefore, do not delete, instances of class types on the heap, then you can safely placate the linker by providing your own stub for operator delete, as in:
void operator delete (void *) throw() { }
Operators new and delete
In circumstances where you do actually make use of heap-based class instances (or prefer to allocate from the C++ free-store rather than the C heap), you need to provide global and/or per-class implementations of operators new() and delete().
In either case, a serviceable solution is simply to define them in terms of the Win32 Heap API, using the default process heap. Listing 11 illustrates how per-class allocation could be implemented. The global definitions could be identical.
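Listing 11 is not reproduced here, but the shape of the per-class approach can be sketched as follows. Widget is a hypothetical name, and malloc()/free() stand in for the Win32 calls purely so the sketch is portable; in a CRT-free Windows build you would call ::HeapAlloc(::GetProcessHeap(), 0, cb) and ::HeapFree(::GetProcessHeap(), 0, pv) instead:

```cpp
#include <cstddef>
#include <cstdlib>

class Widget
{
public:
    // Per-class allocation. In a CRT-free build, replace malloc()
    // with ::HeapAlloc(::GetProcessHeap(), 0, cb).
    void *operator new(std::size_t cb)
    {
        return std::malloc(cb);
    }

    // Per-class deallocation. In a CRT-free build, replace free()
    // with ::HeapFree(::GetProcessHeap(), 0, pv).
    void operator delete(void *pv)
    {
        std::free(pv);
    }

public:
    int value;
};
```

The global operator new()/delete() definitions could be identical in body; only the enclosing class scope differs.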
Pure Virtual Members
The Visual C++ compiler/linker instantiates the vtable entries for pure virtual methods with a CRT Library function, _purecall(), which has the following signature:
extern "C" int _cdecl _purecall(void);
This function is called if a pure virtual method is called within the constructor or destructor of an abstract class as part of the construction/destruction of a concrete derived class. (This is a big no-no in C++. Nonetheless, it is refreshing to see that the implementers of Visual SourceSafe are as human as the rest of us, since it regularly crashes with pure-call termination.) The CRT Library implementation pops up a dialog informing the user that a pure call has occurred, and then terminates the application.
To link projects containing one or more classes with pure virtual members without the CRT Library, you must supply your own definition of this function. If you are feeling confident (that it will never be called), then this can be as simple as:
extern "C" int _cdecl _purecall(void)
{
    return 0;
}
or you can be more proactive, as in the implementation I use in the Synesis Software libraries (Listing 12), which informs the user of the problem and then closes the process.
Statics
The compilation of static class instances is composed of two parts: the allocation of space for the instance, and the calling of its constructor and destructor.
Global static class instances are constructed and destroyed by the CRT Library infrastructure. Function scope static class instances are constructed at the point of their first use, and are destroyed by the CRT Library infrastructure along with their global counterparts. It should be obvious, then, that use of statics, particularly global statics, without the CRT Library is difficult: Your global static objects will not be constructed before you use them, and none of your static objects will be destroyed on program exit.
Nevertheless, the compiler does allocate the space on the frame for the instances, so it is possible to still use the instances if we can either provide for their constructors and/or destructors to be called, or can live without them.
With global static class instances, it gets pretty close to being too much effort, not to mention that it introduces some dodgy techniques. The code in Listing 13 shows techniques for constructing and destroying static class instances, namely in-place construction and explicit destruction, respectively, as well as, I hope, illustrating that the ensuing unsafe coding should dissuade any casual use of them. It is possible to use linker techniques to support global objects (as described in "C++ Runtime Support for the NT DDK," ntinsider/1999/global.htm), and this is something I intend to incorporate in the near future.
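The gist of the in-place construction and explicit destruction just mentioned can be sketched as follows (Logger, construct_statics(), and destroy_statics() are hypothetical names; real code must also guarantee suitable alignment of the raw storage):

```cpp
#include <new>

struct Logger
{
    int opened;
    Logger()  : opened(1) {}
    ~Logger() { opened = 0; }
};

// The space is reserved statically, but without the CRT nothing will
// call the constructor or destructor for us ...
static char    logger_storage[sizeof(Logger)];
static Logger *g_logger = 0;

// ... so we do it ourselves, early in main()/WinMain() ...
void construct_statics()
{
    g_logger = new (logger_storage) Logger;   // in-place construction
}

// ... and again just before the process exits.
void destroy_statics()
{
    g_logger->~Logger();                      // explicit destruction
    g_logger = 0;
}
```

Forget either call, or make one of them twice, and you have exactly the kind of unsafe coding the listing is meant to warn against.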
With function scope static class instances, however, the disadvantages are much reduced, since the call to the constructor is made from within the function, and I have successfully made use of a number of them in programs detached from the CRT Library.
A simple alternative technique for dealing with global variables is to refer to the global class instance via a pointer in all client code, and then set that pointer to the address of a local instance (within main()/WinMain()), effectively getting a static for free by virtue of its being a class instance in the outermost local frame.
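That pointer technique can be sketched as follows (AppState and app_main() are hypothetical names, with app_main() standing in for main()/WinMain()):

```cpp
struct AppState
{
    int initialized;
    AppState() : initialized(1) {}
};

// All client code refers to the "global" instance through this pointer.
static AppState *g_state = 0;

static int client_code()
{
    return g_state->initialized;
}

// The outermost function owns the instance; its constructor and
// destructor run normally, with no CRT static-initialization support
// needed, because it is just an automatic (frame) variable.
int app_main()
{
    AppState state;        // lives for the whole program run
    g_state = &state;

    return client_code();
}
```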
STL
The implementation of some of the STL classes that ship with Visual C++ means that the CRT Library is required for some, but not all, parts of the STL Library. For example, if you declare a single string, with a literal string constructor argument, the linker reports that it cannot see the following symbols:
"void __cdecl std::_Xlen(void)" (?_Xlen@std@@YAXXZ)
___CxxFrameHandler
__except_list
__EH_prolog
Also, virtually no parts of the iostreams are usable without the CRT Library.
However, other parts of the library are eminently usable without the CRT Library, including auto_ptr, list, map, and vector, along with the algorithms and functionals.
Conclusion
There is clearly a trade-off between the benefits gained when executables and DLLs are not linked to the CRT Library and the costs (in effort and inconvenience) involved. It is clear that one cannot, or should not, exclude the CRT when floating-point operations, certain parts of the C++ Standard Library, very large frame variables, RTTI, or stdio (i.e., scanf()) are involved. Furthermore, there is little point in expending considerable effort in this pursuit for a module that is predominantly going to be linked to other DLLs and/or executables that themselves link to the CRT Library.
Nevertheless, this leaves a large number of situations suitable for the application of these techniques. These include small executables such as windowless utilities, installation programs, and small GUI tools. The techniques find even more widespread utility in the creation of DLLs. (Indeed, of the 18 Synesis Software base libraries, all but two employ these techniques to achieve independence from the CRT Library.)
At first look, the list of techniques that must be applied to a project can seem way beyond the effort worth employing for the benefits gained. However, once the common boilerplate is formed into a mini-CRT Library, the generation of programs and/or DLLs that are CRT-free becomes simple, reliable, and effective, especially in conjunction with customized AppWizards. My own personal use of these techniques is found most often in the base (DLL) libraries for my company as well as for clients, and in a variety of small utility programs; see.
Other Issues
For reasons of brevity, I have been unable to talk about the full gamut of issues that pertain to working without the CRT facilities or in providing alternative implementations of them. Other issues include file handling, sophisticated handling of singleton object lifetimes, reference-counting APIs, console applications, and command-line parsing to name a few.
Eugene Gershnik (author of "Visual C++ Exception-Handling Instrumentation," WDM, December 2002) and I have decided to work together to develop a lightweight CRT replacement incorporating the ideas from both our articles and global object-linker techniques. The project, "CRunTiny," is available online at.
Matthew Wilson holds a degree in Information Technology and a Ph.D. in Electrical Engineering, and is a software development consultant for Synesis Software. Matthew's work interests are in writing bulletproof real-time, GUI, and software-analysis software in C, C++, and Java. He has been working with C++ for over 10 years, and is currently bringing STLSoft.org and its offshoots into the public domain. Matthew can be contacted via [email protected] or at.
http://www.drdobbs.com/windows/avoiding-the-visual-c-runtime-library/184416623
#include <RTSQLTransaction.hh>
Inherits fatalmind::Command< PT >< fatalmind::SQL::ResourceType< TM > >.
List of all members.
It takes a Command to be executed within a transaction. If you would like to put several commands into one transaction, combine them first as a BatchCommand and pass the BatchCommand to the Transaction command. The command will be executed within one transaction, which is automatically committed on success. On failure (any exception) a rollback is performed.
http://www.fatalmind.com/software/ResourcePool/cplusplus/doc/classfatalmind_1_1SQL_1_1RTSQLTransaction.html
Porcelain in Dulwich
"porcelain" is the term that is usually used in the Git world to refer to the user-facing parts. This is opposed to the lower layers: the plumbing.
For a long time, I have resisted the idea of including a porcelain layer in Dulwich. The main reason for this is that I don't consider Dulwich a full reimplementation of Git in Python. Rather, it's a library that Python tools can use to interact with local or remote Git repositories, without any extra dependencies.
Dulwich has always shipped a 'dulwich' binary, but that has never been more than a basic test tool, never a proper tool for end users. It was a mistake to install it by default.
I don't think there's a point in providing a dulwich command-line tool that has the same behaviour as the C Git binary. It would just be slower and less mature. I haven't come across any situation where it didn't make sense to just directly use the plumbing.
However, Python programmers using Dulwich seem to think of Git operations in terms of porcelain rather than plumbing. Several convenience wrappers for Dulwich have sprung up, but none of them is very complete. So rather than relying on external modules, I've added a "porcelain" module to Dulwich in the porcelain branch, which provides a porcelain-like Python API for Git.
At the moment, it just implements a handful of commands but that should improve over the next few releases:
from dulwich import porcelain

r = porcelain.init("/path/to/repo")
porcelain.commit(r, "Create a commit")
porcelain.log(r)
Syndicated 2013-10-03 22:00:00 from Stationary Traveller
http://www.advogato.org/person/ctrlsoft/diary/101.html
On Windows Vista:
Configuring the JDBC Distributed Transaction
import java.net.Inet4Address;
import java.sql.*;
import java.util.Random;
import javax.transaction.xa.*;
import javax.sql.*;
import com.microsoft.sqlserver.jdbc.*;

public class testXA {

    public static void main(String[] args) throws Exception
    {
        // Create variables for the connection string ,
http://blogs.msdn.com/b/dataaccesstechnologies/archive/2011/10/27/unable-to-do-remote-sql-stored-procedure-debugging-from-vs2010.aspx
One of the problems with code quality tools is that they tend to overwhelm developers with problems that aren't really problems -- that is, false positives. When false positives occur, developers learn to ignore the output of the tool or abandon it altogether. The creators of FindBugs, David Hovemeyer and William Pugh, were sensitive to this issue and strove to reduce the number of false positives they report. Unlike other static analysis tools, FindBugs doesn't focus on style or formatting; it specifically tries to find real bugs or potential performance problems.
What is FindBugs?
FindBugs is a static analysis tool that examines your class or JAR files looking for potential problems by matching your bytecodes against a list of bug patterns. With static analysis tools, you can analyze software without actually running the program. Instead the form or structure of the class files are analyzed to determine the program's intent, often using the Visitor pattern (see Resources). Figure 1 shows the results of analyzing an anonymous project (its name has been withheld in order to protect the horribly guilty):
Figure 1. FindBugs UI
Let's take a look at some of the problems that FindBugs can detect.
Examples of problems found
The following list doesn't include all the problems FindBug might find. Instead, I've focused on some of the more interesting ones.
Detector: Find hash equals mismatch
This detector finds several related problems, all centered around the implementation of equals() and hashCode(). These two methods are very important because they're called by nearly all of the Collections-based classes -- List, Map, Set, and so on. Generally, this detector finds two different types of problems -- when a class:

- Overrides Object's equals() method, but not its hashCode() method, or vice versa.
- Defines a covariant version of the equals() or compareTo() method. For example, the Bob class defines its equals() method as boolean equals(Bob), which overloads the equals() method defined in Object. Because of the way Java resolves overloaded methods at compile time, the version of the method defined in Object will almost always be the one used at runtime, not the one you defined in Bob (unless you explicitly cast the argument to your equals() method to type Bob). As a result, when an instance of this class is put into any of the collection classes, the Object.equals() version of the method will be used, not the version defined in Bob. In this case, the Bob class should define an equals() method that accepts an argument of type Object.
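A minimal sketch of the fix the detector is pointing at: give Bob an equals() that takes Object (so it overrides, rather than overloads, Object.equals()), and keep hashCode() consistent with it:

```java
class Bob {
    private final String name;

    Bob(String name) {
        this.name = name;
    }

    // Takes Object, so this overrides Object.equals() rather than
    // overloading it with a covariant equals(Bob).
    public boolean equals(Object o) {
        if (!(o instanceof Bob)) {
            return false;
        }
        return name.equals(((Bob) o).name);
    }

    // Overridden in step with equals(), so that equal objects hash to
    // the same bucket in HashMap/HashSet.
    public int hashCode() {
        return name.hashCode();
    }
}
```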
Detector: Return value of method ignored
This detector looks for places in your code where the return value of a method is ignored when it shouldn't be. One of the more common instances of this scenario occurs when invoking String methods, as in Listing 1:
Listing 1. Example of ignored return value
1 String aString = "bob";
2 aString.replace('b', 'p');
3 if (aString.equals("pop"))
This mistake is pretty common. At line 2, the programmer thought he'd replaced all of the b's in the string with p's. He did, but he forgot that strings are immutable. All of these types of methods return a new string, never changing the receiver of the message.
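The corrected version simply captures the return value (ReplaceFix is a hypothetical wrapper name, used so the snippet is self-contained):

```java
class ReplaceFix {
    static String popped() {
        String aString = "bob";
        // String is immutable: replace() returns a new String,
        // so the result must be assigned back.
        aString = aString.replace('b', 'p');
        return aString;
    }
}
```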
Detector: Null pointer dereference and redundant comparisons to null
This detector looks for two types of problems. It looks for cases where a code path will or could cause a null pointer exception, and it also looks for cases in which there is a redundant comparison to null. For example, if both of the compared values are definitely null, they're redundant and may indicate a coding mistake. FindBugs detects a similar problem when it's able to determine that one of the values is null and the other one isn't, as shown in Listing 2:
Listing 2. Null pointer examples
1 Person person = aMap.get("bob");
2 if (person != null) {
3     person.updateAccessTime();
4 }
5 String name = person.getName();
In this example, if the Map on line 1 does not contain the person named "bob," a null pointer exception will result on line 5 when the person is asked for his name. Because FindBugs doesn't know whether the map contains "bob" or not, it will flag line 5 as a possible null pointer exception.
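One way to satisfy the detector is to make the null case an explicit early return, so that every path to getName() has already established that person is non-null (NullSafeLookup and its nested Person are hypothetical minimal stand-ins for the sketch):

```java
import java.util.Map;

class NullSafeLookup {
    // Hypothetical minimal Person, for the sketch only.
    static class Person {
        private final String name;
        Person(String name) { this.name = name; }
        void updateAccessTime() { /* elided */ }
        String getName() { return name; }
    }

    static String nameOf(Map<String, Person> aMap) {
        Person person = aMap.get("bob");
        if (person == null) {
            return null;            // absence handled explicitly
        }
        person.updateAccessTime();
        return person.getName();    // reached only when person != null
    }
}
```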
Detector: Field read before being initialized
This detector finds fields that are read in constructors before they're initialized. This error is often caused by mistakenly using a field's name instead of a constructor argument -- although not always, as Listing 3 shows:
Listing 3. Reading a field in a constructor before it's initialized
1 public class Thing {
2     private List actions;
3     public Thing(String startingActions) {
4         StringTokenizer tokenizer = new StringTokenizer(startingActions);
5         while (tokenizer.hasMoreTokens()) {
6             actions.add(tokenizer.nextToken());
7         }
8     }
9 }
In this example, line 6 will cause a null pointer exception because the variable actions has not been initialized.
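The fix is to initialize the field before it is read; the size() method here is added purely so the sketch is checkable:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

class Thing {
    private List actions;

    Thing(String startingActions) {
        actions = new ArrayList();   // the missing initialization
        StringTokenizer tokenizer = new StringTokenizer(startingActions);
        while (tokenizer.hasMoreTokens()) {
            actions.add(tokenizer.nextToken());
        }
    }

    int size() {
        return actions.size();
    }
}
```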
These examples are only a small sampling of the types of problems that FindBugs detects (see Resources for more). At the time of this writing, FindBugs comes with a total of 35 detectors.
Getting started with FindBugs
To run FindBugs, you will need a Java Development Kit (JDK), version 1.4 or higher, although it can analyze the class files created by older JDKs. The first thing to do is download and install the latest release of FindBugs -- currently 0.7.3 (see Resources). Fortunately, the download and installation is pretty straightforward. After downloading the zip or tar, unzip it into a directory of your choosing. That's it -- the install is finished.
Now that it's installed, let's run it on a sample class. As is often the case with articles, I will speak to the Windows users and assume that those of the Unix persuasion can deftly translate and follow along. Open a command prompt and go to the directory in which you installed FindBugs. For me, that's C:\apps\FindBugs-0.7.3.
In the FindBugs home directory, there are a couple of directories of interest. The documentation is located in the doc directory, but more important for us, the bin directory contains the batch file to run FindBugs, which leads me to the next section.
Running FindBugs
Like most tools these days, you can run FindBugs in multiple ways -- from a GUI, from a command line, using Ant, as an Eclipse plug-in, and using Maven. I'll briefly mention running FindBugs from the GUI, but I'll primarily focus on running it from Ant and the command line. Partly that's because the GUI hasn't caught up with all of the command line options. For example, currently you can't specify filters to include or exclude particular classes in the UI. But the more important reason is because I think FindBugs is best used as an integrated part of your build, and UIs don't belong in automated builds.
Using the FindBugs UI
Using the FindBugs UI is straightforward, but a couple of points deserve some elaboration. As Figure 1 demonstrates, one of the advantages of using the FindBugs UI is the description provided for each type of detected problem. Figure 1 shows the description for the bug Naked notify in method.
Running FindBugs as an Ant task
Let's take a look at how to use FindBugs from an Ant build script. First copy the FindBugs Ant task to Ant's lib directory so that Ant is made aware of the new task. Copy FIND_BUGS_HOME\lib\findbugs-ant.jar to ANT_HOME\lib.
Now take a look at what you need to add to your build script to use the FindBugs task. Because FindBugs is a custom task, you'll need to use the taskdef task so that Ant knows which classes to load. Do that by adding the following line to your build file:
<taskdef name="FindBugs" classname="edu.umd.cs.findbugs.anttask.FindBugsTask"/>
After defining the taskdef, you can refer to the task by its name, FindBugs. Next you'll add a target to the build that uses the new task, as shown in Listing 4:
Listing 4. Creating a FindBugs target
1 <target name="FindBugs" depends="compile">
2   <FindBugs home="${FindBugs.home}" output="xml" outputFile="jedit-output.xml">
3     <class location="c:\apps\JEdit4.1\jedit.jar" />
4     <auxClasspath path="${basedir}/lib/Regex.jar" />
5     <sourcePath path="c:\tempcbg\jedit" />
6   </FindBugs>
7 </target>
Let's take a closer look at what's going on in this code.
Line 1: Notice that the target depends on compile. It's important to remember that FindBugs works on class files, not source files, so making the target depend on the compile target ensures that FindBugs will be run across up-to-date class files. FindBugs is flexible about what it will accept as input, including a set of class files, JAR files, or a list of directories.
Line 2: You must specify the directory that contains FindBugs, which I did using an Ant property like this:
<property name="FindBugs.home" value="C:\apps\FindBugs-0.7.3" />
The optional attribute output specifies the output format that FindBugs will use for its results. The possible values are xml, text, or emacs. If no outputFile is specified, then FindBugs prints to standard out. As mentioned previously, the XML format has the added advantage of being viewable within the UI.
Line 3: The class element is used to specify which set of JARs, class files, or directories you want FindBugs to analyze. To analyze multiple JARs or class files, specify a separate class element for each. The class element is required unless the projectFile element is included. See the FindBugs manual for more details.
Line 4: The auxClasspath element lists classes that the analyzed code depends on but that you don't want analyzed themselves; here, that is the Regex.jar library used by JEdit. This element is optional.
Line 5: If the sourcePath element is specified, the path attribute should indicate a directory that contains your application's source code. Specifying the directory allows FindBugs to highlight the offending source code when you view the XML results in the GUI. This element is optional.
That covers the basics. Let's fast forward several weeks.
Filters
You've introduced FindBugs to your team and have been running it as a part of your hourly/nightly build process. As the team has become more acquainted with the tool, you've decided that some of the bugs being detected aren't important to your team, for whatever reason. Perhaps you don't care if some of your classes return objects that could be modified maliciously, or maybe, like JEdit, you have a real, honest-to-goodness, legitimate reason to invoke System.gc().

You always have the option of "turning off" a particular detector. On a more granular level, you could exclude certain detectors from finding problems within a specified set of classes, or even methods. FindBugs offers this granular control with exclude and include filters, which are currently supported only in the command-line and Ant versions of FindBugs. As the name implies, you use exclude filters to exclude the reporting of certain bugs. The less popular, but still useful, include filters can be used to report targeted bugs only. The filters are defined in an XML file. They may be specified at the command line with an exclude or include switch, or by using the excludeFilter and includeFilter attributes in your Ant build file. In the examples below, assume that the exclude switch was used. Also note that in the discussion below I use "bugcode," "bug," and "detector" somewhat interchangeably.
Filters can be defined in a variety of ways:
- Filters that match one of your classes. These filters could be used to ignore all problems found in a particular class.
- Filters that match particular bugcodes in one of your classes. These filters could be used to ignore some bugs found in a particular class.
- Filters that match a set of bugs. These filters could be used to ignore a set of bugs across all of the analyzed classes.
- Filters that match particular methods in one of the analyzed classes. These filters could be used to ignore all bugs found in a set of methods for a class.
- Filters that match some bugs found in methods in one of the analyzed classes. You could use these filters to ignore some of the bugs found in a particularly buggy set of methods.
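Putting a few of those variations together, an exclude filter file might look something like this (the class names are invented, and the exact element names vary between FindBugs versions, so check the documentation for your release):

```xml
<FindBugsFilter>
  <!-- Ignore every bug reported in one class -->
  <Match>
    <Class name="com.example.GeneratedParser"/>
  </Match>
  <!-- Ignore only certain bug codes in one class -->
  <Match>
    <Class name="com.example.LegacyCache"/>
    <Bug code="EI,EI2"/>
  </Match>
  <!-- Ignore a bug code only in a specific method -->
  <Match>
    <Class name="com.example.Startup"/>
    <Method name="shutdown"/>
    <Bug code="Dm"/>
  </Match>
</FindBugsFilter>
```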
That's all there is to getting started. See the FindBugs documentation for more details on additional ways the FindBugs task can be customized. Now that we know how to set up a build file, let's take a closer look at integrating FindBugs into your build process.
Integrating FindBugs into your build process
You have several options when it comes to integrating FindBugs into your build process. You can always execute FindBugs from the command line, but more than likely you're already using Ant for your build, so using the FindBugs Ant task is the most natural. Because we've covered the basics of using the FindBugs Ant task earlier, I'll cover some of the reasons you should add FindBugs to your build process and discuss a few of the issues you may run into.
Why should I integrate FindBugs into my build process?
One of the first questions that's often asked is: why would I want to add FindBugs to my build process? While there are a host of reasons, the most obvious answer is that you want to make sure problems are detected as soon as your build is run. As your team grows and you inevitably add more junior developers to the project, FindBugs can act as a safety net, detecting identified bug patterns. I want to reiterate some of the sentiment expressed in one of the FindBugs papers. If you put enough developers together, then you're going to have bugs in your code. Tools like FindBugs certainly won't find all the bugs, but they'll help find some of them. Finding some now is better than your customers finding them later -- especially when the cost of incorporating FindBugs into your build process is so low.
Once you've stabilized which filters and classes to include, there's a negligible cost for running FindBugs, with the additional benefit that it detects new bugs. The benefit is probably even greater if you've written application-specific detectors.
Generate meaningful results
It's important to recognize that this cost/benefit analysis is only valid so long as you don't generate a lot of false positives. In other words, the tool's value is diminished if, from build to build, it is no longer simple to determine whether new bugs have been introduced. The more automated your analysis can be, the better. If fixing bugs means having to wade through a lot of irrelevant detected bugs, then you'll likely not use the tool very often, or at least not make good use of it.
Decide which set of problems you don't care about and exclude them from the build. Otherwise, pick a small set of detectors that you do care about and run just those. Another option would be to exclude sets of detectors from individual classes, but not others. FindBugs offers a lot of flexibility with its use of filtering, which should help you generate results that are meaningful to your team, which leads us to the next section.
Determine what you will do with the results of FindBugs
It may seem obvious, but I've worked with more teams than you might imagine who apparently add FindBugs-like tools to their builds for the pure joy of it. Let's explore this question in a bit more detail -- what should you do with your results? It's a difficult question to answer specifically because it has a lot to do with how your team is organized, how you deal with code ownership issues, and so on. However, here are some guidelines:
- You may want to consider adding the FindBugs results to your source code management (SCM) system. The general rule of thumb is don't put build artifacts into your SCM system. However, in this particular case, breaking the rule may be the right thing to do because it allows you to monitor the quality of the code over time.
- You may choose to convert the XML results file into an HTML report that you post on your team's Web site. The conversion can be carried out with an XSL stylesheet or script. Check the FindBugs Web site or mailing list for examples (see Resources).
- Tools like FindBugs can often turn into political weapons used to bludgeon teams or individuals. Try not to encourage that or let it happen -- remember, it's just a tool that's meant to help you improve the quality of your code. With that inspirational aside, in next month's installment I'll show you how to write custom bug detectors.
Summary
I encourage you to try some form of static analysis tool on your code, whether it's FindBugs, PMD, or something else. They're valuable tools that can find real problems, and FindBugs is one of the better ones for eliminating false positives. In addition, its pluggable architecture provides an interesting test bed for writing invaluable application-specific detectors. In Part 2 of this series, I'll show you how to write custom detectors to find application-specific problems.
Resources
- Download the latest version of FindBugs.
- The FindBugs site provides a full list of bugs with descriptions.
- Read more information about the Visitor pattern.
- Here's more information on the Byte Code Engineering Library.
- PMD is another powerful open-source static code analysis tool.
- In "The future of software development" (developerWorks, June 2003), Eric Allen discusses some of the current trends in software development and predicts what they may lead to in the coming years. Check out the rest of Eric's columns.
http://www.ibm.com/developerworks/java/library/j-findbug1/
Add clientListener dynamically? (601554, Aug 4, 2008 6:38 PM)
I would like to add a clientListener to input components via a phaseListener before the page is rendered. I've attempted this to no avail and now I'm wondering if it's feasible. If anyone could point me in the right direction, I would appreciate it.
Thanks,
Matt
This content has been marked as final.
1. Re: Add clientListener dynamically? (Frank Nimphius-Oracle, Aug 5, 2008 3:40 PM, in response to 601554)
Hi,
I once wrote an interesting blog entry about this
See "Configuring a client and server listener at designtime"
Frank
2. Re: Add clientListener dynamically? (601554, Aug 6, 2008 5:00 PM, in response to Frank Nimphius-Oracle)
Hi Frank,
Thanks for the reply... I would actually like to add a clientListener to every input component in my application at run time without binding. I've written the code to iterate the view and add the listener to the input components (which works fine), but I just need a hook to execute it prior to the first render. I've tried a phaseListener to no avail and now I'm looking into a custom viewHandler. Any suggestions?
Thanks,
Matt
3. Re: Add clientListener dynamically? (487442, Aug 6, 2008 5:41 PM, in response to 601554)
Hi Matt,
I didn't test what I'm going to propose, but it should work, in theory that is.
1. Create an Application class implementing the decorator pattern
2. Create an ApplicationFactory class implementing the decorator pattern
public class MyApplication extends Application {
    private Application wrapped;

    public MyApplication(Application wrapped) {
        this.wrapped = wrapped;
    }

    private UIComponent addClientListeners(UIComponent component) {
        if (component instanceof UIXInput) {
            // It's an input, add the listener.
            // Since there's no interface for the ClientListener purpose, we have to use reflection
            try {
                Method getClientListeners = component.getClass().getMethod("getClientListeners", (Class<?>[]) null);
                ClientListenerSet cls = (ClientListenerSet) getClientListeners.invoke(component, (Object[]) null);
                cls.addListener("event", "clientMethodCall");
            } catch (NoSuchMethodException e) {
                // The input component doesn't support ClientListener; either log, no-op or crash
            } catch (Exception e) {
                // Reflection failed (access or invocation error); log or rethrow
            }
        }
        return component;
    }

    // Override all methods, simply calling the wrapped.method version, except for the following:
    public UIComponent createComponent(String componentType) {
        return addClientListeners(wrapped.createComponent(componentType));
    }

    public UIComponent createComponent(ValueBinding componentBinding, FacesContext context, String componentType) {
        return addClientListeners(wrapped.createComponent(componentBinding, context, componentType));
    }

    public UIComponent createComponent(ValueExpression componentExpression, FacesContext context, String componentType) {
        return addClientListeners(wrapped.createComponent(componentExpression, context, componentType));
    }
}
3. Register the application factory in your faces-config.xml
public class MyApplicationFactory extends ApplicationFactory {
    private ApplicationFactory wrapped;
    private boolean decorated;

    public MyApplicationFactory(ApplicationFactory wrapped) {
        this.wrapped = wrapped;
        this.decorated = false;
    }

    public Application getApplication() {
        Application application = wrapped.getApplication();
        if (!decorated) {
            application = new MyApplication(application);
            setApplication(application);
            decorated = true;
        }
        return application;
    }

    public void setApplication(Application application) {
        wrapped.setApplication(application);
    }
}
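For the registration in faces-config.xml mentioned in step 3, the entry would look roughly like this (the package name is a placeholder):

```xml
<faces-config>
  <factory>
    <application-factory>com.example.MyApplicationFactory</application-factory>
  </factory>
</faces-config>
```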
Note that it's a pretty advanced JSF strategy, but it has the benefit of showing the powerful modularization mechanism used by the technology. Also note that it will fail if the application doesn't use JSF correctly, that is, if it directly instantiates components rather than using Application as the sole UIComponent factory. Therefore, as a best practice (actually the only one that always works), always use the Application class to instantiate your UIComponents. It's going to be annoying at first, but you'll be very glad you did it in the long run, I assure you.
Regards,
~ Simon
4. Re: Add clientListener dynamically? (601554, Aug 7, 2008 5:41 AM, in response to 487442)
Hi Simon,
Thank you for that strategy - it worked out perfectly and will definitely come in handy down the road!
Best Regards,
Matt
5. Re: Add clientListener dynamically? (Frank Nimphius-Oracle, Aug 7, 2008 5:44 AM, in response to 601554)
Matt,
What is the use case for this? As a best-practice recommendation, you should avoid coding in JavaScript for as long as you can (better performance). Also note that currently the application won't run if a clientListener is applied that points to a non-existent JS function
Frank
6. Re: Add clientListener dynamically? (601554, Aug 7, 2008 4:24 PM, in response to Frank Nimphius-Oracle)
Frank,
Here's a simplified use case: Our application has numerous forms with a lot of data. On value change of any field, the styling on the tab label must change to italic to indicate the form has not been saved, and a flag must also be flipped in the managed bean to indicate there has been a change. On close of a tab, the client must receive a "You have data that has not been saved, would you like to close the tab without saving?" dialog if the flag indicates there was a change.
I could accomplish this by adding a valueChangeListener to each component, but I wanted to avoid calling the server on value change of every field. Instead I implemented a clientListener that will only poke the server on the first field change.
Regards,
Matt
https://community.oracle.com/message/2693923
Several people have problems using the XML class to load and save their data. It can get confusing when an XMLNode at one level looks and acts the same as an XMLNode at another level. I loved the idea of XML, but it always seemed more work than it was worth to get it to work correctly.
tlhIn'toq posted an XML solution for a poster in this thread. I saw that code and fell in love with it. It was simple. It was elegant. It did exactly what it needed to do. It used serialization to remove a lot of the grunt work involving XML. It was also built specifically for the poster's class. That's when I had the idea of using Generics to make the code more powerful. Ever since then XML files bend to my bidding. All it takes is a little work on the front end. I'll show you how to do it.
Set Up
First, we need some classes to work with. Since the original solution was for XNA, I stuck with the video game theme. But I wrote this without XNA installed, nor is this solution only for game programmers. I used a copy of VS2010 and .NET 4.0 (but I'm sure it will work on earlier versions), all set up the way I normally work.
First I created an enum called ItemType:
public enum ItemType
{
    Weapon,
    Shield,
    Potion,
}
This is used by my two classes. The first is an Item:
public class Item
{
    #region Fields
    public ItemType Type;
    public int Strength;
    public String Name;
    #endregion

    #region Constructors

    #region Item()
    public Item()
    {
        Type = ItemType.Potion;
        Name = "";
        Strength = 0;
    }
    #endregion

    #region Item(Name, Type, Strength)
    public Item(string Name, ItemType Type, int Strength)
    {
        this.Name = Name;
        this.Type = Type;
        this.Strength = Strength;
    }
    #endregion

    #region Item(Item)
    public Item(Item old)
    {
        this.Name = old.Name;
        this.Type = old.Type;
        this.Strength = old.Strength;
    }
    #endregion

    #endregion

    #region ToString
    public override string ToString()
    {
        StringBuilder builder = new StringBuilder();
        builder.Append("\tName: ");
        builder.Append(Name);
        builder.Append("\n\tType: ");
        builder.Append(Type.ToString());
        builder.Append("\n\tStrength: ");
        builder.Append(Strength);
        return builder.ToString();
    }
    #endregion
}
A very important note. Any class that uses this trick must have a default, parameterless constructor. It just does.
My second class is a Unit:
public class Unit
{
    #region Fields
    public Item Weapon;
    public Item Shield;
    public List<Item> Bag;
    public string Name;
    #endregion

    #region Constructors

    #region Unit()
    public Unit()
    {
        Bag = new List<Item>();
    }
    #endregion

    #region Unit(Name)
    public Unit(string Name) : this()
    {
        this.Name = Name;
    }
    #endregion

    #endregion

    #region ToString
    public override string ToString()
    {
        StringBuilder builder = new StringBuilder();
        builder.Append("Name: ");
        builder.Append(Name);
        builder.Append("\nWeapon:\n");
        builder.Append(Weapon);
        builder.Append("\nShield:\n");
        builder.Append(Shield);
        builder.Append("\nBag:\n");
        foreach (Item item in Bag)
        {
            builder.Append(item);
            builder.Append("\n\n");
        }
        return builder.ToString();
    }
    #endregion
}
Another note. This project was created for testing purposes. I created a couple of public fields and some basic constructors. Production code should use private fields, properties, better constructors (but at least a default one!), and methods. I left them out for space consideration. I added the ToString method for later use.
Serialization
This is the bit of front end you need to do to get the code to work. This is easy work, though. If you know how to serialize already, you can skip this section.
What we want is a way for the code to take apart the class and recombine it. This can be used for many things. It is used, for example, when writing to a file with BinaryWriter. It's also used a lot when working with ports. Your class needs a way of breaking itself down, and building itself back up from those parts. First, you need to add a using System.Runtime.Serialization; directive to both the Item and the Unit classes (or just once if you have them in the same file).
Let's break down our Item class. The class has three properties, a number of constructors, ToString, and any other methods you wanted. The important bits we need are the properties. The other information can be recreated if we have this information. We'll create a method to pry this information out. The method name will be GetObjectData. It will return void. It will take two parameters, SerializationInfo and StreamingContext. This signature needs to be exact, so the serializers can call it.
public void GetObjectData(SerializationInfo info, StreamingContext ctxt) { }
In the method, we want to give the SerializationInfo any info we need to serialize. SerializationInfo has a method called AddValue that will help us do that. We pass two parameters, the name of the property and the value. So add the following to GetObjectData:
info.AddValue("Name", Name);
info.AddValue("Type", Type);
info.AddValue("Strength", Strength);
It doesn't matter what names we give the properties, but it is easier to debug if you give it a simple name to remember and type.
You are done with the first part. You took apart an Item. Now you just need to put it back together. For this, we need a constructor. It will have the same parameters as GetObjectData.
public Item(SerializationInfo info, StreamingContext ctxt) { }
The SerializationInfo is the same object that we just passed our properties. We need to get them back. We do this by calling the GetValue method. This method also takes two parameters, the name of the property and the property type. It returns an object. Once we get the object from GetValue, we need to cast it to the proper type. So we have Name = (string)info.GetValue(. This will set our Name property to the result of GetValue after we turned it into a string. For the first parameter, we use the same string we used when adding the value, "Name". Name = (string)info.GetValue("Name", . For the second parameter, we need to tell it what kind of object it is looking for. So we pass it the typeof(string). Now we have one line to get our Name out of serialization.
Name = (string)info.GetValue("Name", typeof(string));
We just need to do the same thing for the other properties we want. The order doesn't matter. That is why there is a name, so you can add or remove them in any order.
public Item(SerializationInfo info, StreamingContext ctxt)
{
    Name = (string)info.GetValue("Name", typeof(string));
    Type = (ItemType)info.GetValue("Type", typeof(ItemType));
    Strength = (int)info.GetValue("Strength", typeof(int));
}
Serialize Unit
We need to do the same thing for our Unit class. It is important that any class you want to convert to XML, including any properties that are themselves classes, is serializable. Otherwise the serializer will not be able to convert the class.
GetObjectData is the easiest to do. Simply add all of our properties. The Unit class has four properties.
public void GetObjectData(SerializationInfo info, StreamingContext ctxt)
{
    info.AddValue("Name", Name);
    info.AddValue("Weapon", Weapon);
    info.AddValue("Shield", Shield);
    info.AddValue("Bag", Bag);
}
Wait, Bag is a complex property. It is a List of Items. What are we going to do? Ignore it! Serializers know about Collections. All you have to worry about is telling the Serializer that it is a collection. You don't have to here, it will figure out what object it is when you hand it over.
The next step is to write a new constructor. If you understood the last section, you should be able to build this:
public Unit(SerializationInfo info, StreamingContext ctxt)
{
    Name = (string)info.GetValue("Name", typeof(string));
    Weapon = (Item)info.GetValue("Weapon", typeof(Item));
    Shield = (Item)info.GetValue("Shield", typeof(Item));
}
We're missing the information on Bag. How do we get it from info? We tell it it's a collection! typeof(List<Item>) How do we store it into our Bag property? We cast it to a collection! Add this line to the constructor:
Bag = (List<Item>)info.GetValue("Bag", typeof(List<Item>));
You just turned the two classes into serializable classes. It takes a bit of effort, but it will make everything work in the end. You can now pass those around to readers and writers and not do any complex work.
Generics and XML
I will make this XML thing easy. Create a new class. Call it MyXML. Now we'll create two static methods, one to read objects and one to save objects.
Now we'll use generics. Generics is what collections like List use. We make one method, and that method can be used for all sorts of datatypes. So that means we will create a class that saves and gets objects from XML files. This will work for the Unit class, but it will also work for any other class that has been properly serialized.
We need three using statements for this class.
using System.Xml.Serialization;
using System.Xml;
using System.IO;
We'll create a method called SaveObject. It will be public static, and return a boolean value to show whether it worked. It will take two parameters, an object to save and a filename to save it. To create a generic, we use a placeholder for the datatype. I'll use the letter 'T' (the normal convention), but you can use any letter.
public static bool SaveObject<T>(T obj, string FileName) { }
After the method name, I inserted <T>. This tells the compiler that this is a generic and can be used with any datatype. The parameter obj is of type T. So the compiler knows that if a string is passed, than type T is string. If Unit is passed, type T is Unit. Now we can use T as if it were any other datatype.
The first step is to create an XMLSerializer. The constructor of the XMLSerializer requires a datatype. Well, we have the object to send it, so we just use the parameter obj. The GetType method is defined on Object, so it will work with any C# object.
var x = new XmlSerializer(obj.GetType());
Next, we create a StreamWriter. This will write the actual XML file. We pass it the FileName we got as a parameter. We also pass it false. The second parameter is a boolean value of whether to append the file. False means we'll overwrite the file, if it exists already.
using (var Writer = new StreamWriter(FileName, false))
The only thing we need to do is call the Serialize method of x. We'll pass it the Writer and obj. then return true.
    x.Serialize(Writer, obj);
}
return true;
I wrapped it in a try/catch block. We should check to verify that the FileName is a full path that has write access. There will also be an error if the object passed isn't serializable. The full function is as follows.
public static bool SaveObject<T>(T obj, string FileName)
{
    try
    {
        var x = new XmlSerializer(obj.GetType());
        using (var Writer = new StreamWriter(FileName, false))
        {
            x.Serialize(Writer, obj);
        }
        return true;
    }
    catch
    {
        return false;
    }
}
Now we create a similar function that does the reverse. It also returns a boolean value so we know if it worked. We pass it the object to save to, and the filename. The object needs to be by reference. That way the function can edit it, and the caller knows the function will edit it. We create an instance of FileStream using (FileStream stream = new FileStream(FileName, FileMode.Open)). This uses the FileName passed, and says that the stream will be open (instead of say create).
Now create an XMLTextReader XmlTextReader reader = new XmlTextReader(stream). This will use the stream to read an XML file. Now create an XMLSerializer var x = new XmlSerializer(obj.GetType()). Again we call the GetType on Object so the XMLSerializer knows what type of object it will be. Next we call Deserialize and cast into our object type T obj = (T)x.Deserialize(reader);. The full function is as follows.
public static bool GetObject<T>(ref T obj, string FileName)
{
    try
    {
        using (FileStream stream = new FileStream(FileName, FileMode.Open))
        {
            XmlTextReader reader = new XmlTextReader(stream);
            var x = new XmlSerializer(obj.GetType());
            obj = (T)x.Deserialize(reader);
            return true;
        }
    }
    catch
    {
    }
    return false;
}
You may think you want to return an object of type T and just pass it a FileName. This won't work. To call obj.GetType(), obj needs to be instantiated. I don't know of a way to generically create an obj of type T. Since we need to pass it an instantiated obj, we might as well pass by reference and do the work there.
Putting It Together
So we had to add one function and one constructor for each class we want to turn into XML. We also created a new class, but made it generic so you can use it over and over again in different projects.
It is easy to use. First, we create a Unit and a FileName.
string FileName = "MyUnit.XML";
Item MyWeapon = new Item("Sword", ItemType.Weapon, 10);
Item MyShield = new Item("Shield", ItemType.Shield, 8);
Item MyPotion = new Item("Health Potion", ItemType.Potion, 5);
Item BackUpWeapon = new Item("Ax", ItemType.Weapon, 7);
Unit Dan = new Unit("Dan");
Dan.Weapon = MyWeapon;
Dan.Shield = MyShield;
Dan.Bag.Add(MyPotion);
Dan.Bag.Add(BackUpWeapon);
To turn it into XML, we just call the function
MyXML.SaveObject(Dan, FileName);. That's it. You can check the XML file.
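For reference, and assuming default XmlSerializer behavior, the resulting MyUnit.XML should come out looking roughly like the following (element names come from the field names and follow declaration order; I am sketching this from memory, so details such as namespace declarations may differ):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Unit>
  <Weapon>
    <Type>Weapon</Type>
    <Strength>10</Strength>
    <Name>Sword</Name>
  </Weapon>
  <Shield>
    <Type>Shield</Type>
    <Strength>8</Strength>
    <Name>Shield</Name>
  </Shield>
  <Bag>
    <Item>
      <Type>Potion</Type>
      <Strength>5</Strength>
      <Name>Health Potion</Name>
    </Item>
    <Item>
      <Type>Weapon</Type>
      <Strength>7</Strength>
      <Name>Ax</Name>
    </Item>
  </Bag>
  <Name>Dan</Name>
</Unit>
```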
To pull it out of XML is just as easy.
Unit Dan2 = new Unit();
MyXML.GetObject(ref Dan2, FileName);
I hope this tutorial has been helpful. I know this has helped me in my coding. I included a .zip of my project for easier viewing.
Attached File(s)
Robin19.zip (34.38K)
Number of downloads: 584
http://www.dreamincode.net/forums/topic/191471-reading-and-writing-xml-using-serialization/page__pid__1121835__st__0
I've been thinking for a while about a series of blog posts I'd like to write explaining various Entity Framework concepts, particularly those related directly to writing code using the framework--by that I mean that I will concentrate more on using the API to write object-oriented programs which do data access and less on some other kinds of topics like writing and debugging the XML mapping, using the designer, etc. Some of this will be pretty basic, but many of them are the kinds of things that come up fairly often in our own internal discussions as well as on the forums and such. Hopefully this will be valuable, please let me know what you think, if there are topics you would especially like me to cover, or if there's anything I can do to make it more useful.
I'm going to use a common model/database for my samples which I call DPMud. It's something that a few friends and I created and maintain as a fun side-project which is an old-style multi-user text adventure (MUD) built using the entity framework. The architecture is not something that will scale well (every real-time event in the system persists to the database, and users learn about events by querying the database), but it works reasonably well for a few users at a time, it's an environment which is very convenient for us to develop in, and it has turned out to be a decent example for exercising all sorts of entity framework functionality. It even does a reasonable job of stressing the system.
In order to help you get a feel for the model, here's an abbreviated version of the diagram:
Essentially I have a many-to-many self relationship for rooms, but because I need a "payload" on that relationship (some properties on the relationship itself rather than the rooms), I chose to create another entity type called Exit and then have two 1-many relationships between Rooms and Exits--one of those relationships represents all of the exits out of a room, and the other relationship represents all of the entrances to the room. The other parts of the model are actors which are related to rooms, and items which can be related either to a room or an actor. The model doesn't have a good way to enforce that an item must be related either to a room or an actor but not both or neither, but I've added check constraints to the DB which handle that enforcement. In the real game there are also events which relate to each of the other entity types as well as a large inheritance hierarchy of actor types and event types among other things, but those make the model diagram very hard to read, so I've left them out.
This diagram corresponds to my CSDL file which is the XML file describing my conceptual model. If you'd like to look that over, you can find it here. Well, that CSDL file actually has a few things not present in the diagram, but it is simplified considerably from the full model used in our game. For completeness, you can also have a look at the SSDL and the MSL. I know I'm a bit "old school" when it comes to these things since I've been working on the EF since well before we had a designer (funny to say that about a not-yet-released project), but I've authored each of these files by hand rather than using the designer. I'll also point out, in case some of you are unaware, that the designer uses a file called EDMX to store all of the metadata about an EF model--this one file includes the CSDL, MSL and SSDL just in separate sections within it. When it comes to runtime, the system requires the three separate files, and the designer projects automatically create them in your application output directory. So, I will talk about the three files separately and even play some tricks that don't work with the designer today rather than describing the default designer experience.
The one other thing I'll say about DPMud is that it was built from the beginning as a rich Windows client which talks directly to the database. So most of our initial apps will work that way. This is nice, because these are some of the easiest apps to write, but as we go through the series we probably will also spend some time exploring other app architectures like web services and web apps.
OK. So enough about my funny little sample. Let's cover some EF concepts, shall we? There are three things I'd like to talk about in this post which are all related to getting any basic app up and running (once you are done creating the metadata for the model, generating the basic entity classes, etc.):
using System;
using System.Data.EntityClient;
using System.Data.SqlClient;
using DPMud;
namespace DPMud
{
public partial class DPMudDB
{
[ThreadStatic]
static DPMudDB _db;
static string _connectionString;
public static string ConnectionString
{
get
{
if (_connectionString == null)
{
// insert code from above which uses connection string builders here //
_connectionString = entityBuilder.ToString();
}
return _connectionString;
}
}
public static DPMudDB db
{
get
{
if ((_db == null) && (ConnectionString != null))
{
_db = new DPMudDB(ConnectionString);
}
return _db;
}
}
}
}
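The "insert code from above" placeholder in the ConnectionString getter refers to connection-string builder code that was omitted from this excerpt. For orientation, the EntityConnection string such code ultimately produces has roughly this shape (the metadata paths, server, and database names below are placeholders, and the exact format may differ across EF builds):

```
metadata=res://*/DPMud.csdl|res://*/DPMud.ssdl|res://*/DPMud.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.;Initial Catalog=DPMud;Integrated Security=True"
```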
So what do we get from all this? Well, once we have this foundation laid, writing a program which accesses the database using our strongly typed context becomes pretty darn simple (even if we have multiple threads), and that program can itself just be a single EXE with the entire model and all of its metadata in that one assembly (this is what we do for DPMud—nothing quite like drag-and-drop deployment).
Here’s the code for a simple program which prints out a list of all the rooms and items in the DPMud database but does so from two threads running concurrently. Note: you can set break points in each of the loops and step through the program seeing things interleave nicely, but if you actually run the program there’s a good chance all of the rooms will print together and the items the same because the total time required is relatively small so there may not be that much time-slicing between the threads.
using System;
using System.Threading;
using DPMud;
namespace sample1
{
class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(new ThreadStart(Program.PrintItems));
thread.SetApartmentState(ApartmentState.STA);
thread.Start();
foreach (Room room in DPMudDB.db.Rooms)
{
Console.WriteLine("Room {0}", room.Name);
}
}
static void PrintItems()
{
foreach (Item item in DPMudDB.db.Items)
{
Console.WriteLine("Item {0}", item.Name);
}
}
}
}
Fun huh? OK. Yes I admit it, I’m a total geek, but I think it’s pretty fun.
Until next time,
Danny
http://blogs.msdn.com/b/dsimmons/archive/2007/09/15/concepts-part-i-connection-strings-context-lifetimes-metadata-resources.aspx
As per 5.2.8-1 we could import RTTI objects into the std namespace by adding the
following to typeinfo.h:
namespace std {
using ::type_info;
using ::bad_cast;
} // namespace std
Suggestion based on use of RTTI in boost any container (the any container stores
arbitrary types in a typesafe container).
Richard
http://www.digitalmars.com/archives/cplusplus/1838.html
Hi,
> Just a short question ... what problems are you actually having with Jira? Is it the
> availability of an instance? The migration of issues already reported to Adobe?
It's the migration. Infrastructure has recently upgraded to a newer version of JIRA and Alex
has supplied them with a new dump that should be compatible with it. As all Apache projects
use the same instance of JIRA there are only limited times (only weekends?) when this very large
import can be done.
> For me the change of Flex moving to Apache offers more chances than risks.
Agree there. Having a community of very smart developers able to commit to it is a positive
change.
Thanks,
Justin
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201206.mbox/%[email protected]%3E
Text processing is one of the most common tasks in application
development. Whether it is a Java Servlet or a VOIP application, the
conversion from a raw text-based input message to a machine-readable
internal representation almost always requires parsing (or
tokenization), which, in its current form, refers to a process of
extracting tokens (the smallest unit of relevant textual information)
and storing them in null-terminated memory blocks, also known as
strings. Over the years, people have invented various automation
techniques and tools, e.g. regular expressions and Lex, to reduce the
complexity of manual parsing. Proven both useful and stable, those
techniques and tools have stood the test of time. As a result, the
current framework of text processing is generally considered to be
fairly well-established.
In this article, I propose an alternative to the existing
"extractive" style of parsing that sits at the lowest layer
of the text processing "stack." Furthermore, I intend to
show that an arguably small perspective change on the representation
of a token can lead to some interesting properties not readily
available from today's XML processing techniques. In fact, applying
this technique to XML processing can yield some of the advantages
being promoted by "binary XML" advocates while still
retaining the XML 1.x textual format.
As the logical first step, people use a piece of code called a
"tokenizer" to split input text into little tokens. Under
the traditional text processing framework, the tokenization step
usually involves a change in the storage location of the token
content. One usually performs the following steps to tokenize
in C:
1. Detect the token length (n) in the source document.
2. Dynamically allocate a small memory block of size n+1.
3. Copy the content of the token from the source document to the
allocated memory block and terminate the token string with a null
character.
4. (Typically) discard the source document.
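The four steps above amount to only a few lines of C. The sketch below is minimal; `extract_token` is a hypothetical helper name, not from any library:

```c
#include <stdlib.h>
#include <string.h>

/* Extractive tokenization: copy a token of length n out of the
 * source document into its own null-terminated memory block. */
char *extract_token(const char *source, size_t offset, size_t n)
{
    char *token = malloc(n + 1);       /* step 2: allocate n+1 bytes   */
    if (token == NULL)
        return NULL;
    memcpy(token, source + offset, n); /* step 3: copy the content...  */
    token[n] = '\0';                   /* ...and null-terminate it     */
    return token;                      /* step 4: source may be freed  */
}
```

Each token ends up owning its own heap block, which is exactly the memory behavior the rest of this article questions.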
Consider the following snippet of an HTTP header as
an example.
Accept: */*
Accept-Language: en-us
Connection: Keep-Alive
Host: localhost
Referer:
After the tokenization step, we have created ten null-terminated
string objects, each representing a field in this HTTP header
fragment. From this point on, subsequent post processing steps
requiring read access would assume those tokens to be the atomic
information storage units.
"Accept\0"
"*/*\0"
"Accept-Language\0"
"en-us\0"
"Connection\0"
"Keep-Alive\0"
"Host\0"
"localhost\0"
"Referer\0"
"\0"
However, we would like to observe that, with some help from
pointer arithmetic, the aforementioned tokenization step isn't
necessarily the only way for one to gain read-access to the content
of a token. Rather than extracting tokens out of the source document,
one can instead choose to record a pair of integers representing the
starting offset value and length for each token as appeared in the
source document, while maintaining the source document in
memory.
Using the same HTTP header fragment, the new output, consisting of
the source document and an array of integers, is shown below. As an
example, we use the first pair of integers (0, 6) to describe the
token "Accept", indicating that it starts at offset 0 of the
source document, and is 6 characters in length.
Accept: */*
Accept-Language: en-us
Connection: Keep-Alive
Host: localhost
Referer:
0, 6
9, 3
13, 15
30, 5
36, 10
48, 10
59, 4
65, 9
75, 7
84, 25
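A tokenizer of this style can be sketched in a few lines of C. The delimiter set below (spaces, colons, CR/LF) is an assumption chosen to suit the HTTP header example; a real tokenizer would follow the grammar of the format at hand, and the exact offsets it records depend on the input's spacing:

```c
#include <stddef.h>
#include <string.h>

typedef struct { size_t offset, length; } Token;

/* Non-extractive tokenization: record (offset, length) pairs instead
 * of copying token content.  The source document is left untouched. */
size_t tokenize(const char *src, Token *out, size_t max_tokens)
{
    const char *delims = " :\r\n";
    size_t n = 0, i = 0, len = strlen(src);
    while (i < len && n < max_tokens) {
        while (i < len && strchr(delims, src[i]))  /* skip delimiters */
            i++;
        size_t start = i;
        while (i < len && !strchr(delims, src[i])) /* scan the token  */
            i++;
        if (i > start) {
            out[n].offset = start;
            out[n].length = i - start;
            n++;
        }
    }
    return n;
}
```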
This "non-extractive" style of tokenization has some
interesting properties:
One can remove, add or update any token without
reserializing the entire file. Using the
traditional "extractive" style of parsing, one may need
to perform numerous string concatenations in order to compose the
updated document. The new "non-extractive" style allows
one to directly copy the content from unmodified regions. In the
example above, if one wants to replace the token
"localhost", whose offset and length values
are 65 and 9 respectively, one simply:
a) copies the content from the beginning of the document
up to offset value 64,
b) appends the new token content, then
c) completes the operation by appending the remaining
"unchanged" content in the source document starting from
offset value 74 (65+9).
When there are a large number of
tokens in the source document, the performance improvement
resulting from this non-extractive style of parsing can be quite
significant as compared to the extractive style.
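The three steps a)–c) can be sketched in C as follows. `replace_token` is a hypothetical helper; a production version would likely reuse buffers rather than allocate per edit:

```c
#include <stdlib.h>
#include <string.h>

/* Replace one token, identified by (offset, length), without touching
 * the rest of the document: copy the prefix, append the replacement,
 * then append the unchanged suffix.  Caller frees the result. */
char *replace_token(const char *src, size_t offset, size_t length,
                    const char *replacement)
{
    size_t src_len = strlen(src);
    size_t rep_len = strlen(replacement);
    char *out = malloc(src_len - length + rep_len + 1);
    if (out == NULL)
        return NULL;
    memcpy(out, src, offset);                    /* a) prefix      */
    memcpy(out + offset, replacement, rep_len);  /* b) new token   */
    memcpy(out + offset + rep_len,               /* c) suffix      */
           src + offset + length, src_len - offset - length);
    out[src_len - length + rep_len] = '\0';
    return out;
}
```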
One can remove, add or update a group of adjacent tokens
without reserializing the entire file. Similar to #1, one can treat a group of adjacent
tokens as a single large token by making calculations based on the
starting offset and length values of the first and last token of the
group.
One can address any fragment of the source message by a
pair of integers (offset + length). Because a
fragment in a source message consists of a group of adjacent tokens,
one can address, or even take out, the fragment from the source
message by calculating the starting offset and length of the fragment
(as in #2).
With #3, one can efficiently pull out and splice together
fragments from multiple source messages to compose new
messages. An interesting application of #3 is
that one can efficiently compose a new document by appending together
fragments of several documents, as long as each document is tokenized
"non-extractively."
The original text message is fully preserved and
dealing with whitespace is easier. Because one maintains the original document in
memory, no information is lost after the "non-extractive"
tokenization step. To remove leading or trailing whitespace,
one simply recalculates the starting offset and length of a
"trimmed" token.
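Trimming then touches only the two integers, never the document bytes. A minimal C sketch of the idea:

```c
#include <ctype.h>
#include <stddef.h>

typedef struct { size_t offset, length; } Token;

/* "Trimming" under non-extractive tokenization: no bytes move; the
 * token's offset and length are simply recalculated so that leading
 * and trailing whitespace fall outside the recorded range. */
void trim_token(const char *src, Token *t)
{
    while (t->length > 0 && isspace((unsigned char)src[t->offset])) {
        t->offset++;                 /* shrink from the front */
        t->length--;
    }
    while (t->length > 0 &&
           isspace((unsigned char)src[t->offset + t->length - 1]))
        t->length--;                 /* shrink from the back  */
}
```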
One can write a pair of integers for every token in the
source message into a binary file that can be persisted on
disk. To understand this property, observe that
"extractive" tokenization requires the use of absolute
pointers that are difficult to make persistent, whereas
"non-extractive" tokenization uses "soft" pointers
(starting offsets) and relies on runtime pointer arithmetic to gain
access to the token content.
With #6, pre-parsing a text message, or "parse once,
use many times," becomes possible. When you load both the
source message and the binary file (containing the offset and length of
each token), you don't have to parse every time you process the source
message in a read-only manner. An immediate benefit is that one can
generate the binary file and save it along with the source document
prior to any ensuing post-processing.
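Because the pairs are plain integers, persisting them is a straightforward binary dump. This is a minimal sketch under assumptions: `save_tokens`/`load_tokens` are hypothetical names, and real code would also want a file header with a version and token count:

```c
#include <stdio.h>
#include <stddef.h>

typedef struct { size_t offset, length; } Token;

/* "Parse once, use many times": the (offset, length) pairs contain no
 * absolute pointers, so they can be written to disk verbatim and
 * reloaded later alongside the unmodified source document. */
int save_tokens(const char *path, const Token *tokens, size_t count)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    size_t written = fwrite(tokens, sizeof(Token), count, f);
    fclose(f);
    return written == count ? 0 : -1;
}

size_t load_tokens(const char *path, Token *tokens, size_t max_count)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return 0;
    size_t n = fread(tokens, sizeof(Token), max_count, f);
    fclose(f);
    return n;
}
```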
Let's examine how non-extractive parsing works in practice. As an
example, consider how to compare a token with a known string value.
In C's <string.h> library (shown below), we can
straightforwardly use the function "strncmp()"
for that purpose.
#include <string.h>
int strncmp(const char *s1, const char *s2, size_t n);
Suppose sm points to the source message and
s1 points to a known string value to be compared
against; one simply sets s2 to
(sm + starting_offset)
and n to the length of the token. In other words,
support for comparing both "extractive" tokens
(i.e. strcmp) and "non-extractive" tokens
(i.e. strncmp) has existed from very early
on. Yet most string-to-data conversion macros and functions,
e.g. atoi, atof and Java's
parseInt, assume tokens in the "extractive" sense. To
support the new "non-extractive" tokenization, one can
create a mirror set of functions,
e.g. ne_atoi, ne_atof
and ne_parseInt (ne stands
for non-extractive). We compare the function signatures
of the two sets in the table below.
  Extractive                    Non-extractive
  ----------                    --------------
  atoi(char* s1)                ne_atoi(char* sm, int offset, int length)
  atof(char* s1)                ne_atof(char* sm, int offset, int length)
  parseInt(String s1)           ne_parseInt(String sm, int offset, int length)
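As an illustration, a ne_atoi along these lines might look as follows. This is a minimal sketch of the idea only (digits and an optional sign, no overflow handling); `ne_atoi` is the article's hypothetical naming, not a standard library function:

```c
#include <stddef.h>

/* A non-extractive atoi: read the integer directly from the source
 * buffer at (offset, length), with no intermediate null-terminated
 * copy of the token. */
int ne_atoi(const char *sm, size_t offset, size_t length)
{
    int value = 0, sign = 1;
    size_t i = offset, end = offset + length;
    if (i < end && (sm[i] == '-' || sm[i] == '+')) {
        if (sm[i] == '-')
            sign = -1;
        i++;
    }
    for (; i < end && sm[i] >= '0' && sm[i] <= '9'; i++)
        value = value * 10 + (sm[i] - '0');
    return sign * value;
}
```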
Because XML is a text-based markup language, a
"non-extractive" style of XML processing will possess the
following properties:
One can remove, add, or update any token in an XML
document without reserializing the entire document. Notice
that this comes straight from the first property for
"non-extractive" parsing. Using current
"extractive" XML processors, such as DOM, SAX or PULL, a
simple update can potentially require that the entire document be
taken apart into tokens before putting those tokens back. When most
of the document isn't relevant to the modification, redundant
re-serialization can incur significant performance overhead.
One can remove, add or update any element without
reserializing the whole thing every time. An
element in an XML document consists of a group of adjacent tokens,
such as the starting/ending tags, its text content, attributes and
child elements. Applying "non-extractive" tokenization
allows one to, similar to #1, avoid reserialization on irrelevant
parts of the document associated with "extractive"
tokenization. This, again, buys performance.
One can address any fragment of an XML document by a
pair of integers (offset + length). Those two
integers can be calculated from the offset and length of both the
first and the last tokens in the fragment. Also the fragment
"descriptor" (consisting of those two integers) is
persistent, meaning that one can potentially save the descriptor on
disk for later use.
With #3, one can efficiently pull out and splice
together fragments from multiple XML messages to compose a new XML
document. Assuming we have a fragment
"descriptor" for each document of interest, we can
efficiently compose a new XML document by concatenating those
fragments like buffers.
The original XML message is fully preserved and dealing
with whitespace is easier. Whitespace handling has been a
source of controversy among various "extractive" style
XML processor implementations. A non-extractive XML processor has
an option that extractive ones don't have: because it
maintains the source document in memory, it allows one to recover
insignificant whitespace even in the worst case.
One can write a pair of integers
for every token in the XML document into a binary file that can be persisted
on disk. In other words, one now has the option to persist
the "parsed state" on disk.
With #6, preparsing an XML message, or "parse once,
use many times," becomes possible. When one loads both the
XML document and the binary file into memory, one can directly process
the XML document. If any subsequent processing involves only read
access, then one can potentially parse the document once and use it
many times. An XML document shipped along with its parsed state can
also help reduce the processing overhead incurred by network
hops and intermediaries.
In addition, one can combine some of the above properties to create
new properties and applications. In his article,
Steven J. Vaughan-Nichols wrote that "... Yet another problem is
that there's no way to pull out a single string of data, say the
contents of a field, from an XML document without having to retrieve
and parse the entire document." By combining #1, #6 and #7, one
comes up with a pretty good solution to the problem.
There has been ongoing debate on the topic of binary XML as
manifested by the binary XML workshop held by W3C last September. Poor
performance of XML processing is often mentioned as one of the reasons
behind the recent effort to look for an optimized infoset
serialization format. One way to look at the "parse once, use
many times" property afforded by a non-extractive parsing style
is that it can potentially provide a pre-parsed XML that is also
backward compatible with XML--in other words, one loses no advantages
inherent to XML, e.g. flexibility and human readability. As an
example, an XML search engine using "non-extractive" style
pre-parsed XML could allow users to query a large set of XML documents
across the Internet using XPath at significantly better throughput
than today's XML processing technology, even higher than that of many
proposed binary XML formats.
I have proposed a new "non-extractive" style of
tokenization for text processing purposes. I've also outlined a few of
its properties in the context of XML processing. As XML finds more
uses as an open document/data encoding format, people will
increasingly demand more efficient and sophisticated XML processing
capabilities. Although the basic concept of non-extractive parsing is
simple, it can potentially lead to a new direction in thinking about
XML processing.
http://www.xml.com/pub/a/2004/05/19/parsing.html
Though the title is from the famous story by O. Henry, this article has nothing to do with the plot of that story and can also be titled:
The examples for this article fall into two separate groups, but because they use the same programming technique I decided
to unite them under one title. These examples are similar to some examples from the book World of Movable Objects, and I would say that the set of
examples in the book has more details. I also have to admit that there is more logic in the explanation in the book, which you can find at.
In addition to the book and the huge demo program with all its code, you can
find there an additional document with descriptions of all the methods of MoveGraphLibrary.dll. This library is needed to start the accompanying application. If you see in the text below any mention of source files
which are not included in the project accompanying this article, then these are files from the project accompanying the book.
Let’s start with some general rules for all the further examples.
Any movement of any object is organized
with only three mouse events: MouseDown, MouseMove, and
MouseUp. Movements are supervised by a special object of the Mover class, and the method for each of these three events
contains only one call to one or another method of the Mover class. Everything is pretty easy and is explained at
the very beginning of the mentioned book World of Movable Objects.
Often enough movements have some restrictions; for
example, there can be a limitation on the size of a particular object, or an
object may be allowed to move only inside some area. In general, when there is
some limitation on the movement, an object can be stopped while the mouse
cursor continues to move. In the examples of this article there is a
difference in this particular situation: when an object has to stop because of
some restriction, the mouse cursor also stops and stays at the same spot
of the object. It looks as if the cursor is adhered to the spot where the object
was originally pressed with the mouse.
The first group of examples deals with the movement of
objects along predefined tracks; these tracks can be shown or hidden from
view, but for the algorithm it doesn't matter at all. Let us start with a
preliminary example of moving a simple object – a small colored spot – along
a segment of a straight line or along an arc.
There are two straight lines and two arcs in Form_SpotsOnLinesAndArcs.cs (figure 1). There is a small colored spot on each of these lines
(straight or curved) and each spot can move only along its trail and only between the ends of this trail. Usually it is easier to deal with the straight
lines, so let us start with the SpotOnLineSegment class.
public class SpotOnLineSegment : GraphicalObject
{
Form form;
Mover supervisor;
PointF m_center;
int m_radius;
SolidBrush brush;
PointF ptEndA, ptEndB;
float dxMouseFromCenter, dyMouseFromCenter;
An object of the SpotOnLineSegment class is characterized by the position of its central
point (m_center), radius (m_radius), and
the brush (brush) with which it is painted. Such a colored spot is
allowed to move only along the straight segment, so the end points of this
segment must be declared (ptEndA and ptEndB). The
central point of the colored spot is placed exactly on the declared segment with
the help of the Auxi_Geometry.NearestPointOnSegment() method.
public SpotOnLineSegment (Form frm, Mover mvr, PointF pt0, PointF pt1,
PointF pt, int r, SolidBrush brsh)
{
form = frm;
supervisor = mvr;
ptEndA = pt0;
if (Auxi_Geometry .Distance (pt0, pt1) >= minSegmentLength)
{
ptEndB = pt1;
}
else
{
ptEndB = Auxi_Geometry .PointToPoint (pt0,
Auxi_Geometry .Line_Angle (pt0, pt1), minSegmentLength);
}
PointF ptBase;
PointOfSegment typeOfNearest;
double fDist;
m_center = Auxi_Geometry .NearestPointOnSegment (pt, ptEndA, ptEndB,
out ptBase, out typeOfNearest, out fDist);
m_radius = r;
brush = brsh;
}
The usual thing with the adhered-cursor technique is that next to nothing is done inside the form itself where an object is
moved; the only thing needed there is to inform the caught object about the initial position of the mouse cursor at the moment of the catch. This allows
the caught object to remember the relative position of the cursor at the initial moment in order to keep it unchanged throughout the movement.
private void OnMouseDown (object sender, MouseEventArgs e)
{
if (mover .Catch (e .Location, e .Button))
{
GraphicalObject grobj = mover .CaughtSource;
if (grobj is SpotOnLineSegment)
{
(grobj as SpotOnLineSegment) .InitialMouseShift (e .Location);
}
else if (grobj is SpotOnArc)
{
(grobj as SpotOnArc) .InitialMouseShift (e .Location);
}
}
}
Whatever else is needed to keep the cursor at the same relative position is done inside the particular class of the spot. If you needed
to restrict the movement of the spot to a horizontal or vertical segment, you could use a standard clipping operation, which imposes the restriction
not on the movement of the object but on the movement of the mouse cursor. Because of this, the proposed example demonstrates the new technique on inclined lines, for
which standard clipping of the mouse cursor cannot be used.
Whatever is needed to keep the spot on the segment can be found in the SpotOnLineSegment.MoveNode() method.
public override bool MoveNode (int i, int dx, int dy, Point ptM,
MouseButtons catcher)
{
bool bRet = false;
if (catcher == MouseButtons .Left)
{
bRet = true;
PointF ptCenterNew = new PointF (ptM .X - dxMouseFromCenter,
ptM .Y - dyMouseFromCenter);
PointF ptBase, ptNearest;
PointOfSegment typeOfNearest;
double fDist = Auxi_Geometry .Distance_PointSegment (ptCenterNew, ptEndA,
ptEndB, out ptBase, out typeOfNearest, out ptNearest);
supervisor .MouseTraced = false;
Center = ptNearest;
Cursor .Position = form .PointToScreen (Point .Round (new PointF (
m_center .X + dxMouseFromCenter, m_center .Y + dyMouseFromCenter)));
supervisor .MouseTraced = true;
bRet = true;
}
return (bRet);
}
When the cursor tries to move the spot, the new proposed center of the spot is calculated according to the mouse position and the initial shifts of the cursor from the center of the spot.
PointF ptCenterNew = new PointF (ptM .X - dxMouseFromCenter,
ptM .Y - dyMouseFromCenter);
It may happen that this new point is exactly on the line, but the probability of such accurate movement is very low. More likely,
the cursor has moved somewhere off the segment, so the position of the spot must be adjusted to the line. The exact new location of the spot on the
segment is the point of the segment nearest to the proposed point ptCenterNew; the
Auxi_Geometry.Distance_PointSegment() method returns such a point on the line (ptNearest).
double fDist = Auxi_Geometry .Distance_PointSegment (ptCenterNew, ptEndA,
ptEndB, out ptBase, out typeOfNearest, out ptNearest);
After the calculation of the new spot position on the
line, there is a standard (for this technique!) sequence of steps:
supervisor.MouseTraced = false;
Center = ptNearest;
Cursor .Position = form .PointToScreen (Point .Round (new PointF (
m_center .X + dxMouseFromCenter, m_center .Y + dyMouseFromCenter)));
supervisor.MouseTraced = true;
The spot keeps moving along the segment. The mentioned method Auxi_Geometry.Distance_PointSegment() calculates the nearest point not on the infinite
line but on its segment between the known end points, so it will not allow the spot to move beyond the ends of the segment.
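The nearest-point calculation behind Auxi_Geometry.Distance_PointSegment() is standard projection-and-clamp geometry. Below is a minimal, language-neutral C sketch of the idea, not the library's actual code:

```c
#include <math.h>

typedef struct { double x, y; } Point;

/* Nearest point on the segment [a, b] to p: project p onto the
 * infinite line through a and b, then clamp the projection parameter
 * to [0, 1] so the result never leaves the segment's ends. */
Point nearest_on_segment(Point p, Point a, Point b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0.0
        ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2
        : 0.0;
    if (t < 0.0) t = 0.0;            /* clamp to end A */
    if (t > 1.0) t = 1.0;            /* clamp to end B */
    Point q = { a.x + t * dx, a.y + t * dy };
    return q;
}
```

The clamp is what keeps the spot between the ends of the segment rather than on the infinite line.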
Now let us look at the case of the spot on the arc. The main idea is the same, but there are more calculations for this curved
trail, and there is also one catch. Thanks to such traps, programming is interesting and exciting.
Some fields of the SpotOnArc class are the same as in the previous
SpotOnLineSegment class; others are different because the trail has a different shape.
public class SpotOnArc : GraphicalObject
{
Form form;
Mover supervisor;
PointF m_center;
int m_radius;
Color clr = Color .Red;
PointF ptCenter_Arc;
int radius_Arc;
RectangleF rcAroundCircle;
double minPositiveDegree_Arc, maxPositiveDegree_Arc;
MoveGraphLibrary.dll contains all the methods needed to calculate the point on a circle when the angle is
known, or to calculate the angle for a known point; there are also methods to convert between radians and degrees. To imagine and describe the
position of any point, I prefer to use degrees, while radians are used for calculations. The trail can be a full circle or only some part of it (an arc);
both cases are represented in figure 1. (In the initial version the big arc was really closed, but then I made a small gap in it to demonstrate
the trap that I am going to explain.) While initializing a SpotOnArc object, you declare the parameters of the spot and of its trail. Two angles are needed for
an arc: the angle from the center of the circle to one end point of the arc, and the sweep angle from this end to the other. Do not forget that the sign of
any angle is calculated in the normal mathematical way – positive angles go counterclockwise; I have already mentioned this in the chapter Rotation.
public SpotOnArc (Form frm, Mover mvr, PointF ptArcCenter, int nArcRadius,
double angleArcEnd_Degree, double angleArcSweep_Degree,
PointF pt, int r, Color clrSpot)
If the arc is shorter than the closed circle, then any movement of the spot must be checked for the possibility of going beyond the
end points. This was easier to do on the straight segment; on the arc we have to deal with angles, and the source of a small problem is the possibility of
jumping from positive angles to negative or vice versa while moving along the arc. To organize the needed checking of angles, I use the
[minPositiveDegree_Arc, maxPositiveDegree_Arc] range; only movement inside this range is allowed. In most cases this would be enough, but not in the case of a nearly closed arc. The length of the gap depends on the
sweep angle of the arc and its radius. If you declare an arc with a very small gap angle (for the nearly closed arc in figure 1 the sweep angle is
354 degrees), then the gap is obvious. If you move the spot slowly, it will stop at the end point before the gap; if you try to move the spot faster,
chances are high that the spot will jump over the gap. With really fast mouse movement the spot can jump over a gap of 15 degrees (maybe
more). This is not good at all; the trail is the trail; the spot is not supposed to move over gaps; something must be done.
There can be different ways to prevent such jumping
over the ends of an arc; you can implement your own algorithm. My idea for
solving the problem of the gaps is based on setting the ranges of angles for
the first and the last quarters of the arc and not allowing the
position to change directly between these quarters. The additional checking for jumps over the
gap is needed only for non-closed arcs; during the initialization of such an arc
the special flag bCheckGap is set and the two needed ranges are calculated.
public SpotOnArc (Form frm, Mover mvr, PointF ptArcCenter, int nArcRadius,
double angleArcEnd_Degree, double angleArcSweep_Degree,
PointF pt, int r, Color clrSpot)
{
… …
if (-360 < angleArcSweep_Degree && angleArcSweep_Degree < 360)
{
bCheckGap = true;
double quarter = (maxPositiveDegree_Arc - minPositiveDegree_Arc) / 4;
firstQuaterPositiveAngle = new double [] {
minPositiveDegree_Arc, minPositiveDegree_Arc + quarter };
lastQuaterPositiveAngle = new double [] {
maxPositiveDegree_Arc - quarter, maxPositiveDegree_Arc };
}
If bCheckGap is set to true, then the MoveNode() method organizes an additional
check of the proposed movement.
public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons mb)
{
… …
if (bCheckGap)
{
… …
If the current angle to the spot belongs to the first quarter of the arc and the proposed new spot position
belongs to the last quarter, or vice versa, then the movement is not allowed: the spot is not moved, and the cursor is returned to its previous position inside the spot.
supervisor .MouseTraced = false;
Cursor .Position = form .PointToScreen (Point .Round (new PointF (
m_center .X + dxMouseFromCenter, m_center .Y + dyMouseFromCenter)));
supervisor .MouseTraced = true;
return (false);
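The quarter-based gap check itself is language-neutral. The C sketch below is my reading of the idea, with a hypothetical name (jumps_over_gap) rather than the library's code; angles are assumed to be positive degrees already normalised into the arc's [minDeg, maxDeg] range:

```c
#include <stdbool.h>

/* A move is rejected when the current angle lies in the first quarter
 * of the arc's angular range and the proposed angle lies in the last
 * quarter, or vice versa - that is the jump over the gap. */
bool jumps_over_gap(double currentDeg, double proposedDeg,
                    double minDeg, double maxDeg)
{
    double quarter = (maxDeg - minDeg) / 4.0;
    bool curFirst  = currentDeg  <= minDeg + quarter;
    bool curLast   = currentDeg  >= maxDeg - quarter;
    bool propFirst = proposedDeg <= minDeg + quarter;
    bool propLast  = proposedDeg >= maxDeg - quarter;
    return (curFirst && propLast) || (curLast && propFirst);
}
```

Movement within one quarter, or between the two middle quarters, is always allowed; only a direct first-to-last (or last-to-first) transition is blocked.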
At the beginning of this article I mentioned that everything is going to be movable. You can see for yourself that in Form_SpotsOnLinesAndArcs.cs
the colored spots are movable but their trails are not. This is done only to simplify the code and to focus attention on the movement of the spots along
the trails. If you want to make everything movable, it is easy enough to do. For the movement of straight segments and arcs you can look at a couple
of examples from the book. For example, Form_SegmentOfLine.cs works with movable and changeable straight segments, while for the arcs you can look
at the code of Form_Arcs_FullyChangeable.cs. If you are going to change the current example to make everything movable, then do not forget about two things.
Auxi_Geometry.Ellipse_Point()
If I had to make everything movable in this
example, I would organize two classes of complex objects (trail + spot) and
register such objects with the mover via the IntoMover() method.
These things are well explained in the book.
This is supposed to be a short article, so I'll skip a couple of
examples from the book and jump directly to Form_SpotOnCommentedWay.cs.
There are so many cases when you have to find your way
through a set of paths or roads. It can be the railroad system in the Canadian Rockies,
or the floor plan of level 2 of the National Gallery, or some labyrinth which
you are going to explore, or simply a plan of your own house if you happen to
own something like Blenheim Palace. Anyway, you can think of a lot of
possibilities where something has to be moved by the user(!) along a set of
trails on the screen. The object itself and the whole entourage can be very
picturesque, but for the simplicity of the code let us deal in this example with
a set of pure lines and a small colored spot moving along them. It is enough to
consider only straight segments and arcs, as any complex
system of trails can be constructed from such elements.
In the name of the file with the new example you can
see the word Commented. When I tried to explain the right and wrong
movements of the small colored spot in the example Form_SpotOnConnectedSegments.cs (an example from the book), I had to add the
numbers of the segments to the figure artificially, because those numbers are not shown in that example. The new example – Form_SpotOnCommentedWay.cs
– uses a much more complex set of trails, and I change them from time to time, so it will be much easier for explanation and understanding to have all the
segments carry some comments, for example, numbers.
To organize a way for the colored spot, three
different classes are used. One of them is the abstract class
Way_SegmentCommented, which is used as
the base for the two other classes.
public abstract class Way_SegmentCommented
{
protected WaySegmentType segment_type;
protected PointF ptStart, ptFinish;
protected CommentToCircle m_comment;
public abstract double Length { get; }
public abstract PointF Center { get; }
public abstract void Draw (Graphics grfx, Pen pen, bool bMarkEnds,
Color clrEnds);
public abstract double DistanceToSegment (PointF pt, out PointF ptNearest);
public abstract CommentToCircle Comment { get; }
While drawing new segments, their end points can be
marked with a different color; this makes the whole
set of segments easier to understand.
New segments have a Center property.
For an arc segment it is the center around which the arc is constructed;
for a straight segment it is the middle point between its ends.
Straight segments belong to the
Way_LineSegmentCommented class.
public class Way_LineSegmentCommented : Way_SegmentCommented
{
double m_angle;
static double minLength = 10;
// -------------------------------------------------
public Way_LineSegmentCommented (Form form, PointF pt0, PointF pt1,
double angleToC_Degree, double coefToCenter,
string txt_Cmnt, Font fnt_Cmnt, double angleDegree_Cmnt, Color clr_Cmnt)
{
segment_type = WaySegmentType .Line;
ptStart = pt0;
m_angle = Auxi_Geometry .Line_Angle (pt0, pt1);
if (Auxi_Geometry .Distance (pt0, pt1) >= minLength)
{
ptFinish = pt1;
}
else
{
ptFinish = Auxi_Geometry .PointToPoint (ptStart, m_angle, minLength);
}
PointF center = Auxi_Geometry .Middle (ptStart, ptFinish);
m_comment = new CommentToCircle (form, Point .Round (center),
Convert .ToInt32 (Length / 2), angleToC_Degree,
coefToCenter, txt_Cmnt, fnt_Cmnt, angleDegree_Cmnt, clr_Cmnt);
}
Curved parts of the way are represented by objects of the Way_ArcSegmentCommented class.
public class Way_ArcSegmentCommented : Way_SegmentCommented
{
PointF m_center;
float m_radius;
double angleStart, angleSweep;
RectangleF rcAroundCircle;
double angleStart_Deg, angleSweep_Deg;
static double minLength = 15;
static float minArcRadius = 20;
Any segment of the way may have a comment of the CommentToCircle class. This class is derived from the TextMR class and is included in
MoveGraphLibrary.dll, but it is not used very often, so I think I need to mention the rules for positioning CommentToCircle objects.
It is obvious from the name that such comments are used with circles. The position of a comment (its central point) is described by an angle from the
center of the "parent" circle and an additional coefficient. When the coefficient is inside the [0, 1] range, the comment is inside the circle: 0 means the
center of the circle, while coefficient 1 puts the comment on the border. When the coefficient is greater than 1, the comment is placed outside the
circle, and the coefficient means the distance in pixels from the border of the circle to the comment.
When a comment of the CommentToCircle class is used with an arc, there are no problems in understanding the parameters, because there is a circle along whose border the arc is painted. When you have a straight segment, you need to imagine a circle with its center in the middle of the segment and a radius equal to half of the segment's length. Thus, both end points of the straight segment lie on the border of this imaginary circle, and you position the comment in relation to this imaginary circle.
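The positioning rules described above can be sketched in a few lines (a language-neutral Python illustration; the function names are mine, not part of MoveGraphLibrary, and the reading of a coefficient greater than 1 as a pixel distance from the border is my interpretation of the rule):

```python
import math

def comment_position(center, radius, angle, coef):
    """Central point of a CommentToCircle-style comment.

    coef in [0, 1]: inside the circle (0 = center, 1 = border);
    coef > 1: outside, read as a distance in pixels from the border.
    """
    cx, cy = center
    if coef <= 1:
        dist = coef * radius      # fraction of the radius
    else:
        dist = radius + coef      # pixels beyond the border
    return (cx + dist * math.cos(angle), cy + dist * math.sin(angle))

def segment_circle(pt_a, pt_b):
    """Imaginary circle for a straight segment: center in the middle
    of the segment, radius equal to half of its length."""
    cx = (pt_a[0] + pt_b[0]) / 2
    cy = (pt_a[1] + pt_b[1]) / 2
    return (cx, cy), math.dist(pt_a, pt_b) / 2
```

For a horizontal segment from (0, 0) to (100, 0), the imaginary circle has center (50, 0) and radius 50, and a comment with angle 0 and coefficient 1 lands exactly on the right end point of the segment.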
The Form_SpotOnCommentedWay.cs allows you to work with different ways by changing the number of segments; there is a combo box to do it. This example starts with four segments connected one after another (figure 2).
If you gradually increase the number of segments, then somewhere throughout this process you get a system of 11 segments (figure 3).
Figure 4 shows the maximum system of 18 segments.
When you develop an application in which a colored spot has to be moved along some system of trails, the whole task can be divided into two big parts. The first one is the design of the trails; the second one is the movement of an object along those trails. The second task is nearly the same as was discussed in the previous example, Form_SpotOnLinesAndArcs.cs. The details of such movement are mostly explained with the previous example, and the only addition to be thought out is the movement from one segment to another. Think about this task as the movement of a train along the rails. If the train (the colored spot) is somewhere on segment 7, then it can go either to segment 8 or to segment 4, but it cannot go directly to segment 1, though the lines of segments 1 and 7 cross on the screen. If you think about the whole thing as railroads, then a normal train going over the bridge (segment 7) does not jump to another trail (segment 1) going underneath.
The trail system in the Form_SpotOnCommentedWay.cs can be changed at any moment, so the position of the spot must be adjusted to it. Certainly this is done in this example, but the main thing is a design in which, for any position of the colored spot, it is easy to find the segments to which it can move further on. Three lists of segments are used in the program:
List<Way_SegmentCommented> maximumWay = new List<Way_SegmentCommented> ();
List<Way_SegmentCommented> segmentsAll = new List<Way_SegmentCommented> ();
List<Way_SegmentCommented> segmentsAvailable = new List<Way_SegmentCommented> ();
maximumWay contains the full list of segments. This list is populated only once by the SetMaximumWay() method and is used as a source of segments throughout the whole work of the Form_SpotOnCommentedWay.cs example. You can see from the code of the SetMaximumWay() method that each segment has a comment that shows the number of the segment; these comments are seen in figures 2, 3, and 4.
private void SetMaximumWay ()
{
maximumWay .Add (new Way_LineSegmentCommented (this, new PointF (100, 200),
new PointF (250, 200), 90, 0.2, "0", fntCmnts, clrCmnts)); // 0
maximumWay .Add (new Way_LineSegmentCommented (this,
maximumWay [0] .PointFinish, -20, 220, 130, 0.3, "1",
fntCmnts, clrCmnts)); // 1
PointF ptEnd = maximumWay [1] .PointFinish;
double angle = (maximumWay [1] as Way_LineSegmentCommented) .Angle;
double angleToCenter = angle - Math .PI / 2;
double radius = 180;
PointF ptCenter = Auxi_Geometry .PointToPoint (ptEnd, angleToCenter, radius);
maximumWay .Add (new Way_ArcSegmentCommented (this, ptEnd, ptCenter, -100,
50, 0.9, "2", fntCmnts, clrCmnts)); // 2
… …
maximumWay .Add (new Way_ArcSegmentCommented (this, ptA, center,
Auxi_Convert .RadianToDegree (angle) - 90,
110, 0.85, "17", fntCmnts, clrCmnts)); // 17
}
segmentsAll contains all the segments shown at each particular
moment. The number of available segments can be changed (there is a control to
do it) and the list is populated by the SetTrails() method.
private void SetTrails (int nTrails)
{
segmentsAll .Clear ();
for (int i = 0; i < nTrails; i++)
{
segmentsAll .Add (maximumWay [i]);
}
}
segmentsAvailable contains only a few segments at each moment. Depending on the current configuration of the shown segments (segmentsAll) and the position of the spot, this list contains between two and four elements (for the current system of segments), but the population of this list is an interesting process. At any moment this list contains only the segment currently used by the spot and the connected segments to which the spot can move directly from this one.
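This idea can be sketched in a few lines of Python (a hypothetical model, not the library's code): a static adjacency table lists, for every segment number, the segments connected to it, and the available list is the current segment plus those of its neighbours that are present in the shown configuration. The table below mirrors only the first few cases of the PrepareAvailableWay() switch shown later; the real connections live in that switch.

```python
# Assumed adjacency of the first segments of maximumWay (illustration only).
ADJACENCY = {
    0: [1, 5],
    1: [0, 2],
    2: [1, 3],
}

def available_segments(current, n_trails):
    """Current segment plus its neighbours among the shown segments 0..n_trails-1."""
    shown = set(range(n_trails))
    result = [current]
    for neighbour in ADJACENCY.get(current, []):
        if neighbour in shown:
            result.append(neighbour)
    return result
```

With four trails shown, segment 0 connects only to segment 1 (segment 5 is not shown), which matches the `case 0` branch of the switch.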
Let us check how it works when the Form_SpotOnCommentedWay.cs is started.
public Form_SpotOnCommentedWay ()
{
InitializeComponent ();
mover = new Mover (this);
fntCmnts = new Font ("Times New Roman", 12, FontStyle .Bold);
clrCmnts = Color .Blue;
ccNumber = new CommentedControl (this, numericUD_Trails, Side .E, "Trails");
SetMaximumWay ();
SetTrails (4);
spot = new SpotOnCommentedWay (this, mover, segmentsAll,
new PointF (400, 100), 7, Color .Magenta);
int iUsedSeg = spot .CurrentlyUsedSegment;
PrepareAvailableWay (iUsedSeg);
spot .SetWay (segmentsAvailable);
iCurrentlyUsedFromAvailable = spot .CurrentlyUsedSegment;
RenewMover ();
}
maximumWay is populated by the SetMaximumWay() method
and the number of segments is limited to four by the SetTrails() method. Then the spot is initialized.
spot = new SpotOnCommentedWay(this, mover, segmentsAll,
new PointF (400, 100), 7, Color .Magenta);
Among other parameters, the spot gets the currently used system of segments –
segmentsAll. Though another parameter declares the point where the spot must appear, this point is used only as a first approximation. The spot is positioned there, but then the nearest point on the available segments is found and the spot is moved to that point. Thus, the spot cannot be placed anywhere except on the available segments.
After the spot is placed on some segment, it can return the number of this segment.
int iUsedSeg = spot .CurrentlyUsedSegment;
With this known number and the currently used system of segments, the short list of the really available segments at this moment – segmentsAvailable – can be populated by the PrepareAvailableWay() method. This short list always includes the segment where the spot is positioned now and the segments to which the spot can move directly from this segment. The available segments depend on the current configuration.
private void PrepareAvailableWay (int iCurrentlyUsedSegment)
{
segmentsAvailable .Clear ();
numbersAvailable .Clear ();
int nTrails = Convert .ToInt32 (numericUD_Trails .Value);
MakeSegmentAvailable (iCurrentlyUsedSegment);
switch (iCurrentlyUsedSegment)
{
case 0:
if (nTrails >= 2) MakeSegmentAvailable (1);
if (nTrails >= 6) MakeSegmentAvailable (5);
break;
case 1:
MakeSegmentAvailable (0);
if (nTrails >= 3) MakeSegmentAvailable (2);
break;
… …
This short list – segmentsAvailable – contains all the segments available to the spot at the moment. All other segments are not available, so they do not exist for the spot. Then this short list is sent to the spot as the full system of segments, and the spot returns the number of the occupied segment from this short list.
spot .SetWay(segmentsAvailable);
iCurrentlyUsedFromAvailable = spot.CurrentlyUsedSegment;
Similar exchange of information between the spot and the form happens when the spot is moved.
private void OnMouseMove (object sender, MouseEventArgs e)
{
if (mover .Move (e .Location))
{
GraphicalObject grobj = mover .CaughtSource;
if (grobj is SpotOnCommentedWay)
{
int iNewSegment = spot .CurrentlyUsedSegment;
if (iNewSegment != iCurrentlyUsedFromAvailable)
{
PrepareAvailableWay (numbersAvailable [iNewSegment]);
spot .SetWay (segmentsAvailable);
iCurrentlyUsedFromAvailable = spot .CurrentlyUsedSegment;
}
}
Invalidate ();
}
}
When the spot is moved, its MoveNode() method is used. The nearest point on the available segments is found, the spot is moved to this point, and the number of this segment is returned to the OnMouseMove() method.
int iNewSegment = spot .CurrentlyUsedSegment;
If this number differs from the number of the previously occupied segment, then the spot has moved from one segment to another and a new system of available segments must be prepared; this new system is sent to the spot, and the spot returns the number of the newly occupied segment among the new set of available segments.
if (iNewSegment != iCurrentlyUsedFromAvailable)
{
PrepareAvailableWay (numbersAvailable [iNewSegment]);
spot .SetWay (segmentsAvailable);
iCurrentlyUsedFromAvailable = spot .CurrentlyUsedSegment;
}
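The index bookkeeping in this exchange is the subtle part: the spot reports positions as indices into the short list it was given, so the form keeps a parallel list (numbersAvailable in the example) that maps those local indices back to global segment numbers. A hypothetical Python model of one such round trip (stand-in names, not the library API):

```python
def make_available(current, adjacency, shown):
    """Rebuild the short list around global segment `current`.
    Returns (segments, numbers) where numbers[i] is the global
    number of segments[i] -- the numbersAvailable analogue."""
    numbers = [current] + [n for n in adjacency.get(current, []) if n in shown]
    segments = [f"seg{n}" for n in numbers]   # stand-ins for Way_SegmentCommented
    return segments, numbers

adjacency = {0: [1], 1: [0, 2], 2: [1]}
shown = {0, 1, 2}

# Short list built around global segment 0.
segments, numbers = make_available(0, adjacency, shown)
# The spot moved and reports local index 1 in that short list...
local = 1
new_current = numbers[local]       # ...which maps back to global segment 1.
# The short list is rebuilt around the new global segment.
segments, numbers = make_available(new_current, adjacency, shown)
```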
There are two reasons why there is no code to prevent jumps over the end points of arcs in the Form_SpotOnCommentedWay.cs. The described algorithm allows movement only to the available segments, while the problem of jumping over the gaps in the arcs is solved by not using arcs with big sweep angles. If you need to use a long arc close to a full circle, divide such an arc into three or four parts. You can see such a solution in the Form_SpotOnCommentedWay.cs with segments 2, 3, and 4 (figure 4). These three arcs belong to the same circle, so I could easily unite them into one arc, for example, with number 2. But then some people would try to move the colored spot really quickly right and down from the shown position, and instead of going farther along segment 7 (in the current version) it could jump to the other end of the same segment 2.
To solve this problem, it would be enough to divide the big arc into two parts, for example, by uniting segments 2 and 3. Because I needed a point of connection for segment 17, I decided to divide my nearly full circle into three parts.
In both of the previous examples a colored spot moves along a well visible trail, but this is not mandatory. Nowhere in the code of those examples can you find a requirement that the segments of the way must be painted. They are painted, but you can easily comment out some lines in the OnPaint() method and nothing will change at all; the spot will continue to move along the invisible trails, and this will look strange. Well, it is a strange thing for this Form_SpotOnCommentedWay.cs example, but it is an ordinary and expected thing if you deal with movement in some labyrinth. To demonstrate such a thing there is the next example, Form_LabyrinthForDachshund.cs.
I am familiar with one nice dachshund who likes to investigate all dark holes and enigmatic passages, so I decided to unite the idea of the labyrinth from the Form_BallInLabyrinth.cs example (from the book!) with some code from the previous Form_SpotOnCommentedWay.cs example and to construct an interesting labyrinth for this clever dog. If you ever saw a dachshund, you understand that in narrow curved tunnels the tail of this dog is usually two turns back from its nose, so while going through the labyrinth this dog is represented by a colored spot, while the moment she steps out she immediately turns into a nice dog (figure 5).
The movement of the colored spot inside the labyrinth is organized in the same way as in the previous example, Form_SpotOnCommentedWay.cs, only in this new example it is a bit simpler. The labyrinth is the same all the time, so it is enough to have two lists of segments: one includes all the segments of the labyrinth and another only the segments available at each particular moment.
List<Way_SegmentCommented> way;
List<Way_SegmentCommented> segmentsAvailable;
The segments inside the labyrinth are not shown, but there are commented lines in the OnPaint() method. Turn these lines into working code and the segments will be painted. It does not matter whether the segments are painted or not; the colored spot can go only along them. In this example I use only straight segments of the Way_LineSegmentCommented class. If you want, you can insert Way_ArcSegmentCommented segments on the curves, especially on those where there are no variants and the 90 degree turn is the only way to go on.
The Form_LabyrinthForDachshund.cs is not the only example with labyrinths in the book World of Movable Objects. There are several more examples, and for each of them the design of the labyrinth is not the main thing but still requires programming effort. Also, each of those examples works with one particular labyrinth that is already coded and thus fixed. Each of the examples is included in the book to demonstrate some features of the moving process, but each of those examples has such a limitation.
When there is an easy-to-understand and easy-to-implement technique of moving an object inside a labyrinth, then it would be nice to have an easy-to-use technique with which any user can design any labyrinth in just seconds or minutes. I don't think that small kids have to spend their time in front of the computer screen, but they have a habit of coming and doing the same things as grown-ups are doing. Imagine that you can design a new labyrinth on the screen in just seconds and then a small person can try to move the spot through this just constructed labyrinth. This would not be the worst way of using a computer, and this is what you can find in the next example.
There is one major difference between the previous
examples and the next one. Regardless of whether the way was visible or not in
the previous examples, it was always predetermined. The system of trails could
be a complex one, but the colored spot could not go anywhere outside those
predetermined lines. The walls in the Form_LabyrinthForDachshund.cs are
used as an entourage and nothing else. In the next example there is no
predetermined set of trails and the walls restrict the movements of the colored
spot.
First, let us formulate the requirements for the
labyrinth design.
It is not surprising that even very interesting examples can be based on relatively simple elements. Each wall in our new labyrinths is going to be an object of the LineSegment class – a segment of a straight line whose length can be changed. Any wall can also be rotated; the rotation of a segment goes around its middle point. A straight line is really a simple element: only two end points are needed and a pen to draw the line between these points. To avoid the accidental disappearance of an element, there is a limit on the minimal length of a wall.
public class LineSegment : GraphicalObject
{
PointF ptA, ptB;
Pen m_pen;
static int minLen = 10;
The cover of such an object consists of three nodes: two circular nodes at the end points are used to change the length, while the strip node along the whole length of the element allows moving it forward and rotating it.
public override void DefineCover ()
{
float radius = Math .Max (3, m_pen .Width / 2);
cover = new Cover (new CoverNode [] {new CoverNode (0, ptA, radius),
new CoverNode (1, ptB, radius),
new CoverNode (2, ptA, ptB, radius)});
}
The colored spot to be moved inside a labyrinth is
also simple. An object of the Spot_Restricted class is a
small circle with a radius that can be changed inside the [3, 12] range.
public class Spot_Restricted : GraphicalObject
{
Form form;
Mover supervisor;
PointF m_center;
int m_radius = 5;
Color m_color = Color .Red;
List<LineSegment> walls = new List<LineSegment> ();
static int radMin = 3;
static int radMax = 12;
The cover of such a spot consists of a single circular node covering the whole element.
public override void DefineCover ()
{
cover = new Cover (new CoverNode (0, m_center, Math .Max (m_radius, 5)));
}
To make catching such a spot easier, the radius of the node is never less than five pixels. Thus, for the smallest spot the node is bigger than the spot itself, and such a spot can be grabbed not only by any point inside its border but also at points which are outside the spot but close to its border. The spot can be grabbed at different points, but at the moment of catching the cursor is moved to the center of the spot and adheres to this central point throughout the whole process of movement.
public void StartMoving ()
{
supervisor .MouseTraced = false;
Cursor .Position = form .PointToScreen (Point .Round (m_center));
supervisor .MouseTraced = true;
}
Now let us see how all these things work inside the Form_SpotInManualLabyrinth.cs. When you open this form for the first time, it is nearly empty: there is a lonely colored spot in view and nothing else. The only way to start building a labyrinth is to call the menu at any empty place (figure 6).
Well, to be correct, this is the view of the menuOnEmpty when there is at least one wall inside the form. Without any walls in view, only the first command of this menu is enabled; this command is always enabled, and just now we are interested in this command. By clicking the first command of this menu you add a new wall. At the beginning there are no other walls, so the new one will be the only one in view.
private void Click_miAddNewWall (object sender, EventArgs e)
{
LineSegment segment = new LineSegment (ptMouse_Up,
new PointF (ptMouse_Up .X + 50, ptMouse_Up .Y + 70), penWalls);
AvoidSpotSegmentOverlap (segment);
segments .Add (segment);
RenewMover ();
Invalidate ();
}
Because the length, position, and angle of any wall can be easily changed, this new wall can be transformed into whatever you need. Any wall can be modified and positioned independently of all others; in this way any labyrinth can be constructed; figure 7 shows one of the infinite variants.
The second way to add a new wall is to call the context menu on any existing wall (figure 8) and to use its Duplicate command. The inclination of any line (wall) can be easily changed by simple rotation, but the same menu contains a pair of commands to make the touched wall strictly horizontal or vertical. Another menu command calls a small tuning form with which you can change the view of any line (color, width, and style), so not all the walls must look the same. But if you like the new view of some wall and would like to make all the others similar, call this menu on the wall with the preferred view and select its Use as sample command.
Another way to change the design, and this way is even more powerful, is to use a group. Such a group – of the GroupOfSegments class – can contain an arbitrary number of segments (at least one) and looks like a slightly rounded frame around all its inner elements.
public class GroupOfSegments : GraphicalObject
{
List<LineSegment> lines = new List<LineSegment> ();
int spaceToFrame = 10;
Pen penFrame;
If you need to organize some segments into a group, you have to press the left button at an empty place:
private void OnMouseDown (object sender, MouseEventArgs e)
{
ptMouse_Down = e .Location;
if (mover .Catch (e .Location, e .Button))
{
… …
}
else
{
if (e .Button == MouseButtons .Left)
{
if (group != null)
{
FreeWalls ();
}
bTemporaryFrame = true;
}
}
ContextMenuStrip = null;
}
and move the mouse without releasing the pressed button.
private void OnMouseMove (object sender, MouseEventArgs e)
{
ptMouse_Move = e .Location;
if (mover .Move (e .Location))
{
… …
}
else
{
if (bTemporaryFrame)
{
Invalidate ();
}
}
}
While the mouse moves, the rectangular frame is shown; the point of the initial press and the current mouse point are used as the opposite corners of this rectangle.
private void OnPaint (object sender, PaintEventArgs e)
{
Graphics grfx = e .Graphics;
… …
if (bTemporaryFrame)
{
grfx .DrawRectangle (penFrame,
Auxi_Geometry .RectangleAroundPoints (ptMouse_Down, ptMouse_Move));
}
}
When at last the button is released, the temporary frame is gone, but if at this moment there are some segments (at least one) inside the temporary frame, then a group is organized and it contains all the segments that were surrounded.
private void OnMouseUp (object sender, MouseEventArgs e)
{
ptMouse_Up = e .Location;
double dist = Auxi_Geometry .Distance (ptMouse_Down, ptMouse_Up);
if (mover .Release ())
{
… …
}
else
{
if (e .Button == MouseButtons .Left)
{
if (bTemporaryFrame)
{
Rectangle rc = Auxi_Geometry .RectangleAroundPoints
(ptMouse_Down, e .Location);
List<LineSegment> segmentsInFrame = new List<LineSegment> ();
for (int i = segments .Count - 1; i >= 0; i--)
{
if (rc .Contains (Rectangle .Round
(segments [i] .RectAround)))
{
segmentsInFrame .Insert (0, segments [i]);
segments .RemoveAt (i);
}
}
if (segmentsInFrame .Count > 0)
{
group = new GroupOfSegments (segmentsInFrame, penFrame);
RenewMover ();
}
bTemporaryFrame = false;
Invalidate ();
}
… …
With this group a lot of things can be done. Forward
movement and rotation of the group with all its elements can be started by an
ordinary mouse press inside the group (left button press starts forward
movement, right button press starts rotation) while other things are done via
the commands of the context menu that can be called inside the group.
A GroupOfSegments object is simple, and its cover is even simpler: it consists of a single rectangular node slightly bigger than the frame. This means that the group can be grabbed for moving by any inner point or any point in the vicinity of the frame.
public override void DefineCover ()
{
Rectangle rc = rcFrame;
rc .Inflate (3, 3);
cover = new Cover (rc, Resizing .None);
}
The group is not a resizable object by itself, so it is impossible to change its size by moving its borders, but the size of the group depends on the positions of all the inner elements. This means that any move of any segment belonging to the group triggers an update of the group.
private void OnMouseMove (object sender, MouseEventArgs e)
{
ptMouse_Move = e .Location;
if (mover .Move (e .Location))
{
GraphicalObject grobj = mover.CaughtSource;
if (grobj is LineSegment)
{
if (group != null && grobj .ParentID == group .ID)
{
group .Update ();
}
}
… …
The update of the group consists of recalculating its frame and cover.
public void Update ()
{
CalcFrame ();
DefineCover ();
}
When you duplicate any segment belonging to the group, the new segment is also included in the group. Because the duplicate of any wall is placed slightly aside from the original, such an action often increases the size of the group.
private void Click_miDuplicate (object sender, EventArgs e)
{
LineSegment segment = segmentPressed .Copy;
segment .Move (30, 30);
AvoidSpotSegmentOverlap (segment);
if (group != null && segmentPressed .ParentID == group .ID)
{
group .AddSegment (segment);
}
else
{
segments .Add (segment);
}
RenewMover ();
Invalidate ();
}
The forward movement of the group means the synchronous movement of its inner elements, so their relative positions do not change.
public override void Move (int dx, int dy)
{
rcFrame .X += dx;
rcFrame .Y += dy;
foreach (LineSegment elem in lines)
{
elem .Move (dx, dy);
}
}
The group can also be rotated, but here the situation is a bit more complex than in the case of a single wall rotation. Any segment inside the group can be rotated individually; such rotation goes around the middle point of the caught segment and triggers an update of the group. The rotation of the group as a whole goes around its central point, and it differs from the synchronous rotation of all elements around their own central points (it is not a ballet, nor the type of synchronous rotation demonstrated in the Form_SpottedTexts.cs in the chapter Texts of the book). When the group is pressed for rotation (right mouse press), its GroupOfSegments.StartRotation() method is called.
private void OnMouseDown (object sender, MouseEventArgs e)
{
ptMouse_Down = e .Location;
if (mover .Catch (e .Location, e .Button))
{
GraphicalObject grobj = mover .CaughtSource;
if (e .Button == MouseButtons .Left)
{
… …
}
else if (e .Button == MouseButtons .Right)
{
if (grobj is LineSegment)
{
(grobj as LineSegment) .StartRotation (e .Location);
}
else if (grobj is GroupOfSegments)
{
(grobj as GroupOfSegments) .StartRotation (e .Location);
}
}
… …
Each segment of the group is based on two end points. All the segments are going to rotate around the same point – the central point of the group – so for each end point of each segment its own radius and compensation angle must be calculated at the starting moment of the group's rotation. These calculations are done in the GroupOfSegments.StartRotation() method.
public void StartRotation (Point ptMouse)
{
compensation = new double [2 * lines .Count];
radius = new double [2 * lines .Count];
ptMiddle = Auxi_Geometry .Middle (rcFrame);
double angleMouse = Auxi_Geometry .Line_Angle (ptMiddle, ptMouse);
for (int i = 0; i < lines .Count; i++)
{
compensation [i * 2] = Auxi_Common .LimitedRadian (angleMouse -
Auxi_Geometry .Line_Angle (ptMiddle, lines [i] .Point_A));
compensation [i * 2 + 1] = Auxi_Common .LimitedRadian (angleMouse -
Auxi_Geometry .Line_Angle (ptMiddle, lines [i] .Point_B));
radius [i*2] = Auxi_Geometry .Distance (ptMiddle, lines [i] .Point_A);
radius [i*2+1] = Auxi_Geometry.Distance (ptMiddle, lines [i] .Point_B);
}
}
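The same bookkeeping can be sketched language-neutrally in Python (helper names are mine; screen coordinates are treated here as plain math coordinates). At the press moment each point stores its distance from the rotation center and a compensation angle: the difference between the mouse angle and the point's own angle. During rotation each point is placed at the current mouse angle minus its compensation, so all points keep their relative positions.

```python
import math

def start_rotation(center, mouse, points):
    """For each point, remember (compensation, radius) relative to center."""
    angle_mouse = math.atan2(mouse[1] - center[1], mouse[0] - center[0])
    state = []
    for p in points:
        a = math.atan2(p[1] - center[1], p[0] - center[0])
        state.append((angle_mouse - a, math.dist(center, p)))
    return state

def rotate(center, mouse, state):
    """Place every point at the current mouse angle minus its compensation."""
    angle_mouse = math.atan2(mouse[1] - center[1], mouse[0] - center[0])
    return [(center[0] + r * math.cos(angle_mouse - comp),
             center[1] + r * math.sin(angle_mouse - comp))
            for comp, r in state]
```

Moving the mouse a quarter turn around the center rotates every stored point by the same quarter turn, which is exactly the effect of dragging a group by the right mouse button.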
In this way the group can be moved forward and rotated; other possibilities are available via the commands of its context menu (figure 9). Among others, there are two commands that allow constructing labyrinths much faster; these are the commands to duplicate all the walls of the group. Depending on the used command, the duplicates are placed either to the right of the existing set of segments or below them; in both cases none of the new segments overlaps any of the existing segments of the group.
There is one thing that must not be forgotten when you add new segments, move and release them, or duplicate them: a segment can appear at the place of the colored spot, and something must be done in such a situation. From my point of view the solution is obvious: if it happens, such a segment must be moved aside. This is done in the AvoidSpotSegmentOverlap() method.
private bool AvoidSpotSegmentOverlap (LineSegment segment)
{
bool bMove = false;
if (Auxi_Geometry .Distance_PointSegment (spot .Center, segment .Point_A,
segment .Point_B) <= spot .Radius + segment .Pen .Width / 2)
{
segment .Move (0, 30);
bMove = true;
if (Auxi_Geometry .Distance_PointSegment (spot .Center,
segment .Point_A, segment .Point_B) <= spot .Radius)
{
segment .Move (30, 0);
}
if (group != null && segment .ParentID == group .ID)
{
group .Update ();
}
}
return (bMove);
}
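The distance test at the heart of this method is a standard point-to-segment computation. Here is a Python sketch of what Auxi_Geometry.Distance_PointSegment presumably computes (an assumption about the library, not its actual code), together with the overlap condition used above:

```python
import math

def distance_point_segment(p, a, b):
    """Distance from point p to the segment a-b (not to the infinite line)."""
    ax, ay = a
    dx, dy = b[0] - a[0], b[1] - a[1]
    len2 = dx * dx + dy * dy
    if len2 == 0:                         # degenerate segment
        return math.dist(p, a)
    # projection parameter of p onto the line, clamped to the segment
    t = max(0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / len2))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def overlaps(spot_center, spot_radius, a, b, pen_width):
    """The condition checked by AvoidSpotSegmentOverlap()."""
    return distance_point_segment(spot_center, a, b) <= spot_radius + pen_width / 2
```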
An additional remark about one command from the menu on the walls (figure 8): I already mentioned that the view of the pressed wall can be spread to other walls by using the Use as sample command. The result of this command depends slightly on whether the pressed wall is inside the group or not.
It looks like I have explained everything about the design and change of an arbitrary labyrinth but forgot to write about the movement of the colored spot in the labyrinth, which is the main thing in this example. As usual, we have to look at the code of three mouse events in the Form_SpotInManualLabyrinth.cs – the OnMouseDown(), OnMouseMove(), and OnMouseUp() methods – and at the MoveNode() method of the involved class – the Spot_Restricted.MoveNode() method.
The movement of the spot starts when this spot is
pressed by the left button.
private void OnMouseDown (object sender, MouseEventArgs e)
{
ptMouse_Down = e .Location;
if (mover .Catch (e .Location, e .Button))
{
GraphicalObject grobj = mover .CaughtSource;
if (e .Button == MouseButtons .Left)
{
if (grobj is Spot_Restricted)
{
segmentsAll .Clear ();
if (group != null)
{
segmentsAll .AddRange (group .Elements);
}
segmentsAll .AddRange (segments);
spot .Walls = segmentsAll;
spot .StartMoving ();
}
}
… …
At this moment a group around some walls may or may not exist, but this fact has nothing to do with the restrictions on the spot movement. If there is a group, then the visible border of this group is only a reminder for the user about the group of walls that can be moved and changed simultaneously or duplicated by a single command. For the colored spot the border of the group does not mean anything, and the walls inside or outside the group restrict the movement in the same way. Thus, a united List of all the walls – segmentsAll – is organized to be used as barriers for further movements. This united List includes walls from inside (group.Elements) and outside (segments) the group. The spot gets this List via the Spot_Restricted.Walls property.
segmentsAll .Clear ();
if (group != null)
{
segmentsAll .AddRange (group .Elements);
}
segmentsAll .AddRange (segments);
spot .Walls = segmentsAll;
At the same moment the mouse cursor is moved to the central point of the colored spot in order to avoid dealing with the initial
mouse shifts throughout the whole process of movement. This is done by the Spot_Restricted.StartMoving() method.
spot.StartMoving ();
In the OnMouseMove() method of the form the Spot_Restricted object is not mentioned, so we have to look into the Spot_Restricted.MoveNode() method to find out what happens with the colored spot throughout the movement.
public override bool MoveNode (int i, int dx, int dy, Point ptM,
MouseButtons catcher)
{
bool bRet = false;
if (catcher == MouseButtons .Left)
{
LineSegment wall;
for (int j = 0; j < walls .Count; j++)
{
wall = walls [j];
if (Auxi_Geometry .Distance_Segments (m_center, ptM, wall .Point_A,
wall .Point_B) <= m_radius + wall .Pen .Width / 2)
{
supervisor .MouseTraced = false;
Cursor .Position = form .PointToScreen (Point .Round (m_center));
supervisor .MouseTraced = true;
return (false);
}
}
Center = ptM;
bRet = true;
}
return (bRet);
}
Checking whether a proposed movement is possible is really easy. The mouse is moved somewhere by the user, and this new mouse position must become the new central point of the spot. The minimal allowed distance between the central point of the spot and the central line of a wall is the sum of the spot's radius and half the width of this wall. This check must be done for each wall of the labyrinth. Walls are not moved during the movement of the spot, and the current List of walls was just sent to the spot at the initial moment of its movement. If any wall prevents the proposed movement of the spot, then the mouse cursor is returned to the previous position. As usual, this is done with a temporary disconnection of the link between the mover and the object under move.
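Note that MoveNode() measures the distance from the whole path segment (current center to proposed mouse point) to each wall, not just from the proposed position; this is what keeps a fast drag from stepping over a thin wall. The anti-tunneling idea can be sketched in Python (a simplified illustration that checks only whether the path crosses a wall; the real check, via Distance_Segments, also accounts for the spot radius and pen width):

```python
def crosses(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def move_allowed(center, proposed, walls):
    """A wall blocks the move if the straight path from the current
    center to the proposed one crosses it."""
    return not any(crosses(center, proposed, a, b) for a, b in walls)
```

Even if the proposed point itself is on the far side of a wall, the path from the old center crosses the wall and the move is rejected.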
In one of the examples in the book I demonstrated that one more check is needed: if you try to move the colored spot really quickly, it can go through a wall. Because of that possibility, there is an additional check in the BallSV class (look for it in the Form_BallInLabyrinth.cs example of the book). Maybe I am not so fast now, but I can't reproduce the movement of the colored spot through any wall in the Form_SpotInManualLabyrinth.cs, so I excluded the additional check from the code. If you are more successful in reproducing this situation in the current example, you now know where to look for the additional check to improve the code.
The spot can be released only at an empty place, but with the release of a wall the situation is different. A wall can be moved anywhere, and it can be released while overlapping the colored spot. In this case the wall is moved slightly aside in order to avoid such overlapping. The same overlapping of some wall and the colored spot can occur when not a single wall but a group is moved and released; in both cases the AvoidSpotSegmentOverlap() method must be used.
private void OnMouseUp (object sender, MouseEventArgs e)
{
ptMouse_Up = e .Location;
double dist = Auxi_Geometry .Distance (ptMouse_Down, ptMouse_Up);
if (mover .Release ())
{
GraphicalObject grobj = mover .WasCaughtSource;
if (grobj is LineSegment)
{
LineSegment segment = grobj as LineSegment;
if (AvoidSpotSegmentOverlap (segment))
{
Invalidate ();
}
}
else if (grobj is GroupOfSegments)
{
foreach (LineSegment segment in group .Elements)
{
if (AvoidSpotSegmentOverlap (segment))
{
Invalidate ();
}
}
}
… …
Though the examples of this article use a similar
technique of an adhered mouse cursor during the movement of some objects, they use
this technique in slightly different ways.
One is the situation with a predetermined set of
trails and an object that can be moved only along those trails. For each
movement of the mouse cursor, the nearest allowed position on the trails is
calculated, the object is moved into this position, and throughout the position
adjustment the link between the mover and the object under move must be
temporarily disconnected. The objects around the trails have nothing to do with
the movement and are used only as an entourage.
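The nearest-allowed-position computation described above boils down to projecting the cursor point onto every trail segment and keeping the closest result. Here is a minimal sketch in Python; the function names are illustrative and do not come from the article's C# code:

```python
import math

def closest_point_on_segment(p, a, b):
    """Project point p onto segment a-b and clamp to the segment."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:            # degenerate (zero-length) segment
        return a
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))      # clamp the projection onto the segment
    return (ax + t * dx, ay + t * dy)

def nearest_position_on_trails(p, trails):
    """Return the point on any trail segment closest to cursor point p."""
    best, best_d = None, float("inf")
    for a, b in trails:
        q = closest_point_on_segment(p, a, b)
        d = math.hypot(q[0] - p[0], q[1] - p[1])
        if d < best_d:
            best, best_d = q, d
    return best
```

A real implementation would also remember which segment won, so the object can keep sliding along it during subsequent mouse moves.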
Another is the case without predetermined trails;
here the surrounding objects are used to restrict the movements. If any of
those objects does not allow the movement proposed by the move of the mouse
cursor, then the object is not moved, the cursor must be returned to the
previous position, and for this backward movement of the cursor the same temporary
disconnection between the mover and the object is used.
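The blocked-movement check in this last case can be sketched as a simple gatekeeper: accept the proposed position only if the spot, treated as a circle, would not intersect any wall segment. This Python sketch is hypothetical and only illustrates the idea; the article's C# code differs in its details:

```python
import math

def dist_point_segment(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    t = 0.0 if seg_len_sq == 0 else max(
        0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy)

def try_move_spot(current, proposed, radius, walls):
    """Return the accepted position: proposed if no wall blocks it,
    otherwise the unchanged current position (to which the cursor
    would then be returned)."""
    for a, b in walls:
        if dist_point_segment(proposed, a, b) < radius:
            return current   # blocked by this wall
    return proposed
```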
|
http://www.codeproject.com/Articles/544451/The-Roads-We-Take
|
CC-MAIN-2016-07
|
en
|
refinedweb
|
I just downloaded the 0.50 tar.gz and found the fonts folder appears to
be awol.
Other than that it is working great :)
john
jdhunter@... ha scritto:
Ok, I have removed the python2.3-numeric-ext dependency. The new version is on
the server
- --
/Vittorio Palmisano/
Todd Miller has added a TkAgg backend to CVS. As the name implies,
this is a Tkinter GUI backend which uses agg to render. One very nice
feature of this backend is that it works interactively from any python
shell, though you'll still need to set matplotlib.interactive(True).
> python -i yourscript.py -dTkAgg
There may be a few minor issues to clear up, but it will be included
in the next release, so now would be a good time to test it out
(Thanks Todd!)
Also, following Paul Barrett's pointer to the BaKoMa fonts, I
implemented a TeX math parser which works with the new ft2font module
to render math text. Any text element (xlabel, ylabel, title, text,
etc) can use TeX markup, as in
xlabel(r'$\Delta_i$')
^
use raw strings
The $ symbols must be the first and last symbols in the string. Eg,
you cannot do
r'My label $x_i$'.
But you can change fonts, as in
r'\rm{My label} x_y'
to achieve the same effect.
A large set of the TeX symbols are provided (see below). Subscripting
and superscripting are supported, as well as the over/under style of
subscripting with \sum, \int, etc.
The module uses pyparsing to parse the TeX expression, and so can
handle fairly complex TeX expressions
Eg, the following renders correctly
tex = r'$\cal{R}\prod_{i=\alpha_{i+1}}^\infty a_i\rm{sin}(2 \pi f x_i)$'
See
for a screenshot.
The fonts \cal, \rm, \it, and \tt are defined.
The computer modern fonts this package uses are part of the BaKoMa
fonts, which are (in my understanding) free for noncommercial use.
For commercial use, please consult the licenses in fonts/ttf and the
author Basil K. Malyshev - see also
KNOWN ISSUES:
- some hackish ways I deal with a strange offset in cmex10
- bbox is a bit large in vertical direction and small in horizontal direction
- nested subscripts, eg, x_i_i not working
- nesting fonts changes in sub/superscript groups not parsing
- no rotations yet
- no kerning
I would like to add more layout commands, like \frac.
Backends: This currently works with Agg, GTKAgg and TkAgg. If David
incorporates ft2font into paint, it will be easy to add to Paint. I
think I can also add it to GTK rather straightforwardly, since it's
just a matter of rendering from the ft2font pixel buffers; ditto for
wx. PS will require more substantial work, doing the metrics and
layouts with the AFM versions of the computer modern fonts. Backends
which don't support mathtext will just render the TeX string as a
literal.
CVS is updated - or if you prefer, I uploaded a snapshot to
Let me know how it goes! I expect there will be plenty of issues
cropping up.
JDH
Allowed TeX symbols:
\Delta \Downarrow \Gamma \Im \LEFTangle \LEFTbrace \LEFTbracket
\LEFTparen \Lambda \Leftarrow \Leftbrace \Leftbracket \Leftparen
\Leftrightarrow \Omega \P \Phi \Pi \Psi \RIGHTangle \RIGHTbrace
\RIGHTbracket \RIGHTparen \Re \Rightarrow \Rightbrace \Rightbracket
\Rightparen \S \SQRT \Sigma \Sqrt \Theta \Uparrow \Updownarrow
\Upsilon \Vert \Xi \aleph \alpha \approx \ast \asymp \backslash \beta
\bigcap \bigcirc \bigcup \bigodot \bigoplus \bigotimes
\bigtriangledown \bigtriangleup \biguplus \bigvee \bigwedge \bot
\bullet \cap \cdot \chi \circ \clubsuit \coprod \cup \dag \dashv \ddag
\delta \diamond \diamondsuit \div \downarrow \ell \emptyset \epsilon
\equiv \eta \exists \flat \forall \frown \gamma \geq \gg \heartsuit
\imath \in \infty \int \iota \jmath \kappa \lambda \langle \lbrace
\lceil \leftangle \leftarrow \leftbrace \leftbracket \leftharpoondown
\leftharpoonup \leftparen \leftrightarrow \leq \lfloor \ll \mid \mp
\mu \nabla \natural \nearrow \neg \ni \nu \nwarrow \odot \oint \omega
\ominus \oplus \oslash \otimes \phi \pi \pm \prec \preceq \prime \prod
\propto \psi \rangle \rbrace \rceil \rfloor \rho \rightangle
\rightarrow \rightbrace \rightbracket \rightharpoondown
\rightharpoonup \rightparen \searrow \sharp \sigma \sim \simeq \slash
\smile \spadesuit \sqcap \sqcup \sqrt \sqsubseteq \sqsupseteq \subset
\subseteq \succ \succeq \sum \supset \supseteq \swarrow \tau \theta
\times \top \triangleleft \triangleright \uparrow \updownarrow \uplus
\upsilon \varepsilon \varphi \varpi \varrho \varsigma \vartheta
\vdash \vee \wedge \wp \wr \xi \zeta
Vittorio Palmisano <redclay@...> writes:
> Hello,
> I have updated my debian packages for Matplotlib, and I put the installation
> instructions on my page:
Hello Vittorio,
Thanks much for doing this. I am just now taking it for a test drive
on a debian machine I have access to.
Thanks again,
JDH
Hello,
I have updated my debian packages for Matplotlib, and I put the installation
instructions on my page:
These are the packages I've built:
python-gdmodule 0.52
python-matplotlib 0.50-1
python-matplotlib python-matplotlib-doc 0.50-1
python-pypaint 0.3
python-ttfquery 1.0.0a1
python-matplotlib-doc is a documentation package made with pydoc.
- --
/Vittorio Palmisano/
John Hunter wrote:
See
AMS Fonts (Truetype)
TTF versions of the American Mathematical Society Computer Modern fonts,
aka the BaKoMa fonts by Boris Malyshev. The truetype versions of the AMS
fonts are included in PCTeX.
-- Paul
--
Paul Barrett, PhD Space Telescope Science Institute
Phone: 410-338-4475 ESS/Science Software Branch
>>>>> "John" == John Hunter <jdhunter@...> writes:
John> * clipping appears broken: arctest.py, legend_demo.py, and
John> so on. This used to work, right?
John> * dotted lines are solid: see any plot with grid(True),
John> eg, log_test.py. Ditto for dashed lines: eg, plot(t3,
John> cos(2*pi*t3), 'r--') appears solid.
Both of these problems arise from the select / unselect thing in
the GC. Since these changes were in response to the double-gc problem,
which no longer exists, I'm simply going to 'pass' in select and
unselect for the next release, which fixes both problems.
backend wx now passes all the tests, except the known issues.
I also added some more of the dpi scaling functionality to wx
(linewidth and markersize via the new renderer method
points_to_pixels). The last remaining area is dash size which is more
difficult since wx uses a constant for dash drawing rather than an ink
on/off approach. Does wx also support on/off? If I recall correctly,
you experimented with this but had a hard time getting it working right.
In any case the high res (eg dpi=300) wx output looks great.
There also seems to be an issue with getting the text bounding box
right with fontangles and fontweights that are not standard. To
experiment with this, uncomment 'if 0' on or around line 416 in
axis.py and run text_handles.py or text_themes.py. This will draw the
bbox around your text instance.
JDH
pypaint 0.3
-----------------
This module provides a light Python wrapper for libart written in C. It
allows you to create static images quickly. pypaint runs on both Linux and
Windows.
It only provides basic functionality at the moment - line drawing,
rectangles, polygons, arcs, fill, and simple font support (using freetype
1). The images are output in PNG format.
There will be support for a pypaint backend in the next release of
matplotlib.
pypaint is available at. In
future, there will be a homepage up at
David Moore
Software Consultant
St. James Software
Phone: +27 21 424 3492
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system ().
Version: 6.0.580 / Virus Database: 367 - Release Date: 2004/02/06
>>>>> "David" == David Moore <davidm@...> writes:
David> Sorry, last email wasn't to the list. From posts you've
David> made, it looks like you've made some changes to pypaint.
David> Do you want those committed before a pypaint release?
David> Apart from that, I'm just tying up some loose ends, and
David> then I'll be able to make the release - hopefully today,
David> but maybe only Monday if you want your changes added.
I don't have anything to add. I tried to get clipping working but
didn't succeed. I think there are two problems
1) libart wants the paths to be counterclockwise for clipping to work
(at least that's what I was told on the libart list), and I don't
think all the path constructors ensure this, eg draw_arc.
2) even when the paths are constructed properly, I think there is a
problem in libart that causes clipping artifacts.
So I think you'll be fine going with what you've got for now. It
would be great to get clipping worked out in the long run, but I don't
see any immediate solutions.
On the font front: as I mentioned earlier, I ported font.c to agg and
it's working. For high resolution images it works great. For small
images (eg, dpi=60) the fonts look pretty bad. I think this is the
same for paint and agg, at least in the comparisons I've looked at.
This isn't surprising since the font code is the same. The two areas
where small rasters come up are for web pages (which is where I assume
your interests are) and embedding in GUI applications (which is
important to me).
Is this why you want to upgrade to freetype2? From my reading of some
of the docs there, this is one thing that freetype2 does better.
JDH
On Saturday 14 February 2004 8:03 am, you wrote:
> Thanks for the pointers.
>
> On Friday 13 February 2004 9:20 pm, you wrote:
> > I made some progress!
>
> [snup]
>
> > The bug was a line I introduced when I added some more print
> > debugs
>
> I had fixed this one...
>
> > The front end change that is causing your problems is that I now pass
> > a gcedge and a gcface. I used to pass a gcface and a face color.
> > Maybe I'll revert to that.
> >
> > When I instantiate a gcedge and a gcface on the front end, you make 2
> > calls to new_gc
> >
> > def new_gc(self):
> > """
> > Return an instance of a GraphicsContextWx, and sets the current
> > gc copy """
> > DEBUG_MSG('new_gc()', 2, self)
> > self.gc = GraphicsContextWx(self.bitmap, self)
> > assert self.gc.Ok(), "wxMemoryDC not OK to use"
> > return self.gc
> >
> > I think initing 2 GCs with the same bitmap is causing the problem. If
> > you see an easy backend solution, that would be ideal. If not, I
> > could re-refactor the frontend to be like it was: pass a gcedge and a
> > facecolor.
>
> This is, indeed, illegal in wx. It sometimes works under Windows, but
> consistently fails (dramatically) under Linux (at least for wxGtk).
To be more exact about what happens, here is the synopsis:
In patches.py, Patch._draw() instantiates two gc instances
def _draw(self, renderer, *args, **kwargs):
    # here =============>
    gc = renderer.new_gc()
    gc.set_foreground(self._edgecolor)
    gc.set_linewidth(self._linewidth)
    gc.set_alpha(self._alpha)
    if self._clipOn:
        gc.set_clip_rectangle(self.bbox.get_bounds())
    if not self.fill:
        gcFace = None
    else:
        # and here ===========>
        gcFace = renderer.new_gc()
        gcFace.set_foreground(self._facecolor)
        gcFace.set_alpha(self._alpha)
    self._derived_draw(renderer, gc, gcFace)
Each call to backend_wx.py RendererWx.new_gc() does the following:
    self.gc = GraphicsContextWx(self.bitmap, self)
    assert self.gc.Ok(), "wxMemoryDC not OK to use"
    return self.gc
But instantiating a GraphicsContextWx does:
def __init__(self, bitmap, renderer):
    GraphicsContextBase.__init__(self)
    wxMemoryDC.__init__(self)
    DEBUG_MSG("__init__()", 1, self)
    # Make sure (belt and braces!) that the existing wxDC is not selected
    # into a bitmap.
    if GraphicsContextWx._lastWxDC != None:
        # the gcEdge gets selected into a NULL bitmap here
        GraphicsContextWx._lastWxDC.SelectObject(wxNullBitmap)
    self.SelectObject(bitmap)
    self.SetPen(wxPen('BLACK', 1, wxSOLID))
    self._style = wxSOLID
    self.renderer = renderer
    GraphicsContextWx._lastWxDC = self
You will note that in order to enforce the wx rule that no more than one
wxMemoryDC can be selected into a bitmap, I ensure that the last wxMemoryDC
instance activated is selected to a wxNullBitmap.
In the case of the above code, this means that the gc representing the
rectangle has already been selected to a NULL bitmap. When you try to draw to
this, an exception is generated within the C++ code which is not propagated
back up to the Python layer. Unfortunately, the assert() calls don't help
much here as each gc instance is OK when it is constructed.
I'm afraid I'd missed the significance of the change from a facecolor to a
gcFace for fill colours.
> I'll see if I can come up with any ideas, but passing a gcedge and
> facecolor is likely to be simpler to get to work reliably.
I have thought about this, and the only other option I can think of is for me
to select between 'currently active' gc instances as required. Obviously,
this would probably slow backend_wx down, but I'm not yet sure what the
performance hit will be. Let me try it...
[an hour or so later...]
Attached is a backend_wx which seems to work with most of the examples I've
tried (Linux only - I expect Windows to work, as it's generally better behaved
than wxGtk, but it would be nice if someone would try it).
Some issues (for now):
- Clipping seems to be broken (c.f. arctest.py)
- Object picker demo doesn't work (haven't implemented this functionality)
- Some types of dotted lines are not working properly (c.f. subplot_demo.py)
- embedding_in_wx doesn't work - due to some of the recent API changes. I'll
fix this one pretty soon - it shouldn't be much work.
- It's about 20% slower than the previous implementation in which only one GC
was active at a time.
Not sure if the dotted lines and clipping are down to information being lost
when I switch between gcs - I'll have to look into this, but the attached
will be a good start for getting things to work on wx again.
John, I'll leave it as your call whether you want to switch the API back to
using a facecolor instead of a gcFace. It would be better from the wx
perspective, but there may be good reasons for doing otherwise on other
backends.
There are a couple of optimisations I can probably make to speed things up a
little if we stick with the current API - it should be possible to do
something slightly more intelligent than simply selecting a bitmap at the
start of each call and deselecting it afterwards - although, as noted above,
wxMemoryDC seems to be extremely fragile under wxGtk - not sure why.
I have checked the attached into CVS for now, but we may have the usual
'mirror update' issue.
Regards
Jeremy
>>>>> "Vittorio" == Vittorio Palmisano <redclay@...> writes:
Vittorio> I have rebuild the package with support for gtkgd. I
Vittorio> have tried to build also the agg backend, but I can't
Vittorio> find the source package for this library! I've found two
Vittorio> version: one from and another from
Vittorio> scipy cvs, but seems that there are some missing headers
Vittorio> when I compile the backend. I have also packaged the
Vittorio> gdmodule from
Vittorio>
Vittorio> and the ttfquery module from
Vittorio>, all the debs
Vittorio> are on
Wow, amazing work.
You may want to look at the header of matplotlib.backends.backend_agg
- this includes some install instructions, where to get agg, etc...
However, that may not be necessary.
I have built a new sdist that has all the prereqs for agg built in
(fonttools, ttfquery, and agg2). You just have to set BUILD_AGG in
setup.py and it should build, as long as your compiler is fairly
recent. I'll leave it to you whether you want to use debian to get
fonttools and ttfquery, or use the ones included with matplotlib. I
stripped all the fluff out of those 3 packages and with everything
included the sdist is still under a megabyte.
I used this to build an all-in-one win32 installer, which will be nice
because now win32 users with Numeric can make PS and PNG plots out of
the box (zlib, libpng, freetype are statically built in).
I'm going to upload the snapshot if you want to experiment with it. I
should warn you though: I forgot yesterday that there is still a
critical bug in wx that must be repaired. Thus there will be one more
revision following this one. (Sorry)
JDH
John Hunter ha scritto:
I have rebuilt the package with support for gtkgd.
I have also tried to build the agg backend, but I can't find the source package
for this library! I've found two versions: one from and
another from scipy cvs, but it seems that there are some missing headers when I
compile the backend.
I have also packaged the gdmodule from and the ttfquery module
from, all the debs are on
I can write some documentation about the required debian packages, once I've
tested them, but only for the next weekend, bye
- --
/Vittorio Palmisano/
Almost
>
> On the topic of releases: David, my target date is early next week.
> Do you think you can get pypaint up by then?
>
> Thanks again,
> JDH
>
Hi John,
Sorry, last email wasn't to the list. From posts you've made, it looks like
you've made some changes to pypaint. Do you want those committed before a
pypaint release? Apart from that, I'm just tying up some loose ends, and
then I'll be able to make the release - hopefully today, but maybe only
Monday if you want your changes added.
thanks,
David Moore
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system ().
Version: 6.0.580 / Virus Database: 367 - Release Date: 2004/02/06
>>>>> "Vittorio" == Vittorio Palmisano <redclay@...> writes:
Vittorio> Ok, I have fixed this problems, now "lintian -i
Vittorio> python-matplotlib_0.50-1_i386.changes" doesn't say
Vittorio> nothing. I have updated the files on my repository, bye
Hi Vittorio, thanks a bunch for doing this. A lot of people have
asked for this, and there's been incremental progress here and there,
but I'm glad to see you got everything done.
On the topic of releases: David, my target date is early next week.
Do you think you can get pypaint up by then?
Thanks again,
JDH
Jochen Voss ha scritto:
> Hi Vittorio,
Ok, I have fixed these problems; now "lintian -i
python-matplotlib_0.50-1_i386.changes" doesn't say anything.
I have updated the files on my repository, bye
- --
/Vittorio Palmisano/
Hi Vittorio,
On Thu, Feb 12, 2004 at 11:14:19PM +0100, Vittorio Palmisano wrote:
> I've made a Debian package for matplotlib and I've put it at:
>
Hello,
I've made a Debian package for matplotlib and I've put it at:
I've read some specification for dependencies from,
this is my first package and so there are some (many?) things to fix.
I think that the package may also include the api documentation generated with pydoc.
--
/Vittorio Palmisano/
On Tue, 2004-02-10 at 16:30, John Hunter wrote:
> I just made a minor change in the backend API. The faceColor argument
> (formerly a color arg) for draw_rectangle, draw_arc, etc... is now a
> graphics context instance. I updated all the backends in CVS.
>
Hi John,
I'm working with Perry Greenfield's group on a Tkinter/Paint backend for
matplotlib. I noticed today that the paint facecolor had changed to
black. Looking into it, there were a few more edits to
backend_paint.py needed as a result of the interface change above. I
also noticed that the paint version of draw_polygons needed a little
code to get it working. Attached is a patch for both.
Regards,
Todd
--
Todd Miller <jmiller@...>
The agg backend
Features that are implemented
* capstyles and join styles
* dashes
* linewidth
* lines, rectangles, ellipses, polygons
* clipping to a rectangle
* output to RGBA and PNG
* alpha blending
* DPI scaling - (dashes, linewidths, fontsizes, etc)
* freetype1
TODO:
* use ttf manager to get font - right now I just use Vera
INSTALLING
Grab the latest matplotlib from
REQUIREMENTS
python2.2+
Numeric 22+
agg2 (see below)
freetype 1
libpng
libz ?
Install AGG2 (cut and paste below into xterm should work)
wget
tar xvfz agg2.tar.gz
cd agg2
make
(Optional) if you want to make the examples:
cd examples/X11
make
Installing backend_agg
Edit setup.py: change aggsrc to point to the agg2 src tree and
replace if 0: with if 1: in the backend_agg section
Then just do the usual thing: python setup.py build
Please let me know if you encounter build problems, and tell me
platform, gcc version, etc... Currently the paths in setupext.py
assume a Linux-like filesystem (eg X11 include dir, location of
libttf, etc.) so you may need to tweak these.
Using agg backend
python somefile.py -dAgg
or
import matplotlib
matplotlib.use('Agg')
Let me know how it works out! Note also that backend agg is the first
backend to support alpha blending; see scatter_demo2.py.
JDH
|
http://sourceforge.net/p/matplotlib/mailman/matplotlib-devel/?viewmonth=200402
|
CC-MAIN-2016-07
|
en
|
refinedweb
|
Implementing sorting in a generic list (List<T>).
The List<T>.Sort() method uses the default comparer Comparer<T>.Default for type T to determine the order of list elements. The Comparer<T>.Default property uses the IComparable<T> (or IComparable) implementation of type T, if one exists.
If you use system types like String, the default comparer works fine, because the String class implements the IComparable interface. The definition of the String class looks like this (you can see it by pressing F12 with the cursor on the word 'String' in the Visual Studio code editor):
public sealed class String : IComparable, ICloneable, IConvertible, IComparable<string>, IEnumerable<char>, IEnumerable, IEquatable<string>
But for types that do not implement IComparable, the default sort does not work. However, there is a workaround. This trick uses delegates.
To illustrate, I created a simple class Post; its code is below. This class does not implement the IComparable interface, so the default sort will not work for it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Tyagi.Utility
{
    public class Post
    {
        public String Title { get; set; }
        public String Source { get; set; }
    }
}
Create another class that will consume this class.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Tyagi.Utility
{
    public class PostsList
    {
        List<Post> posts = new List<Post>();

        public List<Post> GetPosts()
        {
            posts.Add(new Post { Title = "MSN Space", Source = "Blog" });
            posts.Add(new Post { Title = "CNBC TV 18", Source = "News" });
            posts.Add(new Post { Title = "Independence Day", Source = "Movie" });
            posts.Sort(delegate(Post P1, Post P2)
            {
                return P1.Title.CompareTo(P2.Title);
            });
            return posts;
        }
    }
}
The trick is in the use of a delegate in the call to Sort method of the posts object.
posts.Sort(delegate(Post P1, Post P2) { return P1.Title.CompareTo(P2.Title); });
The delegate accepts two parameters, both of the same class Post. It then uses the CompareTo method, which is defined on the type of the property being compared. In this example, the Title property of the Post class is of type string, so the CompareTo method of the string class is called. CompareTo returns an int, and the Sort method of List<T> uses this integer to compare two objects.
The above code returns the list in ascending order of title. To reverse the order, simply swap the two operands, as below:
posts.Sort(delegate(Post P1, Post P2) { return P2.Title.CompareTo(P1.Title); });
This is one way. You can implement the sorting functionality in your class as well. I will discuss this in the next post.
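For comparison, Python reaches the same result with a key function instead of a comparison delegate. This analogue is not part of the original C# article:

```python
class Post:
    def __init__(self, title, source):
        self.title = title
        self.source = source

posts = [Post("MSN Space", "Blog"),
         Post("CNBC TV 18", "News"),
         Post("Independence Day", "Movie")]

# ascending by title: the key function plays the role of the C# delegate
posts.sort(key=lambda p: p.title)

# descending: no operand swap needed, just reverse=True
posts.sort(key=lambda p: p.title, reverse=True)
```

The key-based form is usually preferred over a two-argument comparison because the key is computed once per element rather than once per comparison.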
|
http://weblogs.asp.net/yaneshtyagi/implementing-sorting-in-generic-list-list-lt-t-gt
|
CC-MAIN-2016-07
|
en
|
refinedweb
|
In this penultimate part of a series, I develop a basic client-side cache class in PHP. It uses the local storage mechanism packaged with HTML5 to save, retrieve and manipulate data in the browser.
If you're a conscientious PHP developer who wants to build scalable object-oriented applications, and still don't know what approach to take to make the enhancement process as painless as possible (for yourself or for other developers), then keep reading. You'll want to take a close look at the "Plug-in" pattern. As its name suggests, this simple, yet powerful paradigm will let you create truly "pluggable" programs using only the functionality of Composition and Interfaces. That's all that it takes, really.
What's more, to demonstrate how to implement the pattern in some concrete use cases, in previous parts of this series I developed a really basic application. It utilized the concept of "plug-in" to easily couple additional classes to its existing infrastructure. The application was responsible for rendering different types of objects on screen, ranging from a couple of HTML widgets to JavaScript alert boxes. Of course, the most appealing facet of this expansion process was that it didn't demand that we change a single section of the client class consuming those renderable objects.
Logically, the flexible nature of "Plug-in" makes it suitable to implement in all sorts of situations and environments. Indeed, I left off the last installment developing a caching system, which was able to swap different cache back-ends at runtime. While at this stage, the system is composed of a cache manager and only one cache backend (which turns out to be an APC wrapper), this is about to change. In the lines to come I'm going to create another backend, which will make use of the local storage mechanism that comes bundled with HTML5.
So, are you eager to see how the "Plug-in" pattern can be used in the development of a cache library which will be able to cache data in the server and in the client with the same ease? Well, get rid of your anxiety and start reading!
Using the "Plug-in" pattern in the construction of an extensible cache system: a brief look at the system in its current state
Before I begin building the client-side cache backend mentioned in the introduction, I'd like to spend a few moments reintroducing the source classes (and the interface) that comprise the cache system developed in the previous installment of the series. In doing so, you'll be able to understand more easily how the implementation of a "plug-in" schema can turn this system into an application whose functionality can be easily extended.
First, here's the definition of a generic cache backend, which happens to be an interface called "Cacheable." Take a look at it:
(Cache/Cacheable.php)
<?php
namespace Cache;
interface Cacheable
{
    public function set($key, $data);
    public function get($key);
    public function delete($key);
    public function clear();
}
As seen above, the "Cacheable" interface defines a contract that must be agreed to by all of the concrete back-ends created from this point onward. Well, the following class is one of the previous interface's implementers. It acts like a simple wrapper for the APC PHP extension. Check it out:
(Cache/ApcCache.php)
class ApcCache implements Cacheable
{
    /**
     * Save data to the cache
     */
    public function set($key, $data)
    {
        if (!apc_store($key, $data)) {
            throw new ApcCacheException('Error saving data with the key ' . $key . ' to the cache.');
        }
        return $this;
    }

    /**
     * Get the specified data from the cache
     */
    public function get($key)
    {
        if ($this->exists($key)) {
            if (!$data = apc_fetch($key)) {
                throw new ApcCacheException('Error fetching data with the key ' . $key . ' from the cache.');
            }
            return $data;
        }
        return null;
    }

    /**
     * Delete the specified data from the cache
     */
    public function delete($key)
    {
        if ($this->exists($key)) {
            if (!apc_delete($key)) {
                throw new ApcCacheException('Error deleting data with the key ' . $key . ' from the cache.');
            }
        }
        return $this;
    }

    /**
     * Check if the specified cache key exists
     */
    public function exists($key)
    {
        return apc_exists($key);
    }

    /**
     * Clear the cache
     */
    public function clear($cacheType = 'user')
    {
        return apc_clear_cache($cacheType);
    }
}
(Cache/ApcCacheException.php)
class ApcCacheException extends Exception{}
The logic driving the above "Apc" class is pretty self-explanatory (especially if you've already worked with the APC extension), so let's move on. Pay attention to the definition of the following client class, which permits you to cache data using different cache back-ends, including the one just defined:
(Cache/CacheHandler.php)
class CacheHandler
{
    protected $_cache;

    /**
     * Constructor
     */
    public function __construct(Cacheable $cache)
    {
        $this->_cache = $cache;
    }

    /**
     * Add the specified data to the cache
     */
    public function set($key, $data)
    {
        return $this->_cache->set($key, $data);
    }

    /**
     * Get the specified data from the cache
     */
    public function get($key)
    {
        return $this->_cache->get($key);
    }

    /**
     * Delete the specified data from the cache
     */
    public function delete($key)
    {
        $this->_cache->delete($key);
    }
}

While I have to admit that the implementation of this client class is somewhat naive, it demonstrates how to use the "Plug-in" pattern in the construction of a scalable cache system. If you take a close look at the class' constructor, you'll see that it accepts any implementer of the earlier "Cacheable" interface, which makes it simple to create different cache back-ends and inject them straight into the class' internals.
For now, though, there's only one cache backend available for testing, so here's a script that shows how to use it for caching some info about my loyal friend, Julie Smith:
// include the autoloader
require_once 'Autoloader.php';
Autoloader::getInstance();

use Cache\ApcCache as ApcCache,
    Cache\CacheHandler as CacheHandler;

// cache some data using APC
$cacheHandler = new CacheHandler(new ApcCache);
$cacheHandler->set('fname', 'Julie')
    ->set('lname', 'Smith')
    ->set('email', '[email protected]');

// display the cached data
echo 'First Name: ' . $cacheHandler->get('fname') .
    ' Last Name: ' . $cacheHandler->get('lname') .
    ' Email: ' . $cacheHandler->get('email');
In the example above, the data is saved to and retrieved from the opcode cache. That's not especially interesting -- but, since the "CacheHandler" class behaves like a sort of adapter that allows us to switch over multiple cache back-ends at runtime, it'd be a shame not to take advantage of this ability, right?
Well, in the following section I'm going to create a brand new cache backend. It will be capable of caching data in the client through the local storage mechanism included with HTML5.
To see how this cache backend is developed, read on.
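For readers who want to experiment with the idea outside PHP, here is a minimal Python sketch of the same plug-in pattern: a handler that delegates to whatever backend it is constructed with. The class and method names are illustrative, not part of the original article.

```python
class MemoryCache:
    """A trivial in-memory backend standing in for APC or localStorage."""
    def __init__(self):
        self._data = {}

    def set(self, key, data):
        self._data[key] = data

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)


class CacheHandler:
    """Delegates to whatever backend it was constructed with."""
    def __init__(self, cache):
        self._cache = cache

    def set(self, key, data):
        self._cache.set(key, data)
        return self  # allow fluent chaining, as in the PHP example

    def get(self, key):
        return self._cache.get(key)

    def delete(self, key):
        self._cache.delete(key)
        return self


handler = CacheHandler(MemoryCache())
handler.set('fname', 'Julie').set('lname', 'Smith')
print(handler.get('fname'))  # Julie
```

Swapping in a different backend means passing a different object to the constructor; nothing else changes.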
http://www.devshed.com/c/a/PHP/Create-A-ClientSide-Cache-in-PHP/
I tried to make the variables self explanatory.
This program updates the console every 5 seconds with
lps: Loops per second, and most of the other variables used in the program.
#include <iostream>
#include <time.h>

int main()
{
    using namespace std;
    #define WAIT 5 // 5 seconds

    time_t timer_start = time(NULL);
    time_t timer_end = timer_start;   // program start time
    time_t prev_timer = timer_start;
    unsigned int while_cntr = 0;
    unsigned int prev_while_cntr = 0;
    unsigned int last_loop = 0;
    unsigned int avg_lps = 0;         // was "= NULL"; 0 is the correct initializer
    unsigned int lps = 0;
    bool go = false;

    cout << "This is a time.h testing program\n\n";
    prev_timer = timer_start - timer_end; // elapsed seconds, i.e. 0

    while (go == false)
    {
        while_cntr++;
        timer_start = time(NULL);
        timer_start = timer_start - timer_end; // seconds since program start
        if (timer_start >= prev_timer + WAIT)
        {
            last_loop = while_cntr - prev_while_cntr;
            lps = last_loop / (timer_start - prev_timer);
            avg_lps = while_cntr / timer_start;
            prev_timer = timer_start;
            prev_while_cntr = while_cntr;
            cout << "lps: " << lps << "\n";
            cout << "avg_lps: " << avg_lps << "\n";
            cout << "timer_start: " << timer_start << "\n";
            cout << "timer_end: " << timer_end << "\n";
            cout << "prev_timer: " << prev_timer << "\n";
            cout << "while_cntr: " << while_cntr << "\n";
            cout << "prev_while_cntr: " << prev_while_cntr << "\n";
            cout << "last_loop: " << last_loop << "\n\n";
        }
    }
    return 0;
}
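The same measurement idea can be sketched in a few lines of Python: count iterations, and every time an interval elapses, report the iteration rate over that interval. The function name and the shortened interval are my own choices for illustration.

```python
import time

def measure_lps(interval=0.2, total=0.6):
    """Return a list of loops-per-second samples, one per elapsed interval."""
    start = time.monotonic()
    prev_mark = start
    count = prev_count = 0
    samples = []
    while time.monotonic() - start < total:
        count += 1
        now = time.monotonic()
        if now - prev_mark >= interval:
            # Rate over this interval only, like "lps" in the C++ program.
            samples.append((count - prev_count) / (now - prev_mark))
            prev_mark, prev_count = now, count
    return samples

rates = measure_lps()
print(len(rates), all(r > 0 for r in rates))
```

Using a monotonic clock avoids the jumps a wall-clock time source can make; the C++ version's time() has only one-second resolution, which is why it waits five seconds between reports.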
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5011001
This object provides the functionality of both a GeomVertexReader and a GeomVertexWriter, combined together into one convenient package.
More...
#include "geomVertexRewriter.h"
List of all members.
This object provides the functionality of both a GeomVertexReader and a GeomVertexWriter, combined together into one convenient package.
It is designed for making a single pass over a GeomVertexData object, modifying rows as it goes.
Although it doesn't provide any real performance benefit over using a separate reader and writer on the same data, it should probably be used in preference to a separate reader and writer, because it makes an effort to manage the reference counts properly between the reader and the writer to avoid accidentally dereferencing either array while recopying.
Definition at line 39 of file geomVertexRewriter.h.
Constructs an invalid GeomVertexRewriter.
You must use the assignment operator to assign a valid GeomVertexRewriter to this object before you can use it.
Definition at line 25 of file geomVertexRewriter.I.
Constructs a new rewriter to process the vertices of the indicated data object.
Definition at line 38 of file geomVertexRewriter.I.
This flavor creates the rewriter specifically to process the named data type.
Definition at line 52 of file geomVertexRewriter.I.
References set_column().
Definition at line 68 of file geomVertexRewriter.I.
Constructs a new rewriter to process the vertices of the indicated array only.
Definition at line 83 of file geomVertexRewriter.I.
Definition at line 96 of file geomVertexRewriter.I.
Resets the GeomVertexRewriter to the initial state.
Reimplemented from GeomVertexReader.
Definition at line 294 of file geomVertexRewriter.I.
Returns the array index containing the data type that the rewriter is working on.
Definition at line 320 of file geomVertexRewriter.I.
References GeomVertexReader::get_array(), and GeomVertexWriter::get_array().
Returns the particular array object that the rewriter is currently processing.
Definition at line 156 of file geomVertexRewriter.I.
References GeomVertexWriter::get_array_data(), and GeomVertexReader::get_array_data().
Returns the write handle to the array object that the rewriter is currently processing.
This low-level call should be used with caution; be careful with modifying the data in the handle out from under the GeomVertexRewriter.
Definition at line 172 of file geomVertexRewriter.I.
Returns the description of the data type that the rewriter is working on.
Definition at line 333 of file geomVertexRewriter.I.
References GeomVertexReader::get_column(), and GeomVertexWriter::get_column().
Returns the Thread pointer of the currently-executing thread, as passed to the constructor of this object.
Definition at line 197 of file geomVertexRewriter.I.
References GeomVertexReader::get_current_thread(), and GeomVertexWriter::get_current_thread().
Returns the row index at which the rewriter started.
It will return to this row if you reset the current column.
Definition at line 380 of file geomVertexRewriter.I.
References GeomVertexReader::get_start_row(), and GeomVertexWriter::get_start_row().
Returns the per-row stride (bytes between consecutive rows) of the underlying vertex array.
This low-level information is normally not needed to use the GeomVertexRewriter directly.
Definition at line 185 of file geomVertexRewriter.I.
References GeomVertexReader::get_stride(), and GeomVertexWriter::get_stride().
Returns the vertex data object that the rewriter is processing.
Definition at line 143 of file geomVertexRewriter.I.
References GeomVertexReader::get_vertex_data(), and GeomVertexWriter::get_vertex_data().
Returns true if a valid data type has been successfully set, or false if the data type does not exist.
Definition at line 307 of file geomVertexRewriter.I.
Returns true if the reader or writer is currently at the end of the list of vertices, false otherwise.
Definition at line 393 of file geomVertexRewriter.I.
Referenced by GeomPrimitive::offset_vertices().
Sets up the rewriter to use the data type with the indicated name.
This also resets both the read and write row numbers to the start row (the same value passed to a previous call to set_row(), or 0 if set_row() was never called.)
The return value is true if the data type is valid, false otherwise.
Definition at line 240 of file geomVertexRewriter.I.
Sets up the rewriter to use the indicated column description on the given array.
Definition at line 281 of file geomVertexRewriter.I.
Sets up the rewriter to use the nth data type of the GeomVertexFormat, numbering from 0.
Definition at line 218 of file geomVertexRewriter.I.
Referenced by GeomVertexRewriter(), and set_column().
Definition at line 259 of file geomVertexRewriter.I.
Sets the start, read, and write rows to the indicated value.
The rewriter will begin traversing from the given row.
Definition at line 367 of file geomVertexRewriter.I.
Definition at line 351 of file geomVertexRewriter.I.
http://www.panda3d.org/reference/1.8.0/cxx/classGeomVertexRewriter.php
On Oct 12, 2009, at 22:28 , Uwe Hollerbach wrote: > parsePrefixOf n str = > string (take n str) >> opts (drop n str) >> return str > where opts [] = return () > opts (c:cs) = optional (char c >> opts cs) Seems to me this will succeed as soon as it possibly can... > myTest = myPrefixOf 1 "banana" > <|> myPrefixOf 1 "chocolate" > <|> TPCP.try (myPrefixOf 2 "frito") > <|> myPrefixOf 3 "fromage" ...so the "frito" branch gets committed as soon as "fr" is read/parsed (myTest returns)... > % ./opry fro > "test" (line 1, column 3): > unexpected "o" > expecting "i", white space or end of input ...which is why this is looking for "white space or end of input". My fix would be to have myPrefixOf require the prefix be terminated in whatever way is appropriate (end of input, white space, operator?) instead of simply accepting as soon as it gets a prefix match regardless of what follows. -- :
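The fix proposed above -- only accept a keyword prefix when what follows is a valid terminator, rather than committing as soon as the shortest prefix matches -- can be sketched outside Parsec. This Python version is illustrative only; the function name and return convention are my own.

```python
def parse_prefix_of(n, word, text):
    """Match at least the first n chars of `word` at the start of `text`,
    then require the match to end at a word boundary. Returns the
    canonical word and the remaining input, or None on failure."""
    i = 0
    while i < len(text) and i < len(word) and text[i] == word[i]:
        i += 1
    if i < n:
        return None                      # prefix too short
    rest = text[i:]
    if rest and not rest[0].isspace():   # prefix must be terminated
        return None
    return word, rest

print(parse_prefix_of(2, "frito", "fro"))    # None: 'o' doesn't continue "frito"
print(parse_prefix_of(3, "fromage", "fro"))  # ('fromage', '')
```

With the terminator check, "fro" fails the "frito" branch outright instead of committing after "fr", so an alternative like "fromage" can still be tried.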
http://www.haskell.org/pipermail/haskell-cafe/2009-October/067733.html
Christian Schoenebeck declaimed: > > I've been trying to switch from mbox to maildir, but exim won't deliver > > to my maildirs. Here's the relevant section of my exim.conf: > > I replaced the local_delivery section by: > > local_delivery: > driver = appendfile > create_directory = true > directory_mode = 700 > directory = ${home}/Maildir > group = mail > mode = 0660 > envelope_to_add = true > return_path_add = true > maildir_format > > That works for me. I also found a nice perl script which converts old mbox -> > Maildir. Let me know if you need it. Well this doesn't hurt at any rate, the old system of having the primary deliver go to file = /var/spool/mail/${localpart} was also working. The mbox or Maildir specified in the local_delivery transport of exim.conf gets the mail. The problem is that my .forward file won't file mail to Maildirs, just mboxes. A stripped down example is: # Exim filter if error_message then finish endif if $h_Subject contains "mailtest" then seen save $home/Maildir/test/ else save $home/Maildir/inbox endif finish # end .forward According to the comments in exim.conf:TRANSPORTS CONFIGURATION, it's the address_directory transport that handles addresses generated by .forward files. Could you send me that section? TIA, Paul -- Paul Mackinney [email protected] -- To UNSUBSCRIBE, email to [email protected] with a subject of "unsubscribe". Trouble? Contact [email protected]
http://lists.debian.org/debian-user/2002/06/msg03952.html
How to: Find Existing Files and Directories in Isolated Storage
You can also search for existing directories and files using an isolated storage file. Remember that within a store, file and directory names are specified with respect to the root of the virtual file system. Also, file and directory names in the Windows file systems are case-insensitive.
To search for a directory, use the GetDirectoryNames instance method of IsolatedStorageFile. GetDirectoryNames takes a string representing a search pattern. Both single-character (?) and multi-character (*) wildcard characters are supported. These wildcard characters cannot appear in the path portion of the name; that is, directory1/*ect* is a valid search string, but *ect*/directory2 is not.
To search for a file, use the GetFileNames instance method of IsolatedStorageFile. The same restriction for wildcard characters in search strings that applies to GetDirectoryNames also applies to GetFileNames.
Neither GetDirectoryNames nor GetFileNames is recursive; that is, IsolatedStorageFile does not supply methods for listing all directories or files in your store. However, examples of recursive methods are part of the code below. Also note that both GetDirectoryNames and GetFileNames return only the directory or file name of the item found. For example, if there is a match on the directory RootDir/SubDir/SubSubDir, SubSubDir will be returned in the results array.
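The recursion strategy described above -- building a full directory tree from an API that only lists one level at a time, and prefixing each child with its parent's name because the API returns bare names -- can be sketched in Python. The helper `get_all_directories` and the single-level lister are illustrative stand-ins for GetAllDirectories and GetDirectoryNames.

```python
import os
import tempfile

def get_all_directories(root, list_dirs):
    """Return every directory under `root`, relative to `root`, using
    only a non-recursive single-level listing function `list_dirs`."""
    result = []
    for name in list_dirs(root):
        result.append(name)
        # Recurse, prefixing each child with its parent's name,
        # since the lister returns only bare names.
        for sub in get_all_directories(os.path.join(root, name), list_dirs):
            result.append(name + "/" + sub)
    return result

# Demo: a small tree, listed with a single-level lister.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "RootDir", "SubDir", "SubSubDir"))
single_level = lambda p: sorted(
    d for d in os.listdir(p) if os.path.isdir(os.path.join(p, d)))
print(get_all_directories(base, single_level))
# ['RootDir', 'RootDir/SubDir', 'RootDir/SubDir/SubSubDir']
```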
FindingExistingFilesAndDirectories Example
The following code example illustrates how to create files and directories in an isolated store. First, a store isolated for user, domain, and assembly is retrieved and placed in the isoStore variable. The CreateDirectory method is used to set up a few different directories and the IsolatedStorageFileStream method creates some files in these directories. The code then loops through the results of the GetAllDirectories method. This method uses GetDirectoryNames to find all the directory names in the current directory. These names are stored in an array, and then GetAllDirectories calls itself, passing in each of the directories it has found. The result is all the directory names returned in an array. Next, the code calls the GetAllFiles method. This method calls GetAllDirectories to find out the names of all the directories, and then it checks each of these directories for files using the GetFileNames method. The result is returned in an array for display.
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Collections;

public class FindingExistingFilesAndDirectories
{
    // Retrieves an array of all directories in the store, and
    // displays the results.
    public static void Main()
    {
        // This part of the code sets up a few directories and files in the
        // store, isolated by user, domain, and assembly.
        IsolatedStorageFile isoStore = IsolatedStorageFile.GetStore(
            IsolatedStorageScope.User |
            IsolatedStorageScope.Domain |
            IsolatedStorageScope.Assembly, null, null);
        isoStore.CreateDirectory("AnotherTopLevelDirectory/InsideDirectory");
        new IsolatedStorageFileStream("InTheRoot.txt", FileMode.Create, isoStore);
        new IsolatedStorageFileStream(
            "AnotherTopLevelDirectory/InsideDirectory/HereIAm.txt",
            FileMode.Create, isoStore);
        // End of setup.

        Console.WriteLine('\r');
        Console.WriteLine("Here is a list of all directories in this isolated store:");
        foreach (string directory in GetAllDirectories("*", isoStore))
        {
            Console.WriteLine(directory);
        }
        Console.WriteLine('\r');

        // Retrieve all the files in the directory by calling the GetAllFiles
        // method.
        Console.WriteLine("Here is a list of all the files in this isolated store:");
        foreach (string file in GetAllFiles("*", isoStore))
        {
            Console.WriteLine(file);
        }
    } // End of Main.

    // Method to retrieve all directories, recursively, within a store.
    public static string[] GetAllDirectories(string pattern, IsolatedStorageFile storeFile)
    {
        // Get the root of the search string.
        string root = Path.GetDirectoryName(pattern);
        if (root != "")
            root += "/";

        // Retrieve directories.
        string[] directories = storeFile.GetDirectoryNames(pattern);
        ArrayList directoryList = new ArrayList(directories);

        // Retrieve subdirectories of matches.
        for (int i = 0, max = directories.Length; i < max; i++)
        {
            string directory = directoryList[i] + "/";
            string[] more = GetAllDirectories(root + directory + "*", storeFile);

            // For each subdirectory found, add in the base path.
            for (int j = 0; j < more.Length; j++)
                more[j] = directory + more[j];

            // Insert the subdirectories into the list and
            // update the counter and upper bound.
            directoryList.InsertRange(i + 1, more);
            i += more.Length;
            max += more.Length;
        }
        return (string[])directoryList.ToArray(Type.GetType("System.String"));
    }

    public static string[] GetAllFiles(string pattern, IsolatedStorageFile storeFile)
    {
        // Get the file portion of the search string.
        string fileString = Path.GetFileName(pattern);

        string[] files = storeFile.GetFileNames(pattern);
        ArrayList fileList = new ArrayList(files);

        // Loop through the subdirectories, collect matches,
        // and make separators consistent.
        foreach (string directory in GetAllDirectories("*", storeFile))
            foreach (string file in storeFile.GetFileNames(directory + "/" + fileString))
                fileList.Add(directory + "/" + file);

        return (string[])fileList.ToArray(Type.GetType("System.String"));
    } // End of GetAllFiles.
}
http://msdn.microsoft.com/en-us/library/zd5e2z84(v=vs.80).aspx
Open Source Killing Commercial Developer Tools 742
jexrand recommends an interview with John De Goes in which he argues: "The tools market is dead. Open source killed it." The software developer turned president of N-BRAIN explains the effect that open source has had on the developer tools market, and how this forced the company to release the personal edition of UNA free of charge. According to De Goes, selling a source-code editor, even a very good one, is all but impossible in the post-open source era, especially given that, "Some developers would rather quit their job than be forced to use a new editor or IDE." N-BRAIN's decision is but one in a string of similar announcements from tools companies announcing the free release of their previously commercial development tools.:and piracy killed music (Score:5, Insightful)
The reason open source has taken off so much is because it allows people who have no capital to dodge around the wage-slave line and produce things with their own tools.
Teach a man to fish and all that jazz...
Capitalism and all its fictional scarcity have been destroying productivity in the name of control for a long time. The liberty that lies beneath free software and open publishing is increasing productivity, not damaging it.
Capitalist economics is a big shell game, meant to fleece suckers. It's monopoly, dependence, exploitation and theft, pure and simple.
and piracy killed music (Score:5, Insightful)
Well then their competitors will beat them by using the superior tool and shipping a product faster, better, cheaper. That IS free-market economics. Not every company is going to make the best decisions. The best teams will survive, the weakest will fail.
It seems to me these guys selling the source-code editor are not doing their job of marketing/advertising well enough. If their product will truly save time/money then they need to do a better job of convincing people of that. If their tool would save me hours daily I might be interested. But I've never heard of their tool. I've never seen it. That's not MY failure, it's theirs.
Re: (Score:3, Insightful)
1. Write article almost-trolling OSS, make sure to mention your product a bunch.
2. Get article posted to slashdot
3. ??????
4. Profit!!
Re:and piracy killed music (Score:5, Insightful)
I think we're both generalizing and 'if'-fing a little too much. Every case should be examined separately. We can safely assume Qcad is not a real replacement for AutoCAD, whereas OOo will be more than enough for the majority of MSOffice users. The problem with companies such as the one TFA mentions is that they seem to be trying to sell the same thing you can get somewhere else for free, without any noticeable quality difference, and then bitching about it and crying "the communists are destroying my business!". Ask ice-sellers what they think of the price drop in refrigerators.
If that were true, most places where employees only use email, web browsing and office software would be installing Linux instead of the almost ubiquitous Windows.
Technological advantages are not the only way you can have progress. Progress can be attained by, for example, having every programmer in the world be able to access affordable development tools. This goes to the advantage of everyone, and the disadvantage of those who want to sell development tools. Maybe they should just move on to the next product, or look for an alternate business model. It happens all the time to all kinds of companies.
I really think that we have reached a point where all development tools offer the same features, more or less. Maybe the point is that these software companies should move to something more than making source code editors which we can no longer distinguish from each other.
Bah. What good is an editor that doesn't include email, usenet, telnet and ftp functions?
Seriously, though, I don't doubt your sincerity, but whenever I read something along the lines of "It works great!", I wonder why it is the endorsement never includes its limitations, or what should be a requisite qualifier of "It works, but only for the limited manner in which I need it to work."
I don't know what compiler versions you are talking about.
VC6 was not iso compliant. No wonder. the ISO standard wasn't ratified at that time.
But g++ 2.95 scored equally bad, or worse.
VC++8.x and 9 are very compliant, and on par with g++.
Sure, VC++ has compiler extensions, but so does g++, which litters the global namespace with ISO non-conformant function names (snprintf).
However, VC++ also has a switch that turns it into ISO mode, allowing not a single compiler extension.
And I don't know if you know, but a lot of headers (string for example) are supposed to come WITHOUT the .h extension.
string.h is a C include header. string is a C++ include header.
But hey, at least you're a respectable programmer. Me, I use whatever tool I need to get the job done.
Really? (Score:5, Interesting)
And some prima-donna developers will presumably find themselves without a job after a couple of resignations based on the code-editor they were required to use.
I'm glad to see that (F)OSS is making an impact, even if it means that a company has to give away their software. I know that this might put a lot of jobs at risk, which is bad, but maintaining a false-economy-based business model as a welfare system is, I tend to assume, more harmful to the overall economy. Plus there's always the option to release advanced tools under a paid-for license, as well as the paid-for support contract.
Re:Really? (Score:5, Interesting)
Not at all. Economy's not based on what we'd like it to be, not based on anything moral or worthy, but simply upon what is. And if there's competition in a market for a code editor (or anything else at all) which is being distributed for no cost then the commercial entities have to compete against that product. Saying, as Mike Masnick, from Techdirt, asserts "that you can't compete against free" means that "you can't compete, period."
Product A achieves the same ends as Product B. Product A is free, Product B requires a payment. If there is no distinction between the two products except price, then many people will go for Product A, and will forgive a few quirks or bugs. I tend to assume then that Product B has to compete with this product to maintain, or gain, market share. This is why I tend to believe that there should be a basic free version. The paid-for version should have added value; whether it's advanced features and/or support is largely irrelevant; the point is that to justify the cost of the product there has to be more than just the basics, which can be acquired legally for free in the form of the FOSS.
Plus in the context of software, once it's been developed then there's no further cost (if distributed digitally) to producing another million copies (okay, there's the cost of servers and bandwidth) beyond the initial copy (and the bug-fixes, which I'd tend to assume are more or less negligible next to the original development cost). If a commercial entity wants to continue earning money for releasing a product it has to compete with the prevalent market conditions. If free software is your competitor then you have to compete with free.
My comment about 'welfare' was perhaps a little harsh or glib, though it was intended to contribute towards the point that continuing in the vein of the old market tradition (build it, sell it, profit, rinse and repeat) doesn't work so well when the sell it stage is removed. And expecting to continue to sell a product, when alternatives are available for free, is counter-intuitive at best.
Apologies if I offended anyone.
Sorry, N-BRAIN, but your website looks like sh*t. (Score:5, Insightful)
How about wasting 5 minutes on a concept for an online presence and an online marketing strategy? And, please, *do* get a *professional* webdesigner to rebuild the site. You'll find plenty of them here [csszengarden.com].
To be honest, somebody who needs to get a job done cares squat whether a tool is free or costs 300$. It's only because the 300$ tools are just as crappy as the free ones (sic!) that they settle for the free ones. And damn the few bucks I have to shell out for it.
Best example: Zend Studio and PHP Eclipse or PDT Eclipse. If I have to go through the same fuss configuring local remote debugging in either, I see no point in spending 300$ for Zend Studio. That way I'll even learn to configure an open source tool - a skill not wasted - rather than learning to deal with some quirks of some proprietary tool.
Counterexample: Mint [haveamint.com] is a web presence statistics tool with PHP backend logic. There are like a quarter bazillion of these in Free, FOSS and public domain scattered all over the web. However, looking at this guy's site (he happens to be a good designer *and* a good programmer) I haven't the slightest doubt that his statistics tool will deliver without hassle. Thus whenever I need a statistics tool, he'll be the first and last place I look for it.
Offer value (Score:4, Insightful)
For example: there's an expensive, commercial ARM compiler despite the existence of GCC. People buy it because it generates code that's ~20% smaller and faster.
Re: (Score:3, Insightful)
The answer is simple - they're charging too much. (Score:5, Insightful)
Even in a small company with 2 developers/engineers, this can often mean that they need 8 licenses.
1 for each developer/engineer for their primary machine = 2 licenses
1 for each developer/engineer for their home machine = 2 licenses
1 for each developer/engineer for their notebook = 2 licenses
1 for each test lab machine = 2 licenses
In total, we are now looking at 8 licenses for 2 blokes, when in reality only one of them will ever be using it at a time anyway.
Then they put a myriad of protection and security in there which makes it a pain to install, maintain, or move.
Then we need a yearly maintenance fee for each license to get bug fixes. With 8 licenses, we need 8 maintenance fees. Even at $100 per license for maintenance, we're now looking at $800- every year just to get bugs fixed!
Assume the editor costs $250 per license and $100 per year for maintenance (bug fixes), which is about what they charge. With 2 developers/engineers we are now looking at $2,000 for the initial licenses and an additional $800 every year if we want to keep using it or heaven forbid we actually expect it to work. Of course, they claim that we get "features" with the maintenance, but most of the time we don't want "features", we just want the product to keep working. Yeah, I know, they'll add support for Windows Vista or another feature which is neat, but instead of looking at that work as a way of expanding their market, they tend to look at it as a way of lock-in or bleeding their existing customer base. This is at the very core of what is wrong with software and the mindset that programmers of software development tools end up with.
Here's a tip for you guys who do make good tools.
WE WANT TO BUY THEM.
- price them reasonably
- license them reasonably
WE WANT YOU TO STAY IN BUSINESS.
- we will tell all of our friends
- we will tell all of our associates
- we will tell the next generation
- features and fixes generate new customers
WE NEED TO MAKE A LIVING TOO.
- we can't bleed our customers
- we need to write a new program every month or two
- slash the price you charge me to fix your problems
- we can't afford the prices you guys are asking/expecting
Look at the prices for Micro$haft compilers and tools. They quickly run into the thousands of dollars. Borland has also lost the plot and charges an obscene amount of money for their products. Very few of us have customers with unlimited budgets. Very few of us actually want to cheat and buy "Academic" versions. We are programmers and developers too. We know that it takes you time and you need to eat, but fair is fair, you guys are providing spanners. If you make a good one, you can sell thousands of them, but don't try to retire just because you've made one spanner. The world doesn't work that way anymore.
Re: (Score:3, Insightful)
Often, they think they have something special to sell - after all, they wrote it - so they can charge like it's gold. I think that many of the tool vendors spend so much energy on their own products... often focusing na
A Complete Load of Fetid Rabbit Droppings (Score:5, Insightful)
What Open Source has essentially done is say, "You must be at least this tall to publish a tools suite." Pretty much the only compilers that died were the bad ones. No one, for example, laments the passing of Whitesmiths.
As for editors, well, it was pretty obvious 20 years ago that the editor that was powerful and platform-independent (so you didn't have to re-learn everything and re-write all your macros on a new platform) was going to win. That pretty much meant either EMACS or VI.
Schwab. The tools market is dead. Open source killed it. The only commercial development tools that can survive today are the ones that leapfrog open source tools. With UNA Collaborative Edition, we have that--there's nothing for real-time collaborative development that even comes close, whether commercial or open source. But UNA Personal Edition is more of an incremental improvement on what's out there in the editing world. "
So commercial software has to be a LOT better than opensource to survive not merely a little better.
So whats the problem with that??? If you want to make lots of money...quit your bellyaching and INVENT,INNOVATE and INSPIRE!
And? (Score:5, Interesting)
The linked-to article about "Enerjy" says it in no uncertain terms - there were no sales for this type of product. There was also an overbearing impetus within the company itself that free/open source software could do parts of the job just as well, and they were considering using it themselves. The whole industry of "text editors for programmers" has always been niche, and now is dead. I can't say that Open Source has much to do with it so much as "overwhelming choice".
"Years of work and cutting-edge research went into this editor, and it rivals, even surpasses, commercial editors that are selling for $100, $200, even $400 a pop."
It's an editor. I think that cutting-edge research is pushing it a bit but even $100 a pop seems expensive for what is a glorified text editor. Even if you did make $400 each time, did you really ever think that's going to continue forever?
"First of all, I should mention that UNA is a source code editor, not an IDE. It's a very sophisticated editor, well on the road to becoming an IDE, but it doesn't provide out-of-the-box support for compiling, testing, or debugging."
Point proven. It's a text editor. Designed (supposedly) for programming, that doesn't even have a facility to run a compilation script without "plugins" etc.
"The incremental search in UNA is so novel that we're patenting it. That's right, we're patenting a feature we're giving away for free. The incremental search interface allows you to navigate documents with theoretical maximum efficiency. You can jump to wherever you want in the document by typing just half a keystroke more than the minimum number of characters necessary to differentiate that position from others. You can't do better than that. People were blown away by the incremental search feature of Idea 7.0, but we've got something better than that."
I seriously doubt you will be able to patent such an old and over-used idea. Opera does this in my mail, my contacts, my newsgroups, my notes. Pidgin does it in my chat-histories. I've seen it in any number of programs, quite a lot of them "programmer's editors" or IDE's. It's hardly "novel", I wouldn't be "blown away".
The other reasons he thinks that UNA should win are scarily simple at the least. Dialog boxes that don't say stupid things. Keyboard shortcuts. External actions running in the background. Basically, what he has is the equivalent of a freeware programmer's editor from several years ago.
The screenshots depict an atrociously complicated screen with which (supposedly) people who don't know the program can write a Hello World in five minutes. Whoopee.
So his program dies a death because open-source programs do it better? That's not surprising... the program seems to be at least five-ten years behind. My versions of Visual Basic 3.0 and 4.0 had quite a lot of those features, admittedly only for their own language, but similarly thrash his editor in lots of other places (such as being able to compile without needing a plugin!). And the point is that most programmers now use either command-line tools from a particular favourite GUI or they use the IDE/GUI that came with the language (e.g. VB.net, etc.). If they are using command-line tools, then the GUI can be chopped and changed every month with little hassle as various software is released/updated/etc. And you could have a whole group of people use *whatever the hell interface they want* with the same backend tools and work together on a project.
So the fact that the type of program is dying is not surprising - it's a very volatile, niche market driven by the whims of particular programmers. The fact that his particular program is dying is even less surprising - it doesn't seem to offer anything at all. Certainly not for a pricetag, anyway.
Are we really supposed to shed tears over the loss of any part of his business, let alone that he's "been forced" to release a program for free that he couldn't sell?
|
http://tech.slashdot.org/story/08/06/10/0228220/open-source-killing-commercial-developer-tools?sdsrc=nextbtmnext
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
On 16 Feb 2008, at 5:04 PM, Donn Cave wrote:

> On Feb 16, 2008, at 3:46 PM, Philippa Cowderoy wrote:
>> !
>
> Ironically, the simple task of reading a file is more work than I expect
> precisely because I don't want to bother to handle exceptions. I mean,
> in some applications it's perfectly OK to let an exception go to the top.
>
> But in Haskell, you cannot read a file line by line without writing an
> exception handler, because end of file is an exception! as if a file does
> not normally have an end where the authors of these library functions
> came from?

I agree 100%; to make life tolerable around Haskell I/O, I usually end up
binding the moral equivalent of

    tryJust (\ exc -> case exc of
                IOException e | isEOFError e -> return ()
                _ -> Nothing)
        $ getLine

somewhere at top level and then calling that where it's needed.

> For the author of the original post ... can't make out what you actually
> found and tried, so you should know about "catch" in the Prelude, the
> basic exception handler.

Also, you might need to know that bracket nests in various ways:

    bracket openFile hClose $
        bracket readLine cleanUpLine $
            proceed

There's also finally, for when the first argument to bracket is omitted,
and (>>) for when the second argument is :)

jcc
|
http://www.haskell.org/pipermail/haskell-cafe/2008-February/039698.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Writing Javascript is not trivial. Its dynamic nature, function-level scoping, and lack of class syntax all make it really hard to tackle.
Well, no longer...
I wasn't a Javascript developer until 3-4 months ago, and all the things I mentioned above bit me on occasion. Then I met Closure Tools at work. It is basically a set of tools and libraries that make writing Javascript easier.
The sub-projects that might be of interest (in random order):
Javascript Compiler
It is not a compiler in the sense of converting your Javascript into a binary-like language. Instead, it goes through your code and, depending on the optimization level, analyzes it to eliminate dead code, inline functions and variables, and replace variable names with shorter ones, which minimizes your code size as well as making it more performant. Using JSDoc comments, it also enforces static type safety, making your code less prone to errors.
Javascript Linter
At a company like Google, where your products are JS+HTML, your code can easily grow out of control. With people from different programming backgrounds having to write JS+HTML on occasion, it is really hard to keep a readable codebase. This tool grew out of Google's own need for a codebase that scales, and was written to ensure that any JS code conforms to Google's Javascript Style Guide.
Javascript Library
Google deals with more browsers than almost anyone, and has tried to make that easier. This gave birth to the Closure Library: a library with utilities for DOM manipulation, animation, UI widgets, server communication and such. It comes with a nice unit test framework as well. Sure, there are millions of libraries that do similar jobs, but its modular and well-tested structure makes it one of the most important tools in my toolbelt.
It also provides primitives like goog.inherits and goog.base to allow developers to write OOP code similar to Java or C#. The library itself has a very good OO structure, it is well-documented, and almost all of it is written with a cross-browser mindset.
Javascript Templating Library
The tools come with a relatively mature templating system. The templates you write look somewhat similar to Mustache in syntax. The nice thing is that the same templates can be used on the server side or on the client side. The nicer thing is that templates are translated into Java and Javascript code rather than being interpreted, so they are super-fast. LinkedIn engineers made a good comparison of different template engines, so do check it yourself.
CSS Compiler
This is very similar to Less or Sass, so if you need a CSS compiler, add this to your list of tools to compare. I don't have any experience with either of these to give a fair comparison. Below is an example (taken from here)
@def BG_COLOR rgb(235, 239, 249);
@def DIALOG_BORDER_COLOR rgb(107, 144, 218);
@def DIALOG_BG_COLOR BG_COLOR;

body {
  background-color: BG_COLOR;
}
.dialog {
  background-color: DIALOG_BG_COLOR;
  border: 1px solid DIALOG_BORDER_COLOR;
}
The code above will be converted to:
body {
  background-color: #ebeff9;
}
.dialog {
  background-color: #ebeff9;
  border: 1px solid #6b90da;
}
Of course, you can also specify to minimize it.
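To make the constant substitution concrete, here is a toy sketch in JavaScript of what the @def replacement step does (illustrative only; the real compiler parses the CSS properly rather than using a regex, and the `defs` map here is made up for this example):

```javascript
// Toy sketch of @def constant substitution: replace ALL_CAPS names
// with their defined values. The real CSS compiler does much more.
const defs = {
  BG_COLOR: '#ebeff9',
  DIALOG_BORDER_COLOR: '#6b90da',
};

const input =
  'body { background-color: BG_COLOR; } ' +
  '.dialog { border: 1px solid DIALOG_BORDER_COLOR; }';

// Substitute each defined constant; leave unknown names untouched.
const output = input.replace(/\b[A-Z][A-Z_]+\b/g, name => defs[name] || name);
console.log(output);
```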
Javascript Compiler
This article will talk about JSCompiler, as it is the tool I want to see people using more.
Due to its very flexible and dynamic nature, Javascript also suffers from a lack of good compiler support. Dynamic analysis is good, but most of the time we want our Javascript code to be type-safe, and we want it to be easily refactorable.
When you don't have compiler support, any little change you make is likely to introduce errors that go undetected. If you rename a function, every call site breaks. If you remove a parameter, you break every caller that depends on it, unless you exhaustively and carefully search for every single use.
Well, no more...
Using the JSDoc annotations you should already be writing, the Closure compiler does almost everything a compiler would do. It type-checks function parameters and variables, performs dead code elimination to remove pieces of code that are never used (or, if you want, you can choose to keep them), and more. With the help of the compiler, you can write code in a very elegant way while still keeping it pretty small, because it also does minification.
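As a concrete sketch, here is a small function annotated the way the compiler checks (the function and its types are invented for illustration; at runtime the annotations are plain comments, so this runs in any JS engine, while the compiler would reject a mistyped call such as totalPrice('100', 0.25)):

```javascript
/**
 * Computes the total price including tax.
 * @param {number} price The pre-tax price.
 * @param {number} taxRate The tax rate, e.g. 0.25 for 25%.
 * @return {number} The price including tax.
 */
function totalPrice(price, taxRate) {
  return price * (1 + taxRate);
}

// The annotations cost nothing at runtime; they only guide the compiler.
console.log(totalPrice(100, 0.25)); // 125
```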
The only downside is that you have to play by its rules. It has its own restrictions you have to follow. Javascript enthusiasts will probably complain about how this restricts Javascript's flexibility and such, but there it goes.
Below are some of the examples that I think you will like:
Object/Class Constructors
/**
 * Some class description.
 * @constructor
 */
namespace.Class = function() {
  /**
   * The field definition.
   * @type {string}
   * @private
   */
  this.field1_ = 'fieldValue';

  /**
   * The field definition.
   * @type {number}
   */
  this.field2 = 23;
};
A couple of things to note here:
- The @constructor directive indicates that this is a constructor. This is important because the meaning of "this" changes depending on where it is used, and the Closure compiler makes sure you know what you're doing this way.
- The @type directive indicates the type of the variable. If, at any point in the source code, you try to assign a value of a different type, the compiler makes sure the build fails.
- The @private directive makes a variable private. The compiler will complain if you try to access it outside the class definition.
Constants
The way to define a constant variable is as follows:
/**
 * @type {string}
 * @const
 */
namespace.VARIABLE_NAME = 'variableValue';
Again, the compiler will complain if you try to assign this another value later.
Compiler-Modifiable Constants
If you want to change something during compile time, such as including a code when you are debugging but not when it is released, this is your buddy.
/** @define {boolean} */
namespace.IsDebug = true;
Dictionary vs Property Accessors
In Javascript, you can access any property in two ways (as every object is actually a dictionary):
1. A property accessor, such as myObject.property
2. A dictionary indexer, such as myObject['myproperty']
The reason we need to distinguish the two is that the following code is technically correct:
var myObject = {};
myObject['hello'] = 123;
console.log(myObject.hello);
However, it shouldn't be the way you write your code. If you want to use an object as a dictionary, you should use the [''] accessor consistently, and the compiler should give you an error when you write myObject.hello. The way to tell that to the Closure compiler:
/**
 * @dict
 */
var myObject = {};
myObject['hello'] = 123;
console.log(myObject.hello); // This will fail.
Exposing vs Not Exposing
By default, the Closure compiler will remove unused properties or rename them with shorter names to reduce code size. This is not always ideal. You can prevent it by marking a property with @expose.
/**
 * Some class description.
 * @constructor
 */
namespace.Class = function() {
  /**
   * The field definition.
   * @type {string}
   * @expose
   */
  this.field1 = 'fieldValue';
};
Be careful with this: if you are writing library code, letting the compiler rename or delete a property your users rely on can break their code, so such properties need to be exposed.
Inheritance
Closure makes inheritance a bit easier: for some developers, Javascript's prototype inheritance is hard to grasp, and all they want is a Java-like mechanism.
/**
 * Some class description.
 * @constructor
 */
namespace.BaseClass = function() {
  /**
   * The field definition.
   * @type {string}
   */
  this.field1 = 'fieldValue';
};

/**
 * Some class description.
 * @constructor
 * @extends {namespace.BaseClass}
 */
namespace.Class = function() {
  goog.base(this);

  /**
   * The field definition.
   * @type {string}
   */
  this.field2 = 'fieldValue';
};
goog.inherits(namespace.Class, namespace.BaseClass);
Here the @extends directive ensures that the compiler treats Class as a subclass of BaseClass, so access to a property of BaseClass will not cause a compiler error.
goog.base(this) invokes the base class constructor.
goog.inherits() sets up the simplified prototype inheritance for you.
Interface/Implements
Due to its dynamic nature, Javascript doesn't really have the concept of interface and implements. However, to make sure you are not missing any property or function definition in objects you want to pass as parameters and such, this is how you do it:
/**
 * An interface.
 * @interface
 */
function Interface1() {};
Interface1.prototype.doSomething = function() {};
Interface1.prototype.doSomethingElse = function() {};

/**
 * An implementation.
 * @implements {Interface1}
 */
function Class1() {};
Class1.prototype.doSomething = function() {};
For the code above, the compiler will fail because you have forgotten to implement the method "doSomethingElse" in Class1.
Parameters and Parameter type check
/**
 * Does something.
 * @param {!string} mandatoryParam1 The description for param 1.
 * @param {?number} mandatoryParam2 The description for param 2.
 * @param {number=} opt_param3 The description for param 3.
 */
Class1.prototype.doSomething = function(mandatoryParam1, mandatoryParam2, opt_param3) {};
As you can see, each parameter has the @param keyword, a type expression, the parameter name, and a description. The most important part here is the type expression.
- {!string} specifies that this value is a string, and it cannot be null.
- {?number} specifies that this is a number or null.
- {number=} specifies that this is an optional parameter; it does not need to be specified when calling the function.
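A minimal sketch of these annotations in use (the greet function is invented for illustration; the runtime check for undefined is what the {number=} optional-parameter annotation documents):

```javascript
/**
 * Greets a user, optionally several times.
 * @param {!string} name Required, non-null name.
 * @param {number=} opt_times Optional repeat count, defaults to 1.
 * @return {string} The greeting(s), joined with "; ".
 */
function greet(name, opt_times) {
  // Optional parameters arrive as undefined when omitted.
  var times = (opt_times === undefined) ? 1 : opt_times;
  var out = [];
  for (var i = 0; i < times; i++) {
    out.push('Hello, ' + name);
  }
  return out.join('; ');
}

console.log(greet('Ada'));    // "Hello, Ada"
console.log(greet('Ada', 2)); // "Hello, Ada; Hello, Ada"
```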
Conclusion
Make sure you take a look at Closure Tools. They will make your front-end (and sometimes back-end?) development much more manageable.
|
http://tech.pro/tutorial/1256/introduction-to-closure-tools
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
JavaRanch » Java Forums » Databases » Object Relational Mapping
Can anyone tell why is this query not valid?
Eduardo Born
Greenhorn
Joined: Oct 26, 2010
Posts: 3
posted
Oct 26, 2010 19:16:39
0
Hello!
I have a criteria query that, when executed, results in the following exception being thrown:
java.lang.IllegalArgumentException: org.hibernate.QueryException: illegal syntax near collection: id [select generatedAlias0.permissions from temp.package.commons.user.Role as generatedAlias0 where generatedAlias0.id=2L]
    at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1222)
    at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1168)
    at org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:320)
    at org.hibernate.ejb.criteria.CriteriaQueryCompiler.compile(CriteriaQueryCompiler.java:227)
    at org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:437)
    at temp.package.dao.impl.DefaultDAOService.getProperties(DefaultDAOService.java:585)
The id attribute is the entity's attribute which was annotated with @Id @GeneratedValue. Basically I'm trying to load the attribute "permissions", which is a collection uninitialized at the time, for the role object whose id is 2. The id attribute is of long type.
Any ideas on why this query would fail?
It seems correct to me...
I can post the code that creates the criteria query here, but it is fairly complicated (as it is generated based on an LDAP filter, etc.), so I'm hoping the error will be visible from the generated query
string
and the exception. Please let me know if that's not the case.
Thanks!!
Eduardo Born
Eduardo Born
Greenhorn
Joined: Oct 26, 2010
Posts: 3
posted
Oct 27, 2010 10:51:31
0
Some more information...
The criteria query that generated this error was built this way:
EntityManager entityManager = getEntityManager();
CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaQuery<Object> criteriaQuery = criteriaBuilder.createQuery();
// queryScopeClass is assigned to type temp.pack.commons.user.Role
Class<? extends T> queryScopeClass = role.getClass();
Root<? extends T> from = criteriaQuery.from(queryScopeClass);
Path<?> attributePath = from.get("permissions");
Predicate predicate = criteriaBuilder.equal(attributePath, new Long(2));
criteriaQuery.where(predicate);
// attempting to get just the role's permissions
CriteriaQuery<Object> select = criteriaQuery.select(attributePath);
TypedQuery<Object> typedQuery = entityManager.createQuery(select);
return typedQuery.getResultList();
The Role and Permission classes have been mapped with JPA and some Hibernate annotations like this:
public abstract class Role implements Serializable {
    /**
     * The id of this role. Internal use only.
     *
     * @since 1.0
     */
    @Id @GeneratedValue
    protected long id;

    /**
     * Set of permissions granted to this role.
     *
     * @since 1.0
     */
    @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.MERGE }, mappedBy="sourceRole")
    protected Set<Permission> permissions = new HashSet<Permission>();
    ...
}

public class Permission implements Serializable {
    private static final long serialVersionUID = 1L;

    /**
     * The id of this permission. Used internally for persistence.
     *
     * @since 1.0
     */
    @Id @GeneratedValue
    @Column(name = "PERMISSION_ID")
    protected long id;

    /**
     * The group to which the owner of this permission is being granted permission to.
     *
     * @since 1.0
     */
    @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    @JoinColumn(name = "TARGET_ROLE_ID")
    @ForeignKey(name = "FK_TARGET_GROUP_PERMISSION_ID", inverseName = "FK_PERMISSION_ID_TARGET_GROUP")
    protected Group targetGroup;

    /**
     * The role that has been granted this permission.
     *
     * @since 1.0
     */
    @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    @JoinColumn(name = "SOURCE_ROLE_ID")
    @ForeignKey(name = "FK_SOURCE_GROUP", inverseName = "FK_GROUP_PERMISSIONS")
    private Role sourceRole;
    ...
}
I was expecting the call to typedQuery.getResultList() to return a list of collections with just one element: the collection of permission objects for the role with id = 2. This is an attempt to select just the "permissions" collection from the object role with id = 2.
I'm new to criteria queries and I'm having a hard time finding what's wrong with it.
What would be the correct criteria query to accomplish this?
Thank you!!
Eduardo
|
http://www.coderanch.com/t/515248/ORM/databases/query-valid
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
XML::RSS::LibXML - XML::RSS with XML::LibXML:
<rss version="2.0" xml: ...
  <channel>
    <tag attr1="val1" attr2="val2">foo bar baz</tag>
  </channel>
</rss>
All of the fields in this construct can be accessed like so:
$rss->channel->{tag}        # "foo bar baz"
$rss->channel->{tag}{attr1} # "val1"
$rss->channel->{tag}{attr2} # "val2"
See XML::RSS::LibXML::MagicElement for details.
Creates a new instance of XML::RSS::LibXML. You may specify a version or an XML base in the constructor args to control which output format as_string() will use.
XML::RSS::LibXML->new(version => '1.0', base => '');
The XML base will be included only in RSS 2.0 output. You can also specify the encoding that you expect this RSS object to use when creating an RSS string
XML::RSS::LibXML->new(encoding => 'euc-jp');
Parse a string containing RSS.
Parse an RSS file specified by $filename
These methods are used to generate RSS. See the documentation for XML::RSS for details. Currently RSS version 0.9, 1.0, and 2.0 are supported.
Additionally, add_item takes an extra parameter, "mode", which allows you to add items either in front of the list or at the end of the list:
$rss->add_item(
   mode  => "append",
   title => "...",
   link  => "...",
);
$rss->add_item(
   mode  => "insert",
   title => "...",
   link  => "...",
);
By default, items are appended to the end of the list
Return the string representation of the parsed RSS. If $format is true, this flag is passed to the underlying XML::LibXML object's toString() method.
By default, $format is true.
Adds a new module. You should do this before parsing the RSS. XML::RSS::LibXML understands a few modules by default:
rdf     => "",
dc      => "",
syn     => "",
admin   => "",
content => "",
cc      => "",
taxo    => "",
So you do not need to add these explicitly.
Saves the RSS to a file
Syntactic sugar to allow statement like this:
foreach my $item ($rss->items) { ... }
Instead of
foreach my $item (@{$rss->{items}}) { ... }
In scalar context, returns the reference to the list of items.
Creates, configures, and returns an XML::LibXML object. Used by parse() to instantiate the parser used to parse the feed.
Here's a simple benchmark using benchmark.pl in this distribution, using XML::RSS 1.29_02 and XML::RSS::LibXML 0.30
daisuke@beefcake XML-RSS-LibXML$ perl -Mblib tools/benchmark.pl t/data/rss20.xml
XML::RSS -> 1.29_02
XML::RSS::LibXML -> 0.30
                 Rate        rss rss_libxml
rss            25.6/s         --       -67%
rss_libxml     78.1/s       205%         --
- Only first level data under <channel> and <item> tags are examined. So if you have complex data, this module will not pick it up. For most of the cases, this will suffice, though.
- Namespaces for namespaced attributes aren't properly parsed as part of the structure. Hopefully your RSS doesn't do something like this:
<foo bar:
You won't be able to get at "bar" in this case:
$xml->{foo}{baz};      # "whee"
$xml->{foo}{bar}{baz}; # nope
- Some of the structures will need to be handled via XML::RSS::LibXML::MagicElement. For example, XML::RSS's SYNOPSIS shows a snippet like this:
$rss->add_item(
   title => "GTKeyboard 0.85",
   # creates a guid field with permaLink=true
   permaLink => "",
   # alternately creates a guid field with permaLink=false
   # guid => "gtkeyboard-0.85
   enclosure => { url => '', type => "application/x-bittorrent" },
   description => 'blah blah'
);
However, the enclosure element will need to be an object:
enclosure => XML::RSS::LibXML::MagicElement->new(
   attributes => {
      url  => '',
      type => "application/x-bittorrent"
   },
);
- Some elements, such as permaLink elements, are not parsed in a way that lets them be serialized and parsed back and forth. I could fix this, but that would break some compatibility with XML::RSS.
Tests. Currently the tests are simply stolen from XML::RSS. It would be nice to have tests that do more extensive checking for correctness.
XML::RSS, XML::LibXML, XML::LibXML::XPathContext
Copyright (c) 2005-2007 Daisuke Maki <[email protected]>, Tatsuhiko Miyagawa <[email protected]>. All rights reserved.
Many tests were shamelessly borrowed from XML::RSS 1.29_02
Development partially funded by Brazil, Ltd. <>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://search.cpan.org/~dmaki/XML-RSS-LibXML-0.3102/lib/XML/RSS/LibXML.pm
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
05 January 2011 07:23 [Source: ICIS news]
By Yu Guo
SINGAPORE (ICIS)--Asian polyethylene terephthalate (PET) bottle-grade chip prices, on an uptrend since the last quarter, are expected to rise further this year on higher raw material costs with demand expected to remain strong, according to industry players.
Feedstock monoethylene glycol (MEG) values were assessed at $1,075-1,090/tonne (€806-818/tonne) CFR (cost & freight)
Similarly, feedstock purified terephthalic acid (PTA) prices were at a 15-year high of $1,180-1,215/tonne CFR China on 24 December 2010, ICIS data showed.
Both upstream sectors were expected to remain firm this year because of tight supply and healthy downstream demand.
“Raw material prices will remain strong in the coming year judging by the market fundamentals,” said a major UAE-based producer.
PET bottle chip prices had also been on an uptrend since the fourth quarter of 2010 and peaked at a historical high of $1,650/tonne FOB (free on board)
Despite having stabilised at around the high $1,400s/tonne FOB Korea, spot PET bottle chip prices are still deemed as too high by end-users and converters, especially during the current traditional lull in the northern hemisphere.
“Producers foresee more [pre-Chinese Lunar New Year] buying activities, starting from January, because nothing happened in November and December [2010] when buyers should have started stocking up,” said a source from major southeast Asian producer Indorama.
“We do expect more demand to surface in the coming months, after the market returns from the year-end festivity, given the relatively low inventory levels reported among downstream players,” said a major Chinese producer.
Downstream converters and bottlers had been purchasing on a need-to basis since the unexpected climb of PET bottle chip prices from October 2010 onwards, the producer added.
Market players also expected higher PET bottle chip prices in the near term on the back of the prevailing firm feedstock values.
“Bolstered by growing demand, PET chip prices will follow rising upstream costs closely in order to maintain their margins,” said another major producer based in
Although producers said the 2011 market would be too dynamic for any of them to give an accurate prediction, some agreed the average annual bottle chip prices in 2011 would be around $100-150/tonne higher compared to those in 2010.
The yearly average spot prices for PET bottle-grade chips were recorded at around $1,280/tonne FOB
Converters told ICIS they had already proposed to adjust their base prices up by at least $100/tonne for PET components in their contract formula for this year, in anticipation of higher raw material prices.
However, traders were more concerned over profits, in view of the rising feedstock costs.
“PET chips prices will definitely be higher in tandem with firm PTA and MEG values, the problem will be profits in this already competitive market,” said a trader.
A producer in southeast Asia said there was no perfect substitution [to bottle chips] immediately available to downstream beverage producers, because it took time to change filling lines - should anyone wish to switch to aluminium cans or glass bottles.
“Bottle chip should see demand growing at a rate of around 15% in the next couple of years for markets such as the Middle East and
($1 = €0.75)
For more information on PET,
|
http://www.icis.com/Articles/2011/01/05/9423018/outlook-11-asia-pet-prices-seen-firmer-on-raw-material-costs.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
It's good
Collections Framework
Collections Framework
Java provides the Collections Framework.
In the Collection Framework... or objects.
Set is the interface of the collections framework
which
Collections Framework
The Collections Framework provides a well-designed set of interfaces.... The
collections framework is a unified architecture which is used to represent and
manipulate collections. The framework allows the collections to get manipulated
Collections
/java/jdk6/introduction-collections-api.shtml Hi,
Please send a study material on collections
Introduction to Collections Framework
:
The Java Collections Framework
provides the following benefits:
Reduces...
Introduction to Collections Framework
Collections Framework:
The Collections
Collections Framework Enhancements
Collections Framework Enhancements
In Collection framework, we are able... and Classes are provided in JAVA SE 6 :
Deque ? Deque is a interface. It is a short
Collections Framework
Collections Framework Sir, We know that both HashMap & Hashtable is using for same purposes i.e Used for storing keys-values pair. But there is some difference between this two class that are
1)Hashtable is synchronized
collections - Java Interview Questions
collections The Java Collections API Hi friend,
Java Collections of API (Application Programming Intreface) Consists of several... value must be wrapped in an object of its appropriate wrapper class (Boolean
wt are the collections in java
wt are the collections in java plese send me the reply
Hi Friend,
Java Collections API (Application Programming Interface) Consists... be wrapped in an object of its appropriate wrapper class (Boolean, Character, Integer
Collections in Java
Collections in Java are data-structures primarily defined through a set of classes and interface and used by Java professionals. Some collections in Java that are defined in Java collection framework are: Vectors, ArrayList, HashMap
java collections - Java Beginners
java collections i need a sample quiz program by using collections and its property reply soon as possible .... Hi Friend,
Try the following code:
import java.io.*;
import java.util.*;
class Test {
String
|
http://www.roseindia.net/tutorialhelp/allcomments/2967
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Computer Science Archive: Questions from March 11, 2012
- Anonymous asked:
public class Calculator {
private double register; //holds temporary results
private static int NUM_OP;
//default constructor
public Calculator()
{
register = 0;
}
public void clear() //clears register
{
register = 0;
}
public void set(double x) //sets the value of the register
{
register = x;
}
public double getResult()
{
return register;
}
public void add(double x) //add a value to the register
{
register += x;
NUM_OP ++;
}
public void printResult()
{
System.out.println("Result = " + register);
}
public static void main(String[] args)
{
Calculator cal = new Calculator();
//add 3 and 5 and print result
cal.set(3);
cal.add(5);
cal.printResult();
}
}
/* TODO:
1. Define a constructor to initialize register with a user-defined value
2. Define subtract, multiply, and divide methods
3. Define a method to copy one Calculator result to another
4. Define a private static variable "NUM_OP" to store the total number of operations (add, subtract, multiply or divide) --
5. Define a static method GetNumOP to return the value of NUM_OP
6. Write necessary sequence of statements in the main method to do this math with "cal" object and print the result:
(10 + 5) * 100 / 50
7. Create another calculator object "cal2" and do the math with "cal2":
120 / 50
8. Print the total number of operations done in Calculators "cal" and "cal2".
9. (Bonus point) Print the sum of the register values stored in cal and cal2 objects.
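Assuming the skeleton above, the required operations and the arithmetic in steps 6-8 can be sketched as follows (in JavaScript for brevity; the assignment itself is in Java, and names like copyFrom are made up for this sketch):

```javascript
// Illustrative sketch of the TODO items. NUM_OP is shared across
// calculators, like the Java static field in the skeleton.
let NUM_OP = 0;

class Calculator {
  constructor(initial = 0) { this.register = initial; } // TODO 1
  set(x) { this.register = x; }
  add(x) { this.register += x; NUM_OP++; }
  subtract(x) { this.register -= x; NUM_OP++; } // TODO 2
  multiply(x) { this.register *= x; NUM_OP++; }
  divide(x) { this.register /= x; NUM_OP++; }
  copyFrom(other) { this.register = other.register; } // TODO 3
  getResult() { return this.register; }
}

// TODO 6: (10 + 5) * 100 / 50 with one calculator
const cal = new Calculator(10);
cal.add(5);
cal.multiply(100);
cal.divide(50);
console.log(cal.getResult()); // 30

// TODO 7: 120 / 50 with a second calculator
const cal2 = new Calculator(120);
cal2.divide(50);
console.log(cal2.getResult()); // 2.4

// TODO 8: total operations performed (add/multiply/divide above)
console.log(NUM_OP); // 4

// TODO 9 (bonus): sum of both registers
console.log(cal.getResult() + cal2.getResult());
```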
- Anonymous asked: Hey guys, I don't understand this. When I input all this below in C++ I get result 13. How can I know that that will be the result... can someone explain me please.
#include <iostream>
using namespace std;
int main ()
{
int a,b,c,d,e,k;
cin >> a;
cin >>b>>e;
cin >>c>>d;
k=a+b+c+d+e;
cout<<k<< endl;
system("pause");
return 0;
}
In storing the floating point number -1231 the mantissa has been truncated hence the number is not exactly -1231. what is the actual number stored?
Please show full working.
A) Find the 8-bit ASCII code for the five characters -1231
- TinyAnimal744 asked: In this program, you will write a program to solve the m-producer n-consumer problem for the
specific values of m = 1 and n = 2. You have a shared circular buffer that can hold 20 integers. The
producer process stores the numbers 1 to 50 in the buffer one by one (in a for loop with 50
iterations) and then exits. Each of the consumer processes reads the numbers from the buffer and
adds them to a shared variable SUM (initialized to 0). Any consumer process can read any of the
numbers in the buffer. The only constrain is that every number written by some producer should be
read exactly once by exactly one of the consumers. Of course, a producer should not write when the
buffer is full, and a consumer should not read when the buffer is empty.
Write a program that first creates the shared circular buffer and the shared variable SUM using the
shm*() calls in Linux. You can create any other shared variable that you think you may need. The
program then forks the producers and the two consumers. The producer and consumer codes can be
written as functions that are called by the child processes. After the producer and both the
consumers have finished (the consumers exit after all the data produced by all the producers have
been read. How does a consumer know this?), the parent process prints the value of SUM. Note that
the value of SUM should be 25 × 51 = 1275 if your program is correct.
- Anonymous asked: A DNA strand can be represented as a large string built with four character symbols, namely A, T,
- Anonymous asked: Write a program that reads two coordinates (x1, y1) and (x2, y2), and prints the slope of the line
- Anonymous asked: Find Nearest Points
The GPS navigation system uses the graph and geometric algorithms to calculate distances and map a route. One of the geometric problems is the closest-pair problem. Given a set of points, the closest-pair problem is to find the two points that are nearest to each other. Write a program that computes the distances between all pair of points and find the one with the minimum distance as follows:
1.Ask the user to enter the number of points.
2.Ask the user to enter x and y positions for each point. Store the positions in a 2-D array. What should the dimensions of the 2-D array be?
3.Compute the distance between the first two points and initialize the variable that represents the shortest distance. Recall that the distance between the two points (x1, y1) and (x2, y2) is computed by taking the square root of the quantity (x1 - x2)^2 + (y1 - y2)^2.
4.Use a nested for loop to compute the distance for every two points and update the shortest distance and the two points with the shortest distance.
5. Display the shortest distance and the closest two points.
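The steps above can be sketched as a brute-force pairwise loop (in JavaScript for illustration; the sample points are made up):

```javascript
// Brute-force closest pair over a list of [x, y] points (steps 3-5 above).
function closestPair(points) {
  const dist = (p, q) =>
    Math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2);

  // Step 3: initialize with the distance between the first two points.
  let best = { d: dist(points[0], points[1]), pair: [points[0], points[1]] };

  // Step 4: nested loop over every pair, keeping the minimum.
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      const d = dist(points[i], points[j]);
      if (d < best.d) best = { d: d, pair: [points[i], points[j]] };
    }
  }
  return best;
}

// Step 5: display the shortest distance and the closest two points.
const result = closestPair([[0, 0], [10, 10], [3, 4], [2, 3]]);
console.log(result.d);    // 1.4142... (between [3,4] and [2,3])
console.log(result.pair);
```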
- Anonymous asked:
For the following two functions:
f(n) = n^(1/2)
g(n) = 5 log n
How do we tell whether f(n) is Big Theta of g(n)?
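Not a proof, but one way to build intuition is to watch the ratio f(n)/g(n): if it grows without bound, then f(n) is not O(g(n)) and therefore not Big Theta of g(n) (it is, however, Omega of g(n)):

```javascript
// Compare f(n) = n^(1/2) against g(n) = 5 log n for growing n.
const f = n => Math.sqrt(n);
const g = n => 5 * Math.log(n);

for (const n of [1e2, 1e4, 1e6, 1e8]) {
  console.log(n, (f(n) / g(n)).toFixed(2));
}
// The ratio keeps growing, suggesting sqrt(n) eventually dominates
// any constant multiple of log n, so f(n) is not O(g(n)) and
// therefore not Big Theta of g(n).
```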
- Anonymous asked (1 answer)
- Anonymous asked: In the "In this article" section, click "Finding files". Where can I find the above statement?
From: In the "In this article" section, click "Finding files". Read the topic and click any "See also" or "For more information" links, if necessary, to provide the following information:
Source: ISBN 053874653X | Title: New Perspectives on Microsoft Office 2010, First Course | Publisher: Course Technology (1 answer)
- Anonymous asked:
4. Show a screenshot of the program. The resistor class will, at minimum, have members that do the following:
1. store the nominal resistance value of a resistor
2. store the tolerance of a resistor
3. initialize any and all nominal-resistance values to correct, EIA, nonzero values that are greater than 0 and less than 1,000,000 ohms
4. initialize any and all resistance-tolerance values to correct E12, E24, E48, or E96 resistance-tolerance values
5. allow the nominal-resistance and tolerance values of a resistor object to be changed by the user
6. All member functions should have a test message stating the name of the function. All the test messages should be displayed or not displayed, depending on the value of a Boolean variable declared in main().
- If the Boolean value = true, display the message.
- If the Boolean value = false, do not display the message.
(0 answers)
- Anonymous asked (0 answers)
- BewilderedBrush9377 asked:
void GetScores(double scores[], int size) // note: size must be passed in; it was used below but never declared
{
int count = 0;
int scoreNum = 1;
bool valid;
char ans;
do
{
for ( int i = 0; i < size; i++)
{
cout << "Score " << scoreNum << ": ";
cin >> scores[i];
if (cin.fail() || scores[i] < 0 || scores[i] > 100)
{
cin.clear();
cin.ignore(100, '\n');
cout << "\nInvalid";
valid = false;
}
else
{
valid = true;
scoreNum ++;
}
cout << "More? ";
cin >> ans;
if (ans != 'y')
{
break;
}
}
} while (!valid);
} (3 answers)
- SparklingZipper839 asked: I cannot get the program to execute; here is what I've got:
#include <iostream>
using namespace std;
int getProblem(int& numberselected);
int get2Pt(int& fourcoor);
double getPtSlope(double& pointslope);
double slopeIntcptFromPt(double& slopeintercept);
double intcptFromPtSlope(int& x1,int& y1, double& m);
void display2Pt(int& x1,int& y1,int& x2,int& y2);
void dispayPtSlope(int& x1,int& y1, double m);
void displaySlopeIntcpt(double& m,double& b);
int main()
{
int numberselected,form,fourcoor;
double pointslope, slopeintercept;
cout<<"please choose a form that you would like to convert to slope intercept form:"<<endl;
cout<<"1) Two-point form(you are given two points)"<<endl;
cout<<"2) Point-slope form(you are given the slope and a point)"<<endl;
cin>>form;
getProblem(numberselected);
cin>>form;
if(form==1){
cout<<"you have chosen two point form"<<endl;
}
else if(form==2){
cout<<"you have chosen point slope form"<<endl;
}
else
cout<<"you have chosen an incorrect value"<<endl;
get2Pt(fourcoor);
slopeIntcptFromPt(slopeintercept);
getPtSlope(pointslope);
slopeIntcptFromPt(slopeintercept);
return 0;
}
int getProblem(int& numberselected)
{
cout<<"please choose a form that you would like to convert to slope intercept form:"<<endl;
cout<<"1) Two-point form(you are given two points)"<<endl;
cout<<"2) Point-slope form(you are given the slope and a point)"<<endl;
return numberselected;
}
int get2Pt(int& fourcoor)
{
int x1,y1,x2,y2;
cout<<"enter the x,y coordinates for the first set of points, separate entry by a space:"<<endl;
cin>>x1>>y1;
cout<<"enter the x,y coordinates for the second set of points, separate entry by a space:"<<endl;
cin>>x2>>y2;
return fourcoor;
}
double getPtSlope(double& pointslope)
{
int x1,y1;
double m;
cout<<"please enter a slope:";
cin>>m;
cout<<"Now enter the (xy) components of the point:";
cin>>x1>>y1;
return pointslope;
}
double slopeIntcptFromPt(double& slopeintercept)
{
bool slope;
double x1,y1,y2,x2=slopeintercept;
double m,b;
/*x1=slopeintercept-->x1;
y1=slopeintercept-->y1;
x2=slopeintercept-->x2;
y2=slopeintercept-->y2;*/
{
if ((x1-x2)==0)
cout<<"the slope is undefined"<<endl;
else
slope=true;
}
m=double(y2-y1)/(x2-x1);
cout<<m<<endl;
b=y1-(m*x1); // y-intercept: b = y1 - m*x1, not m*x1 + y1
cout<<b<<endl;
return slopeintercept;
}
double intcptFromPtSlope(int& x1,int& y1, double& m)
{
double b; // x1, y1, m arrive as parameters; redeclaring them was a compile error
cout<<"enter the (xy) components:";
cin>>x1>>y1;
cout<<"enter the slope:";
cin>>m;
b=y1-(m*x1);
cout<<"the y intercept is: "<<b<<endl;
return b;
}
void display2Pt(int& x1,int& y1,int& x2,int& y2) // renamed to match the display2Pt prototype
{
cout<<"enter the first set (xy) components:";
cin>>x1>>y1;
cout<<"enter the second set of (xy) components:"<<endl;
cin>>x2>>y2;
cout<<"the two point form is"<<endl;
cout<<"m"<<"="<<"("<<y2<<"-"<<y1<<")"<<"/"<<"("<<x2<<"-"<<x1<<")"<<endl;
}
void dispayPtSlope(int& x1,int& y1, double m)
{
cout<<"enter the first set (xy) components:";
cin>>x1>>y1;
cout<<"enter the slope:";
cin>>m;
cout<<"the point slope form is"<<endl;
cout<<"y"<<"-"<<y1<<"="<<m<<"("<<"x"<<"-"<<x1<<")"<<endl;
}
void displaySlopeIntcpt(double& m,double& b)
{
cout<<"enter the y intercept:";
cin>>b;
cout<<"enter the slope:";
cin>>m;
cout<<"the slope intercept form is"<<endl;
cout<<"y"<<"="<<m<<"x"<<"+"<<b<<endl;
} (0 answers)
- Anonymous asked: Discuss the relative advantages and disadvantages of at least three different measures used to protect operating systems. The ease of implementation and the associated security management issues should also be addressed. Finally, the paper should include a ranking of the measures from best to worst with supporting rationale. (0 answers)
- Anonymous asked: You are working for a computer gaming company and have been asked to look at a survival game. The island is a grid of uniform squares. It is surrounded, on all four sides, by water, 1 square wide. The only problem is that the water is full of piranhas, but on each side there are three randomly-placed bridges to freedom. The corners are "reachable", so there can be bridges there and the rabbit can escape through the corners. It could be depicted as this (remember that the "bridges" will be randomly
placed differently each time the program is executed):
The goal is for the survivor rabbit to get off the island by “hopping” onto one of the bridges to the mainland. If the rabbit hops into the water it is eaten. If the rabbit takes more than 20 hops and is still on the island then it starves. In all cases, either successfully “escaping” or dying in the effort, the program will print the number of hops that particular rabbit took.
(1) Need to generate random numbers
(2) Equate a random number to a direction of hop, for example, 1=up, 2=right
(3) Keep track of the rabbit's location (row, column). If the rabbit
hops onto a bridge then it has escaped. If it hops into water it dies
(4) Keep track of total number of hops. If hops=20 and it's still on the island, the rabbit dies
(5) Once the above is working, put it into a loop. (1 answer)
- ag5443 asked: Hello, I was wondering if someone would be able to help me write my code.
This is the problem...
Write a number guessing game in which the computer selects a random number in the range of 0 to 100, and users get a maximum of 20 attempts to guess it. At the end of each game the user should be told whether they won or lost, and then whether they want to play again. When the user quits, the program should output the total number of wins and losses. To make the game more interesting, the program should vary the wording of the messages that it outputs for winning, losing (too low or too high), and for asking for another game. Create 10 different messages for each of these 3 cases (winning, losing (too low or too high), and asking for another game) and use random numbers to choose among them.
This is what I have so far...
#include <iostream>
#include <fstream>
#include <iomanip>
#include <cstdlib> // rand, srand
#include <ctime>   // time
using namespace std;
int rnd (void)
{
return (rand() % 100 + 1); // returns 1..100
}
int main (void) // main must return int
{
int guess;
int choice;
char input;
bool playagain;
int number;
ofstream outData;
outData.open("outresults.txt");
srand((unsigned)time(NULL)); // seed the generator once
playagain = false;
while (!playagain)
{
cout << "Welcome to my number guessing game! " << endl;
cout << " User to guess a number generated randomly by the program " << endl;
cout << "Select number '1' for this game" << endl;
cin >> choice;
while (choice != 1)
{
cout << "Please enter '1' only" << endl;
cout << "Enter again: " ;
cin >> choice;
}
if (choice == 1)
{
bool isGuessed;
int num = rnd();
isGuessed = false;
cout << "Welcome to the guessing game" << endl;
while (!isGuessed)
{
cout << "Enter an integer between 1 and 100: ";
cin >> guess;
while (guess < 1 || guess > 100)
{
cout << "Your number is out of range; please enter a number between 1 and 100 only." << endl;
cout << "Enter again: " ;
cin >> guess;
}
cout << endl;
if (guess == num)
{
cout << "You guessed the correct number" << endl;
cout << "My random number is " << num << endl;
cout << "Thank you for playing my number guessing game." << endl;
isGuessed = true;
}
else if (guess < num)
{
cout << "Your guess is too low. Guess again.";
cout << endl;
}
else
{
cout << "Your guess is too high. Guess again.";
cout << endl;
}
}
}
cout << "Do you want to play again. " << endl;
cout << "Press 1 to play again or press 2 to stop the game." << endl;
cin >> number;
while (number < 1 || number > 2)
{
cout << "Press only 1 or 2. " << endl;
cout << "Press a number again." << endl;
cin >> number;
}
if (number == 1)
{
cout << "Lets play again. " << endl;
}
else
{
cout << "Thank you for playing" << endl;
playagain = true;
}
}
}
I would really appreciate it if someone could help me out! (1 answer)
- FearlessWaterfall7644 asked: In Java, create the matrix from the following code without using a GUI:
package matrice;
public class Main {
public static void main(String[] args) {
int array[][]= new int[100][100];
outputArray(array);
}
public static void outputArray(int[][] array) {
int rowSize = array.length;        // number of rows, not array[0].length
int columnSize = array[0].length;  // number of columns
for (int i = 0; i < rowSize; i++) {
System.out.print("[");
for (int j = 0; j < columnSize; j++) {
System.out.print(" " + array[i][j]); // the original concatenated an unused 'limit' into every element
}
System.out.println(" ]");
}
System.out.println();
}
} (2 answers)
- iadonia2332 asked: Describe and define two classes that would correctly model an inheritance relationship. Make sure to describe both the base class and the derived class, and discuss what the derived class inherits from the base class and why you designed the two classes in that manner.
How would you use inheritance in an object-oriented program? You can use the example you provided in the Implementing Inheritance thread as a reference. Discuss when a program might use both the base class and the derived class and why.
(1 answer)
- Anonymous asked: Use a class to write a C++ program to perform a grocery check-out procedure for a simple store with max 100 products. (0 answers)
- Anonymous asked: Banks lend money to each other. In tough economic times, if a bank goes bankrupt, it may not be able to pay back the loan (you might remember the 2008 banking crisis). A bank's total assets are its current balance plus its loans to other banks. Figure 1 is a diagram that shows five banks. The banks' current balances are 25, 125, 175, 75, and 181 million dollars, respectively. The directed edge from node 1 to node 2 indicates that bank 1 lends 40 million dollars to bank 2.
[diagram omitted]
Figure 1: Banks lend money to each other.
If a bank’s total assets are under a certain limit, the bank is unsafe. The money it borrowed cannot be returned to the lender, and the lender cannot count the
loan in its total assets. Consequently, the lender may also be unsafe, if its total assets are under the limit.
Write a program to find all unsafe banks. Your program reads the input as follows. It first reads two integers n and limit, where n indicates the number of banks and limit the minimum total assets for keeping a bank safe. It then reads n lines that describe the information for n banks with id from 0 to n-1. The first number in the line is the bank’s balance, the second number indicates the number of banks that borrowed money from the bank, and the rest are pairs of two numbers. Each pair describes a borrower. The first number in the pair is the borrower’s id and the second is the amount borrowed. For example, the input for the five banks in Figure 1 is as follows (note that the limit is 201):
>5 201
>25 2 1 100.5 4 320.5
>125 2 2 40 3 85
>175 2 0 125 3 75
>75 1 0 125
>181 1 2 125
The total assets of bank 3 are (75 + 125 = balance + loan), which is under 201. So bank 3 is unsafe. After bank 3 becomes unsafe, the total assets of bank 1 will fall below (125 + 40). So, bank 1 is also unsafe. The output of the program should be
>Unsafe banks are 3 1
(Hint: Use a two-dimensional array borrowers to represent loans. borrowers[i][j] indicates the loan that bank i loans to bank j. Once bank j becomes unsafe, borrowers[i][j] should be set to 0.) (1 answer)
- UptightDrum3688 asked: A 2N3370 has IDSS = 2.5 mA and gm0 = 1500 µS. What is its gate-source cutoff voltage? What is the value of gm for VGS = -1 V? (0 answers)
- Anonymous asked: The program should get the sum of all array memory locations, but it is only showing me one randomly generated number.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define RANGE 10
int main()
{
int r[1000];
int num;
int count;
int sum = 0; /* must be initialized before summing */
for ( count = 0; count < 1000; count++ )
{
r[ count ] = 0;
}
srand((unsigned)time(NULL)); /* seed once, outside the loop; reseeding every iteration repeats the same number */
for ( count = 0; count < 1000; count++ )
{
num = rand() % RANGE + 1;
r[ count ] = num;
}
for ( count = 0; count < 1000; count++)
{
sum = r[ count ] + sum;
}
printf("%i\n", sum); /* print once, after the loop; "%sum" was a syntax error */
return(0);
}
(1 answer)
- ElegantViolin5721 asked: Creating a loop in order to build a fence in C++. I need to create a program that will make a post fence with no more than 18 posts and two layers…
- Anonymous asked: This lab introduces you to writing a C++ program to implement the concept of class inheritance.
**PLEASE READ** This must be in C++ format usable in Visual Studio 2010. I do not need comments on how to start writing the code or partial code; I need the full code itself. If you could show where each file starts and ends, that would really help (for example, use comments saying //header.h start and //header.h end, OR separate each file with a line ----------- and, as before, say which one it is, like header.h). Thanks!
(0 answers)
- BraveRhino4840 asked: I am working on a problem and I am stuck. I am having a problem with my commissionRate statement. Here is the problem:
Create an abstract class named Salesperson. Fields include first and last names; the
Salesperson constructor requires both these values. Include properties for the fields.
Include a method that returns a string that holds the Salesperson’s full name—the first
and last names separated by a space. Then perform the following tasks:
» Create two child classes of Salesperson: RealEstateSalesperson and GirlScout.
The RealEstateSalesperson class contains fields for total value sold in dollars and
total commission earned (both of which are initialized to 0), and a commission rate
field required by the class constructor. The GirlScout class includes a field to hold
the number of boxes of cookies sold, which is initialized to 0. Include properties for
every field.
» Create an interface named ISell that contains two methods: SalesSpeech()and
MakeSale(). In each RealEstateSalesperson and GirlScout class, implement
SalesSpeech()to display an appropriate one- or two-sentence sales speech that
the objects of the class could use. In the RealEstateSalesperson class, implement
the MakeSale()method to accept an integer dollar value for a house, add the
value to the RealEstateSalesperson’s total value sold, and compute the total
commission earned. In the GirlScout class, implement the MakeSale()method
to accept an integer representing the number of boxes of cookies sold and add it to
the total field.
» Write a program that instantiates a RealEstateSalesperson object and a
GirlScout object. Demonstrate the SalesSpeech()method with each object,
then use the MakeSale()method two or three times with each object. Display the
final contents of each object’s data fields. Save the file as SalespersonDemo.cs.
Here is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace SalesPersonDemo
{
public interface ISell
{
string SalesSpeach();
double MakeSale();
}
abstract class Salesperson
{
public string firstName;
public string lastName;
public Salesperson(string First, string Last)
{
firstName = First; // the original assigned in the wrong direction
lastName = Last;
}
public string FirstName
{
get
{
return firstName;
}
}
public string LastName
{
get
{
return lastName;
}
}
public abstract string FullName();
}
class RealEstateSalesperson : Salesperson
{
private double houseVal;
private int totalValDollars = 0;
private int totalCommission = 0;
private int commissionRate; // backing field for the CommissionRate property
public RealEstateSalesperson(string FirstName,String LastName, int commissionRate) : base (FirstName, LastName)
{
CommissionRate = commissionRate;
}
public int TotalVal
{
get
{
return totalValDollars;
}
set
{
totalValDollars = value;
}
}
public int TotalCommission
{
get
{
return totalCommission;
}
set
{
totalCommission = value;
}
}
public int CommissionRate
{
get
{
return commissionRate;
}
set
{
commissionRate = value;
}
}
public override string SalesSpeach()
{
return "As a small business person, you have no greater leverage than the truth. - John Greenleaf Whittier";
}
public override double MakeSale()
{
double total;
total = CommissionRate * TotalVal + TotalVal;
return total;
}
public override string FullName()
{
return FirstName + " " + LastName;
}
}
class GirlScout : Salesperson
{
private int numOfboxes = 0;
public int NumOfBoxes
{
get
{
return numOfboxes;
}
set
{
numOfboxes = value;
}
}
public override string SalesSpeach()
{
return "Every Cookie has a Mission: To Help Girls Do Great Things.";
}
public override string FullName()
{
return FirstName + " " + LastName;
}
}
}
Am I in the right ballpark on this assignment?
(0 answers)
- Anonymous asked: St. Joseph's School has 1,200 students, each of whom currently pays $8,000 per year to attend. In addition to revenues from tuition, the school receives an appropriation from the church to sustain its activity. The budget for the upcoming year is $15 million, and the church appropriation will be $4.8 million. By how much will the school have to raise tuition per student to keep from having a shortfall in the upcoming year? (1 answer)
- g2bgac asked: The program has to be written in C; mbed.org has a compiler specifically for the mbed. Thanks a lot for your help, and I will award "lifesaver" to the first one to provide me with good help. (0 answers)
- DizzyRain6569 asked: I need help writing the code for the STORE class. We are supposed to create our own linked list structure for it. If someone can get me started with the STORE class, that would be great. The full assignment is described below.
----------------------------------------------------------------------------------------------------------
Overview: The goal of this project is to construct an inventory system for a book-seller. The book-seller will have three types of products: books, music, and movies, each available across multiple store locations.
Details: The system that you will write will be a REPL (Read, Evaluate, Print, Loop) with several options, each available from a main menu:
Add a new store.
Remove a store.
List stores.
Add a new product.
Look up a product.
Purchase product.
Add a new store- This should add a new store to the list of available stores. The menu should prompt for the store’s location. The system should disallow you from adding stores in the same location.
Remove a store- The system should prompt the user for a location, and remove the store in that location. The system should notify the user whether the removal was a success.
List stores- The system should provide a list of stores, sorted alphabetically.
Add a new product- The system should prompt the user for the type of the product. For books, the user should be prompted for the name of the book, the author, the ISBN, and the price. For movies, the user should be prompted for the name of the movie, the year it was released, and the price. For music, the user should be prompted for the name, the artist or group, the genre, and the price. Finally, the system should prompt for each of the store locations- asking the user how many of the product to add at each store.
Find.)
Purchase.) The system should then ask if the user wants to purchase the product and from which location he or she wishes to purchase it from. That product should then be removed from the system.
Implementation Details
Implementation Description: There is obviously a lot to do here. Your implementation should do all that I described above, using the concepts of object oriented programming and linked lists that you all have learned in class and in lab. For this project, you must write your own data structures. Do not use any built-in Java lists such as arrays or arrayLists.
Your program should use several classes. Book, movie, and music should all be children of the product class.
Store
Product
Book
Movie
Music
Store Class: The store class should store the location of the store and a list of products in the store. Objects of type Store should point to another Store, creating a singly linked list. The store list can be navigated via the pointer to the next store. Additionally, the store should point to a single product, which in turn points to other products, creating another singly linked list. Stores should be stored alphabetically by location name. This requires a little bit of thought when adding and removing stores to the list.
Product Class: Again, the product class should create a singly linked list by pointing to another Product. The product should include the name and price.
Book Class: This should inherit from the product class and should include the author and ISBN. You don’t need to repeat the name or price.
Movie Class: This should inherit from the product class and should include the year released. You don’t need to repeat the name or price.
Music Class: Similarly, this should inherit from the product class and should include the artist and genre. Again, you don’t need to repeat the name or price.
An internal representation of the objects in this project might look like the diagram below. Note the linked structure between all objects. Also note that the stores are sorted alphabetically by their location. Finally, note that there is a HEAD that points to the first store. You don’t necessarily have to use the variable names that are shown in the diagram.
[diagram omitted]
Obviously, you’ll need a lot more than just the fields listed above. You’ll also need methods to add different objects to their respective lists, and test whether a product is a book, movie, or music (HINT: instanceof) . It’s up to you to determine how this should be accomplished. I’ll try to tailor the lectures and assignments over the next few weeks to this project.
Bonus: For a 10% bonus on this project, implement a VIP customer system.
Add the following options to the initial menu:
Add a VIP customer- The system should prompt the user for the name of the customer, and his or her phone number. It should not allow for identical names.
List VIP customers- Lists all of the VIP customers in the system.
When products are looked up or purchased, add a feature where the system prompts for the phone number of the customer. If that phone number belongs to a VIP customer, then add a 10% discount to the price of the product.
This necessitates the following customer class:
- Customer Class (Bonus): The customer class should store the name and phone number of the customer. I don’t care how this information is stored (i.e. whether it is Last, First or First_Last). Objects of type Customer should point to another customer, creating another singly linked list. The customer list can be navigated via the pointer to the next customer. (1 answer)
- Anonymous asked: Given these classes in different files.
package MyPkg;
public class MyClass {
int multiplyIt(int x) { return x*(x--); }
}
import MyPkg.*; // line 1
class Needy3 {
public static void main(String[] args) {
MyPkg.MyClass u = new MyPkg.MyClass(); // line 2
System.out.println(u.multiplyIt(5));
}
}
/*
Q1: What needs to be added to this code so that it will compile properly?
Q2: Why was compilation a problem before?
Q3: Modify this code so that it compiles and prints a value to the standard
output.
*/
(1 answer)
- Anonymous asked (1 answer):
// FILE: sequence1.h
// CLASS PROVIDED: sequence (part of the namespace main_savitch_3)
// There is no implementation file provided for this class since it is
// an exercise from Section 3.2 of "Data Structures and Other Objects Using C++"
//
// TYPEDEFS and MEMBER CONSTANTS for the sequence class:
// typedef ____ value_type
// sequence::value_type is the data type of the items in the sequence. It
// may be any of the C++ built-in types (int, char, etc.), or a class with a
// default constructor, an assignment operator, and a copy constructor.
//
// typedef ____ size_type
// sequence::size_type is the data type of any variable that keeps track of
// how many items are in a sequence.
//
// static const size_type CAPACITY = _____
// sequence::CAPACITY is the maximum number of items that a sequence can hold.
//
// CONSTRUCTOR for the sequence class:
// sequence( )
// Postcondition: The sequence has been initialized as an empty sequence.
//
// MODIFICATION MEMBER FUNCTIONS for the sequence class:
// void start( )
// Postcondition: The first item on the sequence becomes the current item
// (but if the sequence is empty, then there is no current item).
//
// void advance( )
// Precondition: is_item returns true.
// Postcondition: If the current item was already the last item in the
// sequence, then there is no longer any current item. Otherwise, the new
// current item is the item immediately after the original current item.
//
// void insert(const value_type& entry)
// Precondition: size( ) < CAPACITY.
//.
//
// void attach(const value_type& entry)
// Precondition: size( ) < CAPACITY.
//.
//
// void remove_current( )
// Precondition: is_item returns true.
// Postcondition: The current item has been removed from the sequence, and the
// item after this (if there is one) is now the new current item.
//
// CONSTANT MEMBER FUNCTIONS for the sequence class:
// size_type size( ) const
// Postcondition: The return value is the number of items in the sequence.
//
// bool is_item( ) const
// Postcondition: A true return value indicates that there is a valid
// "current" item that may be retrieved by activating the current
// member function (listed below). A false return value indicates that
// there is no valid current item.
//
// value_type current( ) const
// Precondition: is_item( ) returns true.
// Postcondition: The item returned is the current item in the sequence.
//
// VALUE SEMANTICS for the sequence class:
// Assignments and the copy constructor may be used with sequence objects.
#ifndef MAIN_SAVITCH_SEQUENCE_H
#define MAIN_SAVITCH_SEQUENCE_H
#include <cstdlib> // Provides size_t
namespace main_savitch_3
{
class sequence
{
public:
// TYPEDEFS and MEMBER CONSTANTS
typedef double value_type;
typedef std::size_t size_type;
static const size_type CAPACITY = 30;
// CONSTRUCTOR
sequence( );
// MODIFICATION MEMBER FUNCTIONS
void start( );
void advance( );
void insert(const value_type& entry);
void attach(const value_type& entry);
void remove_current( );
// CONSTANT MEMBER FUNCTIONS
size_type size( ) const;
bool is_item( ) const;
value_type current( ) const;
private:
value_type data[CAPACITY];
size_type used;
size_type current_index;
};
}
#endif
- Anonymous asked: Given the following directory structure:
src
| -- Nintendo.class
| -- hero
|-- Mario.class
|
|-- how
|-- Yoshi.class
And the following source file:
class MyClass {
Nintendo n;
Mario m;
Yoshi y;
}
/*
Q1: Which statement(s) must be added for statement(s) must be added for the
source file (in MyClass) to compile?
Q2: Why must this/these statements be added?
Q3: Add this/these statement(s) to the code so that it compiles. Name your text
file c10q2.java
*/
(2 answers)
- Anonymous asked: Given:
// (1)INSERT CODE HERE
class TestIt {
public static void main(String[] args) {
Scanner x = new Scanner("myFile");
System.out.println(x.hasNext());
}
}
/*
Q1: Give two independent examples of lines of code that can be inserted at line
(1) so that the file will compile.
Q2: What is the most specific line of code that can be typed so that the code
will compile? What is the most generic?
*/
(1 answer)
- Anonymous asked: Given:
import static java.lang.System.*;
class _ {
static public void main(String... _Y2__pq_$__) {
String _$ = "";
for(int x=0; ++x < _Y2__pq_$__.length; )
_$ += _Y2__pq_$__[x];
//System.out.println(x);
out.println(_$);
}
}
And the command line:
java _ - A .
What will be the output?
Q1: What is this question testing the student's knowledge of?
Q2: What is normally typed in the place of the identifier _Y2__pq_$__?
Q3: What does this mean concerning method names and identifiers?
(2 answers)
- Anonymous asked: What is the truth table of the boolean function f below, which is a function of three boolean variables a, b, and c? What is the boolean expression of f? (2 answers)
- Anonymous askedEmployee - firstName : string - lastName : strin... Show morei L A B S T E P S
STEP 1: Understand the UML Diagram
Employee - firstName : string - lastName : string - gender : char - dependents : int - annualSalary : double - static numEmployees : int = 0 +Employee() +Employee(in first : String, in last : String, in gen : char, in dep : int, in sal : double) +calculatePay() : double +displayEmployee() : void +static getNumEmployees() : int +getFirstName() : String +setFirstName(in first : String) : void +getLastName() : String +setLastName(in last : String) : void +getGender() : char +setGender(in gen : char) : void +getDependents() : int +setDependents(in dep : int) : void +getAnnualSalary() : double +setAnnualSalary(in salary : double) : void +setAnnualSalary(in salary : String) : void The following attribute has been added:
- static numEmployees : int = 0
Using your code from Week 1, display a divider that contains the string "Employee Information".
Display the employee information for the second Employee object. The expected output looks like:
Annual Salary: $32,500.00
Weekly Pay: $625.00
Total employees: 1
*************** Employee Information ***************
First Name: Mary
Last Name: Noia
Gender: F
Dependents: 5
Annual Salary: $24,000.00
Weekly Pay: $461.54
Total employees: 2
- Anonymous asked: A bank charges $10 per month plus:
$0.10 each for less than 20 checks
$0.08 each for 20-39 checks
$0.06 each for 40-59 checks
$0.04 each for 60 or more checks
The bank also charges $15 if the balance falls below $400, before any fees are applied. Do not accept a negative value for the number of checks written.
If a negative value is given for the beginning balance, display a message indicating the account is overdrawn.
Write a program that asks a user to enter the beginning balance of an account and the number of checks written
for the month. Calculate the bank charge fee for that month.
-----------------------------
I tried this, but kept getting "E:\temp\D9C2.tmp\Bank.cpp:6:11: error: '::main' must return 'int'
E:\temp\D9C2.tmp\Bank.cpp: In function 'int main()':
E:\temp\D9C2.tmp\Bank.cpp:87:15: error: 'system' was not declared in this scope:"
Is there an easier way to write this?:
//Header file section
#include<iostream>
#include<cstdlib> // needed for system()
using namespace std;
//Main function (main must return int; 'void main' is non-standard)
int main()
{
//Variable declarations
int beginningbalance,checks;
double total,bankservicefees;
//inputting balance
cout<<"Enter beginning balance"<<endl;
cin>>beginningbalance;
//inputting checks
cout<<"Enter number of checks written on this month"<<endl;
cin>>checks;
if(checks <0)
{
cout<<"Account Overdrawn"<<endl;
}
if(checks<20 && beginningbalance > 400)
{
total = checks*0.10;
bankservicefees = (10+total);
cout<<"Total bank service fees for the month is $"<<bankservicefees<<endl;
}
if(checks<20 && beginningbalance < 400)
{
total = checks*0.10;
bankservicefees = (10+total+15);
cout<<"Total bank service fees for the month is $"<<bankservicefees<<endl;
}
if(checks >= 20 && checks <=39)
{
if(beginningbalance > 400)
{
total = checks*0.08;
bankservicefees = (10+total);
cout<<"Total bank service fees for the month is $" <<bankservicefees<<endl;
}
}
if(checks >= 20 && checks <=39)
{
if(beginningbalance < 400)
{
total = checks*0.08;
bankservicefees = (10+total+15);
cout<<"Total bank service fees for the month is $" <<bankservicefees<<endl;
}
}
if(checks >= 40 && checks <=59)
{
if(beginningbalance > 400)
{
total = checks*0.06;
bankservicefees = (10+total);
cout<<"Total bank service fees for the month is $" <<bankservicefees<<endl;
}
}
if(checks >= 40 && checks <=59)
{
if(beginningbalance < 400)
{
total = checks*0.06;
bankservicefees = (10+total+15);
cout<<"Total bank service fees for the month is $" <<bankservicefees<<endl;
}
}
if(checks >= 60 && beginningbalance > 400)
{
total = checks*0.04;
bankservicefees = (10+total);
cout<<"Total bank service fees for the month is $" <<bankservicefees<<endl;
}
if(checks >= 60 && beginningbalance < 400)
{
total = checks*0.04;
bankservicefees = (10+total+15);
cout<<"Total bank service fees for the month is $"<<bankservicefees<<endl;
}
//pause system for a while
system("pause");
return 0;
}//End main function
- Anonymous asked: Please follow the instructions in the exercise below. The program should include %-comments, input statements, fprintf output statements, and multiway path control structures like "if", "elseif", "end". A Lifesaver rating will be given for step-by-step instructions of the program.
The following are formulas for calculating the training heart rate (THR) for
men and women
For men (Karvonen formula): THR = [(220 – AGE) – RHR] × INTEN + RHR
For women: THR = [(206 – 0.88 × AGE) – RHR] × INTEN + RHR
where AGE is the person’s age, RHR the resting heart rate, and INTEN the fitness
level (0.55 for low, 0.65 for medium, and 0.8 for high fitness). Write a
program in a script file that determines the THR. The program asks users to
enter their gender (male or female), age (number), resting heart rate (number),
and fitness level (low, medium, or high). The program then displays the training
heart rate. Use the program for determining the training heart rate for the
following two individuals:
(a) A 21-year-old male, resting heart rate of 62, and low fitness level.
(b) A 19-year-old female, resting heart rate of 67, and high fitness level.
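The exercise asks for a MATLAB script, but the formulas themselves are a one-line computation per gender; a language-neutral sketch (the function name trainingHeartRate is mine):

```cpp
#include <cassert>
#include <cmath>

// Training heart rate. 'male' selects the Karvonen formula
// THR = [(220 - AGE) - RHR] * INTEN + RHR, otherwise the women's
// formula THR = [(206 - 0.88 * AGE) - RHR] * INTEN + RHR.
// inten is 0.55 (low), 0.65 (medium) or 0.8 (high fitness).
double trainingHeartRate(bool male, double age, double rhr, double inten) {
    double maxHr = male ? (220.0 - age) : (206.0 - 0.88 * age);
    return (maxHr - rhr) * inten + rhr;
}
```

For case (a), (220 - 21 - 62) * 0.55 + 62 = 137.35; for case (b), (206 - 0.88 * 19 - 67) * 0.8 + 67 = 164.824.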
- Anonymous asked: The company has decided to expand internationally to Cairo in Egypt.
The Cairo location will have two travel agents.
The company has also decided to expand to an adjacent office building which will be leased.
The original building is Building A
The leased building is Building B
Building B will have 10 travel agents and an Administrative Aide
First challenge: having the networks in both building communicate with each other.
The agents in the Cairo location will need to exchange files with the Miami location.
Building B will have Windows Vista and Building A has Windows 2000, XP and Vista - explain effects of this on the network and your solution .
Agents from building A need to work at Building B- select an appropriate user profile (local or roaming?)
Space is needed for five new agents who will work in and out of the office - wireless/wired, remote access?
Persons in building B need to get print reports from Building A - shared resource (print report folder location) and printer setup.
- Anonymous asked: 1. Prompt the user for the name of an input text file 2. Read the input text file and display a keywor...
- Anonymous asked: (4-5 sentences)
Based on your knowledge of networks, discuss which design would best fit the client's global needs. How does your design allow both locations to communicate? There is a need for occasional file exchange between Miami and Cairo. How can this be done?
Please answer this question using the network design and Visio diagram from my last question.
- youngblood asked: Consider using the ADC and the ATmega1284P to read the attached light sensor on PORTA.6.
Write a routine to perform the following:
-Initialize PORTA.6 as an input
-Select the ADC clk as Fclk/128
-Select the active channel to be channel 6, single-ended conversion
-Select ADC reference to be AVcc
-Select the ADC return value to be left-justified. This yields an 8-bit result in the high register ADCH
-Finally, enable the converter (but do not initiate a conversion)
-Return from the routine
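The steps above map to three register writes on the ATmega1284P. A sketch using mocked registers so the bit patterns can be checked off-target (on real hardware DDRA, ADMUX and ADCSRA come from <avr/io.h>; the bit positions below follow the ATmega1284P datasheet):

```cpp
#include <cassert>
#include <cstdint>

// Mocked AVR registers; on the device these are memory-mapped SFRs.
uint8_t DDRA = 0xFF, ADMUX = 0, ADCSRA = 0;

void adcInitLightSensor() {
    DDRA  &= ~(1 << 6);                       // PORTA.6 as input
    ADMUX  = (1 << 6)                         // REFS0: AVcc reference
           | (1 << 5)                         // ADLAR: left-justify, 8-bit result in ADCH
           | 0x06;                            // MUX4:0 = 00110: single-ended channel 6
    ADCSRA = (1 << 7)                         // ADEN: enable the converter
           | (1 << 2) | (1 << 1) | (1 << 0);  // ADPS2:0 = 111: ADC clock = Fclk/128
    // ADSC (bit 6) deliberately left clear: no conversion is started
}
```

After the call, ADMUX is 0x66 and ADCSRA is 0x87, with the start-conversion bit still clear.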
- Anonymous asked:
Private Sub btnAdd_Click(sender As System.Object, e As System.EventArgs) Handles btnAdd.Click
Dim myReader As IO.StreamReader ' declare only; the file is opened inside Try so exceptions are caught
Dim sum As Integer = 0
Dim num As Integer
Try
myReader = IO.File.OpenText("C:\Visual Studio 2010\testData.txt")
Catch ex As IO.DirectoryNotFoundException
MessageBox.Show(" Directory not found, " & ex.Message & "By: ")
Catch exc As IO.DriveNotFoundException
MessageBox.Show(" File cannot be found, " & exc.Message & "By:")
End Try
- Anonymous asked: Here's the question I'm having some issues with. I basically have to fill in the three methods (rowSum(), colSum(), and diagSum()):

An n x n matrix that is filled with the numbers 1, 2, 3, ..., n^2 is a magic square if the sum of the elements in each row, in each column, and in the two diagonals is the same value. The MagicSquare class that follows holds a two-dimensional array, and the method isMagic() determines if the array matrix forms a magic square. There are three methods missing in the class that you must provide: rowSum(), colSum(), and diagSum().

Here is the code, and any help is appreciated, thanks!

/**
 * MagicSquare - determines if array forms a magic square.
 */
public class MagicSquare
{
    private int[][] square;

    /**
     * Constructor - passed in a square
     */
    public MagicSquare(int[][] inSquare)
    {
        square = inSquare;
    }

    /**
     * isMagic - determines if array forms a magic square.
     *
     * @return true if array is a magic square, false otherwise
     */
    public boolean isMagic()
    {
        // test if two dimensions are the same
        if (square.length != square[0].length)
        {
            return false; // not n x n
        }

        // test that all integers are represented
        for (int num = 1; num <= square.length * square.length; num++)
        {
            boolean found = false;
            for (int i = 0; i < square.length && found == false; i++)
            {
                for (int j = 0; j < square.length; j++)
                {
                    if (square[i][j] == num)
                    {
                        found = true;
                        break; // out of inner for loop
                    }
                }
            }
            if (found == false) // number not in array
            {
                return false;
            }
        }

        // test that any row, column or diagonal did not match
        if (rowSum() == -1 || colSum() == -1 || diagSum() == -1)
        {
            return false;
        }

        // test that the sums of rows, columns, and diagonals are same
        if (rowSum() != colSum() || colSum() != diagSum())
        {
            return false;
        }

        // it is a magic square
        return true;
    }

    /**
     * rowSum - determines sum of rows.
     *
     * @return sum of rows if all the same, -1 otherwise
     */
    public int rowSum()
    {
        // create method here
    }

    /**
     * colSum - determines sum of columns.
     *
     * @return sum of columns if all the same, -1 otherwise
     */
    public int colSum()
    {
        // create method here
    }

    /**
     * diagSum - determines sum of the two diagonals.
     *
     * @return sum of two diagonals if the same, -1 otherwise
     */
    public int diagSum()
    {
        // create method here
    }
}
- needhelpwithhomework asked: Write two functions that reverse the order of elements in a
vector<string>
Test with the following input:
1st input:
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday
quit
2nd input:
one
two
three
four
five
six
seven
eight
quit
Please note: You must write a main program to call the two
functions in order to test them. When asking the user for input have them enter quit to signal that they are finished entering input
i.e.
cout << "Enter a string and enter quit to end input: \n";
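Two natural candidates for the reversal functions are an in-place swap and a copy-returning version; a sketch (function names are mine):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Version 1: reverse in place by swapping the ends toward the middle.
void reverseInPlace(std::vector<std::string>& v) {
    for (size_t i = 0, j = v.size(); i + 1 < j; ++i, --j)
        std::swap(v[i], v[j - 1]);
}

// Version 2: build and return a reversed copy, leaving the input intact.
std::vector<std::string> reversedCopy(const std::vector<std::string>& v) {
    std::vector<std::string> out;
    for (size_t i = v.size(); i > 0; --i) out.push_back(v[i - 1]);
    return out;
}
```

The required main would fill the vector from cin until "quit" is read, then call each function and print the results.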
- Anonymous asked: Write a method to save a Linked List to a text file called file.txt. It must save the Linked List to the file with each node on a different line.
Here is the Linked List
LinkedList<Account> account = new LinkedList<Account>();
Account is the class that contains all of the string information for each account such as username, password, email, state, hobby. But basically this is what the linked List looks like, but with multiple accounts.
John
[email protected]
Virginia
Driving
- GentleDog1542 asked: Explain when a Switch statement may be more appropriate than an if/else statement and provide a code example.
- GentleDog1542 asked: Which of the following is not included in a pseudocode program?
Program control structures
Input/output
Declarations
Program algorithms
- GentleDog1542 asked: The symbol "==" is used as part of a _____.
comparison operator
decision operator
repetition operator
sequence operator
- Anonymous asked: Goal: The goal of this lab is to make an interrupt-driven four-digit seven-segment LED counter that...
- GentleDog1542 asked: One-way if statements execute an action if the condition is _____.
sent to the console
true
false
within a nested loop
- GentleDog1542 asked: Consider the following statement: A store is giving a discount of 10% for all purchases over $1,000. Which of the following is not the appropriate structure to use to program the statement?
Selection
Decision
Control
Sequence
- Anonymous asked: With OSCCLK running at 8 MHz, how will you configure the RTI register to generate interrupts with a ...
- GentleDog1542 asked: Which of the following is a variable, usually a boolean or an integer, that signals when a condition exists?
Relational operator
Logical operator
Arithmetic operator
Flag
- GentleDog1542 asked: When a program lets the user know that an invalid choice has been made, this is known as _____.
input validation
output validation
compiler criticism
None of the above
- GentleDog1542 asked: Consider the following segment of code:
if(apple = 5)
cout<<"You got \"five\" apples!"<<endl;
else
cout<<"You do not have five apples!\n";
cout<<"The end of the program is reached.";
What error can you identify?
A double quotation mark was incorrectly inserted.
The programmer forgot the curly braces.
Assumes indentation has a logical purpose.
The programmer used assignment operators instead of relational operators.
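The flagged bug in the snippet above is apple = 5: assignment stores 5 and the whole expression evaluates to 5, which is truthy, so the if-branch always runs regardless of apple's prior value. A minimal demonstration of the difference (function names are mine, and the assignment bug is kept on purpose):

```cpp
#include <cassert>

// Deliberately buggy: the condition assigns 5 to apple, and the
// expression's value (5) is truthy, so this always returns true.
bool assignmentBranch(int apple) {
    if (apple = 5)        // BUG on purpose: '=' assigns, it does not compare
        return true;
    return false;
}

// The intended relational test.
bool comparisonBranch(int apple) {
    return apple == 5;
}
```

Most compilers warn about an assignment used as a condition, which is a good reason to keep such warnings enabled.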
- GentleDog1542 asked: What keywords are used to construct a switch statement?
case, jump, break, default
switch, case, break, default
break, default, case, goto
switch, case, break, else
- Anonymous asked: Write CodeWarrior C statements to specify an interrupt handler for the ATD (A-to-D converter module 0) ...
- GentleDog1542 asked: How does the environment handle a breakpoint when the debugger is running a program?
It stops the program at the line that has the breakpoint, but doesn't execute it.
It stops the program at the line of the breakpoint after it has executed it.
It runs past the line with the breakpoint, but echoes the values in the Trace window.
It shows the call stack in the memory window.
- EPR023 asked: The main issue I'm having with my code right now is that I can't generate the output that I'm being asked for.
This is the exercise:
Write a program that simulates a bouncing ball by computing its height in feet at each second as time passes on a simulated clock. At time zero, the ball begins at height zero and has an initial velocity supplied by the user. (An initial velocity of at least 100 feet per second is a good choice.) After each second, change the height by adding the current velocity; then subtract 32 from the velocity. If the new height is less than zero, multiply both the height and the velocity by -0.5 to simulate the bounce. Stop at the fifth bounce. The output from your program should have the following form:
Enter the initial velocity of the ball: 100
Time: 0 Height: 0.0
Time: 1 Height 100.0
Time: 2 Height 168.0
Time: 3 Height 204.0
Time: 4 Height 208.0
Time: 5 Height 180.0
Time: 6 Height 120.0
Time: 7 Height 28.0
Bounce !
Time: 8 Height 48.0
This is my code as of right now, and I would like some insight into how I can fix this issue. Please be as explanatory as possible.
public static void main(String[] args)
{
Scanner keyboard = new Scanner (System.in);
double height = 0;
int bounce = 0;
int time = 0;
System.out.println("Enter the initial velocity of the ball:");
int velocity = keyboard.nextInt();
while ( bounce < 5) {
// displaying time and height
System.out.println("time:" + time + " height:" + height);
//updating time
time++;
// calculate new height
height = height + velocity;
// decrease velocity
velocity = velocity - 32;
if (height < 0)
{
// change velocity; note (int)-0.5 truncates to 0 first, so cast the product
velocity = (int)(-0.5 * velocity);
// recalculate height
height = -0.5 * height;
System.out.println("Bounce!");
bounce++;
}
} // closes the while loop
} // closes main
This is my current output from my code:
Bounce!
time:13 height:4.0
time:14 height:4.0
Bounce!
time:15 height:14.0
time:16 height:14.0
Bounce!
- Anonymous asked: (Exercise 11 in Chapter 9 explains how to add large integers using
arrays. In that exercise, the program could add only integers of, at
most, 20 digits.)
This chapter explains how to work with dynamic integers.
DESIGN A CLASS named largeIntegers such that an object of this class can store
an integer of any number of digits. ADD OPERATIONS TO ADD, SUBTRACT, MULTIPLY AND COMPARE INTEGERS STORED IN TWO OBJECTS. ALSO ADD CONSTRUCTORS TO PROPERLY INITIALIZE OBJECTS AND FUNCTIONS TO SET, RETRIEVE, AND PRINT THE VALUES OF OBJECTS.
(THIS IS THE NEXT STEP - ONLY NEED THE MAIN CPP. I capitalized the section that this step includes.)
(Step 2. If done by Sunday 11:59 PST I'll award Lifesaver, even if not complete; depends. Refer to for first step. I changed LargeInteger to largeIntegers.)
This is what I have so far for the main, can't figure out what's wrong with the number(input) part.
#include "largeIntegers.h"
#include <iostream>
#include <string>
using namespace std;
int main()
{
string input;
largeIntegers number1; // note: re-declaring number1/number2 inside the loop
largeIntegers number2; // would shadow these outer objects
while(true)
{
cout << "Enter first number to add" << endl;
cin >> input;
number1 = largeIntegers(input); // assign instead of declaring a shadowing local
cout << "Enter second number to add" << endl;
cin >> input;
number2 = largeIntegers(input);
cout << "answer: \n" << (number1 + number2) << endl;
}
return 0;
}
|
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2012-march-11
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
We start by setting up a new project. File/New File or Project /Felgo/New – Empty Felgo 3 Project and finally press Choose….
In the next window you choose a name and a location for your project.
Next you choose your development kit.
MinGW is typically used on Windows; pick the development kit that matches your platform. If a kit is not installed, you can update or download missing
components with the MaintenanceTool. You can find the MaintenanceTool in the Felgo source folder, e.g. C:/Soft/Felgo
Now you are going to set the project properties. Keep the App display name as it is, but change App identifier to fit your project, for example – com.FelgoTutorials.2048Tutorials and set Interface orientation to Portrait.
Press Next and Finish. Congratulations, you just created a project!
Now we can get busy with creating our game scene and adding some colors. Open your project and go to QML/qml/Main.qml.
Remove everything and add the following code.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  id: gameWindow
  screenWidth: 960
  screenHeight: 640
  activeScene: gameScene

  Scene {
    id: gameScene
    width: 480
    height: 320
  }
}
GameWindow is our main component that holds all our scenes, entities and functions. Here you can put whatever width and height are suitable for your monitor. To check different screen resolutions, press Ctrl+(1-9) while the program is running.
Scene is where you are going to put your game. In this case, however, width and height play the role of a logical game size and will be auto-scaled to match the GameWindow size. You can read more about supporting different screen resolutions and aspect ratios here.
Let's define our background and fill it with some nice color plus an additional brown margin. Update your Scene so it looks like this.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  Scene {
    id: gameScene
    width: 480
    height: 320
    Rectangle {
      id: background
      anchors.fill: gameScene.gameWindowAnchorItem
      color: "#B6581A" // main color
      border.width: 5
      border.color: "#4A230B" // margin color
      radius: 10 // radius of the corners
    }
  }
}
Run your game to see the result (press the green triangle in the bottom left corner).
Most of the properties of this component are pretty obvious, but anchors.fill requires some explanation. The QML anchoring system allows you to define relationships between the anchor lines of different items. In this case we define all anchor lines of our background to match all anchor lines of the gameWindowAnchorItem. Basically, no matter how big the window size is, it is always covered by our background rectangle. In this way we avoid black borders on some wide-screen devices.
Let's add some general properties of our game. Update your GameWindow component so it looks like this.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  id: gameWindow
  screenWidth: 960
  screenHeight: 640

  property int gridWidth: 300 // width and height of the game grid
  property int gridSizeGame: 4 // game grid size in tiles
  property int gridSizeGameSquared: gridSizeGame*gridSizeGame
  property var emptyCells // an array to keep track of all empty cells
  property var tileItems: new Array(gridSizeGameSquared) // our Tiles

  // Scene starts here
}
With the help of our freshly defined properties we are going to create a nice tiled background for our game. Create a new qml class by left-clicking qml/Add New…/Qt/QML File and press Choose…
In the next window, set the name to GameBackground.qml and proceed by clicking Next and Finish.
Add the following implementation for the game background.
import QtQuick 2.2
import Felgo 3.0

// GameBackground is our decorative layer; it holds our grid background.
// Of course it would be possible to just use an image instead, but we
// wanted to show you how to use the Grid component together with a
// Repeater to create such a layout.
Rectangle {
  id: gameBackground
  width: gridWidth
  height: width // square, so height = width
  color: "#4A230B"
  radius: 5

  // stationary, immovable orange grid
  Grid {
    id: tileGrid
    anchors.centerIn: parent
    rows: gridSizeGame // "4" - we don't need to specify columns; the component will do that for us

    // the Repeater fills our gameBackground with empty (orange) tiles
    Repeater {
      id: cells
      model: gridSizeGameSquared // "16" - repeat count

      // an invisible Item holds our tile width and height,
      // so we can adjust our margins and offsets
      Item {
        width: gridWidth / gridSizeGame // "300" / "4"
        height: width // square, so height = width

        Rectangle {
          anchors.centerIn: parent
          width: parent.width - 2 // -2 is our width margin offset; set 0 if no offset is needed
          height: width // square, so height = width
          color: "#E99C0A"
          radius: 4
        }
      }
    }
  }
}
We start by setting up a background rectangle with the id gameBackground. This rectangle acts as our tile grid container. As you see, the width of the rectangle is set to gridWidth, one of the five properties we set in Main.qml. By changing the value of gridWidth, you change the size of the whole tile grid!
Inside the gameBackground we have a Grid component. The functionality of the component is pretty straightforward: you create items inside the grid, and their positions are adjusted according to the number of rows and/or columns you specified.
However, instead of creating items manually, we use a Repeater. The Repeater is a component that automatically creates a number of elements based on the model property. Just add an item/rectangle/button/... to the Repeater and you're done.
In our case, we first define an Item with a size deduced from (gridWidth/gridSizeGame) and then put a Rectangle inside. By doing this, we ensure that our Tile has a fancy edge offset while other distances and sizes are not disturbed.
You can set your own offset by changing the number next to the Rectangle's width and height. Different offset values are demonstrated in the picture below.
Just to clarify, these are not the tiles that we will move later in the game. This is just the background. You could use an image for this instead, but we are nerds, and we do it the nerd way! ;-)
In order to see the grid in our game, add the following Item component to Main.qml. You should place the Item within our Scene component, just after the background rectangle.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  Scene {
    // ...
    Item {
      id: gameContainer
      width: gridWidth
      height: width // square, so height = width
      anchors.centerIn: parent
      GameBackground {}
    }
  }
}
Here we put our freshly created GameBackground.qml inside the Item gameContainer. This Item is not only a place to store our GameBackground, but will also be the target container for entities created by the EntityManager. The EntityManager is a component that creates, removes and keeps track of all entities created in the game. We can define a new EntityManager in our Main.qml like this.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  EntityManager {
    id: entityManager
    entityContainer: gameContainer
  }
  // our Scene starts here
}
Go run your game and see what you got.
For now it's just a decorative orange grid, but very soon there will be tiles merging and sliding like crazy!
Before we add our controls, we create a Timer that we trigger after each move/swipe. We will animate the tiles later, and the animation will take 250 ms; to prevent the animations from overlapping, we lock the input for 300 ms to be sure that the animation has finished. The Timer goes beneath our gameContainer.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  Scene {
    // ...
    Timer {
      id: moveRelease
      interval: 300
    }
  }
}
We set the Timer id to moveRelease and its interval to 300, meaning the Timer runs for 300 ms and then stops. We will keep track of that and restart it every time a move/swipe is made.
After the Timer component, add this piece of code to track keyboard input. We add keyboard support to be quicker when testing on desktop, as swipe movements are tedious with the mouse.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  Scene {
    // ...
    // by forwarding keys to the keyboardController we make sure that
    // focus is automatically provided to the keyboardController
    Keys.forwardTo: keyboardController

    Item {
      id: keyboardController
      Keys.onPressed: {
        if (event.key === Qt.Key_Left && moveRelease.running === false) {
          event.accepted = true
          moveRelease.start()
          console.log("move Left")
        }
      }
    }
  }
}
To keep a clear code structure, we create an empty Item to use as a container for the keyboard input.
Keys.onPressed is the general signal we receive when a key is pressed. If Qt.Key_Left is pressed and the Timer is not running, we accept the event, restart the Timer and write the event to the console for testing purposes.
If you run the app and press the left key several times, you will notice that we can only move Left every 300ms. Since we don't have any tiles yet, we use the console to check our output.
Just below the Qt.Key_Left statement, try to write a similar if statement for each of the arrow keys: Qt.Key_Right, Qt.Key_Up and Qt.Key_Down.
With the help of a MouseArea component, we keep track of touch/swipe events on a mobile device and clicks/drags of a mouse. To implement this component, paste this code just after your keyboard Item.
import Felgo 3.0
import QtQuick 2.2

GameWindow {
  // ...
  Scene {
    // ...
    MouseArea {
      id: mouseArea
      anchors.fill: gameScene.gameWindowAnchorItem
      property int startX // initial position X
      property int startY // initial position Y
      property string direction // direction of the swipe
      property bool moving: false

      // the three handlers below check the swiping direction
      // and call an appropriate method accordingly
      onPressed: {
        startX = mouse.x // save initial position X
        startY = mouse.y // save initial position Y
        moving = false
      }
      onReleased: {
        moving = false
      }
      onPositionChanged: {
        var deltax = mouse.x - startX
        var deltay = mouse.y - startY
        if (moving === false) {
          if (Math.abs(deltax) > 40 || Math.abs(deltay) > 40) {
            moving = true
            if (deltax > 30 && Math.abs(deltay) < 30 && moveRelease.running === false) {
              console.log("move Right")
              moveRelease.start()
            } else if (deltax < -30 && Math.abs(deltay) < 30 && moveRelease.running === false) {
              console.log("move Left")
              moveRelease.start()
            } else if (Math.abs(deltax) < 30 && deltay > 30 && moveRelease.running === false) {
              console.log("move Down")
              moveRelease.start()
            } else if (Math.abs(deltax) < 30 && deltay < -30 && moveRelease.running === false) {
              // note: deltay < -30 (not < 30) so only an upward swipe matches
              console.log("move Up")
              moveRelease.start()
            }
          }
        }
      }
    }
  }
}
We start by giving an id and defining an anchor for our MouseArea component. As you can see, with anchors.fill you can make any area you want act as a clickable input receiver. In this case our area has the exact same anchor reference as our big brown background, gameScene.gameWindowAnchorItem. This means the MouseArea covers the entire screen and reacts to all possible screen touches.
To know in which direction the user is swiping, we check the difference (new position - old position) for both coordinates, and based on that we call a move function on all our tiles (for now we just write to the console). We also check and restart our moveRelease Timer. If you run your game and swipe the screen, you can see that the direction of the swipe gets printed in the console.
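The delta-based direction logic is independent of QML and easy to test in isolation. A C++ sketch of the same classification (the function name swipeDirection is mine; the 40-pixel trigger and 30-pixel axis thresholds are the tutorial's values, and diagonal swipes are treated as ambiguous):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Classify a swipe from its coordinate deltas: the gesture counts once
// either delta exceeds 40 px, and the dominant axis picks the direction.
std::string swipeDirection(int deltax, int deltay) {
    if (std::abs(deltax) <= 40 && std::abs(deltay) <= 40) return "none";
    if (deltax > 30  && std::abs(deltay) < 30) return "right";
    if (deltax < -30 && std::abs(deltay) < 30) return "left";
    if (std::abs(deltax) < 30 && deltay > 30)  return "down";
    if (std::abs(deltax) < 30 && deltay < -30) return "up";
    return "none"; // diagonal swipe: ambiguous, ignored
}
```

Keeping the thresholds apart like this (40 to trigger, 30 to commit to an axis) avoids firing a move on tiny jitters while still rejecting diagonal gestures.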
|
https://felgo.com/doc/how-to-make-2048-game-1-tutorial/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Scott Giles commented on NIFI-7279:
-----------------------------------
I don't have access to a development computer right now. I'm basically stuck with a tablet.

> Redis Detect Duplicate Issue
> ----------------------------
>
> Key: NIFI-7279
> URL:
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.11.4
> Reporter: Scott Giles
> Priority: Major
>
> A possible fix would be to change line 202 of
> java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java
> to
> return (v != null ? valueDeserializer.deserialize(v) : null);

--
This message was sent by Atlassian Jira (v8.3.4#803005)
|
https://www.mail-archive.com/[email protected]/msg94709.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Azure Mobile Apps iOS SDK 3.2.0 release
October 13, 2016
We are excited to bring you the latest release of our Mobile Apps iOS client SDK, 3.2.0. We've added the Refresh Token feature, updated the SDK with iOS 10 support, and made performance improvements.
October 12, 2016
We just rolled out .NET Mobile Client SDK 3.0.1 and Mobile SQLiteStore 3.0.1! We are out of beta, fixed the Android SQLiteStore dependency issue, and unified our .NET client SDK versions!
September 19, 2016
Azure Mobile Apps baked in refresh tokens to its authentication feature, and it is now so simple to keep your app users logged in.
August 25, 2016
Now you can get detailed error feedback per push request from Platform Notification Systems when you push with Notification Hubs.
June 14, 2016
Work with apns-priority with Notification Hubs to send prioritized APNS pushes to iOS devices.
June 13, 2016
Azure Notification Hubs is announcing the Batch Direct Send feature, allowing notification sends directly to device tokens/channel uris.
May 12, 2016
Notification Hubs recently enabled namespace-level tiers so customers can allocate resources tailored to each namespace’s expected traffic and usage patterns.
April 14, 2016
Notification Hubs' per message telemetry feature now supports scheduled send and the allowed device expiry (time to live) is extended to infinity.
November 10, 2015
Here is an update on what our team has been working on for the past few months to deliver you a smoother developing experience and a set of handy features including: Per Message Telemetry, Multi-tenancy, Push Variables, Direct Send and more.
May 26, 2015
Update Notification Hubs/Service Bus Namespace Type to "Notification Hub" and "Messaging" from "Mixed."
|
https://azure.microsoft.com/en-ca/blog/author/yuaxu/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Mysensor usb gateway serial problem
Hello, could somebody help me and say why it doesn't work?
// Enable debug prints to serial monitor
#define MY_DEBUG
// Enable and select radio type attached
//#define MY_RADIO_RF24
#define CHILD_ID 3
#define BUTTON_PIN 3
#define RELAY_PIN

void before() {
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Then set relay pins in output mode
    pinMode(pin, OUTPUT);
    // Set relay to last known state (using eeprom storage)
    digitalWrite(pin, loadState(sensor)?RELAY_ON:RELAY_OFF);
  }
}

int gate = 23;
int relay1 = 22;

void setup() {
  Serial.begin (115200);
  pinMode(BUTTON_PIN,INPUT);
  digitalWrite(BUTTON_PIN,HIGH);
  pinMode(relay1, OUTPUT);
  digitalWrite (relay1, LOW);
  pinMode(gate, OUTPUT);
  digitalWrite (gate, LOW);
  pinMode(24, INPUT_PULLUP);
}

void presentation() {
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Relay", "1.0");
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Register all sensors to gw (they will be created as child devices)
    present(sensor, S_BINARY);
  }
}

void loop() {
  byte value = analogRead(0);
  if (value == LOW){
    delay(10);
    digitalWrite(gate, LOW);
    digitalWrite(relay1, LOW);
    delay(2000);
    digitalWrite(gate, HIGH);
  }
  if (value == HIGH){
    delay(10);
    digitalWrite(gate, LOW);
    digitalWrite(relay1, HIGH);
    delay(2000);
    digitalWrite(gate, HIGH);
  }
}
Welcome to the MySensors community @dany17
Would you mind sharing what you mean by "it" and "doesn't work"?
If you haven't already, see for the most common problems and how to troubleshoot them efficiently.
- scalz Hardware Contributor last edited by scalz
@dany17
Hi.
My crystal ball
told me you need to uncomment this line in your sketch if you're planning to use nrf24 :
//#define MY_RADIO_RF24 to #define MY_RADIO_RF24
If it doesn't help, like mfalkvidd said please show us your debug logs.
@scalz said in Mysensor usb gateway serial problem:
If it doesn't help, like mfalkvidd said please show us your debug logs.
OK i do //#define MY_RADIO_RF24 because for now I wanted to make a base station via usb
I will tell you my idea and you will tell if it is possible here because I am slowly losing faith XD
I want to use mega2560 to control several devices. One of them is the roller blind that I converted into electric motor control (currently I have two. The first classic 2 wires and I have connected to 3 contactors which control the direction and time of operation. The second stepper motor).
I now want to use my sensor to control the blind in domoticz with 1 switch or if it could be slider.
The next thing would be INA219 but maybe later for the time being on this roller blind, is it possible to do?
I would like to use this program in mysensors with a switch from domoticz
#include <Stepper.h>
#define STEPS 32

Stepper stepper (STEPS, 8, 10, 9, 11);
int val = 0;

void setup () {
  Serial.begin (9600);
  stepper.setSpeed (800);
  pinMode(4, INPUT_PULLUP);
}

void loop () {
  if (digitalRead (4) == LOW) {
    val = 20480;
    stepper.step (val);
    Serial.println (val);
  }
  if (digitalRead (4) == HIGH) {
    val = -20480;
    stepper.step (val);
    Serial.println (val);
  }
}
?
@kimot
Yes, I know this is an error, but I wanted to read the value from pin 4. This example (RelayActuator) worked for me. I wanted to use this ordinary switch to control the rest of the program. That's all I need at the moment.
|
https://forum.mysensors.org/topic/10635/mysensor-usb-gateway-serial-problem/3?lang=en-US
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
DEBSOURCES
sources / linux / 4.19.20-1 / drivers / mcb / Kconfig
#
# MEN Chameleon Bus (MCB) support
#
menuconfig MCB
tristate "MCB support"
default n
depends on HAS_IOMEM
This is a MCB carrier on a LPC or non PCI device.
If built as a module, the module is called mcb-lpc.ko
endif # MCB
|
https://sources.debian.org/src/linux/4.19.20-1/drivers/mcb/Kconfig/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
F# IntelliSense Improvements in Visual Studio 11 Beta
IntelliSense is the identifier auto-completion facility in Visual Studio. As a developer, IntelliSense is critical to my productivity, as it allows me to easily work with thousands of functions from different namespaces and classes without having to constantly leave the editor to consult API documentation.
|
https://devblogs.microsoft.com/fsharpteam/tag/visual-studio-11-beta/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Cinder/blueprints/read-only-volumes
- Launchpad Entry: Read-only volumes
- Created: 3 July 2013
- Contributors: Anastasia Guzikova
Contents
- 1 Summary
- 2 Release Note
- 3 Rationale
- 4 User stories
- 5 Design
- 5.1 How can you use Read Only volumes
- 5.2 Workflow
- 6 Implementation
- 7 Assumptions
- 8 Test/Demo Plan
- 9 Unresolved issues
Summary
Note this is a big spec and where possible it is broken down into sub-specs to make it easier to share work.
Release Note
Rationale
User stories
The purpose is to provide the ability to attach volumes in the read-only mode.
immutable volumes
cinder as a backend for glance
Design
How can you use Read Only volumes
Option "permissions" represents volume permissions like file permissions in Unix-like systems in the format: [0-7][0-7][0-7]
- first digit - user permissions (for the owner)
- second - group permissions (when there will be opportunity to create user groups in keystone)
- third - permissions for others;
Each digit represents rwx permissions, where:
- r means read permission as usual,
- w means write permission as usual,
- x might mean permission to boot from the volume.
While multi-RW-attaching is not available, a secondary attach is available only in R/O mode, regardless of permissions.
Example
Create a volume that is available in Read/Write mode for the owner and Read Only mode for others.
POST /v1/<tenant_id>/volumes
body:
{
    "volume": {
        ...
        "permissions": "644",
        ...
    }
}
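The digit semantics described above can be sketched in Python. This is an illustrative sketch only; digit_allows is a hypothetical helper and is not part of the Cinder code:

```python
# Illustrative sketch of the octal permission digits described above.
# digit_allows is a hypothetical helper, not from Cinder.
def digit_allows(digit, flag):
    bits = {'r': 4, 'w': 2, 'x': 1}  # Unix-style permission bits
    return bool(int(digit) & bits[flag])

# "644": the owner can read and write, others can only read
owner, group, others = "644"
assert digit_allows(owner, 'r') and digit_allows(owner, 'w')
assert digit_allows(others, 'r') and not digit_allows(others, 'w')
```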
Workflow
Read Only volumes Blueprint:
Add volume permissions support and support of Read Only volume mode (for libvirt initially, i.e. libvirt+KVM, libvirt+xen hypervisors)
- an ability to create volume with defined permissions and show volume permissions from CLI and Dashboard, an ability to update volume permissions from CLI;
- an ability to connect to volume in R/O mode and to see is volume available only in R/O mode or not from CLI and Dashboard.
Add support of Read Only mode for other hypervisors.
TBD
Add ability to configure a user group with group permissions for a volume.
TBD
Implementation
- Add a new field "permissions" in the cinder database.
- Add columns "Permissions" and "Read Only" in CLI and Dashboard.
- Add "Permissions" field in volume creation form.
- Add 'readonly' flag in attaching connection conf if volume is Read Only available.
Code Changes
Is volume available in only R/O mode checking functionality:
def volume_read_only_get(context, vol):
    perms = vol.get('permissions')
    RW, RO = False, True
    # While no multi-RW-attach,
    # if volume is attached, only R/O mode is available
    if vol.get('attach_status') == 'attached':
        if vol.get('rw_attached_user') == context.user_id:
            return RW
        return RO
    if context.user_id == vol.get('user_id'):
        return int(perms[0]) < 6
    # TODO(aguzikova): when there will be groups for volumes' users,
    # check user in group and if it's true use group permissions
    # User in "others"
    return int(perms[2]) < 6
Migration
Include:
- data migration, if any
- how users will be pointed to the new way of doing things, if necessary.
Assumptions
As soon as there is an opportunity to use user groups from keystone, we'll be able to implement the group-permissions functionality easily.
Test/Demo Plan
This need not be added or completed until the specification is nearing beta.
|
https://wiki.openstack.org/wiki/Cinder/blueprints/read-only-volumes
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Hello SYCL
Hello SYCL
At this point we assume that you have set up the pre-requisites for developing using ComputeCpp and we will proceed by writing our first SYCL application.
'Hello SYCL'
#include <iostream>
#include <CL/sycl.hpp>

namespace sycl = cl::sycl;

int main(int, char**) {
   <<Setup host storage>>
   <<Initialize device selector>>
   <<Initialize queue>>
   {
      <<Setup device storage>>
      <<Execute kernel>>
   }
   <<Print results>>
   return 0;
}
The first thing we do is include the universal SYCL header. You only ever need to use this one header - it provides the entire
cl::sycl namespace. For ease of use, we will rename the namespace to just
sycl. This will be quicker to type, while still avoiding any name conflicts.
Setup host storage
sycl::float4 a = { 1.0, 2.0, 3.0, 4.0 };
sycl::float4 b = { 4.0, 3.0, 2.0, 1.0 };
sycl::float4 c = { 0.0, 0.0, 0.0, 0.0 };
In
main, we begin by setting up host storage for the data that we want to operate on. Our goal is to compute
c = a + b, where the variables are vectors. To help us achieve this, SYCL's API provides vector types, which we'll see later. We use
float4, which is just
vec<float, 4>.
Initialize device selector
sycl::default_selector device_selector;
The SYCL model is built on top of the OpenCL model, so if you have experience with that API, you should be familiar with most of the terms used here. In the SYCL model, a computer consists of a host (the CPU) connected to zero or more OpenCL devices. Devices are made available to the user through platforms - for example, a vendor-specific driver might be a platform exposing that vendor's GPU and CPU as OpenCL devices.
To do anything on the device side, we need to have some representation of the device. SYCL provides a set of classes called selectors, which are used to choose platforms and devices. Here, we initialize a
default_selector, which uses heuristics to find the most performant device of any type in a common configuration. If there is an accelerator (GPU, FPGA, ..) available, it will most likely select that, otherwise it will select the CPU.
Initialize queue
sycl::queue queue(device_selector);
std::cout << "Running on "
          << queue.get_device().get_info<sycl::info::device::name>()
          << "\n";
After that, we initialize a queue with the device that the selector chooses. A SYCL queue encapsulates all states necessary for execution. This includes an OpenCL context, platform and device to be used.
Setup device storage
Execute kernel
queue.submit([&] (sycl::handler& cgh) {
   auto a_acc = a_sycl.get_access<sycl::access::mode::read>(cgh);
   auto b_acc = b_sycl.get_access<sycl::access::mode::read>(cgh);
   auto c_acc = c_sycl.get_access<sycl::access::mode::discard_write>(cgh);

   cgh.single_task<class vector_addition>([=] () {
      c_acc[0] = a_acc[0] + b_acc[0];
   });
});
This part is a little complicated, so let us go over it in more detail.
Note that the command group lambda captures by reference. This is fine, even though, as we will see later, kernel execution is asynchronous. The SYCL specification effectively guarantees that the host-side part of the command group will finish before the call to
submit exits - otherwise a referenced variable could be modified. It's only the device-side part that can continue executing afterwards.
In general, the lambda doesn't have to capture by reference - it could also capture by value. For SYCL objects in particular, this will be valid and have very little overhead. The specification requires that a copy of any SYCL object refers to the same underlying resource. Nevertheless, capturing by reference is recommended as better practice in this case to avoid unnecessary copies. Generally, it is a bad idea to try to move resources constructed within a command group out of the lambda scope, and the SYCL specification prevents it with move semantics. Remember that we passed ownership of our data to the buffer, so we can no longer use the
float4 objects, and accessors are the only way to access data in
buffer objects. The
buffer::get_access(handler&) method has two template parameters, the second one taking a default value.
The first is an access mode.
Finally, we submit a kernel function object to the command group. The kernel is code that will be executed on the device, and thus (hopefully) accelerated. There are a few ways to do this, and
single_task is the simplest - as the name suggests, the kernel is executed once. Note that the kernel lambda has to capture by value.
Inside the kernel, we perform vector addition. The accessor class overloads
operator[] (size_t i), which returns a reference to the i-th element in the buffer. Note that since our buffer has
float4 elements, the 0-th element is actually an entire vector rather than a single
float. The
vec class overloads various operators, in this case
operator+ for per-element addition.
One thing that stands out is the
class vector_addition template parameter. As is described in the CMake integration guide, the SYCL file has to be compiled with both the host and device compiler. We now know why - this bit of C++ code will be executed on an OpenCL device, so it needs to be compiled to machine code for that device.
The device compiler has to be able to find the C++ code that it needs, and a lambda expression doesn't have a well-defined name. For this reason, we need to supply a dummy class name as a template parameter. The class has to be unique per kernel. Here we forward declare it in the invocation. We will see later that we can avoid this by defining our own function objects.
In general, submitting a kernel is the last thing you should do inside a command group. You have to submit exactly one kernel per group (per
submit call).
Execution of most things in SYCL is asynchronous.
submit returns immediately and begins executing the command group afterwards. There is no guarantee as to when it will be finished - for this, we need explicit synchronization. Here, we do it the RAII way - we end the buffer scope. The specification guarantees that after the buffers are destroyed, all operations using them will have finished. They release ownership of the vectors back to the user. Under the hood, we can expect the SYCL runtime to wait for device operations to complete and a memory transfer to occur from device to host memory. While for the most part SYCL abstracts away manual memory management, it's still important to be aware of when and how memory transfers are executed. They are slow and often a bottleneck of accelerated applications, so it's best to try to do as few of them as possible. We will see how to do this in later sections.
Instead of relying on scopes, we could also create host-side accessors. These would force a synchronization and memory transfer back onto the host similarly to the buffer destructor, and choosing how to read memory back is up to the user.
Print results
std::cout << " A { " << a.x() << ", " << a.y() << ", " << a.z() << ", " << a.w() << " }\n" << "+ B { " << b.x() << ", " << b.y() << ", " << b.z() << ", " << b.w() << " }\n" << "------------------\n" << "= C { " << c.x() << ", " << c.y() << ", " << c.z() << ", " << c.w() << " }" << std::endl;
Finally, we print our results. The output on the machine that built this guide is:
Running on Intel(R) HD Graphics
  A { 1, 2, 3, 4 }
+ B { 4, 3, 2, 1 }
------------------
= C { 5, 5, 5, 5 }
|
https://developer.codeplay.com/products/computecpp/ce/guides/sycl-guide/hello-sycl
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
I am trying to print an integer in Python 2.6.1 with commas as thousands separators. For example, I want to show the number
1234567 as
1,234,567. How would I go about doing this? I have seen many examples on Google, but I am looking for the simplest practical way.
It does not need to be locale-specific to decide between periods and commas. I would prefer something as simple as reasonably possible.
'{:,}'.format(value)  # For Python ≥2.7
f'{value:,}'          # For Python ≥3.7
import locale
locale.setlocale(locale.LC_ALL, '')  # Use '' for auto, or force e.g. to 'en_US.UTF-8'
'{:n}'.format(value)  # For Python ≥2.7
f'{value:n}'          # For Python ≥3.7
Per Format Specification Mini-Language,
The
','option signals the use of a comma for a thousands separator. For a locale aware separator, use the
'n'integer presentation type instead.
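For instance, with the number from the question, the ',' option gives exactly the requested output:

```python
value = 1234567
# The ',' option inserts a comma every three digits.
assert '{:,}'.format(value) == '1,234,567'
```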
I got this to work:
>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'en_US')
'en_US'
>>> locale.format("%d", 1255000, grouping=True)
'1,255,000'
Sure, you don't need internationalization support, but it's clear, concise, and uses a built-in library.
P.S. That "%d" is the usual %-style formatter. You can have only one formatter, but it can be whatever you need in terms of field width and precision settings.
P.P.S. If you can't get
locale to work, I'd suggest a modified version of Mark's answer:
def intWithCommas(x):
    if type(x) not in [type(0), type(0L)]:
        raise TypeError("Parameter must be an integer.")
    if x < 0:
        return '-' + intWithCommas(-x)
    result = ''
    while x >= 1000:
        x, r = divmod(x, 1000)
        result = ",%03d%s" % (r, result)
    return "%d%s" % (x, result)
Recursion is useful for the negative case, but one recursion per comma seems a bit excessive to me.
|
https://pythonpedia.com/en/knowledge-base/1823058/how-to-print-number-with-commas-as-thousands-separators-
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
I'm trying to randomly generate coordinate transformations for a fitting routine I'm writing in python. I want to rotate my data (a bunch of [x,y,z] coordinates) about the origin, ideally using a bunch of randomly generated normal vectors I've already created to define planes -- I just want to shift each plane I've defined so that it lies in the z=0 plane.
Here's a snippet of my code that should take care of things once I have my transformation matrix. I'm just not sure how to get my transformation matrix from my normal vector and if I need something more complicated than numpy for this.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for projection='3d'
import numpy as np
import math
origin = np.array([35,35,35])
normal = np.array([np.random.uniform(-1,1),np.random.uniform(-1,1),np.random.uniform(0,1)])
mag = np.sqrt(np.sum(np.multiply(normal,normal)))  # magnitude requires a square root
normal = normal/mag
a = normal[0]
b = normal[1]
c = normal[2]
#I know this is not the right transformation matrix but I'm not sure what is...
#Looking for the steps that will take me from the normal vector to this transformation matrix
rotation = np.array([[a, 0, 0], [0, b, 0], [0, 0, c]])
#Here v would be a datapoint I'm trying to shift?
v=(test_x,test_y,test_z)
s = np.subtract(v,origin) #shift points in the plane so that the center of rotation is at the origin
so = np.dot(rotation,s) #apply the rotation about the origin (matrix product, not element-wise)
vo = np.add(so,origin) #shift again so the origin goes back to the desired center of rotation
x_new = vo[0]
y_new = vo[1]
z_new = vo[2]
fig = plt.figure(figsize=(9,9))
plt3d = fig.gca(projection='3d')
plt3d.scatter(x_new, y_new, z_new, s=50, c='g', edgecolor='none')
I think you have a wrong concept of rotation matrices. Rotation matrices define rotation of a certain angle and can not have diagonal structure.
If you imagine every rotation as a composition of a rotation around the X axis, then around the Y axis, then around the Z axis, you can build each matrix and compose the final rotation as product of matrices
R = Rz*Ry*Rx
Rotated_item = R*original_item
or
Rotated_item = np.dot(R, original_item)  # matrix product, not element-wise np.multiply
In this formula Rx is the first applied rotation.
Be aware that
How do you compose each single rotation matrix around one axis? See the standard formulas in Wikipedia's rotation matrix article. Numpy has everything you need.
Now you just have to define 3 angle values. Of course you can derive 3 angle values from a random normalized vector (a,b,c) as you write in your question, but rotation is a process that transforms a vector into another vector. Maybe you have to specify something like "I want to find the rotation R around the origin that transforms (0,0,1) into (a,b,c)". A completely different rotation R' is the one that transforms (1,0,0) into (a,b,c).
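The composition described in the answer can be sketched with numpy as follows. The angle values are arbitrary placeholders; the point is that each single-axis matrix follows the standard form and that applying a rotation is a matrix product:

```python
import numpy as np

# Standard single-axis rotation matrices (right-handed, angle in radians).
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# Compose: Rx is applied first, then Ry, then Rz.
R = rot_z(0.3) @ rot_y(0.2) @ rot_x(0.1)

v = np.array([0.0, 0.0, 1.0])
rotated = R @ v  # matrix product, NOT element-wise np.multiply
```

A quick sanity check on such a matrix: a proper rotation is orthogonal (R times its transpose is the identity) and preserves vector length.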
|
https://codedump.io/share/0M6wTZYD00FZ/1/coordinate-transformations-from-a-randomly-generated-normal-vector
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
VibrationDevice Class
Definition
public : sealed class VibrationDevice : IVibrationDevice
public sealed class VibrationDevice : IVibrationDevice
Public NotInheritable Class VibrationDevice Implements IVibrationDevice
// You can use this class in JavaScript.
- Attributes
Examples
You vibrate the phone by calling the Vibrate method of the VibrationDevice class.
- Import the Windows.Phone.Devices.Notification namespace.
using Windows.Phone.Devices.Notification;
- Get a reference to the vibration controller by calling the static GetDefault method of the VibrationDevice class.
VibrationDevice testVibrationDevice = VibrationDevice.GetDefault();
- Start the vibration by calling the Vibrate method of the VibrationDevice class. Specify the duration as a TimeSpan value.
testVibrationDevice.Vibrate(TimeSpan.FromSeconds(3));
- If necessary, stop the vibration by calling the Cancel method of the VibrationDevice class.
testVibrationDevice.Cancel();
Remarks
Windows Phone devices include a vibration controller. Your app can vibrate the phone for up to 5 seconds to notify the user of an important event.
Use the vibration feature in moderation. Do not rely on the vibration feature for critical notifications, because the user can disable vibration.
To test an app that uses the vibration controller effectively, you have to test it on a physical device. The emulator cannot simulate vibration and does not provide any audible or visual feedback that vibration is occurring.
An app that is running in the background cannot vibrate the phone. If your code tries to use vibration while the app is running in the background, nothing happens, but no exception is raised. If you want to vibrate the phone while your app is running in the background, you have to implement a toast notification.
Methods
Stops the vibration of the phone.
public : void Cancel()
public void Cancel()
Public Function Cancel() As void
// You can use this method in JavaScript.
Gets an instance of the VibrationDevice class.
public : static VibrationDevice GetDefault()
public static VibrationDevice GetDefault()
Public Static Function GetDefault() As VibrationDevice
// You can use this method in JavaScript.
The default VibrationDevice instance.
Vibrates the phone for the specified duration (from 0 to 5 seconds).
public : void Vibrate(TimeSpan duration)
public void Vibrate(TimeSpan duration)
Public Function Vibrate(duration As TimeSpan) As void
// You can use this method in JavaScript.
|
https://docs.microsoft.com/en-us/uwp/api/Windows.Phone.Devices.Notification.VibrationDevice
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
If you've done anything long term in the Web industry, it's likely that you will have come across "Base64 Encoding" at some point. Base64 is the encoding used by browsers when implementing the very simple username-and-password form of basic authentication.
If you ask anyone these days, however, for a serious point of view on using it for security, you'll likely get laughed at. Base64 is not encryption but a reversible encoding; anyone who has the encoded string can trivially reverse it and get the original text back out of it.
Okay, So if It's Rubbish, Why Am I Telling You about It?
Well, it turns out that Base64 encoding actually does still have one very good use. It's great for encoding complex binary files and data into a very simple textual representation that transmits exceptionally easy across text-based protocols such as HTTP.
Hang On. HTTP Allows Transmission of Binary, Right?
Yes it does, but let's imagine for a moment you wanted to try and save a few requests in your latest ASP.NET MVC project.
What you could do is to embed your images directly into your Web page and then transmit them all at the same time when delivering the original page.
If you're in any doubt about the validity of this scheme, take a look at the source for Google's current search page. You'll see something like this:
Figure 1: Google search uses Base64-encoded images
Pay close attention to the section in the red rectangle. That's the Google logo that displays front and center in the page when you load it. In fact, if you watch in the Web debugger when loading the page, you'll see that 100% of the time, the Google logo is loaded from the cache at high speed every time. Because the images are essentially loaded with the page, the display is instantaneous when your browser loads the view.
What you're looking at is something called a "base 64 encoded data URL." This is a fairly new thing that's been introduced in HTML5, and you can use it not only for images, but for CSS files and JavaScript. Anywhere in your HTML/MVC Views that you can specify any kind of URL, you can use a data URL.
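For reference, the general shape of a data URL (as defined in RFC 2397) is:

```
data:[<mediatype>][;base64],<data>
```

So a small PNG logo, for example, would appear as `data:image/png;base64,` followed by the encoded bytes, placed directly in an `<img>` tag's `src` attribute.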
The fun part about this is that .NET includes very easy to use routines for you to generate these Base64 strings. Because the strings are plain text, you also can easily send them using simple text transmission services such as SMS text messages on a mobile phone.
Fire up Visual Studio, and start yourself a new console mode project.
Make sure 'program.cs' looks as follows:
using System;
using System.Text;

namespace Builder_Pattern
{
   class Program
   {
      static void Main(string[] args)
      {
         string plainText = "Hello .NET Nuts and bolts";
         var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
         string encodedText = Convert.ToBase64String(plainTextBytes);
         Console.WriteLine("Plain Text : {0}", plainText);
         Console.WriteLine("Encoded Text : {0}", encodedText);
      }
   }
}
When you run this, you should see something like the following:
Figure 2: The output from out Base64 test
Decoding the string back is just as easy:
using System;
using System.Text;

namespace Builder_Pattern
{
   class Program
   {
      static void Main(string[] args)
      {
         string encodedText = "SGVsbG8gLk5FVCBOdXRzIGFuZCBib2x0cw==";
         var encodedTextBytes = Convert.FromBase64String(encodedText);
         string plainText = Encoding.UTF8.GetString(encodedTextBytes);
         Console.WriteLine("Encoded Text : {0}", encodedText);
         Console.WriteLine("Plain Text : {0}", plainText);
      }
   }
}
Which should output the following information:
Figure 3: The output from our Base64 decode test
If you look at the two code samples, you'll see that they operate on byte arrays. We convert the string to bytes, then encode it, and the decoder takes the string and produces an array of byte containing the decoded contents.
This means that encoding and decoding a file is as simple as this:
using System;
using System.IO;

namespace Builder_Pattern
{
   class Program
   {
      static void Main(string[] args)
      {
         var fileBytes = File.ReadAllBytes(@"d:\pele.jpg");
         string encodedFile = Convert.ToBase64String(fileBytes);
         Console.WriteLine("Base 64 Encoded File : {0}", encodedFile);
         var decodedFileBytes = Convert.FromBase64String(encodedFile);
         File.WriteAllBytes(@"d:\newfile.jpg", decodedFileBytes);
      }
   }
}
As you can see, however, it produces a *LOT* of data.
Figure 4: Beware; Base64 encoding will produce large outputs
The trade off here is that you get instant image loading in exchange for inflating your page size slightly. What you don't want to be doing is using this for massive detailed images.
Where it does shine is if your Web server is compressing output before delivering it. Because of the nature of Base64, it compresses exceptionally efficiently, so a dense amount of data like that will squash down to a very small size.
Once you've Base64 encoded your file, you'll then need to add the appropriate parts that make it a data URL your browser can understand. You can then push the data into your Razor view models using standard MVC output methods.
Describing all the different options a data URL can have is best left to the professionals, however.
Using this method for small images such as icons and logos in your UI makes it an effective technique. But, remember, it's also easy to abuse.
|
http://mobile.codeguru.com/columns/dotnet/base64-encoding-from-c.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
User:RAHB/Talk Archive)
Reap
From Sycamore:)— Sir Sycamore (talk) 12:38, 20 August 2008 (UTC)
Hey. Just wanted to let you know that Captcha isn't letting me do Pee Reviews or Submit my articles for them. Please fix this.--The Unread 21:58, 21 August 2008 (UTC)
- There's a problem with new accounts and capture or something. I've heard about it from Sannse, and some other guy had a problem with it. I can't fix it myself, but I'd ask sannse about it, since she's all "wikia staff" and stuff. -RAHB 22:10, 21 August 2008 (UTC)
UnSignpost: August 21st, 2008
Just like Grandma used to make!
August 21st, 2008 • Issue Sixteen • The periodical without any junk in its trunk
<insert>
<insert statement about how that forum topic is one of my all time favorites now> <insert statement that I'm not going to be on Uncyc as much as I want to for a little while> • <14:01, 24 Aug 2008>
- <insert statement highlighting the fact that nobody cares> -- Sir Mhaille
(talk to me)
- <insert statement regarding Mhaille's ego, and how he considers himself to be "so big"> • <15:13, 24 Aug 2008>
- <insert statement regarding P.a.n... AAAAA!!! Bu:15, Aug 24
- <insert statement inquiring if this would be a good place to whore my article? No Pun in ten did:18, Aug 24
- <insert statement regarding MrN's whoring capabilities> • <15:30, 24 Aug 2008>
- <insert statement about how you're all gonna have to clean up these damn open insert tags on my talk page> -RAHB 21:44, 24 August 2008 (UTC)
- </insert statement about how you're all gonna have to clean up these damn open insert tags on RAHBrN's whoring capabilities> - inquiring if this would be a good place to whore my article? No Pun in ten did you P.a.n... AAAAA!!! Bugger!> -haille's ego, and how he considers himself to be "so highlighting the fact that nobody cares> - that Cajek's not going to be on Uncyc as much as I want to for a little while></insert statement about how that forum topic is one of Cajek's all time favorites I think that's closed the lot for you, Mr Adminny RAHB type. No more broken code spilling across
My image on VFP needs the help that you say it does. It's just that, well... um... I'm not so good with photoshop. Any one you would know to do some touch ups? ~ Readmesoon
- Well, I could give it a shot if you'd like, but I've never been very good at manipulating text in images. You could ask about it at UN:PIC, or get some advice at Uncyclopedia:Reefer Desk. Be aware though, if you're looking for a feature on it, it would be best to ask for advice and do the touching ups yourself, otherwise the person who makes it better will probably get the credit. -RAHB 23:28, 24 August 2008 (UTC)
- Well, if I knew what all of the stuff does on photoshop, I could fix it myself, but I don't know that, so if that thing gets featured I have to learn quick or get someone to touch it up. I think I will get someone out there to do something for me. Perhaps you could give me a brief on what I should do?~ Readmesoon
- The first thing you should do is dump Photoshop. Maybe not, I mean a lot of the guys here seem to like it. I think Fireworks is a lot easier to use, but then again, Photoshop has a lot more "authentic" sort of effects and stuff in it. Anyways, the second thing is, just fiddle with it. That's really the only way to learn it. Just mess around with effects and see what works here and there. And if it doesn't work? Just undo it and start something else. The Reefer Desk may be able to give you more specific tips on what tools to use and all that other jazz. And quite honestly, I'm not that good of a photoshopper. I'm probably the wrong guy to ask. But The Reefer Desk is the right guy to ask. Or the right place rather. Modus responds to just about everything, and there are other shopping guys who like to comment on occasion as well, but there's some really good advice that goes on there. -RAHB 00:40, 25 August 2008 (UTC)
Off?
See you soon RAHB. Thanks for clearing out the crap before you left. Have a good:40, Aug 28
- Heh, thanks. And of course I had to check the QVFD one last time ;) -RAHB 08:41, 28 August 2008 (UTC)!!
Where'd you go???
We miss being RAHBed on a daily basis! ...come to think of it, where'd I go? • <14:11, 09 Sep 2008>
- What he said... where are ya? How's life? Updates? – Sir Skullthumper, MD (criticize • writings • SU&W) 01:33 Sep 14, 2008
- WE NEED OUR:21, Sep 17
- RAHB!!! Where'd you go? Feels like it's been forever since you've been gone. Please come back home. Not your actual home... Just any place with internet access. --00:09, 18 September 2008 (UTC)
- I've told you, RAHB will be released once my demands are)
Ugh, it's been an arduous process getting internet into my new apartment. I'm at my dad's house posting this message right now. I'll try to keep you guys as updated as possible. Also, woah, when the hell did you come back Skull? How is everybody, anyways? What's new in Uncyc-land that I've been missing? -RAHB 00:44, 20 September 2008 (UTC)
UnSignpost: September 11th, 2008
Just like Grandma used to make!
September 11th, 2008 • Nineteenth Issue • All your readers are belong to us
Hea!
I remember you! Welcome back. Hope all:44, Sep 20
- Unfortunately I'm not back yet =( I'm just updating some things and checking up while I'm at my dad's place. My apartment still doesn't have internet, so I'm not sure when I'm gonna really be "back." Hope it's soon though. I miss all you crazy uncyc guys. -RAHB 00:46, 20 September 2008 (UTC)
- We need our daily dose of:46, Sep 20
- RRRAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHBBBBB!!! That:54, Sep 20
Social gyroscope
Why did you huff this page? Necessary Evil 22:42, 27 September 2008 (UTC)
- Because the expand tag on it expired after a month of no edits. -RAHB 03:06, 28 September 2008 (UTC)
Talk Bubble
Would you mind undeleting it for me? I only put the redirect up on QVFD before Spang moved it back. Thanks. JudgeZarbi
TALK 12:13, 28 September 2008 (UTC)
- Ah. Thought there was something odd about the fact that it wasn't a redirect of any sort. It's back now =) -RAHB 12:19, 28 September 2008 (UTC)
- Thanks a lot, RAHB =) JudgeZarbi
TALK 12:21, 28 September 2008 (UTC)
why did you remove my page on how to destroy a pc using DOS??? That was a very wikipedia-ish thing to do!!! If I want my shit deleted, I will go to wikipedia instead!!
- I removed it because it wasn't funny. Take a look around the damn site. We don't like unfunny stuff. Wikipedia deletes things that aren't notable. They don't like unnotable stuff. There are plenty of places on the internet for you to dump whatever type of shit you like on them. Other sites have standards. This site has standards. Nowhere here does it say "we keep everything forever, no matter how stupid." Get an idea of what the site is before you start bitching about its rules. -RAHB 04:57, 29 September 2008 (UTC)
- For giving me my talk bubble back, and being nice. JudgeZarbi
TALK 20:10, 30 September 2008 (UTC)
UnNews
I've noticed that the "Featured UnNews" templates have been left unchanged for a long, long time. I'm not sure what the policy is on those, so is it cool if I start updating them? -:27, 29 September 2008 (UTC)
- Yeah, I was wondering about that. Zim ulator used to do it, and then I was assigned for a while after he disappeared. With all the new admin maintenance, I've been having less time to do it, and the last month I was without internet. Seeing as you're a good, respectable user, and a prolific UnNews contributor, I have no problem with you taking over the reins. In fact, I feel rather good about it, so by all means, go on ahead. If anybody gives you any trouble about it, tell them I said it was OK. Let me know if you have any questions about it. -RAHB 22:32, 29 September 2008 (UTC)
- /me polishes RAHB's shoulders...:37, Sep 29
huff
Hi, sorry I wasn't careful with the double redirect bit - how does it occur or where to read about:13, 2 October 2008 (UTC)
- Oh, no worries. A double redirect is when one page redirects to another page that is also a redirect. It most commonly happens when you move a page twice, which is what happened in your case. You see, when you move a page, the old page becomes a redirect to the new page. So when the new page is moved, you end up having a redirect to a redirect. It's fine though, there's a special page I go to a few times a day to look for double redirects, and I can just delete the original redirect at the click of a button. If you know of a double redirect you created, simply list it on QVFD, and somebody will come along and delete it eventually. Hope all that helps. -RAHB 07:30, 2 October 2008 (UTC)
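To picture it (the page names here are made up for the example), say "Foo" was moved to "Bar", and "Bar" was then moved to "Baz". The two leftover pages would contain:

```wikitext
<!-- Page "Foo" (the original title, now a redirect): -->
#REDIRECT [[Bar]]

<!-- Page "Bar" (the first move target, itself now a redirect): -->
#REDIRECT [[Baz]]
```

Anyone visiting "Foo" only gets forwarded one hop, to "Bar", and stops there instead of reaching "Baz"; the cleanup is to point "Foo" straight at [[Baz]], or just delete it.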
Chinese empire
um, how am I supposed to improve an article that's huffed? At least restore it for a short while, enough to dump its contents on my userpage so I can write it up later. ㄏㄨㄤㄉㄧ 03:53, 3 October 2008 (UTC)
Image:King Penis.jpg
I replied with my reason at Image talk:King Penis.jpg. Please read and discuss if you still have some questions. --Brandy Frisky 18:50, 3 October 2008 (UTC)
UnSignpost: October 3rd, 2008
Just like Grandma used to make!
October 2nd, 2008 • ALL-KITTEN ISSUE • Your #1 source for Cajek ban jokes!
You're back for good!
We need to do all those things we talked about back in our college days! Oh man: my brain's a-buzzin'! • <5:14, 06 Oct 2008>
- Oh yes sir, man. Tomorrow and Tuesday are very busy for me, but I've got all day Wednesday and Thursday, and probably Friday to just do whatever. You can guarantee some good stuff happening. And I happen to have just picked up these SPECTACULAR audio headphones as part of my starting kit here at the school. So I'll be up to the task of all those audios. My creative energy is up in this great environment, so writing is coming back too. What should we get started on first? -RAHB 05:21, 6 October 2008 (UTC)
- Start, in your userspace I guess, the "neighborhood" one! Oh man: I can't wait to see the look on the world's face when we unleash a Cajek-RAHB collab! Remember: your old neighborhood had annual animal-fucking festivals, I believe. We need to mention that somewhere in there... Oh, and Polar Express is up for some kind of "vote for highlight" or something, so if you wanted to do the audio for that, now's the time my friend! • <5:28, 06 Oct 2008>
- Awesome. I'll try to fit some audio time in tomorrow if I can. And all the Old Neighborhood information can be found right here. Woo! -RAHB 05:36, 6 October 2008 (UTC)
Just poppin' by!
Hey RAHB, long time no see. That was my fault of course, I've just been lurking around. Anyway, I intend to write a couple articles. Would you mind helping me? You see, I want to write UnNews: There's a pimple on my ass. It's just, I have no clue how to start it. Would you help me? The basic idea of the story would be thus: In this time of constant news, a completely uninformed journalist can find nothing better to write about than the blemish on his arse. The whole article could be about him assessing the situation. He might even request an interview with his doctor about what he thinks it is: a wart, pimple, tumor, mole, chigger, mosquito bite, or whatever. He could go into gruesome detail about its appearance. What do you think? Should I go through with it? And how should I start it? Will someone steal my idea and beat me to the punch? Only RAHB knows. :D --Liz muffin 02:17, 8 October 2008 (UTC)
- Glad to see you back thinking about writing again Liz. The premise sounds very similar to a lot of "I'm doing something normal" style UnNewses we get around here. However, I could definitely help you take it to the next level. But it'll have to be tomorrow (Wednesday, in Pacific time). But you can definitely count on my assistance. Glad to see you back =D -RAHB 02:53, 8 October 2008 (UTC)
Alright, but could we make it, THIS Wednesday? I kinda spaced... oops. I'm sorry. I'll try to actually be there this time... I need to keep up with stuff better. You know what, I'll go ahead and start, and then you can just jack it up, a lot. XD --Liz muffin 02:46, 15 October 2008 (UTC)
You know what, suck that, I'll just wait on you for your omniscient help. Also, congrats on your article about The Polar Express. I loved it, because in my opinion, it's the worst childrens' book in the history of humanity. --Liz muffin 03:32, 15 October 2008 (UTC)
- Well, the Polar Express was Cajek's article... Anyways, yes, I am here now. So...let's get started? -RAHB 19:21, 15 October 2008 (UTC)
- Indeed! How would you say is a good way to start this kind of article, I've never done UnNews at all. But you would know that, wouldn't you. I mean, you've overseen me since day 3, I think. Have you any examples of a "doing something normal" article? Or is that just an insult that means this article is not worth spending time on? If that's the case, I'll just find a new concept, or go back to the gruesome howto on breeding rats in your bloodstream? I always planned to do that one. Anyway, I figure I could go one way or another. Something really crazy and demented, or something that's likely to turn out stupid, and not really funny. Also, I could have sworn you at least semi-sort of co-author helped or whatever on Polar Express. Ah, well. --Liz muffin 20:48, 15 October 2008 (UTC) -Another question, how do people make these godly signatures?
- Well, the concept hardly matters on an article as long as you execute it right. There are several exceptions to that rule, but that's still how I see it. It most closely reminded me of UnNews:My balls itch, a very similar concept. Whatever you feel is the right way to go with it, do it I say. That's what it's all about. As far as the Rats article, I'd be very excited to see that one. That's definitely one you should do eventually, regardless of whether this UnNews is written or not.
- The Polar Express article was all Cajek, but you may be thinking of a conversation he and I were having about it a while ago where I was going to do some audio for it. I still haven't gotten around to that yet, but one of these days I plan to. As for your signature, I can help you with that too, if you'd like. Just a little fiddling with code and subpages, and a tick in the preferences area. So your call from here. Type "down" to go south. Type "up" to go north. Type "UnNews" to write an UnNews. Type "Rats" to write about rats. Type "Sig" to get a signature. Type "get" to search for treasure. Type "kill" to commit suicide. -RAHB 21:05, 15 October 2008 (UTC)
- This is a toughy, but I think I'm going to have to go with... UnNews. Final Answer. Yes Regis, I FUCKING MEAN IT!!!! THIS IS MY FINAL FUCKING ANSWER!... *dammit*. After I get some positive/negative/homicidal feedback, I think I'll move on to Rats. I'm going to attempt to get a start on that today, I'll place it somewhere in my page, heck, I'll just put it right on there, no one ever does anything to anything I've done, except the bastard who stuck a lame end on my sex story, and it went something like this, "also ima girl. but don't ask you don't wanna know". *sigh* Oh, well. It'll only be there a few days, 'till I submit it. Now, I dunno much about how UnNews works, 'cept just how to submit it on the UnNews sub-site, and how to read it. If you wanna give me a clue, that'd be great. Also, if you'd like to explain how to make attractive and awesomish Signatures, in a nutshell, that'd be nice. Also, oh Oz, if you could send me home, that'd be much appreciated.--Liz muffin 04:27, 17 October 2008 (UTC) - ewww! So fugly plain.
- You've just won 32,000 dollars! And you will now not leave here with any less than that. For the rats article, you know you can make userspace articles. To do that, just do User:Liz muffin/Article title, and it'll create the thing in a subpage of your own username. In userspace, you can do whatever the hell you want, so you don't have to worry about deadlines, or immediately high quality or anything. You can just build it up over time, and it doesn't take up your userpage. Also yeah, just revert any stupid stuff people put on pages like that. I get those showing up on my watchlist a lot. Some articles more than others. UnNews basically works the same way any article does. If you know how to create it, you can copy the format and UnNews templates from any other UnNews article, and then pretty much just write it in the style of any news report, though there are any number of variations you could put on it. Also, when you create a new UnNews page with the little tool on the UnNews main page, it should come stock with some comments in the page that tell you the basic format. That's what helped me early on. That and Zim's welcome template, but he's been gone it seems. I should really start distributing that for him. That's probably why UnNews has been kind of slow lately actually. Damn...anyways, that was me rambling. You can start learning about signatures at UN:SIG, and I can help you further once you get the beginning of it all set up. In conclusion, click your heels together three times and chant "there's no place like home. There's no place like home." -RAHB 06:36, 17 October 2008 (UTC)
I don't hate you
And for this, you have achieved far more than any user on here in a while. Have a hippie. Ж Kalir hippies! yay! 04:30, 9 October 2008 (UTC)
RAHB!!!!
Thankee for the MESSAGE!!! But, mine is bigger :D. What's going on now-a-days?!) 17:27, 9 October 2008 (UTC)
- Ah, no fair =( yours is green too.... What's going on? What's going on?! WHAT'S GOING ON?!! I'm at this awesome art school right now where I'm majoring in audio production, in this awesome school-sponsored luxury apartment, with awesome roommates and awesome teachers in my courses who have won awesome awards in the field, like grammys and emmys, and in three years I'm gonna be awesome like them! That's what's going on! What about you? -RAHB 17:40, 9 October 2008 (UTC)
- Yes, I win :D. That sounds...awesome! I'm happy for you! I'm uh..late for class. This message was poorly planned...I'll write again later!!) 13:13, 13 October 2008 (UTC)
Unsignpost: October 10th 2008
Just like Grandma used to make!
October 9th, 2008 • Twenty-First Issue • Bursting with Crunchy Goodness!
Frank Zappa
Is this week's colonization. Go forth my
penis son! ~
10:57, 12 October 2008 (UTC)
Get writing!
Let's do this... -RAHB 17:59, 12 October 2008 (UTC)
Michelangelo Antonioni
Why was my article on Michelangelo Antonioni deleted? For the informed reader it was funnier than a bag full of reptiles wearing pants. ~ User:NathAnonymous 7:07, 15 October 2008 (UTC)
- First of all, why the hell did you post this comment halfway up my talk page? Bizarre. Second of all, it had an expand tag on it, I'm not sure if you noticed that. But the tag has long expired and it was my job to clean out the category. If you'd like it back, I can restore it to your userspace, or I can restore it to mainspace with a construction tag. It just needs to be lengthened before it can stand alone. -RAHB 00:32, 16 October 2008 (UTC)
- ~ Yeah, I don't use the site enough to have known, I just went for the one that said Talk Bubble. Made sense at the time. I did not notice the Expand Tag, I only got an email today saying it had been modified & upon checking it found it deleted. A construction tag would do nicely. I'll try to find some thyme during work to whip up a nice soufflé de la funny. ~ User:NathAnonymous 7:47, 15 October 2008 (UTC)
Template:countryname
I am the author of Template:countryname. I have to correct some mistakes in it. Please unlock it for me; I will ask you to re-lock it later. Thank you!--Hant (Talk) - For your safety, China Free! - 00:55, 16 October 2008 (UTC)
- Modusoperandi has done it for me. Thank you!--Hant (Talk) - For your safety, China Free! - 01:34, 16 October 2008 (UTC)
sorry...new here
you deleted my "apocrypha discordia"...it begins w/ "children of militant enlightenment"...i would like it back for personal reasons & it is unfinished. please. why did u delete it?
can u email it back...')
English bill of rights
I think it was a bit shitty of you to delete my stuff on English William of Rights but I suppose that's the way Uncyclopedia works. MollyTheCat 21:02, 16 October 2008 (UTC)
- What I delete is nothing personal, and it's not to be a bitch. I deleted it because there was a maintenance tag on it, and the maintenance tag expired without any edits. That's my job. It's what I do. However, if you'd like it back, I could restore it to your userspace for you. -RAHB 21:06, 16 October 2008 (UTC)
- And he doesn't even get paid. Not even any medical benefits. Really, I'm the site's doctor, and all I have at my disposal is a knife, a bunch of used tubing, a rather large hammer, and several barrels of alcohol. Use your imagination. – Sir Skullthumper, MD (criticize • writings • SU&W) 21:09 Oct 16, 2008
I seek help, wise one
Hey RAHB, I'm stuck. I
need am being forced to want to write articles on Harpoons and Generic Kung-fu Noises, but I'm at a loss for what to say. Can you:50, 17 October 2008 (UTC)
- Uh, what exactly is the idea you've got going with it? Kung-Fu parodies are always hilarious. Why don't you get a page started in your userspace and I'll help you out with what you need from there. -RAHB 22:54, 17 October 2008 (UTC)
Shut-up and go to rehab! --Jim10271949 02:51, 20 October 2008 (UTC)
- Believe me, I've tried. They say even the most powerful medicines in the world can't help me now. -RAHB 03:12, 20 October 2008 (UTC)
UnSignpost: 21 October 2008
Just like Grandma used to make!
October 16th, 2008 • Twenty-Second Issue • Now with 40% more Batman!
Thoughts required
So yeah, while the muse was strong with me last night, I had a vague idea, which today became this. Two quick questions: 1. Is it shit? 2. If it's not shit, as an UnTunes expert, any ideas who might be up for recording it? (In the style of Copacabana by the legend that is Barry Manilow, in case you somehow missed it). Alternatively, 2. If it is shit, what is your preferred candy:22, Oct 21
- It's a satire on people stupid enough to vote against him because of his name alone, right? I find that bit clever. The lyrics themselves could probably be spiced up a bit. As is, I can see their purpose, but they don't do well enough in making the concept flow. I wouldn't call myself an UnTunes expert, but I think it would be pretty fun to do it Copacabana style. I might give it a shot during the week, if I get the time. I suppose with a tune, there's a lot of funny that could be added in the actual execution, which is why it sounds appealing to me to make it. So yeah, I'd say give the lyrics a revise, maybe get some sort of instrumental playing in the background when you write so you can further visualize the flow of the song. Other than that, my favorite candy bar is the Nestle Crunch bar. -RAHB 16:22, 21 October 2008 (UTC)
- Cool. I did have a couple of listens to the song to try and get the flow, but was at work, so was a tad tricky. May well have a stab at improving the flow shortly. And it's just a general dig at the kind of reasons that are filtering through the news over here about why people are gonna vote against him. Although that said, I think a little white-haired old granny on the news the other day came out with most of them herself. Gotta love her. Anyway, I just liked the idea of a little bigotry set to a jaunty Manilow piano number - I think you're right, the best fun would probably come from the execution. If you wanna give it a shot, go for it, if not, hey, it was fun to mess around with. Gonna shift it to the UnTunes space now. And have you ever encountered such a thing as a Twix:31, Oct 21
God, I hate you.
First, you take my article on tits and you move it into my userspace. And I'm all like, fine, RAHB. Fine. Move my article into my userspace. All that does is give it a funky new namespace. And funky namespaces make me do funky dances. We got a funky new President. With all the basketball and shit.
But then you take my article on peas and you slap an OM NOM NOM NOM NOM template on it. Why, RAHB? What did I ever do to you that makes you so inclined to bend me over at the waist and assfuck me repeatedly with your seventeen inch unlubricated thumb?
You make me cry, RAHB. Last week, I went to an enchanted forest, and all the tiny woodland creatures were crying, and I asked them why, and they ran away. But if they had been able to speak, they would have said "RAAAAAHHHHHBBB." Because you know all those glittery magical squirrels, RAHB? They cry for you. Well, not for you, so much as because of the fact that you're ass-raping them with your seventeen inch unlubricated thumb.
RAHB, I only want the best for you. I'm hoping that one day your current girlfriend or wife moves to Bolivia and they send you a Bolivian supermodel in return. Or boyfriend, if you swing that way. The point is, I hope you're inundated with years of undeserved mind-blowing sex. Sex all the time, RAHB. I'm hoping that soon you'll barely be able to left-click without having sex with a Bolivian supermodel.
So why don't you want the same things for me? Why have you got to make it your life's mission to identify whatever parade I might be in, and climb to the top of a tall building, and shit on it?
Is it because I once impersonated a gorgeous redhead and asked Mordillo to ban you for your persistent vandalism to my articles? Because, if I could point out a mitigating circumstance, Mordillo didn't buy it. Also, Mordillo rejected that gorgeous redhead on account of her name, so you should probably know that she's still available. And she's just waiting for you, all spread-eagle, right now, in the magical land of DOESN'T FUCKING ADD TEMPLATES TO INEBRIATED'S ARTICLES, if you want to go there and claim her.
You'll like it there, RAHB. I have full faith and credit in you, not unlike a clause.
In conclusions: brb
- Being that that message was absolutely hilarious, and that I have absolutely no way to continue that trend in this talk page section without looking like the less funny of the two, I'll just play it straight. Unless of course we're referring to my sexuality. I don't play straight with that. Because...you know, that's how I really am. So one hot Bolivian supermodel, yes.
- Yeah, I have a confession to make. And that is, I don't look at the writers of articles before I tag them. I did however look at yours and say "hey now, from what I can see skimming over this, it might be an OK article, I don't really know, because I don't pay attention to things like that. It just needs some expansion." But hey man, I've seen your funny shine before, again and again on here, in your mysterious, slightly short ways, and how you use every square inch of text appropriately for those undertakings. If the article is complete, go ahead and take the tag off. And being that I continue not to read who the author of these articles are, feel free to do the same with any article of yours I tag in the future. And if somebody else says "WTF NOOB! U TAEK OFF TAG?!", let them know I said "Inebriated is a shining beacon of comedic prowess, and has had the right bestowed upon him to take tags off his short, stubby, hilarious articles." If you say it word for word, you get a free chocolate dipped ice cream when you buy one of equal or greater value. -RAHB 18:18, 23 October 2008 (UTC)
- Thank you, anonymous spam reporter! Your efforts make the world a better place! -RAHB 19:05, 23 October 2008 (UTC)
Feature Curiosity
So when it comes to featuring stuff, how do you choose between two tied articles for feature? I kind of thought it was the one that was older, but am I. 20:40, 23 October 2008 (UTC)
- I think I usually choose the one with higher health, myself. The old way was to choose the oldest nomination, but I think we stopped that with the whole VFH Health thing. Or maybe not. But that's what I do. I think. Whichever one comes up first when I click the score sorting button on the VFH template really. -RAHB 20:46, 23 October 2008 (UTC)
Your post
Nice one, I always knew there was a reason I liked
you your penis. ~
21:52, 24 October 2008 (UTC)
- I think the words you used last time were "meaty" and "very well-developed", but the post is a good one too. -RAHB 22:27, 24 October 2008 (UTC)
lolwhut
See
You see how srs biz the only editor is? Naught very srs, amirite?
Also it was lolempty. — Sir Manticore
13:57, 26 October 2008 (UTC)
Sainsbury's
Thanks for not deleting it, something they do lots round here. Nothing like good ol' Illogicopedia!--Rabies Turtle 17:39, 27 October 2008 (UTC)
Thanks for the vote
From--Sycamore (Talk) 17:25, 1 November 2008 (UTC)
- Word, Syc. And thank you for being an awesome Uncyc contributor. The site runs that much smoother when we've got guys like you in the shadows, doing that voodoo that you do. -RAHB 21:31, 1 November 2008 (UTC)
Can I Block this Jackass IP -------->86.29.240.79
He was vandalizing other pages and removing content from the wiki. I gave him a damn warning, but this dickhead won't stop. NOW HE'S REALLY STARTING TO PISS ME OFF!! But I really don't know how to ban this asshole....can you tell me how? If not, then does an administrator have to????? I don't Know! SOME ONE TELL ME!!!!--BlackSugaBabyGurl 22:52, 1 November 2008 (UTC)
Star Wars: TFU
So can I move it back out if I take the construction tag off? I'm done with it. I also wanted to know how you did that. Like, how do you move a page into your userspace? --GDawg816 18:13, 3 November 2008 (UTC)
- Oh, no problem. Yeah, if it's finished, just go ahead and move it out. I just always move them to userspace because they show up on a maintenance list, and I don't want works in progress to be deleted. There's a button at the top of every page, called "move." Clicking that takes you to a page where you retitle the page you are moving. So, to move things to userspace, you add the "username/" prefix. For you to move it back to main, all you have to do is remove said prefix, and it will be back where it was before. Hope that helps, and happy uncyc-ing! -RAHB 05:14, 4 November 2008 (UTC)
- So do I put my username (GDawg816) or just /username? --GDawg816 18:52, 4 November 2008 (UTC)
- To move something to userspace, it's GDawg816/articlename. To move it to mainspace, it's just the article name. -RAHB 08:52, 5 November 2008 (UTC):10, Nov 6
SPERM GEYSERS
yeah... um I wrote an article and it's actually good this time, I think. So since you like adopted me and shtuff you should read it and then
chastise compliment me. So yeah thanks ---)
- Oh absolutely. I'll give it a look, what's it called? -RAHB 06:54, 7 November 2008 (UTC)
- That's strange I thought I linked it... oh whatever. Aha ha---)
IRC
I'm not sure if IRC is broken, maybe, but it keeps telling me that I'm apparently banned from the #uncyclopedia channel. I don't remember getting banned in the first place, so yeah. I'm confused. -:59, 12 November 2008 (UTC)
- I'm equally confused. Nobody in IRC will tell me what's up. But apparently something fucked up in the ban log, I think. Working on fixing it hopefully. -RAHB 22:01, 12 November 2008 (UTC)
Cleveland Steamers
Hey! I wrote the article for this, and you recently put it in the ICU (I'm guessing because it was rather short at the time). I lengthened it to a point where I think it's okay, and the parts that I think are just kind of "eh" I plan on working on the next few days (while adding stuff).
I was wondering if you could look at it and maybe take it off? Or if there's something wrong with the quality, let me know what's wrong? Anyways... I thought I'd get to you on that, thanks :-) -
Prof. Ahh(to the)Diddums[FUCK-A-DOODLE-DOO!] 04:25, 13 November 2008 (UTC)
- Oh yeah, looks a little more fleshed out to me. But if you are still working on it, I recommend putting a construction tag on it, to let people know it's still in progress. Otherwise, it's still a little short, I would normally put an expand tag on it. I guess I'll leave that bit up to you, but I'll remove the ICU for now. Thanks for getting to me on it. -RAHB 04:44, 13 November 2008 (UTC)
EUREKA!!
So, once again, I wandered away from the Uncyclopedians for a while, but never fear I will be back, just as I always will. I waited a month and twiddled my thumbs over the computer until, as they say on 4chan, "I CHARGED MY LAZERS! BWWAAAAAAAAAAAAAAAAAAH!" and finally, God has blessed me with a gift. Not just A gift, really, but THE gift. Yes, the gift of the intro to INJECTING RATS INTO YOUR BLOODSTREAM!!!! You can check out my progress over the next couple of days right in User:Liz_muffin/The_Workshop -- I think. It should only be a few days until something good happens :D Wish me luck! --Liz muffin 05:23, 13 November 2008 (UTC)
- Heh, I kind of figured this was coming, I saw you pop up on my watchlist. Well, it's good to see you're writing it, and you bet I'll be watching in the coming days. If you need any writing advice, you know where to come =D -RAHB 05:31, 13 November 2008 (UTC)
- Okay, I'm officially out of juice, can you help? What should be the next step?--Liz muffin 14:37, 18 November 2008 (UTC)
UnSignpost: 13th November 2008
Just like Grandma used to make!
November 13th, 2008 • Issue 24 • So close to journalism you'll be hard pushed to know the difference!
MrN9001 12:52, 13 November 2008 (UTC)
Stella Artois
I have reviewed the beginners' guide and still don't understand why my article has been deleted. Stella Artois is Europe's most popular beer, I think it deserves an article plus it was fucking hilarious (not if you're American though... you probably wouldn't get it). Is there any way of recovering what was deleted? --Baina 18:30, 13 November 2008 (UTC)
- Well, from what I remember, the main reason I deleted it was it sounded like any generic article we get about anything around here. The main point of your article was that "Stella Artois sucks", for the most part, and as far as anyone is concerned, it sounds more like a grudge or just some guy bitching about how a beer sucks. Same thing happens with schools and cities very often, so I usually just get rid of the articles. But if you'd like to work on it further, I can restore it for you with a construction tag and let you have another shot at it. Let me know. -RAHB 21:20, 13 November 2008 (UTC)
Ok, I agree with what you say about how the article was written, but it still needs an article. If you can restore it with the construction tag, I'll re-write it. Let me know on my talk page when it is back up. Thanks.--Baina 19:06, 15 November 2008 (UTC)
UnSignpost: 20th November 2008
Just like Grandma used to make!
November 20th, 2008 • #100/4 • Sucking Journalism's Fat Wang. Badly.
Turkey Talk
So, RAHB, in my guise as fearless, penetrative reporter for that mighty organ The UnSignpost, I'm thinking of including an article about the Aristocrat's Turkey Day Ball '08. As one of the perpetrators of said event, can I ask you:
- Do you have any comments
that I can take out of context for comedic effect for our devoted readership?
- Or, if you prefer, do you want to write said article, to avoid
me taking massive editorial liberties with the entire thing any chance of error creeping into our completely unbiased journalism?
- If I think of a third question, would you answer it?
Pipp:57, Nov 24
- Yes, yes, and yes. When is the new signpost to be delivered? -RAHB 16:36, 24 November 2008 (UTC)
- I aim for some point during th 24
"The 3-person judging panel shall individually score each entry using a specified 1-to-10 scoring template not unlike the one used for Pee Review. The elements in question shall include..."
- I don't know if it helps, but I've never done it like that. Also, I'm an insufferable old coot. Sir Modusoperandi Boinc! 00:56, 26 November 2008 (UTC)
- Well, are you opposed ethically to it or something, or are you just looking to familiarize yourself with it? You've judged a lot of things before, so I don't really have a problem with you doing it whatever way you feel most comfortable with. But if you're curious, the template that's suggested to be used can be found here. Ya damn old geezer. -RAHB 01:00, 26 November 2008 (UTC)
- The Pee Review template took all the fun out of peeing for me. This does the same for judging, as I'm a wrath-filled and zealous judge. Rawr! Or, to put it another way: I read all the pages, then read them again, and again, and again. Eventually, they sort themselves into order. It's really quite magical, like unicorns ordering themselves by adorability. Sir Modusoperandi Boinc! 01:30, 26 November 2008 (UTC)
- That's fascinating. -RAHB 01:36, 26 November 2008 (UTC)
- I know. My memoires are riddled with mind expanding shit like that. Sir Modusoperandi Boinc! 01:44, 26 November 2008 (UTC)
- So... It being thursday, any sign of said article, or do I just make something:59, Nov 27
- I've been so damn busy. Sorry I didn't get back to you on that, but if you could improv up a little piece, that'd be much appreciated. If you need any quotes to mangle into pseudo-biased semi-truths, I'll gladly give some up for your editing machines. -RAHB 09:08, 27 November 2008 (UTC)
- Fair enough. While I do so, can you drop the banstick on Assss (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) please?:11, Nov 27
Whoo!
- 01:12, 26 November 2008 (UTC)
Thanks for the welcome but...
Is my sig that bothersome?--Metalhead94 T C
- It's not as much that your sig in particular bothers me in particular. It's that signatures more than fifteen pixels high mess with page formatting, and do bother some users. The uncyc policy is that signatures shouldn't be higher than fifteen pixels, for the sake of civility amongst users in the long run. It's nothing personal, just a policy. -RAHB 01:24, 26 November 2008 (UTC)
- Yeah, I get it.--Metalhead94 T C
Is this better?--Metalhead94 T C
- Still looks a little big. I'd really just stay away from changing font sizes in your sig, I think the default is 15 (though I could be wrong about that). Also, if you'd like an easier way to sign pages, you can create User:Metalhead94/sig and paste your sig code in it, then go to your preferences and paste "{{User:Metalhead94/sig}}" into the signature box, without the quotes, then check the raw signature box. Then all you have to do to sign pages is put -~~~~ and it'll paste your code and a timestamp every time. -RAHB 01:34, 26 November 2008 (UTC)
Camred333
Dude why did you delete my article its protected by the ignorable policy wtf cmon! —The preceding unsigned comment was added by Camred333 (talk • contribs)
- 1. New stuff on bottom, as it says up there at the top. 2. The title of the article, or a link to where it was would probably help - admins delete a lot of stuff, it's not personal, so they don't remember every deletion off the top of their heads. 3. Which "ignorable" policy would that be? 4. Who changed the channel? I was watching, Nov 26
- I deleted your article because it was a shitty stub about some teacher or something, from what I can tell. We don't allow vanity articles here, and hence I mercilessly huffed it. AND I'D DO IT AGAIN TOO! And God dammit, UU! We SHARE the remote! I've told you a million times, that TV does not belong to you! -RAHB 23:11, 26 November 2008 (UTC)
Why Did You Huff My Article?
Hello, sorry to bother you, but I'd like to know why you huffed my article. The only reason I can currently think of is the repetition which was in the beginning. I've already read the n00b article, so I don't need to be linked to it. Reply when you get a chance to do so. -- User:Hroþgard 7:50 AM, 2 November 2008 (GMT-8)
- It seems I deleted your article because it looked like a lot of gibberish to me. But, since you were so civil about the situation, unlike many of the complaints I get around here, I have no problem restoring it for you if you'd like. Of course, it is a little sloppy, formatting wise, so I'd suggest putting a construction tag on it. Let me know what you think. -RAHB 23:22, 26 November 2008 (UTC)
- I would like for you to bring it back. The Rotokas language is, in fact, a real language; however, it's spoken by humans (obviously) and it's spoken on some Pacific island somewhere. Most people don't know of its existence. I'll add the construction template onto it. Also, what noticeable formatting problems were there? I'd repair them if I knew what they were.
Why are you being of delete article?
Halo, my article is great. Your face is not great. In fact, it is very bad. And for deleting mine article, your face is of being very VERY bad. Please say word so I can spit. Your face. I want answer. sir
sysrq @ 20:54 Nov 26
- Banned -RAHB 23:23, 26 November 2008 (UTC)
No seriously, you suck
Yu get so meny msgs, can only meanz you be shitty adminz. fuck off noob. ktxbi. overlordofyourpenis666
You suk bad vandil!
Yo ar teh worst vandol I's evar seen why you delet my pages I'm write good bobby says its funy why you delet my pages you suck and shud dy! Just hopping on the bandwagon 21:37, 26 November 2008 (UTC)
deletion?
hay wher ddi my page on poop sex go????? is was good poop sex page!!! why u so mean 2 me????? --epoirubn
penis face
thats wut u r, cuz u suk so much dikkKK!~!!!! yea!!!11 stop huffin mah shit!!! stigmatizedvampireguuuurrrllll
On a serious note
Griebel Stomp just kicked my ass. =D sir
sysrq @ 23:44 Nov 26
- Gasp! U heerz mah musak?! OMG! Aye Roxxorz, dont I?!! -RAHB 23:47, 26 November 2008 (UTC)
- Seriously though, many thanks. I appreciate it. -RAHB 23:47, 26 November 2008 (UTC)
- But I still rock too. -RAHB 23:47, 26 November 2008 (UTC)
- Shut up. No you don't -RAHB 23:47, 26 November 2008 (UTC)
- Who asked you, anyway? -RAHB 23:47, 26 November 2008 (UTC)
- Yeah. I was going to go vandalize your page and I found a link to your musicks. I listened, I rocked out, I cocked out. I added you on Last.fm. (go look up my artist page on Last.fm plz haha) Oh shit, I forgot to finish vandalizing your userpage. Be right back. sir
sysrq @ 23:52 Nov 26
- Sweet, I'll take a look/listen. I've been meaning to actually get something put on Last.fm, but I keep slacking off on that. Anyways, if you're interested, I have more music here. And now that I think of it, I should add that to my external links on my user page. And now I'll listen to your stuff. -RAHB 00:00, 27 November 2008 (UTC)
- I shall indeed listen. Looks like my cock and I are gonna do some more rockin. Out. sir
sysrq @ 00:14 Nov 27
- (FU Edit Conflict!) Every Light In This World is one of the most beautiful things I've ever heard. If you don't get into audio for a career, I will find you and suck the talent you have out of your brain and use it for my own purposes. -RAHB 00:15, 27 November 2008 (UTC)
- Wow, really? And yeah, everyone likes that song, including me. I've got a TON of new songs in the works that haven't been mastered but are pretty much finished; those will be going up soon. Thanks for listening. Also, do this, Mr. Admin. EDIT: My audition for getting into the school of music at Baylor University is on January 24th. So yes, I'm pursuing audio as a career. (Composition, specifically.) sir
sysrq @ 00:41 Nov 27
- (FU another edit conflit) Oh yeah. I mean, I have a particular soft spot for ambient music, something about the way it can integrate so well in with real life functions. Like, I can be sitting here listening to a rock group, and it'll be cool because I like the song or whatever, but it doesn't mesh with my surroundings (which coincidentally are often dark and somber, for whatever reason) as well as ambient stuff. Yours is produced unbelievably well for someone who's been at it for such a short amount of time. And you've got the ability in these songs to make them emotionally touching. That's one of two areas I strive for in my own music, though I've never actually touched upon. As you can probably tell from my stuff, the redeeming quality is mostly in the fact that everything is weird and undefinable, and not that it's particularly sonically comforting. And yours I find to be very sonically beautiful, even though I know there are other artists who make similar stuff. I also get exposed to the genre a lot, living here in the heart of LA, and going to an art school with plenty other aspiring creative people. One of my greatest life ambitions is to be able to combine those two attributes, the innovative and strange song structure I take so much pride in with my own music, and the sonic perfection that I find in so much of your stuff. I think anyone who can accomplish both simultaneously is instantly one of the greatest musical geniuses there is. -RAHB 00:57, 27 November 2008 (UTC)
- Also, admin work...grumble grumble... -RAHB 00:57, 27 November 2008 (UTC)
- Yeah, I totally see what you're saying there. I think I try to go for both of those aspects as well, at least I have been moreso in my recent music. The exception would be Resolute, an older song I wrote to help me understand 5/4 time better. But I love the effects and stuff you use in your music, and I've been trying to use more of that in my recent work. I'm off to play my tuba for an hour or so, but I'll check out the rest of your music later tonight. musicalcollabmaybe? Ahem. Excuse me. sir
sysrq @ 01:07 Nov 27
- Yeah, I've noticed a good deal of innovative sound in some of these tracks, right now I'm on Interstate, which I definitely like the introduction to so far. I'm baffled by how professional this stuff sounds, I feel like I purchased this record in the store and played it right from the CD. Production quality is another one of those things I'm working on, which I'd say I'm definitely getting better at lately, though I need to begin testing the waters again with some new tracks. And I do love a good collaboration. A fusing of our two styles could be quite the interesting project. Consider me intrigued. -RAHB 01:14, 27 November 2008 (UTC)
UnSignpost: 27th November 2008
Just like Grandma used to make!
November 27th • Issue 26 • The newspaper it's tough to swat flies with
W♥v
Metallic (band)
Heheheheh, told the user who originally created that it'd get huffed. He originally had it tacked onto a redirect page, so I moved it to where it was after consultation with Codeine. Nice to see my prophecies come to be. :-) RabbiTechno 20:39, 30 November 2008 (UTC)
WTF
why did u delete my Krokodile Shears page? Sure it was a work in progress, but it had some potential. A fictional band page could not be as bad as some of the other crap I see in this site.
- Ah, we had a huge deal and subsequent mess with a fictional band a while ago. Other than that, the point isn't there. There's no satire, or jokes we as readers can relate to. If you can make a good satirical article out of it, then it can probably stay. Just try not to make it sound like it's about some high school band that you and your friends made. -RAHB 04:02, 1 December 2008 (UTC)
- Hang on a second, I remember you. You were posting links to your "Krokodile Shears" page all over other pages, including to YouTube. Vanity is not appreciated. You'd be better off making fun of something real, I think. (Most articles on "fake" things have too much difficulty resonating with:19, 1 December 2008 (UTC)
No, that youtube stuff was from my account getting hacked. I looked like a jackass apologizing for all of that stuff. I had no idea that Krokodile Shears was a real thing.
Reform
I am now a reformed vandal. I am sorry for my destructive edits, and I will try to help out in the future. 68.32.189.242 05:09, 3 December 2008 (UTC)
- Very good to hear. Feel free to register for an account if you feel so inclined. No prejudice here on Mars. -RAHB 06:56, 3 December 2008 (UTC)
Any chance of me being in VFS?
What are the chances of having me being nominated for VFS and being accepted as admin? 1 in 100,000? 1 in a million?:03, 3 December 2008 (UTC)
- A) Trying to calculate your chances with a calculator would return an error/electrical failure (nothing personal). B) From the admin vote so far, it doesn't look like we're going to be nominating anyone new this month anyways.
Software applications are constantly updated to fix bugs and add new functions. How you deliver updates to your user base is an important enterprise infrastructure consideration, especially when that user base is large and geographically dispersed. In a worldwide organization, it is not practical to send out a technician to install software updates. Fortunately, IBM Lotus Sametime V7.5, built on the Eclipse platform, allows you to leverage update sites, which let Lotus Sametime Connect V7.5 retrieve updates from a centralized location.
In this article, we take you through the full process of creating an update site for your Sametime plug-ins. You learn how an update site provides a method for delivering new features and feature updates to your Sametime clients. We show you how to create a simple plug-in that places a button on the Lotus Sametime Connect action bar. We use the Sametime client's Update Manager to pull this plug-in from an update site that we also create. We also show you how to retrieve an update for an existing plug-in. Figure 1 shows the Update Manager of the Sametime client, through which you can define sites that provide your Sametime client with new features and updates to existing features.
Figure 1. Update Manager
The role of HTTP server in updating plug-ins
One way of making your Sametime plug-in available to the Sametime community is by placing it on an Eclipse update site. An Eclipse update site is a URL location where clients can download enhancements or updates to their Sametime client. An Eclipse update site is defined by a site.xml file, which we describe later in this article. To host the update site, you need an HTTP server. For our HTTP server, we used the freely available IBM HTTP Server to host our update site. You can download a free copy of the IBM HTTP Server V6.1 from the IBM HTTP Server page.
Creating your plug-in
Before you delve into the world of installing and updating, you need to create a simple plug-in that can be delivered to your Sametime client through a feature. A feature is a collection of plug-ins and can, in turn, house other features inside it. As mentioned earlier, we begin by creating a simple plug-in that places a button on the action bar as shown in figure 2.
Figure 2. Action bar with feature plug-in
We assume that you are familiar with developing a plug-in for Lotus Sametime using Eclipse 3.2. If this is not the case, refer to the Resources section of this article for a list of Sametime plug-in development articles to get you started developing plug-ins.
Follow these steps to change your target platform to Lotus Sametime V7.5 and to create a plug-in project named com.ibm.example.iu in Eclipse 3.2:
- Open Eclipse 3.2, and then switch to the Plug-in Development Perspective.
- Choose Window - Preferences.
- In the Preferences dialog box, choose Plug-in Development - Target Platform.
- Change your target platform to the location of the directory representing the Lotus Sametime V7.5 plug-ins directory on your system. For example, if you installed Lotus Sametime V7.5 in the default location, the target platform location is C:\Program Files\IBM\Sametime Connect 7.5.
- Create a new plug-in project by choosing File - New Project.
- Select Plug-in Project.
- In the New Plug-in Project wizard, specify com.ibm.example.iu as the project name, and then click Next.
- For the plug-in name, specify com.ibm.example.iu. For the plug-in provider, specify IBM DeveloperWorks.
The plugin.xml file
After you create the com.ibm.example.iu plug-in project, create a plugin.xml file that extends the org.eclipse.ui.viewActions extension point. The contents of the plugin.xml file are shown in listing 1. (The action's id and label below are illustrative; the icon, tooltip, and handler class are the ones described in the paragraphs that follow.)
Listing 1. The plugin.xml file
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
   <extension point="org.eclipse.ui.viewActions">
      <viewContribution
            id="com.ibm.collaboration.realtime.sample.snippets.viewAction"
            targetID="com.ibm.collaboration.realtime.imhub">
         <action
               id="com.ibm.example.iu.action"
               label="Say Hello"
               icon="icon/kulvir.gif"
               tooltip="Kulvir hates gyros."
               class="com.ibm.example.iu.ClickHandler"/>
      </viewContribution>
   </extension>
</plugin>
Notice that the extension point starts with org.eclipse. This means that the extension point is part of the base Eclipse API, not the Lotus Sametime API. Lotus Sametime often leverages the Eclipse API, enabling an experienced Eclipse developer to start developing Sametime plug-ins right away. More information about the org.eclipse.ui.viewActions extension point can be found on the Eclipse Web site. Also, choosing Help - Help Contents in the Eclipse SDK brings up the Help window, from which you can search for org.eclipse.ui.viewActions to find information on the viewActions extension point.
Let's take a closer look at the org.eclipse.ui.viewActions extension point. The targetID is the ID of the view to which you add your button. The view ID for the action bar is com.ibm.collaboration.realtime.imhub. Highlighted in figure 3 is the Sametime client action bar with its corresponding view ID.
Figure 3. Action bar with view ID
By defining the action bar's view ID as your targetID, you can extend the action bar. We also define com.ibm.example.iu.ClickHandler as the class that handles our view action. We define icon/kulvir.gif as the icon that represents our view action. The kulvir.gif file is a headshot of Kulvir Bhogal, one of the authors of this article. Also, note that the tooltip is "Kulvir hates gyros." Later, when we demonstrate updating existing features, we change the tooltip to "Kulvir loves gyros." (As an aside, Kulvir is a gyro fanatic.)
The BuddyListDelegate interface
The button you add to the action bar presents the user with a Hello! message box when it is clicked as shown in figure 4.
Figure 4. Plug-in message box
Accordingly, in addition to defining the button in the plugin.xml, you must also create the implementation class that handles clicks to the button. Create a class inside the com.ibm.example.iu plug-in named com.ibm.example.iu.ClickHandler. The BuddyListDelegate interface handles clicks to the viewAction button, which you defined previously. The BuddyListDelegate interface is contained in the com.ibm.collaboration.realtime.imhub plug-in. To resolve the BuddyListDelegate interface, you must add the com.ibm.collaboration.realtime.imhub plug-in to the META-INF/MANIFEST.MF file's list of required bundles as shown in listing 2.
Listing 2. META-INF/MANIFEST.MF
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Iu Plug-in
Bundle-SymbolicName: com.ibm.example.iu;singleton:=true
Bundle-Version: 1.0.0
Bundle-Activator: com.ibm.example.iu.Activator
Bundle-Vendor: IBM
Bundle-Localization: plugin
Require-Bundle: org.eclipse.ui,
 org.eclipse.core.runtime,
 com.ibm.collaboration.realtime.imhub
Eclipse-LazyStart: true
To open a MessageBox that presents the user with a message of Hello! when your button is clicked, you need to implement the run method as shown in listing 3.
Listing 3. Run method for MessageBox
package com.ibm.example.iu;

import org.eclipse.jface.viewers.ISelection;
import org.eclipse.swt.widgets.MessageBox;
import org.eclipse.swt.widgets.Shell;

// BuddyListDelegate is resolved from the com.ibm.collaboration.realtime.imhub
// plug-in declared in Require-Bundle (see listing 2).
public class ClickHandler implements BuddyListDelegate {
    // Invoked when the action-bar button is clicked.
    public void run(ISelection selection) {
        MessageBox mBox = new MessageBox(new Shell());
        mBox.setMessage("Hello!");
        mBox.open();
    }
}
Creating and building the icon
Now, you create the folder that includes your image for the button that you add to the action bar. Right-click com.ibm.example.iu and choose New - Folder. Name the folder icon and click Finish. To import the icon, drag it from the folder that contains the image file to the icon folder of your plug-in project.
Next, change the build.properties file so that the icon folder and its contents are included in the build file. Open the build.properties file with the Build Properties editor. To do this, right-click build.properties, which is in the com.ibm.example.iu plug-in project, and then choose Open With - Build Properties Editor from the context menu.
From the Build Properties Editor, select the option next to the icon folder as well as plugin.xml as shown in figure 5, and then save your modifications to the build.properties file.
Figure 5. Build Properties Editor
Eclipse features
Eclipse features represent a unit of functionality that appears to the user. A feature can be a calendar, a mail tool, or, in our case, an action button. A feature can comprise multiple plug-ins and may also reference other features. These relationships are defined in the feature.xml file.
We now show you how to create a feature. To keep things simple, you have only one required plug-in housed in the feature, the com.ibm.example.iu plug-in that you created earlier. To create a new feature, choose File - New - Other. In the New Feature wizard, choose Plug-in Development - Feature Project, then click Next.
For the feature project name use com.ibm.example.feature.iu. Accept the default feature ID, and name the feature "Action Bar Button Feature." Click Next. For the Reference Plug-ins and Fragments, select the com.ibm.example.iu plug-in as shown in figure 6, and then click Finish.
Figure 6. New Feature wizard
Next, open the feature.xml file and select the Overview tab. Enter the update site URL in the following format:
http://YOUR_HTTPSERVER_URL/myupdatesite
where YOUR_HTTPSERVER_URL is the URL of the HTTP server you use. Specifying an HTTP server here enables the Sametime client to automatically check for and download feature updates when they become available from that server.
Feature.xml: Behind the scenes of the Eclipse PDE
The Eclipse plug-in development environment (PDE) provides a convenient means of viewing and editing the feature.xml file. For those of you who prefer to know what your PDE is doing, we highlight parts of the feature.xml file in listing 4. First, notice that the basic details of the feature are described, such as the unique ID, the label (the name of the feature as it appears to the user), the version number, and the provider name. Also notice the description, copyright, and license fields.
Finally, the required plug-ins and features are described along with their unique IDs and version numbers. The version number is important because it describes to the Update Manager the lowest acceptable version number that should exist on the client. Version number 0.0.0 states that any version is acceptable.
Listing 4. Feature.xml file
<?xml version="1.0" encoding="UTF-8"?>
<feature
      id="com.ibm.example.feature.iu"
      label="Iu Feature"
      version="1.0.0"
      provider-name="IBM DeveloperWorks">

   <description url="">
      [Enter Feature Description here.]
   </description>

   <copyright url="">
      [Enter Copyright Description here.]
   </copyright>

   <license url="">
      [Enter License Description here.]
   </license>

   <url>
      <update url=""/>
   </url>

   <plugin
         id="com.ibm.example.iu"
         download-size="0"
         install-size="0"
         version="0.0.0"/>

</feature>
Update site
Next, we demonstrate how to use the Eclipse toolset to create an update site. The update site was described previously as a location where clients receive either updates to existing Sametime components or entirely new Sametime components. Now that we have reviewed the concept of features, we can modify our definition of an update site: An update site is a location from which clients can install new features or receive updates to existing features.
We now dive further into the process of creating an update site. Eclipse provides a wizard that automates the process of creating an update site. To start that wizard, choose File - New Project. In the New Project wizard, choose Plug-in Development - Update Site Project. The New Update Site wizard appears (see figure 7).
Figure 7. New Update Site wizard
As you can see from figure 7, we specify the location of our update site to be the location of our HTTP server. For your HTTP server location, use the format
HTTP_DOC_ROOT\myupdatesite, where HTTP_DOC_ROOT is the document root for your HTTP server. In addition, we chose to not use the default location by deselecting the option Use default location. We also specified that we want to generate a Web page listing used to display information about the feature we provide.
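Once built (via the Build All step described later), the update site is just a set of static files under the HTTP document root. Assuming the names used in this article, the layout looks roughly like this (the jar file names depend on your feature and plug-in versions):

```text
HTTP_DOC_ROOT/
  myupdatesite/
    site.xml                                <- catalog of the available features
    features/
      com.ibm.example.feature.iu_1.0.0.jar  <- the feature archive
    plugins/
      com.ibm.example.iu_1.0.0.jar          <- the plug-in the feature requires
```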
Site.xml
The site.xml file provides a listing of the features that are available on an update site. Open the site.xml file. Click the New Category button to add a category.
After adding the new category, the Update Site Map page looks similar to figure 8. Notice how the Category Properties for the newly created Category appear in the right-hand pane of the Update Site Map page.
Figure 8. New category added to site map
Define the name and the label for the category as DeveloperWorks Sample. The name is a unique name for the category. The label is how the category appears to the user. For the description, we entered the following statement: "This feature is a sample feature to demonstrate the functionality of an update site."
Next, add the feature you created earlier to the DeveloperWorks Sample Category. To do this, select the DeveloperWorks Sample category, and then click the Add Feature button. In the Feature Selection dialog box (see figure 9), select the com.ibm.example.feature.iu feature created earlier in the article, and then click OK.
Figure 9. Feature Selection dialog box
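Behind the Update Site Map editor, PDE maintains the site.xml file itself. A sketch of roughly what it contains after the steps above (the feature jar path shown is illustrative; the exact file PDE generates may differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <category-def name="DeveloperWorks Sample" label="DeveloperWorks Sample">
      <description>
         This feature is a sample feature to demonstrate the
         functionality of an update site.
      </description>
   </category-def>
   <feature url="features/com.ibm.example.feature.iu_1.0.0.jar"
            id="com.ibm.example.feature.iu"
            version="1.0.0">
      <category name="DeveloperWorks Sample"/>
   </feature>
</site>
```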
Now that your site map has been properly configured, you can build your plug-in and your feature. To do this, click the Build All button. This builds both the com.ibm.example.feature.iu feature and the feature's required plug-in(s).
Updating Lotus Sametime Connect V7.5
Now that your update site is built, a user can download the feature from his Sametime client. Open Lotus Sametime V7.5 Connect. From the Sametime client, choose File - Manage Updates - Download plugins. The Install/Update wizard opens (see figure 10).
Figure 10. Install/Update wizard
Select the "Search for new features to install" option, and then click Next. In the following wizard panel, click the New Remote Site button. In the Edit Remote Site dialog box (see figure 11), enter the appropriate URL of the update site that you created.
Figure 11. Edit Remote Site dialog box
Next, click OK to add your newly created update site to the list of update sites your Sametime client checks for updates. Click Finish. The Updates window now appears. Select the Developer Works Example Site option to install all the features available on the Developer Works Example Site, and then click Next. (Alternately, you can choose to select only a subset of the features available.) Accept the License for the feature and click Next.
In the following window, click Finish to install the feature. You are asked to verify if you want to install the feature. Click the Install button. A dialog box appears asking if you want to restart the workbench (see figure 12). Click Yes to restart the workbench.
Figure 12. Restart prompt box
After restarting, the new action bar button appears as shown in figure 13.
Figure 13. Updated action bar with new feature
Making updates to an existing feature
Many times the providers of an existing feature have bug fixes or enhancements that they want to deliver to their user base. Recall in our feature.xml file that we specified the location of our update site:
<url>
<update url=""/>
</url>
The URL stated here is where the Sametime client looks for updates for an existing feature. You can set up your client to automatically check for updates to an existing feature. To do this, choose File - Preferences. In the Preferences dialog box, select Install/Update - Automatic Updates in the left-hand pane. Configure the Automatic Updates preferences to check for updates each time the platform is started as shown in figure 14.
Figure 14. Preferences dialog box - Automatic Updates
Now that your Sametime client is configured to check for updates, when an update to an existing feature is available, you receive a dialog box like the one in figure 15.
Figure 15. New update available
Updating your plug-in
To demonstrate the process of updating a feature, let's change our plug-in's tooltip from "Kulvir hates gyros." to "Kulvir loves gyros." First, update the tooltip in the plugin.xml of your com.ibm.example.iu plug-in as shown in listing 5.
Listing 5. Tooltip change
tooltip="Kulvir loves gyros."/>
Next, increment the version in your MANIFEST.MF file from 1.0.0 to 1.1.0 by modifying the value of the Bundle-Version header:
Bundle-Version: 1.1.0
The increment to the version is important because clients look for this change in version numbers to let them know that updates are available. Next, save and close all the com.ibm.example.iu plug-in files.
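The Update Manager compares OSGi-style major.minor.service version strings to decide whether a newer feature is available. That comparison rule can be sketched in plain Java like this (an illustration of the versioning rule, not the actual Eclipse implementation):

```java
// Sketch of major.minor.service comparison for OSGi-style version
// strings such as "1.0.0" and "1.1.0". Illustrative only; the real
// Eclipse Update Manager uses its own version classes.
public class VersionCheck {
    static int compare(String a, String b) {
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        // Compare major, then minor, then service segments numerically.
        for (int i = 0; i < 3; i++) {
            int na = Integer.parseInt(pa[i]);
            int nb = Integer.parseInt(pb[i]);
            if (na != nb) {
                return Integer.compare(na, nb);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // The client has 1.0.0 installed; the site now offers 1.1.0.
        boolean updateAvailable = compare("1.1.0", "1.0.0") > 0;
        System.out.println(updateAvailable); // prints "true"
    }
}
```

Because 1.1.0 is greater than the installed 1.0.0, the client treats the rebuilt feature as an update; if you forget to bump the bundle and feature versions, the client sees no change and never offers the update.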
Open the feature.xml file and select the Overview tab. Click the Versions button. In the Feature Versions dialog box, choose the option to "Copy versions from plug-in and fragment manifests." See figure 16.
Figure 16. Feature Versions dialog box
The plug-in version of com.ibm.example.iu referenced by the feature is now 1.1.0. Next, edit the feature.xml file. Update the feature.xml file's version number to 1.1.0 as shown in listing 6.
Listing 6. Version change
<feature id="com.ibm.example.feature.iu" label="Iu Feature" version="1.1.0" provider-name="IBM DeveloperWorks">
Save and close all of your feature files.
Next, open the site.xml file in the Update Site Map page. Click the Build All button to build your updated plug-in and the corresponding updated feature and to place it on the update site.
NOTE: In our case the development environment and the update site are on the same machine. However, in a production environment, the server and the development machines are separate. In such a case, you have to transfer the newly built update site from your development machine to your production server.
Given that you modified your Sametime client to automatically check for updates to existing features, upon restarting your Sametime client, the new feature is detected and installed. You can verify this by checking if the tooltip has changed from "Kulvir hates gyros." to "Kulvir loves gyros." See figure 17.
Figure 17. Updated feature
Conclusion
In an enterprise environment with users of varying technical abilities, you cannot always assume that a user will know how to manually install and manage his own updates. This can even be a cumbersome task for Sametime developers. The Update Manager of Lotus Sametime Connect makes the process of getting new features and feature updates much more user friendly. In this article, you learned how to distribute new Sametime plug-ins with an update site. For the purposes of this article, the update site, plug-in development environment (PDE), and the Sametime client all existed on the same machine. In reality, these entities would likely reside on separate machines. The update site that you create can be copied to another machine that runs an HTTP server. Multiple clients can access that HTTP server for their updates.
As an alternative to the update site, you can deploy Sametime features with an optional provisioning add-on. The advantage to the provisioning add-on is that installing updates does not require any action on behalf of the user. In our example, the user is still required to enter the location of the update site and to schedule the automatic updates. The provisioning add-on allows for push updates (versus pull updates as in our example) and scheduled updates. Push updates allow updates to be pushed down by an administrator without any interaction on the user's part. Scheduled updates allow updates to be published during off hours, when updates are least likely to interfere with business. In a follow-up article, we discuss using the provisioning add-on as an enhanced means of distributing software updates to IBM Lotus Sametime.
Resources
- developerWorks Lotus article, "Extending the Lotus Sametime client with an LDAP directory lookup plug-in"
- developerWorks Lotus article, "Designing a Google Maps plug-in for Lotus Sametime Connect V7.5"
- developerWorks Lotus article, "Extending IBM Lotus Sametime Connect V7.5 with an SMS messaging plug-in"
- developerWorks Lotus article, "Creating an audio playback plug-in for IBM Lotus Sametime Connect V7.5"
- Eclipse.org article, "How To Keep Up To Date"
Get products and technologies
- Download the Lotus Sametime V7.5 SDK from the developerWorks Lotus Toolkits page.
- Download the Eclipse 3.2 SDK from Eclipse.org.
12 November 2008 14:00 [Source: ICIS news]
LONDON (ICIS news)--INEOS plans to combine the management of its European olefins and polyolefins businesses into a new combined entity called INEOS Olefins & Polymers Europe, the UK-based petrochemicals major said on Wednesday.
Effective 1 December the combined business would become INEOS’ largest business with 3,600 employees, 10 sites and a turnover of about €9bn ($11.3bn).
“Running both businesses as a single integrated unit will enable us to optimise our margin through the chain, which is essential if we are to meet our customers' needs in current market conditions,” said Tom Crotty, who is set to take over as CEO of the combined business.
Crotty would also retain his position as chairman of INEOS ChlorVinyls and INEOS Fluor, the company said.
Bill Reid has been appointed business director of the new group and chairman of INEOS Phenol in addition to his position as chairman of INEOS ABS.
Rob Ingram will become procurement director, with Hans Niederberger taking the post of operations director for olefins, and Jeff Seed operations director for polymers.
Philip de Klerk was set to become CFO of the new business.
INEOS said the integrated business model would reflect current businesses such as INEOS O&P
($1 = €0.80)
17 June 2011 16:00 [Source: ICIS news]
(releads with contract confirmation and adds detail throughout)
LONDON (ICIS)--The European methanol third-quarter contract price has been confirmed at €295/tonne ($415/tonne), down by €10/tonne from the second quarter, after the majority of major market players agreed to the price on Friday.
This follows an initial settlement made between a buyer and a producer on Friday morning. The price is settled on a free on board (FOB)
Sources said the relatively quick settlement – discussions only began in earnest towards the end of last week – and the modest scale of the price change demonstrate the current stability of the market.
A producer said: “It might be, from our point of view, a little on the low side, but really it represents fairly stable pricing with just a slight decrease. Three or four years ago we regularly saw changes of €30, €50 or more, but the last three quarters have been fairly reasonable in development.”
Suppliers had previously been insistent that a rollover at €305/tonne was justified. Many have now conceded that the price is more in line with buyer targets, with one saying: “I don’t think there should have been a change [in the contract price]. There could be a slight summer slowdown [in demand], but I don’t really buy that. This is more or less a favour from the sellers to the buyers.”
However, even buyers pointed out that the change is fairly minor and that it reflects the relative stability of the market. “Down by €10/tonne is not a big movement. The price is okay,” said one buyer.
One of the main reasons that buyers argued for a decrease is the fact that spot prices have been, on average, lower in the second quarter than in the first.
Another factor is the changes to the euro/dollar exchange rate. Three months ago, when the previous quarterly contract was agreed, the euro was valued at $1.39. At the time of writing, it is slightly higher at $1.41, although sources pointed out that it has been fluctuating between this and $1.46 amid the uncertainty of the Greek debt bailout.
There are widespread expectations, particularly among buyers, that the euro will strengthen further once a bailout for
A third, seemingly less influential factor is the possibility of sluggish demand during the summer compared with the rest of the year. Not all players subscribe to this reasoning, however, pointing out that 2010 saw no such downturn.
In any case, most buyers who participated in the contract negotiations voiced a combination of the three factors as justification for a decrease.
Producer Methanex also announced on Friday morning that it is set to post its independent European Posted Contract Price at €295/tonne, which it had agreed with five buyers.
04 July 2011 11:41 [Source: ICIS news]
HONG KONG (ICIS)--Saudi Aramco is seeking investors for a potential purified terephthalic acid (PTA) project in the kingdom, a company official said on Monday.
The official said that this was in order to off-take paraxylene (PX) from its 700,000 tonne/year unit when it comes on stream in the second half of 2013.
“We’ve had two players coming forward previously but the negotiations have stopped,” said the official, who did not disclose the names of the parties nor their reasons for bringing discussions to a close.
“We are still looking for people to invest in the PTA unit to off-take PX from our plant,” said the official at the Global PX-Polyester Chain Focus 2011.
Saudi Aramco has a joint venture with
“Satorp will handle any domestic sales while the balance of the export volumes will be split equally between Saudi Aramco and Total,” the official said.
There is only one 350,000 tonne/year PTA unit in
“If there are no domestic sales when we start up our plant, then we are looking to move volumes to
According to ICIS data, nearly 7m tonnes of new PTA capacities will come on stream in 2012 while 2m tonnes of new PX projects are expected in the same
Editor's note: this document is out of date and remains here for historic interest. See Synopsis 6 for the current design information.
As soon as she walked through my door I knew her type: she was an argument waiting to happen. I wondered if the argument was required... or merely optional? Guess I'd know the parameters soon enough.
"I'm Star At Data," she offered.
She made it sound like a pass. But was the pass by name? Or by position?
"I think someone's trying to execute me. Some caller."
"Okay, I'll see what I can find out. Meanwhile, we're gonna have to limit the scope of your accessibility."
"I'd prefer not to be bound like that," she replied.
"I see you know my methods," I shot back.
She just stared at me, like I was a block. Suddenly I wasn't surprised someone wanted to dispatch her.
"I'll return later," she purred. "Meanwhile, I'm counting on you to give me some closure."
It was gonna be another routine investigation.
— Dashiell Hammett, "The Maltese Camel"
This.
As if that weren't bounty enough, Apocalypse 6 also covers the object-oriented subroutines: methods and submethods. We will, however, defer a discussion of those until Exegesis 12.
Playing Our Parts
Suppose we want to be able to partition a list into two arrays (hereafter known as "sheep" and "goats"), according to some user-supplied criterion. We'll call the necessary subroutine &part, because it partitions a list into two parts.

In the most general case, we could specify how &part splits the list up by passing it a subroutine. &part could then call that subroutine for each element, placing the element in the "sheep" array if the subroutine returns true, and into the "goats" array otherwise. It would then return a list of references to the two resulting arrays.
For example, calling:
($cats, $chattels) = part &is_feline, @animals;
would result in $cats being assigned a reference to an array containing all the animals that are feline, and $chattels being assigned a reference to an array containing everything else that exists merely for the convenience of cats.

Note that in the above example (and throughout the remainder of this discussion), when we're talking about a subroutine as an object in its own right, we'll use the & sigil; but when we're talking about a call to the subroutine, there will be no & before its name. That's a distinction Perl 6 enforces too: subroutine calls never have an ampersand; references to the corresponding Code object always do.
Part: The First
The Perl 6 implementation of &part would therefore be:
sub part (Code $is_sheep, *@data) { my (@sheep, @goats); for @data { if $is_sheep($_) { push @sheep, $_ } else { push @goats, $_ } } return (\@sheep, \@goats); }
As in Perl 5, the sub keyword declares a subroutine. As in Perl 5, the name of the subroutine follows the sub and — assuming that name doesn't include a package qualifier — the resulting subroutine is installed into the current package.

Unlike Perl 5, in Perl 6 we are allowed to specify a formal parameter list after the subroutine's name. This list consists of zero or more parameter variables. Each of these parameter variables is really a lexical variable declaration, but because they're in a parameter list we don't need to (and aren't allowed to!) use the keyword my.

Just as with a regular variable, each parameter can be given a storage type, indicating what kind of value it is allowed to store. In the above example, for instance, the $is_sheep parameter is given the type Code, indicating that it is restricted to objects of that type (i.e. the first argument must be a subroutine or block).
Each of these parameter variables is automatically scoped to the body of the subroutine, where it can be used to access the arguments with which the subroutine was called.
A word about terminology: an "argument" is an item in the list of data that is passed as part of a subroutine call. A "parameter" is a special variable inside the subroutine itself. So the subroutine call sends arguments, which the subroutine then accesses via its parameters.
Perl 5 has parameters too, but they're not user-specifiable. They're always called $_[0], $_[1], $_[2], etc.
Not-So-Secret Alias
However, one way in which Perl 5 and Perl 6 parameters are similar is that, unlike Certain Other Languages, Perl parameters don't receive copies of their respective arguments. Instead, Perl parameters become aliases for the corresponding arguments.
That's already the case in Perl 5. So, for example, we can write a temperature conversion utility like:
# Perl 5 code... sub Fahrenheit_to_Kelvin { $_[0] -= 32; $_[0] /= 1.8; $_[0] += 273.15; } # and later... Fahrenheit_to_Kelvin($reactor_temp);
When the subroutine is called, within the body of &Fahrenheit_to_Kelvin the $_[0] variable becomes just another name for $reactor_temp. So the changes the subroutine makes to $_[0] are really being made to $reactor_temp, and at the end of the call $reactor_temp has been converted to the new temperature scale.
That's very handy when we intend to change the values of arguments (as in the above example), but it's potentially a very nasty trap too. Many programmers, accustomed to the pass-by-copy semantics of other languages, will unconsciously fall into the habit of treating the contents of $_[0] as if they were a copy. Eventually that will lead to some subroutine unintentionally changing one of its arguments — a bug that is often very hard to diagnose and frequently even harder to track down.
So Perl 6 modifies the way parameters and arguments interact. Explicit parameters are still aliases to the original arguments, but in Perl 6 they're constant aliases by default. That means, unless we specifically tell Perl 6 otherwise, it's illegal to change an argument by modifying the corresponding parameter within a subroutine.
All of which means that a naïve translation of &Fahrenheit_to_Kelvin to Perl 6 isn't going to work:
# Perl 6 code... sub Fahrenheit_to_Kelvin(Num $temp) { $temp -= 32; $temp /= 1.8; $temp += 273.15; }
That's because $temp (and hence the actual value it's an alias for) is treated as a constant within the body of &Fahrenheit_to_Kelvin. In fact, we'd get a compile time error message like:
Cannot modify constant parameter ($temp) in &Fahrenheit_to_Kelvin
If we want to be able to modify arguments via Perl 6 parameters, we have to say so up front, by declaring them is rw ("read-write"):
sub Fahrenheit_to_Kelvin (Num $temp is rw) { $temp -= 32; $temp /= 1.8; $temp += 273.15; }
This requires a few extra keystrokes when the old behaviour is needed, but saves a huge amount of hard-to-debug grief in the most common cases. As a bonus, an explicit is rw declaration means that the compiler can generally catch mistakes like this:
$absolute_temp = Fahrenheit_to_Kelvin(212);
Because we specified that the $temp argument has to be read-writeable, the compiler can easily catch attempts to pass in a read-only value.

Alternatively, we might prefer that $temp not be an alias at all. We might prefer that &Fahrenheit_to_Kelvin take a copy of its argument, which we could then modify without affecting the original, ultimately returning it as our converted value. We can do that too in Perl 6, using the is copy trait:
sub Fahrenheit_to_Kelvin(Num $temp is copy) { $temp -= 32; $temp /= 1.8; $temp += 273.15; return $temp; }
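For comparison, Python's ordinary parameters already behave like is copy for immutable values such as numbers: rebinding the parameter inside the function leaves the caller's variable untouched. A minimal Python sketch (the function name merely mirrors the example above):

```python
def fahrenheit_to_kelvin(temp):
    """Convert a Fahrenheit temperature to Kelvin.

    `temp` is effectively a copy here: rebinding it inside the
    function cannot affect the caller's argument, much like a
    Perl 6 `is copy` parameter.
    """
    temp = (temp - 32) / 1.8 + 273.15
    return temp

boiling = 212
print(fahrenheit_to_kelvin(boiling))
print(boiling)  # still 212 -- the caller's variable is unchanged
```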
Defining the Parameters
Meanwhile, back at the &part, we have:
sub part (Code $is_sheep, *@data) {...}
which means that &part expects its first argument to be a scalar value of type Code (or Code reference). Within the subroutine that first argument will thereafter be accessed via the name $is_sheep.

The second parameter (*@data) is what's known as a "slurpy array". That is, it's an array parameter with the special marker (*) in front of it, indicating to the compiler that @data is supposed to grab all the remaining arguments passed to &part and make each element of @data an alias to one of those arguments.

In other words, the *@data parameter does just what @_ does in Perl 5: it grabs all the available arguments and makes its elements aliases for those arguments. The only differences are that in Perl 6 we're allowed to give that slurpy array a sensible name, and we're allowed to specify other individual parameters before it — to give separate sensible names to one or more of the preliminary arguments to the call.

But why (you're probably wondering) do we need an asterisk for that? Surely if we had defined &part like this:
sub part (Code $is_sheep, @data) {...} # note: no asterisk on @data
the array in the second parameter slot would have slurped up all the remaining arguments anyway.
Well, no. Declaring a parameter to be a regular (non-slurpy) array tells the subroutine to expect the corresponding argument to be an actual array (or an array reference). So if &part had been defined with its second parameter just @data (rather than *@data), then we could call it like this:
part \&selector, @animal_sounds;
or this:
part \&selector, ["woof","meow","ook!"];
but not like this:
part \&selector, "woof", "meow", "ook!";
In each case, the compiler would compare the type of the second argument with the type required by the second parameter (i.e. an Array). In the first two cases, the types match and everything is copacetic. In the third case, the second argument is a string, not an array or array reference, so we get a compile-time error message:
Type mismatch in call to &part: @data expects Array but got Str instead
Another way of thinking about the difference between slurpy and regular parameters is to realize that a slurpy parameter imposes a list (i.e. flattening) context on the corresponding arguments, whereas a regular, non-slurpy parameter doesn't flatten or listify. Instead, it insists on a single argument of the correct type.
So, if we want &part to handle raw lists as data, we need to tell the @data parameter to take whatever it finds — array or list — and flatten everything down to a list. That's what the asterisk on *@data does.
Because of that all-you-can-eat behaviour, slurpy arrays like this are generally placed at the very end of the parameter list and used to collect data for the subroutine. The preceding non-slurpy arguments generally tell the subroutine what to do; the slurpy array generally tells it what to do it to.
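For readers more familiar with Python, its *args mechanism gives a rough analogue of a slurpy array: the leading named parameters say what to do, and the starred parameter at the end slurps up whatever positional arguments remain. A minimal sketch (the function and its names are invented for illustration):

```python
def tag_all(tag, *values):
    # `tag` is an ordinary positional parameter; `*values` slurps
    # all remaining positional arguments into a tuple, much like
    # the *@data parameter described above.
    return [(tag, v) for v in values]

print(tag_all("animal", "cat", "dog", "ook!"))
```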
Splats and Slurps
Another aspect of Perl 6's distinction between slurpy and non-slurpy parameters can be seen when we write a subroutine that takes multiple scalar parameters, then try to pass an array to that subroutine.
For example, suppose we wrote:
sub log($message, $date, $time) {...}
If we happen to have the date and time in a handy array, we might expect that we could just call log like so:
log("Starting up...", @date_and_time);
We might then be surprised when this fails even to compile.
The problem is that each of &log's three scalar parameters imposes a scalar context on the corresponding argument in any call to log. So "Starting up..." is first evaluated in the scalar context imposed by the $message parameter and the resulting string is bound to $message. Then @date_and_time is evaluated in the scalar context imposed by $date, and the resulting array reference is bound to $date. Then the compiler discovers that there is no third argument to bind to the $time parameter and kills your program.

Of course, it has to work that way, or we don't get the ever-so-useful "array parameter takes an unflattened array argument" behaviour described earlier. Unfortunately, that otherwise admirable behaviour is actually getting in the way here and preventing @date_and_time from flattening as we want.

So Perl 6 also provides a simple way of explicitly flattening an array (or a hash for that matter): the unary prefix * operator:
log("Starting up...", *@date_and_time);
This operator (known as "splat") simply flattens its argument into a list. Since it's a unary operator, it does that flattening before the arguments are bound to their respective parameters.
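Python's argument-unpacking * plays the same "splatty" role in a call: it flattens a sequence into separate arguments before they are bound to parameters. A small sketch using an invented log function shaped like the one above:

```python
def log(message, date, time):
    # Three distinct positional parameters, like the Perl 6
    # sub log($message, $date, $time) declaration.
    return f"{date} {time}: {message}"

date_and_time = ("2004-07-15", "12:00")
# Without the *, the whole tuple would be bound to `date` and the
# call would fail; the * flattens it into two separate arguments.
print(log("Starting up...", *date_and_time))
```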
The syntactic similarity of a "slurpy" * in a parameter list, and a "splatty" * in an argument list is quite deliberate. It reflects a behavioral similarity: just as a slurpy asterisk implicitly flattens any argument to which its parameter is bound, so too a splatty asterisk explicitly flattens any argument to which it is applied.
I Do Declare
By the way, take another look at those examples above — the ones with the {...} where their subroutine bodies should be. Those dots aren't just metasyntactic; they're real executable Perl 6 code. A subroutine definition with a {...} for its body isn't actually a definition at all. It's a declaration.
In the same way that the Perl 5 declaration:
# Perl 5 code... sub part;
states that there exists a subroutine &part, without actually saying how it's implemented, so too:
# Perl 6 code... sub part (Code $is_sheep, *@data) {...}
states that there exists a subroutine &part that takes a Code object and a list of data, without saying how it's implemented. In fact, the old sub part; syntax is no longer allowed; in Perl 6 you have to yada-yada-yada when you're making a declaration.
Body Parts
With the parameter list taking care of getting the right arguments into the right parameters in the right way, the body of the &part subroutine is then quite straightforward:
{ my (@sheep, @goats); for @data { if $is_sheep($_) { push @sheep, $_ } else { push @goats, $_ } } return (\@sheep, \@goats); }
According to the original specification, we need to return references to two arrays. So we first create those arrays. Then we iterate through each element of the data (which the for aliases to $_, just as in Perl 5). For each element, we take the Code object that was passed as $is_sheep (let's just call it the selector from now on) and we call it, passing the current data element. If the selector returns true, we push the data element onto the array of "sheep"; otherwise it is appended to the list of "goats". Once all the data has been divvied up, we return references to the two arrays.

Note that, if this were Perl 5, we'd have to unpack the @_ array into a list of lexical variables and then explicitly check that $is_sheep is a valid Code object. In the Perl 6 version there's no @_, the parameters are already lexicals, and the type-checking is handled automatically.
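The same partitioning logic can be sketched in Python, with the slurpy parameter rendered as *data. This is only an analogue (Python returns the two lists directly rather than references, and the names are invented):

```python
def part(is_sheep, *data):
    """Partition `data` into (sheep, goats) using the `is_sheep`
    predicate, mirroring the body of the Perl 6 &part above."""
    sheep, goats = [], []
    for item in data:
        if is_sheep(item):
            sheep.append(item)
        else:
            goats.append(item)
    return sheep, goats

cats, chattels = part(lambda a: a == "cat", "cat", "dog", "cat", "ook!")
print(cats, chattels)
```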
Call of the Wild
With the explicit parameter list in place, we can use &part in a variety of ways. If we already have a subroutine that is a suitable test:
sub is_feline ($animal) { return $animal.isa(Cat); }
then we can just pass that to &part, along with the data to be partitioned, then grab the two array references that come back:
($cats, $chattels) = part &is_feline, @animals;
This works fine, because the first parameter of &part expects a Code object, and that's exactly what &is_feline is. Note that we couldn't just put is_feline there (i.e. without the ampersand), since that would indicate a call to &is_feline, rather than a reference to it.

In Perl 5 we'd have had to write \&is_feline to get a reference to the subroutine. However, since the $is_sheep parameter specifies that the first argument must be a scalar (i.e. it imposes a scalar context on the first argument slot), in Perl 6 we don't have to create a subroutine reference explicitly. Putting a code object in the scalar context auto-magically enreferences it (just as an array or hash is automatically converted to a reference in scalar context). Of course, an explicit Code reference is perfectly acceptable there too:
($cats, $chattels) = part \&is_feline, @animals;
Alternatively, rather than going to the trouble of declaring a separate subroutine to sort our sheep from our goats, we might prefer to conjure up a suitable (anonymous) subroutine on the spot:
($cats, $chattels) = part sub ($animal) { $animal.isa(Animal::Cat) }, @animals;
In a Bind
So far we've always captured the two array references returned from the part call by assigning the result of the call to a list of scalars. But we might instead prefer to bind them to actual arrays:
(@cats, @chattels) := part sub($animal) { $animal.isa(Animal::Cat) }, @animals;
Using binding (:=) instead of assignment (=) causes @cats and @chattels to become aliases for the two anonymous arrays returned by &part.

In fact, this aliasing of the two return values to @cats and @chattels uses exactly the same mechanism that is used to alias subroutine parameters to their corresponding arguments. We could almost think of the lefthand side of the := as a parameter list (in this case, consisting of two non-slurpy array parameters), and the righthand side of the := as being the corresponding argument list. The only difference is that the variables on the lefthand side of a := are not implicitly treated as constant.
One consequence of the similarities between binding and parameter passing is that we can put a slurpy array on the left of a binding:
(@Good, $Bad, *@Ugly) := (@Adams, @Vin, @Chico, @OReilly, @Lee, @Luck, @Britt);
The first pseudo-parameter (@Good) on the left expects an array, so it binds to @Adams from the list on the right.

The second pseudo-parameter ($Bad) expects a scalar. That means it imposes a scalar context on the second element of the righthand list. So @Vin evaluates to a reference to the original array and $Bad becomes an alias for \@Vin.

The final pseudo-parameter (*@Ugly) is slurpy, so it expects the rest of the righthand side to be a list it can slurp up. In order to ensure that, the slurpy asterisk causes the remaining pseudo-arguments on the right to be flattened into a list, whose elements are then aliased to successive elements of @Ugly.
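Python's extended iterable unpacking gives a loose analogue of this slurpy binding: a starred name on the left slurps whatever remains of the righthand list. A sketch with invented sample data:

```python
# The starred name slurps the remainder of the right-hand list,
# roughly like *@Ugly in the Perl 6 binding above (Python copies
# the values rather than aliasing them, so the parallel is loose).
good, bad, *ugly = ["Adams", "Vin", "Chico", "OReilly", "Lee"]
print(good)  # 'Adams'
print(bad)   # 'Vin'
print(ugly)  # ['Chico', 'OReilly', 'Lee']
```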
Who Shall Sit in Judgment?
Conjuring up an anonymous subroutine in each call to part is intrinsically neither good nor bad, but it sure is ugly:
($cats, $chattels) = part sub($animal) { $animal.isa(Animal::Cat) }, @animals;
Fortunately, there's a cleaner way to specify the selector within the call to part. We can use a parameterized block instead:
($cats, $chattels) = part -> $animal { $animal.isa(Animal::Cat) } @animals;
A parameterized block is just a normal brace-delimited block, except that you're allowed to put a list of parameters out in front of it, preceded by an arrow (->). So the actual parameterized block in the above example is:
-> $animal { $animal.isa(Animal::Cat) }
In Perl 6, a block is a subspecies of Code object, so it's perfectly okay to pass a parameterized block as the first argument to &part. Like a real subroutine, a parameterized block can be subsequently invoked and passed an argument list. The body of the &part subroutine will continue to work just fine.

It's important to realize that parameterized blocks aren't subroutines though. They're blocks, and so there are important differences in their behaviour. The most important difference is that you can't return from a parameterized block, the way you can from a subroutine. For example, this:
part sub($animal) { return $animal.size < $breadbox }, @creatures
works fine, returning the result of each size comparison every time the anonymous subroutine is called within &part.

But in this "pointier" version:
part -> $animal { return $animal.size < $breadbox } @creatures
the return isn't inside a nested subroutine; it's inside a block. The first time the parameterized block is executed within &part it causes the subroutine in which the block was defined (i.e. the subroutine that's calling part) to return! Oops.

The problem with that second example, of course, is not that we were too Lazy to write the full anonymous subroutine. The problem is that we weren't Lazy enough: we forgot to leave out the return. Just like a Perl 5 do or eval block, a Perl 6 parameterized block evaluates to the value of the last statement executed within it. We only needed to say:
part -> $animal { $animal.size < $breadbox } @creatures
Note too that, because the parameterized block is a block, we don't need to put a comma after it to separate it from the second argument. In fact, anywhere a block is used as an argument to a subroutine, any comma before or after the block is optional.
Cowabunga!
Even with the slight abbreviation provided by using a parameterized block instead of an anonymous subroutine, it's all too easy to lose track of the actual data (i.e. @animals) when it's buried at the end of that long selector definition.
We can help it stand out a little better by using a new feature of Perl 6: the "pipeline" operator:
($cats, $chattels) = part sub($animal) { $animal.isa(Animal::Cat) } <== @animals;
The <== operator takes a subroutine call as its lefthand argument and a list of data as its righthand arguments. The subroutine being called on the left must have a slurpy array parameter (e.g. *@data) and the list on the operator's right is then bound to that parameter.

In other words, a <== in a subroutine call marks the end of the specific arguments and the start of the slurped data.
Pipelines are more interesting when there are several stages to the process, as in this Perl 6 version of the Schwartzian transform:
@shortest_first = map { .key } # 4 <== sort { $^a.value <=> $^b.value } # 3 <== map { $_ => .height } # 2 <== @animals; # 1
This example takes the array @animals, flattens it into a list (#1), pipes that list in as the data for a map operation (#2), takes the resulting list of object/height pairs and pipes that in to the sort (#3), then takes the resulting sorted list of pairs and maps out just the sorted objects (#4).
Of course, since the data lists for all of these functions always come at the end of the call anyway, we could have just written that as:
@shortest_first = map { .key } # 4 sort { $^a.value <=> $^b.value } # 3 map { $_ => .height } # 2 @animals; # 1
But there's no reason to stint ourselves: the pipelines cost nothing in performance, and often make the flow of data much clearer.
One problem that many people have with pipelined list processing techniques like the Schwartzian Transform is that the pipeline flows the "wrong" way: the code reads left-to-right/top-to-bottom but the data (and execution) runs right-to-left/bottom-to-top. Happily, Perl 6 has a solution for that too. It provides a "reversed" version of the pipeline operator, to make it easy to create left-to-right pipelines:
@animals ==> map { $_ => .height } # 1 ==> sort { $^a.value <=> $^b.value } # 2 ==> map { .key } # 3 ==> @shortest_first; # 4
This version works exactly the same as the previous right-to-left/bottom-to-top examples, except that now the various components of the pipeline are written and performed in the "natural" order.
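The same decorate-sort-undecorate idea can be sketched in Python, reading top-to-bottom like the ==> pipeline. The data and field names here are invented for illustration:

```python
animals = [{"name": "flea", "height": 0.1},
           {"name": "pony", "height": 130.0},
           {"name": "cat", "height": 25.0}]

# The classic Schwartzian transform, written in execution order:
# decorate each item with its sort key, sort on the key, then
# strip the decoration -- steps 1-4 as in the pipeline above.
pairs = [(a["name"], a["height"]) for a in animals]   # 1, 2: decorate
pairs.sort(key=lambda kv: kv[1])                      # 3: sort by height
shortest_first = [name for name, _ in pairs]          # 4: undecorate
print(shortest_first)  # ['flea', 'cat', 'pony']
```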
The ==> operator is the mirror-image of <==, both visually and in its behaviour. That is, it takes a subroutine call as its righthand argument and a list of data on its left, and binds the lefthand list to the slurpy array parameter of the subroutine being called on the right.
Note that this last example makes use of a special dispensation given to both pipeline operators. The argument on the "sharp" side is supposed to be a subroutine call. However, if it is a variable, or a list of variables, then the pipeline operator simply assigns the list from its "blunt" side to the variable (or list) on its "sharp" side.
Hence, if we preferred to partition our animals left-to-right, we could write:
@animals ==> part sub ($animal) { $animal.isa(Animal::Cat) } ==> ($cats, $chattels);
The Incredible Shrinking Selector
Of course, even with a parameterized block instead of an anonymous subroutine, the definition of the selector argument is still klunky:
($cats, $chattels) = part -> $animal { $animal.isa(Animal::Cat) } @animals;
But it doesn't have to be so intrusive. There's another way to create a parameterized block. Instead of explicitly enumerating the parameters after a ->, we could use placeholder variables instead.

As explained in Apocalypse 4, a placeholder variable is one whose sigil is immediately followed by a caret (^). Any block containing one or more placeholder variables is automatically a parameterized block, without the need for an explicit -> or parameter list. Instead, the block's parameter list is determined automatically from the set of placeholder variables enclosed by the block's braces.
We could simplify our partitioning to:
($cats, $chattels) = part { $^animal.isa(Animal::Cat) } @animals;
Here $^animal is a placeholder, so the block immediately surrounding it becomes a parameterized block — in this case with exactly one parameter.

Better still, any block containing a $_ is also a parameterized block — with a single parameter named $_. We could dispense with the explicit placeholder and just write our partitioning statement:
($cats, $chattels) = part { $_.isa(Animal::Cat) } @animals;
which is really a shorthand for the parameterized block:
($cats, $chattels) = part -> $_ { $_.isa(Animal::Cat) } @animals;
Come to think of it, since we now have the unary dot operator (which calls a method using $_ as the invocant), we don't even need the explicit $_:
($cats, $chattels) = part { .isa(Animal::Cat) } @animals;
Part: The Second
But wait, there's even...err...less!
We could very easily extend &part so that we don't even need the block in that case; so that we could just pass the raw class in as the first parameter:
($cats, $chattels) = part Animal::Cat, @animals;
To do that, the type of the first parameter will have to become Class, which is the (meta-)type of all classes. However, if we changed &part's parameter list in that way:
sub part (Class $is_sheep, *@data) {...}
then all our existing code that currently passes Code objects as &part's first argument will break.
Somehow we need to be able to pass either a Code object or a Class as &part's first argument. To accomplish that, we need to take a short detour into...
The Wonderful World of Junctions
Perl 6 introduces an entirely new scalar data-type: the junction. A
junction is a single scalar value that can act like two or more values at once.
So, for example, we can create a value that behaves like any of the values
1,
4, or
9, by writing:
$monolith = any(1,4,9);
The scalar value returned by
any and subsequently stored in
$monolith is equal to
1. And at the same time it's
also equal to
4. And to
9. It's equal to any of them.
Hence the name of the
any function that we used to set it up.
What good is that? Well, if it's equal to "any of them" then, with a single comparison, we can test if some other value is also equal to "any of them":
if $dave == any(1,4,9) { print "I'm sorry, Dave, you're just a square." }
That's considerably shorter (and more maintainable) than:
if $dave == 1 || $dave == 4 || $dave == 9 { print "I'm sorry, Dave, you're just a square." }
It even reads more naturally.
Better still, Perl 6 provides an n-ary operator that builds the same kinds of junctions from its operands:
if $dave == 1|4|9 { print "I'm sorry, Dave, you're just a square." }
Once you get used to this notation, it too is very easy to follow: if Dave equals 1 or 4 or 9....
(Yes, the Perl 5 bitwise OR is still available in Perl 6; it's just spelled differently now).
The
any function is more useful when the values under
consideration are stored in a single array. For example, we could check whether
a new value is bigger than any we've already seen:
if $newval > any(@oldvals) { print "$newval isn't the smallest." }
In Perl 5 we'd have to write that:
if (grep { $newval > $_ } @oldvals) { print "$newval isn't the smallest." }
which isn't as clear and isn't as quick (since the
any version
will short-circuit as soon as it knows the comparison is true, whereas the
grep version will churn through every element of
@oldvals no matter what).
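The same short-circuit-versus-churn contrast shows up with Python's built-in any() when given a generator expression (a sketch; variable names are illustrative):

```python
oldvals = [3, 7, 42]
newval = 10

# any() over a generator stops at the first True, like the junction version.
short_circuit = any(newval > old for old in oldvals)

# A grep-style list comprehension builds the whole list first, checking
# every element no matter what.
grep_style = len([old for old in oldvals if newval > old]) > 0
```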
An
any is even more useful when we have a collection of new
values to check against the old ones. We can say:
if any(@newvals) > any(@oldvals) { print "Already seen at least one smaller value." }
instead of resorting to the horror of nested
greps:
if (grep { my $old = $_;
           grep { $_ > $old } @newvals
         } @oldvals) {
    print "Already seen at least one smaller value.";
}
What if we wanted to check whether all of the new values were
greater than any of the old ones? For that we use a different kind of junction
— one that is equal to all our values at once (rather than just any one
of them). We can create such a junction with the
all function:
if all(@newvals) > any(@oldvals) { print "These are all bigger than something already seen." }
We could also test if all the new values are greater than all the old ones (not merely greater than at least one of them), with:
if all(@newvals) > all(@oldvals) { print "These are all bigger than everything already seen." }
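For numeric lists, each of these junction comparisons collapses to a simple min/max test. The following Python lines sketch the semantics (not how Perl implements junctions):

```python
newvals = [5, 6, 9]
oldvals = [1, 4, 8]

any_gt_any = max(newvals) > min(oldvals)   # any(@newvals) > any(@oldvals)
all_gt_any = min(newvals) > min(oldvals)   # all(@newvals) > any(@oldvals)
all_gt_all = min(newvals) > max(oldvals)   # all(@newvals) > all(@oldvals)
```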
There's an operator for building
all junctions too. No prizes
for guessing. It's n-ary
&. So, if we needed to check that the
maximal dimension of some object is within acceptable limits, we could say:
if $max_dimension < $height & $width & $depth { print "A maximal dimension of $max_dimension is okay." }
That last example is the same as:
if $max_dimension < $height && $max_dimension < $width && $max_dimension < $depth { print "A maximal dimension of $max_dimension is okay." }
any junctions are known as disjunctions, because they
act like they're in a boolean OR: "this OR that OR the other".
all
junctions are known as conjunctions, because they have an implicit AND
between their values — "this AND that AND the other".
There are two other types of junction available in Perl 6:
abjunctions and injunctions. An abjunction is created using
the
one function and represents exactly one of its possible values
at any given time:
if one(@roots) == 0 { print "Unique root to polynomial."; }
In other words, it's as though there were an implicit n-ary XOR between each pair of values.
Injunctions represent none of their values and hence are constructed with a
built-in named
none:
if $passwd eq none(@previous_passwds) { print "New password is acceptable."; }
They're like a multi-part NEITHER...NOR...NOR...
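Both one() and none() have direct Python equivalents (a sketch; the data values are made up for illustration):

```python
roots = [0.0, 2.5, -1.0]
previous_passwds = ["abc123", "hunter2"]
passwd = "s3cret"

# one(@roots) == 0: exactly one value matches (an XOR across the list).
unique_root = sum(r == 0 for r in roots) == 1

# $passwd eq none(@previous_passwds): no value matches (NEITHER...NOR).
acceptable = all(passwd != old for old in previous_passwds)
```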
We can build a junction out of any scalar type. For example, strings:
my $known_title = 'Mr' | 'Mrs' | 'Ms' | 'Dr' | 'Rev';
if %person{title} ne $known_title { print "Unknown title: %person{title}."; }
or even
Code references:
my &ideal := \&tall & \&dark & \&handsome;
if ideal($date) {       # Same as: if tall($date) && dark($date) && handsome($date)
    swoon();
}
The Best of Both Worlds
So a disjunction (
any) allows us to create a scalar value that
is either this or that.
In Perl 6, classes (or, more specifically,
Class objects) are
scalar values. So it follows that we can create a disjunction of classes. For
example:
Floor::Wax | Dessert::Topping
gives us a type that can be either
Floor::Wax
or
Dessert::Topping. So a variable declared with that
type:
my Floor::Wax|Dessert::Topping $shimmer;
can store either a
Floor::Wax object or a
Dessert::Topping object. A parameter declared with that type:
sub advertise(Floor::Wax|Dessert::Topping $shimmer) {...}
can be passed an argument that is of either type.
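In Python, passing isinstance() a tuple of classes plays much the same role as this kind of type disjunction (a sketch using hypothetical classes):

```python
class FloorWax: pass
class DessertTopping: pass

def advertise(shimmer):
    # Accept either type, like a Floor::Wax|Dessert::Topping parameter.
    if not isinstance(shimmer, (FloorWax, DessertTopping)):
        raise TypeError("shimmer must be a FloorWax or a DessertTopping")
    return type(shimmer).__name__
```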
Match Smarter, not Harder
So, in order to extend
&part to accept a
Class
as its first argument, whilst allowing it to accept a
Code object
in that position, we just use a type junction:
sub part (Code|Class $is_sheep, *@data) {
    my (@sheep, @goats);
    for @data {
        when $is_sheep { push @sheep, $_ }
        default        { push @goats, $_ }
    }
    return (\@sheep, \@goats);
}
There are only two differences between this version and the previous one.
The first difference is, of course, that we have changed the type of the first
parameter. Previously it was
Code; now it's
Code|Class.
The second change is in the body of the subroutine itself. We replaced the
partitioning
if statement:
for @data {
    if $is_sheep($_) { push @sheep, $_ }
    else             { push @goats, $_ }
}
with a switch:
for @data {
    when $is_sheep { push @sheep, $_ }
    default        { push @goats, $_ }
}
Now the actual work of categorizing each element as a "sheep" or a "goat" is
done by the
when statement, because:
when $is_sheep { push @sheep, $_ }
is equivalent to:
if $_ ~~ $is_sheep { push @sheep, $_; next }
When
$is_sheep is a subroutine reference, that implicit
smart-match will simply pass
$_ (the current data element) to the
subroutine and then evaluate the return value as a boolean. On the other hand,
when
$is_sheep is a class, the smart-match will check to see if
the object in
$_ belongs to the same class or some derived
class.
The single
when statement handles either type of selector
—
Code or
Class — auto-magically. That's
why it's known as smart-matching.
Having now allowed class names as selectors, we can take the final step and simplify:
($cats, $chattels) = part { .isa(Animal::Cat) } @animals;
to:
($cats, $chattels) = part Animal::Cat, @animals;
Note, however, that the comma is back. Only blocks can appear in argument lists without accompanying commas, and the raw class isn't a block.
Partitioning Rules!
Now that the
when's implicit smart-match is doing the hard work
of deciding how to evaluate each data element against the selector, adding new
kinds of selectors becomes trivial. For example, here's a third version of
&part which also allows Perl 6 rules (i.e. patterns) to be
used to partition a list:
sub part (Code|Class|Rule $is_sheep, *@data) {
    my (@sheep, @goats);
    for @data {
        when $is_sheep { push @sheep, $_ }
        default        { push @goats, $_ }
    }
    return (\@sheep, \@goats);
}
All we needed to do was to tell
&part that its first
argument was also allowed to be of type
Rule. That allows us to
call
&part like this:
($cats, $chattels) = part /meow/, @animal_sounds;
In the scalar context imposed by the
$is_sheep parameter, the
/meow/ pattern evaluates to a
Rule object (rather
than immediately doing a match). That
Rule object is then bound
to
$is_sheep and subsequently used as the selector in the
when statement.
Note that the body of this third version is exactly the same as that of the
previous version. No change is required because, when it detects that
$is_sheep is a
Rule object, the
when's
smart-matching will auto-magically do a pattern match.
In the same way, we could further extend
&part to allow the
user to pass a hash as the selector:
my %is_cat = (
    cat     => 1,
    tiger   => 1,
    lion    => 1,
    leopard => 1,
    # etc.
);

($cats, $chattels) = part %is_cat, @animal_names;
simply by changing the parameter list of
&part to:
sub part (Code|Class|Rule|Hash $is_sheep, *@data) {
    # body exactly as before
}
Once again, the smart-match hidden in the
when statement just
Does The Right Thing. On detecting a hash being matched against each datum, it
will use the datum as a key, do a hash look up, and evaluate the truth of the
corresponding entry in the hash.
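The selector-by-selector behaviour of that hidden smart-match can be sketched in Python as an explicit dispatch on the selector's type (hypothetical helpers; Perl 6 does this dispatch implicitly):

```python
import re

# Test one datum against a selector, choosing the comparison that suits
# the selector's type -- callable, class, compiled pattern, or dict.
def matches(selector, item):
    if isinstance(selector, type):           # class: membership test
        return isinstance(item, selector)
    if isinstance(selector, re.Pattern):     # rule: pattern match
        return selector.search(str(item)) is not None
    if isinstance(selector, dict):           # hash: truth of the lookup
        return bool(selector.get(item))
    if callable(selector):                   # code: call it, boolify result
        return bool(selector(item))
    raise TypeError("unsupported selector type")

def part(selector, data):
    sheep, goats = [], []
    for item in data:
        (sheep if matches(selector, item) else goats).append(item)
    return sheep, goats
```

Note that the class check must come before the callable check, since classes are themselves callable.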
Of course, the ever-increasing disjunction of allowable selector types is rapidly threatening to overwhelm the entire parameter list. At this point it would make sense to factor the type-junction out, give it a logical name, and use that name instead. To do that, we just write:
type Selector ::= Code | Class | Rule | Hash;
sub part (Selector $is_sheep, *@data) {
    # body exactly as before
}
The
::= binding operator is just like the
:=
binding operator, except that it operates at compile-time. It's the right
choice here because types need to be fully defined at compile-time, so the
compiler can do as much static type checking as possible.
The effect of the binding is to make the name Selector an alias for Code | Class | Rule | Hash. Then we can just use Selector wherever we want that particular disjunctive type.
Out with the New and in with the Old
Let's take a step back for a moment.
We've already seen how powerful and clean these new-fangled explicit
parameters can be, but maybe you still prefer the Perl 5 approach. After all,
@_ was good enough fer Grandpappy when he lernt hisself Perl as a
boy, dangnabit!
In Perl 6 we can still pass our arguments the old-fashioned way and then process them manually:
# Still valid Perl 6...
sub part {
    # Unpack and verify args...
    my ($is_sheep, @data) = @_;
    croak "First argument to &part is not Code, Hash, Rule, or Class"
        unless $is_sheep.isa(Selector);

    # Then proceed as before...
    my (@sheep, @goats);
    for @data {
        when $is_sheep { push @sheep, $_ }
        default        { push @goats, $_ }
    }
    return (\@sheep, \@goats);
}
If we declare a subroutine without a parameter list, Perl 6 automatically
supplies one for us, consisting of a single slurpy array named
@_:
sub part {...} # means: sub part (*@_) {...}
That is, any un-parametered Perl 6 subroutine expects to flatten and then
slurp up an arbitrarily long list of arguments, binding them to the elements of
a parameter called
@_. That's pretty much what a Perl 5 subroutine
does. The only important difference is that in Perl 6 that slurpy
@_ is, like all Perl 6 parameters, constant by default. So, if we
want the exact behaviour of a Perl 5 subroutine — including
being able to modify elements of
@_ — we need to be
explicit:
sub part (*@_ is rw) {...}
Note that "declare a subroutine without a parameter list" doesn't mean "declare a subroutine with an empty parameter list":
sub part {...}      # without parameter list
sub part () {...}   # empty parameter list
An empty parameter list specifies that the subroutine takes exactly zero
arguments, whereas a missing parameter list means it takes any number of
arguments and binds them to the implicit parameter
@_.
Of course, by using the implicit
@_ instead of named
parameters, we're merely doing extra work that Perl 6 could do for us, as well
as making the subroutine body more complex, harder to maintain, and slower.
We're also eliminating any chance of Perl 6 identifying argument mismatches at
compile-time. And, unless we're prepared to complexify the code even further,
we're preventing client code from using named arguments (see "Name your poison"
below).
But this is Perl, not Fascism. We're not in the business of imposing the One True Coding Style on Perl hackers. So if you want to pass your arguments the old-fashioned way, Perl 6 makes sure you still can.
A Pair of Lists in a List of Pairs
Suppose now that, instead of getting a list of array references back, we
wanted to get back a list of
key=>value pairs, where each value
was one of the array refs and each key some kind of identifying label (we'll
see why that might be particularly handy soon).
The easiest solution is to use two fixed keys (for example,
"
sheep" and "
goats"):
sub part (Selector $is_sheep, *@data) returns List of Pair {
    my %herd;
    for @data {
        when $is_sheep { push %herd{"sheep"}, $_ }
        default        { push %herd{"goats"}, $_ }
    }
    return *%herd;
}
The parameter list of the subroutine is unchanged, but now we've added a
return type after it, using the
returns keyword. That return type
is
List of Pair, which tells the compiler that any
return statements in the subroutine are expected to return a list
of values, each of which is a Perl 6
key=>value pair.
Parametric Types
Note that this type is different from those we've seen so far: it's
compound. The
of Pair suffix is actually an argument that modifies
the principal type
List, telling the container type what kind of
value it's allowed to store. This is possible because
List is a
parametric type. That is, it's a type that can be specified with
arguments that modify how it works. The idea is a little like C++ templates,
except not quite so brain-meltingly complicated.
The specific parameters for a parametric type are normally specified in square brackets, immediately after the class name. The arguments that define a particular instance of the class are likewise passed in square brackets. For example:
class Table[Class $of] {...}
class Logfile[Str $filename] {...}
module SecureOps[AuthKey $key] {...}

# and later:

sub typeset(Table of Contents $toc) {...}
    # Expects an object whose class is Table
    # and which stores Contents objects

my Logfile["./log"] $file;
    # $file can only store logfiles that log to ./log

$plaintext = SecureOps[$KEY]::decode($cryptotext);
    # Only use &decode if our $KEY entitles us to
Note that type names like
Table of Contents and
List of
Pair are really just tidier ways to say
Table[of=>Contents] and
List[of=>Pair].
By convention, when we pass an argument to the
$of parameter of
a parametric type, we're telling that type what kind of value we're expecting
it to store. For example: whenever we access an element of
List of
Pair, we expect to get back a
Pair. Similarly we could
specify
List of Int,
Array of Str, or
Hash of
Num.
Admittedly
List of Pair doesn't seem much tidier than
List(of=>Pair), but as container types get more complex, the
advantages start to become obvious. For example, consider a data structure
consisting of an array of arrays of arrays of hashes of numbers (such as one
might use to store, say, several years worth of daily climatic data). Using the
of notation that's just:
type Climate::Record ::= Array of Array of Array of Hash of Num;
Without the
of keyword, it's:
type Climate::Record ::= Array(of=>Array(of=>Array(of=>Hash(of=>Num))));
which is starting to look uncomfortably like Lisp.
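Python's typing aliases nest in much the same way as the of chain (a sketch; the subscripted built-in generics need Python 3.9 or later):

```python
# An array of arrays of arrays of hashes of numbers, as a type alias.
ClimateRecord = list[list[list[dict[str, float]]]]

record: ClimateRecord = [[[{"temp": 21.5, "rainfall": 0.3}]]]
```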
Parametric types may have any number of parameters with any names we like,
but only type parameters named
$of have special syntactic support
built into Perl.
TMTOWTDeclareI
While we're talking about type declarations, it's worth noting that we could
also have put
&part's new return type out in front (just as
we've been doing with variable and parameter types). However, this is only
allowed for subroutines when the subroutine is explicitly scoped:
# lexical subroutine
my List of Pair sub part (Selector $is_sheep, *@data) {...}
or:
# package subroutine
our List of Pair sub part (Selector $is_sheep, *@data) {...}
The return type goes between the scoping keyword (
my or
our) and the
sub keyword. And, of course, the
returns keyword is not used.
Contrariwise, we can also put variable/parameter type information
after the variable name. To do that, we use the
of
keyword:
my sub part ($is_sheep of Selector, *@data) returns List of Pair {...}
This makes sense, when you think about it. As we saw above,
of
tells the preceding container what type of value it's supposed to store, so
$is_sheep of Selector tells
$is_sheep it's supposed
to store a
Selector.
You Are What You Eat -- Not!
Careful though: we have to remember to use
of there, not
is. It would be a mistake to write:
my sub part ($is_sheep is Selector, *@data) returns List of Pair {...}
That's because Perl 6 variables and parameters can be more precisely typed than variables in most other languages. Specifically, Perl 6 allows us to specify both the storage type of a variable (i.e. what kinds of values it can contain) and the implementation class of the variable (i.e. how the variable itself is actually implemented).
The
is keyword indicates what a particular container (variable,
parameter, etc.) is — namely, how it's implemented and how it
operates. Saying:
sub bark(@dogs is Pack) {...}
specifies that, although the
@dogs parameter looks like an
Array, it's actually implemented by the
Pack class
instead.
That declaration is not specifying that the
@dogs variable stores
Pack objects. In fact,
it's not saying anything at all about what
@dogs stores. Since its
storage type has been left unspecified,
@dogs inherits the default
storage type —
Any — which allows its elements to
store any kind of scalar value.
If we'd wanted to specify that
@dogs was a normal array, but
that it can only store
Dog objects, we'd need to write:
sub bark(@dogs of Dog) {...}
and if we'd wanted it to store
Dogs but be
implemented by the
Pack class, we'd have to write:
sub bark(@dogs is Pack of Dog) {...}
Appending
is SomeType to a variable or parameter is the Perl 6
equivalent of Perl 5's
tie mechanism, except that the tying is
part of the declaration. For example:
my $Elvis is King of Rock&Roll;
rather than a run-time function call like:
# Perl 5 code...
my $Elvis;
tie $Elvis, 'King', stores => all('Rock','Roll');
In any case, the simple rule for
of vs
is is:
to say what a variable stores, use
of; to say how the variable
itself works, use
is.
Many Happy Returns
Meanwhile, we're still attempting to create a version of
&part that returns a list of pairs. The easiest way to create
and return a suitable list of pairs is to flatten a hash in a list context.
This is precisely what the
return statement does:
return *%herd;
using the splatty star. Although, in this case, we could have simply written:
return %herd;
since the declared return type (
List of Pair) automatically
imposes list context (and hence list flattening) on any
return
statement within
&part.
Of course, it will only make sense to return a flattened hash if we've
already partitioned the original data into that hash. So the bodies of the
when and
default statements inside
&part have to be changed accordingly. Now, instead of pushing
each element onto one of two separate arrays, we push each element onto one of
the two arrays stored inside
%herd:
for @data {
    when $is_sheep { push %herd{"sheep"}, $_ }
    default        { push %herd{"goats"}, $_ }
}
It Lives!!!!!
Assuming that each of the hash entries (
%herd{"sheep"} and
%herd{"goats"}) will be storing a reference to one of the two
arrays, we can simply push each data element onto the appropriate array.
In Perl 5 we'd have to dereference each of the array references inside our hash before we could push a new element onto it:
# Perl 5 code...
push @{$herd{"sheep"}}, $_;
But in Perl 6, the first parameter of
push expects an array, so
if we give it an array reference, the interpreter can work out that it needs to
dereference that first argument. So we can just write:
# Perl 6 code...
push %herd{"sheep"}, $_;
(Remember that, in Perl 6, hashes keep their
% sigil, even when
being indexed).
Initially, of course, the entries of
%herd don't contain
references to arrays at all; like all uninitialized hash entries, they contain
undef. But, because
push itself is defined like
so:
sub push (@array is rw, *@data) {...}
an actual read-writable array is expected as the first argument. If a
scalar variable containing
undef is passed to such a parameter,
Perl 6 detects the fact and autovivifies the necessary array, placing a
reference to it into the previously undefined scalar argument. That behaviour
makes it trivially easy to create subroutines that autovivify read/write
arguments, in the same way that Perl 5's
open does.
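The autovivification described here is, loosely, what collections.defaultdict gives us in Python (a sketch; the herd data is illustrative):

```python
from collections import defaultdict

# Each missing entry springs into existence as an empty list on first use,
# so we can push without pre-creating the arrays.
herd = defaultdict(list)
for animal in ["Dolly", "Shaun"]:
    herd["sheep"].append(animal)
```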
It's also possible to declare a read/write parameter that doesn't
autovivify in this way: using the
is ref trait instead of
is
rw:
sub push_only_if_real_array (@array is ref, *@data) {...}
is ref still allows the parameter to be read from and written
to, but throws an exception if the corresponding argument isn't already a real
referent of some kind.
A Label by Any Other Name
Mandating fixed labels for the two arrays being returned seems a little inflexible, so we could add another — optional — parameter via which user-selected key names could be passed...
sub part (Selector $is_sheep,
          Str ?@labels is dim(2) = <<sheep goats>>,
          *@data
         ) returns List of Pair
{
    my ($sheep, $goats) is constant = @labels;
    my %herd = ($sheep=>[], $goats=>[]);
    for @data {
        when $is_sheep { push %herd{$sheep}, $_ }
        default        { push %herd{$goats}, $_ }
    }
    return *%herd;
}
Optional parameters in Perl 6 are prefixed with a
? marker
(just as slurpy parameters are prefixed with
*). Like required
parameters, optional parameters are passed positionally, so the above example
means that the second argument is expected to be an array of strings. This has
important consequences for backwards compatibility — as we'll see
shortly.
As well as declaring it to be optional (using a leading
?), we
also declare the
@labels parameter to have exactly two elements,
by specifying the
is dim(2) trait. The
is dim trait
takes one or more integer values. The number of values it's given specifies the
number of dimensions the array has; the values themselves specify how many
elements long the array is in each dimension. For example, to create a
four-dimensional array of 7x24x60x60 elements, we'd declare it:
my @seconds is dim(7,24,60,60);
In the latest version of
&part, the
@labels is
dim(2) declaration means that
@labels is a normal
one-dimensional array, but that it has only two elements in that one
dimension.
The final component of the declaration of
@labels is the
specification of its default value. Any optional parameter may be given a
default value, to which it will be bound if no corresponding argument is
provided. The default value can be any expression that yields a value
compatible with the type of the optional parameter.
In the above version of
&part, for the sake of backwards
compatibility we make the optional
@labels default to the list of
two strings
<<sheep goats>> (using the new
Perl 6 list-of-strings syntax).
Thus if we provide an array of two strings explicitly, the two strings we
provide will be used as keys for the two pairs returned. If we don't specify
the labels ourselves,
"sheep" and
"goats" will be
used.
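Default parameter values in Python mirror the optional @labels parameter and its <<sheep goats>> default (a sketch that keeps the article's parameter order, optional labels before the slurpy data):

```python
def part(is_sheep, labels=("sheep", "goats"), *data):
    keep, rest = labels
    herd = {keep: [], rest: []}
    for item in data:
        herd[keep if is_sheep(item) else rest].append(item)
    return herd

# Explicit labels are used as the keys...
named = part(lambda s: "cat" in s, ("cat", "chattel"), "cat", "dog")
# ...otherwise the default labels apply.
default = part(lambda n: n > 0, ("sheep", "goats"), 1, -2)
```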
Name Your Poison
With the latest version of
&part defined to return named
pairs, we can now write:
@parts = part Animal::Cat, <<cat chattel>>, @animals;
    # returns:    (cat=>[...],   chattel=>[...])
    # instead of: (sheep=>[...], goats=>[...])
The first argument (
Animal::Cat) is bound to
&part's
$is_sheep parameter (as before). The
second argument (
<<cat chattel>>) is now bound to the
optional
@labels parameter, leaving the
@animals
argument to be flattened into a list and slurped up by the
@data
parameter.
We could also pass some or all of the arguments as named arguments. A named argument is simply a Perl 6 pair, where the key is the name of the intended parameter, and the value is the actual argument to be bound to that parameter. That makes sense: every parameter we ever declare has to have a name, so there's no good reason why we shouldn't be allowed to pass it an argument using that name to single it out.
An important restriction on named arguments is that they cannot come before positional arguments, or after any arguments that are bound to a slurpy array. Otherwise, there would be no efficient, single-pass way of working out which unnamed arguments belong to which parameters. Apart from that one overarching restriction (which Larry likes to think of as a zoning law), we're free to pass named arguments in any order we like. That's a huge advantage in any subroutine that takes a large number of parameters, because it means we no longer have to remember their order, just their names.
For example, using named arguments we could rewrite the above
part call as any of the following:
# Use named argument to pass optional @labels argument...
@parts = part Animal::Cat, labels => <<cat chattel>>, @animals;

# Use named argument to pass both @labels and @data arguments...
@parts = part Animal::Cat, labels => <<cat chattel>>, data => @animals;

# The order in which named arguments are passed doesn't matter...
@parts = part Animal::Cat, data => @animals, labels => <<cat chattel>>;

# Can pass *all* arguments by name...
@parts = part is_sheep => Animal::Cat, labels => <<cat chattel>>, data => @animals;

# And the order still doesn't matter...
@parts = part data => @animals, labels => <<cat chattel>>, is_sheep => Animal::Cat;

# etc.
As long as we never put a named argument before a positional argument, or after any unnamed data for the slurpy array, the named arguments can appear in any convenient order. They can even be pulled out of a flattened hash:
@parts = part *%args;
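Python keyword arguments behave the same way: once every argument is named, order stops mattering, and a dict of named arguments can be splatted in with ** much like *%args (a sketch):

```python
def part(is_sheep, labels=("sheep", "goats"), data=()):
    keep, rest = labels
    herd = {keep: [], rest: []}
    for item in data:
        herd[keep if is_sheep(item) else rest].append(item)
    return herd

a = part(lambda n: n % 2 == 0, labels=("even", "odd"), data=[1, 2, 3])
b = part(data=[1, 2, 3], labels=("even", "odd"), is_sheep=lambda n: n % 2 == 0)

args = {"labels": ("even", "odd"), "data": [1, 2, 3]}
c = part(lambda n: n % 2 == 0, **args)    # like: part *%args
```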
Who Gets the Last Piece of Cake?
We're making progress. Whether we pass its arguments by name or
positionally, our call to
part produces two partitions of the
original list. Those partitions now come back with convenient labels that we
can specify via the optional
@labels parameter.
But now there's a problem. Even though we explicitly marked it as optional,
it turns out that things can go horribly wrong if we don't actually supply that
optional argument. Which is not very "optional". Worse, it means there's
potentially a problem with every single legacy call to
part that
was coded before we added the optional parameter.
For example, consider the call:
@pets = ('Canis latrans', 'Felis sylvestris');

@parts = part /:i felis/, @pets;
    # expected to return: (sheep=>['Felis sylvestris'], goats=>['Canis latrans'])
    # actually returns:   ('Canis latrans'=>[], 'Felis sylvestris'=>[])
What went wrong?
Well, when the call to
part is matching its argument list
against
&call's parameter list, it works left-to-right as
follows:
- The first parameter ($is_sheep) is declared as a scalar of type Selector, so the first argument must be a Code or a Class or a Hash or a Rule. It's actually a Rule, so the call mechanism binds that rule to $is_sheep.
- The second parameter (?@labels) is declared as an array of two strings, so the second argument must be an array of two strings. @pets is an array of two strings, so we bind that array to @labels. (Oops!)
- The third parameter (*@data) is declared as a slurpy array, so any remaining arguments should be flattened and bound to successive elements of @data. There are no remaining arguments, so there's nothing to flatten-and-bind, so @data remains empty.
That's the problem. If we pass the arguments positionally and there are not enough of them to bind to every parameter, the parameters at the start of the parameter list are bound before those towards the end. Even if those earlier parameters are marked optional. In other words, argument binding is "greedy" and (for obvious efficiency reasons) it never backtracks to see if there might be better ways to match arguments to parameters. Which means, in this case, that our data is being preemptively "stolen" by our labels.
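The same greedy left-to-right binding bites in Python whenever an optional parameter precedes a slurpy one. This sketch mirrors the coyote-and-wildcat example above:

```python
def part(is_sheep, labels=("sheep", "goats"), *data):
    keep, rest = labels
    herd = {keep: [], rest: []}
    for item in data:
        herd[keep if is_sheep(item) else rest].append(item)
    return herd

pets = ["Canis latrans", "Felis sylvestris"]

# The two-element list is bound to labels, not data -- the data is "stolen".
oops = part(lambda s: "Felis" in s, pets)

# Disambiguating (here by passing labels explicitly) restores the intent.
ok = part(lambda s: "Felis" in s, ("sheep", "goats"), *pets)
```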
Pipeline to the Rescue!
So in general (and in the above example in particular) we need some way of indicating that a positional argument belongs to the slurpy data, not to some preceding optional parameter. One way to do that is to pass the ambiguous argument by name:
@parts = part /:i felis/, data=>@pets;
Then there can be no mistake about which argument belongs to what parameter.
But there's also a purely positional way to tell the call to
part that
@pets belongs to the slurpy
@data, not to the optional
@labels. We can pipeline
it directly there. After all, that's precisely what the pipeline operator does:
it binds the list on its blunt side to the slurpy array parameter of the call
on its sharp side. So we could just write:
@parts = part /:i felis/ <== @pets;
    # returns: (sheep=>['Felis sylvestris'], goats=>['Canis latrans'])
Because
@pets now appears on the blunt end of a pipeline,
there's no way it can be interpreted as anything other than the slurped data
for the call to
part.
A Natural Assumption
Of course, as a solution to the problem of legacy code, this is highly
sub-optimal. It requires that every single pre-existing call to
part be modified (by having a pipeline inserted). That will
almost certainly be too painful.
Our new optional labels would be much more useful if their existence itself
were also optional — if we could somehow add a single statement to the
start of any legacy code file and thereby cause
&part to work
like it used to in the good old days before labels. In other words, what we
really want is an impostor
&part subroutine that pretends that
it only has the original two parameters (
$is_sheep and
@data), but then when it's called surreptitiously supplies an
appropriate value for the new
@labels parameter and quietly calls
the real
&part.
In Perl 6, that's easy. All we need is a good curry.
We write the following at the start of the file:
use List::Part;   # Supposing &part is defined in this module

my &part ::= &List::Part::part.assuming(labels => <<sheep goats>>);
That second line is a little imposing so let's break it down. First of all:
List::Part::part
is just the fully qualified name of the
&part subroutine
that's defined in the
List::Part module (which, for the purposes
of this example, is where we're saying
&part lives). So:
&List::Part::part
is the actual
Code object corresponding to the
&part subroutine. So:
&List::Part::part.assuming(...)
is a method call on that
Code object. This is the tricky bit,
but it's no big deal really. If a
Code object really is an object,
we certainly ought to be able to call methods on it. So:
&List::Part::part.assuming(labels => <<sheep goats>>)
calls the
assuming method of the
Code object
&part and passes the
assuming method a named
argument whose name is
labels and whose value is the list of
strings
<<sheep goats>>.
Now, if we only knew what the
.assuming method did...
That About Wraps it Up
What the
.assuming(...) method does is place an anonymous
wrapper around an existing
Code object and then return a reference
to (what appears to be) an entirely separate
Code object. That new
Code object works exactly like the original — except that
the new one is missing one or more of the original's parameters.
Specifically, the parameter list of the wrapper subroutine doesn't have any
of the parameters that were named in the call to
.assuming.
Instead those missing parameters are automatically filled in whenever the new
subroutine is called, using the values of those named arguments to
.assuming.
All of which simply means that the method call:
&List::Part::part.assuming(labels => <<sheep goats>>)
returns a reference to a new subroutine that acts like this:
sub ($is_sheep, *@data) {
    return part($is_sheep, labels => <<sheep goats>>, *@data);
}
That is, because we passed a labels => <<sheep goats>> argument to .assuming, we get back a subroutine without a labels parameter, but which then just calls part and inserts the value <<sheep goats>> for the missing parameter. Or, as the code itself suggests:

    &List::Part::part.assuming(labels => <<sheep goats>>)

gives us what &List::Part::part would become under the assumption that the value of @labels is always <<sheep goats>>.
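The same idea exists in other languages as partial application. As a rough sketch, Python's functools.partial wraps an existing callable, pre-binds some of its arguments, and returns a new callable that no longer needs them; the part() function below is an illustrative stand-in for the article's &part, not its actual API.

```python
# A rough Python analogue of Perl 6's .assuming: functools.partial
# pre-binds an argument and returns a new callable without it.
from functools import partial

def part(is_sheep, data, labels=("sheep", "goats")):
    """Partition data into two lists keyed by the given pair of labels."""
    herd = {labels[0]: [], labels[1]: []}
    for item in data:
        herd[labels[0] if is_sheep(item) else labels[1]].append(item)
    return herd

# The analogue of &List::Part::part.assuming(labels => <<sheep goats>>):
# a new callable with the labels argument already filled in.
part_default = partial(part, labels=("sheep", "goats"))
```

Calling part_default(selector, data) then behaves like the wrapped two-parameter &part: the labels argument is silently supplied.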
How does that help with our source code backwards compatibility problem? It completely solves it. All we have to do is to make Perl 6 use that carefully wrapped, two-parameter version of &part in all our legacy code, instead of the full three-parameter one. To do that, we merely create a lexical subroutine of the same name and bind the wrapped version to that lexical:

    my &part ::= &List::Part::part.assuming(labels => <<sheep goats>>);
The my &part declares a lexical subroutine named &part (in exactly the same way that a my $part would declare a lexical variable named $part). The my keyword says that it's lexical and the sigil says what kind of thing it is (& for subroutine, in this case). Then we simply install the wrapped version of &List::Part::part as the implementation of the new lexical &part and we're done.
Just as lexical variables hide package or global variables of the same name, so too a lexical subroutine hides any package or global subroutine of the same name. So my &part hides the imported &List::Part::part, and every subsequent call to part(...) in the rest of the current scope calls the lexical &part instead.

Because that lexical version is bound to a label-assuming wrapper, it doesn't have a labels parameter, so none of the legacy calls to &part are broken. Instead, the lexical &part just silently "fills in" the labels parameter with the value we originally gave to .assuming.

If we needed to add another partitioning call within the scope of that lexical &part, but we wanted to use those sexy new non-default labels, we could do so by calling the actual three-parameter &part via its fully qualified name, like so:

    @parts = List::Part::part(Animal::Cat, <<cat chattel>>, @animals);
Pair Bonding
One major advantage of having &part return a list of pairs rather than a simple list of arrays is that now, instead of positional binding:

    # with original (list-of-arrays) version of &part...
    (@cats, @chattels) := part Animal::Cat <== @animals;

we can do "named binding":

    # with latest (list-of-pairs) version of &part...
    (goats=>@chattels, sheep=>@cats) := part Animal::Cat <== @animals;
Named binding???

Well, we just learned that we can bind arguments to parameters by name, but earlier we saw that parameter binding is merely an implicit form of explicit := binding. So the inevitable conclusion is that the only reason we can bind parameters by name is because := supports named binding.

And indeed it does. If a := finds a list of pairs on its righthand side, and a list of simple variables on its lefthand side, it uses named binding instead of positional binding. That is, instead of binding first to first, second to second, etc., the := uses the key of each righthand pair to determine the name of the variable on its left to which the value of the pair should be bound.
That sounds complicated, but the effect is very easy to understand:

    # Positional binding...
    ($who, $why) := ($because, "me");
    # same as:  $who := $because;  $why := "me";

    # Named binding...
    ($who, $why) := (why => $because, who => "me");
    # same as:  $who := "me";  $why := $because;
Even more usefully, if the binding operator detects a list of pairs on its left and another list of pairs on its right, it binds the value of the first pair on the right to the value of the identically named pair on the left (again, regardless of where the two pairs appear in their respective lists). Then it binds the value of the second pair on the right to the value of the identically named pair on the left, and so on.
That means we can set up a named := binding in which the names of the bound variables don't even have to match the keys of the values being bound to them:

    # Explicitly named binding...
    (who=>$name, why=>$reason) := (why => $because, who => "me");
    # same as:  $name := "me";  $reason := $because;
The most common use for that feature will probably be to create "free-standing" aliases for particular entries in a hash:

    (who=>$name, why=>$reason) := *%explanation;
    # same as:  $name := %explanation{who};  $reason := %explanation{why};

or to convert particular hash entries into aliases for other variables:

    *%details := (who=>"me", why=>$because);
    # same as:  %details{who} := "me", %details{why} := $because;
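Python has no named-binding operator, but the key idea — matching values to targets by name rather than by position — can be imitated with a mapping lookup. The bind_named() helper below is hypothetical, purely for illustration; it is not a Perl 6 feature or a Python built-in.

```python
# Sketch of key-driven (order-independent) binding via a mapping lookup.
def bind_named(pairs, *names):
    """Return the values for the given names, ignoring the order
    in which the pairs were supplied."""
    return tuple(pairs[name] for name in names)

# The order of pairs on the right doesn't matter, just as with :=
who, why = bind_named({"why": "because", "who": "me"}, "who", "why")
```

As with the := examples above, reordering the right-hand pairs changes nothing: each value finds its target by key.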
Editor's note: this document is out of date and remains here for historic interest. See Synopsis 6 for the current design information.
An Argument in Name Only
It's pretty cool that Perl 6 automatically lets us specify positional arguments — and even return values — by name rather than position.
But what if we'd prefer that some of our arguments could only be specified by name? After all, the @labels parameter isn't really in the same league as the $is_sheep parameter: it's only an option, after all, and one that most people probably won't use. It shouldn't really be a positional parameter at all.
We can specify that the labels argument is only to be passed by name...by changing the previous declaration of the @labels parameter very slightly:

    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = <<sheep goats>>,
              *@data
             ) returns List of Pair
    {
        my ($sheep, $goats) is constant = @labels;
        my %herd = ($sheep=>[], $goats=>[]);
        for @data {
            when $is_sheep { push %herd{$sheep}, $_ }
            default        { push %herd{$goats}, $_ }
        }
        return *%herd;
    }
In fact, there's only a single character's worth of difference in the whole definition. Whereas before we declared the @labels parameter like this:

    Str ?@labels is dim(2) = <<sheep goats>>

now we declare it like this:

    Str +@labels is dim(2) = <<sheep goats>>
Changing that ? prefix to a + changes @labels from an optional positional-or-named parameter to an optional named-only parameter. Now if we want to pass in a labels argument, we can only pass it by name. Attempting to pass it positionally will result in some extreme prejudice from the compiler.
Named-only parameters are still optional parameters, however, so legacy code that omits the labels:

    %parts = part Animal::Cat <== @animals;
still works fine (and still causes the @labels parameter to default to <<sheep goats>>).
Better yet, converting @labels from a positional to a named-only parameter also solves the problem of legacy code of the form:

    %parts = part Animal::Cat, @animals;

The @animals can't possibly be intended for the @labels parameter now. We explicitly specified that labels can only be passed by name, and the @animals argument isn't named.
So named-only parameters give us a clean way of upgrading a subroutine and still supporting legacy code. Indeed, in many cases the only reasonable way to add a new parameter to an existing, widely used, Perl 6 subroutine will be to add it as a named-only parameter.
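Python happens to offer a direct counterpart to named-only parameters: anything declared after a *args parameter can only be passed by keyword. The part() below is an illustrative sketch of the article's design in Python terms, not a real library API.

```python
# labels follows *data, so it is keyword-only: passing it positionally
# raises TypeError, exactly the "extreme prejudice" described above.
def part(is_sheep, *data, labels=("sheep", "goats")):
    herd = {labels[0]: [], labels[1]: []}
    for item in data:
        herd[labels[0] if is_sheep(item) else labels[1]].append(item)
    return herd
```

A legacy-style call like part(selector, item1, item2) just slurps everything into data; only an explicit labels=... keyword can reach the labels parameter.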
Careful with that Arg, Eugene!
Of course, there's no free lunch here. The cost of solving the legacy code problem is that we changed the meaning of any more recent code like this:

    %parts = part Animal::Cat, <<cat chattel>>, @animals;    # Oops!
When @labels was positional-or-named, the <<cat chattel>> argument could only be interpreted as being intended for @labels. But now, there's no way it can be for @labels (because it isn't named), so Perl 6 assumes that the list is just part of the slurped data. The two-element list will now be flattened (along with @animals), resulting in a single list that is then bound to the @data parameter, as if we'd written:

    %parts = part Animal::Cat <== 'cat', 'chattel', @animals;
This is yet another reason why named-only should probably be the first choice for optional parameters.
Temporal Life Insurance
Being able to add named-only parameters to existing subroutines is an important way of future-proofing any calls to the subroutine. So long as we continue to add only named-only parameters to &part, the order in which the subroutine expects its positional and slurpy arguments will be unchanged, so every existing call to part will continue to work correctly.
Curiously, the reverse is also true. Named-only parameters also provide us with a way to "history-proof" subroutine calls. That is, we can allow a subroutine to accept named arguments that it doesn't (yet) know how to handle! Like so:
    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = <<sheep goats>>,
              *%extras,    # <-- NEW PARAMETER ADDED HERE
              *@data,
             ) returns List of Pair
    {
        # Handle extras...
        carp "Ignoring unknown named parameter '$_'" for keys %extras;

        # Remainder of subroutine as before...
        my ($sheep, $goats) is constant = @labels;
        my %herd = ($sheep=>[], $goats=>[]);
        for @data {
            when $is_sheep { push %herd{$sheep}, $_ }
            default        { push %herd{$goats}, $_ }
        }
        return *%herd;
    }

    # and later...

    %parts = part Animal::Cat, labels=><<Good Bad>>, max=>3, @data;

    # warns: "Ignoring unknown parameter 'max' at future.pl, line 19"
The *%extras parameter is a "slurpy hash". Just as the slurpy array parameter (*@data) sucks up any additional positional arguments for which there's no explicit parameter, a slurpy hash sucks up any named arguments that are unaccounted for. In the above example, for instance, &part has no $max parameter, so passing the named argument max=>3 would normally produce a (compile-time) exception:

    Invalid named parameter ('max') in call to &part

However, because &part now has a slurpy hash, that extraneous named argument is simply bound to the appropriate entry of %extras and (in this example) used to generate a warning.
The more common use of such slurpy hashes is to capture the named arguments that are passed to an object constructor and have them automatically forwarded to the constructors of the appropriate ancestral classes. We'll explore that technique in Exegesis 12.
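The slurpy-hash pattern maps cleanly onto Python's **kwargs, which collects any named arguments no declared parameter accounts for, so unknown options can warn instead of raising TypeError. The sketch below uses illustrative names, assuming the same part() design as earlier examples.

```python
# **extras plays the role of *%extras: it slurps unrecognized named
# arguments so the call still succeeds, and we merely warn about them.
import warnings

def part(is_sheep, *data, labels=("sheep", "goats"), **extras):
    for name in extras:
        warnings.warn(f"Ignoring unknown named parameter '{name}'")
    herd = {labels[0]: [], labels[1]: []}
    for item in data:
        herd[labels[0] if is_sheep(item) else labels[1]].append(item)
    return herd
```

A caller passing a not-yet-supported option such as max=3 gets a warning rather than an error, which is exactly the "history-proofing" effect described above.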
The Greatest Thing Since Sliced Arrays
So far we've progressively extended &part from the first simple version that only accepted subroutines as selectors, to the most recent versions that can now also use classes, rules, or hashes to partition their data.

Suppose we also wanted to allow the user to specify a list of integer indices as the selector, and thereby allow &part to separate a slice of data from its "anti-slice". In other words, instead of:

    %data{2357}  = [ @data[2,3,5,7] ];
    %data{other} = [ @data[0,1,4,6,8..@data-1] ];

we could write:

    %data = part [2,3,5,7], labels=>["2357","other"], @data;

We could certainly extend &part to do that:
    type Selector ::= Code | Class | Rule | Hash | (Array of Int);

    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = <<sheep goats>>,
              *@data
             ) returns List of Pair
    {
        my ($sheep, $goats) is constant = @labels;
        my %herd = ($sheep=>[], $goats=>[]);
        if $is_sheep.isa(Array of Int) {
            for @data.kv -> $index, $value {
                if $index == any($is_sheep) { push %herd{$sheep}, $value }
                else                        { push %herd{$goats}, $value }
            }
        }
        else {
            for @data {
                when $is_sheep { push %herd{$sheep}, $_ }
                default        { push %herd{$goats}, $_ }
            }
        }
        return *%herd;
    }

    # and later, if there's a prize for finishing 1st, 2nd, 3rd, or last...

    %prize = part [0, 1, 2, @horses-1],
                  labels => << placed also_ran >>,
                  @horses;
Note that this is the first time we couldn't just add another class to the Selector type and rely on the smart-match inside the when to work out how to tell "sheep" from "goats". The problem here is that when the selector is an array of integers, the value of each data element no longer determines its sheepishness/goatility. It's now the element's position (i.e. its index) that decides its fate. Since our existing smart-match compares values, not positions, the when can't pick out the right elements for us. Instead, we have to consider both the index and the value of each data element.

To do that we use the @data array's .kv method. Just as calling the .kv method on a hash returns key, value, key, value, key, value, etc., so too calling the .kv method on an array returns index, value, index, value, index, value, etc. Then we just use a parameterized block as our for block, specifying that it has two arguments. That causes the for to grab two elements of the list it's iterating (i.e. one index and one value) on each iteration.
Then we simply test to see if the current index is any of those specified in $is_sheep's array and, if so, we push the corresponding value:

    for @data.kv -> $index, $value {
        if $index == any(@$is_sheep) { push %herd{$sheep}, $value }
        else                         { push %herd{$goats}, $value }
    }
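The .kv idiom — iterating index/value pairs — corresponds to Python's enumerate(). As a sketch (with an illustrative function name, not the article's API), partitioning a slice from its "anti-slice" by position looks like this:

```python
# Partition data by index membership rather than by element value,
# mirroring the index/value iteration that @data.kv provides.
def part_by_index(indices, data, labels=("sheep", "goats")):
    wanted = set(indices)              # fast membership test, like any(...)
    herd = {labels[0]: [], labels[1]: []}
    for index, value in enumerate(data):
        herd[labels[0] if index in wanted else labels[1]].append(value)
    return herd
```

Note that the selector never inspects the values at all; only each element's position decides its fate, which is exactly why the value-comparing smart-match couldn't do this job.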
A Parting of the ... err ... Parts
That works okay, but it's not perfect. In fact, as it's presented above, the &part subroutine is now both an ugly solution and an inefficient one.

It's ugly because &part is now twice as long as it was before. The two branches of control-flow within it are similar in form but quite different in function. One partitions the data according to the contents of a datum; the other, according to a datum's position in @data.

It's inefficient because it effectively tests the type of the selector argument twice: once (implicitly) when it's first bound to the $is_sheep parameter, and then again (explicitly) in the call to .isa.
It would be cleaner and more maintainable to break these two nearly unrelated behaviours out into separate subroutines. And it would be more efficient if we could select between those two subroutines by testing the type of the selector only once.
Of course, in Perl 6 we can do just that — with a multisub.
What's a multisub? It's a collection of related subroutines (known as "variants"), all of which have the same name but different parameter lists. When the multisub is called and passed a list of arguments, Perl 6 examines the types of the arguments, finds the variant with the same name and the most compatible parameter list, and calls that variant.
By the way, you might be more familiar with the term multimethod. A multisub is a multiply dispatched subroutine, in the same way that a multimethod is a multiply dispatched method. There'll be much more about those in Exegesis 12.
Multisubs provide facilities something akin to function overloading in C++. We set up several subroutines with the same logical name (because they implement the same logical action). But each takes a distinct set of argument types and does the appropriate things with those particular arguments.
However, multisubs are more "intelligent" than mere overloaded subroutines. With overloaded subroutines, the compiler examines the compile-time types of the subroutine's arguments and hard codes a call to the appropriate variant based on that information. With multisubs, the compiler takes no part in the variant selection process. Instead, the interpreter decides which variant to invoke at the time the call is actually made. It does that by examining the run-time type of each argument, making use of its inheritance relationships to resolve any ambiguities.
To see why a run-time decision is better, consider the following code:
    class Lion is Cat {...}     # Lion inherits from Cat

    multi sub feed(Cat $c)  { pat $c; my $glop = open 'Can'; spoon_out($glop); }
    multi sub feed(Lion $l) { $l.stalk($prey) and kill; }

    my Cat $fluffy = Lion.new;

    feed($fluffy);
In Perl 6, the call to feed will correctly invoke the second variant because the interpreter knows that $fluffy actually contains a reference to a Lion object at the time the call is made (even though the nominal type of the variable is Cat).

If Perl 6 multisubs worked like C++'s function overloading, the call to feed($fluffy) would invoke the first version of feed, because all that the compiler knows for sure at compile-time is that $fluffy is declared to store Cat objects. That's precisely why Perl 6 doesn't do it that way. We prefer to leave the hand-feeding of lions to other languages.
Many Parts
As the above example shows, in Perl 6, multisub variants are defined by prepending the sub keyword with another keyword: multi. The parameters that the interpreter is going to consider when deciding which variant to call are specified to the left of a colon (:), with any other parameters specified to the right. If there is no colon in the parameter list (as above), all the parameters are considered when deciding which variant to invoke.

We could re-factor the most recent version of &part like so:
    type Selector ::= Code | Class | Rule | Hash;

    multi sub part (Selector $is_sheep:
                    Str +@labels is dim(2) = <<sheep goats>>,
                    *@data
                   ) returns List of Pair
    {
        my ($sheep, $goats) is constant = @labels;
        my %herd = ($sheep=>[], $goats=>[]);
        for @data {
            when $is_sheep { push %herd{$sheep}, $_ }
            default        { push %herd{$goats}, $_ }
        }
        return *%herd;
    }

    multi sub part (Int @sheep_indices:
                    Str +@labels is dim(2) = <<sheep goats>>,
                    *@data
                   ) returns List of Pair
    {
        my ($sheep, $goats) is constant = @labels;
        my %herd = ($sheep=>[], $goats=>[]);
        for @data.kv -> $index, $value {
            if $index == any(@sheep_indices) { push %herd{$sheep}, $value }
            else                             { push %herd{$goats}, $value }
        }
        return *%herd;
    }
Here we create two variants of a single multisub named &part. The first variant will be invoked whenever &part is called with a Selector object as its first argument (that is, when it is passed a Code or Class or Rule or Hash object as its selector). The second variant will be invoked only if the first argument is an Array of Int. If the first argument is anything else, an exception will be thrown.

Notice how similar the body of the first variant is to the earlier subroutine versions. Likewise, the body of the second variant is almost identical to the if branch of the previous (subroutine) version.
Notice too how the body of each variant only has to deal with the particular type of selector that its first parameter specifies. That's because the interpreter has already determined what type of thing the first argument was when deciding which variant to call. A particular variant will only ever be called if the first argument is compatible with that variant's first parameter.
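Python offers a single-argument flavour of this run-time dispatch in functools.singledispatch: the variant is chosen at call time, from the actual class of the first argument, honouring inheritance. The Cat/Lion classes below mirror the earlier example; the return strings are purely illustrative.

```python
# Run-time single dispatch: the variant is selected from the actual
# class of the argument, not from any declared (nominal) type.
from functools import singledispatch

class Cat:
    pass

class Lion(Cat):            # Lion inherits from Cat
    pass

@singledispatch
def feed(animal):
    raise TypeError(f"no feed variant for {type(animal).__name__}")

@feed.register
def _(animal: Lion):
    return "stalk and kill"

@feed.register
def _(animal: Cat):
    return "open a can"

fluffy = Lion()             # nominally "a Cat", actually a Lion
```

Calling feed(fluffy) picks the Lion variant, just as the Perl 6 interpreter would; a compile-time overload resolution working only from a Cat declaration could not.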
Call Me Early
Suppose we wanted more control over the default labels that &part uses for its return values. For example, suppose we wanted to be able to prompt the user for the appropriate defaults — before the program runs.
The default value for an optional parameter can be any valid Perl expression whose result is compatible with the type of the parameter. We could simply write:

    my Str @def_labels;
    BEGIN {
        print "Enter 2 default labels: ";
        @def_labels = split(/\s+/, <>, 3).[0..1];
    }

    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = @def_labels,
              *@data
             ) returns List of Pair
    {
        # body as before
    }
We first define an array variable:

    my Str @def_labels;

This will ultimately serve as the expression that the @labels parameter uses as its default:

    Str +@labels is dim(2) = @def_labels

Then we merely need a BEGIN block (so that it runs before the program starts) in which we prompt for the required information:

    print "Enter 2 default labels: ";

read it in:

    <>

split the input line into three pieces using whitespace as a separator:

    split(/\s+/, <>, 3)

grab the first two of those pieces:

    split(/\s+/, <>, 3).[0..1]

and assign them to @def_labels:

    @def_labels = split(/\s+/, <>, 3).[0..1];
We're now guaranteed that @def_labels has the necessary default labels before &part is ever called.
Core Breach
Built-ins like &split can also be given named arguments in Perl 6, so, alternatively, we could write the BEGIN block like so:

    BEGIN {
        print "Enter 2 default labels: ";
        @def_labels = split(str=><>, max=>3).[0..1];
    }
Here we're leaving out the split pattern entirely and making use of &split's default split-on-whitespace behaviour.

Incidentally, an important goal of Perl 6 is to make the language powerful enough to natively implement all its own built-ins. We won't actually implement it that way, since screamingly fast performance is another goal, but we do want to make it easy for anyone to create their own versions of any Perl built-in or control structure. So, for example, &split would be declared like this:

    sub split( Rule|Str ?$sep = /\s+/,
               Str      ?$str = $CALLER::_,
               Int      ?$max = Inf
             )
    {
        # implementation here
    }
Note first that every one of &split's parameters is optional, and that the defaults are the same as in Perl 5. If we omit the separator pattern, the default separator is whitespace; if we omit the string to be split, &split splits the caller's $_ variable; if we omit the "maximum number of pieces to return" argument, there is no upper limit on the number of splits that may be made.

Note that we can't just declare the second parameter like so:

    Str ?$str = $_,

That's because, in Perl 6, the $_ variable is lexical (not global), so a subroutine doesn't have direct access to the $_ of its caller. That means that Perl 6 needs a special way to access a caller's $_.

That special way is via the CALLER:: namespace. Writing $CALLER::_ gives us access to the $_ of whatever scope called the current subroutine. This works for other variables too ($CALLER::foo, @CALLER::bar, etc.) but is rarely useful, since we're only allowed to use CALLER:: to access variables that already exist, and $_ is about the only variable that a subroutine can rely upon to be present in any scope it might be called from.
A Constant Source of Joy
Setting up the @def_labels array at compile-time and then using it as the default for the @labels parameter works fine, but there's always the chance that the array might somehow be accidentally reassigned later. If that's not desirable, then we need to make the array a constant. In Perl 6 that looks like this:

    my @def_labels is constant = BEGIN {
        print "Enter 2 default labels: ";
        split(/\s+/, <>, 3).[0..1];
    };
The is constant trait is the way we prevent any Perl 6 variable from being reassigned after it's been declared. It effectively replaces the STORE method of the variable's implementation with one that throws an exception whenever it's called. It also instructs the compiler to keep an eye out for compile-time-detectable modifications to the variable and die violently if it finds any.

Whenever a variable is declared is constant it must be initialized as part of its declaration. In this case we use the return value of a BEGIN block as the initializer value.

Oh, by the way, BEGIN blocks have return values in Perl 6. Specifically, they return the value of the last statement executed inside them (just like a Perl 5 do or eval block does, except that BEGINs do it at compile-time).

In the above example the result of the BEGIN is the return value of the call to split. So @def_labels is initialized to the two default labels, which cannot thereafter be changed.
BEGIN at the Scene of the Crime
Of course, the @def_labels array is really just a temporary storage facility for transferring the results of the BEGIN block to the default value of the @labels parameter. We could easily do away with it entirely, by simply putting the BEGIN block right there in the parameter list:

    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = BEGIN {
                  print "Enter 2 default labels: ";
                  split(/\s+/, <>, 3).[0..1];
              },
              *@data
             ) returns List of Pair
    {
        # body as before
    }
And that works fine.
Macro Biology
The only problem is that it's ugly, brutish, and not at all short. If only there were some way of calling the BEGIN block at that point without having to put the actual BEGIN block at that point....

Well, of course there is such a way. In Perl 6 a block is just a special kind of nameless subroutine... and a subroutine is just a special name-ful kind of block. So it shouldn't really come as a surprise that BEGIN blocks have a name-ful, subroutine-ish counterpart. They're called macros, and they look and act very much like ordinary subroutines, except that they run at compile-time.

So, for example, we could create a compile-time subroutine that requests and returns our user-specified labels:
    macro request(int $n, Str $what) returns List of Str {
        print "Enter $n $what: ";
        my @def_labels = split(/\s+/, <>, $n+1);
        return { @def_labels[0..$n-1] };
    }

    # and later...

    sub part (Selector $is_sheep,
              Str +@labels is dim(2) = request(2,"default labels"),
              *@data
             ) returns List of Pair
    {
        # body as before
    }
Calls to a macro are invoked during compilation (not at run-time). In fact, like a BEGIN block, a macro call is executed as soon as the parser has finished parsing it. So, in the above example, when the parser has parsed the declaration of the @labels parameter and then the = sign indicating a default value, it comes across what looks like a subroutine call. As soon as it has parsed that subroutine call (including its argument list) it will detect that the subroutine &request is actually a macro, so it will immediately call &request with the specified arguments (2 and "default labels").
Whenever a macro like &request is invoked, the parser itself intercepts the macro's return value and integrates it somehow back into the parse tree it is in the middle of building. If the macro returns a block — as &request does in the above example — the parser extracts the contents of that block and inserts the parse tree of those contents into the program's parse tree. In other words, if a macro returns a block, a precompiled version of whatever is inside the block replaces the original macro call.

Alternatively, a macro can return a string. In that case, the parser inserts that string back into the source code in place of the macro call and then reparses it. This means we could also write &request like this:
    macro request(int $n, Str $what) returns List of Str {
        print "Enter $n $what: ";
        return "<< @(split(/\s+/, <>, $n+1).[0..$n-1]) >>";
    }
in which case it would return a string containing the characters "<<", followed by the two labels that the request call reads in, followed by the closing double angles. The parser would then substitute that string in place of the macro call, discover it was a <<...>> word list, and use that list as the default labels.
Macros for BEGIN-ners
Macros are enormously powerful. In fact, in Perl 6, we could implement the functionality of BEGIN itself using a macro:
    macro MY_BEGIN (&block) {
        my $context = want;
        if $context ~~ List {
            my @values = block();
            return { *@values };
        }
        elsif $context ~~ Scalar {
            my $value = block();
            return { $value };
        }
        else {
            block();
            return;
        }
    }
The MY_BEGIN macro declares a single parameter (&block). Because that parameter is specified with the Code sigil (&), the macro requires that the corresponding argument must be a block or subroutine of some type. Within the body of &MY_BEGIN that argument is bound to the lexical subroutine &block (just as a $foo parameter would bind its corresponding argument to a lexical scalar variable, or a @foo parameter would bind its argument to a lexical array).

&MY_BEGIN then calls the want function, which is Perl 6's replacement for wantarray. want returns a scalar value that simultaneously represents any of the contexts in which the current subroutine was called. In other words, it returns a disjunction of various classes. We then compare that context information against the three possibilities — List, Scalar, and (by elimination) Void.
If MY_BEGIN was called in a list context, we evaluate its block/closure argument in a list context, capture the results in an array (@values), and then return a block containing the contents of that array flattened back to a list. In a scalar context we do much the same thing, except that MY_BEGIN's argument is evaluated in scalar context and a block containing that scalar result is returned. In a void context (the only remaining possibility), the argument is simply evaluated and nothing is returned.

In the first two cases, returning a block causes the original macro call to be replaced by a parse tree, specifically, the parse tree representing the values that resulted from executing the original block passed to MY_BEGIN.
In the final case — a void context — the compiler isn't expecting to replace the macro call with anything, so it doesn't matter what we return, just as long as we evaluate the block. The macro call itself is simply eliminated from the final parse-tree.
Note that MY_BEGIN could be written more concisely than it was above, by taking advantage of the smart-matching behaviour of a switch statement:
    macro MY_BEGIN (&block) {
        given want {
            when List   { my @values = block(); return { *@values }; }
            when Scalar { my $value  = block(); return { $value };   }
            when Void   { block(); return }
        }
    }
A Macro by Any Other Syntax ...
Because macros are called by the parser, it's possible to have them interact with the parser itself. In particular, it's possible for a macro to tell the parser how the macro's own argument list should be parsed.
For example, we could give the &request macro its own non-standard argument syntax, so that instead of calling it as:

    request(2,"default labels")

we could just write:

    request(2 default labels)

To do that we'd define &request like so:
    macro request(int $n, Str $what)
        is parsed( /:w \( (\d+) (.*?) \) / )
        returns List of Str
    {
        print "Enter $n $what: ";
        my @def_labels = split(/\s+/, <>, $n+1);
        return { @def_labels[0..$n-1] };
    }
The is parsed trait tells the parser what to look for immediately after it encounters the macro's name. In the above example, the parser is told that, after encountering the sequence "request" it should expect to match the pattern:

    / :w      # Allow whitespace between the tokens
      \(      # Match an opening paren
      (\d+)   # Capture one-or-more digits
      (.*?)   # Capture everything else up to...
      \)      # ...a closing paren
    /
Note that the one-or-more-digits and the anything-up-to-paren bits of the pattern are in capturing parentheses. This is important because the list of substrings that an is parsed pattern captures is then used as the argument list to the macro call. The captured digits become the first argument (which is then bound to the $n parameter) and the captured "everything else" becomes the second argument (and is bound to $what).

Normally, of course, we don't need to specify the is parsed trait when setting up a macro. Since a macro is a kind of subroutine, by default its argument list is parsed the same as any other subroutine's — as a comma-separated list of Perl 6 expressions.
Refactoring Parameter Lists
By this stage, you might be justified in feeling that &part's parameter list is getting just a leeeeettle too sophisticated for its own good. Moreover, if we were using the multisub version, that complexity would have to be repeated in every variant.

Philosophically though, that's okay. The later versions of &part are doing some fairly sophisticated things, and the complexity required to achieve that has to go somewhere. Putting that extra complexity in the parameter list means that the body of &part stays much simpler, as do any calls to &part.
That's the whole point: Complexify locally to simplify globally. Or maybe: Complexify declaratively to simplify procedurally.
But there's precious little room for the consolations of philosophy when you're swamped in code and up to your assembler in allomorphism. So, rather than having to maintain those complex and repetitive parameter lists, we might prefer to factor out the common infrastructure. With, of course, yet another macro:
    macro PART_PARAMS {
        my ($sheep,$goats) = request(2 default labels);
        return "Str +\@labels is dim(2) = <<$sheep $goats>>, *\@data";
    }

    multi sub part (Selector $is_sheep, PART_PARAMS) {
        # body as before
    }

    multi sub part (Int @is_sheep, PART_PARAMS) {
        # body as before
    }
Here we create a macro named &PART_PARAMS that requests and extracts the default labels and then interpolates them into a string, which it returns. That string then replaces the original macro call.

Note that we reused the &request macro within the &PART_PARAMS macro. That's important, because it means that, as the body of &PART_PARAMS is itself being parsed, the default names are requested and interpolated into &PART_PARAMS's code. That ensures that the user-supplied default labels are hardwired into &PART_PARAMS even before it's compiled. So every subsequent call to PART_PARAMS will return the same default labels.
On the other hand, if we'd written &PART_PARAMS like this:
    macro PART_PARAMS {
        print "Enter 2 default labels: ";
        my ($sheep,$goats) = split(/\s+/, <>, 3);
        return "*\@data, Str +\@labels is dim(2) = <<$sheep $goats>>";
    }
then each time we used the &PART_PARAMS macro in our code, it would re-prompt for the labels. So we could give each variant of &part its own default labels. Either approach is fine, depending on the effect we want to achieve. It's really just a question of how much work we're willing to put in in order to be Lazy.
Smooth Operators
By now it's entirely possible that your head is spinning with the sheer number of ways Perl 6 lets us implement the &part subroutine. Each of those ways represents a different tradeoff in power, flexibility, and maintainability of the resulting code. It's important to remember that, however we choose to implement &part, it's always invoked in basically the same way:
%parts = part $selector, @data;
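For readers who don't speak Perl, the core behaviour that every one of these implementations shares can be sketched in Python. The name part, the selector idea, and the "sheep"/"goats" labels come from the article; everything else here is an illustrative assumption, not the Perl 6 semantics:

```python
def part(selector, data, labels=("sheep", "goats")):
    # Decide how membership is tested: a type, a predicate, or a container.
    if isinstance(selector, type):
        test = lambda item: isinstance(item, selector)
    elif callable(selector):
        test = selector
    elif hasattr(selector, "__contains__"):
        test = lambda item: item in selector
    else:
        raise TypeError("unsupported selector")

    herd = {labels[0]: [], labels[1]: []}
    for item in data:
        herd[labels[0] if test(item) else labels[1]].append(item)
    return herd

parts = part(lambda n: n % 2 == 0, [1, 2, 3, 4, 5])
print(parts)  # {'sheep': [2, 4], 'goats': [1, 3, 5]}
```

The single dispatch table here stands in for what the Perl 6 versions achieve with multisub variants and typed parameters.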
Sure, some of the above techniques let us modify the return labels, or control the use of named vs positional arguments. But with all of them, the call itself starts with the name of the subroutine, after which we specify the arguments.
Let's change that too!
Suppose we preferred to have a partitioning operator, rather than a subroutine. If we ignore those optional labels, and restrict our list to be an actual array, we can see that the core partitioning operation is binary ("apply this selector to that array").
If &part is to become an operator, we need it to be a binary operator. In Perl 6 we can make up completely new operators, so let's take our partitioning inspiration from Moses and call our new operator: ~|_|~
We'll assume that this "Red Sea" operator is to be used like this:
%parts = @animals ~|_|~ Animal::Cat;
The left operand is the array to be partitioned and the right operand is the selector. To implement it, we'd write:
    multi sub infix:~|_|~ (@data, Selector $is_sheep)
        is looser(&infix:+)
        is assoc('non')
    {
        return part $is_sheep, @data;
    }
Operators are often overloaded with multiple variants (as we'll soon see), so we typically implement them as multisubs. However, it's also perfectly possible to implement them as regular subroutines, or even as macros.
To distinguish a binary operator from a regular multisub, we give it a special compound name, composed of the keyword infix: followed by the characters that make up the operator's symbol. These characters can be any sequence of non-whitespace Unicode characters (except left parenthesis, which can only appear if it's the first character of the symbol). So instead of ~|_|~ we could equally well have named our partitioning operator any of:
    infix:¥ infix:¦ infix:^%#$! infix:<-> infix:∇
The infix: keyword tells the compiler that the operator is placed between its operands (as binary operators always are). If we're declaring a unary operator, there are three other keywords that can be used instead: prefix:, postfix:, or circumfix:. For example:
    sub prefix:± (Num $n) is equiv(&infix:+) {
        return +$n|-$n
    }

    sub postfix:² (Num $n) is tighter(&infix:**) {
        return $n**2
    }

    sub circumfix:⌊...⌋ (Num $n) {
        return POSIX::floor($n)
    }

    # and later...

    $error = ±⌊$x²⌋;
The is tighter, is looser, and is equiv traits tell the parser what the precedence of the new operator will be, relative to existing operators: namely, whether the operator binds more tightly than, less tightly than, or with the same precedence as the operator named in the trait. Every operator has to have a precedence and associativity, so every operator definition has to include one of these three traits.
The is assoc trait is only required on infix operators and specifies whether they chain to the left (like +), to the right (like =), or not at all (like ..). If the trait is not specified, the operator takes its associativity from the operator that's specified in the is tighter, is looser, or is equiv trait.
Arguments Both Ways
On the other hand, we might prefer that the selector come first (as it does in &part):
%parts = Animal::Cat ~|_|~ @animals;
in which case we could just add:
    multi sub infix:~|_|~ (Selector $is_sheep, @data)
        is equiv( &infix:~|_|~(Array,Selector) )
    {
        return part $is_sheep, @data;
    }
so now we can specify the selector and the data in either order.
Because the two variants of the &infix:~|_|~ multisub have different parameter lists (one is (Array,Selector), the other is (Selector,Array)), Perl 6 always knows which one to call. If the left operand is a Selector, the &infix:~|_|~(Selector,Array) variant is called. If the left operand is an array, the &infix:~|_|~(Array,Selector) variant is invoked.
Note that, for this second variant, we specified is equiv instead of is tighter or is looser. This ensures that the precedence and associativity of the second variant are the same as those of the first. That's also why we didn't need to specify an is assoc.
Parting Is Such Sweet Sorrow
Phew. Talk about "more than one way to do it"!
But don't be put off by these myriad new features and alternatives. The vast majority of them are special-purpose, power-user techniques that you may well never need to use or even know about.
For most of us it will be enough to know that we can now add a proper parameter list, with sensibly named parameters, to any subroutine. What we used to write as:
sub feed { my ($who, $how_much, @what) = @_; ... }
we now write as:
sub feed ($who, $how_much, *@what) { ... }
or, when we're feeling particularly cautious:
sub feed (Str $who, Num $how_much, Food *@what) { ... }
Just being able to do that is a huge win for Perl 6.
Parting Shot
By the way, here's (most of) that same partitioning functionality implemented in Perl 5:
    # Perl 5 code...
    sub part {
        my ($is_sheep, $maybe_flag_or_labels, $maybe_labels, @data) = @_;
        my ($sheep, $goats);
        if ($maybe_flag_or_labels eq "labels" && ref $maybe_labels eq 'ARRAY') {
            ($sheep, $goats) = @$maybe_labels;
        }
        elsif (ref $maybe_flag_or_labels eq 'ARRAY') {
            unshift @data, $maybe_labels;
            ($sheep, $goats) = @$maybe_flag_or_labels;
        }
        else {
            unshift @data, $maybe_flag_or_labels, $maybe_labels;
            ($sheep, $goats) = qw(sheep goats);
        }
        my $arg1_type = ref($is_sheep) || 'CLASS';
        my %herd;
        if ($arg1_type eq 'ARRAY') {
            for my $index (0..$#data) {
                my $datum = $data[$index];
                my $label = grep({$index==$_} @$is_sheep) ? $sheep : $goats;
                push @{$herd{$label}}, $datum;
            }
        }
        else {
            croak "Invalid first argument to &part"
                unless $arg1_type =~ /^(Regexp|CODE|HASH|CLASS)$/;
            for (@data) {
                if (   $arg1_type eq 'Regexp' && /$is_sheep/
                    || $arg1_type eq 'CODE'   && $is_sheep->($_)
                    || $arg1_type eq 'HASH'   && $is_sheep->{$_}
                    || UNIVERSAL::isa($_,$is_sheep)
                ) {
                    push @{$herd{$sheep}}, $_;
                }
                else {
                    push @{$herd{$goats}}, $_;
                }
            }
        }
        return map {bless {key=>$_,value=>$herd{$_}},'Pair'} keys %herd;
    }
... which is precisely why we're developing Perl 6.
Source: http://www.perl.com/pub/2003/07/
Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel
- Posted: Feb 19, 2008 at 11:00 AM
Noooo! Leave C# alone! One of the reasons C# is nice and easy to learn is that its core keyword set is quite compact. If you start adding lots of keywords here, there and everywhere, the language gets out of hand quickly.
On the other hand, there's no reason why a compiler shouldn't be able to take the same keyword "for" to mean both the normal synchronous meaning when the for-body is expensive, and the Parallel.For() when the body can be easily split into asynchronous tasks, and have this as an option inside the build menu.
Has anyone considered a "debug" switch to run with n queues, regardless of how many processors are available? I see a bad scenario coming.
1. Developer is on a 1-2 core box, or a many core box with enough other junk (VS, virus scanner, outlook, IE, whatever else) so that unbeknownst to the programmer, there is no real parallelism.
2. Dev signs off on the code that has never been tested running truly parallel.
3. Code gets loaded on the mega-behemoth 16 core production machine that's not running anything else.
4. Code runs in parallel for the first time in production. Or even worse, it's shrink-wrap software; and next year when bigger machines come out, more parallelism is exposed and the software starts failing randomly.
Most developers would consider #4 to be a very bad thing, but I find it inevitable. My dev box is usually running 3-6+ apps when I'm developing, and therefore during most of my testing. If I can't force the code to be parallel in testing, the interleaved paths might get very little coverage, and it will be very hard to know this is happening.
Just curious if this has been thought about?
to #4
I understand you can have many apps running on your dev box, but as long as processor core utilization is not 100% you should be able to successfully schedule your tasks on that core. It is similar to a load-balancing technique or time-compression utilization. So your parallel code will experience a parallel run-time environment before production; it could be slower, though. So I do think Daan overstated the point (perhaps intentionally) about automatic/implicit parallelism. It is true that many kinds of computations can be automatically run in parallel with little-to-no input from the developer. When might this be possible? When a computation is guaranteed to be free of side-effects and thread-affinity.
This already commonly applies to specialized frameworks and domain-specific languages. Big hammer APIs like parsing an XML document or compressing a large stream of data also immediately come to mind. Functional programming as a broader class of automatically parallelizable computations is an interesting one, but is not a silver bullet. Mostly-functional languages are more popular than purely-functional ones; F# and LISP, for example, permit "silent" side-effects buried within otherwise pure computations, which means you can't really rely on their absence anywhere.
Haskell and Miranda are two examples from a very small set of purely functional languages, where all "silent" imperative effects are disallowed, but for certain type system accommodations (monads), in which implicit parallelism is possible. This allows you to at least know when parallelism might be dangerous, and it's the exception rather than the rule. But even here, many real-world programs are constrained by data and control dependence. You might be interested in John DeTreville's brief case study on this fact.
Nevertheless, implicit and automatic parallelism are clearly of interest to researchers in the field. I think what Daan was trying to say is that we're still a few years away from having a more general solution. Between now and then, however, I would expect to see some specialized frameworks providing this; heck, just look at MATLAB and SQL for examples where this has already succeeded.
Regards,
---joe
Your proposed syntax relies on the 1st pass of SEH, but can be written directly in IL or VB (since they support filters). C# doesn't support them and, to be honest, I'm glad they don't. We did consider this model to make AggregateExceptions more palatable, but for various reasons we don't think it would make a huge difference. Moreover, the 2-pass model of SEH is problematic and so we would prefer not to embellish it.
I should restate a point from the video: we encourage developers, to the best of their ability, to prevent exceptions from leaking across parallel boundaries. Life simply remains a lot easier. Once the crossing is possible, you need to deal with AggregateExceptions, which is a bit like stepping through a wormhole: you end up in a completely different part of the universe with little chance of getting back to your origin.
The real issue is that with one-level deep examples like the one you show, you can certainly figure out how to pick out the exceptions you care about, handle them, etc. We even offer the Handle API for this:
try {
... parallel code ...
} catch (AggregateException ae) {
ae.Handle(delegate(Exception e) {
if (e is FooException) {
... handle it ...
return true;
}
return false;
});
}
If, after running the delegate on all exceptions, there are any for which the delegate returned 'false' (i.e. unhandled), Handle rethrows a new AggregateException. I admit, this code is a tad ugly, but even with 1st pass support you'd have to do something like this. (Unless SEH knew to deliver only the exceptions that were chosen in the 1st pass selection criteria, which would require yet more machinery.) But the issue is, what if you handle some FooExceptions, but leave some BarExceptions in there? Again, those up the callstack will see AggregateExceptions and will need to have known to write the whacky code I show above.
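The Handle pattern Joe describes isn't tied to C#. As a rough illustration in Python, here is the same idea with a hypothetical AggregateError class standing in for AggregateException; this is a sketch of the pattern, not the TPL API:

```python
class AggregateError(Exception):
    """Rough stand-in for TPL's AggregateException (hypothetical class)."""

    def __init__(self, errors):
        super().__init__(f"{len(errors)} task error(s)")
        self.errors = list(errors)

    def handle(self, predicate):
        # Re-raise a new AggregateError holding whatever the predicate
        # declined to handle, mirroring the Handle API described above.
        unhandled = [e for e in self.errors if not predicate(e)]
        if unhandled:
            raise AggregateError(unhandled)

leftover = []
try:
    try:
        # Pretend two tasks failed concurrently.
        raise AggregateError([ValueError("foo"), KeyError("bar")])
    except AggregateError as ae:
        # Handle only the ValueError; the KeyError propagates, wrapped again.
        ae.handle(lambda e: isinstance(e, ValueError))
except AggregateError as remaining:
    leftover = remaining.errors

print(len(leftover))  # 1
```

As in the C# version, callers further up the stack still have to expect the aggregate type, which is exactly the "wormhole" problem described above.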
All of this is really to say that AggregateExceptions are fundamentally very different. Exceptions in current programming languages are, for better or for worse, very single-threaded in nature. They assume a linear, crawlable callstack, with try/catch blocks that are equipped to handle a single exception at a time. I can't say I'm terribly happy with where we are, but I can say I think it's the best we can do right now given the current world of SEH.
---joe
Hi, sorry if this sounds naive, but what if you want to read/parse multiple files from disk in parallel using the TPL? Any test done by anyone? My guess would be we need new Async IO features (including Async File Open) that can be combined with this library to make such a scenario perform.
As for the discussion of fixed-size problems versus variable-size problems (with varying amounts of data): as an engineering team, you can score by making your software scalable, using more cores for more data.
In my line of work, there are always customers that have 2-4 times more data than the rest, and the same expectations on performance. If you use TPL to do your data in parallel, you can tell him to go buy an extra core... And he will love to hear that, because he now has the ultimate excuse to have his boss buy him extra horsepower.
PLINQ, and Parallel.For and so on are good for computers with lots of CPUs but what would also be good is if we could use the same language to access the many processors on a graphics card.
At the moment there are special shader languages, such as NVidia's Cg language, to write shaders.
Can PLINQ take advantage of or encapsulate the parallelism on graphics cards, or even multiple channels on sound cards? I think this would make writing shaders very easy, and they could be implemented either on multiple cores or on a graphics card.
So could you ask about that, and also whether you are concentrating only on multiple CPUs, because this may do away with the need for a dedicated graphics card in the long term?
When you are designing PLINQ and so on, do you mainly have database and data programming and business software in mind, or graphics and games too?
Funny you say that. During that part, I was thinking it would be cool to be able to add keywords manually using some kind of Extension method deal. That way, people could experiment back and forth on syntax ideas or just use them in their namespaces. New keywords or overrides (with supporting libraries) could be a plug-in model for VS. That way, C# proper stays clean, but could be extended and experimented with.
I would also like to know about that. Could future implementations of TPL be used to make a distributed project, for example like the one people downloaded to model climate change? That is an example of parallelism in which the CPUs are distributed on computers worldwide. Then the program could either be run on a single computer with multiple cores, or distributed on multiple computers in a local grid, or distributed to different computers on the internet. What are your thoughts on this?
This is the way I see computers going, where it doesn't matter where the CPUs are: as long as they are joined together in some way, you can run parallel programs on them.
Although I understand where you're coming from, this idea would lead to source code becoming "locked" to a machine and IDE, which would make life very difficult when giving your source to someone else (say a colleague) or via team collaboration, unless very strict permissions for who can add new keywords were introduced.
All in all, it sounds like the type of thing that would only be fully understood by or helpful to language developers and a small number of hobbyists, but could significantly damage both the reputation and workability of C# in general, when those same hobbyists and language developers could just as easily use a function.
All in all, I think that would be a bad move for C#.
Starting to fool around with tpl and did a small program to investigate IO. Somewhat real in that I'm working on a purge of old files (.eml's) for a customer.
I'm copying ~ 5000 files to a directory then deleting them. Not very interesting but close to what the customer needs done.
I found that with the sequential loop (there are other ways in System.IO to do this besides a loop!) it took ~ 11 seconds, and the parallel loop took 6 seconds on my dual core laptop.
I don't have a 4 core machine; I suspect that adding a third processor would not help much. I'm assuming that the first thread blocks for I/O, then the second thread can queue up another I/O, and back and forth, but having 3 or 4 processors would not necessarily do any better.
The copy takes most of the time.
For Each FI In DirSrc.GetFiles
    FI.CopyTo(Path.Combine(Me.FilePath, FI.Name))
Next
vs?
Well, at least one parallel test was indeed faster, by a factor of 1.5 (on matrix sizes bigger than 2000x2000):
private static double[,] MultP(int w, int h)
{
double[,] m1 = new double[w, h];
Parallel.For(0, w, x =>
{
Parallel.For(0, h, y =>
{
m1[x, y] = x * y * 1000.0;
});
});
Parallel.For(0, w, x =>
{
Parallel.For(0, h, y =>
{
m1[x, y] = Math.Sqrt(m1[x, y]);
});
});
return m1;
}
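The nested Parallel.For calls above schedule work per element, which is where scheduling overhead bites. A rough Python analogue of the coarser-grained alternative is one task per row rather than per cell, using only the standard library. (Note: CPython threads won't actually speed up pure-Python arithmetic because of the GIL; the sketch only illustrates the granularity trade-off, not a TPL equivalent.)

```python
import math
from concurrent.futures import ThreadPoolExecutor

def row(x, h):
    # One task computes an entire row, so scheduling overhead is paid
    # once per row instead of once per element.
    values = [x * y * 1000.0 for y in range(h)]
    return [math.sqrt(v) for v in values]

def mult_p(w, h):
    # Parallelize over rows; pool.map preserves row order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda x: row(x, h), range(w)))

m = mult_p(4, 3)
print(m[2][2])  # sqrt(2 * 2 * 1000.0) = 63.245...
```

The same reasoning explains the PFib result reported below: when each task is tiny, the bookkeeping dominates the actual work.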
Maybe nested Fors are easier to parallelize. Similar vector tests are 1.5-2 times slower.
Something is completely wrong with the PFib case. TPL is 90 times slower.
Memory consumption is an issue with all parallel tests.
Obviously the overhead from switching threads is too high for such a simple task. After adding more demanding calculations to PFib, TPL's advantage becomes obvious.
Good job TPL team!
Source: http://channel9.msdn.com/Shows/Going+Deep/Joe-Duffy-Huseyin-Yildiz-Daan-Leijen-Stephen-Toub-Parallel-Extensions-Inside-the-Task-Parallel
I would like to drive with arrow keys, but can find very little documentation on this, and the code I did find didn't make sense to me.
Have you installed Processing? Have you looked at the zillions of examples? File + Examples... + Basics + Input + Keyboard, for instance.
Modify that to print the index of the key that was pressed. Then, when all you see is a ? for the arrow keys, select the function name (keyPressed) and select Find in Reference on the help menu. Look at the related links. keyCode, for instance. Print the keyCode, instead of the key.
What code was that?
import processing.serial.*;

Serial myport;
int val;

void Setup() {
  size(200,200);
  String portName = Serial.list()[4];
  myport = new Serial(this, portName, 9600);
}

void draw() {
  background(255,255,0,255);
}

void keyPressed() {
  if (key == CODED) {
    if (key == UP) {
      myport.write('F');
    } else if (key == DOWN) {
      myport.write('B');
    } else if (key == LEFT) {
      myport.write('L');
    } else if (key == RIGHT) {
      myport.write('R');
    } else {
      myport.write('S');
    }
  }
}
#include <Servo.h>

Servo Left;
Servo Right;
//int Lspeed = 0;
//int Rspeed = 0;
int val;

void setup() {
  Left.attach(12);
  Right.attach(13);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    val = Serial.read();
  }
  if (val == 'F') {      // If Forward command was received
    Left.write(180);     // go forward
    Right.write(0);      // invert direction for opposite motors
  } else if (val == 'B') {
    Left.write(0);
    Right.write(180);
  } else if (val == 'R') {
    Left.write(0);
    Right.write(0);
  } else if (val == 'L') {
    Left.write(180);
    Right.write(0);
  } else {
    Left.write(90);
    Right.write(90);
  }
  delay(100);  // Wait 100 milliseconds for next reading
}
should this work?
Exception in thread "Animation Thread" java.lang.NullPointerException
    at serial_control.keyPressed(serial_control.java:52)
    at processing.core.PApplet.handleKeyEvent(Unknown Source)
    at processing.core.PApplet.dequeueKeyEvents(Unknown Source)
    at processing.core.PApplet.handleDraw(Unknown Source)
    at processing.core.PApplet.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:662)
Wow... 4 days and no reply? Is this just some weird error that no one knows how to fix? What could I do to get around this error? Thanks.
Source: http://forum.arduino.cc/index.php?topic=152094.msg1143123
GETHOSTNAME(3) BSD Programmer's Manual GETHOSTNAME(3)
gethostname, sethostname - get/set name of current host
#include <unistd.h>

int gethostname(char *name, size_t namelen);
int sethostname(const char *name, size_t namelen);
The gethostname() function returns the standard host name for the current processor, as previously set by sethostname(). The parameter namelen specifies the size of the name array. If insufficient space is provided, the returned name is truncated. The returned name is always NUL terminated.
sethostname() sets the name of the host machine to be name, which has length namelen. This call is restricted to the superuser and is normally used only when the system is bootstrapped.
If the call succeeds a value of 0 is returned. If the call fails, a value of -1 is returned and an error code is placed in the global variable errno.
The following errors may be returned by these calls:
[EFAULT] The name parameter gave an invalid address.
[EPERM] The caller tried to set the hostname and was not the superuser.
hostname(1), getdomainname(3), gethostid(3), sysctl(3), sysctl(8), yp(8)
The gethostname() function call conforms to X/Open Portability Guide Issue 4.2 ("XPG4.2").
The gethostname() function call appeared in 4.2BSD.
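For what it's worth, higher-level languages expose the same call. In Python, for instance, socket.gethostname() wraps the underlying gethostname(3), with the buffer-size bookkeeping handled for you:

```python
import socket

# socket.gethostname() calls the platform's gethostname(3) and
# returns the host name as a string, never truncated by the caller.
name = socket.gethostname()
print(name)
```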
Source: http://www.mirbsd.org/htman/sparc/man3/gethostname.htm
The idea behind Greasemonkey is pretty simple. It's a Firefox extension, installed in the same way as any other Firefox extension (find it via the Tools > Addons menu and hit Install).
However, it doesn't do anything in and of itself: what it does is to enable you to run scripts, either by other people or by yourself, which will alter the way web pages look and function.
Greasemonkey user scripts are the bits of code that actually do the work – Greasemonkey itself just loads and manages these. User scripts are written in JavaScript, but be warned: for security reasons, this isn't just a question of writing regular JavaScript and away you go.
There are some gotchas to be aware of, although the scripts in this guide don't encounter any of them. A quick note if you're unfamiliar with JavaScript: this guide isn't going to explain JavaScript syntax in any detail, but don't let that stop you from giving it a go. It's all fairly logical and the code snippets are all explained.
To install a script that someone else has written, you navigate to its location in Firefox and click on the link to the script. You'll get an install popup, as with a normal extension, and can either look at the source code of the script first or, if you're feeling trusting, just install it.
Part 1 - Your first Greasemonkey script
Greasemonkey provides a helpful dialogue to make writing a script as straightforward as possible. In your Firefox window, once you've downloaded and installed Greasemonkey, there'll be a little monkey face in the right-hand corner of the status bar.
Right-click on that and you'll get a menu that includes the option "New User Script". Click that and you'll get a dialog looking a bit like the box on the right.
The 'name' is just the name of your script – it's best to choose something that obviously indicates what it does, for ease of script management later on. The 'namespace' is to avoid your script clashing with others.
If you try to install a script that has the same name as an already-installed one, it's the namespace that governs whether it will overwrite the old one (if the namespace is the same) or co-exist with it (if they're different).
There are a couple of things you can do here: the first one is to use your own website as the domain name. Alternatively, you can use, or if you're intending to upload it to when you're done, you can use that.
Current versions of Greasemonkey won't allow you to leave it blank. 'Description' is for a human-readable line describing what the script does. It's a very good idea to fill this field in, even for your own scripts – you may wind up with stacks of the things and they'll be a lot easier to manage if you provide extra clues about which is which.
Hack the web
The 'include' and 'exclude' rules govern which sites a script will run on, and can include wildcards. So, a pattern ending in /* will match the given URL and all pages starting with it (whereas without the asterisk it will just match the front page).
You can also use wildcards for parts of names: http://*.example.com/f* will match any page whose path begins with f, on any server in the example.com domain. By default, the include box will contain the page you were on when you clicked the new script option, but you're free to delete that.
If an include rule is matched and no exclude rule is matched, the script will run. If you have no include rule, Greasemonkey assumes @include *, ie, that every URL is matched, so the script will run on every page you load.
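The include/exclude decision described above is easy to model. Here is a small Python sketch using fnmatch-style wildcards; this is an approximation, since fnmatch's * also crosses / separators, which real Greasemonkey rules handle a little differently:

```python
import fnmatch

def script_runs_on(url, includes, excludes=()):
    # Run if some include rule matches and no exclude rule matches.
    # With no include rules at all, every URL matches (@include *).
    included = (not includes) or any(fnmatch.fnmatch(url, p) for p in includes)
    excluded = any(fnmatch.fnmatch(url, p) for p in excludes)
    return included and not excluded

print(script_runs_on("http://a.example.com/foo",
                     ["http://*.example.com/f*"]))   # True
print(script_runs_on("http://other.org/",
                     ["http://*.example.com/f*"]))   # False
```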
This first script is going to set the background of a page to white – very useful indeed if you come across a page whose author has a fondness for eyeball-searing pink, or the sort of repeating background image that generates a headache within seconds.
So pick a website you want to change the background of and put it in the @include box (here I'm using), and set the other fields as appropriate.
Once you've filled this in, you will be asked for your preferred editor (if there's not already one set), and then Greasemonkey will load the script file – which currently will just contain the metadata – up in your editor, ready for you to write something.
At this point, the code you're faced with will look a fair bit like this:
// ==UserScript==
// @name Background Change
// @namespace juliet/
// @description Change the background colour of a page
// @include*
// ==/UserScript==
Now it's time to actually write the script. All this first script does is to change the background colour of any pages from sites in the include domain to white. (There really are some unpleasant background colour choices out there.)
For a page without frames or other complications, this is very straightforward: just a single line.
document.body.style.background = "#ffffff";
document is the built-in way of referring to the current page. It's a DOM (Document Object Model) object that represents the entire HTML document.
Think of this as a tree of HTML elements seen as objects, with each new element branching off as a 'child' of the one before it – have a look at the diagram below, which shows a possible structure for the body part of an HTML document.
DOM TREE: An HTML document as a DOM tree – each child node branches down from its parent node
The notation for referring to an object in this model is toplevel.child.childofchild. So this line takes the document, then the body element, then the style of the body, then the background attribute of the style… and sets it to white. (#ffffff is white in hexadecimal notation, which is one of the HTML standards. You could also just use white.)
Try it out now – pick a page with a non-white background, use the Manage Scripts menu to add that to the includes for your script and reload the page.
When testing, remember: you're not actually doing anything to the web page you're editing. You're just changing it for you. So if you do something catastrophic, no problem! You can just turn your script off, or edit it and reload the page. So feel free to experiment.
When you're testing, if you left-click on the little monkey face, it'll toggle Greasemonkey on/off. So you can toggle it off, check how the page looks currently, toggle it on, reload and see what your script is doing.
Source: http://www.techradar.com/news/internet/the-beginner's-guide-to-greasemonkey-scripting-598247
#include <apr_network_io.h>
The pool to use...
The hostname
Either a string of the port number or the service name for the port
The numeric port
The family
IPv4 sockaddr structure
IPv6 sockaddr structure
Union of either IPv4 or IPv6 sockaddr.
How big is the sockaddr we're using?
How big is the ip address structure we're using?
How big should the address buffer be? 16 for v4 or 46 for v6 used in inet_ntop...
This points to the IP address structure within the appropriate sockaddr structure.
If multiple addresses were found by apr_sockaddr_info_get(), this points to a representation of the next address.
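That "next address" pointer reflects the fact that one hostname can resolve to several addresses, which apr_sockaddr_info_get() returns as a chain. The analogous standard call in Python, socket.getaddrinfo, returns the same kind of record list (resolving localhost here, which should work without network access):

```python
import socket

# Each result tuple is one resolved address record, analogous to
# walking the chained apr_sockaddr_t results.
records = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
for family, type_, proto, canonname, sockaddr in records:
    # sockaddr is ('127.0.0.1', 80) for IPv4 or ('::1', 80, 0, 0) for IPv6
    print(family.name, sockaddr)
```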
Source: http://apr.apache.org/docs/apr/0.9/structapr__sockaddr__t.html
iPcGravityCallback Struct Reference
Inherit this class if you want to know when gravity is applied to a certain iPcLinearMovement. More...
#include <propclass/linmove.h>
Inheritance diagram for iPcGravityCallback:
Detailed Description: add the callback with AddGravityCallback() in iPcLinearMovement, and remove it with RemoveGravityCallback().
Definition at line 50 of file linmove.h.
The documentation for this struct was generated from the following file:
Generated for CEL: Crystal Entity Layer 1.2 by doxygen 1.4.7
Source: http://crystalspace3d.org/cel/docs/online/api-1.2/structiPcGravityCallback.html
An editable text view, extending AutoCompleteTextView, that can show completion suggestions for the substring of the text where the user is typing instead of necessarily for the entire thing. You must provide a MultiAutoCompleteTextView.Tokenizer to distinguish the various substrings.
The following code snippet shows how to create a text view which suggests various countries names while the user is typing:
public class CountriesActivity extends Activity {
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.autocomplete_7);

        ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
                android.R.layout.simple_dropdown_item_1line, COUNTRIES);
        MultiAutoCompleteTextView textView =
                (MultiAutoCompleteTextView) findViewById(R.id.edit);
        textView.setAdapter(adapter);
        textView.setTokenizer(new MultiAutoCompleteTextView.CommaTokenizer());
    }

    private static final String[] COUNTRIES = new String[] {
        "Belgium", "France", "Italy", "Germany", "Spain"
    };
}
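The interesting part of the Tokenizer contract is finding the token that surrounds the cursor. A minimal Python sketch of CommaTokenizer-like behaviour follows; the function name and details are illustrative assumptions, not the Android API:

```python
def comma_token_bounds(text, cursor):
    # Find the token around the cursor, in the spirit of CommaTokenizer:
    # tokens are separated by commas, with optional spaces after them.
    start = text.rfind(",", 0, cursor) + 1
    while start < cursor and text[start] == " ":
        start += 1
    end = text.find(",", cursor)
    if end == -1:
        end = len(text)
    return start, end

text = "Belgium, Fra"
start, end = comma_token_bounds(text, len(text))
print(text[start:end])  # Fra
```

Completion then only replaces text[start:end], leaving the earlier comma-separated entries untouched.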
Source: http://developer.android.com/reference/android/widget/MultiAutoCompleteTextView.html
csVector3 Class Reference
[Geometry utilities]
#include <csgeom/vector3.h>
Detailed Description
A 3D vector.
Definition at line 57 of file vector3.h.
Constructor & Destructor Documentation
Conversion from double precision vector to single.
Member Function Documentation
Return a textual representation of the vector in the form "x,y,z".
Friends And Related Function Documentation
Member Data Documentation
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1
|
http://www.crystalspace3d.org/docs/online/api-1.4/classcsVector3.html
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
You can click on the Google or Yahoo buttons to sign-in with these identity providers,
or you just type your identity uri and click on the little login button.
We were at La Cantine on May 21st, 2012 in Paris for the "PyCon.us Replay session".
La Cantine is a coworking space where hackers,
artists, students and so on can meet and work. It also organises some
meetings and conferences about digital culture, computer science, ...
On May 21st, 2012, it was a dev day about Python. "Would you
like to have more PyCon?" is a French wordplay where PyCon sounds like Picon, a French apéritif which
traditionally accompanies beer. A good thing, because the meeting began at 6:30
PM! Presentations and demonstrations were about some Python projects presented
at PyCon 2012 in Santa Clara (California) last
March. The original pycon presentations are accessible on pyvideo.org.
By Gael Pasgrimaud (@gawel_).
pdb is the well-known Python
debugger. Gael showed us how to easily use this almost-mandatory tool when you
develop in Python. As with the gdb debugger, you can stop the execution at a
breakpoint, walk up the stack, print the value of local variables or modify
temporarily some local variables.
The best way to define a breakpoint in your source code is to write:
import pdb; pdb.set_trace()
Insert that where you would like pdb to stop. Then you can step through the code with the
s, c or n commands. Type help for more information; here is the
help command in the pdb command-line interpreter:
It is also possible to invoke the module pdb when you run a Python script such
as:
$> python -m pdb my_script.py
By Alexis Metereau (@ametaireau).
Pyramid is an open
source Python web framework from Pylons Project. It concentrates on providing
fast, high-quality solutions to the fundamental problems of creating a web
application:
The framework lets you choose different approaches according to the
simplicity/feature tradeoff that the programmer needs. Alexis, from the French team of Mozilla Services,
is working with it on a daily basis and seemed happy to use it. He told us that
he uses Pyramid more as a Python web library than as a web framework.
By Benoit Chesneau (@benoitc).
Circus is a process watcher and
runner. Multiple processes can be managed and monitored from Python scripts via an API,
or from a command-line interface.
A very useful web application, called circushttpd, provides a way to
monitor and manage Circus through the web. Circus uses zeromq, a well-known tool used at
Logilab.
This session was a well-prepared and funny live demonstration by Julien
Tayon of matplotlib, the Python 2D plotting library. He showed us some quick and easy stuff.
For instance, how to plot a sine wave in a few lines of code with matplotlib and
NumPy:
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
# A simple sine wave.
ax.plot(np.sin(np.arange(-10., 10., 0.05)))
fig.show()
which gives:
You can make some fancier plots such as:
# A sine wave and a fancy cardioid.
a = np.arange(-5., 5., 0.1)
ax_sin = fig.add_subplot(211)
ax_sin.plot(np.sin(a), '^-r', lw=1.5)
ax_sin.set_title("A sine wave")
# Cardioid.
ax_cardio = fig.add_subplot(212)
x = 0.5 * (2. * np.cos(a) - np.cos(2 * a))
y = 0.5 * (2. * np.sin(a) - np.sin(2 * a))
ax_cardio.plot(x, y, '-og')
ax_cardio.grid()
ax_cardio.set_xlabel(r"$\frac{1}{2} (2 \cos{t} - \cos{2t})$", fontsize=16)
fig.show()
where you can use LaTeX equations, for instance as the X-axis label.
A strength of this plotting library is its gallery of examples, each with an
accompanying piece of code. See the matplotlib gallery.
Dimitri Merejkowsky reviewed how Python can be used to control and program Aldebaran's humanoid robot NAO.
Unfortunately, Olivier Grisel who was supposed to make three interesting presentations
was not there. He was supposed to present :
Thanks to La Cantine and the different organisers for this friendly dev day.
Thanks for sharing. I would like to point out that Matplotlib is not only a "2D" plotting library: it does 3D as well.
|
http://www.logilab.org/view?rql=Any%20X%20WHERE%20T%20tags%20X%2C%20T%20eid%2098359%2C%20X%20is%20BlogEntry
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
Dialogs and getting user input in Sublime Text 2, and some basic things
I learned about Sublime.
Any suggestions or fixes would be appreciated.
Or... if you could test it on Windows / Linux.
It's been a lot of fun learning about Sublime, and I hope to write a plugin soon and do more examples.
See current version here (Gist @ Github):
This is original version:
- Code: Select all
import sublime
import sublime_plugin
'''
__ __ __ __
| \/ |_ _| \/ |
| |\/| | '_| |\/| |_
|_| |_|_| |_| |_(_)
Project: Examples of using custom dialog and messages in Sublime 2
Platform: tested only on a Mac
File Name: mrm_example_dialogs.py
Place file in your User folder
On Mac it is:
/Users/username/Library/Application Support/Sublime Text 2/Packages/User
On a Mac, this is hidden, find the folder by going to Finder and
Go | Go to Folder
~/Library/Application Support/Sublime Text 2
press ENTER
How to add a Command to the Command Palette (Cmd-Shift-P) on Mac
1- There is a file at
Packages/Default/Default.sublime-commands
Create the same file in
Packages/User/Default.sublime-commands
don't modify files in Packages/Default, they will be overwritten when upgrading;
always use the User folder for user-specific settings
2- add this to file
[
{
"caption": "Run Example 1",
"command": "example1"
}
]
3- Then you can press Cmd-Shift-P, and type in: Ru
and you will see this on list
'''
# ------------------------------------------------------------
# show_input_panel, status_message and message_dialog example
# to run this use:
# view.run_command("example1")
# at the command line in Sublime
# Note: Example1Command is expressed as example1 with the view.run_command("example1")
class Example1Command(sublime_plugin.TextCommand):
def run(self, edit):
# 'something' is the default message
self.view.window().show_input_panel("Say something:", 'something', self.on_done, None, None)
def on_done(self, user_input):
# this is displayed in status bar at bottom
sublime.status_message("User said: " + user_input)
# this is a dialog box, with same message
sublime.message_dialog("User said: " + user_input)
# ------------------------------------------------------------
# message_dialog example with cancel example
# to run this use:
# view.run_command("example2")
# at the command line in Sublime
# Note: Example2Command is expressed as example2 with the view.run_command("example2")
class Example2Command(sublime_plugin.TextCommand):
def run(self, edit):
# this has OK button only
sublime.message_dialog("Message goes here.")
# this has two buttons, Cancel and OKY
sublime.ok_cancel_dialog("hello mrm", "OKY")
# does not work: sublime.errormessage_dialog("BIG ERROR")
sublime.status_message("fred ******************") # this is at bottom
# this has Cancel and OK button labeled OK Rob - and will respond depending on user action
# Esc key will be taken as a Cancel
if sublime.ok_cancel_dialog("OK ROB ?", "OK Rob"):
# print will print to console
print "You Pressed OK" # this will print to console if OK pressed.
else:
print "You Pressed Cancel"
# ------------------------------------------------------------
# Dialog for error_message
# to run this use:
# view.run_command("example3")
# at the command line in Sublime
# Note: Example3Command is expressed as example3 with the view.run_command("example3")
class Example3Command(sublime_plugin.TextCommand):
def run(self, edit):
sublime.error_message("Must be an error!")
|
http://www.sublimetext.com/forum/viewtopic.php?f=2&t=13277&p=51463
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
New in version 0.3 (June 25th, 2010)
- Pattern bindings in case statements are now like any other assignment -- scoped to the entire function and in the same namespace as all the other locals. Patterns no longer create a new scope, and the variables they create no longer shadow existing locals. It is now a type error if a local variable has the same name but different type as a pattern variable (backwards incompatible). (LP: #513638)
- It is now a compiler error if a switch statement does not cover all possible constructors of the type it switches over, preventing the possibility of runtime pattern match failure errors (backwards incompatible). (LP: #408411)
- Variables no longer require declaration. Any variable assigned in a function is now implicitly a local variable. It is no longer an error to assign to a global variable or an undefined variable (this just creates a new local variable). (LP: #483082)
- The field reference and replace expressions and the field update statement now work (previously they displayed "not implemented" errors). It is now possible to access and update individual fields of an object by name, without having to use a switch. (LP: #439171)
- The field replace statement is now an expression; its result is the updated object. It previously didn't make much sense because the statement performed an implicit assignment of its source variable. (LP: #408301)
- The field update operator changed from "=" to "=!", reflecting its impure nature. (LP: #595782)
- It is now an error to have two fields of the same name in a given constructor. (LP: #513585)
- It is now an error to have two variables of the same name in a given pattern. (LP: #509939)
- Reading the value of (and therefore executing the body of) a computable global constant now caches the result so subsequent reads do not re-execute the body. Previously CGCs would be re-executed on each read. (LP: #491697)
- The built-in "is" function has been moved to the "impure" module. (LP: #582634)
- The "is" function now works on objects of user-defined types (previously always returned 0). (LP: #585724)
- Added prelude function "ref" which gets the element of a list with a particular index. (LP: #582635)
- Paul Bone: Fixed error compiling Mars on some systems, due to Readline code. (LP: #552168)
- Critical type safety problem -- if a declared local variable and pattern binding have different types, after the switch, the variable name has the declared type, but the value bound by the case (of the wrong type). Fixed by giving pattern bindings the full function scope, forcing them to have the same type as any declared variable. (LP: #513638)
- Internal error "Phi missing predecessor in CFG" for programs with complex nested switches. (LP: #567082)
- Internal error "Field reference to something not an ADT" for case statements with a nested pattern with two fields. (LP: #576375)
- Type error for cases with int literals on polymorphic types. (LP: #509457)
- Internal error for string literals with '\0'. (LP: #534159)
- Internal error calling "error" or "get_env" with '\0' in string. (LP: #534161)
- Library function "encodeURIComponent" gives garbage output on strings with non-byte characters. (LP: #585703)
- Now correctly sets the exit status of Mars to the result of main – rounds to a machine-size int with wraparound rather than using mod. (LP: #552413)
- The "show" function now correctly includes empty parens for terms built from nullary constructors. (LP: #587787)
- Replaced "dummy term binding" technique with explicit rigid type variables (as now described in documentation for type unification). (LP: #488595)
- The control-flow graph (CFG) representation now includes full type information. (LP: #574108)
- The t_switch instruction no longer includes fully-recursive patterns, but a limited format which matches only a single tag. Switch statements are now factored into this much lower-level construct which is easier to compile into machine code. (LP: #408411)
- The interactive environment now uses SSA variable names, so variables which are re-assigned are no longer physically clobbered in the environment; they are assigned with a new qualified name. This makes the semantics of interactive consistent with the rest of the language and fixes issues with certain backends or analyses. (LP: #580487)
- Fixed test suite silently treating invalid "outcome" values as "fail". (LP: #574141)
- No longer generates code for statements which are inaccessible because they are preceded by a return statement. This prevents malformed code generation. (LP: #517403)
- Fixed parser not allocating a type to subexpressions of expressions which already have a type annotation. (LP: #578082)
- Interactive mode executes type-annotated versions of statements. (LP: #578084)
- Test suite is now orders of magnitude faster to run (fixed re-compilation for every case, causing N^2 behaviour). (LP: #589000)
- Interpreter interface is now abstracted so it is possible to plug in different interpreter backends. (LP: #550708)
- In the test framework, expecting a compile error now causes all of the runtime tests to be expected to be skipped, so they don't raise large errors when they are. (LP: #596734)
- Terminator instructions now hold a context object. (LP: #408291)
- Function signatures no longer have an "argmode". This was part of some uniqueness information analysis which was abandoned for a different representation, and has since been unused and completely obsolete. (LP: #550739)
- All library functions are now documented. (LP: #486958)
- The documentation is now syntax-highlighted for all Mars code. (LP: #576776)
- If a bug in Mars causes an internal error, it will be displayed much more neatly, with a link to the "file bug" page on the bug tracker. (LP: #534165)
- The test framework no longer generates .py files with Mars assembly (a bug dating back to when we actually generated Python output). (LP: #521992)
- The src directory now includes a Makefile, so Mars can be compiled with a simple make. (LP: #522477)
- The Mars Vim script (misc/mars.vim) has an updated list of built-in names.
|
http://linux.softpedia.com/progChangelog/Mars-Giuca-Changelog-52664.html
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
CS::Threading::ScopedLock< T > Class Template Reference
This is a utility class for locking a Mutex. More...
#include <csutil/threading/mutex.h>
Detailed Description
template<typename T>
class CS::Threading::ScopedLock< T >
This is a utility class for locking a Mutex. Definition at line 163 of file mutex.h.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/api-2.0/classCS_1_1Threading_1_1ScopedLock.html
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
Just a follow-up to this issue. It appears my test program *was* invalid, but I discovered why SDL wouldn't load properly. As you can see in my initial bug report, SDL was attempting to convert a command line from UCS-2-INTERNAL to UTF-8 using win-iconv. "C" (as my test program had) is *definitely* invalid input! After going through the win-iconv source, I realized that win-iconv does not interpret the encoding name "UCS-2-INTERNAL" as being UTF-16. (Source here:) Luckily, around line 167 of that same file, there are mappings from codepages to names, which are used by a conversion function. So I have added the "UCS-2-INTERNAL" string in the places shown above, and everything works so far... no more SDL errors. I have filed a pull-request with upstream development, which has moved from Google Code to github at:. I've tried googling "UCS-2-INTERNAL", but I'm not sure why no one else has hit this issue. :( Hopefully this patch solves the issue. Regards, sdbenique

On Sat, 12 Mar 2016 20:14:30 +0100 (CET), <[email protected]> wrote:
> Hello,
>
> I'm writing because I'm encountering a strange bug in cygwin's distributed
> x86_64-w64-mingw32 libraries. Specifically a problem with win-iconv.
>
> -=System Info=-
> OS(s): Windows 7 Professional, Windows 10 Professional Edition
> Package(s): mingw64-{i686,x86_64}-win-iconv version: 0.0.6-2
> Cygwin: Cygwin64, Setup 2.873
>
> I first started experiencing an issue with an SDL2 application I am developing.
> I've been revamping my build toolkit to take advantage of the better mingw64
> support in newer releases of Cygwin.
>
> When I finally got my build system to work with the host, build, target triplets,
> I decided to try building some mingw32 binaries of the application.
> Everything built fine, but for some reason the application always immediately
> quit upon being run, with the following error message:
>
> "Fatal Error: Out of memory aborting".
>
> This happens even on a machine with 16GB of memory, 10GB of it being free.
> The i686 build as well as the x86_64 build encounters this issue as well.
>
> I downloaded the source code for mingw64-SDL2 and compiled it with
> debuginfo.
>
> I narrowed down the issue to some code in SDL_windows_main.c,
> which calls SDL_iconv_string.
>
> Stepping into that function, the following executes:
> Breakpoint 2, SDL_iconv_string (tocode=0x4052bc <__dyn_tls_init_callback+684> "UTF-8",
> fromcode=0x4052ad <__dyn_tls_init_callback+669> "UCS-2-INTERNAL", inbuf=0x2e2dd2 "C", inbytesleft=116)
> at /usr/src/debug/mingw64-x86_64-SDL2-2.0.1-1/src/stdlib/SDL_iconv.c:863
>
> 863 size_t retCode = 0;
> 865 cd = SDL_iconv_open(tocode, fromcode);
> 866 if (cd == (SDL_iconv_t) - 1) {
> 868 if (!tocode || !*tocode) {
> 871 if (!fromcode || !*fromcode) {
> 874 cd = SDL_iconv_open(tocode, fromcode);
> 876 if (cd == (SDL_iconv_t) - 1) {
> 877 return NULL;
>
> WinMain (hInst=0x400000, hPrev=0x0, szCmdLine=0x2e3859 "", sw=10)
> at /usr/src/debug/mingw64-x86_64-SDL2-2.0.1-1/src/main/windows/SDL_windows_main.c:164
> 164 if (cmdline == NULL) {
> (gdb)
> 165 return OutOfMemory();
>
> At first I thought it could be a bug with SDL, but to make sure, I created a very simple reproduction
> of the issue using only the iconv library, and a simple main() function. This "test" fails
> on every machine on which I have run it, in both 32-bit and 64-bit builds.
>
> #include <iconv.h>
> #include <stdio.h>
> #include <stdint.h>
>
> // Check GCC
> #if __GNUC__
> #if __x86_64__ || __ppc64__
> #define PTR_T int64_t
> #else
> #define PTR_T int32_t
> #endif
> #endif
>
> int main(int argc, char * args[])
> {
>     iconv_t handle = iconv_open("C", "UTF-8");
>
>     if ((PTR_T) handle == -1)
>     {
>         printf("Could not open handle to iconv");
>     }
>
>     return 0;
> }
>
> I've uploaded a small repository with a Makefile that will conveniently set your $PATH
> correctly and launch the .exe when you build the target 'run'.
>
> For your convenience in recreating the issue, I have uploaded the repository on github,
> at the following URL.
>
> git clone git://github.com/bittwiddler1/mingw64-iconv-test
>
> I've never really submitted a bug report via mailing list, but please let me know if you need any
> other information and I will try my best to help out any way I can!
>
> I'll try and get some debug information out of the iconv.dll library, but no promises. I
> don't know much about text encoding, let alone unicode pages and whatnot. :)
>
> - sdbenique
|
https://cygwin.com/pipermail/cygwin/2016-March/226758.html
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
So far we have been looking at Python functions described much as functions in any language might be, but as promised, Python functions are completely different.
They are objects.
This is something that you only find in languages that have been influenced by the early experimental object oriented languages such as Smalltalk. Python shares this idea with Ruby and JavaScript to name just two but it isn’t common in class-based languages such as Java, C#, C++ and so on. However, the advantages of implementing functions as objects are so great that languages that don’t use this approach have had to add features to make up for it. C# added delegates and later, along with other languages, lambda expressions. Python doesn’t need such additions but it does have a form of a lambda expression which it doesn’t really need – see later.
What does it mean that a function is an object?
When you write a Python function definition:
def sum(a,b):
c=a+b
return c
something more happens than in other languages.
When the Python system reads the function definition it does more than just store the name of the function in the dictionary. It creates a function object which stores the code of the function and sets a variable to reference it. The construction of an object to act as the function is key to the different way functions in Python work.
It is a good idea to think of a function definition more like:
sum = def(a,b):
c=a+b
return c
this is invalid syntax but if you think about it in this way you can see that the variable sum with a reference to the new function object is created.
The function object is a perfectly standard object but it comes with a code object as one of its attributes which stores the code. It is also an example of a callable, which means you can use the invocation operator () to execute the code it contains.
Function objects have all of the built-in attributes that objects have and a few that are special and related to their callability.
Notice that:
sum
is a variable that references the function object and:
sum()
is an invocation of that function object and it evaluates to whatever that function returns.
It may be only the difference of a pair of parentheses but the difference is huge.
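To make the distinction concrete, here is a small sketch (the names are illustrative): referencing a function object without parentheses just passes the object around, while adding parentheses invokes it.

```python
def total(a, b):
    """Add two values and return the result."""
    return a + b

alias = total         # no parentheses: copy the reference, nothing runs
result = alias(1, 2)  # parentheses: invoke the function object

print(alias is total)  # True - both names reference the same object
print(result)          # 3
```

Assigning `total` to `alias` does not copy or re-run anything; both variables now reference one function object, and either can be invoked.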
If you are familiar with a more traditional approach to functions then it can take time to reach a point where you think about things correctly and stop making silly mistakes.
You can add new attributes to a function object and make use of them:
sum.myAttribute=1
print(sum.myAttribute)
Notice that any attributes you create exist even when the function is not being evaluated, but local variables only exist while the function is being evaluated.
Function attributes and local variables have a different lifetime.
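As an illustrative sketch of this difference in lifetime, a function attribute can act as a persistent counter that survives between calls, while a local variable is recreated on every call:

```python
def greet(name):
    message = "Hello, " + name  # local variable: recreated on every call
    greet.calls += 1            # attribute: persists across calls
    return message

greet.calls = 0  # initialise the attribute on the function object

greet("Ann")
greet("Bob")
print(greet.calls)  # 2 - the attribute has counted both calls
```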
You may at this point be wondering why we bother to make functions objects?
After all it has already been stated that it is possible to write good Python code without giving the fact that functions are objects a moment’s thought.
What makes this approach so useful is often expressed by saying that in Python functions are first class objects.
What this means is that anything you can do with an object you can do with a function. In particular you can pass a function as a parameter to another function and you can return a function from another function.
These two simple features make things very much easier and we don’t have to invent additional mechanisms like delegates or lambdas to make them available.
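For instance, a function can manufacture and return brand-new function objects (the names here are illustrative):

```python
def make_adder(n):
    def adder(x):
        return x + n  # n is captured from the enclosing call
    return adder

add5 = make_adder(5)  # add5 is a new function object returned by make_adder
print(add5(10))       # 15
```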
The standard example is to consider a sort function which can accept a comparison function to use to order the things it is going to sort. A simpler, but less likely, example is a math function that can apply a function that you pass in:
def math(f,a,b):
return f(a,b)
print(math(sum,1,2))
Notice that the first parameter f is a function. More accurately it is a function object and the math function evaluates it and returns its result.
Of course this doesn’t have any practical advantage unless you are going to have a range of possible functions that can be passed as f but you can see how this might work in practice. Being able to customize one function by passing it others to use is a huge simplification and there are other advantages of function objects.
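The sort example mentioned above works this way in practice: Python's built-in sorted accepts a key function that customizes the ordering, and any function object can be passed in:

```python
words = ["kiwi", "banana", "fig", "cherry"]

by_length = sorted(words, key=len)       # order by word length
by_alpha = sorted(words, key=str.lower)  # case-insensitive alphabetical

print(by_length)  # ['fig', 'kiwi', 'banana', 'cherry']
```

Here len and str.lower are simply function objects passed as the key parameter; sorted calls them on each element to decide the order.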
Functions in Python are objects and can be used anywhere an object can.
In particular you can pass a function as a parameter and return a function result.
Function parameters are passed by object reference which means changes to parameters do not affect the variables used as arguments. However, changes to mutable objects, i.e. attributes, do affect the objects in the calling program.
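A short sketch of this parameter behaviour: rebinding a parameter inside a function has no effect outside it, but mutating an object that was passed in does (names illustrative):

```python
def rebind(x):
    x = 99            # rebinding: only changes the local name

def mutate(items):
    items.append(99)  # mutation: changes the shared object

n = 1
rebind(n)
print(n)        # 1 - unchanged

data = [1, 2]
mutate(data)
print(data)     # [1, 2, 99] - the caller sees the mutation
```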
Functions can have attributes defined which have a lifetime beyond that of the function’s local variables.
Local variables exist only while the function is being executed, but attributes exist as long as the function object does.
Lambda expressions are lightweight ways of creating function objects. They simplify the syntax for passing functions as arguments.
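For example, a lambda expression builds the same kind of function object as def, in a single expression, which makes it convenient as a throwaway argument:

```python
# Two equivalent ways of creating a small function object
def add_def(a, b):
    return a + b

add_lambda = lambda a, b: a + b

print(add_def(2, 3), add_lambda(2, 3))  # 5 5

# Typical use: passing a throwaway function as an argument
pairs = [(1, "one"), (3, "three"), (2, "two")]
print(sorted(pairs, key=lambda p: p[0]))  # [(1, 'one'), (2, 'two'), (3, 'three')]
```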
Functions, like all Python objects, are essentially anonymous – they have variables which reference them rather than immutable names.
Functions can refer to their own attributes in code, but exactly how to do this in a way that is immune from changes to the variables used to reference the function is more difficult than it first appears and needs a closure for a reasonable solution.
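A sketch of the closure idea (names illustrative): by referring to itself through a name in the enclosing scope rather than an outer variable, the function keeps working even when the outer references are reassigned:

```python
def make_tagged():
    def tagged():
        # refers to 'tagged' via the enclosing scope, not via any outer
        # variable, so rebinding outer references cannot break it
        return tagged.tag
    tagged.tag = "original"
    return tagged

f = make_tagged()
g = f        # a second reference to the same function object
f = None     # rebind the first name: g still works
print(g())   # original
```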
Creating The Python UI With Tkinter
Creating The Python UI With Tkinter - The Canvas Widget
The Python Dictionary
Arrays in Python
Advanced Python Arrays - Introducing NumPy
Make a Comment or View Existing Comments Using Disqus
or email your comment to: [email protected]
To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.
<ASIN:1871962587>
|
https://www.i-programmer.info/programming/python/12437-programmers-python-function-objects.html?start=1
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Referencing Common Values Between Apps/Projects
Date Published: 23 July 2017
A pretty common scenario in building real world business software is the need to share certain pieces of information between multiple projects or applications. Frequently these fall into the category of configuration settings, and might include things like:
- Resource or CDN URLs or base URLs
- Connection Strings
- Public/Private Keys and Tokens
Some of these are more sensitive than others, obviously, and you should definitely strive to avoid storing database credentials in source control. In many cases, different apps shouldn’t be sharing a central database, anyway, as that’s likely to lead to the One Database To Rule Them All antipattern. Leaving aside databases and connection strings, how should you share common pieces of information between projects? There are several patterns you can consider.
In Code
The first pattern is simply to share the data in code. You might have a Constants or Settings class that is literally copied and pasted between projects. Or it might belong to one project that is referenced by another. You could compile it into a DLL that all projects reference. And of course, taking this to its logical next step, you can create a NuGet package that includes this hardcoded value. For example:
public static class CloudSettings
{
    public static string StaticResourcesUrlPrefix { get; } = "";
    // more settings go here
}
The benefit of this approach is that it’s very simple. The values are tracked in source control, which is a good thing if they’re not sensitive (not so good if they’re meant to be secret). Values are easily discovered by developers and can be updated easily in the codebase. However, these settings are probably not visible or configurable by operations staff, and any changes to settings must be done via a deployment, as opposed to something lighter-weight. Code-based values also aren’t as easily changed from one environment to the next, so promoting code from dev to test to stage to prod environments may be more difficult. This can be overcome with conditional logic or precompiler directives, but either of those degrades the simplicity of this approach (its chief advantage).
Even if you’re not hard-coding shared setting values, it can be worthwhile to share a library containing the shared setting keys. This might take the form of just constant values, as described here, or ideally you can use interfaces to describe your settings values in a strongly-typed manner, and use a convention to convert properties on your interfaces into settings keys.
To convert the above bit of code into an interface, just make this small change:
public interface ICloudSettings
{
    string StaticResourcesUrlPrefix { get; }
    // more settings go here
}

public class CloudSettings : ICloudSettings
{
    public string StaticResourcesUrlPrefix { get; } = "";
    // more settings go here
}
Configuration
Probably the most common approach to solving this problem is to use configuration. In this case, you might simply add a key representing the setting in question to your project’s settings file, along with the appropriate value. Once you’ve done this once, it’s pretty easy to copy-paste this same setting into other environment-specific files or other projects’ settings files. This approach works well and offers more flexibility than the hardcoded-in-code approach. Be sure to follow these tips when working with configuration files in .NET, though:
- Apply Interface Segregation to Config Files
- Use Custom Configuration Section Handlers (pre .NET Core)
- Refactor Static Config Access
The biggest downside to the configuration approach is that over time you may end up with a ton of configuration settings, possibly without much cohesion between them. They’re also not quite as easy to update or automate in a cloud environment as something like environment variables, discussed next.
Environment Variables
A third approach is to store settings in environment variables. Environment variables are easy to update when using cloud hosting services or Docker containers. They work well cross-platform and they’re very well-supported in .NET Core. The default code templates for ASP.NET Core applications, at least in the 1.x timeframe, use both configuration files and environment variables for app settings. The way it’s configured by default, for a given setting key, the app will first check whether there is an environment variable. If there is, it uses that value. Otherwise, it falls back to looking in settings file(s) for a value that matches the given key. At some of my clients we have implemented similar systems for .NET 4.6 apps. With this approach, you can also easily vary the behavior based on the environment. For instance, if you want to ensure your production environment uses environment variables, but it’s easier for your dev team to use config files, you could have your code throw an exception when the app is running in production and a value isn’t found in an environment variable. At dev time, values not found in environment variables could fall back to a local config file.
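The lookup order described above can be sketched in a few lines. This is an illustrative Python sketch of the pattern, not the ASP.NET Core implementation: check the environment first, fall back to a config mapping, and optionally fail hard in production when the environment variable is missing.

```python
import os

def get_setting(key, config, environment="dev"):
    """Environment variable first, then config file, per the pattern above."""
    value = os.environ.get(key)
    if value is not None:
        return value
    if environment == "prod":
        # in production, insist the value comes from the environment
        raise KeyError("setting %r must be an environment variable in prod" % key)
    return config[key]

config = {"CDN_URL": "https://cdn.example.test/"}

os.environ["CDN_URL"] = "https://cdn.prod.example.test/"
print(get_setting("CDN_URL", config))  # the environment variable wins

del os.environ["CDN_URL"]
print(get_setting("CDN_URL", config))  # falls back to the config mapping
```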
Hybrid Patterns
None of these approaches are exclusive – you can mix and match them to suit your needs. For example, it’s pretty common to combine environment variables with configuration settings, with one falling back to the other. You can take this a step further and specify default values in code, to use when a value is not found in either configuration files or environment variables.
These are design patterns, not absolute solutions. Use your experience to come up with a solution that solves your problems in the simplest way possible. If you’re not sure of the best approach, ask online or enlist the help of an expert. An ounce of bad design prevention is worth months of refactoring and rewriting to fix a poor design decision.
Recommendations
Start with something simple; grow complexity only if/when it becomes necessary:
- Start with a hardcoded string.
- Move that to a constant.
- Move that to a strongly typed settings class.
- Move that to an interface.
- Implement the interface to use config, environment variables, or whatever you need.
Avoid tightly coupling to a particular configuration system.
Avoid static access to any configuration system if it impacts testability. Think about how you might unit test different configuration options at runtime. If it doesn’t impact testability, it may be fine, but in general watch out for static cling in your code.
Category - Browse all categories
|
https://ardalis.com/referencing-common-values-between-apps-projects/
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
On Mon, 2005-04-04 at 11:44 -0400, Deron Meranda wrote: > I was, though, expecting ls -Z to show the applied label. So the filesystem > context is being applied, but you can't see it via ls -Z? I guess that makes > sense now that I think about it, but it was a little surprising. I > kind of expected > the context= option to work somewhat like the uid= and gid= options as far > as it's visibility to ls. Unfortunately, no. ls -Z ultimately calls getxattr on the inode, and unless the filesystem implementation provides a getxattr method, you can't get that information. There has been discussion of putting a transparent redirect in the VFS so that if the filesystem implementation doesn't provide getxattr/setxattr on the security namespace, the VFS will automatically redirect the request to the security module (i.e. SELinux) and let it handle it based on the incore inode security context. > Also I think context= is what I want, versus fscontext=, since this is > an ISO9660 > filesystem that doesn't support extended attributes (xattr). Otherwise Apache > could see the filesystem, but not the individual files inside it. > Isn't that correct? I think for iso9660 they are effectively equivalent. It would make a difference for filesystems that have native xattr support. -- Stephen Smalley <sds tycho nsa gov> National Security Agency
|
https://listman.redhat.com/archives/fedora-selinux-list/2005-April/msg00015.html
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Model Binding Decimal Values. Let’s look at the scenario. Suppose you have the following class (Jogador is a soccer player in Portuguese):
public class Jogador { public int ID { get; set; } public string Name { get; set; } public decimal Salary { get; set; } }
And you have two controller actions: one that renders a form used to create a Jogador, and another action method that receives the POST request.
public ActionResult Create() { // Code inside here is not important return View(); } [HttpPost] public ActionResult Create(Jogador player) { // Code inside here is not important return View(); }
When you type in a value such as 1234567.55 into the Salary field and try to post it, it works fine. But typically, you would want to type it like 1,234,567.55 (or here in Brazil, you would type it as 1.234.567,55).
In general, we recommend folks don’t write custom model binders because they’re difficult to get right and they’re rarely needed. The issue I’m discussing in this post might be one of those cases where it’s warranted.
Here’s the code for my DecimalModelBinder. I should probably write one for other decimal types too, but I’m lazy.
WARNING: This is sample code! I haven’t tried to optimize it or test all scenarios. I know it works for direct decimal arguments to action methods as well as decimal properties when binding to complex objects.
using System; using System.Globalization; using System.Web.Mvc; public class DecimalModelBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { ValueProviderResult valueResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName); ModelState modelState = new ModelState { Value = valueResult }; object actualValue = null; try { actualValue = Convert.ToDecimal(valueResult.AttemptedValue, CultureInfo.CurrentCulture); } catch (FormatException e) { modelState.Errors.Add(e); } bindingContext.ModelState.Add(bindingContext.ModelName, modelState); return actualValue; } }
With this in place, you can easily register this in Application_Start within Global.asax.cs.
protected void Application_Start() { AreaRegistration.RegisterAllAreas(); ModelBinders.Binders.Add(typeof(decimal), new DecimalModelBinder()); // All that other stuff you usually put in here... }
That registers our model binder to only be applied to decimal types, which is good since we wouldn’t want model binding to try and use this model binder when binding any other type.
With this in place, the Salary field will now accept both 1234567.55 and 1,234,567.55.
Hope you find this useful. I’ve had a great time in Buenos Aires, Argentina and São Paulo, Brazil. I’ll probably be swamped when I get back home, but I’ll try to make time to write about my time here.
|
http://haacked.com/archive/2011/03/19/fixing-binding-to-decimals.aspx/
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
As I was putting together the coord_proj ggplot2 extension I had posted a gist that I shared on Twitter. Said gist received a comment (several, in fact) and a bunch of us were painfully reminded of the fact that there is no built-in way to receive notifications from said comment activity.
@jennybryan posited that it could be possible to use IFTTT as a broker for these notifications, but after some checking that ended up not being directly doable since there are no “gist comment” triggers to act upon in IFTTT.
There are a few standalone Ruby gems that programmatically retrieve gist comments but I wasn’t interested in managing a Ruby workflow [ugh]. I did find a Heroku-hosted service that will turn gist comments into an RSS/Atom feed (based on Ruby again). I gave it a shot and hooked it up to IFTTT but my feed is far enough down on the food chain there that it never gets updated. It was possible to deploy that app on my own Heroku instance, but—again—I’m not interested in managing a Ruby workflow.
The Ruby scripts pretty much:
- grab your main gist RSS/Atom feed
- visit each gist in the feed
- extract comments & comment metadata from them (if any)
- return a composite data structure you can do anything with
That’s super-easy to duplicate in R, so I decided to build a small R script that does all that and generates an RSS/Atom file which I added to my Feedly feeds (I’m pretty much always scanning RSS, so really didn’t need the IFTTT notification setup). I put it into a cron job that runs every hour. When Feedly refreshes the feed, a new entry will appear whenever there’s a new comment.
The script is below and on github (ironically as a gist). Here’s what you’ll grok from the code:
- one way to deal with the “default namespace” issue in R+XML
- one way to deal with error checking for scraping
- how to build an XML file (and, specifically, an RSS/Atom feed) with R
- how to escape XML entities with R
- how to get an XML object as a character string in R
You’ll definitely need to tweak this a bit for your own setup, but it should be a fairly complete starting point for you to work from. To see the output, grab the generated...
|
https://www.r-bloggers.com/roll-your-own-gist-comments-notifier-in-r/
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Working with Windows Registry using Windows API – Part 3 (How to Create a Registry Key)
How to Create a Registry Key
You may recall that the function used for creating new registry keys is RegCreateKeyEx(). Its syntax is :
LONG RegCreateKeyEx (HKEY hKey, LPCTSTR lpSubKey, DWORD Reserved, LPTSTR lpClass, DWORD dwOptions, REGSAM samDesired, LPSECURITY_ATTRIBUTES lpSecurityAttributes, PHKEY phkResult, LPDWORD lpdwDisposition)
Here, the various parameters have the following meanings :
- hKey is a handle of the type of HKEY.
- lpSubKey is the name of the new subkey under the open key indicated by the handle.
- lpClass is a user-defined class type for the key. NULL value is recommended for this parameter.
- dwOptions flag is usually REG_OPTION_NON_VOLATILE. Another option is REG_OPTION_VOLATILE. The difference is that ‘non-volatile’ registry information is stored in a file and hence it remains in the computer even after the computer restarts. On the other hand, ‘volatile’ registry information is temporarily stored in RAM and is deleted as soon as the computer shuts down.
- samDesired is the access mask describing the desired security access for the new key. Possible values are KEY_ALL_ACCESS, KEY_WRITE, KEY_QUERY_VALUE, KEY_ENUMERATE_SUBKEYS.
- lpSecurityAttributes can be NULL or can point to a SECURITY_ATTRIBUTES structure that determines whether the returned handle can be inherited by child processes.
- lpdwDisposition points to a DWORD that indicates whether the key already existed or was created (REG_OPENED_EXISTING_KEY or REG_CREATED_NEW_KEY).
For example,
nError = RegCreateKeyEx (hRootKey, strKey, NULL, NULL, REG_OPTION_NON_VOLATILE, KEY_ALL_ACCESS, NULL, &hKey, NULL);
#include <windows.h> #include <iostream> using namespace std; HKEY CreateKey (HKEY hRootKey, wchar_t* strKey) { HKEY hKey; LONG nError = RegOpenKeyEx (hRootKey, strKey, NULL, KEY_ALL_ACCESS, &hKey); if (nError == ERROR_FILE_NOT_FOUND) { cout << "Creating the specified registry key... " << strKey << endl; nError = RegCreateKeyEx (hRootKey, strKey, NULL, NULL, REG_OPTION_NON_VOLATILE, KEY_ALL_ACCESS, NULL, &hKey, NULL); } if (nError) cout << "Error: " << nError << " Could not create the specified key !!!" << strKey << endl; else cout << "The specified registry key was created successfully...!!!" << endl; return hKey; } void SetVal(HKEY hKey, LPCTSTR lpValue, DWORD data) { LONG nError = RegSetValueEx (hKey, lpValue, NULL, REG_DWORD, (LPBYTE) &data, sizeof (DWORD)); if (nError) cout << "Error: " << nError << " The key was created but the registry value could not be set !!! " << (char*) lpValue << endl; } int main() { static DWORD v1, v2; HKEY hKey = CreateKey (HKEY_LOCAL_MACHINE, L"SOFTWARE\\Windows Code Bits"); v1 = 15; v2 = 3; SetVal(hKey, L"Website", v1); SetVal(hKey, L"Blog", v2); RegCloseKey(hKey); cout << endl; cout << "Press Any Key to Exit..."; getchar(); return 0; }
How does this program work ?
a) First of all, the main() function calls a user-defined function named CreateKey() with two arguments, i.e. (i) the root key and (ii) the key that we want to create.
b) The CreateKey() function calls RegOpenKeyEx() to check whether the key that we want to create already exists. If this is the case, the function skips the next few statements and returns the handle to that key. If no such key exists, a new key is created with the name specified. The new key is created in HKEY_LOCAL_MACHINE\SOFTWARE\Windows Code Bits.
c) Now, the control returns back to the main() function. Here, two values (namely, ‘Website’ and ‘Blog’) are created for the key that we just created and data is fed into them using a user-defined function named SetVal().
d) The SetVal() function calls RegSetValueEx() to set the data in each value. Both values are of REG_DWORD type and their values are 15 and 3 respectively.
You can download this program i.e. RegCreate.cpp here
You can download the executable i.e. RegCreate.exe here
Please note that above executable is 64-bit and will not work on 32-bit editions of Windows. To create a 32-bit executable for your 32-bit Windows PC, you need to build the program as 32-bit in Visual Studio or any other IDE that you use.
So, we hope you understood how to create a registry key using the Windows API. The next article will discuss how to open, edit and delete registry keys using Windows API functions.
|
https://www.wincodebits.in/2015/08/working-with-registry-using-windows-api_18.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
import "go.chromium.org/luci/luci_notify/notify"
commits.go emailgen.go gitiles.go notify.go pubsub.go srcman.go
func BlamelistRepoWhiteset(notifications notifypb.Notifications) stringset.Set
BlamelistRepoWhiteset computes the aggregate repository whitelist for all blamelist notification configurations in a given set of notifications.
BuildbucketPubSubHandler is the main entrypoint for a new update from buildbucket's pubsub.
This handler delegates the actual processing of the build to handleBuild. Its primary purpose is to unwrap context boilerplate and deal with progress-stopping errors.
func InitDispatcher(d *tq.Dispatcher)
InitDispatcher registers the send email task with the given dispatcher.
func Notify(c context.Context, d *tq.Dispatcher, recipients []EmailNotify, templateParams *EmailTemplateInput) error
Notify discovers, consolidates and filters recipients from a Builder's notifications, and 'email_notify' properties, then dispatches notifications if necessary. Does not dispatch a notification for same email, template and build more than once. Ignores current transaction in c, if any.
SendEmail is a push queue handler that attempts to send an email.
type Build struct { buildbucketpb.Build EmailNotify []EmailNotify }
Build is buildbucketpb.Build along with the parsed 'email_notify' values.
Checkout represents a Git checkout of multiple repositories. It is a mapping of repository URLs to Git revisions.
func NewCheckout(commits notifypb.GitilesCommits) Checkout
NewCheckout creates a new Checkout populated with the repositories and revision found in the GitilesCommits object.
Filter filters out repositories from the Checkout which are not in the whitelist and returns a new Checkout.
func (c Checkout) ToGitilesCommits() notifypb.GitilesCommits
ToGitilesCommits converts the Checkout into a set of GitilesCommits which may be stored as part of a config.Builder.
CheckoutFunc is a function that given a Build, produces a source checkout related to that build.
func ComputeRecipients(notifications notifypb.Notifications, inputBlame []*gitpb.Commit, outputBlame Logs) []EmailNotify
ComputeRecipients computes the set of recipients given a set of notifications, and potentially "input" and "output" blamelists.
An "input" blamelist is computed from the input commit to a build, while an "output" blamelist is derived from output commits.
type HistoryFunc func(c context.Context, host, project, oldRevision, newRevision string) ([]*gitpb.Commit, error)
HistoryFunc is a function that gets a list of commits from Gitiles for a specific repository, between oldRevision and newRevision, inclusive.
If oldRevision is not reachable from newRevision, returns an empty slice and nil error.
Logs represents a set of Git diffs between two Checkouts.
It is a mapping of repository URLs to a list of Git commits, representing the Git log for that repository.
func ComputeLogs(c context.Context, oldCheckout, newCheckout Checkout, history HistoryFunc) (Logs, error)
ComputeLogs produces a set of Git diffs between oldCheckout and newCheckout, using the repositories in the newCheckout. historyFunc is used to grab the Git history.
func (l Logs) Blamelist(template string) []EmailNotify
Blamelist computes a set of email notifications from the Logs.
Filter filters out repositories from the Logs which are not in the whitelist and returns a new Logs.
Package notify imports 45 packages and is imported by 1 package. Updated 2018-08-14.
|
https://godoc.org/go.chromium.org/luci/luci_notify/notify
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
--- Comment #1 from Chad H. <[email protected]> --- (In reply to comment #0) > This may be actually impossible, but I'm filing a bug to discuss strategies > for > preventing mirrors of Wikipedia from including pages we NOINDEX. A good > example > of this is user pages or user talk pages, and the new Draft namespace on > English Wikipedia. > I can't think of any possible way to /enforce/ this, nor should we. We definitely shouldn't redact the info from the API or dumps (which I assume are the two most common ways of mirroring us). Now, we might be able to expose the NOINDEX to reusers and encourage people to respect it, but I can't see any way of preventing people from using the content if they really want it. -- You are receiving this mail because: You are the assignee for the bug. You are on the CC list for the bug. _______________________________________________ Wikibugs-l mailing list [email protected]
|
https://www.mail-archive.com/[email protected]/msg318043.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Parallel LINQ is a set of extension methods created for LINQ to Objects. If you want to learn about standard LINQ, you can read my other post here. The main difference between them is that Parallel LINQ executes the code in parallel, which could improve the performance and execution speed of the application, while LINQ does it sequentially. Of course, using PLINQ gains significant performance improvements only if you or your users have a multi-core machine.
PLINQ was added in .NET Framework 4 and was designed to work with IEnumerable and IEnumerable<T> collections. The biggest improvement can be seen if the logic for the collection items is well-separated and there is no shared state between them. Take into account that the data source needs to be partitioned and passed to the individual threads/tasks before execution, and the results have to be merged into the result collection. All these operations have overhead and affect the execution time of a PLINQ query. For more details about general performance considerations when using PLINQ, please read the MSDN page.
As an example let’s take a look at an algorithm for finding prime numbers. This is a task which is computationally intensive and can be well parallelized. Prime numbers are very important in computer science – there are a lot of cryptographic algorithms that are based on prime numbers.
I implemented the IsPrime() method to decide whether the number given as the parameter is prime or not.
public static bool IsPrime(int number) { if (number < 2) { return false; } if (number == 2 || number == 3) { return true; } if (number % 2 == 0) { return false; } var sqrt = Math.Ceiling(Math.Sqrt(number)); for (int i = 3; i <= sqrt; i += 2) { if (number % i == 0) { return false; } } return true; }
I cover some edge cases in the beginning. If the number is smaller than 2, it is not prime. If the number is 2 or 3, I return true since these are prime numbers. If the number is divisible by 2, it is not prime. Then, the method looks for divisors between 3 and the square root of the original number; if it finds a divisor, the number is not prime. In any other case the number is prime.
Below is a normal LINQ for selecting the primes between 1 and 5.000.000. I added some performance measurements using Stopwatch class from System.Diagnostics namespace.
var items = Enumerable.Range(1, 5000000); var watch = new Stopwatch(); watch.Restart(); var primes = items.Where(p => IsPrime(p)).ToList(); watch.Stop(); Console.WriteLine("{0} prime numbers found in {1} milliseconds.", primes.Count, watch.ElapsedMilliseconds);
The output, when compiled to Release mode is (executed three times):
348512 prime numbers found in 1570 milliseconds. 348512 prime numbers found in 1562 milliseconds. 348512 prime numbers found in 1583 milliseconds.
Running on a single thread, the calculation takes around 1.5 seconds.
I make a simple change to execute the code in parallel using the AsParallel() extension method of the IEnumerable interface:
var items = Enumerable.Range(1, 5000000); var watch = new Stopwatch(); watch.Restart(); var primes = items.AsParallel().Where(p => IsPrime(p)).ToList(); watch.Stop(); Console.WriteLine("{0} prime numbers found in {1} milliseconds.", primes.Count, watch.ElapsedMilliseconds);
When executing the code in Release mode the result is the following:
348512 prime numbers found in 830 milliseconds. 348512 prime numbers found in 860 milliseconds. 348512 prime numbers found in 851 milliseconds.
The execution time has decreased almost by half, from 1.5 seconds to 0.8 seconds, and this only by adding AsParallel() to the code. This performance increase is realistic, since on my machine I have two CPU cores. Even if I increase the number of items to scan to 15 million, the execution time is around 3.7 seconds. Executing the non-parallel query for 15 million items takes around 7.2 seconds (note that the ratio is similar: almost a 50% improvement when executing in parallel).
Using the WithDegreeOfParallelism() method, you can cap how many concurrent tasks (see related article on Tasks here) the framework creates for the query, i.e. how many chunks of work the data is split into.
var primes = items .AsParallel() .WithDegreeOfParallelism(8) .Where(p => IsPrime(p)) .ToList();
In this code I allow the framework to use up to eight concurrent threads/tasks for filtering the items using the Where clause.
In some cases PLINQ can decide to fall back to sequential execution. It decides based on the type and structure of the queries you run; for example, when the query contains indexed Select or Where clauses that depend on the original item order, or when it contains Take(), Skip(), SkipWhile() or TakeWhile() and the order of the items in the collection was changed earlier in the query.
There are cases when your measurements confirm that executing a LINQ query in parallel performs better, but the framework still falls back to sequential execution. In that case you can force the .NET Framework to always execute your query in parallel using the WithExecutionMode() method:
var primes = items .AsParallel() .WithDegreeOfParallelism(8) .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Where(p => IsPrime(p)) .ToList();
As you can see, gaining extra performance from the hardware is easy using PLINQ, but please be aware that not all algorithms can be executed in parallel, and in many cases executing code in parallel can lead to strange errors in the results if the implemented logic is complex and has shared state.
Please make measurements before and after using PLINQ. This way, you can be sure that you really made a difference with your code change. You should also test your code compiled in Release mode on machines that have more CPU cores than yours, and also on machines with fewer. By doing this, you ensure that the code is optimized for different hardware, and not just for the machine you used to develop the software.
|
https://www.tr.freelancer.com/community/articles/parallel-linq
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
How can I get the data from the API for how long an issue (epic) spent in each status, or at least in an "In Progress" status, so that I can calculate cycle time? Yes, I've seen the control chart, but I can't use them for my purpose.
I use this code for this:
import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.issue.Issue import com.atlassian.jira.issue.changehistory.ChangeHistory import com.atlassian.jira.issue.changehistory.ChangeHistoryManager import com.atlassian.jira.issue.history.ChangeItemBean /** * Return time between two statuses */ //Issue issue = null; long endStatusTime = 0; String endStatusName = "Закрыт" ChangeHistoryManager changeHistoryManager = ComponentAccessor.getChangeHistoryManager(); for(ChangeHistory changeHistory: changeHistoryManager.getChangeHistories(issue)) for(ChangeItemBean changeItemBean: changeHistory.getChangeItemBeans()) { if (changeItemBean.getField().equals("status")){ if( changeItemBean.getToString().equals(endStatusName) ){ endStatusTime = changeItemBean.getCreated().getTime() } } } return String.format("%.2f", (endStatusTime - issue.getCreated().getTime())/(1000*60*60.0)) // milliseconds to hours
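The change-history walk above only measures creation-to-close time; generalizing it to time spent in each status is the same loop with running totals. A language-agnostic sketch in Python (the tuple shape and the assumption that you know the issue's starting status are illustrative, not the Jira API):

```python
from datetime import datetime

def time_in_status(created, start_status, transitions, now):
    """Seconds spent in each status, given ordered
    (timestamp, from_status, to_status) transitions."""
    totals = {}
    current, since = start_status, created
    for when, _from_status, to_status in transitions:
        totals[current] = totals.get(current, 0.0) + (when - since).total_seconds()
        current, since = to_status, when
    # Account for the status the issue is still sitting in.
    totals[current] = totals.get(current, 0.0) + (now - since).total_seconds()
    return totals
```

Cycle time is then just the "In Progress" entry of the returned mapping.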
|
https://community.atlassian.com/t5/Jira-questions/How-can-I-get-cycle-time-for-epics-via-the-API/qaq-p/199782
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
China 15.6 inch notebook laptop with Intel Z8350 CPU support Win 10 os
US $132.5-155.5 / Piece
1 Piece (Min. Order)
Shenzhen GST Communication Co., Ltd.
94.4%
import used computers wholesale used computers and laptops
US $120-450 / Set
5 Sets (Min. Order)
Shenzhen Riguan Photoelectric Co., Ltd.
96.5%
Delux Smart Voice Mechanical Wireless Gaming Keyboard for Designer
US $66.53-66.53 / Piece
Shenzhen ONU Mall Technology Co., Limited
78.6%
Metal mesh computer desk storage monitor stand
US $7-8.5 / Piece
1000 Pieces (Min. Order)
Cixi Ciyi Steel Tube Factory
65.2%
japanese used computers laptop mini computer i7
US $120-450 / Set
5 Sets (Min. Order)
Shenzhen Riguan Photoelectric Co., Ltd.
96.5%
- About product and suppliers:
Alibaba.com offers 5 used laptops no os products. About 20% of these are laptops. A wide variety of used laptops no os options are available to you, There are 5 used laptops no os suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supply 100% of used laptops no os respectively. Used laptops no os products are most popular in Western Europe, Northern Europe, and North America.
|
https://www.alibaba.com/showroom/used-laptops-no-os.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
import javax.faces.application.FacesMessage; import javax.faces.component.UIComponent; import javax.faces.context.FacesContext; import javax.faces.convert.*; import com.facade.DogFacade; import com.model.Dog; @FacesConverter(forClass = com.model.Dog.class) public class DogConverter implements Converter { @Override public Object getAsObject(FacesContext arg0, UIComponent arg1, String arg2) { DogFacade dogFacade = new DogFacade(); int dogId; try { dogId = Integer.parseInt(arg2); } catch (NumberFormatException exception) { throw new ConverterException(new FacesMessage(FacesMessage.SEVERITY_ERROR, "Type the name of a Dog and select it (or use the dropdown)", "Type the name of a Dog and select it (or use the dropdown)")); } return dogFacade.findDog(dogId); } @Override public String getAsString(FacesContext arg0, UIComponent arg1, Object arg2) { if (arg2 == null) { return ""; } Dog dog = (Dog) arg2; return String.valueOf(dog.getId()); } }
About the above code:
- In the @FacesConverter annotation there is an attribute named “forClass”. This attribute tells JSF that every occurrence of the class specified in “forClass” will invoke the Converter. Since DogConverter is annotated with “forClass = com.model.Dog.class”, every time JSF needs a Converter for the Dog class it will invoke DogConverter. It was not necessary to write any code in the “web.xml” file.
The Converter code is required by the Primefaces AutoComplete. Below you can see how easy it is to use the AutoComplete:
personUpdateDialog.xhtml
PersonMB.java
About the above code:
- The AutoComplete component has several options like minimum query length, delay to start a query, a dropdown to display all values, and more. It is worth seeing all the options.
- The “complete” method keeps a cache of the values found in the database. The method goes through each object of the List<Dog> and keeps the matches.
- Notice that the Converter will always be called because of the itemValue=“#{dog}” attribute that you will find in the AutoComplete component.
You can see below the AutoComplete working:
CSS/javascript/images with JSF
Take a look at the pictures below; they display how the application of this post handles resources:
“master.xhtml”
“index.xhtml”
With JSF it is very easy to handle this kind of resource. The developer does not need to use the file’s relative path anymore. To use your resources like that, create your files as below:
JSF will map all resources found in the folder “/WebContent/resources” and the developer will be able to use the resources as displayed above.
“web.xml” configurations
In the folder “WebContent/WEB-INF/” create the files below:
“web.xml”
<?xml version='1.0'?> <web-app> <display-name>JSFCrudApp</display-name> <welcome-file-list> <welcome-file>/pages/protected/index.xhtml</welcome-file> </welcome-file-list> <context-param> <param-name>javax.faces.PROJECT_STAGE</param-name> <param-value>Development</param-value> </context-param> <filter> <filter-name>LoginCheckFilter</filter-name> <filter-class>com.filter.LoginCheckFilter</filter-class> <init-param> <param-name>loginActionURI</param-name> <param-value>/JSFCrudApp/pages/public/login.xhtml</param-value> </init-param> </filter> <filter-mapping> <filter-name>LoginCheckFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <filter> <filter-name>AdminPagesFilter</filter-name> <filter-class>com.filter.AdminPagesFilter</filter-class> </filter> <filter-mapping> <filter-name>AdminPagesFilter</filter-name> <url-pattern>/pages/protected/admin/*</url-pattern> </filter-mapping> <filter> <filter-name>DefaultUserPagesFilter</filter-name> <filter-class>com.filter.DefaultUserPagesFilter</filter-class> </filter> <filter-mapping> <filter-name>DefaultUserPagesFilter</filter-name> <url-pattern>/pages/protected/defaultUser/*</url-pattern> </filter-mapping> </web-app>
“faces-config.xml”
<?xml version='1.0' encoding='UTF-8'?> <faces-config> <application> <resource-bundle> <base-name>messages</base-name> <var>bundle</var> </resource-bundle> <message-bundle>messages</message-bundle> </application> </faces-config>
About the above code:
- All the security filters are mapped in the web.xml file. You could also use the @WebFilter annotation, which would work the same.
- The property “javax.faces.PROJECT_STAGE” has the value Development. One of the advantages of this configuration is that JSF will append an “h:message” component if none is found on the screen. If some exception happens and there is no component to display it, JSF will append an h:message component to the page.
- Inside the “faces-config” there is a tag named “message-bundle”. This tag allows the developer to override the default JSF messages; the value of this tag points to a file with the default JSF message keys. The “message.properties” file (page 08) has the key “javax.faces.component.UIInput.REQUIRED”; any value you write in this key will affect all the “required field” messages displayed in the application.
Increasing the security of your application
Do not concatenate the queries
Do not use the usual “where id = ” + id SQL code. This kind of code allows the “SQL Injection” hacker attack. A developer that works with an ORM is also vulnerable to this kind of attack, just under a different name: “HQL Injection”. The best way of doing a query you may find in the Person and User classes:
It does not matter if you use JPA or not, never concatenate your query with strings.
The developer must be aware that SQL Injection may happen in any query of the application, not only in the login query. If a user has a valid login to your application, this user is able to attempt SQL Injection in all your queries.
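The rule is language-independent; here is a minimal illustration of bound parameters using Python's sqlite3 as a stand-in for JPA named parameters (the table and hostile input are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

hostile = "1 OR 1=1"  # would widen a concatenated query to every row

# Unsafe (never do this): "SELECT * FROM users WHERE id = " + hostile
# Safe: the driver binds the value, so the input stays data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE id = ?", (hostile,)).fetchall()
```

Because the hostile string is bound as a value rather than spliced into the statement, the query matches nothing instead of returning every user.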
AutoComplete Off
In the page “login.xhtml” there is the following code:
The attribute autocomplete=“off” tells the browser that this field should not be saved. The major browsers (Firefox, Chrome, IE) respect this attribute and it helps to protect your application.
If you allow the browser to keep the password, the browser must store that password somewhere. If a hacker finds out where it is stored, he may try to crack it.
Validate all incoming requests
If the application were only required to check whether the user is logged in, the User class would not need the role Enum.
If there were no role validation, a regular user could access any area of the system, like the admin pages. Notice that this application has filters that always validate the user role for all requests. Hiding a link is not a protection measure; the developer should always validate all incoming requests. A developer may hide a URL or a button, but a user could access any screen of your application by typing the URL in the browser or by simulating GET/POST calls.
Always use the h:outputText component
An easy way to avoid Cross-Site Scripting is using the h:outputText component to display data from the database. Notice in this post that all the values displayed to the user that come from the database use the h:outputText component. A situation where the h:outputText component can be avoided is when the application displays system messages from a file like “message.properties”.
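What h:outputText does for you is ordinary HTML entity escaping; the idea can be shown with Python's standard library (an illustration of the principle, not JSF itself):

```python
import html

# A value from the database that happens to contain markup.
malicious = "<script>alert('xss')</script>"

# Escaping turns the markup characters into inert entities,
# which is what h:outputText applies to its value automatically.
safe = html.escape(malicious)
```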
Running the application
Start Tomcat and check the database; notice that there are no tables inside the database yet.
Type the following URL in the browser: .
The following screen will be displayed:
Type any value that you want and press the login button; do not worry that there are no users or tables yet. Just click on it. You will see an error message like this:
Check your database again and you will see that all tables were created. The application controls all transactions manually; that is why JPA/Hibernate only creates the tables when first invoked.
You need to create a user with the role ADMIN (I named this user Real Madrid) and another user with the USER role (I named it Barcelona). The ADMIN user will have access to the whole system under all folders, but the user with the USER role will have access only to the pages under the defaultUser folder.
If you log in with the ADMIN user you will see:
Log in now with the user that has the USER role:
The Dogs button was hidden. Even without the button, a user could still access the Dogs URL, and the Dogs screen should be allowed only to the ADMIN role.
To see how the application behaves with an illegal access, stay logged in with the USER role and try to access the following link:
The next screen will be displayed:
This screen is displayed because the AdminPagesFilter checks whether the request comes from an ADMIN user. If the user is not in the ADMIN role, our Ninja Cat will get it! [=
Reference: Full Web Application with Tomcat JSF Primefaces JPA Hibernate from our JCG partner Hebert Coelho at the uaiHebert blog.
could you share the source code of project
hello, please, share the source code. tx..
I am beginner in java .could you able to send source code of this application at address [email protected]
Good evening, very interesting tutorial, but I am not able to find where to download the souce code.
I’m studying these technologies and would be very helpful.
Thanks in advance
Where can i find and download the source code of project ?
Tanks in advance Vince
Hi,
This is very nice tutorial to start learning JSF..but as you mentioned in part1 i did not get the source code/all required resources with .jars file required.
Can you please provide the complete source code for this Nice Application.
Thanks in Advance Vinod
One of the best article/tutorial about primefaces,jsf,hibernate. Thank you very much.
Only one thing is missed, a 4th page for telling web pages.
Thanks anyway, i like it so much.
Could you please share the source code
https://www.javacodegeeks.com/2012/07/full-web-application-tomcat-jsf_4954.html
Hi,
On 23 August 2012, at 10:52, Florin Pinte <[email protected]> wrote:
> Hello,
>
>.
I'm having trouble understanding the issue here.
>.
Then, you could probably do the following:
- install ApacheDS (the Directory Server, not the Studio) using the specific Windows installer
- use Apache Directory Studio to :
- Edit the configuration using the ApacheDS Configuration plugin
- Edit the schema of the server using the Schema Editor plugin (which can be exported
for ApacheDS in LDIF format)
- Edit the data of your ApacheDS instance with the LDAP Browser plugin (browse/add entries,
import LDIF files, etc.)
The ApacheDS instance should be easy to launch either:
- from command line with our bat script
- from the service utility in Windows.
Regards,
Pierre-Arnaud
> Regards,
> Florin
http://mail-archives.apache.org/mod_mbox/directory-users/201208.mbox/%[email protected]%3E
This, and the following few blogs, takes a look at the whole social scene by demonstrating the use of Spring Social, and I’m going to start by getting very basic.
If you’ve seen the Spring Social Samples you’ll know that they contain a couple of very good and complete ‘quickstart’ apps; one for Spring 3.0.x and another for Spring 3.1.x. In looking into these apps, the thing that struck me was the number of concepts you have to learn in order to appreciate just what’s going on. This includes configuration, external authorization, feed integration, credential persistence etc… Most of this complexity stems from the fact that your user will need to login to their Software as a Service (SaaS) account, such as Twitter, Facebook or QZone, so that your application can access their data 1. This is further complicated by the large number of SaaS providers around together with the different number of authorization protocols they use.
So, I thought that I’d try and break all this down into the various individual components explaining how to build a useful app; however, I’m going to start with a little background.
The Guys at Spring have quite rightly realized that there are so many SaaS providers on the Internet that they’ll never be able to code modules for all of them, so they’ve split the functionality into two parts, with the first part comprising the spring-social-core and spring-social-web modules that provide the basic connectivity and authorization code for every SaaS provider. Providing all this sounds like a mammoth task, but it’s simplified in that to be a SaaS provider you need to implement what’s known as the OAuth protocol. I’m not going into OAuth details just yet, but in a nutshell the OAuth protocol performs a complicated little jig that allows the user to share their SaaS data (i.e. stuff they have on Facebook etc.) with your application without the user handing out their credentials to your application. There are at least three versions: 1.0, 1.0a and 2.0, and SaaS providers are free to implement any version they like, often adding their own proprietary features.
The second part of this split consists of the SaaS provider modules that know how to talk to the individual service provider servers at the lowest levels. The Guys at Spring currently provide the basic services, which to the Western World are Facebook, LinkedIn and Twitter. The benefit of taking the extensive modular approach is that there’s also a whole bunch of other community led modules that you can use:
- Spring Social 500px
- Spring Social BitBucket
- Spring Social Digg
- Spring Social Dropbox
- Spring Social Flattr
- Spring Social Flickr
- Spring Social Foursquare
- Spring Social Google
- Spring Social Instagram
- Spring Social Last.fm
- Spring Social Live (Windows Live)
- Spring Social Miso
- Spring Social Mixcloud
- Spring Social Nk
- Spring Social Salesforce
- Spring Social SoundCloud
- Spring Social Tumblr
- Spring Social Viadeo
- Spring Social Vkontakte
- Spring Social Weibo
- Spring Social Xing
- Spring Social Yammer
- Spring Social Security Module
- Spring Social Grails Plugin
This, however, is only a fraction of the number of services available: to see how large this list is, visit the AddThis web site and find out what services they support.
Back to the Code
Now, if you’re like me, then when it comes to programming you’ll hate security: from a development viewpoint it’s a lot of faff, it stops you from writing code and it makes your life difficult, so I thought I’d start off by throwing all that stuff away and writing a small app that displays some basic SaaS data. This, it turns out, is possible because some SaaS providers, such as Twitter, serve both private and public data. Private data is the stuff that you need to log in for, whilst public data is available to anyone.
In today’s scenario, I’m writing a basic app that displays a Twitter user’s time line in an application using the Spring Social Twitter Module and all you’ll need to do this is the screen name of a Twitter user.
To create the application, the first step is to create a basic Spring MVC Project using the template section of the SpringSource Toolkit Dashboard. This provides a webapp that’ll get you started.
The second step is to add the following dependencies to your pom.xml file:
<!-- Twitter API -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>${org.springframework.social-twitter-version}</version>
</dependency>
<!-- CGLIB, only required and used for @Configuration usage: could be removed in future release of Spring -->
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>2.2</version>
</dependency>
The first dependency above is for Spring Social’s Twitter API, whilst the second is required for configuring the application using Spring 3’s @Configuration annotation. Note that you’ll also need to specify the Twitter API version number by adding:
<org.springframework.social-twitter-version>1.0.2.RELEASE</org.springframework.social-twitter-version>
…to the <properties> section at the top of the file.
Step 3 is where you need to configure Spring. If you look at the Spring Social sample code, you’ll notice that the Guys at Spring configure their apps using Java and the Spring 3 @Configuration annotation. This is because Java based configuration allows you a lot more flexibility than the original XML based configuration.
@Configuration
public class SimpleTwitterConfig {

    private static Twitter twitter;

    public SimpleTwitterConfig() {
        if (twitter == null) {
            twitter = new TwitterTemplate();
        }
    }

    /**
     * A proxy to a request-scoped object representing the simplest Twitter API
     * - one that doesn't need any authorization
     */
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
    public Twitter twitter() {
        return twitter;
    }
}
All that the code above does is to provide Spring with a simple TwitterTemplate object via its Twitter interface. Using @Configuration is strictly overkill for this basic application, but I will be building upon it in future blogs.
For more information on the @Configuration annotation and Java based configuration, take a look at:
Having written the configuration class the next thing to do is to sort out the controller. In this simple example, I’ve used a straight forward @RequestMapping handler that deals with URLs that look something like this:
<a href="timeline?id=roghughe">Grab Twitter User Time Line for @roghughe</a><br />
…and the code looks something like this:
@Controller
public class TwitterTimeLineController {

    private static final Logger logger = LoggerFactory.getLogger(TwitterTimeLineController.class);

    private final Twitter twitter;

    @Autowired
    public TwitterTimeLineController(Twitter twitter) {
        this.twitter = twitter;
    }

    @RequestMapping(value = "timeline", method = RequestMethod.GET)
    public String getUserTimeline(@RequestParam("id") String screenName, Model model) {
        logger.info("Loading Twitter timeline for :" + screenName);
        List<Tweet> results = queryForTweets(screenName);
        // Optional Step - format the Tweets into HTML
        formatTweets(results);
        model.addAttribute("tweets", results);
        model.addAttribute("id", screenName);
        return "timeline";
    }

    private List<Tweet> queryForTweets(String screenName) {
        TimelineOperations timelineOps = twitter.timelineOperations();
        List<Tweet> results = timelineOps.getUserTimeline(screenName);
        logger.info("Found Twitter timeline for :" + screenName + " adding " + results.size() + " tweets to model");
        return results;
    }

    private void formatTweets(List<Tweet> tweets) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        StateMachine<TweetState> stateMachine = createStateMachine(bos);
        for (Tweet tweet : tweets) {
            bos.reset();
            String text = tweet.getText();
            stateMachine.processStream(new ByteArrayInputStream(text.getBytes()));
            String out = bos.toString();
            tweet.setText(out);
        }
    }

    private StateMachine<TweetState> createStateMachine(ByteArrayOutputStream bos) {
        StateMachine<TweetState> machine = new StateMachine<TweetState>(TweetState.OFF);
        // Add some actions to the state machine (elided in the original post)
        return machine;
    }
}
The getUserTimeline method contains three steps: first it gets hold of some tweets, then it does a bit of formatting, and finally it puts the results into the model. In terms of this blog, getting hold of the tweets is the most important point, and you can see that this is done in the List<Tweet> queryForTweets(String screenName) method. This method has two steps: use the Twitter object to get hold of a TimelineOperations instance, and then use that object to query a time line using the screen name as the argument.
If you look at the Twitter interface, it acts as a factory object returning other objects that deal with different Twitter features: timelines, direct messaging, searching etc. I guess that this is because the developers realized that Twitter itself encompasses so much functionality that if all the required methods were in one class, then they’d have a God Object on their hands.
I’ve also included the optional step of converting the Tweets into HTML. To do this I’ve used the JAR from my State Machine project and blog and you can see how this is done in the formatTweets(...) method.
After putting the list of Tweets in to the model as an attribute, the final thing to accomplish is to write a JSP to display the data:
<ul>
    <c:forEach items='${tweets}' var='tweet'>
        <li><img src='${tweet.profileImageUrl}' align='middle'/>
            <c:out value='${tweet.fromUser}'/><br/>
            <c:out value='${tweet.text}' escapeXml='false'/></li>
    </c:forEach>
</ul>
If you implement the optional anchor tag formatting, then the key thing to remember is to ensure that the formatted Tweet’s HTML is picked up by the browser. This is achieved either by using the escapeXml='false' attribute of the c:out tag or by placing ${tweet.text} directly into the JSP.
I haven’t included any styling or a fancy front end in this sample, so if you run the code 2 you should get something like this:
And that completes my simple introduction to Spring Social, but there’s still a lot of ground to cover. In my next blog, I’ll be taking a look at what’s going on in the background.
1I’m guessing that there’s lots of privacy and data protection legality issues to consider here, especially if you use this API to store your users’ data and I’d welcome comments and observations on this.
2The code is available on GitHub at git://github.com/roghughe/captaindebug.git in the social project.
Reference: Getting Started with Spring Social from our JCG partner Roger Hughes at the Captain Debug’s Blog blog.
Hi, thanks for sharing the code
https://www.javacodegeeks.com/2012/06/getting-started-with-spring-social.html
Created on 2011-06-19 17:25 by haypo, last changed 2012-01-08 09:35 by rosslagerwall. This issue is now closed.
[271/356/1] test_concurrent_futures
Traceback (most recent call last):
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/queues.py", line 268, in _feed
send(obj)
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/connection.py", line 229, in send
self._send_bytes(memoryview(buf))
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/connection.py", line 423, in _send_bytes
self._send(struct.pack("=i", len(buf)))
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/connection.py", line 392, in _send
n = write(self._handle, buf)
OSError: [Errno 32] Broken pipe
Timeout (1:00:00)!
Thread 0x00000954:
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/threading.py", line 237 in wait
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/queues.py", line 252 in _feed
Thread 0x00000953:
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/forking.py", line 146 in poll
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/forking.py", line 166 in wait
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/multiprocessing/process.py", line 150 in join
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/concurrent/futures/process.py", line 208 in shutdown_worker
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/concurrent/futures/process.py", line 264 in _queue_management_worker
Thread 0x00000001:
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/threading.py", line 237 in wait
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/threading.py", line 851 in join
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/concurrent/futures/process.py", line 395 in shutdown
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/test/test_concurrent_futures.py", line 67 in tearDown
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/unittest/case.py", line 407 in _executeTestPart
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/unittest/case.py", line 463 in run
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/unittest/case.py", line 514
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/test/support.py", line 1166 in run
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/test/support.py", line 1254 in _run_suite
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/test/support.py", line 1280 in run_unittest
File "/home2/buildbot/slave/3.x.loewis-sun/build/Lib/test/test_concurrent_futures.py", line 628
make: Fatal error: Command failed for target `buildbottest'
program finished with exit code 1
See commit e6e7e42efdc2 of the issue #12310.
Message on a stackoverflow thread:
"I have suffered from the same problem, even if connecting on localhost in python 2.7.1. After a day of debugging i found the cause and a workaround:
Cause: BaseProxy class has thread local storage which caches the connection, which is reused for future connections causing "broken pipe" errors even on creating a new Manager
Workaround: Delete the cached connection before reconnecting
if address in BaseProxy._address_to_local:
del BaseProxy._address_to_local[self.address][0].connection"
---
See also maybe the (closed) issue #11663: multiprocessing doesn't detect killed processes
Connection._send_bytes() has a comment about broken pipes:
def _send_bytes(self, buf):
    # For wire compatibility with 3.2 and lower
    n = len(buf)
    self._send(struct.pack("=i", len(buf)))
    # The condition is necessary to avoid "broken pipe" errors
    # when sending a 0-length buffer if the other end closed the pipe.
    if n > 0:
        self._send(buf)
But the OSError(32, "Broken pipe") occurs on sending the buffer size (a chunk of 4 bytes: self._send(struct.pack("=i", len(buf)))), not on sending the buffer content.
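To make that failure mode concrete, here is a minimal standalone sketch (mine, not from the report): writing even a 4-byte size header into a pipe whose read end has been closed fails with EPIPE, which surfaces as an OSError because CPython ignores SIGPIPE at startup.

```python
import errno
import os

def write_to_closed_pipe():
    """Return the errno raised when writing to a pipe whose read
    end has already been closed (simulating the other process
    having gone away, as in the feeder-thread traceback above)."""
    r, w = os.pipe()
    os.close(r)  # the "other end" disappears
    try:
        # like Connection._send() writing the 4-byte length header
        os.write(w, b"\x00\x00\x00\x04")
    except OSError as exc:
        return exc.errno
    finally:
        os.close(w)
    return None

print(write_to_closed_pipe() == errno.EPIPE)  # EPIPE == 32 on POSIX
```

This is why the error appears on the size-header write: the pipe is already broken before any payload byte is sent.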
See also maybe the (closed) issue #9205: Parent process hanging in multiprocessing if children terminate unexpectedly
Ah, submitting a new task after the manager shutdown fails with OSError(32, 'Broken pipe'). Example:
---------------
from multiprocessing.managers import BaseManager

class MathsClass(object):
    def foo(self):
        return 42

class MyManager(BaseManager):
    pass

MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    manager = MyManager()
    manager.start()
    maths = manager.Maths()
    maths.foo()
    manager.shutdown()
    try:
        maths.foo()
    finally:
        manager.shutdown()
---------------
This example doesn't hang, but this issue is about concurrent.futures, not multiprocessing.
Oh, I think that I found a deadlock (or something like that):
----------------------------
import concurrent.futures
import faulthandler
import os
import signal
import time
def work(n):
    time.sleep(0.1)

def main():
    faulthandler.register(signal.SIGUSR1)
    print("pid: %s" % os.getpid())
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in executor.map(work, range(100)):
            pass
    print("shutdown")
    executor.shutdown()
    print("shutdown--")

if __name__ == '__main__':
    main()
----------------------------
Trace:
----------------------------
Thread 0x00007fbfc83bd700:
File "/home/haypo/prog/HG/cpython/Lib/threading.py", line 237 in wait
File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/queues.py", line 252 in _feed
Thread 0x00007fbfc8bbe700:
File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/queues.py", line 101 in put
File "/home/haypo/prog/HG/cpython/Lib/concurrent/futures/process.py", line 268 in _queue_management_worker
Current thread 0x00007fbfcc2e3700:
File "/home/haypo/prog/HG/cpython/Lib/threading.py", line 237 in wait
File "/home/haypo/prog/HG/cpython/Lib/threading.py", line 851 in join
File "/home/haypo/prog/HG/cpython/Lib/concurrent/futures/process.py", line 395 in shutdown
File "/home/haypo/prog/HG/cpython/Lib/concurrent/futures/_base.py", line 570 in __exit__
File "y.py", line 17 in main
File "y.py", line 20 in <module>
----------------------------
There are two child processes, but both are zombies (displayed as "<defunct>" by ps). Send SIGUSR1 signal to the frozen process to display the traceback (thanks to faulthandler).
Retrieving the result of a future after the executor has been shut down can cause a hang.
It seems like this regression was introduced in a76257a99636. This regression exists only for ProcessPoolExecutor.
The problem is that even if there are pending work items, the processes are still signaled to shut down leaving the pending work items permanently unfinished. The patch simply removes the call to shut down the processes when there are pending work items.
Attached is a patch.
Well I was sure I had added this code for a reason, but the tests seem to run without...
Just a comment: the test isn't ProcessPoolExecutor-specific, so it should really be in the generic tests.
New changeset 26389e9efa9c by Ross Lagerwall in branch '3.2':
Issue #12364: Fix a hang in concurrent.futures.ProcessPoolExecutor.
New changeset 25f879011102 by Ross Lagerwall in branch 'default':
Merge with 3.2 for #12364.
Thanks!
http://bugs.python.org/issue12364
This article presents a checksum routine for UDP/IP packets using 32-bit groupings.
See RFC 768 to read about the UDP protocol and the UDP checksum. In particular, you need to understand the pseudo header used for the UDP checksum. See RFC 1071 for a discourse on the theory of the internet checksum.
I use winpcap to monitor a UDP data stream and I needed a checksum routine, but all the examples I found were based on inefficient 16-bit groupings of the bytes. However, in RFC 1071 I found three key principles that allow for a more efficient process:
First, the size of the groupings doesn't matter if you fold the result back into a 16 bit word at the end. In C it is not easy to check for integer overflow, so you need an accumulator that can hold all of the overflows from all the summing. For 32 bit groupings, the accumulator needs to be 64 bits. At the end of the process you can fold the 64 bits into 16 bits to get a valid checksum result.
Second, the byte order doesn't matter until the end if you are generating a checksum to insert into a packet. If you are only checking the checksum at the receiving end, the result should be 0xFFFF, so in this case the byte order doesn't matter at all. Thus, you save a little overhead by not calling ntohs() for each grouping of bytes.
Finally, zeroes don't affect the checksum result, so padding a leftover byte with zeroes to form a long integer is ok.
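Putting the three principles together, the summing-and-folding core might look like the following minimal sketch. This is my own illustration of the RFC 1071 technique, not the article's UDPchecksum() routine, and it assumes the caller has already zero-padded the buffer to a multiple of 4 bytes:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum the buffer in 32-bit groups into a 64-bit accumulator, then
 * fold the carries back down to 16 bits (RFC 1071 style). The byte
 * order used inside each group does not matter when verifying a
 * received packet, because a correct packet folds to 0xFFFF either
 * way. Assumes len is a multiple of 4 (zero-pad the tail first). */
uint16_t fold32_checksum(const uint8_t *buf, size_t len)
{
    uint64_t sum = 0;

    for (size_t i = 0; i + 4 <= len; i += 4) {
        uint32_t group = (uint32_t)buf[i]
                       | ((uint32_t)buf[i + 1] << 8)
                       | ((uint32_t)buf[i + 2] << 16)
                       | ((uint32_t)buf[i + 3] << 24);
        sum += group; /* overflows accumulate in the upper 32 bits */
    }

    /* Fold the 64-bit sum down to 16 bits */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)sum;
}
```

Note that padding one to three trailing zero bytes into the last group leaves the result unchanged, which is exactly why the article can overwrite the bytes after the UDP payload with zeroes.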
To use the UDPchecksum() routine you can simply paste it into your winpcap project (or any project that has a packet capture tool) and rename the global pointers to match the names you choose. Winpcap fills a structure with the winpcap header information along with a buffer containing the captured packet. In my version these pointers were members of a class, but I removed that detail for this example. They must point to the winpcap entities that contain a header and a packet, respectively, before the call to UDPchecksum(). Of course, you could make the pointers private and pass them in the function call.
There are two cheap (efficient) tricks in this version:
First, the checksum pseudo header is created by mangling the IP header. The pseudo header consists of the IP Source Address, the IP Destination Address, a zero byte, the Protocol field, and the UDP Length:
These must be added to the checksum to get the correct result. Since the IP Source and Destination addresses along with the Protocol field are contiguous and immediately precede the UDP section, we already have a good start. I simply copied the UDP Length field to the IP Checksum field of the IP header, and I zeroed out the IP Time to Live byte. Now the pseudo header is complete and contiguous with the UDP header. The checksum process starts at byte 22 of the packet, which is the beginning of the pseudo header I just assembled.
Second, I solved the problem of what to do when the data length is not a multiple of 4 bytes by replacing the four bytes immediately following the UDP packet with zeroes. These will either be the ethernet Frame Check Sequence or ethernet filler. Again, I don't mind mangling the leftover ethernet fields to simplify processing the UDP packet. Now, if a single byte remains to be read at the end of a UDP packet, three zeroes are read with it to form a 32-bit grouping, and there is no effect on the result.
The return value from my version is true if the checksum result is correct.
#define UDP_LEN 38 // position of UDP Header UDP Length field
#define IP_LIVE 22 // position of IP Header Time to Live field
#define IP_CHECKSUM 24 // position of IP Header IP Checksum field
// pointer to the header filled by winpcap for each captured packet
struct pcap_pkthdr *pHeader;
// pointer to the buffer containing the packet captured by winpcap
u_char *pPkt_data;
////////////////////////////////////////////////////////////////////////////
// UDPchecksum routine
// expects pHeader to point to the winpcap pcap_pkthdr structure
// expects pPkt_data to point to the captured packet buffer
// returns true if the UDP portion of the packet has a valid checksum
bool UDPchecksum()
{
// a 64 bit accumulator
_int64 qwSum = 0;
int nSize;
// a byte pointer to allow pointing anywhere in the buffer
// pHeader->caplen doesn't include the ethernet
// FCS and filler (4+ bytes at end)
unsigned char *pB = (unsigned char *)(pPkt_data + pHeader->caplen);
// pDW is a pointer to dwords for grouping bytes
// initialize to point to the FCS (or filler if packet length < 46)
// after UDP
ULONG *pDW = (ULONG *)(pB);
// The 4+ bytes received after UDP are ethernet FCS and
// filler - ok to mangle.
// Put 0's after UDP for groupings of leftover bytes < 4 at
// the end of UDP
// (adding one, two, or three bytes == 0 into checksum will not
// affect the result)
*pDW = 0;
// construct the pseudo header starting at 22
// (IP Source & Dest Addresses are already in place)
*(pPkt_data + IP_LIVE) = 0;
*(pPkt_data + IP_CHECKSUM + 1) = *(pPkt_data + UDP_LEN + 1);
*(pPkt_data + IP_CHECKSUM) = *(pPkt_data + UDP_LEN);
// point pDW to beginning of pseudo header
pB = (pPkt_data + 22);
pDW = (ULONG *)(pB);
// set the size of bytes to inspect to the size of the UDP portion
// plus the pseudo header starting at the IP header 'Time to Live' byte.
nSize = pHeader->caplen - IP_LIVE;
while( nSize > 0 ) {
// Add DWORDs into the accumulator
// This loop should be 'unfolded' to increase speed
qwSum += (ULONG) *pDW++;
nSize -= 4;
}
// Fold 64-bit sum to 16 bits
while (qwSum >> 16)
qwSum = (qwSum & 0xffff) + (qwSum >> 16);
// a correct checksum result is 0xFFFF (at the receiving end)
return (qwSum == 0xFFFF);
}
No changes.
http://www.codeproject.com/Articles/5543/32-bit-UDP-Checksum?fid=29434&df=90&mpp=25&sort=Position&spc=Relaxed&tid=681104&noise=3&prof=False&view=Quick
2006 Edition
OfficeReady
Business Plan User Guide and Business Plan eBook
A Guide to preparing a winning business plan
Written by Michael P. Griffin and TemplateZone
Table of Contents

Introduction
OfficeReady Browser
    What is a template?
    Why are OfficeReady templates so useful?
Getting Started using OfficeReady Software
    Locating the right template
    Saving your work
    Editing an OfficeReady document
Frequently Asked Questions
    Is the OfficeReady browser the only access to the templates?
    When I save a template file as a document, does it appear in the browser?
    Where can I find the OfficeReady templates on my hard drive?
    How can I get more templates for use with OfficeReady?
    Are my existing templates overwritten when I install a template pack?
    How can I learn more about OfficeReady?
    Where can I find more resources?
Writing your Business Plan
    Preface
    About the author
Chapter One – You need a Business Plan
Chapter Two – Before you get started
Chapter Three – Business Plan Writing Tips
    Write, edit, and re-write
    Clarity, conciseness, and coherence
    Writing for your reader
    Writing for management, employees, and prospective employees
    Writing for Investors
    Professional writing quality
    Making it look good
    Points of style checklist
    The cover page
Chapter Four – Business Plans for Various Types of Businesses
    Technology
    Manufacturing
    Service
    Retail
    New Venture
Chapter Five – Executive Summary
    Type of business
    Business summary
    Business summary checklist
    Management overview
    Products and services
    Requesting funds
    Exit strategy
    One Page Executive Summary
    Executive summary checklist
Chapter Six – Company Background
    Business history
    Growth and financial objectives
    Legal structure and ownership
    Company location and facilities
    Location and facilities checklist for start-up businesses
    Plans for financing
    Preparing a financing proposal checklist
Chapter Seven – Organization
    Team member information checklist
    Other valuable employees
    Principal stockholders
Chapter Eight – Market Analysis
    Industry analysis
    Industry description checklist
    Using a table to summarize the industry
    Target market
    Target market checklist
    Customer profile
    Market segmentation
    Projected market growth and market share objectives
Chapter Nine – Products or Services
    Product and service uniqueness
    Product and service description checklist
    Future products and services
    Competitive comparisons
    Research and development
Chapter Ten – Manufacturing or Production Plan
    Production and capacity
    Production issues
Chapter Eleven – Marketing Plan
    Creating and maintaining customers
    Pricing strategy
    Product positioning
    Sales and distribution plan
    Sales force plan checklist
    Promotional strategy
Chapter Twelve – Financial Plan and Analysis
    Financial plan as a modeling tool
    Financial plan as a control tool
    Some basic steps
    Income Statements
    Cash Budgets
    Balance Sheet
    Financial Ratios
    Sales Forecast
    Estimating Start-up Capital
    Start-up Capitalization
    Break-even Analysis
    More on Pro Forma Financial Statements
Chapter Thirteen – Other Related Communications
    Executive Summary
    The PowerPoint Show
    Elevator Pitch
Chapter Fourteen – Financing: Seed Capital
    Bootstrapping
    Angel Financing
    Small Business Innovation Research Program
    Small Business Administration
    Commercial Banks
    Strategic Alliances
Chapter Fourteen – Venture Capital
    VC Oversight
    VC Exit Strategy
    VC Investment Criteria
Chapter Fifteen – Business Plan Appendices
    Business plan appendix checklist
Appendix A – Information Resources
Appendix B – The Standard Industrial Classification System
Appendix C – Excel Workbooks
    Break-Even Analysis (Break-Even Analysis.xls)
    Sales Forecast (Sales Forecast.xls)
    Personnel Plan (Personnel Plan.xls)
    Balance Sheet (Balance Sheet.xls)
    Integrated Financials (Integrated Financials.xls)
    Start-Up Capital and Capitalization (Start-Up Capital and Capitalization.xls)
    Projected Personal Financial Condition (Personal Balance Sheet.xls)
Glossary
The Templates
    A few helpful hints for editing your business plan documents
    Inserting Excel workbooks and sheets
    About Workbooks macros
A List of Templates
    Business plan materials (Core folder)
    Reference and other documents (Extra folder)
    Sample business plans (Sample Plans folder)

Introduction

The OfficeReady Business Plans product consists of a guide and templates to help you write a well-developed business plan. This guide will help you become familiar with the product. Read this guide, experiment with the templates and features, and you will see how OfficeReady Business Plan will become a valuable tool for your business.
OfficeReady Browser

The OfficeReady browser is an enhancement to the Microsoft Office Suite. It provides an easy way to find and use professionally designed templates for documents, workbooks, and PowerPoint presentations. OfficeReady templates provide a helpful jumpstart for your office documents. They cover many of your daily tasks as you manage your home or business, and they help maintain a familiar look for all of your documents. Use the OfficeReady browser and templates to improve the quality of your projects and to save time and money.

The creators of OfficeReady are constantly producing new templates and instructional files to help you with all aspects of business and home office projects. These new materials, called template packs, cover specific areas of home office and small business. They are available at the TemplateZone website, which you can access using the OfficeReady browser. Click Get More Templates on the OfficeReady browser toolbar (this is explained further in the Frequently Asked Questions section below). This will open a website where you will find template packs for purchase.

What is a template?

A template is a preformatted reusable file that provides a framework for creating a document or a project. You use a template every time you use the Microsoft Office Suite. In fact, whenever you open a new blank document in Word, you are really opening the default template for a blank document. Many templates also provide guidelines or instructions for the content. Templates open like any other file except that they are always new documents that require a new file name to save them. This way, the original template remains available for the next time. The following subsections explain the types of templates and files you will encounter as you use OfficeReady software.
OfficeReady templates for Word (.dot)

A Word template is a file with a .dot extension. It provides overall formatting and appearance for your new Word document. It also provides placeholders for your text and images to maintain the appearance and flow of the document.

OfficeReady templates for Excel (.xlt)

An Excel template is a file with a .xlt extension. It provides the layout, as well as styles and formulas, for the type of workbook or data entry form you choose.

OfficeReady templates for PowerPoint (.pot)

A PowerPoint template is a file with a .pot extension.

Presentation (.pot): A presentation template is a set of slides with suggested content. A layout is a slide with preset placeholders for objects such as titles, subtitles, bulleted lists, text, data charts, and organization charts. Each layout has a page format and a color scheme that is consistent with the entire presentation.

Master: A master is a set of layouts and backgrounds for a presentation.

Other helpful files (.pdf)

Some OfficeReady template packs include other helpful files to help you with your work in the area of the template pack. Some of these files might be posters or sample files in PDF format. You can find and view these files while using the OfficeReady browser, as explained later in this guide.

Why are OfficeReady templates so useful?

OfficeReady templates eliminate many of the difficult, repetitive, and tedious tasks involved with starting a new document. They save time by giving you layouts and formats for various types of documents. You do not have to reinvent the wheel each time you start a new project. All you have to do is add text and images, and then you can customize as much as you wish. Many of the templates follow themes that match templates in other categories. You can use matching templates for a professional touch and familiar style.
OfficeReady templates also save you money. You could pay hundreds of dollars for graphic designers to create your themes, documents, and forms, or you can use OfficeReady to do it yourself. The developers of OfficeReady have researched the best look for the type of document you want and developed the best layouts and formats in the industry.

Getting Started using OfficeReady Software

This section explains how to get started using OfficeReady software. It describes how to find and open the templates using the OfficeReady browser.

Locating the right template

Open the OfficeReady browser by double-clicking the OfficeReady icon on your desktop (Figure 1) or by clicking the OfficeReady icon in your Windows Start menu (Figure 2).

Figure 1. The OfficeReady icon as it appears on the desktop.

Figure 2. The OfficeReady icon as it appears in the Windows Start menu.
The OfficeReady browser will appear (Figure 3).

Figure 3. The OfficeReady browser showing the toolbar at the top, the folder list to the left, the thumbnail preview pane to the upper right, and the larger image preview pane to the lower right.
Click a category in the left window to view small preview images (thumbnails) of the templates available in that category. Click a thumbnail in the preview window to the right to view a larger preview image of the template. Double-click a thumbnail or a large preview image of a template to open the template as a new document in its Office Suite application (Word, Excel, or PowerPoint). Once the template is open, you should save it with a new filename as instructed below.

Saving your work

Save your new document as soon as possible. You will be required to give the document a new filename, which should be descriptive and unique to the file you are creating. Be sure to choose a location for the file that you can easily find later. Then you can edit your document as many times as you want, and the original template file will remain unchanged.

Editing an OfficeReady document

Your new OfficeReady document has placeholders for text and graphics. All you do is replace the text and graphics in each section with your own text. Some of the placeholder text contains instructions and tips about the right kind of information to add. Read and follow the instructions before you make changes.

Editing text

You can edit the text in an OfficeReady document just like you can in any other Office document, but keep these things in mind as you get started:

- Be careful to avoid altering the formatting. The placeholder text in an OfficeReady document is formatted to fit correctly in the layout. You can select all or any part of the text in a section and delete it to make room for your text. This is where you need to be careful about what you select. Make sure you select only the text you want to replace. You can use the reveal codes function (Alt+F3) to see the formatting marks that you want to avoid deleting.
- You can double-click a word to select it, click three times to select a sentence, or click four times to select a paragraph. You can also click and drag the cursor over an area to select all of the text (and graphics) in that area.
- Once you select text, you can either delete it and insert your text or just start typing to replace it. You can also use the cut and paste functions to replace the text.
- If you make a mistake, you can always use the Undo button in the application toolbar or press Ctrl+Z to go back to where you were.
Here are a few tips to help you get your document finished faster:

- Get to know the Office application you are using. Read the help topics (F1). You might be surprised at the number of helpful hints these help files provide.
- Use the Find and Replace function (Ctrl+F) to replace repetitive text like [Company Name]. Once you are comfortable with the process, you can use the Replace All option.
- Some of the text in a template might have arrows or brackets (<, >, [, ]) to let you know to replace it. Be sure to replace the arrows or brackets along with the rest of the text. Once you are finished with your text, use the Find and Replace function to find brackets or arrows ([, ], >, <) you might have missed.
- You might try using the Insert (Typeover) function to replace text as you type. This function deletes the original text as you type. Just press the Insert key to turn it on or off.
- Pay attention to the punctuation and capitalization in the placeholder text. It suggests the correct way to present your text.

Replacing photos

Many of the OfficeReady templates have placeholders for photos (Figure 4). You can delete a placeholder or replace it with a photo.

Figure 4. A photo placeholder from a template.

Here is how to replace it with a photo:

1. Delete the image placeholder you want to replace.
2. Find an image file, such as a .JPEG or a .GIF file, that you want for your document.
3. Click Insert on the toolbar at the top of the window. A menu will appear.
4. Click File in the menu. A dialog box will appear allowing you to browse for a file.
5. Click Insert at the bottom of the dialog box. Soon the image will appear somewhere on the document. Note: If a textbox was selected when you inserted the image, the image will stay inside the textbox.
6. Click the new image and drag it to the area where the placeholder was.
7. Resize the image to fit in the area. Here is how to resize an image:
   a. Click to select the image. Black square dots will appear around the image.
   b. Click a corner dot and drag it away from the image to make the image larger, or toward the image to make it smaller. Note: Dragging any other dot on the image will distort it.
8. Select the image and use the arrow keys to move it into position. You can hold the Shift key while using the arrow keys to move the image in smaller increments.

Now you can work on the rest of the document.

Updating table of contents

To update the table of contents, go to Tools | Reference | Generate, or press Ctrl+F9. A dialog box will appear. Check "Build hyperlinks" and then press OK. Your table of contents will automatically be updated.
Frequently Asked Questions

Read this section to avoid unnecessary delays as you start working with the OfficeReady browser and templates.

Where can I find the OfficeReady templates on my hard drive?
The OfficeReady templates are installed at the following location (unless you changed the location at the time of installation): C:\Program Files\OfficeReady Business Plan\Templates\

Is the OfficeReady browser the only access to the templates?
No. You can access the templates directly by navigating to their folders, but the OfficeReady browser makes it much easier to see the templates that are available and open them.

When I save a template file as a document, does it appear in the browser?
No. When you open a template, you are really starting a new working document. It is not a version of the template. The application will prompt you to give the document a new filename and select a location to save it in. The new filename should be descriptive so you can recognize it later. Choose a location that is easy to find, such as My Documents.

How can I get more templates for use with OfficeReady?
You can get more templates by installing template packs that are available by clicking Get More Templates on the OfficeReady browser toolbar. This opens a web page where you will find a wide variety of downloadable template packs, including free sample packs.

Are my existing templates overwritten when I install a template pack?
Usually, no. However, if you are installing a new version of a template pack that includes updated templates, a message will appear with an option to overwrite the old ones.
How can I learn more about OfficeReady?
You can access additional information about OfficeReady and how to use it by clicking Help on the OfficeReady browser toolbar. This will open a menu. Click Table of Contents for a list of topics. Here you will find additional Frequently Asked Questions and explanations for making your work with templates quick and easy.

Where can I find more resources?
Additional resources are available by clicking Tools in the OfficeReady browser toolbar. Click Word on the Web to open a website with Word help and support, including a Frequently Asked Questions page. You can also get more information and tools by clicking Get More Templates on the OfficeReady template browser toolbar, where you can access additional templates and template packs such as Home Essentials and Business Essentials. You will even find some free sample template packs.

Click View on the browser toolbar to customize OfficeReady. You can select options for listing templates as files or thumbnails and for displaying larger preview images of the templates. You can also drag and drop your favorite templates into the My Favorites folder in the OfficeReady browser.
About the author

Michael P. Griffin is a professor in the Accounting and Finance Department, Charlton College of Business, University of Massachusetts Dartmouth. He is the author of many books on business planning, accounting, and finance and has an active consulting practice. He has developed a variety of business software via his association with KMT Software, TemplateZone, and a host of business book publishers. Mr. Griffin has also held positions in the financial services field. He is a Certified Management Accountant, Certified Financial Manager, and a Chartered Financial Consultant. Mr. Griffin is a graduate of Providence College and has an MBA from Bryant University, Smithfield, Rhode Island.

Writing your Business Plan

Preface

Whether you are already in business, starting a business, or operating a non-profit or charitable organization, one key to success is planning. Starting with a good business plan can dramatically increase your odds of success. You will have a clear plan that shows how you thought through your business, set reasonable goals and objectives, and prioritized the most important and anticipated problems. Your business plan will also help you to obtain capital from lenders and venture capitalists, who usually require well-written and thorough business plans to consider investing. This guide and the accompanying templates will help you with some basic principles to make a great business plan that will serve as a blueprint throughout the life of your business.

One note about terminology: MS Office applications, including Excel and Word, use original terms for document types:

- Excel files are called workbooks
- Individual pages of workbooks are called worksheets
- Templates are Excel workbooks or Word files that have been pre-formatted and include formulas, text, and styles that help you quickly generate great looking and well organized documents
Chapter One – You need a Business Plan

Peter Drucker, the management guru, explained the central role of planning many years ago. Professor Drucker expressed it as follows: "Planning what is our business, planning what will it be, and planning what should it be have to be integrated... Everything that is 'planned' becomes immediate work and commitment".

Planning is generally recognized as the first of four basic and essential managerial tasks (planning, organizing, directing, and controlling). With books and related software and with business school courses and seminars that emphasize planning, it is certain that there is nearly universal agreement that planning is essential for business success. Planning is a good management practice, and companies that don't plan are not as effectively managed as they could be. (Although I know some successful entrepreneurs who brag that they won't be constrained by planning, or policies and procedures – for that matter!) So you must think about planning, let the plans swirl around in your mind, and then document that planning by writing your business plan.

Too often, entrepreneurs write business plans only to satisfy potential investors, but that is a mistake. There is more to developing business plans than trying to gain financing. Any business can benefit from a business plan. New ventures need planning to increase the odds of success. A friend (my former high school baseball coach) who retired recently is starting a summer camp for disadvantaged girls. His first step: write a business plan that stakeholders can embrace. His plan will be a critical guide for setting up operations, financing the venture, assembling a team, acquiring land, and marketing the camp.

Even if you are already established in business, a good business plan can help you chart a new course. It can help you think through what it will take to change course or add new product lines. The business plan embodies a set of assumptions about a perceived opportunity, how the opportunity will be pursued, and what to expect as things play out. It is a plan for success.

A well-developed business plan should accomplish the following:

- communicate purpose
- build commitment
- ensure consistent decisions
- promote effective and efficient allocation of resources
- help to establish performance criteria and benchmarks for success
- explain how strategies will be implemented
- detail how the investment will be harvested
- establish a business model

A business plan is an important link between strategy and implementation, and the critical elements of a business plan provide some structure that allows you to study, analyze, and "play with" your business concept. A business plan is a model of your venture, and as such it allows you to shape and reshape and to experiment with "what-if" scenarios. Preparing a good business plan is a learning process. It requires you to think about your business to:

- evaluate and analyze all aspects of your business
- think about the background of your management team
- think about markets and your customers
- examine your products, services, and marketing to meet the needs of your customers
- document production and operations management
- project your financial standing and cash needs

This guide is part of a package that includes professionally designed templates to help you with your business plan. The first few sections of this book provide tips on writing. Use this package to help you get started on the process of creating a clear and concise document that captures the attention of your readers and answers their questions. Use the Getting Started Checklist, and read Chapter Three - Business Plan Writing Tips before you begin.
Chapter Two – Before you get started

Don't let the idea of writing a business plan intimidate you. You may just need to know where to start. Take the time to learn the basics of a good business plan, and you will find that you can do it. Use this book to help you understand what a business plan is and how to start writing it. You should also familiarize yourself with the Business Plan Word Template and the Excel workbooks. Appendix C - The Excel Workbooks provides an explanation of each Excel workbook and how to use those workbooks.

Think about your areas of expertise, and start gathering information. Use information you already have about your company to get started:

- information you have only thought about or talked about
- memos
- letters
- emails
- accounting and finance reports
- marketing plans
- tax returns
- photographs

Organize, analyze, and summarize it for your business plan. You might want to make your business plan a collaborative effort. Call on your key managers and advisors for input. This will help with a better business plan, and it will bring your team together for a common vision and mission for your plan.

With your data sources in place, you can begin writing. Follow the business plan Word template included with this template pack. Work on areas you are comfortable with first to help build momentum. You might prefer to start with the detailed sections such as Organization, Market Analysis, or Financial Plan before you write the Executive Summary.
Chapter Three - Business Plan Writing Tips

As you begin writing, keep in mind that every business plan is unique, just as every business is unique. It must meet the needs of its readers. It should include your vision, your mission, and your dreams. It should communicate that you believe it is achievable and that your company will succeed. Remember, writing a good business plan is more of an art than a science. Find a style that works for you and your readers, and stick with it.

Also keep in mind that your business plan is a selling tool and a feasibility study. Your goal should be to leave your readers with very few questions. They should be able to make informed decisions about your business after reading your business plan. You can make your business plan like a résumé, where you have a master version that you edit for each employer. You can write a master business plan with all possible data you can think of, and be able to edit it for each type of reader (lender, investor, customer, the press).

Write, edit, and re-write. Plan to take the time and effort to prepare. Very few people can write a good business plan on the first try. It usually takes several drafts to get it right. Do your homework. Work on sections you know best, and delegate other sections. You should go through a circular process of writing, editing, and re-writing. Write the first draft knowing that better ways to put things will come later. Then put it aside for a while. Take a fresh look at it and make changes. You will see flaws that were not obvious before, and you will think of better ways to convey your plans. Rearrange and edit as you see fit. Print the second draft, and have someone who understands business read it. Get their input and revise your plan. Keep revising, editing, and re-writing until your business plan meets the needs of each reader. Keep at this process until you have it right. Your business plan should be the result of many hours of thinking, researching, analyzing, writing, editing, and re-writing. Remember, you only get one chance to make a good impression.
Clarity, conciseness, and coherence

As you revise your business plan, think of the three Cs: clarity, conciseness, and coherence. Keep it simple and straightforward. Think of your busiest reader, who just wants to get the information, make a decision, and move on. Your readers want to know what you have to say without having to think about it. They want to be able to go from section to section to get information. Try not to make them read the entire business plan to find out what they want to know. Good writing can make the difference to your readers, who will not take the time and effort to figure out what you are trying to say. They will put your business plan aside if the next one is easier to read and follow.

Weed out unnecessary redundancies, phrases, and clichés. For instance, you might think you have a creative sentence with lots of clarifying phrases:

We have an idea to earn a substantial profit in the burgeoning home entertainment industry by making it more affordable and more convenient for the average home entertainment buff to rent a wide variety of top quality box office movies at home on the Internet without driving anywhere.

But you can use fewer words to say more with better clarity and with more appeal:

Our plan is to provide a wide range of high quality movie rentals conveniently and at low cost online.

A coherent business plan flows well and follows good logic. Coherent writing leads the reader through your thoughts and ideas. As you revise your business plan, look at how your sentences fit together and flow to make complete, unified paragraphs. The sentences should follow and link together. Then look at how your paragraphs fit together and flow. Paragraphs should include clean transitions. Think of your business plan as a guided tour of your business model.
Writing for your reader

Analyzing your audience is critical to the success of your business plan. Think about your reader. Ask yourself what your reader wants to know. Anticipate your reader's questions and answer them. Use the following checklist to analyze your audience:
- What is the purpose of your business plan? (To get funding, to serve as an internal planning document, to serve as a blueprint for your business, to cover all of the above)
- How convincing is your business plan? (Would you invest?)
- How familiar are your readers with your company before and after they read your business plan?
- Who are the primary and secondary audiences? (The primary audience is the investor, lender, or stakeholder for whom you are writing the business plan. The secondary audience is other people, such as your company officers and employees, who will use it to learn of your business plans and goals.)
- What kinds of information do your readers need? (How much money will they make? How stable will it be? How much will it cost? How long will it take to break even? Why should they work with you? How will the competition react? How much competition is there? What are the legal ramifications of being a part of your business? How unique is your product or service? How difficult is it to do what you do? Who has power to make decisions?)
- What supplementary information, such as tables, worksheets, and charts, can you include to help your case?
- Are you overstating your case in such a way that the reader will become defensive?
- What information might the reader want in an appendix?
- How objective is your business plan? Will your readers become skeptical?
- What is your business? Your business plan should tell a compelling story of the value of your business through your mission statement, descriptions of operations, products, services, management team, and labor force. You have to sell your business to your potential investors. They have to believe that your potential customers want what you are offering. If you can't sell your business to your investors, how can you expect to sell your product or service to your customers?
- What is your exit strategy? When are you leaving the business? Are you looking to start the business and sell out? Do you plan to grow the company for a few years first?
- How many other investors and stakeholders are involved? Are you spreading the risk among a group of investors (bankers, angels (friends or relatives that invest only because they know you), partners, limited partners, or suppliers)?
Writing for management, employees, and prospective employees

A good business plan helps managers plan, organize, direct, and control. It triggers the management planning process. It helps managers to consider the vision and mission of the company along with goals and strategies to achieve the mission. A good business plan also helps management to organize resources and use them effectively. It works like a blueprint to help management get the job done and monitor (control) the progress of specific objectives.

If your primary reason for preparing a business plan is to have an internal process document, you can be more forthright about strengths and weaknesses. You are free to document weaknesses that need attention (areas to avoid in an external document). Your internal business plan might also include extended sections for more specific and tactical goals, objectives, and tasks. Your business plan can serve as an effective communication tool for your vision, mission, goals, strategies, and projected finances. You might even use it to introduce your business model to prospective employees.

Writing for Investors

Keep in mind that if your business plan is to be reviewed by potential investors, it will receive a great deal of scrutiny. Each reader of your plan will do so with a healthy dose of skepticism. The document must inspire confidence. Readers (investors) of business plans are looking for conservative assumptions and reliable information upon which to base their investment decisions. When writing for investors, be certain to:
- Provide a complete picture without overloading the reader with unnecessary information
- Provide key assumptions upon which the plan is based
- Identify critical success factors that need to be managed for the venture to be successful
- Detail the proof that you understand the technology, market, risks, and potential rewards of the venture
- Demonstrate that you can implement the plan and assemble the team, and that the team can function effectively and is committed to making the venture a success
When you write for investors, including venture capitalists, be deliberate in your style. In other words, let your writing show that you are purposeful and sure of yourself. At the same time, you need to view the business plan as a living document, one that will need to be updated often. You will run your company in a flexible and emergent style, keeping your eyes open for new opportunities, always shaping and reshaping your venture. However, your writing must make it seem as though you are following a clear path, with definite goals and objectives along the way; goals and objectives that you have deliberately set in your business plan.
Professional writing quality

Writing quality is imperative to a successful business plan. Your readers expect it to be your best work; organization, writing style, grammar, spelling, and format are critical in showing that you are capable of communicating in the business and financial world. Use the templates that accompany this guide for help with formatting and organization, but take care to prepare professional-quality content. Here are some tips for quality writing:
- Keep your writing simple. Avoid frustrating your readers by trying to make the ordinary seem extraordinary.
- Avoid using jargon unless you know it will help your readers to understand you. As a rule, jargon comes across as pompous, but when situations require it, use it correctly, and mean it. Never put jargon in quotation marks; this can come across as sarcastic. The use of jargon is very tempting, especially in a high tech venture.
- Avoid using sarcasm, humor, or profanity. Such language in a business plan is out of place at best, easily misunderstood, and insulting at worst.
- Avoid using passive sentences. Take credit for your work. For instance, make sure your sentences say that you accomplished something rather than that something was accomplished. Active sentences are shorter, clearer, and livelier.
- Use conservative business language that is free from errors. Always have someone else proofread for you. You will not be able to see some errors because your brain skips over certain words and phrases after you have read them a few times.
Making it look good

Appearance and presentation are important; they provide the first impression, and they make statements about you and your company. Remember that your business plan might be the reader's first exposure to your business. Use the formatting and page styles provided in the accompanying templates to help with a professional appearance, but don't be afraid to make changes if you can improve on the appearance.

Look at every page for good use of space. Avoid long paragraphs and crowded pages that can make a page look long and arduous. Well-formed pages are also easier to read. Use photographs, illustrations, graphs, tables, and charts to improve readability. Make them look easy to read, and make sure the numbers are easy to find. Use text between graphics to explain them. Tell your reader what they mean to your business. Your reader will appreciate your efforts in making your business plan pleasant to read.

Points of style checklist

- Use high quality paper.
- Use a good LaserJet or inkjet printer.
- Use professional binding.
- Keep the margins consistent.
- Use white space and bullet lists effectively.
- Balance each page between text and graphics.
- Make sure your charts, photographs, illustrations, and formatting look good and fit together.

The cover page

Your cover page should look professional. Remember that it is the first thing your readers will see. Include an appealing cover. It can include a logo, slogan, picture, or text to introduce your company. Include the business address, contact name, telephone numbers, fax number, website URL, and email address. Give your readers good expectations as to what is inside.
The following subsections list issues to consider as you write. your business is unique. Address obsolescence. Reassure your readers that you have it. Forecast and reconcile your quarterly shipping and inventory levels. Show how your management information systems will cope with the demands of a technology enterprise. n n Manufacturing If your business is in manufacturing. n 26 . Smart business people know that intellectual content is difficult to value and to protect. Note that some of the templates that accompany this guide are for specific types of businesses. Include who will perform the final testing and assembly. and tailor it to your business. Read all of them to be sure you cover everything that pertains to your unique company. n n n n Address your plans for investing in research and development.Chapter Four – Business Plans for Various Types of Businesses You might have trouble placing your company into a single category because. keep in mind the following: n Emphasize the high intellectual content of your company. explaining the nature of your business is important since your readers will have specific concerns. keep in mind the following: n n n Explain what you plan to outsource. Forecast your unit output capabilities on a quarterly basis. Show how your company will be able to survive rapid growth. Address your capability to manage the expected changes to your business structure that come with commercialization of innovations. Explain how you will manage it. Be sure to cover how you plan to deal with the possibility that R&D costs can exceed expected cash flows and revenues for years. Choose one that comes close. Explain your inventory policy including what you use as a basis for establishing inventory levels. however. Show how you see it as an opportunity rather than a threat. Address your plans to find and retain effective management and key personnel. after all. Technology If your business is in technology.
- Explain your outlook on raw materials: applicable financing, terms, personnel availability, advantages or disadvantages, and anticipated changes. List suppliers.
- For each product or product line, include sales price, direct and indirect costs, and variable and fixed costs to help determine profitability and the break-even point. Show how your pricing policy maximizes profit.
- Describe your labor needs. List each job category, its projected need, compensation and training requirements, expected turnover, and expected union involvement.
- Describe how your physical plant and equipment relate to the production schedule. For each location, list uses, layout, current value, advantages, disadvantages, and anticipated changes. Also include a separate list of idle facilities and plans for them.
- List opportunities to reduce costs, increase flexibility, or improve the company's production process.

Service

If your business is in service, keep in mind the following:

- Plot the correspondence between your capacity and your sales forecasts at least quarterly.
- List opportunities for increasing capacity (for instance, show how increasing hours of operation or employee skill levels can increase capacity).
- Show the costs of changing capacity temporarily (for instance, to cover seasonal demand fluctuations) or permanently.
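The relationship between sales price, variable costs, fixed costs, and the break-even point mentioned above can be sketched in a few lines. The figures below are invented for illustration and do not come from this guide:

```python
# Break-even point: fixed costs / (price - variable cost per unit).
# All figures here are hypothetical examples, not from the guide.

def break_even_units(fixed_costs: float, price: float, variable_cost: float) -> float:
    """Units that must be sold before profit turns positive."""
    contribution_margin = price - variable_cost
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_costs / contribution_margin

# Example: $60,000 in fixed costs, $25 sale price, $10 variable cost per unit.
units = break_even_units(60_000, 25.0, 10.0)
print(round(units))  # 4000 units to break even
```

Each unit sold contributes the price minus its variable cost toward covering fixed costs; once those are covered, further sales are profit.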
Retail

If your business is in retail, keep in mind the following:

- How does your location suit your overall strategy? For each location, note advantages or disadvantages, including traffic patterns, parking, floor space, character of locale, appearance, and cost.
- How well does your company's image support your business strategy? Make sure that location, appearance, manner of display, merchandise quality, service, and prices are consistent with your business strategy.
- Explain your pricing policies, credit policy, and discounts. Compare your policies with those of your competition, and keep them consistent with your business strategy.
- Explain your policy on customer service. What services are necessary to compete in your market? List the projected costs of customer service.
- Explain your approach to advertising. What form of advertising will you use? What are the expected costs and results? How does it compare with competition?
- Explain your inventory policy. What is your basis for establishing inventory levels? How will you approach buying to match strategy, demand, and cost? Address the qualities of your suppliers: list their names, locations, products, prices, credit terms, and anticipated changes.

New Venture

Although a new venture is not a separate category of business type, it deserves a special mention. Many business plans are prepared for new and early stage companies, and each presents some challenges. There are no historical numbers in place and no strategies to carry on in new ventures. Revenue projections cannot be based on prior experiences but must be derived from conjecture, forecasting models, comparisons to similar ventures, and from polling of experts (such as sales people, board members, etc.). Product development timelines, customer tastes, the size of the market, the subject venture's portion of the market, prices, and labor availability are all difficult to project. The credibility and accuracy of a new venture business plan is always in question. As a consequence, new venture business plans consume a greater investment in time and money than the plans of businesses with established track records. Critical assumptions must be carefully crafted. In any event, a business plan for a new venture must inspire confidence by revealing carefully thought out assumptions and projections. A clear, concise, and objective document allows an investor to perform due diligence.
Chapter Five – Executive Summary

The executive summary comes first, but you might want to write it last. It is one of the most important sections because it is the first, and maybe the only, part your readers will read. Its purpose is to tell your reader what is in your business plan and what you want from it. It is a complete but short overview of your business plan. The trick to writing a good executive summary is to make every word count, not only to inform but also to convince your reader to keep reading. Keep your executive summary brief but clear. Balance economy of words with enough content to get the message across. Once you get a first draft, go over it word by word, and consider the value of each word.

Type of business

The executive summary tells what business you are in. It should include the SIC code (explained in the Standard Industrial Classification System Explained section later) if you know it. It should indicate whether you are starting up or operating an existing business. It explains the company's level of development: start-up, expansion, new division of a larger business, or launching a new product or service, if applicable. It also indicates the organization: sole proprietorship, partnership, or corporation.

Business summary

The executive summary includes a brief description of your company. It briefly describes the company's historical performance.

Business Summary Example:

Outsiders Inc., founded in 1978 by a group of investors in Anytown, Oregon, specialized in selection of competitively priced backpacking, mountaineering, and hiking clothing and equipment. Today it has 7 locations in Oregon and Washington. The company has been profitable for most of its existence, and is now moving toward an Internet-only presence called Outsiders.com. Outsiders.com will be a prominent internet site for rock climbing, backpacking, and mountaineering enthusiasts. It will provide commerce, community, and content as the first integrated outdoor website on the internet. Customers will be able to plan trips, purchase gear, chat with other enthusiasts, get important tips on specific locations, reserve permits, and more.
Business summary checklist

Use the following checklist as you write your business summary. Be sure to include as many of these elements as possible.

- Names:
  - Legal or Corporate name
  - Doing Business As (DBA) name
  - Brand names
  - Subsidiary company names
- Location:
  - Company headquarters
  - Operations
  - Branches
  - Website URL
- Legal Forms:
  - Legal form of business (proprietorship, partnership, corporation), including the state in which it is incorporated, if applicable
- Lifecycle:
  - Level of development (start-up, existing business, expanding)
  - Brief history, including when the company was founded and by whom
- Management:
  - Name of CEO or president
  - Names of other key personnel

Financial Objectives

The executive summary includes clearly stated sales and profitability objectives and explanations of your plans to meet them. You might state your goals in terms of cash flow, growth rates of sales, profits, and assets, using financial ratios, or by using relationships of cost and expenses. You should project financial objectives for five years. Here is an example:

Outsiders.com will earn $23 million in revenue by the fifth year based on the following assumptions:

- 5% increase in visitors per month
- 6 page views per new visitor
- 5% conversion rate of new visitors
- $80 per transaction for new visitors
- 25% repeat visitors
- 5% conversion rate of repeat visitors
- $130 per transaction for repeat visitors
- 15 page views per repeat visitor
- $90 ad revenue per 1000 page views

Management overview

One of your most important assets can be the people who run your business. Reassure your readers that your management team is qualified to reach your financial goals. Be sure to describe why your management stacks the odds of success in your favor. Include a summary of your management expertise in your executive summary. Here is an example:

The Best Golf Clubs, Inc. (TBGC) has been successful over the past 5 years mainly due to its stable team of seasoned professional managers. Mary Smith, the CEO, who had been working in senior management of a Fortune 500 company, took over operating control after Jack decided to devote his efforts to new product development. Earlier, Mary had helped establish the patented TBGC manufacturing process. The CFO and the Vice President of Manufacturing bring track records that include operating businesses that went public. Now, with better structuring, the organization is accustomed to meeting monthly commitments and staying within budgets while maintaining the informal culture that makes TBGC, Inc. a great place to work.
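Assumption-driven projections like the Outsiders.com example in the Financial Objectives section are easy to sanity-check with a small model. The sketch below is a deliberately simplified, hypothetical illustration of how such assumptions compound: the starting traffic figure is invented, and this is not the calculation behind the $23 million figure in the example.

```python
# Hypothetical traffic-driven revenue model, loosely following the
# Outsiders.com assumptions in the text. The starting visitor count
# is invented for illustration; this is not the guide's actual model.

GROWTH = 0.05                     # 5% increase in visitors per month
NEW_CONV = 0.05                   # conversion rate of new visitors
NEW_TICKET = 80.0                 # $ per transaction, new visitors
REPEAT_SHARE = 0.25               # 25% of visitors are repeat visitors
REPEAT_CONV = 0.05                # conversion rate of repeat visitors
REPEAT_TICKET = 130.0             # $ per transaction, repeat visitors
NEW_VIEWS, REPEAT_VIEWS = 6, 15   # page views per visitor
AD_CPM = 90.0                     # $ ad revenue per 1000 page views

def yearly_revenue(start_visitors: float, years: int = 5) -> list[float]:
    """Total revenue per year, compounding monthly visitor growth."""
    visitors, out = start_visitors, []
    for _ in range(years):
        total = 0.0
        for _ in range(12):
            repeat = visitors * REPEAT_SHARE
            new = visitors - repeat
            sales = (new * NEW_CONV * NEW_TICKET
                     + repeat * REPEAT_CONV * REPEAT_TICKET)
            views = new * NEW_VIEWS + repeat * REPEAT_VIEWS
            total += sales + views / 1000 * AD_CPM
            visitors *= 1 + GROWTH
        out.append(total)
    return out

for year, rev in enumerate(yearly_revenue(100_000), start=1):
    print(f"Year {year}: ${rev:,.0f}")
```

Writing the assumptions as named constants makes each one easy for a reader (or investor) to challenge individually, which is exactly the due diligence the guide describes.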
Products and services

This is where you get to sell your product to your reader. Briefly describe your product or service, and state how it is unique, or explain why it can succeed if it is not unique. Use your best sales techniques, but keep it clear and concise. Here is an example:

The Nature & Bird Center occupies a unique niche as a nature center with a retail store. The franchisees and their retail staff are experts on bird feeding and nature enthusiasts trained to teach the hobby to customers. The Nature & Bird shopping experience includes an environment where customers can see and learn to identify birds from their local areas and get the best products, service, and advice in the industry.

Requesting funds

Your Executive Summary should include a formal request for funding if that is what you want from your reader. Be specific in your request with a total figure, and explain why you need that amount. Be sure to indicate whether the money you are raising is in the form of equity or debt. Remember, some investment companies avoid making equity investments while others specialize in them. For debt financing, list all available collateral. For equity financing, list the percentages of ownership you are prepared to offer.

Here are some additional questions to consider for this section:

- How much money do you need, and when do you need it?
- How much money are you asking for from the investors or lenders?
- What are your terms and security agreements? What are you offering in interest and repayment schedule?
- What percentage of the company are you offering to equity investors, if any?
- What is your proposed return on investment, and how do you plan to repay the investor (buy-back, public offering, and sale)?

Here is an example:

Our company is seeking $750,000 for proposed construction and $250,000 for equipment to complete the $1 million expansion project. Half of this money is to be raised via bank loans with equipment and accounts receivable as collateral. The other half is to be raised via equity from stock offered to senior management, key employees, and local investors.
Exit strategy

Your executive summary should also include an exit strategy detailing how you plan to leave the business: by selling it or by merging with another company. If you are borrowing money, describe how and when you plan to pay it off. Most investors want to realize a gain over a medium term horizon, and most should be willing to wait three years for payoff. Be careful not to underestimate the time it takes to sell or have the company go public. Here is an example:

We plan to sell stock to our employees at a substantial discount for incentives and to provide for growth capital. This will allow management to take the company public within 5 years. We are forecasting our total corporate value at $50 million by 2006. This translates to better than 10x return on this stock.

One Page Executive Summary

A one page Executive Summary is often requested by investors (Venture Capital and Angel Investors) before they will read or even accept a full-blown Executive Summary. OfficeReady Business Plans includes a one page template for an Executive Summary. It includes the following:

- Company contact information
- Management
- Industry
- Number of employees
- Banking relationships
- Auditor
- Lawyer
- Financing requested
- Use of funds
- Investors
- Business model
- Products/services
- Technologies
- Target markets
- Sales channels
- Competition
- Outlook
- Financial projections

Executive summary checklist

After writing the executive summary, review it for the following characteristics:

- Conciseness: Make sure it takes one or two minutes to tell someone about your business. Your reader should know what you do and what you want without the details.
- Clarity: Make sure your text is easy to understand. Leave out extra words and complicated phrases.
- Completeness: Make sure you show the whole picture of your company.
- Highlights: Make sure you emphasize the company purpose, products, services, markets, and financial strengths.
- Relevance: Make sure you cover what is important to the reader.
- Excitement: Make sure your reader can feel your excitement for success. Make the reader want to dig into the details.
Chapter Six – Company Background

The company background follows the executive summary to describe what you do, how you do it, where the company has been, where it is going, and who the key players are. You can also include your mission statement to define your company's purpose and to help guide your direction moving forward.

Business history

Start your company background with a brief history. Tell when your product or service was introduced and touch on your key milestones or accomplishments. Keep this section brief. It is only for the interest of your reader, who is really only concerned with the future. Here is a list for your business history:

- the foundation date of your company and the names of the founders
- a list and summary of major milestones, stages, or accomplishments
- significant changes or challenges the company has faced
- other developmental indicators such as sales levels, market share, net worth, assets, and company valuation

Growth and financial objectives

Start your Growth and Objectives section with a short list of your goals and plans for the next year or so. Your list will vary according to your mission statement and where you are in your company's life cycle. For example, a start-up company could set a short-term goal of profitability by a certain time, or you might make it a long-term goal to build a competitive business model by the end of the year. State your growth and financial objectives as clearly as possible. Explain how you plan to meet your financial objectives, and use a Financial Highlights table. Start with a five-year projected sales chart followed by a sales table by product line.
Here is an example: We base our growth objectives on past experience and industry forecasts. Here is a summary of sales growth objectives and assumptions: While changing our direction toward virtual retailing, we expect the company to regain profitability within two to three years. This is due to expected costs for capital improvement to accommodate the e-commerce infrastructure and increased marketing costs associated with the change to a storefront website. Legal structure and ownership Use the Legal Structure and Ownership section to describe the legal organization of your company, whether it is a sole proprietorship, a partnership, or a corporation. You should have chosen your legal structure in the early stages of your company. Explain the reasons for your choice of structure. Here are some examples: Six stockholders own the company as a limited liability corporation in Oregon. This provides us significant tax advantages, and it carries the transferability restriction limitation. Thus, our ownership interests can be traded only with certain restrictions. Legal Structure and Ownership example 2: TBGC, Inc. is incorporated in South Carolina. Mark Edwards and his family are the sole shareholders with 50,000 shares while our charter authorizes 100,000 shares of Class A common stock. Legal Structure and Ownership example 3: Wild Bird Group is a subchapter S corporation. Marty and Nancy Bruneau own 100% of the stock. Company location and facilities Your business location (or locations) can affect profitability, especially for retail stores and restaurants. It is important to treat this subject correctly to make your case for your readers. Be sure to cover how access, exposure, and traffic help you to attract customers, or explain why it is not important to your particular business. For instance, location is not as important to a company that does business only via the internet. 
Identify your company address, describe the type of facility it is, and explain all relevant zoning issues. You should also state whether you own the location, rent it, or lease it, and describe how you pay for it. Be sure to explain the costs and address how they affect profitability. Include photographs if they help to tell your story.
You should also address future space requirements and show how they might require you to expand or move. Here is an example: The Wild Bird Group operates from the home of Marty and Nancy Bruneau. Once the Bruneaus purchase the FTB, Inc. franchise, the franchiser will assist them in selecting a retail site. The franchiser considers demographic factors, traffic patterns, parking facilities, neighborhood character, proximity to other businesses, purchase price or rental costs, size and appearance factors, and other characteristics to help choose the right location. They also provide specifications and advice for layout, decor, equipment, furnishings, and signs. Company Location and Facilities example 2: Hand Me Down Sports occupies 2,500 square feet at the Crossroads Plaza in Anytown, Indiana, a city of about 25,000. Anytown has three private prep schools with active sports programs that show interest in Hand Me Down Sports. The Crossroads Plaza is an attractive, well-exposed strip mall with 2000 cars per hour passing by. It maintains a high customer volume at 250 per hour with other popular businesses such as Subway, Tritown Cleaners, Tile World, Bagels West, and Mary Finley’s Nails Express. It also has a McDonald’s in the parking lot. Hand Me Down Sports has three cash registers on separate counters, wall-to-wall carpeting, a variety of merchandizing fixtures, ample shelving, and professionally designed lighting. The storefront includes two large windows with theft-proof glass for seasonal displays. The building includes a 200-square foot storeroom for receiving and storage with a receiving dock at the rear. The store and its contents are also protected by an alarm system monitored by the Anytown police. The store will be open from 10:00 am to 9:00 pm Monday through Friday, 9:00 am to 6:00 pm on Saturday, and 12:00 noon to 5:00 pm on Sunday. The store is closed on major holidays such as Christmas Day and Independence Day. 
Location and facilities checklist for start-up businesses

Here is a list of questions to answer in the Location and Facilities section if you are starting a new business:

- What type of location do you need?
- What kind of space do you need?
- Why is the location valuable?
- Why is the building valuable?
- How accessible is the location?
Plans for financing

An investor usually wants to see a defined financing proposal that shows capital needs and proposed equity or debt agreements. Be sure to do the following:

- list collateral or personal guarantees you are offering
- refer to start-up costs as outlined in the appendix, if applicable
- explain how you plan to use the proceeds from the financing

A good way to prepare this section is to divide it into steps:

Preparing a financing proposal checklist

- State the specific amount of funding required for your needs. Be careful to use exact figures. Ranges can confuse your readers, who might wonder if your business plan lacks careful analysis.
- Identify how you want to set up the financing: debt versus equity. Break it down in dollars and percentages.
- Explain how you plan to use the money. Potential investors and lenders will want to know exactly where their money is going. Provide descriptions, pictures, and other details on assets you are buying. You can use an appendix for exhaustive details, but refer to them here.
- Provide a maturity schedule of debt financing. State how much is short-term or long-term. Provide a proposed amortization schedule in the appendix and refer to it here.
- List all proposed collateral, and describe it in detail, including estimated market value.
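An amortization schedule of the kind called for above can be generated with the standard level-payment annuity formula. The sketch below is a generic illustration with invented loan terms; it is not a schedule from this guide:

```python
# Level-payment amortization: payment = P*r / (1 - (1+r)**-n),
# where r is the monthly rate and n the number of monthly payments.
# The loan terms in the example are invented for illustration.

def amortization_schedule(principal: float, annual_rate: float, months: int):
    """Return (month, payment, interest, principal paid, balance) rows."""
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -months)
    balance, rows = principal, []
    for month in range(1, months + 1):
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        rows.append((month, payment, interest, principal_paid, max(balance, 0.0)))
    return rows

# Example: a $500,000 loan at 8% over 10 years.
schedule = amortization_schedule(500_000, 0.08, 120)
print(f"Monthly payment: ${schedule[0][1]:,.2f}")
print(f"Balance after final payment: ${schedule[-1][4]:,.2f}")
```

Early rows are mostly interest and later rows mostly principal, which is why lenders often ask to see the full schedule rather than just the payment amount.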
Chapter Seven – Organization

Use the Organization section to introduce your entire team to your readers, who believe that the quality of your management team is a key to your success. Quality in management means experience. Think of this section as a résumé. Explain the work experience, relevant education, and training of each member of your senior management team. Emphasize their achievements and show how their skills will give the company a competitive advantage. Be sure to show how your team will make this venture a success. The Organization section should also include a list of important positions yet to be filled. Explain your plans for filling them, and describe your process for recruiting.

Here is an example:

Ralph Adams is Chairman of the Board and founder of R2X Golf, with experience as a manufacturing engineer with a maker of professional quality golf clubs and accessories. He also worked as golf pro in the Anytown area for 5 years. He holds a degree from Kentucky State University.

Mary Adams is Vice Chairman and the daughter of Ralph Adams. She worked as Director of Manufacturing at R2X Golf from 1991 until 1993, when she accepted a position as a division manager for W.R. Grace & Company. In 1996, she returned to R2X Golf. She holds an MBA from Stanford University and an undergraduate degree from Harvard.

Fred Phillips is Chief Financial Officer. Prior to joining R2X Golf, Fred was treasurer of Grinnell Fire Protection. He holds a master's degree in finance from Boston College.

Hank Franklin is Vice President of Manufacturing. He worked at Southern Manufacturing for 15 years in a variety of production and operations management positions. He is ISO 9000 certified and lectures at colleges, universities, and executive development programs on the topics of continuous improvement, quality control, and activity-based management.

Calvin Horton, CPA, is the Controller. He worked for Andersen consulting for 12 years before joining R2X Golf, and has been with R2X Golf for two years.
Alice Smith is Vice President of Marketing. Prior to joining R2X Golf, she was Manager of Sales for a golf ball and golf accessories manufacturer. She holds a B.S. degree in Marketing from the University of North Carolina.

Team member information checklist

The following information should be included in your Organization section:

- A summary of résumés or brief biographical sketches of senior management and owners, including past duties, skills, and experience. Focus on important accomplishments. You should include a résumé for each member of the team in an appendix.
- A list of functional responsibilities. Explain the responsibilities of each manager. List education, skills, and experience. Include anticipated changes.
- A list of the board of directors, including a brief description of each member's affiliations and experience. If the company is incorporated, identify the Chairman of the Board and all the top officers, including the President, the Chief Executive Officer, and other top managers. If the board has not yet been formed (as in a new venture), discuss the plans for formation.
- A list of advisory bodies or significant committees, such as an audit committee.
- Your management philosophy.

Use an organizational chart to show your reader the structure of the company. You can insert an organizational chart into a Word document by using the Insert > Graphics > Draw Picture command.

Other valuable employees

You might include valuable employees who are not members of the senior management team, such as your creative spark, your technical developer, or your top sales closer. Address your program for retaining them.
Principal stockholders

The Organization section is a good place to list your principal stockholders. Explain who owns the company, what their contributions are, what types of investments they have made, and how new investors might affect them. Potential investors might like to know who has already invested. Include a chart that depicts the stockholders before and after you make the proposed changes to stock ownership.
Chapter Eight – Market Analysis

The market analysis is one of the most important parts of your business plan. You need a clear sense of the market before you can show your ability to compete. This section can start with a summary, or it can go right into the industry analysis. A summary is often useful for readers who are new to the market. It should give a brief overview of the characteristics of the market, its growth prospects, your market analysis, and your target customers. If the business plan is primarily for internal planning, then a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis could serve as the summary. It provides a look at internal factors to gauge the company's ability to compete in the outside world, and it will get your management to think of creative solutions.

Industry analysis

The challenge for the Industry Analysis section is to write it for all of your readers to understand. You might even have to define the term industry, which is often misunderstood in business plans. Here, an industry consists of companies that produce or supply products or services that are related, and the environments in which they work.

You probably already have some industry data that you can gather and organize. For example, if you subscribe to trade and industry journals, review them for qualitative information, trends, and growth expectations for the industry. If you have some industry articles or relevant websites, use them.

Here are some tips on gathering industry data:

- Contact an appropriate trade association. Visit a library and review Gale's Encyclopedia of Associations published by Gale Research.
- Attend industry trade shows and obtain publications, white papers, and special reports that provide fresh industry data and analysis.
- Review Federal Government sources such as the following:
  - American Statistics Index (ASI), Congressional Information Service, a comprehensive index of U.S. government statistical publications available at many libraries
  - Census of Population: General Population Characteristics: US Summary, U.S. Department of Commerce, Bureau of Census, with demographic data organized by Metropolitan Statistical Area (MSA)
  - County Business Patterns, U.S. Department of Commerce, Bureau of Census. This contains data on employment, payroll, and the number of business establishments in the industry. This publication might also be useful for other planning such as your customer profile, sales forecasting, and promotional strategies.
  - SIC Manual, U.S. Government Printing Office. This is an index of three and four-digit classification codes for U.S. industries.
  - Industrial Outlook, U.S. Department of Commerce. This contains data and explanations of historical trends and future growth prospects for U.S. industries.
- Review Predicast Forecast, Predicasts, Inc. This publication provides an index of articles from trade association periodicals, financial publications, newspapers, and special reports often rich with industry and market analysis.
- Review the following publications:
  - Census of American Business, Dunn & Bradstreet. This publication provides information on businesses by type of business, sales volume, employee size, and location.
  - Guide to Consumer Markets, The Conference Board. This publication provides demographic and economic data by Standard Metropolitan Statistical Area (SMSA).
- Search for industry-related articles on the internet and at the library. Check out Hoover's Online (www.hoovers.com). Hoover's Industry Sectors link includes an Industries channel, which consists of 28 sectors that encompass more than 300 industries. This resource has industry descriptions, snapshots, analyses, news, and company lists. Each company in this database is assigned an industry code based on its primary source of revenue.
- Visit your local Small Business Development Center (SBDC), and ask for help (a list of SBDC centers is in the Information Resources section).

Once you have gathered your industry data, you can start writing a detailed description of your industry.

Industry description checklist

Use this checklist to make sure you include all relevant information:

- Identify the SIC for your company (see the appendix).
- Identify the primary products and services provided by your industry.
- Identify the major companies and competitors in your industry.
- Discuss the history of your industry so your readers can see the cycles, such as seasonality.
- Describe the life cycle of your industry.
- Describe the major economic, competitive, and demographic factors that drive your industry.
- Present to your readers the current state of the industry and the prospects for the future. Convey an understanding of how the status of your industry affects your potential marketability. If the industry is growing, identify factors such as an improved economy, recent technological advances, research breakthroughs, or improved regulations. Show how industry growth or decline can affect the viability of your business.
- Describe the barriers to entry into your industry.
- Describe the barriers to growth within your industry.
- Describe the outlook for sales, revenue growth, and profits for your industry.
- Describe the barriers to exiting from your industry.
- Explain how government regulation affects your industry.
- Explain the roles of innovation and technological change within your industry.
- Explain economic developments that might affect your industry, such as diminishing inventory levels indicated by falling days of inventory, the overall level of business activity, and other financial patterns.
- Discuss financial characteristics of your industry, including normal markup, standard credit terms, and standard financial ratios (industry averages).

Using a table to summarize the industry

You can use a table to help summarize your industry. The example below summarizes recent and future growth in an industry:

Industry Indicators   | 2 Years Ago | Last Year | This Year | Next Year | Next 5 Years (Average)
Total Revenue         |             |           |           |           |
Unit Volume           |             |           |           |           |
Total Employment      |             |           |           |           |
Industry Growth Rate  |             |           |           |           |
GDP Growth Rate       |             |           |           |           |

Target market

No company can appeal to everyone, and most focus on target markets, which are subsets of the overall markets. Define your target market using factors such as population and geographic region. Write a brief summary of your target customers by demographics or by attributes.

Target market checklist

Here are some questions you should consider when describing your target market:

- How large is your potential market now? How large has it been? How large can it become?
- What are the geographic boundaries of your market?
- How many people are using products like yours?
- How sensitive is your market to price?
- What is the potential sales volume in your market?
- What are the trends in your market?
- How mature is your market (growing, stable, shrinking)?
Customer profile

After you describe your target market, you should describe your potential customers. Answer these questions about the people who will buy (or are buying) your product or service:

- Who will use it?
- How will they use it?
- Why will they buy it?

One way to write the customer profile is to organize it by demographics, lifestyles, or interests:

Demographics:
- Age range
- Income range
- Sex
- Occupation
- Marital status
- Family size
- Ethnic group
- Level of education
- Home ownership

Lifestyles:
- Technology-oriented
- Seeking status or prestige
- Trendsetter
- Conservative
- Liberal
- Family-oriented
- Thrill-seeking
- Sports-oriented

Interests:
- Family or life stage
- Hobbies
- Sports
- Publication subscriptions
- Organizations and affiliations
- Political affiliations

If the target market is business, then your customer profile should include the following:

- Industry
- Sector
- Years in business
- Company revenues
- Number of employees
- Major competitors and participants

If you are selling to businesses, use their SIC codes (see the appendix).

Unless you have a new market niche, you will have competitors, so you should also summarize the competitors and participants in your industry and describe them. Discuss the current and expected competitive market. Here are some questions to answer:

- How big are your competitors (compared to your company)? What are their sales volumes? How many employees do they have?
- How profitable are your competitors?
- What competitive advantages do you have?
- What are your competitors' relative strengths and weaknesses?
- What types of new competitors do you anticipate?

Rate your competitors' market shares, their financial strengths, and how their products compare to yours. Explain how easy it is to start in this business, and list the barriers to entry into your market. Explain how competitive the market is; if it is not particularly competitive, explain why.

Market segmentation

The Market Segmentation section is a subset of your target market section. It defines the part of your target market (such as region, age, income, or profession) you are after. Are your customers married, single, or retired? Where do they live? Are they businesses? How does your location affect your target segment? Keep in mind the general notion of market segmentation: you can alter your marketing mix (product, price, place, and promotion) to meet the needs of your potential customers.

You might also include your current and projected sales and profits by market segment. This will help you to demonstrate your role in the market.
Projected market growth and market share objectives

List your market forecasts, and briefly comment on the social, geographic, and demographic trends that support them. Identify total market sales and how much your business plans to capture. Use reports on your industry and other public or private economic data for further support.

You can find information on the size of your market from chambers of commerce, marketing consultants, or trade associations, and much of this information is available online. If you are projecting sales forecasts using the market size and market share method, support your projections with the data from the workbook. For example, if the total market sales are $100 million and you project $10 million in sales, your expected market share is ten percent.
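Since the market size and share method is just a ratio, it can be sketched in a few lines of Python. The dollar figures below are the example above, not data from any real market:

```python
def expected_market_share(projected_sales, total_market_sales):
    """Expected market share: your projected sales as a fraction of total market sales."""
    return projected_sales / total_market_sales

# The example above: $10 million projected sales in a $100 million total market.
share = expected_market_share(10_000_000, 100_000_000)
print(f"Expected market share: {share:.0%}")  # prints "Expected market share: 10%"
```

The same function can be run against several market-size scenarios to show readers how sensitive your share objective is to the overall market estimate.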
Chapter Nine - Products or Services

This chapter is for you to briefly and succinctly describe exactly what you offer your potential customers. This is like your executive summary in that every word counts. As a rule, you get 20 seconds or less to describe each product or service, including the benefits.

Briefly describe the features and benefits of your products or services. Include physical attributes, such as estimates of weight, material descriptions, and other details. Include attributes of your services, such as logical or conceptual descriptions. Be sure to include pictures or other graphics. Describe how your customers view your products or services; try to use your customers' perspective on features, benefits, and product or service life cycles.

Product and service uniqueness

Most new businesses sell products and services that are already available somewhere else, with something new or unique. What makes you different, and why will customers purchase your product or service? You can differentiate your product by the way you deliver it, price it, or package it. This difference must provide a tangible benefit to the buyer. Is there a quantifiable benefit associated with your product? Describe the unique features of your product or service, and if you have a proprietary advantage, explain it.

You might consider using a table like the one below to summarize the advantages of your product:

                 | Your Product | Competitor 1 | Competitor 2
Product Quality  |              |              |
Price            |              |              |
Image            |              |              |
Target User      |              |              |
Distribution     |              |              |
Warranty         |              |              |
Promotions       |              |              |
Product and service description checklist

Use this checklist for each product and service that you sell:

- What does it accomplish? What need does it fulfill?
- What type of person uses it?
- How have neutral parties reviewed it?
- What are the unique design features?
- What are the proprietary features (trademarks, copyrights)?
- What is its average life span?
- How up-to-date is it? Is it state-of-the-art? Does it have potential obsolescence?
- What is the warranty? Describe the warranty: what does it expressly cover, what does it generally cover, and what does it omit?
- How does it compare by price in the industry?
- What is included with it (major components only, not too detailed)?
- What service do you offer with it?

Future products and services

Your readers will want to know your plans for future products and services, since business is always dynamic. Business wants and needs change, and technology causes products and services to become obsolete. You have to show that you embrace change and look for growth via product refinements, product extensions, and new products and services. One way to organize this section is to begin with existing products and services and then describe your proposed products and services. What are some of the coming attractions? Try to preview your product development plans without giving away too much.

Competitive comparisons

Compare your company with your competitors on important attributes (quality, price, and image). Here is the place where you want to document your venture's sustainable competitive advantage. A sustainable competitive advantage is an advantage that one firm has relative to competing firms. It usually originates in a core competency - those strengths you have that you utilize to be in business. To be really effective, the sustainable advantage must be:

- Valuable
- Rare
- Inimitable - defying imitation, matchless

Profitability is affected by SCA, and certainly long-term profitability is not possible without competitive advantages. Companies succeed in establishing sustainable competitive advantage (SCA) by combining skills and resources that are unique and enduring. An SCA can come from the development of extreme customer intimacy (knowing your customer's needs better than anyone else ever could). Other clues as to your sustainable competitive advantage include:

- Customer focus and customer lifetime value
- Superior product quality
- Extensive distribution contracts
- Accumulated brand equity and positive company reputation
- Low cost production techniques
- Intellectual property, such as patents and copyrights
- Government protected markets (monopoly)
- Superior employees and management team

Capital providers and smart potential team members will be looking for evidence of sustainable competitive advantages throughout your business plan, but it is often in this section that they really focus on that concept.

One approach is to write a narrative about each product and service that you sell and how it stands up against the competition. You might include a table:

                 | Competitor 1 | Competitor 2 | Competitor 3
Product Quality  |              |              |
Price            |              |              |
Image            |              |              |
Target User      |              |              |
Distribution     |              |              |
Warranty         |              |              |
Promotions       |              |              |
Finally, you want your business plan to reveal that you have a type of alertness and agility that helps you continue to fine-tune your SCA. Show your readers how you will remain competitive by eliminating obsolescence or heavy competition in your market.

Patents, trademarks, copyrights, and other forms of intellectual property protection

Describe your brand equity:

- Brand name awareness. Explain how your brands are easily recognizable and why customers will choose them.
- Brand loyalty. Describe how loyalty reduces product vulnerability, increases visibility, and encourages intermediaries to carry your brand.
- Brand quality. Describe how your customers connect a certain level of quality with your brands.
- Brand associations. Explain how customers connect personality and lifestyle with your brands.

Here are some other questions to consider when writing this section:

- Does the company have full assignment of all the patents, trademarks, and copyrights?
- Does it have licensing agreements with other companies? Be sure to detail the agreements.

Research and development

Your research and development process is also an important part of your business plan. Detail the steps involved in your research and development process, including extensions of products, improvements, and new designs. Discuss your R&D priorities and how they fit your mission statement. Here are some points to include:

- R&D work-to-date, including prototypes, lab results, and product testing results
- Plans for future R&D, and your plans for testing
- Comparisons to your competitors' R&D. How much are they spending compared to you?
- How much you spend on R&D and how it might change
Chapter Ten - Manufacturing or Production Plan

Production and capacity

The Production and Capacity section describes your facilities and capabilities to produce what you sell. Describe how you plan to increase production and capacity to meet your projected growth plans.

Production issues

Describe how you produce your products. If possible, provide a step-by-step process in a list or flowchart. Let your readers know whether you are adding significant value to the manufacturing process or assembling parts produced by others, and provide a rationale for producing the product rather than outsourcing it. What are your relationships with subcontractors and parts suppliers? What are the potential supply problems? Investors will want to know all significant risks with producing your product.

Describe your operational plan detailing how your business develops and creates products, and pay close attention to operational issues. Your operational plan should address how you will use the capital you are requesting for the following operational issues:

Manufacturing process and capacity:

- What are your proprietary techniques for your product or its development? How will this affect production and the ability of competition to copy your products?
- What are your processes for developing products?
- How much work are you outsourcing?
- What is the maximum output capacity of your manufacturing process and facilities?
- How will your manufacturing process accommodate growth in sales volume?
- What kinds of security measures do you have?
- How can you anticipate defect rates in your manufacturing process?
- How much overhead will you have?
Materials and equipment:

- What machinery do you need?
- What raw materials do you need?
- How much do equipment and materials cost?
- What is the availability of raw materials? How stable are prices?

Labor requirements:

- What human resources do you need?
- How much will labor cost?
- How will you distribute work?
- How much can you automate?

Suppliers:

- Who are your suppliers?
- What is your business relationship with suppliers?

Product support services:

- Who provides maintenance for your products?
- How much does maintenance cost?
- Will you offer support services or printed operational manuals?

Outside factors:

- What federal, state, or local regulations will impact your work?
- Will technological advancements affect your development or operations?

Consider these questions on facilities requirements:

- How much office, manufacturing, warehouse, retail, or parking space do you require?
- What are the zoning restrictions for your location?
- How does your location serve your target market?
- Is it better to rent or buy your location, and why?

Outsourcing:

- What other businesses provide products or services for you? Discuss the price, quality, expertise, and availability of the products and services you outsource.

Quality Control:

- Since quality is critical to your success, reassure your readers that you cover it well. Briefly describe your quality control procedures and the expertise of your QC personnel. List the quality standards you use, and cite your certifications.
Labor Force:

- What is the percentage of labor content in each product produced?
- How do you plan to hire and maintain a loyal workforce?
- What is the availability of skilled personnel at competitive labor rates?
Chapter Eleven - Marketing Plan

Your readers will often start reading at your marketing plan, since marketing is one of the most difficult parts of operating a business. It shows how you will fit into your industry and how you will find your customers.

Creating and maintaining customers

Creating and maintaining customers is always the main goal of a business. One way to start thinking about addressing this issue in your business plan is to think about your product through the eyes of your target customers. Figure out how they would find you. Then expand on it, and fill in the details. Introduce your mix of marketing plans here as you have tailored it to your industry and to your customers.

Pricing strategy

Your pricing strategy should demonstrate how your product fits into the market with your competition. Convenience stores, for instance, take an increased markup to compensate for smaller inventories and higher costs per square foot. You should explain your pricing rationale and philosophy, and you should consider your cost structure and how your pricing strategy supports it. Create an integrated approach to your pricing. Above all, your pricing strategy should show how it supports your profitability.

Here are some questions to consider:

- What are the objectives of your pricing?
- How does your pricing strategy increase market share or maximize profits?
- What is the basis for your pricing policy (cost plus markup, for instance)?
- How does your pricing policy differ from that of your competitors?
- Is your product or service price-sensitive?
- How does your pricing policy give you an advantage over your competition?
- How does seasonality affect your pricing?
- How do your product promotions affect your average sales prices?

Product positioning

Your product positioning affects your success. You should identify your product positioning in terms of the following:
Keep in mind that your location might be a factor in pricing. Show how your pricing policies affect your target market; your readers will be curious about your approach. Your business plan should include an analysis of your product as it relates to competitive products.
- By product differentiation
- By benefit
- By type of user
- By comparing to competitors

Sales and distribution plan

Your sales and distribution strategy should identify how you plan to sell and distribute your products or services. You have to demonstrate that you can deliver them economically, efficiently, and effectively.

Your marketing plan should outline your sales force, including the number of people, where they are, how they are paid, and how they are trained. It should also discuss your distribution channels. For example, you might distribute directly to end users, or you might use middlemen, wholesalers, or distributors. Explain how each channel will work and describe the advantages of your plan. Remember to include the channels for compensation, and include your method of charging your customers (cash or credit).

Sales force plan checklist

- Describe your inside and outside sales forces. Include the number of people working in your sales force, and list their sales techniques, education, and skills. Also describe how they are paid.
- Describe the organization of your sales forces. Explain how it is organized and how it is compensated. List supervisors and their experience and skills.
- List productivity incentives such as quotas and bonuses.
- Explain sales projection data, such as percentages of cold calls that result in appointments and percentages of appointments that are closed.
Promotional strategy

Your promotional strategy section should sell your readers as much as it sells your products. Show that your promotional strategy is effective, and make sure it sounds plausible and profitable. Answer the following questions:

- Who are you targeting?
- What is your message?
- When and where do you place your message?

Explain how you will communicate with your target market:

- Advertising
- Public Relations
- Publicity
- Internet
- Networking

You should include a description of your marketing campaign. Use a worksheet to address and plan for all of your ideas in this area, and provide a summary list of the promotional activities in which you will engage during the upcoming year. Set up three columns, and label them as follows:

Promotional Vehicle | Frequency of Use During the Year | Budgeted Annual Cost
Chapter Twelve - Financial Plan and Analysis

The financial plan reveals the amount of financing (internal and external) required to meet your operating and marketing goals. It presents your start-up capital requirements, and it includes comprehensive data to project your financial performance. Your financial information must be accurate and comprehensive because it is the basis for your business plan. Your readers will use this information to determine your potential return on investment, and they can decide if your funding requests are reasonable and serviceable. Don't risk losing your credibility.

Here are some tips to help you prepare your financial plan:

- Keep it consistent with your business plan.
- Integrate goals and objectives into the forecasts.
- Use financial forecasts to demonstrate to your readers that you have thought through the financial implications of your company's growth plans.
- Demonstrate whether your strategy is financially feasible.
- Keep the figures at a high level. Keep detailed figures available for discussion, but leave them out of your business plan to avoid overwhelming your readers.

Make sure your projections answer the following questions:

- How will the company perform financially (profit and loss projections)?
- What are your cash flow projections (will your business remain solvent)?
- Are you creating wealth (what is the return on investment)?

Your financial plan should include at least the following:

- Income statements for 24 months and 5 years
- Cash budgets for 24 months and 5 years
- Balance sheets for 5 years
- Financial ratios

These statements are included in the Integrated Financials template. In addition, you may wish to include other workbooks to help potential investors understand how your business will generate profit and how much capital you will need:

- Detailed sales forecasts
- Start-up capital required and sources
- Statements on how the capital will be spent
- Break-even point

Financial plan as a modeling tool

As you develop your financial plan, you can use it for modeling a variety of scenarios. You will be able to answer questions like "What if we increase the advertising budget by 15% and sales increase by 20%? What impact will that have on cash flow and net income?" Use your financial plan to support your business plan as well as for contingency planning.

Financial plan as a control tool

Your business plan can help managers in the area of control. It can help with company budgets and with tracking financial progress. As your business grows, you should revisit your initial financial model to monitor your progress.

Some basic steps

Here are some basic steps to help you complete your financial plan:

- Decide on assumptions. To prepare pro forma financials and other projections, you will need to make assumptions. Does it make sense to project your accounts receivable based on some average collection period? Should you project sales as some percentage of an overall market (market share)? What rates should you assume for income taxes, sales returns, and allowances? Do you want to project balance sheet items as a percentage of sales or something else?

- Gather past performance data and analyze it. If your business is established, you should have at least three years of financial data to include and analyze. What does the data tell you about the future? What assumptions can you extract from it? You may find that some past financial ratios have been stable, and you can project that they will remain stable through your planning period. Perform time series analysis and cross sectional analysis of the past data to see what assumptions you can support. Time series analysis involves comparing your ratios over time. For example, if you compare the current ratios (current assets/current liabilities) from this year to the current ratios of last year and the year before, you may see a trend in liquidity. Cross sectional analysis is comparing your company's ratios to those of the industry. Data on financial ratios is available from a variety of sources, such as the internet or the library. Here are two useful publications: Annual Statement Studies, published by Robert Morris and Associates, and The Almanac of Business and Industrial Financial Ratios, published by Prentice Hall.

- Prepare pro forma financials and other projections. Use the Integrated Financials workbooks included with the templates that accompany this e-book.
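To make the time series idea concrete, here is a short sketch that computes the current ratio for three years and reports the liquidity trend. The balance sheet figures are invented for illustration, not taken from any workbook:

```python
def current_ratio(current_assets, current_liabilities):
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

# Hypothetical year-end figures: (current assets, current liabilities).
years = {
    "two years ago": (120_000, 80_000),
    "last year":     (150_000, 90_000),
    "this year":     (200_000, 100_000),
}

ratios = {year: current_ratio(assets, liabilities)
          for year, (assets, liabilities) in years.items()}
for year, ratio in ratios.items():
    print(f"{year}: current ratio {ratio:.2f}")

# A rising current ratio over the series suggests improving liquidity.
trend = "improving" if ratios["this year"] > ratios["two years ago"] else "weakening"
print(f"Liquidity trend: {trend}")
```

For cross sectional analysis, you would compare the same ratios against industry averages from a source such as Annual Statement Studies rather than against your own prior years.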
This workbook contains worksheets that are generally applicable to all business plans. It includes the following sheets:

- 24-Month Income Statements
- 5-Year Income Statements
- 24-Month Sales Forecast
- 5-Year Sales Forecast
- 24-Month Cash Budgets
- 5-Year Cash Budgets
- 5-Year Balance Sheets
- Financial Ratios

If you need a more detailed sales forecast analysis, prepare the Sales Forecast workbook. It gives you space to prepare forecasts for up to 20 products or services.

- Start-Up Capital. If applicable (for example, if your business plan is for a new business), use this template to estimate your Start-Up Capital and Capitalization.
- Break-Even Analysis. Use this template to demonstrate the fundamental economics of your business venture.

Income Statements

The Income Statement sheets allow you to prepare projected monthly income statements for 24 months and annually for a five-year period. An income statement summarizes your revenues and expenses. Projections show how much you believe you will sell and what costs will be incurred to achieve those sales.

A common method of forecasting operating expenses is by percentage of sales. For example, if sales were $1,000,000 and selling expenses were $150,000, then selling expenses as a percentage of sales would be 15%. You could use this figure to predict the selling expenses at 15% of the projected sales.

Cash Budgets

The Cash Budget section presents one of the most important elements of your business plan projections: how your operations generate cash. Begin by completing the 24-month and then the five-year cash budgets.

- Start by entering the beginning cash balance from ongoing business operations.
- The data that needs to be entered on the year 1 cash budget includes projected capital expenditures, changes in other assets, short-term loans, long-term liabilities, changes in other liabilities, and capital stock issues.
- Once you complete the first 24 months, move to the five-year cash budget and complete it.
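The percentage-of-sales method described above is easy to mechanize. This sketch uses the chapter's historical figures ($1,000,000 of sales, $150,000 of selling expenses); the projected sales number is a made-up input for illustration:

```python
def percent_of_sales(expense, sales):
    """Historical expense expressed as a fraction of sales."""
    return expense / sales

def project_expense(projected_sales, expense_ratio):
    """Forecast an operating expense as a percentage of projected sales."""
    return projected_sales * expense_ratio

ratio = percent_of_sales(150_000, 1_000_000)   # 0.15, i.e. 15% of sales
forecast = project_expense(1_200_000, ratio)   # hypothetical projected sales
print(f"Selling expenses ratio: {ratio:.0%}")
print(f"Projected selling expenses: ${forecast:,.0f}")
```

The same two functions can be reused for any expense line you choose to project as a percentage of sales, which keeps the assumption explicit in your model.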
Balance Sheet

A balance sheet shows what you own (assets), what you owe (liabilities), and the equity of the owners. The balance sheet is structured by what is called the balance sheet formula: Assets = Liabilities + Equity. The balance sheets should always be in equilibrium (Assets = Liabilities + Equity).

- The cash balance is determined by the cash budget sheets. Similarly, all Retained Earnings figures after the first year are calculated from prior sheets. All other lines are entered manually.
- Because the balance sheet reveals financial position, including projected balance sheets in your business plan is critical.
- It is important that your projections be practical and believable. Be sure to carefully consider the natural relationships that exist between sales, inventory, accounts receivable, accounts payable, etc. Otherwise, you are risking the credibility and integrity of your business plan.

Financial Ratios

The last sheet in the Integrated Financials workbooks is Financial Ratios. Here are the four categories of financial ratios:

- Liquidity ratios - These ratios help determine a company's ability to pay bills. The liquidity ratios include the current ratio and the acid-test ratio. The current ratio is calculated by dividing current assets by current liabilities. The acid-test ratio is calculated with the following formula: (current assets minus inventory) divided by current liabilities.
- Leverage ratios - These ratios indicate the degree to which a company is financed with debt. Leverage ratios include the debt ratio, the debt-to-equity ratio, and times interest earned. The debt ratio is calculated by dividing total liabilities by total assets. The debt-to-equity ratio is calculated by dividing total liabilities by total equity. The times interest earned ratio is net income before taxes and interest divided by interest.
- Efficiency ratios - These ratios indicate how well management is using resources. The efficiency ratios include inventory turnover and total asset turnover. Inventory turnover is calculated by dividing cost of sales by inventory. One caution: if a company does not sell merchandise or produce products, the inventory turnover is irrelevant.
- Profitability ratios - These ratios indicate the types of returns or yields generated for the owners of the company. They include gross margin percentage, return on assets, and return on equity. The gross margin percentage is the gross margin divided by net sales. Return on assets is net income divided by total assets. Return on equity is calculated by dividing net income by total equity.

After you enter your financial projections, review the ratios in the Financial Ratios sheet to determine if they are consistent with industry averages. If a ratio varies greatly from an industry average, be able to explain why. If several ratios vary greatly from industry averages, revisit your projections; they may not be believable.
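The ratio formulas in this section can be collected into one small calculator. The balance sheet and income statement amounts below are placeholders chosen for illustration, not industry data:

```python
# Hypothetical projected figures for one year.
current_assets = 250_000
inventory = 100_000
current_liabilities = 125_000
total_assets = 600_000
total_liabilities = 360_000
total_equity = total_assets - total_liabilities   # balance sheet formula
net_sales = 900_000
cost_of_sales = 540_000
net_income = 72_000
interest = 12_000
pretax_income = 96_000

ratios = {
    # Liquidity
    "current ratio": current_assets / current_liabilities,
    "acid-test ratio": (current_assets - inventory) / current_liabilities,
    # Leverage
    "debt ratio": total_liabilities / total_assets,
    "debt-to-equity": total_liabilities / total_equity,
    "times interest earned": (pretax_income + interest) / interest,
    # Efficiency
    "inventory turnover": cost_of_sales / inventory,
    # Profitability
    "gross margin %": (net_sales - cost_of_sales) / net_sales,
    "return on assets": net_income / total_assets,
    "return on equity": net_income / total_equity,
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```

Printing the same dictionary for each projected year, next to the industry averages you collect, makes the consistency check described above mechanical.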
The best advice is to be flexible, but continuously update the forecast.

Sales Forecast

The sheets in this workbook give you the flexibility to forecast 20 products or services across 24 months and five years. Projecting sales is difficult, particularly for a new business, and changing demographics or competitive factors can affect actual sales. Use the workbook carefully to ensure that you are projecting accurately, and note dramatic changes that will affect your sales forecast.

The first step in preparing a sales forecast is to select one of many possible methods. Three common methods are Simple, Market Size and Share, and Percentage Growth. Here are a few suggestions for sales forecasting source data:

- Product vendors, suppliers, and wholesalers, who can provide projected demand for the product
- Trade publications and trade associations
- The U.S. Bureau of the Census
- Your own historical sales data compared to data from product vendors and trade associations

Estimating Start-up Capital

Accurately estimating start-up costs is critical for launching a new business or a new venture, and undercapitalization is a common cause of failure. Think carefully and realistically about all the costs involved for the first six to twelve months; otherwise, you could miss hidden costs, such as the property owner asking for the first six months of rent in advance. If you are selling inventory, remember the cost of stocking it. The Integrated Financials template includes a start-up capital sheet to help you project your start-up costs.

Here's how you complete this sheet. Your first step is to think about the multiples to use to estimate the start-up capital for monthly expense contingencies. These multiples are cash flow contingencies or safety cushion factors; use them to provide contingency padding. Scroll down to the Start-Up Capital Assumptions table, and for each monthly expense item, enter the number of months you would like to include in the start-up capital. This multiple increases the amounts in the Cash Needed to Start column of the Monthly Costs table and represents a realistic total for start-up capital. For example, if you are projecting $1,500 a month for rent and you want three months of rent in start-up costs, multiply your projected monthly rent expense by 3; you would then include $4,500 of rent expense ($1,500 x 3) as part of your start-up costs.

In the Monthly Costs section, enter the expenses for each item. In the One-Time Costs section, enter the expenses for each item in the Cash Needed to Start column. The Cash Needed to Start and % of Total columns are calculated automatically.
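The multiply-by-a-contingency-factor approach above can be sketched as follows. The rent figures match the $1,500 x 3 example; the other expense items and multiples are hypothetical:

```python
# Monthly costs: item -> (monthly amount, contingency multiple in months).
monthly_costs = {
    "rent":      (1_500, 3),   # the example above: $1,500 x 3 = $4,500
    "payroll":   (8_000, 2),   # hypothetical
    "utilities": (400, 3),     # hypothetical
}

# One-time costs are entered directly, with no multiple.
one_time_costs = {"equipment": 12_000, "deposits": 2_500}  # hypothetical

cash_needed = {item: amount * months
               for item, (amount, months) in monthly_costs.items()}
total = sum(cash_needed.values()) + sum(one_time_costs.values())

print(f"Rent in start-up costs: ${cash_needed['rent']:,}")
print(f"Total estimated start-up capital: ${total:,}")
```

Raising the multiples is a quick way to test how much safety cushion your capitalization plan can afford.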
More on Pro Forma Financial Statements

Pro forma is a Latin term meaning "as a matter of form". Preparing financial reports in a pro forma manner means to present financial projections. Businesses use pro forma statements for decision-making in planning and control, and pro forma financials are also prepared in larger corporations for external reporting to owners, investors, and creditors. The exercise of preparing a pro forma income statement is key in developing a business plan.

Start-up Capitalization

Once you have completed the start-up capital sheet, you can complete the next sheet: Start-up Capitalization. Start-up capitalization includes the sources and amounts of equity and debt you want, and the Total Estimated Start-Up Capital is used in the Start-Up Capitalization worksheet. Use the Start-Up Capitalization sheet in the template set to identify where you are expecting to get the funds. In the Owners' Investments section, enter the name of each owner and the amount of capital they are contributing. In the Bank Loans and Other Liabilities section, enter the name of each loan or capital source and the amount of capital they are providing. If there is a capitalization deficit at the bottom of the report, more capital is needed to fund your business.

Break-even Analysis

The break-even point for a business is achieved the moment the sales volume covers all costs (both fixed and variable). Fixed costs are constant costs; they do not change in total with changes in sales or production volume. Variable costs change in total with changes in sales or production volume. If you have only one product or service, it is easy to calculate the break-even point: first enter the fixed costs, and then enter the variable costs. After you enter the selling price per unit, the template calculates the number of units needed to break even. Unfortunately, the break-even point is difficult to calculate when you have multiple products and services, typical of most businesses.

Lenders, investors, and other providers of capital look at the break-even point as an indicator of risk. The higher the break-even point, the greater the operating risks. If the break-even point is relatively high, it may be unrealistic to assume that the company will meet its sales and profit objectives.
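For a single product or service, the break-even computation the text describes reduces to a few lines. The price and cost figures below are hypothetical:

```python
import math

def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units needed so that sales revenue covers all fixed and variable costs."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    # Round up: you cannot sell a fraction of a unit.
    return math.ceil(fixed_costs / contribution_margin)

# Hypothetical inputs: $60,000 fixed costs, $25 price, $10 variable cost per unit.
units = break_even_units(60_000, 25, 10)
print(f"Break-even volume: {units:,} units")  # 60,000 / (25 - 10) = 4,000 units
```

Dividing fixed costs by the contribution margin (price minus variable cost per unit) is the standard single-product formula; with multiple products you would need an assumed sales mix, which is why the text calls that case difficult.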
Both the American Institute of Certified Public Accountants (AICPA) and the Securities and Exchange Commission (SEC) require standard formats for businesses in constructing and presenting pro forma statements.

Pro forma statements can be the final output of a budget system, used as the basis of comparison and analysis to provide management, investment analysts, and credit officers with a feel for the particular nature of a business's financial structure under various conditions. A company uses pro forma statements in the process of business planning and control. Because pro forma statements are presented in a standardized, columnar format, management employs them to compare and contrast alternative business plans. By arranging the data for the operating and financial statements side by side, management analyzes the projected results of competing plans in order to decide which best serves the interests of the business. They can also persuade lenders and investors to provide financing for a start-up firm. As a vital part of the planning process, pro forma financials are both a quantitative model of the business and a means of assuring some control over the operations. In some cases, pro forma financials help minimize the risks of launching and operating a new business.

Angel investors and venture capitalists put a great deal of emphasis (they call it due diligence) on pro forma statements. Capital providers attempt to discover the assumptions upon which the pro forma statements are based.

As you prepare the projected financials of your business plan, you need to follow some steps:

1. Develop the various sales and budget (revenue and expense) projections.
2. Assemble the results in profit and loss projections.
3. Translate income statement projections into cash budgets.
4. Prepare the balance sheets.
5. Perform ratio analysis to compare projections against each other and against those of similar companies.
6. Review proposed decisions in marketing, production, research and development, etc.
7. Assess their impact on profitability and liquidity.

State Your Assumptions

To build your numbers, you need to make some initial assumptions. Be sure to base your assumptions upon objective and reliable information. Your assumptions should be incorporated into formulas and clearly presented for all to see. In that way, you leave no doubt about how the numbers are derived.
You need to do your homework in order to create a reasonable and logical projection of your small business's profits and financial needs for the first year and beyond. transparency should be the goal. You don’t want anyone who reviews your pro forma financials and reads the financial sections of your plan to be confused about the assumptions. Nothing should be hidden. Because pro forma statements are presented in a standardized. These assumptions need to be integrated into your financials. Clearly state the assumptions upon which your financials are based. 6. 4. you need to make some initial assumptions. In that way. As a vital part of the planning process. pro forma financials are both a quantitative model of the business and a means of assuring some control over the operations.
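The break-even calculation mentioned above is simple arithmetic: break-even units equal fixed costs divided by the contribution margin per unit (price minus variable cost). A minimal sketch follows; all dollar figures are made-up examples for illustration, not values from any actual template.

```python
# Hypothetical illustration of the break-even calculation.
# break-even units = fixed costs / (unit price - unit variable cost)

def break_even_units(fixed_costs, unit_price, unit_variable_cost):
    """Units that must be sold before revenue covers all costs."""
    contribution_margin = unit_price - unit_variable_cost
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_costs / contribution_margin

# Example: $50,000 of fixed costs, $25 unit price, $15 variable cost per unit.
units = break_even_units(50_000, 25.0, 15.0)
print(units)  # 5000.0
```

A higher break-even figure means more units must be sold before the venture covers its costs, which is why a high break-even point signals greater operating risk.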
Don't make it a hassle for anyone to determine the basis for your numbers.

Develop Straightforward and Reasonable Sales and Expense Budgets
Spend the time to develop budgets that make sense. Avoid quick-fix forms of projecting sales and expenses, especially for the first twelve months of your plan. Your pro forma financials are about planning, so support your pro forma financial statement line items with budget details that make sense and show thoughtfulness on your part. Some reasonable rules of thumb, like percentages of sales, may prove useful over a long planning horizon, but for the first 12 months of your plan you need to dig deeply into the numbers. On the other hand, expense projections after the first twelve months of the planning period can be based on reasonable percentages and quick rules of thumb.

Many business finance courses teach methods like the "percentage of sales" method of preparing pro forma financials. With that method, many elements of a financial statement are calculated as a percentage of revenues. For example, inventory might be 30% of sales, or cost of sales might be based on a target profit margin that is itself an assumed percentage of sales (i.e., if the target gross margin is 40%, then the cost of sales percentage would be calculated as 100% - 40% = 60% of sales).

The cornerstone upon which all the expense and capital budgets lie is the revenue or sales budget. Sales budgets, despite shortcut rules of thumb or fancy formulas, are very difficult to prepare. They are a forecasting challenge that ranks in difficulty with long-term weather forecasts. That's because the future is so uncertain, and the further out you go into the future with your plans, the more uncertain things become. Although there are many sales forecasting techniques taught in quantitative business courses in undergraduate and MBA programs across the world, nothing beats the bottom-up approach of sales forecasting, especially for a new venture. Bottom-up methods use information generated by those closest to the customer to forecast sales. One complaint about bottom-up revenue predictions is that they do not take into account the overall effects of the economy, seasonal trends, and other variables that can influence revenues.

One thing that reviewers of revenue forecasts (within business plans) complain about is what's called the hockey stick revenue forecast. Be careful not to develop overly optimistic "hockey stick projections" of sales taking off in the near future. Sales projections that move along at a moderate pace and then all of a sudden jump up are hard to believe. Investors and lenders are not going to believe that your sales will be somewhat flat for a while and then, once you have their money, your sales are going to go through the roof. If you've really created that once-in-a-generation business whose sales will take off, then you'd better build so much bottom-up detail into that forecast that even the most cynical or skeptical capital provider will believe it. Budgets, despite shortcut rules of thumb or fancy formulas, are the only reasonable way to produce pro forma financials. Budgeting, though, is an art, not as much of a science as GAAP accounting.
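The percentage-of-sales shortcut described above can be sketched in a few lines. The 60% cost-of-sales ratio (100% minus the 40% target gross margin) and the 30% inventory ratio come from the example in the text; any real plan would substitute ratios appropriate to its own business, and, as the text warns, this shortcut is best reserved for periods beyond the first twelve months.

```python
# Sketch of the "percentage of sales" method: line items derived as
# fixed percentages of projected revenue. Ratios are illustrative.

def percent_of_sales_projection(sales_forecast, cost_of_sales_pct=0.60,
                                inventory_pct=0.30):
    """Return a list of pro forma line items, one dict per period."""
    rows = []
    for sales in sales_forecast:
        cost_of_sales = sales * cost_of_sales_pct
        rows.append({
            "sales": sales,
            "cost_of_sales": cost_of_sales,
            "gross_margin": sales - cost_of_sales,
            "inventory": sales * inventory_pct,
        })
    return rows

projection = percent_of_sales_projection([100_000, 120_000, 150_000])
for row in projection:
    print(f"sales {row['sales']:>9,.0f}  gross margin {row['gross_margin']:>9,.0f}")
```

Note how mechanical this is: every line moves in lockstep with sales, which is exactly why the method cannot substitute for bottom-up detail in the first year.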
Show Results in Profit and Loss Projections and Cash Flow
The pro forma financials of your business plan should show the following:
1. 24 months of revenues and expenses for the first two years (month-by-month).
2. Summary income statements for the first five years (consolidating the data from #1 above - the first 24 months are folded into the first two years of these statements).
3. 24 months of cash budget, showing the details of cash receipts and expenditures for the first two years (month-by-month).
4. Summary cash budgets for the first five years (consolidating the data from #3 above - the first 24 months are folded into the first two years of these statements).
5. Projected balance sheets for the end of each year for each of the first five years.
6. Ratios based on the projected balance sheets and income statements for the five years. The ratios (unless not applicable) should include at a minimum the following:
   a. Liquidity ratios, such as: i. Current Ratio; ii. Acid-Test Ratio
   b. Leverage ratios, such as: i. Debt Ratio; ii. Debt/Equity Ratio; iii. Times Interest Earned
   c. Efficiency ratios, such as: i. Inventory Turnover; ii. Average Collection Period; iii. Total Asset Turnover
   d. Profitability ratios, such as: i. Gross Margin; ii. Return on Assets; iii. Return on Equity

As important as monthly details are in the beginning, they become a waste of time the further you extend out, and monthly details past the first year are difficult to do. Remember that much hinges on the sales forecast, so how can you predict monthly cash balances for three years from now, when your sales forecast is so uncertain? You are only guessing the future in a system full of uncertainties and risks. It is wise to do your best to plan in five-year horizons in the major conceptual sense.

Keep in mind that the above list is the bare minimum of ratios that you should utilize in your integrated financials. More ratios may be applicable for your type of business. Once you have taken your first pass at your pro forma financials, you must review your ratios to determine if they are reasonable. To do that, you will need to "benchmark" against some reasonable external data, like the average for each ratio within your industry, or peer group ratios (companies of similar size within your industry). If your first pass at the financials results in some strange-looking ratios (e.g., a current ratio of 50 - which means you would be astonishingly flush with cash), then you need to rework your numbers to get the "bugs" out. If you don't, no one is going to believe your projections. Rules of thumb are hard to define for a book like this because every industry is different and the acceptable range of ratios can vary considerably from industry to industry. A review of the many types of financial analysis reports available from financial publishers (Standard and Poor's, Hoovers, The Risk Management Association, Integra, BizMiner, etc.) may provide you with some reasonable targets. Web resources and business reference publications in your library may also be of help in identifying reasonable ratio benchmarks to use.

Pro Forma Financials as Financial Models
Pro forma statements that have been developed in an Excel workbook provide data for calculating financial ratios and for performing other mathematical calculations. They can also receive the results of subsidiary worksheets such as detailed budgets or sales forecasts. Pro forma statements, linked to one another and with links to supporting details, create not only a necessary tool for your business plan but also a dynamic financial model that can contribute to the achievement of company goals if they help you to: i. test the goals of the plans; and ii. develop findings that are readily understandable. Financial modeling tests the assumptions and relationships of proposed plans by studying the impact of variables (this is also called "what-if" or sensitivity analysis) - the prices of labor, materials, and overhead; sales volume; cost of goods sold; cost of borrowing money; and inventory valuation - on the company in question.

Pro forma statements are an integral part of business planning and control. Owners use them to get money (as part of the business plan), and managers use them in the decision-making process when building an annual budget (also called the Master Budget), developing long-range plans, and choosing among capital expenditures. Pro forma statements are also valuable in external reporting. Public accounting firms find pro forma statements indispensable in assisting users of financial statements in understanding the impact on the financial structure of a business due to changes in the business entity, or in accounting principles or accounting estimates.
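The ratio review and "what-if" sensitivity analysis described above can be sketched as follows. All balance-sheet and income figures here are hypothetical placeholders, and the sanity threshold on the current ratio is an arbitrary example; real benchmarks should come from industry sources such as RMA statement studies.

```python
# Illustrative sketch (not from the book) of a ratio check plus a simple
# "what-if" pass. Every number below is a made-up placeholder.

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def gross_margin_pct(sales, cost_of_sales):
    return (sales - cost_of_sales) / sales

# First-pass projections.
projection = {"current_assets": 500_000, "current_liabilities": 10_000,
              "sales": 400_000, "cost_of_sales": 240_000}

cr = current_ratio(projection["current_assets"],
                   projection["current_liabilities"])
if cr > 10:  # a "strange looking" ratio, like the current ratio of 50 above
    print(f"current ratio {cr:.0f} looks unrealistic -- rework the numbers")

# What-if: how does gross margin respond if material costs rise?
for cost_increase in (0.00, 0.05, 0.10):
    cos = projection["cost_of_sales"] * (1 + cost_increase)
    margin = gross_margin_pct(projection["sales"], cos)
    print(f"cost of sales +{cost_increase:.0%}: gross margin {margin:.1%}")
```

Running variables through a loop like this is the programmatic equivalent of the linked-worksheet model the text describes: change one assumption, and every dependent line item and ratio updates.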
Chapter Thirteen - Other Related Communications
The business plan is a critical document for reasons noted in other chapters of this book, but there are other ways to communicate your story. There are other documents, PowerPoint shows, and "pitches" that you will want to create to help you get the word out about your venture.

Executive Summary
A separate executive summary should be developed that you can bring to meetings, give to potential investors, and hand out at management meetings. The executive summary summarizes the main points of the business plan. It can be anywhere from one to five pages in length; many venture capitalists and angel investors prefer the one-page version. It is very common for a potential investor to say, "Send me the Executive Summary, and if that interests us, we'll get back to you," or to request an MS Word or PDF version of the executive summary. Keep it up-to-date (it is a "living document," like the business plan) so you will always be ready to present.

The PowerPoint Show
The PowerPoint Show is your business plan in a PowerPoint presentation. It is a powerful selling tool and it may be your only chance to catch an investor's attention. It would be wise to study up on what makes for a good PowerPoint presentation. As with any PowerPoint show, keep your business plan presentation short - both in the number of slides and the length of time. Many experts believe that more than 10 slides create a "pitch" that is too lengthy. The goal is to provide just enough information to make the audience want more, and to do it in under thirty minutes. Like an executive summary, your PowerPoint presentation needs to be concise but offer a tease. If it is a good tease, the audience will ask lots of questions, and that type of interaction will reveal much about your ultimate business plans. Your goal is to make the presentation interesting enough so that people learn and ask questions. Promoting interaction is the key. There's a real art to making good-looking slides and to balancing the need to show lots of information with the need of the audience to simply see what's up there. Many professionally prepared PowerPoint presentations are done in a 30-point font size (with titles in 36-point). Even then, be careful not to leave your audience in the dark. Stay away from high-tech jargon unless there is a techie in the room who asks specific questions.

Elevator Pitch
Then there is the elevator pitch. This is a 30- to 60-second sales pitch designed to convey your idea. It is called an elevator pitch because you may have no more time to describe your company than you would have on an elevator ride. You must be quick, concise, and make your point. It is, like the executive summary, a way to get the listener to ask for more information or schedule a meeting to learn more about your company. In your pitch you should describe:
- The need or problem your venture addresses
- Why your solution is effective
- The benefits
- Who you are and how to contact you
You need to have your business card ready to hand out at the end of this short pitch - just the reverse of a baseball pitcher (if you are not a baseball fan: in baseball the pitcher winds up and pitches; here you pitch and then "wind up" by handing over your business card).

Use your business plan to glean the words for your pitch. Work to make it really economical. Not easy to do. Write it down, revise, revise, practice, practice until it is perfect. Get advice from mentors, advisors, and other entrepreneurs. All team members should know it like the back of their hands, and everyone who speaks about the venture should know the pitch (just as they should have copies of the business plan). If you think it is impossible to make an effective quick pitch, consider the Gettysburg Address - considered by many one of the greatest speeches of all time. Mr. Lincoln made a compelling case for the Civil War and it lasted two minutes - only 278 words - less than most letters to the editor of your local newspaper.
Chapter Fourteen - Financing: Seed Capital
The business plan may help you get the financing you need to launch your business, move it beyond the early stages, or, in the case of an established business, expand your markets, meet operational needs, or expand your business. But the business plan must make the case that your business model will work; financing will not flow to an entity without making that case.

But who will provide the financing? There are many sources of funds for new ventures. In the early days of a business venture, you need start-up capital. Start-up capital can come in many forms: bootstrapping is one way, but you may also look to angel investors and even the U.S. Government (in the case of innovative and technical inventions and product improvements).

Bootstrapping
Some financing is what is called bootstrap financing: use your own money to get your business off the ground. This is one of the most popular forms of internal funding because it relies on your ability to utilize all your company's resources to free additional capital to launch a venture. Bootstrap financing becomes available through normal business operations, such as accounts payable, while other sources must be planned, such as sharing equipment, bartering, and leasing. Bootstrap capital can complement or even reduce dependence on traditional sources of capital.[1] There are sources of bootstrap financing that you can exploit before you look to external sources. They include:
- Personal savings
- Credit cards
- Loans from family and friends
- Loans against property
- Leasing
- Trade credit
Bootstrapping is especially important for small firms due to their lack of access to capital markets and the difficulty of raising capital. Bootstrap financing often has advantages: it is easily obtainable (for example, credit cards), convenient (for example, home equity loans), and carries minimal requirements (for example, loans from life insurance). Some small business experts believe that the entrepreneur's ability to raise nontraditional sources of capital (bootstrapping) often identifies the entrepreneurial character and ability of the new business owner. The use of bootstrap financing can force firms to solve problems that otherwise would remain hidden and unresolved; bootstrapping is another form of creative problem solving - something all successful entrepreneurs must master.

[1] "6 Sources of Bootstrap Financing," Entrepreneur.com, October 4, 2005; the article was excerpted from The Small Business Encyclopedia.

Angel Financing
For ventures that may take a while to get going, such as high-tech firms that need time to develop their product and/or to penetrate new markets, early financing may involve bootstrapping but then may need to come from angels. Angels provide seed money for firms too young or too small to qualify for bank loans, venture capital, or public offerings. Angels are usually people who have made it as entrepreneurs and want to become involved in new ventures - both by contributing their money and by giving advice. The angels provide the seed capital in hopes that once the ideas, inventions, and the venture itself become more mature and successful, formal outside financing will become feasible and the angel will be able to cash out - harvesting an acceptable return on investment.

For example, on Cape Cod, Massachusetts, there is a group of investors called the Bay Angels who look to provide early-stage capital to high-tech companies that will be located on the Cape. Bay Angels members need to have a net worth of at least $1 million. Angels like the Bay Angels often provide more than just capital: through their contacts, experience, and expertise they add value to the enterprises in which they invest.

Characteristics of Angel Investors
Angels are hard to find because they are silent and private. They don't take out ads in newspapers or push their services through many media channels. Angels are high-net-worth individuals - freelancers of sorts - who are looking to invest relatively small amounts of money ($25,000 to $500,000) in early-stage ventures. And they are, needless to say, affluent. They prefer to invest at the early stage of a venture. Angels like to work alone, but sometimes they are part of an angel network; they work within networks or "clusters" (a term that Michael Porter has made famous). Business plans are important documents to angels: to even consider a potential investment, angels will want to first see an executive summary of a business plan and may eventually want to study the entire plan.
Angels are active investors, serving on a working Board of Directors or providing guidance through an informal consulting/monitoring role. There is also a social aspect to the angels: they work in groups, tend to meet periodically for breakfast or lunch, and co-invest with trusted friends and business associates. But the real commonality of angel networks is that they typically invest in ventures involving markets and technologies with which they are familiar. Proximity also plays a part, since angels typically invest in ventures close to home - within one day's drive - so they can quickly check up on their investment.

How are Angel Investors Different from Venture Capital Investors?
The primary differences between angels and venture capitalists (VCs) are in three areas: expected rates of return, financing amount, and exit horizons.
- A round of angel financing is typically less than $1M, and more usually less than $500K.
- Angels will accept a longer payback horizon and are willing to settle for a smaller return - 20% to 25% per year, compared to the VCs' expected ROI of 30% to 35% or more per year. Venture exit horizons for angels tend to be 5 years to 10 years or more.
- In addition, it is common (but not always the case) that angels' investment terms and conditions tend to be briefer and more informal than those of venture capitalists.

Finding Angels
Angels are not easy to find. There are private investor angels who operate without much fanfare or publicity; no public records of their investment transactions exist. The private investor angel market tends to be regional rather than local or national. But angels do "connect with" college campuses (often business schools and engineering programs), state economic development agencies, business incubators, and other nonprofit entities. These networks are often oriented to bio-tech and other high-tech companies.

U.S. Small Business Innovation Research Program
The Small Business Innovation Research (SBIR) program is a highly competitive U.S. federal program that helps small businesses explore their technological potential and provides the incentive to profit from converting technology into commercial products. By including qualified small businesses in the nation's R&D arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs.
SBIR targets the entrepreneurial sector, because that is where most innovation and innovators thrive. However, the risk and expense of conducting serious R&D efforts are often beyond the means of many small businesses. By reserving a specific percentage of federal R&D funds for small business, SBIR protects the small business and enables it to compete on the same level as larger businesses. SBIR funds the critical startup and development stages, and it encourages the commercialization of the technology, product, or service, which, in turn, stimulates the U.S. economy. Since 1982, SBIR has helped thousands of small businesses to compete for federal research and development awards. Their contributions have enhanced the nation's defense, protected our environment, advanced health care, and improved our ability to manage information and manipulate data.

Every year, federal departments and agencies set aside a portion of their R&D funds to fund SBIR grants to small businesses. These departments include: Department of Agriculture, Department of Commerce, Department of Education, Department of Energy, Department of Health and Human Services, Department of Transportation, Environmental Protection Agency, NASA, and the National Science Foundation.

There are qualification requirements for a U.S. small business to compete for SBIR funds. Visit the U.S. Small Business Administration website (www.sba.gov) for more details on qualifications. But briefly, the small business must be:
- American-owned and independently operated
- For-profit
- A business whose principal researcher is employed by the business
- Limited in size to 500 employees

The SBIR process works like this: submit an SBIR proposal (including a business plan) to the appropriate agency, and the agency makes awards based on:
- Small business qualifications
- Degree of innovation
- Technical merit
- Future market potential

Small businesses that receive awards or grants then begin a three-phase program. Phase I is the startup phase. Awards of up to $100,000 for approximately 6 months support exploration of the technical merit or feasibility of an idea or technology.
Phase II awards of up to $750,000, for as many as 2 years, expand Phase I results. During this time, the R&D work is performed and the developer evaluates commercialization potential. Only Phase I award winners are considered for Phase II. Phase III is the period during which the Phase II innovation moves from the laboratory into the marketplace. No SBIR funds support this phase; the small business must find funding in the private sector or from other non-SBIR federal agency funding.

The US Small Business Administration plays an important role as the coordinating agency for the SBIR program. It directs the various federal agencies' implementation of SBIR, reviews their progress, and reports annually to Congress on its operation. The SBA accumulates information on SBIR programs and makes that information available through its website (www.sba.gov). For more information on the SBIR Program, please contact: US Small Business Administration, Office of Technology, 409 Third Street, SW, Washington, DC 20416, (202) 205-6450.

There is a connection between SBIR programs and business plans. SBIR programs are an excellent way to get the seed money to do advanced R&D. If you are attempting to win an SBIR award, start by creating a business plan, as a business plan is necessary when trying to attract any type of capital.

Small Business Administration
The mission of the SBA is to maintain and strengthen the nation's economy by aiding, assisting, and protecting the interests of small businesses and by helping families and businesses recover from national disasters. The SBA offers a number of programs that can help your small business gain financing, and it is attempting to make more credit available to small companies with less paperwork. The SBA guarantees loans up to 85% of a private loan, with a maximum guarantee amount of $750K and 25 years to pay. The SBA LowDoc program offers a simple one-page application to minimize paperwork requirements and loan processing time for small businesses on loans up to $100K. Then there is the SBA MicroLoan program, with loans up to $25K to startup companies in inner cities and rural areas. The Small Business Investment Companies (SBIC) program was established by the federal government and is made up of private investment companies that provide venture financing and management assistance to small companies, a large percentage of which are owned by women and minorities.
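The guarantee percentage, cap, and 25-year term mentioned above reduce to simple arithmetic. The sketch below applies the 85%/$750K figures from the text together with the standard amortized-loan payment formula; the 9% interest rate and the loan amounts are hypothetical placeholders, not actual SBA terms.

```python
# Rough arithmetic behind the SBA guarantee figures quoted in the text.
# The 85% and $750K cap come from the text; the 9% rate is hypothetical.

def sba_guaranteed_amount(loan_amount, guarantee_pct=0.85, cap=750_000):
    """Guaranteed portion: a percentage of the loan, up to the cap."""
    return min(loan_amount * guarantee_pct, cap)

def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan payment: P * r / (1 - (1+r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

print(sba_guaranteed_amount(500_000))    # 425000.0 (85% of the loan)
print(sba_guaranteed_amount(2_000_000))  # 750000.0 (capped)
print(round(monthly_payment(500_000, 0.09, 25), 2))  # 25-year payment
```

The point of the long 25-year term is visible in the formula: stretching `n` lowers the monthly payment, which is what makes guaranteed loans serviceable for small firms.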
Commercial Banks
Commercial banks are one of the cheapest sources of borrowing for small companies, typically charging interest rates just one to two points above prime for small business loans. However, most start-up firms cannot attract bank loans because they cannot demonstrate sufficient assets and a healthy financial track record - or any financial history at all. As one venture capitalist told me, many banks want to sell umbrellas on sunny days: they want to give money to entities that really don't need it. If that's the case, you will probably need to be creative by bootstrapping your venture until you are successful enough to get bank financing. The downside to seeking funds from commercial banks is that they typically seek security in the form of the business owner's personal assets. If you do want to approach commercial banks, look for banks where small business loans are a priority, not a sideline; those types of banks are better at evaluating and managing the risk of business loans. Once again, your business plan, an essential loan document, will be the starting point.

Strategic Alliances
An alternative to angel financing, VC financing, or more debt is a strategic alliance with another firm. It might be that another business with which you do business has different but complementary strengths, and that excellent match might lead to benefits such as capital infusions. As a source of financing, a strategic alliance can benefit both firms: it can mean access to new markets for one, and knowledge, skills, and access to production capabilities for the other. Microsoft is an excellent case: it works with smaller partners and eventually buys into smaller corporate partners, in essence using them as R&D arms.
Chapter Fourteen - Venture Capital
Preparing your plan for venture capital (VC) investors requires some knowledge of the information needs of that group. But before you consider those needs, it might first be wise to evaluate the appropriateness of seeking VC financing in the first place. Although VCs invest in all areas, there is a tendency to invest in high-technology ventures, and VCs are not interested in companies on the verge of a start-up. They are interested in financing established, ongoing firms in which they can invest at least $2 million to $5 million.

VCs want to understand the composition of your management team, and they want assurances that you will have a strong management team in place. VCs have no desire to run a company; they want you and your team to do that, and to do it in such a way as to provide good stewardship of the VC funds and healthy ROIs. VCs invest in portfolios of businesses, and they are not likely to know as much about each particular business as the management teams of the firms in which they invest. VCs do play a role, but that role is more advisory in nature. They tend to act as a sounding board and critic, and they try to ask the right questions regarding strategic planning, operations, and hiring key talent. You must find a way to show that you have a capable and experienced management team in place. How can you demonstrate that in your business plan? That's a key question.

As with any capital provider, keep in mind that the due diligence of a VC will be extensive, going way beyond a review of your business plan. They will:
- Analyze your pro forma financial data and estimate the potential value of the investment at the time of expected VC exit from the investment.
- Query your existing and potential customers.
- Contact your suppliers.
- Contact your competitors.
- Contact your existing outside investors.
- Contact your legal counsel.
- Check your references regarding prior positions with other firms, and contact people involved with your other former business associates.
- Tour your facilities.
- Consult technology experts regarding your company's technology.
- Perhaps commission formal market studies by outside consultants.
- Query your bankers.

VC Oversight
VCs will not give you money and then forget about you. They will provide strong oversight after giving you the money, and that oversight can be quite intensive. One VC recently told me that if you don't like VCs looking over your shoulder, then you might want to get a bank loan and be happy just paying the principal and interest. Oversight will happen in a variety of ways:
- Often VCs serve on the firm's board of directors.
- VCs involve themselves in the firm's strategic decisions.
- VCs frequently contact the firm's key personnel on operations issues and pay informal visits to firms in the VC's portfolio.
- They might periodically meet with the firm's customers and suppliers.

VC Exit Strategy
VCs eventually leave. They are less patient than other equity investors, and they seek to "exit" an investment within 3 to 5 years. Exiting the investment is how the VC makes its money, and it is usually achieved by the company going public, being acquired, being re-capitalized, finding a substitute investor for the VC, or a buyback of the VC's investment by the firm.

VC Investment Criteria
Here is a quick checklist to help you understand the investment criteria used by VCs:
- VCs seek uniqueness in the product/service concept. The product/service must offer a significant competitive advantage.
- The product/service concept must already work or be able to be brought to market within 2 to 3 years.
- The VC expects a minimum return on investment (ROI) of over 30% per year. The venture must demonstrate a high absolute potential return (i.e., volume of dollars) in addition to a percentage ROI in excess of 30% per year.
- There must be significant potential for earnings growth. Earnings growth potential may come from a rapidly growing market, increasing market share, or significant cost cutting.
- VCs invest in people rather than in ideas or physical assets. VCs key on the capabilities and track record of the firm's management team.
- Key management must demonstrate success in similar positions at prior firms if seeking early-stage financing.
- Key management must demonstrate success in their current positions if seeking later-stage financing.
- Key management must objectively demonstrate high personal integrity.
- Key management must demonstrate a thorough understanding of the business and the particular venture.
- Key management must demonstrate the ability to identify risk and develop plans to deal with risk.
- Key managers must exhibit leadership and appropriate management experience.
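The ROI and exit-horizon criteria quoted in this chapter can be made concrete with compound growth: an investor earning an annual ROI of r for t years needs an exit worth investment × (1 + r)^t. The rates below (30% for a VC, 20% for an angel) come from the figures in the text; the $250K investment amount and the specific horizons are arbitrary examples.

```python
# Compound-growth illustration of the return expectations quoted above.
# Rates are from the text; the investment amount is a made-up example.

def required_exit_value(investment, annual_roi, years):
    """Exit value needed to deliver a given compound annual ROI."""
    return investment * (1 + annual_roi) ** years

investment = 250_000
# A VC at 30%/yr over a 5-year horizon:
print(round(required_exit_value(investment, 0.30, 5)))  # roughly $0.93M
# An angel at 20%/yr over an 8-year horizon:
print(round(required_exit_value(investment, 0.20, 8)))  # roughly $1.07M
```

Note how quickly the required multiple grows: a 30%-per-year target over five years means the VC's stake must roughly 3.7x, which is why VCs screen so hard for significant earnings-growth potential.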
Chapter Fifteen - Business Plan Appendices
In the process of writing your business plan, you will come across information that is too important to be left out, but too detailed to be in the body of the plan. This information belongs in an appendix. It helps to support your claims and can answer many of your reader's questions. An appendix can contain exhibits, charts, figures, photographs, important documents, résumés, product specifications, sample marketing materials, worksheets, and website documents - anything that can help your case. It can also include supporting materials such as your mission statement. Don't go overboard, though. Include information only if it adds value or helps clarify major points.

Business plan appendix checklist
- Market analysis data prepared by a third-party consultant
- Exhibits outlining expected timing of product development, hiring, product reviews, or other significant corporate events
- Product specifications, photographs, and brochures
- Samples of advertising
- Organizational chart and list of job responsibilities
- Detailed résumés or biographical sketches of owners and managers
- Personal financial statements, tax returns, and credit reports
- Copies of contractual agreements
- Commitment letters from major customers, suppliers, and lenders
- References (either letters or contact names) from lawyers, accountants, suppliers, and banks
- Hardcopies of websites
- Mission statement
- Vision statement
- Résumés
- Product reviews
- Photographs of facilities
- Copies of logos or trademarks
Appendix A - Information Resources

U.S. Small Business Administration (SBA): The SBA offers extensive information about many business management topics, from how to start a business to exporting products. The SBA has offices throughout the country. Consult the U.S. Government section in your telephone directory for the office nearest you. The SBA offers a number of programs and services, including training and educational programs, counseling services, financial programs, and contract assistance. The SBA maintains a useful website at www.sba.gov. It contains an online library with information on starting a business, writing a business plan, and on SBA loan programs. For more information about SBA business development programs and services call the SBA Small Business Answer Desk at 1-800-U-ASK-SBA (827-5722).

Service Corps of Retired Executives (SCORE): The Service Corps of Retired Executives (SCORE) is a national organization sponsored by SBA. It is comprised of over 13,000 volunteer business executives who provide free counseling, workshops, and seminars to prospective and existing small business people. Contact your local SBA office for more information about SCORE assistance in your area.

Small Business Development Centers (SBDCs): The SBA, in partnership with state and local governments, the educational community, and the private sector, sponsors SBDCs. Many colleges and universities partner with SBDCs to assist local entrepreneurs and companies. An SBDC acts as a catalyst for the interaction between small businesses and sources of capital. An SBDC can help with the following:

- Assist the small business in preparing its business plan and presenting it to potential investors.
- Suggest to the entrepreneur additional resources to strengthen his presentation.
- Recommend to the entrepreneur possible sources of capital.
- Bring investment opportunities to the attention of appropriate sources of capital, and coordinate approaches to the various sources.
- Provide an objective third party viewpoint to help the parties work towards a mutually satisfactory plan.
- Provide general assistance to the small business.
Each state has at least one SBDC (called a lead office), with many states having several satellite SBDCs. A list of SBDC lead offices by state follows:

- Univ. of Alaska/Anchorage, Anchorage, AK (907) 274-7232
- Maricopa County Community College, Tempe, AZ (480) 731-8720
- University of Arkansas, Little Rock, AR (501) 324-9043
- California Trade and Commerce Agency, Sacramento, CA (916) 324-5068
- Office of Business Development, Denver, CO (303) 892-3809
- University of Connecticut, Storrs, CT (203) 486-4135
- University of Delaware, Newark, DE (302) 831-2747
- Howard University, Washington, DC (202) 806-1550
- Economic Development Council, Indianapolis, IN (317) 264-6871
- Iowa State University, Ames, IA (515) 292-6351
- Wichita State University, Wichita, KS (316) 689-3193
- University of Kentucky, Lexington, KY (606) 257-7668
- Northeast Louisiana University, Monroe, LA (318) 342-5506
- University of Southern Maine, Portland, ME (207) 780-4420
- University of Massachusetts, Amherst, MA (413) 545-6301
- University of Mississippi, University, MS (601) 232-5001
- Department of Commerce, Helena, MT (406) 444-4780
- University of Nebraska at Omaha, Omaha, NE (402) 554-2521
- University of Nevada in Reno, Reno, NV (702) 784-1717
- University of New Hampshire, Durham, NH (603) 862-2200
- State University of New York, Albany, NY (two locations) (518) 443-5398
- University of North Carolina, Raleigh, NC (919) 571-4154
- University of North Dakota, Grand Forks, ND (701) 777-3700
- Department of Development, Columbus, OH (614) 466-2711
- University of Pennsylvania, Philadelphia, PA (215) 898-1219
- Bryant College, Springfield, RI (401) 232-6111
- University of South Carolina, Columbia, SC (803) 777-4907
- Memphis State University, Memphis, TN (901) 678-2500
- University of Houston, Houston, TX (713) 752-8444
- Texas Tech University, Lubbock, TX (806) 745-3973
- University of Utah, Salt Lake City, UT (801) 581-7200
- Vermont Technical College, Randolph Center, VT (802) 728-9101
- University of the Virgin Islands, St. Thomas, US VI (340) 776-3206
- Department of Economic Development, VA (804) 371-8258
- Washington State University, Pullman, WA (509) 335-1576
- Governor's Office of Community and Industrial Development, Charleston, WV (304) 558-2960
- University of Wisconsin, Madison, WI (608) 263-7794

Small Business Institutes (SBIs): SBIs are organized through the Small Business Administration (SBA) and operate on more than 500 college campuses nationwide. The institutes' students and faculty counsel small business clients. If you are interested in the services of an SBI, contact a local business college or the nearest Small Business Development Center to get a referral.

Other U.S. Government Resources: Many publications on business management and other related topics are available from the Government Printing Office (GPO). GPO bookstores are located in 24 major cities and are listed in the Yellow Pages under the bookstore heading. You can request a Subject Bibliography by writing to Government Printing Office, Superintendent of Documents, Washington, DC 20402-9328.

Sources of Free or Low Cost Government Information: Many federal agencies offer publications of interest to small businesses. To get their publications, contact the regional offices listed in the telephone directory or write to this address: Consumer Information Center (CIC), P.O. Box 100, Pueblo, CO 81002. The CIC offers a consumer information catalog of federal publications. There is a nominal fee for some, but most are free. The CIC also maintains a website at www.pueblo.gsa.gov.

Your Library: A visit to your local library, or even better, the library of a college or university (particularly one that has a business school), could reveal many more useful publications and digital resources. If you are not familiar with the efficient means for finding business information in your library, discuss your needs with the librarian. A helpful librarian can locate the information you need and point you to valuable resources to simplify your research.
Small Business Books: Many guidebooks, textbooks, and manuals on small business are published each year. Most libraries have a variety of directories, indexes, and encyclopedias that cover many business topics. To generate ideas, look for other books and information resources: go to www.amazon.com or www.barnesandnoble.com and search using keywords such as business plan, business planning, and starting a business. Here is a list of some useful business plan books:

- Writing a Convincing Business Plan by Arthur R. DeThomas, Ph.D. and William B. Fredenberger, Ph.D. The book effectively covers the elements of a financial plan. This book is published by Barron's Educational Series, Inc.
- The McGraw-Hill Guide to Writing a High-Impact Business Plan by James B. Arkebauer. Concise and full of examples, this book provides the basics of business plan preparation. The author is an entrepreneurial investment banker who has reviewed and/or prepared over 5,000 business plans.
- Your First Business Plan by Joseph Covello and Brian Hazelgren. This book uses a workbook approach to preparing a business plan. The reader is presented with information, tips, examples, and questions to guide him through the plan preparation.
- Business Plans that Work, edited by Susan M. Jacksack, J.D. The book provides excellent guidance on how to gather information, research an industry, and write an effective marketing plan. This book is part of The Commerce Clearing House Business Owner's Toolkit series. All the books in their toolkit series are compiled by editors who draw upon a team of experts in the field.
- The McGraw-Hill Guide to Starting Your Own Business: A Step-By-Step Blueprint for the First-Time Entrepreneur by Stephen C. Harper. This book shows new and prospective business owners how to beat the odds and join the select few who follow their dreams to financial reward, job satisfaction, and self-reliance. Terrific book about starting a business.
- Business Plan in a Day by Rhonda Abrams. As the title suggests, the book takes a simple approach: it uses an outline and forms approach to help you prepare a barebones business plan in a 24 hour period. For more complex business ventures, it provides guidelines, checklists, and examples to help you prepare a complete business plan. It is an excellent reference book for any businessperson.
- The Art of the Start: The Time-Tested, Battle-Hardened Guide for Anyone Starting Anything by Guy Kawasaki. Has a chapter on business plans, but most of this book's value is found in the tips and advice provided regarding getting your business launched.
- Angel Investing: Matching Start-Up Funds With Start-Up Companies by Mark Van Osnabrugge and Robert J. Robinson. This comprehensive guide is a reference for people on both sides of angel investment. Entrepreneurs get practical advice on how to attract and pitch to investors. Angels can learn about connecting with startups, choose questions to ask before committing capital, and find out how to make their first investments.
- E-Shock: The Electronic Shopping Revolution: Strategies for Retailers and Manufacturers by Michael De Kare-Silver. A textbook for understanding the e-commerce explosion and tapping into your share of the online customer base.
- Strikingitrich.com (Striking It Rich.com): Profiles of 23 Incredibly Successful Websites You've Probably Never Heard Of by Jaclyn Easton, foreword by Jeff Bezos. This book gives accounts of little known web enterprises that are very successful.

Trade Association Information: Libraries have directories of trade associations where you can locate one or two pertinent to your business. Trade associations provide a valuable network of resources to their members through magazines, newsletters, websites, conferences, trade shows, and seminars.

The following websites are also useful resources:

- America's Business Funding Directory (www.businessfinance.com): An online resource to assist companies in obtaining capital.
- CCH Business Owner's Toolkit (www.toolkit.cch.com or on America Online keyword: CCH): CCH is a leading provider of high-quality business information. The site covers what to include in a business plan, gaining support, getting expert help, finding government programs, and how to valuate a business, along with Start-up Q&A and model business documents.
- Entrepreneur.com (www.entrepreneurmag.com): Maintained by Entrepreneur Magazine, this website has content related to how to start a business, run a business, and buy a franchise. The site ranks the most popular franchises and offers businesses for sale and articles on franchising.
- Microsoft's bCentral (www.bcentral.com): This website provides information and resource links to help you promote your business online, including how to get listed on search engines, advertise across the web, and use email to market your products and services, plus other services that help you take your business online.
- Startup.com (startup.wsj.com): Website from the publishers of the Wall Street Journal that provides information and tools, such as a database of Venture Capital firms.
- Yahoo! Small Business (smallbiz.yahoo.com): Website maintained by Yahoo! that provides business guides, tools, and much more.
Appendix B . and sanitary services Wholesale trade . the major industry groups are as follows: n n n n n n n n n n n Agriculture. insurance. forestry. As per the Standard Industrial Classification Manual. 86 . electric. communication.S.The Standard Industrial Classification System The Standard Industrial Classification (SIC) System specifies a name and number for the industry in which the company operates. gas. federal government to classify businesses by the specific business activities that they engage in. The most widely used method for coding industry characteristics is SIC.durable goods Retail trade Finance. fishing Mining Construction Manufacturing Transportation. The SIC is a uniform number-coding and verbal-description created by the U. and real estate Services Public administration Non-classifiable establishment Within each major industry category are major industry groups and subgroups that let you further refine and clearly identify the industry or industries your company operates in.
Appendix C – Excel Workbooks

The developers of OfficeReady Business Plan have prepared a group of Excel workbooks designed to help you create the pro-forma financials and supporting reports necessary for a solid business plan. This appendix provides descriptions and details about those workbooks. Keep in mind that although these workbooks provide the basic building blocks needed for the financial part of your business plan, you may need to tailor them or devise new worksheets to meet the needs of your business plan readers. Additional Excel templates can be purchased by visiting: www.templatezone.com

Break-Even Analysis (Break-Even Analysis.xls)
Use this workbook to calculate the break-even point. The break-even point is the sales volume needed to exactly cover all costs; at the break-even point there is no profit or loss. This template calculates the monthly break-even point in units and sales dollars (revenue).

How to Begin
Start by entering the cost of the product followed by the fixed and variable costs. The fixed costs are monthly selling and general costs that do not vary with sales (or production) volume. A variable cost is one that does vary; it moves in tandem with sales. For example, if a commission of $2 is paid on each sale, then commission expense will be higher with higher sales.
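The break-even arithmetic the template performs can be sketched in a few lines of Python. The figures below are illustrative examples, not values from the workbook:

```python
def break_even(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Return (units, dollars) of monthly sales needed to exactly cover all costs."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    units = fixed_costs / contribution_margin
    return units, units * price_per_unit

# Example: $10,000/month fixed costs, a $25 selling price, and $15 of
# variable cost per unit (including a $2-per-sale commission).
units, revenue = break_even(10_000, 25, 15)
print(units, revenue)  # 1000.0 25000.0
```

At 1,000 units the $10 contribution margin per unit exactly covers the $10,000 of fixed costs, which is the "no profit or loss" condition described above.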
Once you enter the selling price per unit, the worksheet calculates:

- Contribution margin per unit (selling price less variable cost per unit)
- Monthly unit sales at the break-even point
- Monthly sales dollars at the break-even point (monthly break-even point in units x selling price)

Using the Worksheet
Include the break-even analysis as part of the appendix of your business plan, and include the numbers in your financial plan narrative: some readers will want to know your break-even point, because the gap between where you believe you will be with sales volume and the break-even point is an indication of risk.

Integrated Financials (Integrated Financials.xls)
Every business plan has a financial element to it. At a minimum, your business plan should show your projected sales, financial position, and cash flow. The Integrated Financials workbook consists of the following worksheets to help you get started on your way to preparing a financial plan:
So spend a considerable amount of time crafting your sales forecast. based on the numbers in the financials. 24-Month Income Statements 24 Mth Inc Stmt 5-Year Income Statements 5 Yr Inc Stmt Income statement template that helps you prepare pro forma income statement for each of the 5 years in your planning period. Financial Ratios Ratios A summary of financial ratios.Financials Sales Forecast (24-month and 5-year) Worksheet Names 24 Mth Forecast 5 Yr Forecast Description Worksheets to help you organize your sales forecasts (by month fro the first 24 months and by year for the first 5 years) Income statement template that helps you prepare an pro-forma income statement for each month for the first 24 months of your planning period. 24-Month Cash Budgets 24 Mth Cash Bdgt 5-Year Cash Budgets 5 Yr Cash Bdgt 5-Year Balance Sheets 5 Yr Bal Sheet A business balance sheet template that helps you prepare pro-forma balance sheets as of the end of the fiscal year for each year of the 5 year planning period. Cash budget template that helps you prepare a pro forma cash budget for each of the 5 years in your planning period. Continue by completing the worksheets in the order they appear in the workbook. How to Begin The first thing you should do is to enter (in the Intro worksheet) the end date of the first fiscal year of your business plan. You may also enter text describing the units used in your plan. The preparation of Pro-forma financials begins with a solid sales forecast. for the end of the year for each of the 5 years in the planning period. Cash budget template that helps you prepare a pro forma cash budgets for each of the first 24 months in your planning period. you can simply clear the contents of the notation cell. If you do not wish to state numbers in thousands. This date is used to set up the starting month and the year-end dates for both the monthly and annual projections. The table of contents of the workbook offers hyperlinks to each worksheet. 89 .
Sales Forecast (24 Mth Forecast and 5 Yr Forecast)
You start by preparing the 24-month sales forecast (worksheet named: 24 Mth Forecast). Enter the names of your products/services in the first column and then enter your sales forecast by month. The template will total the sales in the last column. Your next step is to complete the five-year forecast in the worksheet named: 5 Yr Forecast. The first two years are automatically consolidated from the 24 Mth Forecast worksheet. You will need to enter formulas or numbers to complete the forecast for the last three years.
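The consolidation the 5 Yr Forecast worksheet performs on the first two years amounts to summing each product's 24 monthly figures into two annual totals. A minimal sketch, with illustrative numbers:

```python
def consolidate_first_two_years(monthly_sales):
    """Sum a 24-month sales series into year-1 and year-2 annual totals."""
    assert len(monthly_sales) == 24, "expects exactly 24 monthly values"
    return sum(monthly_sales[:12]), sum(monthly_sales[12:])

# A product selling 100 units/month in year 1 and 150 units/month in year 2:
y1, y2 = consolidate_first_two_years([100] * 12 + [150] * 12)
print(y1, y2)  # 1200 1800
```

Years 3 through 5 have no monthly detail behind them, which is why the worksheet asks you to enter those totals (or formulas) directly.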
Adapting the Sales Forecast Worksheets
By default these worksheets allow for up to 20 products and/or services. If you are going to forecast more than 20 items, you will need to insert more rows, both into the 24 Mth Forecast worksheet and the 5 Yr Forecast worksheet. If you do that, you will also need to copy formulas down column O in the 24 Mth Forecast, down column B in the second section of the 24 Mth Forecast, and down columns B, C, and D of the 5 Yr Forecast. Also, if you forecast fewer than 20 items, you may want to delete (Edit > Delete > Entire row) the extra rows. If you do that on the first worksheet (24 Mth Forecast), keep in mind that you will also need to delete the extra rows in the second section of the 24 Mth Forecast and in the 5 Yr Forecast worksheet. If you don't, the MS Excel #REF! error message will appear. The worksheets use cell references to link both text and numbers to subsequent sections of the worksheets, so you may want to become familiar with those "linkages" prior to inserting or deleting rows.

Income Statements (24 Mth Inc Stmt and 5 Yr Inc Stmt)
Like the sales forecast, the projected income statements are made up of two worksheets. The worksheet named 24 Mth Inc Stmt is a template that allows you to prepare an income statement for each of 24 months (the first two years of your 5-year planning period).
Once you have completed the 24 Mth Inc Stmt worksheet, go to the 5 Yr Inc Stmt worksheet and complete it. The first two years will be consolidated from the 24 Mth Inc Stmt worksheet; therefore, you will need to concentrate on the last 3 years.
Cash Budgets (24 Mth Cash Bdgt and 5 Yr Cash Bdgt)
The next logical step in the preparation of the integrated financials is the cash budget. This is done in two parts: budgets for the first two years (by month for 24 months) and for the last 3 years of the 5-year planning period. Your very first step is to enter the beginning cash balance in cell D9 of the worksheet named: 24 Mth Cash Bdgt. Once you are finished with the first 24 months of cash flow projections, move on to the worksheet entitled: 5 Yr Cash Bdgt and complete it. As with the sales forecasts and the income statements, the 24-month projections supply numbers to the 5 Yr Cash Bdgt; therefore, you will need to complete only the last three years of the 5-Year Cash Budget worksheet.
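The mechanics of a monthly cash budget are simple: each month's ending balance becomes the next month's beginning balance. A minimal sketch with hypothetical receipts and disbursements:

```python
def cash_budget(beginning_balance, receipts, disbursements):
    """Return the ending cash balance for each month.

    receipts and disbursements are equal-length monthly lists; each
    month's ending balance rolls forward as the next beginning balance.
    """
    balances = []
    balance = beginning_balance
    for cash_in, cash_out in zip(receipts, disbursements):
        balance += cash_in - cash_out
        balances.append(balance)
    return balances

# A $5,000 beginning balance and three months of projections:
print(cash_budget(5_000, [8_000, 9_000, 7_000], [7_500, 8_000, 9_000]))
# [5500, 6500, 4500]
```

A projected month with a negative or uncomfortably low ending balance is exactly the kind of signal the cash budget worksheet is designed to surface in advance.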
Worksheet Financial Integration
The term "integration" means that these worksheets are linked in a number of ways. Sales forecasts feed into the income statements and cash budgets. Income statements supply the cash budgets and balance sheets with critical numbers (revenues, expenses, increases in retained earnings, etc.). Below is a simple diagram that summarizes the links between the worksheets in the Integrated Financials workbook:

24 Mth Forecast -> 24 Mth Inc Stmt -> 24 Mth Cash Bdgt
5 Yr (Sales) Forecast -> 5 Yr Inc Stmt -> 5 Yr Cash Bdgt -> 5 Yr Bal Sheet

Using the Worksheets
The worksheets of the Integrated Financials workbook are a critical element of your business plan. The pro-formas are necessary documentation for business loans, venture capital, and top management decision making. In your business plan Word document, refer to the financials and include them in their entirety as an appendix to your business plan.

Projected Personal Financial Condition (Personal Balance Sheet.xls)
This worksheet allows you to prepare a personal balance sheet. Enter cost basis and current values for all your assets. Also enter your total liabilities to calculate a projected net worth.
Using the Worksheet
Sometimes the personal balance sheet is a requirement of the reader of the business plan (bankers, venture capitalists, etc.). In that case, include the personal balance sheet as part of the business plan appendix.
Sales Forecast (Sales Forecast.xls)
When preparing pro-forma financial statements for your business plan, you may want to prepare a detailed sales forecast. Use this workbook to do so. This workbook has two worksheets:

- 24 Mth Forecast: a worksheet you can use to prepare the first two years of sales forecasts (by product or service).
- 5 Yr Forecast: a worksheet that shows five years of sales forecasts by consolidating the first 24 months (from the worksheet named: 24 Mth Forecast) and adding 3 more years.

The workbook summarizes monthly sales forecast results for 24 months and annual sales forecasts for 5 years. The templates accommodate up to 20 products or services and are designed for maximum flexibility. Enter numbers directly into cells or create formulas incorporating monthly or annual growth rates.

How to Begin
You start by preparing the 24-month sales forecast (worksheet named: 24 Mth Forecast). Enter the names of your products/services in the first column and then enter your sales forecast by month. The template will total the sales in the last column.
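A formula-driven forecast of the kind described, a constant monthly growth rate applied to a starting value, can be sketched as follows (the starting value and rate are illustrative):

```python
def growth_forecast(first_month_sales, monthly_growth_rate, months=24):
    """Project sales for `months` periods at a constant compound growth rate."""
    return [first_month_sales * (1 + monthly_growth_rate) ** m
            for m in range(months)]

# 1,000 units in month 1, growing 5% per month:
forecast = growth_forecast(1_000, 0.05)
print(round(forecast[0]), round(forecast[11]))  # 1000 1710
```

This mirrors a spreadsheet formula like "= previous-month cell * 1.05" copied across the row; entering numbers directly instead simply overrides the formula for that month.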
Your next step is to complete the five-year forecast in the worksheet named: 5 Yr Forecast. The first two years are automatically consolidated from the 24 Mth Forecast worksheet. You will need to enter formulas or numbers to complete the forecast for the last three years.

Adapting the Workbook
By default these worksheets allow for up to 20 products and/or services. If you are going to forecast more than 20 items, you will need to insert more rows, both into the 24 Mth Forecast worksheet and the 5 Yr Forecast worksheet. If you do that, you will also need to copy formulas down column O in the 24 Mth Forecast, down column B in the second section of the 24 Mth Forecast, and down columns B, C, and D of the 5 Yr Forecast. Also, if you forecast fewer than 20 items, you may want to delete (Edit > Delete > Entire row) the extra rows. If you do that on the first worksheet (24 Mth Forecast), keep in mind that you will also need to delete the extra rows in the second section of the 24 Mth Forecast and in the 5 Yr Forecast worksheet. If you don't, the #REF! error message will appear. The worksheets use cell references to link both text and numbers to subsequent sections of the worksheets, so you may want to become familiar with those "linkages" prior to inserting or deleting rows.

Using the Worksheets
The total sales forecasted for each year of the planning period are an important element of both the business plan and a marketing plan. These same worksheets are also part of the Integrated Financials workbook, the heart of the financial plan.
Start-Up Capital and Capitalization (Start-Up Capital and Capitalization.xls)
This workbook contains two worksheets to help you estimate start-up capital and start-up capitalization. Start-up capital is the amount of money you need to start the business. It includes some monthly costs that you will need to cover from start-up investment, plus one-time start-up costs.

How to Begin
To estimate start-up capital, begin by entering the monthly costs that you will incur to run your business. These costs are multiplied by a factor to give you the estimated cash you will need. Those factors are shown in a table below the Start-Up Capital report.
If you want to change any of the multipliers for the "Cash Needed to Start" estimates, go to B39 and enter your own multiples in column D.

The Capitalization sheet (the second worksheet in the workbook) allows you to determine the sources of cash to help pay for the start-up expenses. For example, some start-up capital will come from owner investment, while other capital might be raised from loans. As you enter the sources and amounts of capital, the worksheet will calculate a deficit or surplus based on your estimates from the Start-up Capital worksheet.
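The "Cash Needed to Start" arithmetic (recurring monthly costs scaled by a contingency multiplier, plus one-time costs, netted against capital sources) can be sketched as below. The cost items, multiplier values, and capital amounts are placeholders, not the template's actual factors:

```python
def cash_needed_to_start(monthly_costs, multipliers, one_time_costs):
    """Scale each recurring monthly cost by its contingency factor,
    then add one-time start-up costs."""
    recurring = sum(cost * multipliers[name]
                    for name, cost in monthly_costs.items())
    return recurring + sum(one_time_costs.values())

monthly = {"rent": 2_000, "payroll": 6_000}
factors = {"rent": 3, "payroll": 3}           # e.g. three months of coverage
one_time = {"equipment": 10_000, "deposits": 1_500}

needed = cash_needed_to_start(monthly, factors, one_time)
raised = 20_000 + 15_000                      # owner investment + loan
print(needed, raised - needed)  # 35500 -500  (a $500 deficit)
```

A negative result on the last line corresponds to the deficit the Capitalization sheet reports when planned capital sources fall short of the estimated start-up requirement.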
Using the Worksheets
The worksheets of the Start-Up Capital workbook are a critical element of your business plan. In your business plan Word document, refer to the estimates for start-up capital and the sources of capital and include them in their entirety as an appendix to your business plan.

Balance Sheet (Balance Sheet.xls)
This worksheet is a basic balance sheet (statement of financial position) for a business. It is primarily used to report the financial condition of the firm to the owners. It is also presented to lenders to obtain financing.

How to Begin
You begin by entering an "as of date" for the balance sheet. Then enter the book value of assets, liabilities, and equities in the unprotected (yellow) data entry cells of the worksheet. Please note that "accumulated depreciation" and "doubtful accounts" must be entered as negative numbers. The Retained Earnings figure for each year is calculated, to ensure that Assets equal Liabilities plus Equity.

Using the Worksheet
Your financial plan should show pro-forma financials (see Integrated Financials.xls). If your business has been in existence for some time, you may also want to include a copy of your most current balance sheet (historical values). Refer to your balance sheet in the business plan and include it in the appendix.

Personnel Plan (Personnel Plan.xls)
The Personnel Plan helps you estimate your payroll expenses in three categories: Direct Payroll, Selling Payroll, and General & Administrative Payroll. Direct Payroll represents the wages and salaries of employees who are directly involved with making a product, providing a service, or performing activities that produce revenue. Direct payroll costs become part of the Cost of Sales forecast. Selling Payroll consists of the wages and salaries paid to the sales staff. It does not include sales commissions, which are a separate line item on the Income Statement. The projections you make on this sheet for Selling Payroll are utilized on the Income Statement under Selling expenses. General and Administrative Payroll consists of salaries paid to officers, administrators, and general managers, and wages paid to office staff. It includes all wages and salaries other than direct payroll and selling payroll. The totals from this report can be carried over to the income statements.
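The three payroll categories feed different lines of the income statement, which can be summarized in a small sketch. The staff roles and salary figures below are hypothetical:

```python
# Hypothetical monthly payroll, categorized as the Personnel Plan does.
direct = {"machinist": 3_500, "assembler": 2_800}   # feeds Cost of Sales
selling = {"sales rep": 3_000}                      # feeds Selling expenses
general_admin = {"office manager": 3_200}           # feeds G&A expenses

payroll_by_category = {
    "Direct Payroll": sum(direct.values()),
    "Selling Payroll": sum(selling.values()),
    "G&A Payroll": sum(general_admin.values()),
}
print(payroll_by_category)
# {'Direct Payroll': 6300, 'Selling Payroll': 3000, 'G&A Payroll': 3200}
```

Note that per the description above, sales commissions would not be included in the Selling Payroll figure; they are carried as their own line item on the income statement.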
How to Begin
On the Intro sheet, enter the end date of the first fiscal year of your business plan. This date is used to set up the starting month and the year-end dates for both the monthly and annual projections. You may also enter text describing the units used in your plan. If you do not wish to state numbers in thousands, you can simply clear the contents of the notation cell.

Next, go to the first worksheet: Personnel Plan 24 Months. This sheet allows you to estimate your personnel expenses by month for the first two years. Once you have completed the first worksheet, go to the second worksheet: Personnel Plan 5 Years. The first two years of cost estimates are taken from the first worksheet. You must complete the estimates for the last three years.

Using the Worksheets
The Personnel Plan worksheets can be included as part of the appendix of the business plan. More importantly, the totals from the Personnel Plan should be incorporated into the income statement worksheets of your business plan.
Glossary

Accounts Payable: A current liability representing the amount owed to a creditor for items purchased.
Accounts Receivable: Money owed to a firm for goods or services sold on credit, shown as a current asset on the balance sheet.
Acid-test Ratio: Also called the quick ratio; the ratio of current assets minus inventories, accruals, and prepaid items to current liabilities.
Advertising: Any paid form of non-personal message delivered through mass media.
Advertising Agency: A third-party firm that assists businesses with the advertising and promotion of products and services.
Amortized Loan: A loan to be repaid, interest and principal, by a series of regular payments that are equal or nearly equal, without any special balloon payment prior to maturity. Amortization is a payment plan that enables the borrower to reduce debt gradually through periodic payments of principal and interest.
Annual Report: Yearly record of a publicly held company's financial condition, including its balance sheet and income statement. SEC rules require that it be distributed to all shareholders. A more detailed version is called a 10-K.
Asset: Anything that is owned by and of value to the business.
Auditor's Report: Document issued by the company's independent auditing (public accounting) firm that expresses an opinion about the fairness of the entity's financial statements and adherence to Generally Accepted Accounting Principles.
Balance Sheet: A report that shows the financial standing of a business as of a particular time. It includes the assets, liabilities, and equity of the business as of a certain date.
Break-even Point: The level of operations where sales revenue equals expenses (costs); the volume of sales that results in a net operating income of zero.
Budget: Projected financial reports; a quantifiable plan that shows expected revenues, costs, expenses, and units produced.
Business Model: Another name for the business plan. It can also refer to the basic business concept.
Business Plan: A detailed written study of what the business is, where and to whom the business will market its products or services, and how it will conduct its operations. It is often written to gain funding, but it can also be used as an internal planning document. It includes a description of the firm's operations and a set of pro forma financial statements.
Capital Expenditures: An amount used during a particular period to acquire or improve long-term assets such as property, plant, or equipment.
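The equal payments in the Amortized Loan definition come from the standard level-payment annuity formula; a small sketch (the loan terms are illustrative):

```python
def monthly_payment(principal, annual_rate, years):
    """Level monthly payment that fully amortizes a loan
    (standard annuity formula: P * r / (1 - (1 + r)**-n))."""
    r = annual_rate / 12   # periodic (monthly) interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# A $100,000 loan at 6% annual interest over 30 years:
print(round(monthly_payment(100_000, 0.06, 30), 2))  # 599.55
```

Early payments are mostly interest and later payments mostly principal, which is the gradual debt reduction the definition describes.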
Capital Structure: The mix or proportion of long-term debt and owner's equity capital used to finance the firm's fixed assets.
Cash Budget: Plan that projects cash inflows and outflows for a business over some period.
Cash Flow: Earnings before depreciation, amortization, and non-cash charges; also known as cash earnings. Cash flow from operations (called funds from operations, FFO) by real estate and other investment trusts is important because it indicates the ability to pay dividends.
Collateral: Assets that have been pledged to secure a loan.
Contingency Factors: Factors or multiples used to add a safety cushion to projections such as start-up costs.
Control: Management function that involves comparing actual events to plans to determine whether corrective action is necessary.
Corporation: A legal, taxable entity that has a life, existence, duties, and responsibilities separate and distinct from its stockholders (owners).
Cost of Sales: Also called cost of goods sold; the cost of acquiring or making a good.
Credit History: Record of how a person borrowed and repaid debts.
Credit Report: Report documenting the credit history and status of a borrower's credit standing.
Current Assets: Total assets of the borrower or company at the current moment in time.
Current Liabilities: Amounts owed for salaries, interest, accounts payable, and other debts due within one year.
Current Ratio: An indicator of short-term debt-paying ability, determined by dividing current assets by current liabilities. The higher the ratio, the more liquid the company.
Customer Profile: The geographic, demographic, psychographic, and behavioral characteristics that define a group of customers.
Depreciation: A non-cash expense that provides a source of free cash flow; the amount allocated during the period to amortize the cost of acquiring long-term assets over their useful life. In real estate, it is the decline in value of a house due to wear and tear, adverse changes in the neighborhood, or any other reason.
Directing: Leading and motivating employees to achieve the goals and objectives of the company.
Earnings: Net income for the company during the period.
Earnings Per Share: EPS, as it is called, is a company's profit divided by its number of shares. If a company that earned two million dollars in one year had two million shares of stock outstanding, its EPS would be one dollar per share. The company often uses a weighted average of shares outstanding over the reporting term.
e-Business Software: Commerce software for both business-to-consumer and business-to-business commerce. It is usually designed to integrate with existing back office systems.
Inventory Raw materials. trademarks. Gross Margin The difference between net sales and the cost of sales during a particular time frame (such as a month.and web-based technologies.e-Commerce Also called electronic commerce. regulators. Also. Average Collection Period. Exit strategy Detailed plans for how the business will be sold. An industry analysis is often part of a business plan. items available for sale or being made ready for sale. Securities offered in an IPO are often. The melding of traditional fast business processes with internet . Economic Order Quantity An order quantity of inventory that helps minimize the costs of holding inventory and being out -of-stock. A group of companies supplying related products and services and the businesses that support them. creating a complete. auditors. the ownership interest in a company or personal property. or how venture capital providers will be paid back their investment. Market for shares of stock in corporations. such as land. The efficiency ratios include Inventory Turnover. Fixed Assets A firm’s long-term assets. Equity Market It is also called stock market. Income Statement The financial statement that shows revenues and expenses of the business during a particular time frame (such as a month. Industry The category describing a company’s primary business activity. Industry Analysis A study that defines a group of businesses that supply related products and services. This includes balance sheet. that reveal the entire financial picture of a company. how the current owners will dispose of their equity. This is usually determined by the largest portion of revenue. and Total Asset Turnover. quarter. Equity Capital Ownership funds that represent a proportionate claim on the firm’s cash flows and profits as well as a proportionate voting right. or year). and investors. small companies seeking outside equity capital and a public market for their stock. They can be individually valued by 105 . 
Gross Profit Percentage Gross margin divided by net sales. but not always. buildings. Investors purchasing stock in IPOs generally must be prepared to accept very large risks for the possibility of large gains. and equipment. Initial Public Offering A company’s first sale of stock to the public. or copyrights). The value of the common stockholders’ equity in a company as listed on the balance sheet. fast. Intellectual Property Intangible property that is the result of creativity (such as patents. quarter or year). Efficiency Ratios Indications of how well management is utilizing resources. IPOs by investment companies (closed end funds) usually contain underwriting fees that represent a load to buyers. Financial Statements Collection of financial reports. those of young. and seamless business system. income statement. and Statement of Cash Flows. The franchisee usually operates the franchised operation. Franchisee An individual or group who purchases a franchisee. required by bankers. Equity Assets less liabilities.
ISP Internet Service Provider. The leverage ratios include debt ratio. and e-newsletters that people have requested (subscribed to. Organizing The grouping and allocating of resources and activities to accomplish a goal efficiently and effectively. Partnership Agreement A legal agreement between partners that specifies the role. or intermediary between the issuer of securities (shares of stock. The limited partner’s liability is limited to his or her investment in the firm. LIFO or other techniques. For security firms. Market Share The percentage of your customers that represent the total number of customers available in the market. price. The liquidity ratios include the Current Ratio and the Acid -Test Ratio. agent. Marketing Plan Plan to achieve the company’s target markets and motivating the customers to purchase the products and services. such as special announcements. involvement. Outsourcing Contacting with outside vendors to perform tasks or produce goods and services. and promotion orchestrated to reach the target market (group of customers). duties.several different means. bonds. billboard. A company that sells access to the Internet to users. securities bought and held by a broker or dealer for resale. or mailed advertising. Market Segmentation Divide into small sub-markets the target market that you have identified as your customer base. Investment Banker A company that acts as an underwriter. printed. promotions. Planning A function of management. Mass Media Contact potential customers through broadcast. Limited Partner A co-owner with a special type of partnership. Opt-in Email Email. Limited Liability Company Hybrid form of business. Market Analysis A study of the basic factors that define the market for a company’s products or services and how the firm must be positioned to reach this market. distribution. Marketing Mix The combination of product or service. Deciding and documenting 106 . 
Tax benefits of a partnership and the liability protection of a corporation are provided. A vision from the management team of what the company should be and whom it should serve. and times interest earned. Leverage Ratios Indicates the degree to which debt is used to finance a company. The lower value of alternatives is usually used to preclude overstating earnings and assets. and responsibilities of each. debt-toequity ratio. Partnership A legal form of business that is an association of two or more persons who act as co-owners of the business. Liquidity Ratios It is the information that determines a company’s ability to pay its bills. and collectively by FIFO. or commercial paper) and the investing public. coupons. as opposed to junk or spam email). Mission A documentation of the basic purpose of the company. including cost or current market value. The limited partner has no voice in management.
publicity. This includes advertising. 107 . R&D Costs Research and development costs. and activities designed to assure a certain level of quality in the production and delivery of products and services. advertising. such as charitable activities and environmental policies. and other promotional activities to inform potential customers about your products and services. Promotional Strategy The way you plan to use personal selling. and humans). a federal agency that administers statutes and rules designed to promote full public disclosure. Positioning To develop a certain image of your products and services in the mind of the consumer. Scenario Alternative assumption or assumptions and their impact on your business (and financial) plan. machinery. Resources The assets used in a business to add or create value in a product and service. and public relations. and fixtures. Promotion Techniques used to communicate information about the product and service to the potential customer. Sales Forecast An estimate based on assumptions about the future units sold during some specific period. Profit and Loss A synonym for income statement Profitability Ratios Indications of the types of returns or yields generated for the owners of the company. procedures. public relations. Ratio Analysis A collection of financial ratios (one financial statement item divided by another) that reveal the financial strengths and weaknesses of a company. S Corporation A business organized under subchapter S of the Internal Revenue Code to be treated as a partnership for income tax purposes. personal selling. Public Relations These are company activities that involve the public. buildings. This includes tangible assets (materials) and intangible assets (money. Pro forma Balance Sheet A projected balance sheet used in financial planning and business plans. Production Activities that produce goods and services. Quality Control Policies. These assets usually give the firm productive capacity. 
Plant Long-term assets include land. It also protects the investing public against malpractice in the securities market. Return on Investment A financial ratio that reveals the percentage of income produced from each dollar invested in the company’s assets.tasks the company will complete to meet goals and objectives and the company’s mission statement. information. SEC Stands for the Securities and Exchange Commission. Pro forma Income Statement A projected income statement used in financial and business planning.
and hits experienced by a particular website during a certain time period. Target Market A group of potential customers identified by the business as having common characteristics. Start-Up Capital Financing needed to launch a business. and other corporations to invest in high growth/high risk start-up companies or high growth enterprises. Venture Capital A source of financing for start-up companies or companies embarking on a new venture. repeat visitors. Stockholders Owners of the corporation. A SWOT analysis is often included in the Industry Analysis section of a business plan. SIC System Stands for Standard Industrial Classification system. Opportunities. and Threats. SWOT An acronym for Strengths. purchases. a uniform number-coding and verbal-description system that classifies businesses according to their activities. Weaknesses. Sole Proprietorship A business owned and usually operated by one person. Working Capital Current assets expected to turn over or convert to cash within a short time. Supply Chain Management Taking an active role in working with suppliers and other participants in the supply chain to improve products and processes. and brokers that combine to provide a good or service. Venture Capital Fund A firm that raises large sums of money from wealthy investors. Venture Capitalists Financiers who provide equity financing to ventures that promise above normal growth in sales and profits and relatively high return on capital. 108 . transportation firms. Start-Up Costs Costs incurred or projected to start a business. banks. page views. Companies financed via venture capital often carry relatively high risks.SEC Filings Financial statements and other information filed with the Securities and Exchange Commission. Website Statistics Information and numbers that reveal the number of visitors. Supply Chain Network of suppliers.
The Templates

The Business Plans Template Pack includes several helpful templates and instructional files to help you with a complete professional business plan. Use these materials as helpful references and tools, but do not rely on them for everything. Use your local library, the Internet, and the government for information to help make the best possible business plan for your situation.

A few helpful hints for editing your business plan documents

The main template in this template pack is the Business Plan (Business Plan.dot). It is a Word document that contains all the appropriate information. You might want to include data from other programs such as Excel.

Inserting Excel workbooks and sheets

This section gives you three ways to insert Excel workbooks into your Business Plan Word document. First, here is a tip to get the best look for your workbook selection: if you want to insert Excel sheets into your business plan (in Word), you can improve the appearance by turning off Formula Marks. Here is how to do it:
1. While the sheet is still in Excel, click Tools in the menu bar, and select Settings. The Options dialog box will appear.
2. Click Display in the Options dialog box. Several check boxes will appear to the right.
3. Clear the check box in Show Formula Markers (see figure below):
Here are three methods for inserting Excel sheets into your business plan document:

Manual
The manual method is an easy way to bypass the complexity of combining various types of files together. Get your business plan file ready separately from your workbook files, and print them individually. Then you can combine them once they are printed.

Cut and paste
The cut and paste method allows you to have all of your information within a single Word document:
1. Open the Word document and the Excel workbooks at the same time.
2. Switch to Excel, and select the sheet you wish to copy.
3. Starting at row 2, highlight the portion of the sheet you wish to copy, and include an extra blank row at the bottom.
4. Turn off highlighting in the sheet by clicking the Highlight button at the top.
5. Copy the highlighted area (click Edit and Copy or press Ctrl-C).
6. Switch to Word, and locate the page where you want to paste the sheet image.
7. Delete the placeholder image (if you are replacing one) by selecting it and pressing DELETE.
8. Click the place where you want the sheet, and paste it using the Paste Special feature: click Edit and Paste Special. A dialog box will appear (see the figure below).
9. Select the Paste radio button at the left, and select Picture (Metafile) near the center. The figure shows the Paste Special dialog box with the Paste radio button and Picture (Metafile) selected.
10. Click OK. The sheet will appear as an image object at the location you selected. You can re-size the image to fit the page.
Note that you must repeat this process to update the sheet if you edit the original in Excel.

Creating a link
You can also add an Excel sheet to your Word document by creating a link to it. This method allows you to update the original in Excel, and it will automatically be updated in the Word document. Be sure to save the workbooks before you start this process. Here is how to link a workbook sheet:
1. Open the Word document and the Excel workbooks at the same time.
2. Switch to Excel and select the sheet you wish to copy.
3. Starting at row 2, highlight the portion of the sheet you wish to copy, and include an extra blank row at the bottom.
4. Turn off highlighting in the sheet by clicking the Highlight button at the top of the sheet.
5. Copy the highlighted area (click Edit and Copy or press Ctrl-C).
6. Switch to Word, and locate the page where you want to paste the sheet image.
7. Delete the placeholder image (if you are replacing one) by selecting it and pressing DELETE.
8. Click the place where you want the sheet, and paste it using the Paste Special feature: click Edit and Paste Special. A dialog box will appear (see the figure below). The figure shows the Paste Special dialog box with the Paste Link radio button and Excel 12 Workbooks selected.
9. Click OK. The workbook will open in your Word document. This is actually the original workbook file.

Caution: you must be careful to avoid renaming your workbook file. If you rename it, it will no longer appear in your Word document.
About Workbooks macros

When you are working with workbook templates, avoid changing the names of the sheets. The sheet names are referenced in the macros, and changing the names can disable the macros.
A List of Templates

Here is a list of the Business Plans Template Pack templates:

Business plan materials (Core folder)

Business Plan (Business Plan.dot): This template covers a complete business plan. Be sure to add sections or leave out sections depending on what is relevant to your business. Also, be sure to use the other templates (described below) to help complete your business plan.

Start-up Capital (Start-up Capital.xlt): This template provides a method to figure the amount of capital required to start your business.

Sales Forecast (Sales Forecast.xlt): This template helps you to create important data required for your business plan. You can also use it for financial projections, and it allows you to make contingency plans and speculative forecasts.

Integrated Financials (Integrated Financials.xlt): Use it to help with your Income Statement, Balance Sheet, and Financial Ratios.

Business Plan presentation (Business Plan.pot): This template gives you a way to present your business plan as a slide set, helping you to put the ideas outlined in your plan up for discussion. Even if you are not using this template for a formal presentation, you should fill it out to help organize your ideas.

Reference and other documents (Extra folder)

Break-Even Analysis (Break-Even Analysis.xlt): This template helps to calculate your break-even point and provides some advanced formulas to supplement your Integrated Financials. This can be an important part of a good business plan because investors want to know when you think you will start earning a profit.

Sample business plans (Sample Plans folder)

Service Plan (Service Plan.pdf): This file is a complete sample business plan for a service business. Use it for formatting and for ideas on filling in your business plan.

Restaurant Plan (Restaurant Plan.pdf): This file is a complete sample business plan for a restaurant.

Manufacturer Plan (Manufacturer Plan.pdf): This file is a complete sample business plan for a manufacturing business.

Brick and Mortar to Internet Plan (Brick and Mortar to Internet Plan.pdf): This file is a complete sample business plan for an online business.
Franchisee Plan (Franchisee Plan.pdf): This file is a complete sample business plan for a retail business that is looking for a franchise opportunity.

Distributor Plan (Distributor Plan.pdf): This file is a complete sample business plan for a company involved in mail-order merchandising of fine quality spices to the restaurant industry. The company wishes to raise $1.5 million for expansion.

Change of Business Plan (Change of Business Plan.pdf): This file is a complete sample business plan for a travel agency that is expanding its operations.
About the Developer

KMT Software, Inc. is the developer of OfficeReady Business Plans. Located in Cambridge, MA, KMT Software has been the leading provider of Office templates and productivity tools for home and small business for over a decade. KMT also provides corporate consulting and custom development to some of the most respected companies in technology. KMT is also the developer of High Impact eMail, a low-cost tool for small business users who want to leverage the amazing power of email. For more information, visit their retail software site, www.templatezone.com.
https://www.scribd.com/doc/135047692/Tips-for-Writing-a-Business-Plan
/* pathexp.h -- The shell interface to the globbing library. */

/* Copyright (C) 1987-2007 */

#if !defined (_PATHEXP_H_)
#define _PATHEXP_H_

#if defined (USE_POSIX_GLOB_LIBRARY)
#  define GLOB_FAILED(glist)	!(glist)
#else /* !USE_POSIX_GLOB_LIBRARY */
#  define GLOB_FAILED(glist)	(glist) == (char **)&glob_error_return
extern int noglob_dot_filenames;
extern char *glob_error_return;
#endif /* !USE_POSIX_GLOB_LIBRARY */

/* Flag values for quote_string_for_globbing */
#define QGLOB_CVTNULL	0x01	/* convert QUOTED_NULL strings to '\0' */
#define QGLOB_FILENAME	0x02	/* do correct quoting for matching filenames */
#define QGLOB_REGEXP	0x04	/* quote an ERE for regcomp/regexec */

#if defined (EXTENDED_GLOB)
/* Flags to OR with other flag args to strmatch() to enabled the extended
   pattern matching. */
#  define FNMATCH_EXTFLAG	(extended_glob ? FNM_EXTMATCH : 0)
#else
#  define FNMATCH_EXTFLAG	0
#endif /* !EXTENDED_GLOB */

#define FNMATCH_IGNCASE	(match_ignore_case ? FNM_CASEFOLD : 0)

extern int glob_dot_filenames;
extern int extended_glob;
extern int match_ignore_case;	/* doesn't really belong here */

extern int unquoted_glob_pattern_p __P((char *));

/* PATHNAME can contain characters prefixed by CTLESC; this indicates
   that the character is to be quoted.  We quote it here in the style
   that the glob library recognizes.  If flags includes QGLOB_CVTNULL,
   we change quoted null strings (pathname[0] == CTLNUL) into empty
   strings (pathname[0] == 0).  If this is called after quote removal
   is performed, (flags & QGLOB_CVTNULL) should be 0; if called when
   quote removal has not been done (for example, before attempting to
   match a pattern while executing a case statement), flags should
   include QGLOB_CVTNULL.  If flags includes QGLOB_FILENAME,
   appropriate quoting to match a filename should be performed. */
extern char *quote_string_for_globbing __P((const char *, int));
extern char *quote_globbing_chars __P((char *));

/* Call the glob library to do globbing on PATHNAME. */
extern char **shell_glob_filename __P((const char *));

/* Filename completion ignore.  Used to implement the "fignore" facility of
   tcsh and GLOBIGNORE (like ksh-93 FIGNORE).

   It is passed a NULL-terminated array of (char *)'s that must be
   free()'d if they are deleted.

   The first element (names[0]) is the least-common-denominator string
   of the matching patterns (i.e. u<TAB> produces names[0] = "und",
   names[1] = "under.c", names[2] = "undun.c", name[3] = NULL). */
struct ign {
  char *val;
  int len, flags;
};

typedef int sh_iv_item_func_t __P((struct ign *));

struct ignorevar {
  char *varname;		/* FIGNORE or GLOBIGNORE */
  struct ign *ignores;		/* Store the ignore strings here */
  int num_ignores;		/* How many are there? */
  char *last_ignoreval;		/* Last value of variable - cached for speed */
  sh_iv_item_func_t *item_func;	/* Called when each item is parsed from $`varname' */
};

extern void setup_ignore_patterns __P((struct ignorevar *));
extern void setup_glob_ignore __P((char *));
extern int should_ignore_glob_matches __P((void));
extern void ignore_glob_matches __P((char **));

#endif
http://opensource.apple.com//source/bash/bash-86.1/bash-3.2/pathexp.h
BlackBerry 10 - Sharing using NfcShareManager
Introduction
This article is part of a series intended to help developers wishing to exploit Near Field Communication (NFC) in their BlackBerry® 10 applications. Readers of this article should have some pre-existing knowledge of the fundamental architecture of NFC systems and be familiar with C++. It would be valuable to have read the earlier BlackBerry 10 articles in this series since they lay the foundations for the later ones such as this one.
This article will explain how to use the BlackBerry 10 C++ native APIs to develop software, which can transmit data from one NFC-enabled BlackBerry device to another using the NFC Sharing APIs.
The Authors
This article was co-authored by Martin Woolley and John Murray both of whom work in the RIM Developer Relations team. Both Martin and John specialize in NFC applications development (amongst other things).
Sharing Information
We got really excited when we discovered the APIs we’re going to describe in this article! Sharing information in a simple and intuitive way is one of the fundamental use-cases of NFC, and the BlackBerry 10 NFC Sharing APIs make that easy to integrate into your application.
The first thing to think about is what sort of information you might want to share easily between devices. We had a think about this and came up with three simple examples that should cover the most common cases:
- Information that could be identified by a URL. The URL itself would be shared and interpreted by the recipient device.
- Information that could be described by a MIME type. So, for example, a string of plain text described by the MIME type “text/plain”, or a string of HTML identified by the MIME type “text/html”.
- And, since Martin is a really good photographer as well as being an NFC guru, we decided to share images and photographs stored on one device with another device.
Now, this becomes interesting: items (1) and (2) generally involve small amounts of data, whilst (3) generally involves large objects. NFC is great when all you want to do is transfer small volumes of data between devices. However, when you want to transfer several megabytes of data you run into a challenging user experience; the nature of NFC is that the devices would have to be held in close proximity for an extended time, increasing the possibility of the devices misaligning and the transfer failing. Also, NFC is relatively slow compared to Wi-Fi® or Bluetooth®.
So, to have a seamless and pleasant user experience, item (3) would have to be done in two stages:
- Use NFC to initiate the negotiation of a Bluetooth connection between the two devices.
- And, transfer the bulk data of the images via that Bluetooth connection.
This would mean that the devices need only be in close proximity whilst negotiating the Bluetooth link, and they could then be separated once the Bluetooth link has been established.
This is where the NFC Forum standards come to our aid. One of the Peer to Peer (P2P) standards is called Connection Handover and does just what we wanted to do in this case. It’s also implemented on both BlackBerry 10 and BlackBerry 7.1 so that we could demonstrate this ability between BlackBerry 10 and BlackBerry 7.1 devices.
The Application
The application user interface is pretty simple. It consists of a “TabbedPane” Cascades™ object with three pages, one for each of the use cases. You can see the general layout in the three images in Figure 1.
Each page has a pair of buttons that switches data sharing on or off. In the case of sharing data and a URL, the actual data to be shared can be entered directly from the page itself. In the case of sharing image files, it was easier to allow the user to select, via checkboxes, one or more of three images that were placed into the “assets” folder of the application rather than having to supply a path to a file.
Figure 1 - The three use cases
The images shown in Figure 1 come from our Twitter® pages: John ( @jcmrim), Martin (@mdwrim), and our colleague in Canada, Rob (@robbieDubya). The others, which you can see by scrolling down, were taken by Martin.
As an interesting and instructive aside, Cascades does not have a built-in checkbox with an associated image, but one of the really nice things about Cascades is that it is easy to create one by aggregating “CheckBox” and “ImageView” as shown in Figure 2 below.
import bb.cascades 1.0

Container {
    id: myCheckBox
    property string message
    property string file
    property string thumb
    property alias checked: fileBox.checked
    signal myCheckChanged(bool checked)
    layout: StackLayout {
        orientation: LayoutOrientation.LeftToRight
    }
    bottomMargin: 10
    CheckBox {
        verticalAlignment: VerticalAlignment.Center
        id: fileBox
        onCheckedChanged: {
            myCheckBox.myCheckChanged(checked);
        }
    }
    ImageView {
        imageSource: "asset:///files/" + myCheckBox.thumb
        preferredWidth: 625
        onTouch: {
            if (event.isUp()) {
                checked = checked ? false : true;
            }
        }
    }
}
Figure 2 - Aggregating CheckBox with a clickable image
The way that the application works is that entering information into any of the text fields, selecting one or more images, or clicking the enable or disable buttons results in one or more SIGNAL()s (see Figure 3) being emitted from the QML pages; these are connected to three key C/C++ functions in the application:
- dataShareContentChanged(...)
- urlShareContentChanged(...)
- fileShareContentChanged(...)
_nfcShareManager = new NfcShareManager();
...
QObject::connect(
    _shareDataPage, SIGNAL(updateMessage(QString, QString)),
    this, SLOT(dataShareContentChanged(QString, QString)));
...
QObject::connect(
    _shareFilePage, SIGNAL(updatedFileList(QString)),
    this, SLOT(fileShareContentChanged(QString)));
...
QObject::connect(
    _shareUrlPage, SIGNAL(updateUrl(QString)),
    this, SLOT(urlShareContentChanged(QString)));
...
QObject::connect(
    this, SIGNAL(setShareMode(bb::system::NfcShareMode::Type)),
    _nfcShareManager, SLOT(setShareMode(bb::system::NfcShareMode::Type)));
Figure 3 - Connecting the QML events to the sharing actions in C/C++
Let’s look at each case in turn.
Sharing a Data Item
It’s really quite simple to share a data item with an associated MIME type. All you need to do is build a request and pass it to the setShareContent() method of the NfcShareManager instance (which we obtained as shown in Figure 3). That’s all there is to it. It’s all there in Figure 4.
void NfcSharing::dataShareContentChanged(QString message, QString dataType) {
    NfcShareDataContent request;
    QByteArray data(message.toUtf8());
    QUrl url;
    request.setUri(url);
    request.setMimeType(dataType);
    request.setData(data);
    NfcShareSetContentError::Type rc = _nfcShareManager->setShareContent(request);
}
Figure 4 - Sharing a data Item
Sharing a URL is equally easy and is shown in Figure 5.
void NfcSharing::urlShareContentChanged(QString urlString) {
    NfcShareDataContent request;
    QUrl url(urlString);
    request.setUri(url);
    NfcShareSetContentError::Type rc = _nfcShareManager->setShareContent(request);
}
Figure 5 - Sharing a URL item
You might think that sharing a list of image files and having a Bluetooth connection negotiated to have them transferred would be more complex. It isn’t! The whole process is shown in Figure 6.
void NfcSharing::fileShareContentChanged(QString paths) {
    NfcShareFilesContent request;
    QList<QUrl> urls;
    QDir dir;
    QStringList list = paths.split(",");
    QString publicPath(dir.currentPath().append("/app/public/"));
    for (int i = 0; i < list.size(); ++i) {
        QUrl url(QString("file://").append(publicPath).append(list.at(i)));
        urls.append(url);
    }
    request.setFileUrls(urls);
    NfcShareSetContentError::Type rc = _nfcShareManager->setShareContent(request);
}
Figure 6 - Sharing a list of files
It’s easy isn’t it? There is one point where care needs to be taken though. The actual heavy lifting of using NFC to negotiate the Bluetooth connection and of the subsequent transfer of the files to the other device is performed by another system application called the NFC Share Adapter. The NFC Share Adapter needs to have read access to the files to be transferred and so care must be taken to ensure that they are accessible in terms of file access permissions.
Since the image files we’re using are located in the Assets folder in the project, they need to be flagged as “public” as shown in Figure 7, otherwise they will not be accessible.
Figure 7 - Ensure that image files are shared as public
So, we’ve set up the mechanics for sharing these three types of data. However, how do we know whether a particular object has been shared successfully? There is still one set of SIGNAL()s to connect and these are with the NfcShareManager instance itself as shown in Figure 8.
...
QObject::connect(
    _nfcShareManager, SIGNAL(shareModeChanged(bb::system::NfcShareMode::Type)),
    this, SLOT(shareModeChanged(bb::system::NfcShareMode::Type)));
QObject::connect(
    _nfcShareManager, SIGNAL(finished(bb::system::NfcShareSuccess::Type)),
    this, SLOT(finished(bb::system::NfcShareSuccess::Type)));
QObject::connect(
    _nfcShareManager, SIGNAL(error(bb::system::NfcShareError::Type)),
    this, SLOT(error(bb::system::NfcShareError::Type)));
...
QObject::connect(
    this, SIGNAL(setShareMode(bb::system::NfcShareMode::Type)),
    _nfcShareManager, SLOT(setShareMode(bb::system::NfcShareMode::Type)));
Figure 8 - Connections to the NfcShareManager
Two of the connections are used to notify our application as to the success or failure of a sharing operation (“finished” and “error” respectively). “shareModeChanged” is used to notify our application of a change in the sharing mode type (Data or File). “setShareMode” is used by our application to establish the sharing mode type that the NfcShareManager is to use on our behalf (Data or File).
Sharing in Action
So, let’s see the application in action. Since working with images is more interesting, let’s look at the steps that take place for this case. Figure 9 shows a data transfer request, with the selection made on a BlackBerry 10 device on the left, and the notification on a BlackBerry 7.1 device on the right. The BlackBerry 7.1 device is configured to ensure that approval must be given to allow the transfer to take place.
Notice that there doesn’t need to be an application running on the target device. The NFC software on the target device, whether it’s a BlackBerry 10 or BlackBerry 7.1 device, is smart enough to recognise the MIME type of the data being transferred and to launch the most appropriate application to handle it.
Figure 9 - Initiate sharing and accept the transfer
Once approval has been given, the transfer starts to take place over a Bluetooth connection that has been established. Figure 10 shows the transfer of an image in progress on a BlackBerry 7.1 device.
Figure 10 - Receiving the data
Once the BlackBerry 7.1 device has successfully received the file, a confirmation dialogue is displayed as shown in Figure 11, and just for completeness, the image that we transferred is also shown.
Figure 11 - Completion and transferred image at the BlackBerry 7.1 device end
Summary
Wasn’t that easy? What we’ve learned is that it’s simple to integrate data and file transfer operations into your BlackBerry 10 application. I’m sure you can think of many other applications for this ability.
Where can I find the code?
The full source code of this application is available from our GitHub repositories here:
The NfcSharing application was written for the BlackBerry 10 “Dev Alpha” device and requires the following versions of the NDK and device software to build and run:
- BlackBerry® 10 Native SDK 10.0.9
- BlackBerry Dev Alpha Device Software 10.0.9 (Beta 4)
You can find details of other NFC related articles and sample applications written by Martin and John at:
You can contact Martin or John either through the BlackBerry support forums or through Twitter:
If you like this article and code sample and have found it useful then drop us a Tweet (hash tag: #nfcguys).
|
https://supportforums.blackberry.com/t5/tkb/articleprintpage/tkb-id/Cascades@tkb/article-id/58
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Ralf S. Engelschall wrote:
> - Prepare the rename.cf file for APX_=ap_ because as we've seen in the
> debates the what-is-API-and-what-is-not problematic cannot be and
> shouldn't be solved _THIS TIME_. Instead we concentrate on just solving
> the HIDE veto by doing the general ap_-renaming for clean namespaces.
> The API-decision can be done via API-dict.html for 1.3 and later (1.3.1
> or 2.0) perhaps with a renaming. But at _THIS TIME_ we now only
> concentrate on the clean namespace because this is the only way we have
> consensus. [Ralf]
I still object to reusing the already assigned ap_ prefix for an
incompatible purpose. Why do it? Just choose something else. At the very
least, rename existing ap_ functions.
|
http://mail-archives.apache.org/mod_mbox/httpd-dev/199804.mbox/%[email protected]%3E
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
We pay for user submitted tutorials and articles that we publish. Anyone can send in a contribution.

I figured it was just one of those Visual Studio glitches… so I pressed F5 and was proved wrong after getting the following runtime error:
Compiler Error Message: BC30560: ‘ScriptManager’ is ambiguous in the namespace ‘System.Web.UI’
After doing some checking I actually found the following error in the output:
error CS0433: The type ‘System.Web.UI.ScriptManager’ exists in both ‘c:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.5.0.0__31bf3856ad364e35\System.Web.Extensions.dll’
and ‘c:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.6.0.0__31bf3856ad364e35\System.Web.Extensions.dll’
Well, that was the end, as it turns out I had installed both the Ajax that comes with the .NET 3.5 and the Ajax that comes with ASP.NET 3.5 Extensions, and I had referenced them both! All I needed to do was to remove one of the references and that’s it.
So if you have both installed, make sure you reference only one of them throughout your project.
Amit.
Tags: .Net, 3.5 Extensions, AJAX, ASP.Net, Debug, Error BC30560, Error CS0433, ScriptManager, ScriptManager Proxy
Breeze : Designed by Amit Raz and Nitzan Kupererd
Srikanth
Said on April 16, 2008 :
Hi Sir,
I am also getting the same problem. Where can I remove these references? Can you provide more information on this?
Thanks,
Srikanth
Amit
Said on April 16, 2008 :
Hi Srikanth
It can happen from various reasons.
You should check your GAC (C:\WINDOWS\assembly) to see how many installations of System.Web.Extensions you have, and what their versions are. Check it and get back to me; I will try to help.
Amit
Srikanth
Said on April 16, 2008 :
Hi amit,
I have System.Web.Extensions versions 3.5.0.0, 1.0.61025.0, and 3.6.0.0.
Actually I have installed both Ajax 3.5 as well as the ASP.NET 3.5 Extensions controls.
Please tell me the solution to my problem.
Amit
Said on April 16, 2008 :
That's the problem.
They are both related to the same framework, so Visual Studio gets confused between them. You should remove one of them.
I recommend staying with the 3.5.0.0 one; that is what I am using and it works fine, though I think either of them will be OK.
Amit
srikanth
Said on April 16, 2008 :
How can I remove the 3.6.0.0 version? Can I remove the whole ASP.NET 3.5 Extensions tool?
Amit
Said on April 16, 2008 :
Use this application:
it is for registering and removing dlls from the GAC
Amit
Fares
Said on April 28, 2008 :
Try modifying the web.config of the application,
change the “System.Web.Extensions” assembly version from “3.6.0.0” to “3.5.0.0” or vice versa:
The key will look like the following:
<add assembly=”System.Web.Extensions, Version=3.5.0.0, …etc
OR
<add assembly=”System.Web.Extensions, Version=3.6.0.0, …etc
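Putting that suggestion together, the relevant fragment of web.config ends up looking roughly like this sketch (surrounding elements abbreviated; keep exactly one of the two `<add>` lines, with the full culture and public key token that are quoted later in this thread):

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- keep only ONE version of System.Web.Extensions -->
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```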
Roberta
Said on June 5, 2008 :
I am not sure how to use this GAC to remove the extra versions. Where do I find the Gacutil.exe? I did a search on C: and did not locate it.
Shahar Y
Said on June 5, 2008 :
Hi Roberta,
It is usually located in C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin, but you may have installed Visual Studio in a different folder. So, find the anydir:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin
Roberta
Said on June 5, 2008 :
thanks, I found the file, but now how do I uninstall the extra versions? I tried through DOS but that did not work
Shahar Y
Said on June 5, 2008 :
Roberta,
You need to drag the gacutil.exe file into the command shell (cmd) and use the options you need.
You can read about the available options here:
Roberta
Said on June 5, 2008 :
Hi Shahar Y
I did that and this is what I typed in
C:\>”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe”system.
web.extensions.dll, version=1.061025.0,culture=”natural,PublicKeytoken=31bf3856a
d364e35
and this is what I got
‘”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe”system.web
.extensions.dll’ is not recognized as an internal or external command,
operable program or batch file.
Roberta
Said on June 5, 2008 :
I am not sure if this makes a diff or not, but when I run gacutil.exe /l the system.web.extensions does not show up, but when I go to C:\windows\assembly there are two there.
Shahar Y
Said on June 5, 2008 :
Roberta,
1) I see that you forgot to add a space between the gacutil.exe and your assembly name.
2) You need to use some flag. If you want to uninstall an assembly from the global assembly cache, you need to write – ”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe” /u yourAssemblyName
Shahar Y
Said on June 5, 2008 :
Hi Roberta,
About the fact that you can’t find the system.web.extensions assembly using the /l option – try to run “gacutil.exe /l system.web.extensions” and see if it can be found.
Roberta
Said on June 5, 2008 :
Thanks I did that:
C:\>gacutil.exe /u “system.web.extensions, version=1.0.61025.0,culture=”natural”
,PublicKeytoken=31bf3856ad364e35
this is what I get
C:\>gacutil.exe /u “system.web.extensions, version=1.0.61025.0,culture=”natural”
,PublicKeytoken=31bf3856ad364e35
is there any other way to remove this?
Roberta
Said on June 5, 2008 :
sorry this is what I get
Microsoft (R) .NET Global Assembly Cache Utility. Version 1.0.3705.0
No assemblies found that match: system.web.extensions, version=1.0.61025.0,cultu
re=natural,PublicKeytoken=31bf3856ad364e35
Number of items uninstalled = 0
Number of failures = 0
Roberta
Said on June 5, 2008 :
it is not listed in the /l at all
Roberta
Said on June 5, 2008 :
this is what I get
C:\>gacutil.exe /l “system.web.extensions
Microsoft (R) .NET Global Assembly Cache Utility. Version 1.0.3705.0
The Global Assembly Cache contains the following assemblies:
The cache of ngen files contains the following entries:
Number of items = 0
Shahar Y
Said on June 5, 2008 :
Roberta,
1) You wrote natural instead of neutral.
2) Not sure if it matters, but you didn’t use spaces and capital letters.
Try to write it like that: system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
Roberta
Said on June 5, 2008 :
ok thanks:~) I rewrote it and now it is telling me
invalid file or assembly name
that I need a .dll or .exe
Just to make sure I am getting the assembly name from C:\WINDOWS\assembly is that right?
Shahar Y
Said on June 5, 2008 :
Roberta,
If ”C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil.exe” /u system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 doesn’t work, it is weird and sorry but I have no ideas left…
Roberta
Said on June 5, 2008 :
ok thanks for all your help. :~)
Jeff
Said on January 25, 2009 :
Thanks so much! Stupid VWD 2008 adds the 3.5 reference automatically. Good old Microsoft
Thanks again!
omyfish
Said on June 26, 2009 :
This work for me.
gacutil.exe /u “system.web.extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35″
H Selik
Said on February 11, 2010 :
Maybe there is a different solution. Our web project was developed on version 2.0; then we installed the later versions, 3.0 and 3.5, on the machine.
The project needed further development. When I drag-dropped a ScriptManager onto any page of the project, it started to fall into the error: “‘ScriptManager’ is ambiguous in the namespace ‘System.Web.UI’”
I could use this solution, but we have projects on all of the versions. It could be dangerous for the other projects.
Then I found out there were two lines in the web.config that contained different version information:
“<add assembly=”System.Web.Extensions, Version=1.0.61025.0″ and “<add assembly=”System.Web.Extensions, Version=3.5.0.0″
I removed the latter version line. But still the error…
Then I found the same lines in the aspx at the top of the pages.
Some of them had a different “Version=3.5.0.0″. I changed them to the correct one. It is working now.
indianbill
Said on March 13, 2010 :
This happened to me after I already had a working site.
The problem seems to be duplicate extensions: AJAX and .NET.
Somehow, MS VS added an assembly definition line to my web.config file automatically while I was coding… not sure how; it must have been something I did while editing the page.
Removing this line from my web.config file fixed it for me.
The ajax extension i’m using is…
indianbill
Said on March 13, 2010 :
Keep this line…
add assembly=”System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35″
Remove this one…
add assembly=”System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35″
|
http://www.dev102.com/2008/03/21/ajax-scriptmanager-error-bc30560/
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Results 1 to 5 of 5
- Join Date
- Aug 2005
- Location
- Sri Lanka
- 4
NEED HELP >> How Do I Sort a Structure Array???? C...
:?
- Join Date
- Oct 2004
- Location
- /dev/random
- 404
Sorry, no code...
But, this is what you can do....
What you can do is maintain an additional array as an indexing structure.
This array would hold the indices of the main structure array in the proper sorted order of priority.
What I'm trying here is to eliminate the overhead of the swapping of the struct elements in the main array - which would definitely outweigh the main algo overheads.
The only additional overhead now is that of the indirection from the index array into the main array. But sorting is now simplified and any algo that is optimized for sorting an array of integers can be used.
However, the most logical solution for a general print queue would be to use a doubly-linked list as a queue (FIFO) structure rather than an array.

The Unforgiven
Registered Linux User #358564
- Join Date
- Oct 2004
- 158
try qsort:
Code:
#include <stdlib.h>

#define MAXQ 100

int P_qCount = 0;

typedef struct {
    int requestNo;
    char ownerIP[40];
    char fileName[30];
    int fileSize;
    int priority;
} printQueue;

printQueue P_queue[MAXQ];

int compar(const void *A, const void *B)
{
    printQueue *a = (printQueue *)A;
    printQueue *b = (printQueue *)B;
    if (a->priority > b->priority) return 1;
    if (a->priority < b->priority) return -1;
    return 0;
}

printQueue *mysort(printQueue *p)
{
    qsort(p, P_qCount, sizeof(printQueue), compar);
    return p;
}
hmm... I think you might find the solution on this page:
under the section "Dont give us YOUR homework questions". Also, for additional reading, please read "Language" under "Forum signatures".

Regards Scienitca (registered user #335819 - )
--
A master is nothing more than a student who knows something of which he can teach to other students.
- Join Date
- Oct 2004
- Location
- /dev/random
- 404
Originally Posted by scientica
As far as I understand, his signature is nothing but his name in plain English - it's not an alien language....

The Unforgiven
Registered Linux User #358564
|
http://www.linuxforums.org/forum/programming-scripting/44260-need-help-how-do-i-sort-structure-array-c-language.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
This document is a user guide for a compiler contributed to xtc that implements the Jeannie programming language. The latest official version of this user guide is here:. This guide is also available in pdf format:.
The current Jeannie project members are Robert Grimm, Martin Hirzel, Byeoncheol “BK” Lee, and Kathryn McKinley.
We received helpful feedback from Joshua Auerbach, Rodric Rabbah, Gang Tan, David Ungar, and Jan Vitek. This guide describes the Jeannie compiler contributed to xtc, Version 1.13.3 (05/14/08).
Jeannie is a programming language that combines Java and C. It supports the full syntax of both languages, and adds a backtick (`) operator for nesting Java in C and nesting C in Java. You can use it to implement a feature of your Java application with an existing C library. In this case, you would write glue code that nests C in Java. Another common usage scenario is when you want to enhance your C application with some Java features, such as multi-threading, exception handling, or GUI controls. In this case, you would nest Java code in C.
The Jeannie language is implemented by a compiler contributed to xtc. That is the official name of the code that IBM has donated to the xtc compiler framework (). The compiler translates Jeannie code first into Java and C source code that uses the JNI, and then from there into class files for Java and a dynamically linked library for C. This user guide describes how to use the compiler and the language in practice. The research that went into Jeannie is described in a conference paper ().
This section describes how to install xtc, which includes the Jeannie compiler, and how to test that the installed Jeannie compiler runs correctly.
The Jeannie compiler uses Java Standard Edition version 5 or higher and several GNU command line tools, including gcc, bash, make, find, zip, and others. You need to make sure that the Java compiler, the JVM, and the GNU tools are installed and on your PATH. We have tested Jeannie with multiple Java virtual machines (IBM J9, Sun HotSpot, and Jikes RVM), and on multiple operating systems (Linux, Windows/Cygwin, and Mac OS X). Once you have downloaded xtc, you need to perform the following steps:
export PATH_SEP=':'
export JAVA_DEV_ROOT=local_install_dir/xtc
export PATH=$JAVA_DEV_ROOT/src/xtc/lang/jeannie:$PATH
export CLASSPATH=$JAVA_DEV_ROOT/bin/junit.jar$PATH_SEP$CLASSPATH
export CLASSPATH=$JAVA_DEV_ROOT/bin/antlr.jar$PATH_SEP$CLASSPATH
export CLASSPATH=$JAVA_DEV_ROOT/classes$PATH_SEP$CLASSPATH
make -C $JAVA_DEV_ROOT classes configure
The last step will use xtc/Makefile to compile and configure xtc along with the Jeannie compiler. You may see some warning messages related to Java generics, but the compilation should keep going and finish without any fatal error messages.
After completing the download and configuration step, try the following:
make -C $JAVA_DEV_ROOT check-jeannie
This first invokes a few hundred JUnit tests, each of which writes a dot (`.') to the console. Next, it invokes a few dozen integration tests, each of which writes a couple of lines to the console. Overall, the testing output should look like this:
java -ea junit.textui.TestRunner xtc.lang.jeannie.UnitTests
.........................................
.........................................
many more dots for unit tests
.........................................
.............
Time: 7.15

OK (874 tests)
make -C local_install_dir/xtc/fonda/jeannie_testsuite cleanall
find local_install_dir/xtc/fonda/jeannie_testsuite -name '*~' -exec rm -f \{\} \;
rm -f -r tmp
rm -f core.*.dmp javacore.*.txt
make -C /Users/hirzel/local_install_dir/xtc/fonda/jeannie_testsuite test
==== integration test_000 ====
diff tmp/000mangled/output.txt tmp/000sugared/output.txt
==== integration test_001 ====
diff tmp/001mangled/output.txt tmp/001sugared/output.txt
many more lines for integration tests
==== integration test_035 ====
diff tmp/035mangled/output.txt tmp/035sugared/output.txt
==== integration test_036 ====
diff tmp/036mangled/output.txt tmp/036sugared/output.txt
==== integration tests completed ====
By the time you read this, there may be more tests than shown above.
By the time you read this, there may be more tests than shown above. Two of the integration tests (18 and 26) write some timing numbers to the console, but as long as all tests end with diff and without finding any differences between the mangled and sugared output, everything went fine.
The following Jeannie program (integration test 041) has a Java main method that uses a nested C call to print "Hello, world!" to the console.
`.C {                                        // 1
#include <stdio.h>                           // 2
}                                            // 3
class Main {                                 // 4
  public static void main(String[] args) {   // 5
    `printf("Hello, world!\n");              // 6
  }                                          // 7
}                                            // 8
The file starts with a block of C code that includes a header file (Lines 1-3) containing, among other things, the prototype for the printf function. In general, the backtick symbol (`) toggles between the languages Java and C. It can be either qualified (like `.C on Line 1) or simple (like on Line 6). The example code defines a Java class Main with a Java method main (Lines 4 and 5). The body of the method (Line 6) contains a simple backtick to toggle from Java to C for the call printf("Hello, world!\n"). When used in an expression, the backtick is a unary prefix operator that affects the following subexpression.
To test this hello world program, you need to compile it as follows:
jeannie.sh Main.jni
The Jeannie compiler will generate a class file (Main.class), a shared library (on Linux: libMain.so; on Windows/Cygwin: Main.dll; on Mac OS X: libMain.jnilib), and several intermediate files. Before you can run the example, you need to tell the operating system where to find the shared library, by adding the directory to your PATH and LD_LIBRARY_PATH environment variables. Assuming the example code is in the current directory (.), you can run it as follows:
java -cp . -Djava.library.path=. Main
This should, of course, print "Hello, world!" to the console.
As with any complex piece of software, you may run into trouble when trying to use the Jeannie compiler. This section describes a few common issues and how to address them. We will keep updating this section as we encounter additional difficulties and their solutions.
If you cannot compile the Jeannie compiler at all, or if it does not run, you should double-check whether all the required tools are installed on your local machine. In particular, you need Java 1.5 or higher, and you need the GNU C compiler, see Requirements. To get the required tools on Windows, use Cygwin. To get the required tools on Mac OS, install XCode (that should be on one of the CDs that came with your Mac), and get the remaining tools from an open source site such as Fink. Next, try whether you can run the tests that come with Jeannie, see Testing the installation. Finally, double-check that you set your environment variables correctly, in particular, PATH, CLASSPATH, and LD_LIBRARY_PATH. If the Jeannie compiler throws an internal exception rather than producing a nice error message, that's a bug; please report it, along with a minimal test case that reproduces it.
If you get your Jeannie program to compile, but it crashes at runtime, the two most common symptoms are segmentation faults or dynamic linker errors.
A segmentation fault occurs when your program tries to access an illegal memory address. This is usually caused by null pointers or out of bounds accesses in C. To find the defect, you should start by rebuilding your program from scratch, to rule out problems caused by inconsistent incremental compilation. Next, you should run with a symbolic interactive debugger; see Debugging. If you find that the problem is in the Jeannie compiler (e.g., using an illegal method ID), please report the bug, along with a minimal test case that reproduces it.
A dynamic linker error occurs when your program can not find a function that should be in a shared object file (DLL on Windows). This problem is common in hand-written JNI code, but should not occur for Jeannie-generated JNI code. To find the defect, you should start by rebuilding your program from scratch, to rule out problems caused by inconsistent incremental compilation. Next, make sure the shared library is on the path, using the -Djava.library.path JVM command line option or the PATH and LD_LIBRARY_PATH environment variables. If it still does not work, you should inspect the code related to the missing symbol. To make Jeannie-generated code easier to read, use the -pretty compiler flag. If you believe that the problem is caused by Jeannie-generated code, please report the bug, along with a minimal test case that reproduces it. It is more likely that the problem is caused by other shared libraries that you link to. Consult your local linker guru, and use tools such as nm to investigate the symbols of your object files. You may need to specify the external DLL like any other file at the end of the compiler command line.
This chapter discusses Jeannie by example. Each section uses a short self-contained piece of code to illustrate one aspect of how to use the Jeannie language. If you read this on your computer screen instead of in a printed hardcopy, I recommend you read the html version, since it does not have the page breaks of the pdf version. The examples are designed so you can easily copy-and-paste them and try them out yourself. You should play with the examples, changing things here and there to see what happens.
The following example (integration test 039) illustrates the structure of a Jeannie file.
package cstdlib;                                                  // 1
import java.util.Random;                                          // 2
`.C {                                                             // 3
#include <math.h>                                                 // 4
}                                                                 // 5
class Math {                                                      // 6
  public static native double pow(double x, double y) `{          // 7
    return (`double)pow(`x, `y);                                  // 8
  }                                                               // 9
}                                                                 //10
public class Main {                                               //11
  public static void main(String[] args) {                        //12
    Random random = new Random(123);                              //13
    for (int i=0; i<3; i++) {                                     //14
      double d = 100.0 * random.nextDouble();                     //15
      double r = Math.pow(d, 1.0 / 3.0);                          //16
      System.out.println("d " + d + " r " + r + " ^3 " + r*r*r);  //17
    }                                                             //18
  }                                                               //19
}                                                                 //20
A Jeannie file starts like a regular Java file with an optional package declaration (Line 1) and imports (Line 2). These are followed by top-level C declarations enclosed in `.C{ } (Lines 3-5). They usually come from header files, as in the example, but you can also declare your own C functions and types in this section. The rest of the Jeannie file is structured like a regular Java file, with one (Line 6) or more (Line 11) top-level classes or interfaces. The example illustrates how you might write a wrapper for parts of the C standard library, hence the package is called cstdlib (Line 1).
In Jeannie, a native method has a body, which must be a block of C code (Line 7). Inside of the C code, you can use backticked C types (such as `double on Line 8) that are equivalent to the corresponding Java types (e.g., double). You can also use nested Java expressions, for example, to refer to Java variables and parameters (such as `x and `y on Line 8).
To build this example, run the Jeannie compiler like this:
(bash) jeannie.sh -lm cstdlib/Main.jni
The -lm linker flag is passed on to the native C compiler, which uses it to link the m library (math) into the generated native shared object file. After compiling, the package directory cstdlib will contain class files for the top-level classes Math and Main, a shared library, and some compiler intermediate files. You can run the program like this:
(bash) java -cp . -Djava.library.path=./cstdlib cstdlib.Main
d 72.31742029971468 r 4.166272210190459 ^3 72.31742029971466
d 99.08988967772393 r 4.627464705142208 ^3 99.08988967772387
d 25.329310557439133 r 2.9368005732853377 ^3 25.32931055743913
The program should print a series of numbers with their cubic roots as shown above. You can simplify the command line by putting the shared library on your PATH or LD_LIBRARY_PATH.
The example illustrates how Jeannie toggles between the languages for a block, for an expression, or for a Java type name. In each case, you can use either the simple language toggle backtick (`), or the qualified form (`.C or `.Java). Language toggle is also allowed for certain Java statements in C (synchronized, try, throw), and for putting a throws clause on a C function. See Syntax, which shows the entire Jeannie grammar.
The following example (integration test 040) illustrates code with multiple local variables, both in Java (args, input, and hasDecimalPoint) and in C (intOrFloat, f, and i).
`.C { }                                                //  1
class Main {                                           //  2
  public static void main(String[] args) {             //  3
    String input = "12.34E1";                          //  4
    boolean hasDecimalPoint = -1 != input.indexOf('.');//  5
    `.C {                                              //  6
      `Number intOrFloat;                              //  7
      if (`hasDecimalPoint) {                          //  8
        `Float f = `Float.valueOf(input);              //  9
        intOrFloat = f;                                // 10
      } else {                                         // 11
        `Integer i = `Integer.valueOf(input);          // 12
        intOrFloat = i;                                // 13
      }                                                // 14
      `.Java {                                         // 15
        System.out.println(`intOrFloat);               // 16
      }                                                // 17
    }                                                  // 18
  }                                                    // 19
}                                                      // 20
You can run the program like this:
(bash) jeannie.sh Main.jni
(bash) java -cp . Main
123.4
Each local variable in Jeannie has a defining language (Java or C), a scope (a portion of the program text where it is valid), and a type. The following table characterizes the local variables from the example:
In Jeannie, you can only access a local variable in code of the same language. For example, Line 8 contains a C if statement, and must therefore toggle to Java to access the Java local variable hasDecimalPoint. And of course, you can only access a local variable if it is in scope; for example, the scope of intOrFloat ends in Line 17, so the variable can not be used after that. Like in Java and C, scopes can nest, and variables in inner scopes can shadow variables of the same name from outer scopes. Backticked expressions in Jeannie are immutable (in programming languages terminology, they are not l-values, since they can not appear on the left-hand side of an assignment). That means that any modification to a variable has to occur in the variable's language.

This example also illustrates that C local variables can hold references to Java objects. For example, the result of `Float.valueOf(..) in Line 9 is a reference to a Java object containing a boxed floating point number. This reference gets stored in the C local variable f. Note that this variable has type `Float. In Jeannie, a backticked Java type is a C type. Furthermore, since class Float is a subclass of Number in Java, Jeannie permits the C code in Line 10 to widen the reference in the assignment intOrFloat = i. On the other hand, if the code were to contain the reverse assignment i = intOrFloat, the compiler would give an error message. You should try it out.
Do not store references to Java objects in non-local C data. Non-local data is any data that is not in local variables, and thus does not go away when the enclosing function or method returns. In other words, non-local data in C resides in global variables or on the heap. You should not store any references to Java objects there, because by the time you access them again, the objects may have already been garbage collected. When that happens, the reference is a dangling reference, and using it can cause a crash, or worse, can corrupt important data. In fact, on some JVMs, the reference is unusable even without garbage collection, which makes the problem easier to diagnose, because the program fails more quickly and deterministically.
Instead, you should store references to Java objects into Java static or instance fields. Jeannie makes it very easy to access a Java field from C by using a backtick. The following example illustrates the difference between storing reference to a Java object in a C global variable versus a Java static field.
import java.io.PrintWriter;                                      // 1
`.C {                                                            // 2
`PrintWriter badGlob;                                            // 3
}                                                                // 4
class Main {                                                     // 5
  static PrintWriter goodGlob;                                   // 6
  static native void setGlob(boolean beGood, PrintWriter init) `{// 7
    if (`beGood) `( Main.goodGlob = init );                      // 8
    else badGlob = `init;                                        // 9
  }                                                              //10
  static native PrintWriter getGlob(boolean beGood) `{           //11
    if (`beGood) return `Main.goodGlob;                          //12
    else return badGlob;                                         //13
  }                                                              //14
  static native void useGlob(boolean beGood, Object obj) `{      //15
    `.Java {                                                     //16
      PrintWriter out = Main.getGlob(beGood);                    //17
      out.println(obj);                                          //18
      out.flush();                                               //19
    }                                                            //20
  }                                                              //21
  public static void main(String[] args) {                       //22
    boolean beGood = true;                                       //23
    setGlob(beGood, new PrintWriter(System.out));                //24
    for (int i=0; i<3; i++) {                                    //25
      useGlob(beGood, "o_" + i);                                 //26
      System.gc();                                               //27
    }                                                            //28
  }                                                              //29
}                                                                //30
If you run this program unchanged, it uses a Java static field; if you change Line 23 to set beGood = false, it uses a C global variable. In the good case, it prints o_0 o_1 o_2; otherwise, it crashes with an error message that depends on your Java virtual machine, operating system, and C compiler. You should try it out, so you can recognize the error if you see the symptom again in another context. You should also try whether the symptom goes away if you delete Line 27.
If you do make a mistake related to global references, you may end up needing a debugger to find the source of the defect; see Debugging.
This section is about how C code can access Java arrays. Arrays are important for Jeannie, since people frequently use native code either for I/O, which usually involves buffers, or for high-performance computing, which usually involves matrix computations. Just like any other Java expression can be nested in C using a backtick, so can Java expressions that access an array. C code can read from a Java array using a Java array subscript, for instance, `arr[i]. C code can write to a Java array using a Java assignment, for instance, `(arr[i] = v). Since backticked expressions in Jeannie are immutable, a C assignment to a Java array (e.g., `arr[i] = v) would be illegal.
The following example (integration test 043) shows a native method
replace(chars, oldC,
newC) that modifies the Java array
chars, replacing the first
occurrence of
oldC in
chars by
newC. It returns the
index of the replaced element, or
-1 if the element was not
found.
`.C{ }                                                        // 1
class Main {                                                  // 2
  static native int replace(char[] chars, char oldC, char newC) `{ // 3
    for (int i=0; i<`chars.length; i++) {                     // 4
      if (`oldC == `chars[`i]) {                              // 5
        `(chars[`i] = newC);                                  // 6
        return (`int)i;                                       // 7
      }                                                       // 8
    }                                                         // 9
    return (`int)-1;                                          //10
  }                                                           //11
  public static void main(String []args) {                    //12
    char[] a = { 'a', 'b', 'c' };                             //13
    int r;                                                    //14
    r = replace(a, 'b', 'd');                                 //15
    System.out.println(r + " " + new String(a));              //16
    r = replace(a, 'b', 'd');                                 //17
    System.out.println(r + " " + new String(a));              //18
  }                                                           //19
}                                                             //20
The example also includes a
main method that invokes
replace
twice to replace
'b' by
'd'. You can compile and run the
program like this:
(bash) jeannie.sh Main.jni
(bash) java -cp . Main
1 adc
-1 adc
The output shows that the first call to replace changed the element at
index
1, yielding
adc, whereas the second call did not
find any element to change and therefore returned
-1, leaving the
array unchanged as
adc.
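For reference, the same behavior can be written in pure Java (a hypothetical ReplaceDemo class, made up for this comparison and not part of the Jeannie examples); any native implementation of replace must match it:

```java
class ReplaceDemo {
    // Pure-Java reference for the native replace(chars, oldC, newC):
    // replaces the first occurrence of oldC by newC and returns its
    // index, or -1 if oldC does not occur in chars.
    static int replace(char[] chars, char oldC, char newC) {
        for (int i = 0; i < chars.length; i++) {
            if (chars[i] == oldC) {
                chars[i] = newC;
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        char[] a = { 'a', 'b', 'c' };
        System.out.println(replace(a, 'b', 'd') + " " + new String(a)); // 1 adc
        System.out.println(replace(a, 'b', 'd') + " " + new String(a)); // -1 adc
    }
}
```

Running it prints the same two lines as the Jeannie version above.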
Accessing arrays with simple backticked Java expressions is convenient.
But users may want to use Java arrays in performance-critical loops,
where the transition between languages can become a bottleneck. To
accommodate faster access to an entire Java array, Jeannie provides the
with-statement. The header of a
with-statement associates
a C variable with a Java array; for example,
with (`char* s = `chars) { ... } associates the C variable
s with the Java
array
chars. The body of the
with statement can use that C
variable as a normal C array. For example, the following code (integration test 044) implements
the same
replace method as before, but this time using a
with-statement instead of a simple array access. Notice that the
body of the
for-loop is pure C code without language transitions.
`.C{ }                                                        // 1
class Main {                                                  // 2
  static native int replace(char[] chars, char oldC, char newC) `{ // 3
    `char old = `oldC, new = `newC;                           // 4
    `int len = `chars.length;                                 // 5
    with (`char* s = `chars) {                                // 6
      for (int i=0; i<len; i++) {                             // 7
        if (old == s[i]) {                                    // 8
          s[i] = new;                                         // 9
          return (`int)i;                                     //10
        }                                                     //11
      }                                                       //12
      cancel s;                                               //13
    }                                                         //14
    return (`int)-1;                                          //15
  }                                                           //16
  public static void main(String []args) {                    //17
    char[] a = { 'a', 'b', 'c' };                             //18
    int r;                                                    //19
    r = replace(a, 'b', 'd');                                 //20
    System.out.println(r + " " + new String(a));              //21
    r = replace(a, 'b', 'd');                                 //22
    System.out.println(r + " " + new String(a));              //23
  }                                                           //24
}                                                             //25
The
main-method is unchanged, and this program should produce the
same output as the previous example. In general, the initializer of a
with-statement can be a variable declaration, like in the example, or
an assignment. The types of the C variable and the Java expression
must match: if the C variable has type `E*, the Java expression must
have type jEArray.
Changes to the C array are reflected back to the Java array when control
leaves the
with statement, unless the user decided to
cancel the changes, or there was an exception. In those cases,
the Java array remains unchanged. In the example, Line 10 leaves the
with-statement and the method, at which time Jeannie applies any
pending modifications to the Java array
chars. Line 13 also
leaves the
with-statement, but Jeannie drops any changes that may
have occurred in the array.
So far, this section has focused on cases where C code wants to work
directly with Java arrays. Jeannie supports that by simple nested Java
expressions, and by the
with statement for bulk accesses. But
there are other cases where C code wants to copy just (parts of) an
array between Java and C. Jeannie supports that with a pair of builtin
functions
copyFromJava and
copyToJava. They have the
following signatures:
`int copyFromJava(`E* ca, `int ci, jEArray ja, `int ji, `int len)
`int copyToJava(jEArray ja, `int ji, `E* ca, `int ci, `int len)
In both cases, the return value is the number of copied elements. In
both cases, the parameter list starts with the destination array and
start index, followed by the source array and start index, followed by
the number of elements to be copied. The following example (integration test 045) reimplements
our familiar
replace method using the trans-lingual copy
functions.
`.C{ }                                                        // 1
class Main {                                                  // 2
  static native int replace(char[] chars, char oldC, char newC) `{ // 3
    `char old = `oldC, new = `newC;                           // 4
    `int len = `chars.length;                                 // 5
    `char s[len];                                             // 6
    copyFromJava(s, 0, `chars, 0, len);                       // 7
    for (int i=0; i<len; i++) {                               // 8
      if (old == s[i]) {                                      // 9
        s[i] = new;                                           //10
        copyToJava(`chars, 0, s, 0, len);                     //11
        return (`int)i;                                       //12
      }                                                       //13
    }                                                         //14
    return (`int)-1;                                          //15
  }                                                           //16
  public static void main(String []args) {                    //17
    char[] a = { 'a', 'b', 'c' };                             //18
    int r;                                                    //19
    r = replace(a, 'b', 'd');                                 //20
    System.out.println(r + " " + new String(a));              //21
    r = replace(a, 'b', 'd');                                 //22
    System.out.println(r + " " + new String(a));              //23
  }                                                           //24
}                                                             //25
Again, the
main method is unchanged, and the console output is
the same as in the previous two examples. See copyFromJava and
copyToJava for reference documentation on the two functions.
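A rough pure-Java analog of this copy-in/copy-out pattern (hypothetical, for illustration only) is System.arraycopy. Note one difference in convention: copyFromJava and copyToJava list the destination array first, whereas System.arraycopy lists the source first.

```java
class CopyDemo {
    public static void main(String[] args) {
        char[] ja = { 'a', 'b', 'c' };   // stands in for the Java array `chars
        char[] ca = new char[3];         // stands in for the C buffer s

        // analog of copyFromJava(s, 0, `chars, 0, len) -- note reversed order:
        System.arraycopy(ja, 0, ca, 0, ja.length);
        ca[1] = 'd';                     // modify the private copy

        // analog of copyToJava(`chars, 0, s, 0, len):
        System.arraycopy(ca, 0, ja, 0, ca.length);
        System.out.println(new String(ja)); // adc
    }
}
```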
Control flow is the order in which code executes. Normal control flow
occurs when statements execute in the order in which they appear in the
program, as well as when code has conditionals, loops, and calls. Abrupt
control flow occurs when control jumps suddenly, for example because of
a
return statement in the middle of a function or method. Jeannie
supports all the abrupt control flow constructs of Java and C
(
return,
break,
continue,
goto, implicit
exceptions, and explicit
throw) and two new abrupt control flow
constructs for bulk array manipulation (
commit,
cancel).
You can use Jeannie to obtain Java exception handling for C code. To
throw a Java exception from C, use a nested Java
throw
statement. To handle a Java exception from C, use nested C handlers in a
Java
try/
catch/
finally statement. Jeannie
implements the expected abrupt control flow. It also takes care of
releasing internal resources. For example, a Jeannie
with
statement can allocate a temporary C array to cache Java data; if there
is an exception during the
with statement, Jeannie releases the
temporary array.
The following example (integration test 046) illustrates abrupt control flow in Jeannie.
`.C {                                                         // 1
#include <stdio.h>                                            // 2
}                                                             // 3
class Main {                                                  // 4
  public static void main(String[] args) {                    // 5
    int[] ja = { 1, 2, 3, 0 };                                // 6
    `.C {                                                     // 7
      FILE* out;                                              // 8
      `try `{                                                 // 9
        out = fopen("out.txt", "w");                          //10
        with (`int* ca = `ja) {                               //11
          for (`int i=0; i<4; i++) {                          //12
            if (ca[i] == 0)                                   //13
              `throw new ArithmeticException("/ by 0");       //14
            ca[i] = 10 / ca[i];                               //15
            fprintf(out, "ca[%ld] == %ld\n", i, ca[i]);       //16
          }                                                   //17
        }                                                     //18
      } catch (ArithmeticException e) `{                      //19
        fprintf(out, "division by zero\n");                   //20
      } finally `{                                            //21
        fclose(out);                                          //22
      }                                                       //23
    }                                                         //24
    for (int i=0; i<4; i++)                                   //25
      System.out.println("ja[" + i + "] == " + ja[i]);        //26
  }                                                           //27
}                                                             //28
The C code divides 10 by every number in an array, and writes the
results to a file
out.txt. At the end, the Java code writes the
array contents to the console. When you compile and run this program,
you should see the following:
(bash) jeannie.sh Main.jni
(bash) java -cp . Main
ja[0] == 1
ja[1] == 2
ja[2] == 3
ja[3] == 0
(bash) cat out.txt
ca[0] == 10
ca[1] == 5
ca[2] == 3
division by zero
The C code in Lines 7 thru 24 operates on a file
out. Line 10
opens the file for writing, and Line 22 closes it again. To guarantee
that the file gets closed no matter what happens, Line 10 is in a
try-block and Line 22 is in the associated
finally-block.
The C code in Lines 11 thru 18 operates on a C version
ca of the
Java array
ja. Line 15 modifies the C array, and Line 16 prints
the modification to the file. The original array from Line 6 is
{1,2,3,0}, and Line 15 modifies it to
{10/1,10/2,10/3,..}, yielding the result
{10,5,3,..}. However, when the loop reaches the array element
0, Line 14 throws an exception to prevent division by zero. In
Jeannie, an exception in a
with statement cancels the
modifications to the Java array. Therefore, when Lines 25 and 26 print
ja, they observe the original contents from Line 6, namely
{1,2,3,0}.
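The copy-in/copy-back rule can be modeled in pure Java (a hypothetical sketch; Jeannie's with statement is not implemented this way, but the observable semantics match this model): work on a copy, commit it on normal exit, and discard it on an exception.

```java
import java.util.Arrays;

class WithSemanticsDemo {
    interface Body { void run(int[] copy) throws Exception; }

    // Models Jeannie's with statement: the body sees a copy of the
    // array, which is copied back only if the body completes normally.
    static void with(int[] ja, Body body) {
        int[] ca = Arrays.copyOf(ja, ja.length);          // copy in
        try {
            body.run(ca);
            System.arraycopy(ca, 0, ja, 0, ja.length);    // commit
        } catch (Exception e) {
            // cancel: leave ja unmodified
        }
    }

    public static void main(String[] args) {
        int[] ja = { 1, 2, 3, 0 };
        with(ja, ca -> {
            for (int i = 0; i < ca.length; i++) {
                if (ca[i] == 0) throw new ArithmeticException("/ by 0");
                ca[i] = 10 / ca[i];
            }
        });
        // The exception canceled all modifications:
        System.out.println(Arrays.toString(ja));  // [1, 2, 3, 0]
    }
}
```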
Jeannie does not permit
break,
continue, or
goto to
cross the language boundary or to leave a
with statement, since
that would lead to ill-defined behavior.
Jeannie supports access from C code to Java strings similarly to its support for arrays, with three important differences:
  * Java strings are immutable, so a with statement over a string always implicitly cancels any changes.
  * Copying characters between a Java string and a C `byte* array converts between UTF-16 and UTF-8 encodings.
  * The builtin functions newJavaString and stringUTFLength facilitate common string processing tasks.
The following example (integration test 047) demonstrates Jeannie's string manipulation
features. Class
cstdlib.StdIO is a simple wrapper for the
functions
fputs and
fflush from the C
stdio
library, and class
cstdlib.TestDriver exercises the code.
package cstdlib;                                              // 1
import java.io.IOException;                                   // 2
`.C {                                                         // 3
#include <stdio.h>                                            // 4
#include <errno.h>                                            // 5
#include <string.h>                                           // 6
}                                                             // 7
class StdIO {                                                 // 8
  public static native int stdOut() `{                        // 9
    return (`int)stdout;                                      //10
  }                                                           //11
  public static native void                                   //12
  fputs(String s, int stream) throws IOException `{           //13
    `int len = stringUTFLength(`s);                           //14
    `byte cs[1 + len];                                        //15
    int result;                                               //16
    copyFromJava(cs, 0, `s, 0, `s.length());                  //17
    cs[len] = '\0';                                           //18
    result = fputs((char*)cs, (FILE*)`stream);                //19
    if (EOF == result)                                        //20
      `throw new IOException(`newJavaString(strerror(errno))); //21
  }                                                           //22
  public static native void                                   //23
  fflush(int stream) throws IOException `{                    //24
    int result = fflush((FILE*)`stream);                      //25
    if (EOF == result)                                        //26
      `throw new IOException(`newJavaString(strerror(errno))); //27
  }                                                           //28
}                                                             //29
public class Main {                                           //30
  public static void main(String[] args) throws IOException { //31
    StdIO.fputs("Schöne Grüße!\n", StdIO.stdOut());           //32
    StdIO.fflush(StdIO.stdOut());                             //33
  }                                                           //34
}                                                             //35
You can compile and run this program as follows:
(bash) jeannie.sh cstdlib/Main.jni
(bash) java -cp . -Djava.library.path=cstdlib cstdlib.Main
Sch\313\206ne Gr\302\270\357\254\202e!
Line 17 uses the builtin function
copyFromJava to copy the Java
string
s to the C array
cs. Here, this function behaves
slightly differently from when we saw it in Arrays. Since the
target of the copy is an array not of
`char but of
`byte,
Line 17 performs a conversion from UTF-16 to UTF-8 encoding for
unicode. In this example, the input string is
"Schöne
Grüße!\n" (“Nice greetings!” in German), which has 14
characters, including the Umlauts ö, ü, and ß. These special
symbols take only 1 UTF-16 character each, but multiple UTF-8 bytes,
hence the length of the resulting string
"Sch\313\206ne
Gr\302\270\357\254\202e!" is 18. Jeannie provides a function
stringUTFLength that you can use to find out the number of bytes
that a UTF-8 string will need before you make the conversion from
UTF-16. In the example, Line 14 calls
stringUTFLength, and Line
15 uses the result to stack-allocate a buffer for the C string. Note
that the buffer has one more byte, used to zero-terminate the string in
Line 18 as expected by the C language.
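The difference between the two lengths can be checked in plain Java (a sketch using the standard library, not Jeannie code; the class name UtfLengthDemo is made up): String.length() plays the role of `s.length() in Line 17, and the UTF-8 byte count plays the role of stringUTFLength in Line 14.

```java
import java.nio.charset.StandardCharsets;

class UtfLengthDemo {
    public static void main(String[] args) {
        String s = "Schöne Grüße!\n";
        // String.length() counts UTF-16 code units:
        System.out.println(s.length());   // 14
        // The UTF-8 byte count is what a C `byte* buffer must hold; it is
        // larger, because each umlaut takes one UTF-16 code unit but two
        // UTF-8 bytes (the exact count depends on the characters involved):
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);
    }
}
```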
Lines 20 and 21 perform error handling. If the call to the C function
fputs in Line 19 fails, it returns
EOF to indicate that
something went wrong. In that case,
errno contains a numerical
error code, and
strerror(errno) describes the error as a C
string. Line 21 converts that C string to a Java string with the Jeannie
builtin function
newJavaString, and then throws an
IOException.
Besides the functions
copyFromJava,
stringUTFLength, and
newJavaString illustrated in this example, Jeannie also supports
strings in
with statements. Since Java strings are immutable, you
cannot modify a Java string with a
with statement either: it
always implicitly cancels.
We are actively working on a Jeannie debugger. In the meantime, we
recommend you use gdb, following instructions by Matthew White.
Here is a short summary of Matthew White's approach. Essentially, you
need to run the compiler with
-g and the Java virtual machine
with
-Xrunjdwp. Then, you need to attach
jdb and
gdb to the running Java virtual machine. Then, at any given
point, the system is in one of three states:
  * The JVM is active, and both debuggers (jdb and gdb) are inert.
    This continues until either one of the debuggers hits a breakpoint,
    or there is a segmentation fault that activates gdb, or the
    program terminates.
  * gdb is active, and both the JVM and jdb are inert.
  * jdb is active, and both the JVM and gdb are inert.
Consider the following buggy Jeannie program (integration test 048):
`.C {                                                         // 1
  int decr(int x) {                                           // 2
    int y;                                                    // 3
    x--;                                                      // 4
    if (x != 0)                                               // 5
      y = x;                                                  // 6
    return y;                                                 // 7
  }                                                           // 8
}                                                             // 9
class Main {                                                  //10
  public static void main(String[] args) {                    //11
    int z = 1;                                                //12
    z = `decr(`z);                                            //13
    System.err.println(z);                                    //14
  }                                                           //15
}                                                             //16
Since function decr is called with x==1, the variable
y is not initialized when Line 7 returns it. Thus, the
uninitialized value taints variable
z on Line 13, and Line 14
prints it. Below is an example debugging session, following Matthew
White's approach. The session actually takes place in three different
terminals, we interleave it here in chronological order for clarity.
Lines marked with
* contain user input.
-------- JVM terminal --------
* (bash) jeannie.sh -g Main.jni
* (bash) java -cp . -Xdebug -Xnoagent -Djava.compiler=none \
*        -Xrunjdwp:transport=dt_socket,server=y,suspend=y Main
Listening for transport dt_socket at address: 50067
-------- jdb terminal --------
* (bash) jdb -attach 50067
VM Started: "thread=main", Main.main(), line=11 bci=0
11    public static void main(String[] args) {                //11
-------- gdb terminal --------
* (bash) ps -A | grep java | grep -v grep
5980 p1  S+     0:00.16 java -cp . -Xdebug -Xnoagent -Djava.compiler=...
* (bash) gdb -quiet java 5980
Attaching to program: `/usr/bin/java', process 5980.
Reading symbols for shared libraries ............................ done
0x90009cd7 in mach_msg_trap ()
* (gdb) break Main.jni:4
Breakpoint 1 at 0x25e0d5f: file /Users/hirzel/tmp/Main.jni, line 4.
* (gdb) cont
Continuing.
[Switching to process 5980 thread 0xc07]
-------- jdb terminal --------
* main[1] cont
-------- gdb terminal --------
Breakpoint 1, decr (x=1) at /Users/hirzel/tmp/Main.jni:4
4       x--;                                                  // 4
* (gdb) print y
$1 = 39718276
* (gdb) cont
Continuing.
-------- JVM terminal --------
39718276
-------- jdb terminal --------
The application exited
-------- gdb terminal --------
Program exited normally.
Use this section to look up descriptions of Jeannie features, builtin functions, and tools.
`.C { C.Declarations } Java.TypeDecls
    A Jeannie file: C declarations in an initial `.C block, followed
    by Java type declarations.
( ` / `.C ) C.NT
    Nested C code in Java.
( ` / `.Java ) Java.NT
    Nested Java code in C.
( ` / `.Java ) Java.Type
    Nested Java type in C.
_with ( ( C.Assignment / C.Declaration ) ) C.Block
    The with statement provides bulk access to a Java string or array.
_cancel C.Identifier ; / _commit C.Identifier ;
    Leave the with statement for the named C variable, discarding or
    preserving changes.
`int _copyFromJava(CT ca, `int ci, JT ja, `int ji, `int len)
    Copy string or array elements from Java to C.
`int _copyToJava(JT ja, `int ji, CT ca, `int ci, `int len)
    Copy array elements from C to Java.
`String _newJavaString(const CT ca)
    Create a new Java string from C.
`int _stringUTFLength(`String js [ , `int ji, `int len ] )
    Count the length of a Java string in UTF-8 representation.
jeannie.sh [ options ] file [ c-files... ]
    Jeannie compiler master script.
java xtc.lang.jeannie.Preprocessor [ options ] file
    Inject Jeannie-specific definitions and print the result to stdout.
java xtc.lang.jeannie.Jeannie [ -analyze | -translate | ... ] file
    Translate Jeannie to Java and C source.
java xtc.lang.ClassfileSourceRemapper [ options ] source-file class-file
    Copy //#line directives from the generated .java file into the
    .class files.
This section discusses the syntax and semantics of Jeannie, first in summary and then individually by feature.
Below is the Jeannie grammar. It has four groups of productions: the start symbol, modifications to the Java and C grammars, and additions to the C grammar. Each grammar production consists of a non-terminal, followed by “=”, “+=”, or “:=”, followed by a parsing expression. For example, the production for the start symbol
File = [ Java.Package ] Java.Imports
`.C { C.Declarations
} Java.TypeDecls
specifies that the non-terminal File recognizes an optional
package declaration, import declarations, some initial
C.Declarations enclosed in a
`.C { ... } block,
and finally top-level Java class and interface declarations. Each
grammar production is followed by an example expansion. In the case of
File, the example expansion is
==>
`.C { #include <stdio.h> } class A { }
Productions with “+=” modify the grammar of one of the two base languages with the grammar modification facilities of Rats!. For example,
Java.Block += ... / CInJava C.Block
modifies the Java grammar: the non-terminal Java.Block, in addition (+=) to recognizing Java blocks (...), now recognizes a backtick (CInJava) followed by a C block (C.Block). As another example the rule
C.FunctionDeclarator := C.DirectDeclarator ( C.ParameterDeclaration ) [ JavaInC Java.ThrowsClause ]
modifies the C grammar: the non-terminal C.FunctionDeclarator,
instead of (:=) recognizing just a C function declarator, now recognizes
a C function declarator followed by an optional backtick and Java
throws clause.
File = [ Java.Package ] Java.Imports `.C { C.Declarations } Java.TypeDecls
  ==> `.C { #include <stdio.h> } class A { }

C code nested in Java (CInJava = `.C / `):
  ==> `{ int x = 42; printf("%d", x); }
  ==> `((jboolean)feof(stdin))

Java code nested in C (JavaInC = `.Java / `):
  ==> `{ int x=42; System.out.println(x); }
  ==> `new HashMap();
  ==> `java.util.Map
  ==> `synchronized(act) { act.deposit(); }
  ==> `try { f(); } catch (Exception e) { h(e); }
  ==> `throw new Exception("boo");

C.FunctionDeclarator := C.DirectDeclarator ( C.ParameterDeclaration ) [ JavaInC Java.ThrowsClause ]
  ==> f(char *s) `throws IOException

_with ( WithInitializer ) C.Block, where WithInitializer = C.Assignment / C.Declaration
  ==> _with (`int* ca = `ja) { sendMsg(ca); }
  ==> msg->data = `ja
  ==> `int* ca = `v.toArray()

_cancel C.Identifier ;
  ==> _cancel ca;

_commit C.Identifier ;
  ==> _commit ca;
Jeannie introduces new C types for every Java primitive, class, or
interface type. If JT is a Java type name, then
`JT
is a C type. For example,
`int is a signed 32-bit C integer type,
and
`java.io.IOException is the type for opaque C references to
Java IOException objects.
Jeannie defines several type equivalences between Java and C types, denoted as JT == CT. When a Java expression is nested in C code, Jeannie type-checks the C code as if the Java expression had the equivalent C type. Likewise, when a C expression is nested in Java code, Jeannie type-checks the Java code as if the C expression had the equivalent Java type. Of course, each Java primitive, class, or interface type is equivalent to the same type with backtick in C. For example:
Jeannie has the same rules for resolving simple type names to fully
qualified names as Java. For example, C code in Jeannie can use
`IOException for
`java.io.IOException if the current file
is part of package
java.io or if it has the appropriate import
declaration.
In addition to backticked Java types in C, Jeannie also honors type
equivalences between Java and C types from
jni.h. The most
important ones are arrays:
Other type equivalences from
jni.h include primitive types and
certain frequently used classes and interfaces:
C pointers, structs, and unions have no equivalent in Java, and the Jeannie compiler flags an error when a program attempts to use them in Java code.
An example for a C block in Java is
`{ int x = 42; printf("%d", x); }.
An example for a C expression in Java is
`((jboolean)feof(stdin)).
When used in an expression, the backtick (
` or
`.C) has
the same precedence as other unary prefix operators such as logical
negation (
!).
When a C expression is nested in Java code, Jeannie type-checks the Java
code as if the C expression had the equivalent Java type, as specified
in Type equivalences. It is a compile time error when a C
expression nested in Java evaluates to a pointer, struct, or union,
since those have no equivalent in Java. Jeannie also checks that the
Java code treats the value of the C expression as an r-value, and in
particular, does not assign to it. When a C
return statement
returns from a Java method, Jeannie type-checks the return value against
the return type of the method as if it had the equivalent Java type.
C assignments, variable initializers, function invocations, and
return statements can implicitly widen opaque references to Java
classes or interfaces. For example, C code can assign a reference of
type
`java.util.HashMap to a variable of type
`java.util.Map, because class
HashMap implements interface
Map.
Native methods of a Java class must have a body and that body must be a
backticked C block. Native methods also declare an implicit C parameter
JNIEnv* env, so that C code has access to JNI's API.
Consequently, explicit parameters of native methods cannot have the name
env. Jeannie provides this feature to facilitate incremental
conversion of JNI code to Jeannie; other uses of this feature are
discouraged.
If nested C code contains any
break,
continue, or
goto statements, those must not cross the language boundary, and
they also must not cross the boundary of a
with statement.
An example for a Java block in C is
`{ int x=42; System.out.println(x); }.
An example for a Java expression in C is
`new HashMap();.
An example for a Java type name in C is
`java.util.Map. It
may be part of a C variable declaration such as
const `java.util.Map m = ...;.
An example for a Java statement in C is
`throw new Exception("boo");.
An example for a C function declarator with a Java throws clause is
f(char *s) `throws IOException.
When used in an expression, the backtick (` or `.Java) has
the same precedence as other unary prefix operators such as logical
negation (
!).
C code that contains a nested Java expression uses the result of that
Java expression as an r-value of the corresponding C type. As specified
in Type equivalences, the corresponding C type may be a backticked
Java primitive, class, or interface type. If a nested Java expression
yields a reference to a Java object, that object will not be garbage
collected until at least the enclosing function or method returns. In
the terminology of JNI, it constitutes a local reference.
Nested Java expressions in C can access Java members of any visibility
(private, protected, public, or default).
When a Java expression is nested in C code, Jeannie type-checks the C
code as if the Java expression had the equivalent C type, see Type equivalences. Jeannie also checks that the C code treats the value of
the Java expression as an r-value, and in particular, does not assign to
it. When a Java
return statement returns from a C function,
Jeannie type-checks the return value against the return type of the
function as if it had the equivalent C type.
In order to contain nested Java code, the enclosing C code must be
either part of a Java method, or must be in a C function that declares
an explicit formal parameter
JNIEnv* env. The
env
variable can also be used to facilitate incremental conversion of JNI
code to Jeannie; other uses of this feature are discouraged.
If nested Java code contains any
break or
continue
statements, those must not cross the language boundary, neither must
they cross the boundary of a
with statement.
_with – Access an entire Java array or string from C.

You can write this keyword either with a leading underscore (_with) or
without (with). The leading underscore is mandatory only if you run
the Jeannie compiler with the -underscores command line option.
An example for a
with statement is
with (`int* ca = `ja) { sendMsg(ca); }.
An example for a C assignment expression is
msg->data = `ja.
An example for a C declaration is
`int* ca = `v.toArray().
The with statement accesses a Java string or array from C code like a C array in a well-defined scope. For example,
_with (`int* ca = `ja) { for (`int i=0, n=`ja.length; i<n; i++) s += ca[i]; }
acquires a copy of Java array
ja's contents, sums up its
elements, and then releases the copy while also copying back the
contents. In the example, array
ja is released when control
reaches the end of the block. In general, the Java string or array is
released when control leaves the body of the
with statement for
any reason, including return statements and exceptions. In the case of
an exception, all modifications to the array are canceled, in other
words, the original Java array is unmodified. When there is no
exception, any changes to the C array are copied back into the Java
array.
If the Java string or array is null, the
with statement signals a
NullPointerException. Otherwise, it initializes the C array to
point to a copy of the Java array. Strings are UTF-8 encoded if the C
array is of type
`byte*, and UTF-16 encoded if the C array is of
type
`char*. Independent of encoding, modifying a string leads to
undefined behavior.
If you do not intend to modify the Java string or array, you can
declare the C pointer const. Let CT be the type of ca, and JT the
type of ja: if CT is `E*, then JT must be jEArray or `String.
_cancel / _commit – Release a Java array and discard / preserve changes.

You can write these keywords either with a leading underscore (_cancel
or _commit) or without (cancel or commit). The leading underscore is
mandatory only if you run the Jeannie compiler with the -underscores
command line option.

An example for a cancel statement is _cancel ca;.
An example for a commit statement is _commit ca;.
The commit and cancel statements initiate an abrupt control transfer
to the code immediately following the with statement that initializes
the named C pointer variable. A commit statement copies any changes
back into the Java array, whereas cancel discards them. Both commit
and cancel release any resources necessary for implementing the with
statement, notably the copy's memory.
This section describes special C functions built into Jeannie. Builtin functions are recognized by the Jeannie compiler, which analyzes them and translates them specially. For example, the compiler enforces special constraints when it analyzes a builtin, such as matching a C buffer type to a Java array type.
You can write all of these builtins either with or without a leading
underscore (e.g.,
copyFromJava vs.
_copyFromJava). The
leading underscore is mandatory only if you run the Jeannie compiler
with the
-underscores command line option.
_copyFromJava – Copy string or array elements from Java to C.

`int _copyFromJava(CT ca, `int ci, JT ja, `int ji, `int len)

Copy the elements ja[ji ... ji+len-1] to ca[ci ...], and return the
number of elements copied into ca. The type JT must be `String or
jEArray for some element type E, and CT must be `E*. If CT is
`byte*, then the copy involves a conversion from UTF-16 to UTF-8.
This conversion may cause the return value (number of elements copied
into ca) to differ from the len parameter (number of elements copied
out of ja).

Parameters:
  CT ca – Destination C array.
  `int ci – Start index in the C array.
  JT ja – Source Java string or array.
  `int ji – Start index in the Java string or array.
  `int len – Number of elements to copy out of ja.

Returns:
  `int – Number of elements copied into the C array ca.

If one of the indices in the Java string or array ja is invalid,
copyFromJava raises a StringIndexOutOfBoundsException or
ArrayIndexOutOfBoundsException. If one of the indices in the C array
ca is invalid, copyFromJava exhibits undefined behavior. To avoid a
buffer overrun related to unicode conversion (from a Java string to a
C `byte*), you should call stringUTFLength before calling
copyFromJava.
_copyToJava – Copy array elements from C to Java.

`int _copyToJava(JT ja, `int ji, CT ca, `int ci, `int len)

Copy the elements ca[ci ... ci+len-1] to ja[ji ...], and return the
number of elements copied. The type JT must be jEArray for some
element type E, and CT must be `E*, for the same E. For example, if
JT is jintArray, then E is int, and CT must be `int*. The type JT
must not be `String, because strings are immutable in Java, and
therefore, it does not make sense to copy elements into them.

Parameters:
  JT ja – Destination Java array.
  `int ji – Start index in the Java array.
  CT ca – Source C array.
  `int ci – Start index in the C array.
  `int len – Number of elements to copy.

Returns:
  `int – Number of elements copied into the Java array ja.

If one of the indices in the Java array ja is invalid, copyToJava
raises an ArrayIndexOutOfBoundsException. If one of the indices in
the C array ca is invalid, copyToJava exhibits undefined behavior.
_newJavaString – Create a new Java string from C.

`String _newJavaString(const CT ca)

Create a new Java string with the same contents as the C array ca.
The type CT of the C array must be either `byte* or `char*. If CT is
`byte*, then the string creation involves a conversion from UTF-8 to
UTF-16. In either case (`byte* or `char*), the C array must be
null-terminated.

Parameters:
  const CT ca – Null-terminated source C array.

Returns:
  `String – Newly created Java string with the same contents as ca.

If memory is exhausted, newJavaString raises an OutOfMemoryError.
_stringUTFLength – Count length of Java string in UTF-8 representation.

`int _stringUTFLength(`String js [ , `int ji, `int len ] )

Count how long the UTF-8 representation of the Java string js is. If
the optional parameters ji and len are specified, count how long the
UTF-8 representation of the substring js[ji ... ji+len-1] is. You
should use this function to find out how large a C `byte* buffer you
need when copying (parts of) Java strings to C.

Parameters:
  `String js – The Java string to measure.
  `int ji – Optional start index in the Java string.
  `int len – Optional number of characters to measure.

Returns:
  `int – Length in bytes of the UTF-8 representation.
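A pure-Java approximation of this builtin (a hypothetical helper for illustration; note that JNI's modified UTF-8 differs from standard UTF-8 for the NUL character and for supplementary characters, which this sketch ignores):

```java
import java.nio.charset.StandardCharsets;

class StringUtfLengthDemo {
    // Approximates _stringUTFLength(js, ji, len): the UTF-8 byte count
    // of the substring js[ji .. ji+len-1].
    static int stringUTFLength(String js, int ji, int len) {
        return js.substring(ji, ji + len)
                 .getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // 'ü' takes two UTF-8 bytes, the ASCII characters one each:
        System.out.println(stringUTFLength("abcü", 0, 4)); // 5
        System.out.println(stringUTFLength("abcü", 0, 3)); // 3
    }
}
```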
This section describes the command line tools for compiling Jeannie
programs. In the normal case, you should only need to use one of them:
the “master script”
jeannie.sh. It orchestrates the other Jeannie tools
(preprocessor, compiler, postprocessor) as well as external tools (C and
Java compilers).
jeannie.sh – Jeannie compiler master script.

jeannie.sh [ options ] file [ c-files... ]

file – The main Jeannie source file, usually with the extension .jni.
The file name should include the package directory. For example, if
you compile a class a.b.C, where a.b is the package name, the file
name should be a/b/C.ext. See also the -sourcepath option below.

The jeannie.sh master script compiles a Jeannie source file into Java
class files and a dynamically linked library. It does this by calling
other tools that transform the file through a number of stages.
The
jeannie.sh script first splits file (the name of the
main source file) into a stem and an extension. The extension
specifies the start stage of the compilation. For example, if the
command line is
jeannie.sh Main.jni.pp
then stem is
Main and the extension is
jni.pp.
Hence, processing starts at stage
jni.pp, and the first
processing step runs the C preprocessor. By default,
jeannie.sh
follows all processing steps from the start stage to the end. The
-stopAfter option overrides this default by specifying a stop
stage. For example, if the command line is
jeannie.sh -stopAfter i,class Main.jni
then processing stops after
Main.i and
Main.class have
been generated. In other words,
jeannie.sh does not run the C
compiler to create a dynamically linked library.
Here is a brief description of each processing step:
  * Using the C preprocessor, process #include and other directives
    and expand macros.
  * Using gcc, compile C code to a dynamically linked library. The
    file name of the generated library depends on the platform:
    stem.dll on Cygwin, libstem.so on Linux, and libstem.jnilib on
    Mac OS.
  * Using javac, compile Java code to class files.
  * Copy //#line directives from the .java file into the .class files.
-ccpath
jeannie.shuses the
gccexecutable it finds in your PATH.
-cp|
-classpathpaths
-d|
-destpathdir
a.b.C, where
a.bis the package name, the generated files will have names based on dir
/a/b/C.ext.
-flattenSmap
.jnisource file to enable source-level debugging. The
-flattenSmapoption determines whether this is accomplished by erasing the line number information or by adding an additional source map stratum as specified by JSR-45. The difference becomes visible for tools that do not yet support JSR-45, such as Java virtual machines printing exception backtraces using the line numbers of the Java source file.
-g
-prettyoption. The
-goption is passed through to the C compiler as well as the Java compiler.
-h|
jeannie.sh.
-Idir
#includedirectives.
-indir
-sourcepathoption below.
-javaHomedir
jeannie.shinfers this directory based on where it finds the
javaexecutable in your PATH. The Jeannie compiler looks for
javacand
javain dir
/bin.
-jniCallqualifier
JNICALLmacro defined in
jni.h, which specifies the calling conventions on platforms where that matters. You should not need to specify this option, as
jeannie.shwill infer it for you. It is typically the empty string on Linux or Mac OS, and the string “
__attribute__((__stdcall__))” on Cygwin.
-llibrary
-lmspecifies the math library (
m) to search for mathematical functions such as
sqrtand
cos.
-nowarn
jeannie.shinvokes the C compiler with
-Wall; the
-nowarnoption overrides this default. Also,
-nowarngets passed through to the Java compiler.
-platformplatform
Cygwin,
Linux, or
MacOS. Usually,
jeannie.shwill infer the platform for you, but if it can't, you need to specify it on the command line. Note that this option is not sufficient for cross-compiling, as the compilation also depends on the installed C compiler, header files, and libraries.
-pretty
-goption. By default,
jeannie.shintersperses generated Java code with line markers such as
//.
-sourcepathdir
a.b.C, where
a.bis the package name, the source file should reside in dir
/a/b/C.jni.
-stopAfterstage
jni.i, then
jeannie.shwill run the Jeannie preprocessor and the C preprocessor, and then stop. See the Description above for the full list of stages. By default, if
-stopAfteris not specified,
jeannie.shwill run all stages.
-v|
-verbose
jeannie.sh. Each command is prepended by the source location in
jeannie.shjust before running it. Also, each command may print its own messages, such as a copyright notice.
-verboseSettings: Print the settings used by jeannie.sh.
--: Marks the end of the options. Useful if a file name starts with a dash (-), and might therefore be mistaken for a command line option otherwise.
-cc: See the -cc command line option above for details.
-classpath: See the -classpath command line option above for details.
-javaHome: See the -javaHome command line option above for details.
-jniCall: See the -jniCall command line option above for details.
xtc.lang.jeannie.Preprocessor – Inject Jeannie-specific definitions.
java xtc.lang.jeannie.Preprocessor [ options ] file
The file should have the extension .jni. If you omit the file name, the preprocessor prints a description of the command line options.
The preprocessor writes its output to stdout. This usually gets invoked from the jeannie.sh master script, but you can also run it stand-alone. In the usual case, the input file would have extension .jni, and you would pipe the output to a file with the extension .jni.pp. The injected definitions appear at the start of the initial `.C{...}` block, which means they precede any other C declarations that you put there either directly or with #include.
-silent: Run silently.
xtc.lang.jeannie.Jeannie – Translate Jeannie to Java and C source.
java xtc.lang.jeannie.Jeannie [ options ] file
A typical invocation specifies -analyze -translate to run both the semantic analyzer and the code generator.
The input file should have the extension .jni.i. In general, you must run the Jeannie preprocessor and the C preprocessor first; otherwise, many of the C identifiers in the file are undeclared and lead to errors from the C semantic analyzer. This usually gets invoked from the jeannie.sh master script, but you can also run it stand-alone. In the usual case, the input would have extension .jni.i, and the compiler generates two files with the same stem and extensions .i (for preprocessed C code) and .java (for Java code). The options serve to run the compiler only partially and to print intermediate results. The following picture illustrates how the compiler works internally.
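The output naming convention just described can be sketched in a few lines of Java; the helper below is illustrative only (the class and method names are hypothetical):

```java
// Sketch of the Jeannie compiler's output naming convention: an input
// file stem.jni.i yields stem.i (preprocessed C) and stem.java (Java).
public class OutputNames {
    // Hypothetical helper; the real tool derives these names internally.
    static String[] outputsFor(String input) {
        if (!input.endsWith(".jni.i"))
            throw new IllegalArgumentException("expected a .jni.i file");
        String stem = input.substring(0, input.length() - ".jni.i".length());
        return new String[] { stem + ".i", stem + ".java" };
    }

    public static void main(String[] args) {
        for (String out : outputsFor("Main.jni.i"))
            System.out.println(out);
    }
}
```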
A detailed technical description of the Jeannie compiler internals is in the conference paper.
-analyze: Needed for -translate or -printSymbolTable. Runs the Jeannie semantic analyzer on the abstract syntax tree, which includes both C and Java type analysis.
-in dir: Specify the directory in which to look for referenced Java source files with extension .java. If it fails to find the class this way, the compiler attempts to find a compiled version of the class based on reflection and the CLASSPATH environment variable.
-jniCall word: The value of the JNICALL macro defined in jni.h, which specifies the calling conventions on platforms where that matters. Defaults to the empty string, which is correct on Linux or Mac OS; on Cygwin, you should set it to “__attribute__((__stdcall__))”.
-outdir
-pedantic
-pretty: Pretty-print the generated code instead of interspersing it with line markers such as //#line.
-printAST: Print the abstract syntax tree to stdout. Looking at the abstract syntax tree helps compiler hackers validate their assumptions when crafting visitors.
-printSource: Print the current abstract syntax tree in source form.
-printSymbolTable: Print the symbol table; only makes sense together with -analyze.
-silent: Run silently.
-strict
-translate: Translate the input to Java and C source; only makes sense together with -analyze.
xtc.lang.ClassfileSourceRemapper – Add debugging symbols to classes.
java xtc.lang.ClassfileSourceRemapper [ options ] source-file class-file
This usually gets invoked from the jeannie.sh master script, but you can also run it stand-alone. The postprocessor reads the input source-file, which contains line markers such as
//#line 7 Main.jni
Line markers map lines in the generated Java source file back to the
original Jeannie source file. The postprocessor injects this information
into the class-file. That is useful for source-level debugging and
for exception backtraces.
-flatten: Erase the original line number information.
-stratify: Add an additional source map stratum as specified by JSR-45.
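The line-marker format shown above can be recognized with a simple regular expression; a minimal sketch (the class and method names are hypothetical, not part of the tool):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the original line number and file name from a
// marker of the form "//#line 7 Main.jni".
public class LineMarker {
    static final Pattern MARKER = Pattern.compile("//#line\\s+(\\d+)\\s+(\\S+)");

    // Returns {line, file}, or null if the line is not a marker.
    static String[] parse(String line) {
        Matcher m = MARKER.matcher(line.trim());
        return m.matches() ? new String[] { m.group(1), m.group(2) } : null;
    }

    public static void main(String[] args) {
        String[] r = parse("//#line 7 Main.jni");
        System.out.println(r[0] + " " + r[1]);
    }
}
```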
http://cs.nyu.edu/rgrimm/xtc/jeannie.html
#include <FastSPI_LED.h>
#include <SD.h>

File myFile;
const int chipSelect = 4;
String fileNam = "3.bmp";
unsigned int height = 1;
unsigned long bitmapOffset = 0x36;
unsigned long filePosition = 0;

#define NUM_LEDS 64
int frameDelay = 1; // number of millis between animated frames

struct CRGB { unsigned char r; unsigned char g; unsigned char b; };
struct CRGB *leds; // I don't know what this does.
unsigned long CurTime = millis();

void setup()
{
  randomSeed(analogRead(0));
  Serial.begin(9600);
  //Serial.println("Start");
  //Serial.print(CurTime);
  fastSPIsetup(); // make the fastSPI library do its magic
  Serial.print("Initializing SD card...");
  // make sure that the default chip select pin is set to
  // output, even if you don't use it:
  pinMode(10, OUTPUT);
  if (!SD.begin(chipSelect)) {
    Serial.println("Card failed, or not present");
    // don't do anything more:
    return;
  }
  Serial.println("card initialized.");
}

void loop()
{
  myFile = SD.open("3.bmp", FILE_READ);
  Serial.print("BitmapOffsetStart:");
  Serial.println(bitmapOffset, HEX);
  myFile.seek(0x16);
  height = myFile.read();
  myFile.seek(0xA);
  bitmapOffset = myFile.read();
  myFile.close();
  Serial.print("BitmapOffset:");
  Serial.println(bitmapOffset, HEX);
  Serial.print("Height:");
  Serial.println(height);
  for (int j = 0; j < height; j++) // loop through each line in the file, then spit it out.
  {
    FSPIlineOut(j); // reads a line and displays
    delay(frameDelay);
  }
} // end void loop

void FSPIlineOut(unsigned int lineNo)
{
  myFile = SD.open("3.bmp", FILE_READ);
  //delay(100);
  filePosition = bitmapOffset;
  filePosition += (lineNo * (NUM_LEDS * 3));
  myFile.seek(filePosition); // get to data
  delay(100);
  //memset(leds, 0, NUM_LEDS * 3); // blank the array
  Serial.print(lineNo);
  Serial.print(":");
  Serial.print(filePosition, HEX);
  Serial.print(":");
  Serial.print(myFile.position(), HEX);
  Serial.print(": ");
  for (int i = 0; i < NUM_LEDS; i++) {
    leds[i].b = myFile.read();
    leds[i].g = myFile.read();
    leds[i].r = myFile.read();
    Serial.print(leds[i].r, HEX);
    Serial.print(",");
  }
  Serial.println();
  if (!myFile) {
    Serial.println("the card broke");
    //myFile.close();
  }
  //FastSPI_LED.show(); // write all the pixels out
  //delayMicroseconds(4); // to remove g-glitch (can be as low as 4)
  drawArray();
  //delay(frameDelay); // some actual display time
}

void drawArray()
{
  // take the current pixel array, dump it to the leds, and shiftout
  //memset(leds, 0, NUM_LEDS * 3); // clear the led array
  //PORTD |= B00010000; // Set bit high
  //FastSPI_LED.show(); // shift out the array.
  //PORTD &= ~B00010000; // Set bit low - happens a bit fast for the NAND so above delay is required to de-glitch it
  //delay(1);
}

void fastSPIsetup() // gets the fastspi library set up
{
  FastSPI_LED.setLeds(NUM_LEDS);
  FastSPI_LED.setChipset(CFastSPI_LED::SPI_WS2801);
  FastSPI_LED.setDataRate(3); // make the library work with 5v ws2801 strips
  FastSPI_LED.init();
  FastSPI_LED.start();
  leds = (struct CRGB*)FastSPI_LED.getRGBData();
}
Initializing SD card...card initialized.BitmapOffsetStart:36BitmapOffset:36Height:1000:36:36: 0,FF,0,FF,0,0,FF,0,0,FF,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,FF,FF,0,FF,FF,0,0,FF,0,FF,0,1:F6:F6::1B6:1B6::276:276:,<<redacted to reduce character count>>14:AB6:AB6:,15:B76:B76:,16:C36:C36: 0,17:CF18:DB
struct CRGB *leds; //I don't know what this does.
Since I'm opening a potentially huge file, I can only ever load a line or two of it at a time.
but 'struct' doesn't come up often in beginner coding
1) Would you recommend I create another actual array to load the data into from the SD card, then copy it over / move the pointer somehow to that memory?
2) I can't seem to open the file either globally or in one function and have it available from another function, hence it needs to be opened every call of the for loop. Is there a good way around this? (nest the code in the main loop so it's parented?)
3) I get the same result whether I file.close or not (you're right, I should be closing it each time).
now has the mentioned bug but doesn't crash, at least.
#include <SPI.h>
http://forum.arduino.cc/index.php?topic=124035.msg932334
ASF subversion and git services commented on OPENJPA-1986:
----------------------------------------------------------
Commit 1580907 from [~jpaheath] in branch 'openjpa/branches/2.2.1.x'
OPENJPA-1986: Extra queries being generated when cascading a persist - added another/similar
check to the ones added by Rick.
> Extra queries being generated when cascading a persist
> ------------------------------------------------------
>
> Key: OPENJPA-1986
> URL:
> Project: OpenJPA
> Issue Type: Bug
> Components: performance
> Affects Versions: 2.0.1, 2.1.0, 2.2.0
> Reporter: Rick Curtis
> Assignee: Rick Curtis
> Fix For: 2.1.1, 2.2.0
>
> Attachments: OPENJPA-1986.patch
>
>
> I found a scenario where extra queries were being generated while cascading a persist
to a new Entity. See the following example:
> @Entity
> public class CascadePersistEntity implements Serializable {
> private static final long serialVersionUID = -8290604110046006897L;
> @Id
> long id;
> @OneToOne(cascade = CascadeType.ALL)
> CascadePersistEntity other;
> ...
> }
> and the following scenario:
> CascadePersistEntity cpe1 = new CascadePersistEntity(1);
> CascadePersistEntity cpe2 = new CascadePersistEntity(2);
> cpe1.setOther(cpe2);
> em.persist(cpe1);
> This results in two inserts and one select. The extra select is what I'm going to get
rid of with this JIRA.
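For reference, the reported scenario can be reproduced as a self-contained sketch; the JPA annotations (@Entity, @Id, @OneToOne(cascade = CascadeType.ALL)) and the em.persist call from the quoted report are reduced to comments here so the sketch runs without a JPA provider:

```java
import java.io.Serializable;

// Plain-Java sketch of the entity from the report. In the real code,
// the class is annotated @Entity, id is @Id, and other is
// @OneToOne(cascade = CascadeType.ALL).
public class CascadePersistEntity implements Serializable {
    private static final long serialVersionUID = -8290604110046006897L;
    long id;
    CascadePersistEntity other;

    public CascadePersistEntity(long id) { this.id = id; }
    public void setOther(CascadePersistEntity o) { this.other = o; }
    public CascadePersistEntity getOther() { return other; }

    public static void main(String[] args) {
        CascadePersistEntity cpe1 = new CascadePersistEntity(1);
        CascadePersistEntity cpe2 = new CascadePersistEntity(2);
        cpe1.setOther(cpe2);
        // em.persist(cpe1) would cascade to cpe2: two inserts are
        // expected; the extra select is what the fix removes.
        System.out.println(cpe1.getOther().id);
    }
}
```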
--
This message was sent by Atlassian JIRA
(v6.2#6252)
http://mail-archives.apache.org/mod_mbox/openjpa-dev/201403.mbox/%3CJIRA.12505332.1303922693048.132271.1395679012290@arcas%3E
Missing from the current Nutch documentation (Tutorial, FAQ) is a list of features. This wiki page could help, if someone who knows the answers can edit it.
(Please reformat this text and divide into feature lists, questions and questions & answers).
Features
- Fetching, parsing and indexing in parallel and/or distributed
- Plugins
Many formats: plain text, HTML, XML, ZIP, OpenDocument (OpenOffice.org), Microsoft Office (Word, Excel, Powerpoint), PDF, JavaScript, RSS, RTF, MP3 (ID3 tags)
- Ontology
- Clustering
- Distributed filesystem (via Hadoop)
- Link-graph database
- NTLM authentication
Questions and Answers
- What kind of searches does Nutch support? (quoted, nested, truncation, wildcarding [and where], Boolean),
- "...." (phrase search?), + (what is this for?), - (negation) and fieldname:term. No "AND" or "OR". The and-logic is implied.
- Is stemming an option?
According to the Lucene in Action book: "Nutch does not use stemming or term aliasing of any kind. Search engines have not historically done much stemming, but it is a question that comes up regularly." -- page 329
- What kind of stemming does Nutch use? (and can you add exceptions/changes?)
See previous answer
- Does Nutch support Boolean operators? (can you use Google-like plus or minus or are you stuck with 1990s terms?)
- No
- How does the search engine handle punctuation and special characters? (and what's configurable?)
- They are treated like a space.
- Which document formats are supported?
- Guessing from the names of the available parser plugins, this is probably it. However, only the plain text and HTML are enabled by default. Edit conf/nutch-site.xml and change the value of plugin.includes property to include the plugins for the document types that you want Nutch to handle:
- Plain Text (plugin: parse-text)
- HTML (parse-html)
- XML (parse-xml) uses XPath and namespaces to do the mapping between XML elements and Lucene fields.
- JavaScript (for extracting links only?) (parse-js)
- OpenOffice.org ODF (parse-oo) parses Open Office and Star Office documents.
- Microsoft Power Point, the .ppt file (parse-mspowerpoint)
- Microsoft Word, the .doc file (parse-msword)
- Adobe PDF (parse-pdf)
- RSS (parse-rss)
- RTF (parse-rtf)
- MP3 (?) Is there any text in MP3? (parse-mp3) (JR: Sure, the mp3 itself contains the ID3v1 or ID3v2 tags which contain song information like title, artist, album, comments, etc. The useful information needed to search mp3s)
- ZIP (?) This seems to expand the zip of plain text files and return the concatenated text. (parse-zip)
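The plugin.includes change mentioned above is a property override in conf/nutch-site.xml. A minimal sketch, assuming the parser plugin names listed above (the exact regular expression is illustrative; adjust it to the document types you need):

```xml
<!-- conf/nutch-site.xml: enable additional parser plugins.
     The value is a regular expression over plugin ids. -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(text|html|pdf|msword)|index-basic|query-(basic|site|url)</value>
</property>
```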
Questions without Answers
- Does Nutch support weighted field searching, synonym support?
- What kinds of indexes does Nutch build? (multi-format indexing, incremental indexing, spell-check support, thesauri support, fielded searching, rank-by-reputation?)
- What post-coordination options are available? (hey Karen, what does this mean?)
- How easy is Nutch to configure?
- How transparent is its configuration to a working organization: does it require geeky command line stuff, or can a knowledgable manager enter a web or software interface to view or modify settings?
- How are results sorted?
- Does Nutch support deduping?
- Can one tinker with relevance algorithms?
- Are there ranking overrides?
http://wiki.apache.org/nutch/OldFeatures
Walkthrough: Downloading Assemblies on Demand with the ClickOnce Deployment API Using the Designer
By default, all the assemblies included in a ClickOnce application are downloaded when the application is first run. However, there might be parts of your application that are used by only a small set of users. In that case, you can mark an assembly as optional, and ClickOnce downloads it only when the application first demands it.
To create a project that uses an on-demand assembly with Visual Studio
Create a new Windows Forms project in Visual Studio. On the File menu, point to Add, and then click New Project. Choose a Class Library project in the dialog box and name it ClickOnceLibrary.
Define a class named DynamicClass with a single property named Message.
Select the Windows Forms project in Solution Explorer. Add a reference to the System.Deployment.Application assembly and a project reference to the ClickOnceLibrary project. ClickOnce enables you to do this easily by associating all the DLLs that belong to a feature with a download group. Add the following code to the form class; parts of the original listing were lost in extraction and are marked with elision comments.
Dictionary<String, String> DllMapping = new Dictionary<String, String>();

[SecurityPermission(SecurityAction.Demand, ControlAppDomain = true)]
public Form1()
{
    InitializeComponent();
    // ... (registration of the AssemblyResolve handler and the mapping
    // of DLL names to download groups elided in the original)
}

Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
    Assembly newAssembly = null;
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        try
        {
            // ... (look up the download group for the requested assembly
            // and download it; elided in the original)
        }
        catch (Exception e)
        {
            throw (e);
        }
    }
    else
    {
        // Major error - not running under ClickOnce, but missing assembly.
        // Don't know how to recover.
        throw (new Exception("Cannot load assemblies dynamically - application is not deployed using ClickOnce."));
    }
    return (newAssembly);
}
On the View menu, click Toolbox. Drag a Button from the Toolbox onto the form. Double-click the button and add the following code to the Click event handler.
To mark assemblies as optional in your ClickOnce application by using Visual Studio
Right-click the Windows Forms project in Solution Explorer and click Properties. On the Publish page, mark ClickOnceLibrary.dll as an optional file, and then publish the application using the Publish Wizard.
To mark assemblies as optional in your ClickOnce application by using Manifest Generation and Editing Tool — Graphical Client (MageUI.exe)
Create your ClickOnce manifests as described in Walkthrough: Manually Deploying a ClickOnce Application.
Before closing MageUI.exe, select the tab that contains your deployment's application manifest, and within that tab select the Files tab.
Find ClickOnceLibrary.dll in the list of application files and set its File Type column to None. For the Group column, type ClickOnceLibrary.dll.
https://msdn.microsoft.com/en-us/library/ak58kz04.aspx
These flags control which variations of the language are permitted. Leaving out all of them gives you standard Haskell 98.
This simultaneously enables all of the extensions to Haskell 98 described in Chapter 7, except where otherwise noted.
This option enables the language extension defined in the Haskell 98 Foreign Function Interface Addendum plus deprecated syntax of previous versions of the FFI for backwards compatibility.
Switch off the Haskell 98 monomorphism restriction. Independent of the -fglasgow-exts flag.
See Section 7.4.4. Only relevant if you also use -fglasgow-exts.
See Section 7.9. Only relevant if you also use -fglasgow-exts.
See Section 7.6. Independent of -fglasgow-exts.
See Section 7.10. Independent of -fglasgow-exts.
GHC normally imports Prelude.hi files for you. If you'd rather it didn't, then give it a -fno-implicit-prelude option. The idea is that you can then import a Prelude of your own. (But don't call it Prelude; the Haskell module namespace is flat, and you must not conflict with any Prelude module.)
Even though you have not imported the Prelude, most of the built-in syntax still refers to the built-in Haskell Prelude types and values, as specified by the Haskell Report. For example, the type [Int] still means Prelude.[] Int; tuples continue to refer to the standard Prelude tuples; the translation for list comprehensions continues to use Prelude.map etc.
However, -fno-implicit-prelude does change the handling of certain built-in syntax: see Section 7.3.5.
Enables Template Haskell (see Section 7.5). Currently also implied by -fglasgow-exts.
Enables implicit parameters (see Section 7.4.5). Currently also implied by -fglasgow-exts.
http://www.haskell.org/ghc/docs/6.2/html/users_guide/ghc-language-features.html